\section{Introduction} Even with advances in modern computational capabilities, high-fidelity, full-scale simulations of chemically reacting flows in realistic applications remain computationally expensive \cite{HUANGMPLSVT,doi:10.1080/10618562.2014.911848,doi:10.2514/6.2018-4675,doi:10.2514/6.2018-1183}. Traditional model reduction methods \cite{RozzaPateraSurvey,SIREVSurvey,SerkanInterpolatory,PWG17MultiSurvey} that seek reduced solutions in low-dimensional subspaces fail for problems that involve chemically reacting advection-dominated flows because the strong advection of sharp gradients in the solution fields leads to high-dimensional features in the latent dynamics; see \cite{Notices} for an overview of the challenges of model reduction for strongly advecting flows and other problems with high-dimensional latent dynamics. We demonstrate on a model premixed flame problem \cite{chris_wentland_2021_5517532}---which greatly simplifies the reaction and flow dynamics but preserves some of the model reduction challenges of more realistic chemically reacting flows---that adapting the subspaces of reduced models over time \cite{Peherstorfer15aDEIM,P18AADEIM,CKMP19ADEIMQuasiOptimalPoints} can help to provide accurate future-state predictions with only a few degrees of freedom. Traditional reduced models are formulated via projection-based approaches that seek approximate solutions in lower-dimensional subspaces of the high-dimensional solution spaces of full models; see \cite{RozzaPateraSurvey,SIREVSurvey,SerkanInterpolatory} for surveys on model reduction. Mathematically, the traditional approximations in subspaces are linear in the sense that the parameters of the reduced models that can change over time enter linearly in the reduced solutions.
It has been observed empirically that for certain types of dynamics, which are found in a wide range of science and engineering applications, including chemically reacting flows, the accuracy of such linear approximations improves only slowly with the dimension $n$ of the reduced space. Examples of such dynamics are flows that are dominated by advection. In fact, for solutions of the linear advection equation, it has been shown that under certain assumptions on the metric and the ambient space the best-approximation error of linear approximations in subspaces cannot decay faster than $1/\sqrt{n}$. This slowly decaying lower bound is referred to as the Kolmogorov barrier, because the Kolmogorov $n$-width of a set of functions is defined as the best-approximation error obtained over all subspaces; see \cite{Ohlberger16,Greif19,10.1093/imanum/dru066} for details. A wide range of methods that aim to circumvent the Kolmogorov barrier has been introduced; see \cite{Notices} for a brief survey. There are methods that introduce nonlinear transformations and nonlinear embeddings to recover low-dimensional structures. Examples are transformations based on Wasserstein metrics \cite{ehrlacher19}, deep networks and deep autoencoders \cite{LEE2020108973,KIM2022110841,https://doi.org/10.48550/arxiv.2203.00360,https://doi.org/10.48550/arxiv.2203.01360}, shifted proper orthogonal decomposition and its extensions \cite{doi:10.1137/17M1140571,PAPAPICCO2022114687}, quadratic manifolds \cite{https://doi.org/10.48550/arxiv.2205.02304,BARNETT2022111348}, and other transformations \cite{OHLBERGER2013901,TaddeiShock,https://doi.org/10.48550/arxiv.1911.06598,Cagniart2019}. In this work, we focus on online adaptive reduced models that adapt the reduced space over time to achieve nonlinear approximations.
In particular, we build on online adaptive empirical interpolation with adaptive sampling (AADEIM), which adapts reduced spaces with additive low-rank updates that are derived from sparse samples of the full-model residual \cite{Peherstorfer15aDEIM,P18AADEIM,CKMP19ADEIMQuasiOptimalPoints}. The AADEIM method builds on empirical interpolation \cite{barrault_empirical_2004,deim2010,QDEIM}. We refer to \cite{SAPSIS20092347,doi:10.1137/050639703,ZPW17SIMAXManifold,doi:10.1137/16M1088958,doi:10.1137/140967787,Musharbash2020,refId0Hesthaven} for other adaptive basis and adaptive low-rank approximations. The idea of evolving basis functions over time has a long history in numerical analysis and scientific computing, which dates back to at least Dirac~\cite{dirac_1930}. We apply AADEIM to construct reduced models of a model premixed flame problem with artificial pressure forcing. Our numerical results demonstrate that reduced models obtained with AADEIM provide accurate predictions of the fluid flow and flame dynamics with only a few degrees of freedom. In particular, the AADEIM model that we derive predicts the dynamics far outside of the training regime and in regimes where traditional, static reduced models, which keep the reduced spaces fixed over time, fail to provide meaningful predictions. The manuscript is organized as follows. We first provide preliminaries in Section~\ref{sec:Prelim} on traditional, static reduced modeling. We then recap AADEIM in Section~\ref{sec:AADEIM} and highlight a few modifications that we made compared to the original AADEIM method introduced in \cite{Peherstorfer15aDEIM,P18AADEIM}. The reacting flow solver PERFORM \cite{chris_wentland_2021_5517532} and the model premixed flame problem are discussed in Section~\ref{sec:PERFORM}. Numerical results that demonstrate AADEIM on the premixed flame problem are shown in Section~\ref{sec:NumExp} and conclusions are drawn in Section~\ref{sec:Conc}.
\section{Static model reduction with empirical interpolation}\label{sec:Prelim} We briefly recap model reduction with empirical interpolation \cite{barrault_empirical_2004,grepl_efficient_2007,deim2010,QDEIM} using reduced spaces that are fixed over time. We refer to reduced models with fixed reduced spaces as static models in the following sections. \subsection{Static reduced models} Discretizing a system of partial differential equations in space and time can lead to a dynamical system of the form \begin{equation}\label{eq:FOM} \boldsymbol q_k(\boldsymbol \mu) = \boldsymbol f(\boldsymbol q_{k-1}(\boldsymbol \mu); \boldsymbol \mu)\,,\qquad k = 1, \dots, K\,, \end{equation} with state $\boldsymbol q_k(\boldsymbol \mu) \in \mathbb{R}^{N}$ at time step $k = 1, \dots, K$ and physical parameter $\boldsymbol \mu \in \mathcal{D}$. The function $\boldsymbol f: \mathbb{R}^{N} \times \mathcal{D} \to \mathbb{R}^{N}$ is vector-valued and nonlinear in the first argument $\boldsymbol q$ in the following. A system of form \eqref{eq:FOM} is obtained, for example, after an implicit time discretization of the time-continuous system. The initial condition is $\boldsymbol q_0(\boldsymbol \mu) \in \mathcal{Q}_0 \subseteq \mathbb{R}^{N}$ and is an element of the set of initial conditions~$\mathcal{Q}_0$. Consider now training parameters $\boldsymbol \mu_1, \dots, \boldsymbol \mu_M \in \mathcal{D}$ with training initial conditions $\boldsymbol q_0(\boldsymbol \mu_1), \dots, \boldsymbol q_0(\boldsymbol \mu_M)$. Let further $\boldsymbol Q(\boldsymbol \mu_1), \dots, \boldsymbol Q(\boldsymbol \mu_M) \in \mathbb{R}^{N \times (K + 1)}$ be the corresponding training trajectories defined as \[ \boldsymbol Q(\boldsymbol \mu_i) = [\boldsymbol q_0(\boldsymbol \mu_i), \dots, \boldsymbol q_K(\boldsymbol \mu_i)]\,,\qquad i = 1, \dots, M\,. 
\] From the snapshot matrix $\boldsymbol Q = [\boldsymbol Q(\boldsymbol \mu_1), \dots, \boldsymbol Q(\boldsymbol \mu_M)] \in \mathbb{R}^{N \times M(K+1)}$, a reduced space $\mathcal{V}$ of dimension $n \ll N$ with basis matrix $\boldsymbol V \in \mathbb{R}^{N \times n}$ is constructed, for example, via proper orthogonal decomposition (POD) or greedy methods \cite{RozzaPateraSurvey,SIREVSurvey}. The static Galerkin reduced model is \[ \tilde{\boldsymbol q}_k(\boldsymbol \mu) = \boldsymbol V^T\boldsymbol f(\boldsymbol V\tilde{\boldsymbol q}_{k - 1}(\boldsymbol \mu); \boldsymbol \mu)\,,\qquad k = 1, \dots, K\,, \] with the initial condition $\tilde{\bfq}_0(\boldsymbol \mu) \in \tilde{\mathcal{Q}}_0 \subseteq \mathbb{R}^{n}$ for a parameter $\boldsymbol \mu \in \mathcal{D}$ and reduced state $\tilde{\bfq}_k(\boldsymbol \mu) \in \mathbb{R}^{n}$ at time step $k = 1, \dots, K$. However, time stepping this Galerkin reduced model still requires evaluating the nonlinear function $\boldsymbol f$ at all $N$ components. Empirical interpolation \cite{barrault_empirical_2004,grepl_efficient_2007,deim2010,QDEIM} provides an approximation $\tilde{\bff}: \mathbb{R}^{n} \times \mathcal{D} \to \mathbb{R}^{n}$ of $\boldsymbol f$ that can be evaluated at a vector $\tilde{\bfq} \in \mathbb{R}^{n}$ and parameter $\boldsymbol \mu \in \mathcal{D}$ with costs that grow as $\mathcal{O}(n)$. Consider the matrix $\boldsymbol P = [\boldsymbol e_{p_1}, \dots, \boldsymbol e_{p_m}]\in \mathbb{R}^{N \times m}$ that has as columns the $N$-dimensional unit vectors with ones at the $m$ unique components $p_1, \dots, p_{m} \in \{1, \dots, N\}$. The number of points satisfies $m \geq n$. The points $p_1, \dots, p_{m}$ can be computed, for example, with greedy \cite{barrault_empirical_2004,deim2010}, QDEIM \cite{QDEIM}, or oversampling algorithms \cite{PDG18ODEIM,doi:10.2514/6.2021-1371}.
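To make the point-selection step concrete, the following minimal numpy/scipy sketch computes a POD basis from a snapshot matrix and selects $m = n$ interpolation points via a column-pivoted QR decomposition of $\boldsymbol V^T$, which is the core of QDEIM \cite{QDEIM}. The function name and the random snapshot data are our own illustration, not part of any cited implementation.

```python
import numpy as np
from scipy.linalg import qr

def qdeim_points(V):
    """Select m = n interpolation points for an N x n basis matrix V
    via a column-pivoted QR decomposition of V^T (the QDEIM approach)."""
    _, _, piv = qr(V.T, pivoting=True)
    return piv[: V.shape[1]]

# illustration on a POD basis of a random snapshot matrix
rng = np.random.default_rng(0)
Q = rng.standard_normal((2048, 40))       # snapshot matrix, N = 2048
U, _, _ = np.linalg.svd(Q, full_matrices=False)
V = U[:, :6]                              # POD basis of dimension n = 6
p = qdeim_points(V)                       # interpolation points p_1, ..., p_n
```

The selected rows make $\boldsymbol P^T\boldsymbol V$ well conditioned, which is what the empirical-interpolation approximation below relies on.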
We denote with $\boldsymbol P^T\boldsymbol f(\boldsymbol q; \boldsymbol \mu)$ that only the component functions $f_{p_1}, \dots, f_{p_{m}}$ corresponding to the points $p_1, \dots, p_{m}$ of the vector-valued function $\boldsymbol f = [f_1, \dots, f_{N}]$ are evaluated at $\boldsymbol q \in \mathbb{R}^{N}$ and $\boldsymbol \mu \in \mathcal{D}$. The empirical-interpolation approximation of $\boldsymbol f$ is \[ \tilde{\bff}(\tilde{\bfq}; \boldsymbol \mu) = (\boldsymbol P^T\boldsymbol V)^{\dagger}\boldsymbol P^T\boldsymbol f(\boldsymbol V\tilde{\boldsymbol q}; \boldsymbol \mu) \] where $(\boldsymbol P^T\boldsymbol V)^{\dagger}$ denotes the Moore--Penrose inverse (pseudoinverse) of $\boldsymbol P^T\boldsymbol V$. Based on the empirical-interpolation approximation $\tilde{\bff}$, we derive the static reduced model \[ \tilde{\boldsymbol q}_k(\boldsymbol \mu) = \tilde{\boldsymbol f}(\tilde{\boldsymbol q}_{k - 1}(\boldsymbol \mu); \boldsymbol \mu)\,,\qquad k = 1, \dots, K\,, \] with the reduced states $\tilde{\bfq}_k(\boldsymbol \mu)$ at time steps $k = 1, \dots, K$. \subsection{The Kolmogorov barrier of static reduced models}\label{sec:ProblemFormulation} The empirical-interpolation approximation $\tilde{\boldsymbol f}$ depends on the basis matrix $\boldsymbol V$, which is fixed over all time steps $k = 1, \dots, K$. This means that the reduced approximation $\tilde{\boldsymbol q}_k$ at time $k$ depends linearly on the basis vectors of the reduced space $\mathcal{V}$, which are the columns of the basis matrix $\boldsymbol V$. Thus, the lowest error that such a static reduced model can achieve is related to the Kolmogorov $n$-width, i.e., the best-approximation error in any subspace of dimension $n$. We refer to \cite{Notices} for an overview of the Kolmogorov barrier in model reduction and to \cite{10.1093/imanum/dru066,MADAY2002289,Ohlberger16} for in-depth details. 
It has been observed empirically, and in some limited cases proven, that systems that are governed by dynamics with strong advection and transport exhibit a slowly decaying $n$-width, which means that linear, static reduced models are inefficient in providing accurate predictions. \section{Online adaptive empirical interpolation methods for nonlinear model reduction}\label{sec:AADEIM} In this work, we apply online adaptive model reduction methods to problems motivated by chemically reacting flows, which are often dominated by advection and complex transport dynamics that make traditional static reduced models inefficient. We focus on reduced models obtained with AADEIM, which builds on the online adaptive empirical interpolation method \cite{Peherstorfer15aDEIM} for adapting the basis and on the adaptive sampling scheme introduced in \cite{P18AADEIM,CKMP19ADEIMQuasiOptimalPoints}. We utilize the one-dimensional compressible reacting flow solver PERFORM \cite{chris_wentland_2021_5517532}, which provides several benchmark problems motivated by combustion applications. We will consider a model premixed flame problem with artificial pressure forcing and show that AADEIM provides reduced models that can accurately predict the flame dynamics over time, whereas traditional static reduced models fail to make meaningful predictions. For ease of exposition, we drop the dependence of the states $\boldsymbol q_k(\boldsymbol \mu), k = 1, \dots, K$ on the parameter $\boldsymbol \mu$ in this section. \subsection{Adapting the basis}\label{sec:ADEIMAdaptBasis} To allow adapting the reduced space over time, we formally make the basis matrix $\boldsymbol V_k$ and the points matrix $\boldsymbol P_k$ with points $p_1^{(k)}, \dots, p_{m}^{(k)} \in \{1, \dots, N\}$ depend on the time step $k = 1, \dots, K$. 
In AADEIM, the basis matrix $\boldsymbol V_k$ is adapted at time step $k$ to the basis matrix $\boldsymbol V_{k + 1}$ via a rank-one update \[ \boldsymbol V_{k + 1} = \boldsymbol V_k + \boldsymbol \alpha_k\boldsymbol \beta_k^T\,, \] with $\boldsymbol \alpha_k \in \mathbb{R}^{N}$ and $\boldsymbol \beta_k \in \mathbb{R}^{n}$. To compute an update $\boldsymbol \alpha_k\boldsymbol \beta_k^T$ at time step $k$, we introduce the data matrix $\boldsymbol F_k \in \mathbb{R}^{N \times w}$, where $w \in \mathbb{N}$ is a window size. First, similarly to the empirical-interpolation points, we consider $m_s \leq N$ sampling points $s_1^{(k)}, \dots, s_{m_s}^{(k)} \in \{1, \dots, N\}$ and the corresponding sampling matrix $\boldsymbol S_k = [\boldsymbol e_{s_1^{(k)}}, \dots, \boldsymbol e_{s_{m_s}^{(k)}}] \in \mathbb{R}^{N \times m_s}$. We additionally consider the complement set of sampling points $\{1, \dots, N\} \setminus \{s_1^{(k)}, \dots, s_{m_s}^{(k)}\}$ and the corresponding matrix $\breve{\bfS}_k$. We will also need the matrix corresponding to the union of the set of sampling points $\{s_1^{(k)}, \dots, s_{m_s}^{(k)}\}$ and the points $\{p_1, \dots, p_{m}\}$, which we denote with $\boldsymbol G_k$, and its complement $\breve{\boldsymbol G}_k$. The data matrix $\boldsymbol F_k$ at time step $k$ is then given by \[ \boldsymbol F_k = [\hat{\bfq}_{k - w + 1}, \dots, \hat{\bfq}_{k}] \in \mathbb{R}^{N \times w}\,, \] where we add the vector $\hat{\bfq}_k$ that is defined as \begin{equation} \boldsymbol G^T_k\hat{\bfq}_k = \boldsymbol G^T_k\boldsymbol f(\boldsymbol V_k\tilde{\bfq}_{k-1})\,,\quad \breve{\boldsymbol G}^T_k\hat{\bfq}_k = \breve{\boldsymbol G}^T_k\boldsymbol V_k(\boldsymbol G_k^T\boldsymbol V_k)^{\dagger}\boldsymbol G_k^T\boldsymbol f(\boldsymbol V_k\tilde{\bfq}_{k-1})\,. \label{eq:FillRHSMatrixVector} \end{equation} The state $\tilde{\bfq}_{k-1}$ used in \eqref{eq:FillRHSMatrixVector} is the reduced state at time step $k - 1$.
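As an illustration of \eqref{eq:FillRHSMatrixVector}, the following numpy sketch assembles $\hat{\bfq}_k$ from right-hand side samples at the points in $\boldsymbol G_k$: exact values at the sampled indices, empirical-interpolation reconstruction elsewhere. The function name and the test data are our own; the interpolation is written densely for readability.

```python
import numpy as np

def lifted_state(f_g, V, g):
    """Assemble q_hat from sparse right-hand side samples f_g = f(V q_tilde)[g]:
    keep the exact values at the sampled indices g and fill all other
    indices with the reconstruction V (G^T V)^+ f_g."""
    q_hat = V @ (np.linalg.pinv(V[g, :]) @ f_g)   # reconstruction at all indices
    q_hat[g] = f_g                                # exact values at sampled indices
    return q_hat

# if f(V q_tilde) happens to lie in the span of V, the assembly is exact
rng = np.random.default_rng(1)
V, _ = np.linalg.qr(rng.standard_normal((200, 6)))
g = np.arange(12)                  # oversampled: 12 points for n = 6 basis vectors
c = rng.standard_normal(6)
q_hat = lifted_state((V @ c)[g], V, g)
```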
The vector $\hat{\bfq}_k$ at time step $k$ serves as an approximation of the full-model state $\boldsymbol q_k$ at time $k$. This is motivated by the full-model equations \eqref{eq:FOM} with the reduced state $\tilde{\bfq}_{k-1}$ as an approximation of the full-model state $\boldsymbol q_{k-1}$ at time step $k - 1$; we refer to \cite{P18AADEIM} for details about this motivation. The AADEIM basis update $\boldsymbol \alpha_k\boldsymbol \beta_k^T$ at time step $k$ is the solution to the minimization problem \begin{equation} \min_{\boldsymbol \alpha_k \in \mathbb{R}^{N},\, \boldsymbol \beta_k \in \mathbb{R}^{n}}\, \left\|(\boldsymbol V_k + \boldsymbol \alpha_k\boldsymbol \beta_k^T)\boldsymbol C_k - \boldsymbol F_k\right\|_F^2\,,\label{eq:ADEIMUpdate} \end{equation} where the coefficient matrix is \begin{equation} \boldsymbol C_k = \boldsymbol V_k^T\boldsymbol F_k. \label{eq:CoeffMat} \end{equation} The matrix $\boldsymbol P_k$ is adapted to $\boldsymbol P_{k + 1}$ by applying QDEIM \cite{QDEIM} to the adapted basis matrix $\boldsymbol V_{k + 1}$. We make two modifications compared to the original AADEIM approach. First, we sample from the points given by $\boldsymbol G_k$, which is the union of the sampling points $\{s_1^{(k)}, \dots, s_{m_s}^{(k)}\}$ and the points $\{p_1, \dots, p_{m}\}$. This comes with no extra costs because the full-model right-hand side function needs to be evaluated at the points corresponding to $\boldsymbol S_k$ and $\boldsymbol P_k$ even in the original AADEIM approach. Second, as proposed in \cite{ChengADEIMImprovements}, we adapt the basis at all components of the residual in the objective, rather than only at the sampling points given by $\boldsymbol S_k$. This requires no additional full-model right-hand side function evaluations but comes with increased computational costs when solving the optimization problem \eqref{eq:ADEIMUpdate}.
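The minimization problem \eqref{eq:ADEIMUpdate} can be solved in closed form; see \cite{Peherstorfer15aDEIM} for the derivation of the update equations. The following dense numpy sketch, our own illustration, computes a rank-one update by projecting the residual onto the row space of $\boldsymbol C_k$ and taking its leading singular pair.

```python
import numpy as np

def adeim_rank_one_update(V, F):
    """Rank-one basis update minimizing ||(V + a b^T) C - F||_F with C = V^T F."""
    C = V.T @ F
    R = F - V @ C                        # residual of the current basis on the window
    C_pinv = np.linalg.pinv(C)
    # best rank-one correction lives in the row space of C
    U, s, Wt = np.linalg.svd(R @ (C_pinv @ C), full_matrices=False)
    a = s[0] * U[:, 0]                   # alpha_k
    b = C_pinv.T @ Wt[0, :]              # beta_k, solves C^T b = leading right sv
    return a, b

rng = np.random.default_rng(2)
V, _ = np.linalg.qr(rng.standard_normal((200, 6)))
F = rng.standard_normal((200, 7))        # data window with w = n + 1 columns
a, b = adeim_rank_one_update(V, F)
C = V.T @ F
```

By construction, the updated basis $\boldsymbol V + \boldsymbol \alpha\boldsymbol \beta^T$ never increases the objective relative to the unadapted basis.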
However, the cost of solving the optimization problem is typically negligible compared to the cost of sparsely evaluating the full-model right-hand side function. \subsection{Adapting sampling points}\label{sec:ADEIM:AdaptSamplingPoints} When adapting the sampling matrix $\boldsymbol S_{k-1}$ to $\boldsymbol S_k$ at time step $k$, we evaluate the full-model right-hand side function $\boldsymbol f$ at all $N$ components to obtain \begin{equation}\label{eq:SamplingUpdateQk} \hat{\bfq}_k = \boldsymbol f(\boldsymbol V_k\tilde{\bfq}_{k-1}) \end{equation} and insert it as a column into the data matrix $\boldsymbol F_k$. We then compute the residual matrix \[ \boldsymbol R_k = \boldsymbol F_k - \boldsymbol V_k(\boldsymbol P_k^T\boldsymbol V_k)^{\dagger}\boldsymbol P_k^T\boldsymbol F_k\,. \] Let $r_k^{(i)}$ denote the 2-norm of the $i$-th row of $\boldsymbol R_k$ and let $i_1, \dots, i_{N}$ be an ordering such that \[ r^{(i_1)}_k \geq \dots \geq r^{(i_{N})}_k\,. \] At time step $k$, we pick the first $m_s$ indices $i_1 = s_1^{(k)}, \dots, i_{m_s} = s_{m_s}^{(k)}$ as the sampling points to form $\boldsymbol S_k$, which is subsequently used to adapt the basis matrix from $\boldsymbol V_k$ to $\boldsymbol V_{k + 1}$. Two remarks are in order. First, the sampling points are quasi-optimal with respect to an upper bound of the adaptation error \cite{CKMP19ADEIMQuasiOptimalPoints}. Second, adapting the sampling points requires evaluating the residual at all $i = 1, \dots, N$ components, which incurs computational costs that scale with the dimension $N$ of the full-model states. To limit these costs, we adapt the sampling points not at every time step but only at every $z$-th time step, as proposed in \cite{P18AADEIM}.
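This selection step can be sketched in a few lines of numpy (our own illustration): rank the rows of the empirical-interpolation residual of the data window by their 2-norms and keep the $m_s$ largest.

```python
import numpy as np

def adapt_sampling_points(F, V, p, m_s):
    """Rank the rows of the DEIM residual of the window F by 2-norm and
    return the m_s largest as sampling points, plus the complement set."""
    R = F - V @ (np.linalg.pinv(V[p, :]) @ F[p, :])   # residual matrix R_k
    order = np.argsort(-np.linalg.norm(R, axis=1))    # descending row norms
    return order[:m_s], order[m_s:]

# rows where the window is badly approximated are picked first
rng = np.random.default_rng(3)
V, _ = np.linalg.qr(rng.standard_normal((200, 6)))
F = V @ rng.standard_normal((6, 7))   # window that lies in the span of V ...
F[10, :] += 100.0                     # ... except for a large defect in row 10
s, s_comp = adapt_sampling_points(F, V, np.arange(6), m_s=32)
```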
\begin{algorithm}[t] \caption{AADEIM algorithm}\label{alg:ABAS} \begin{algorithmic}[1] \Procedure{AADEIM}{$\boldsymbol q_0, \boldsymbol f, \boldsymbol \mu, n, w_{\text{init}}, w, m_s, z$} \State Solve full model for $w_{\text{init}}$ time steps $\boldsymbol Q = \texttt{solveFOM}(\boldsymbol q_0, \boldsymbol f, \boldsymbol \mu)$\label{alg:ABAS:SolveFOM} \State Set $k = w_{\text{init}}+1$\label{alg:ABAS:StartROMInit} \State Compute $n$-dimensional POD basis $\boldsymbol V_k$ of $\boldsymbol Q$\label{alg:ABAS:PODBasisConstruction} \State Compute QDEIM interpolation points $\boldsymbol p_k = \texttt{qdeim}(\boldsymbol V_k)$ \State Initialize $\boldsymbol F = \boldsymbol Q[:, k-w+1:k-1]$ and $\tilde{\boldsymbol q}_{k - 1} = \boldsymbol V_k^T\boldsymbol Q[:, k - 1]$\label{alg:ABAS:EndROMInit} \For{$k = w_{\text{init}} + 1, \dots, K$}\label{alg:ABAS:ROMLoop} \State Solve $\tilde{\boldsymbol q}_{k} = \tilde{\boldsymbol f}(\tilde{\boldsymbol q}_{k-1}; \boldsymbol \mu)$ with DEIM, using basis matrix $\boldsymbol V_k$ and points $\boldsymbol p_k$\label{alg:ABAS:SolveROM} \State Store $\boldsymbol Q[:, k] = \boldsymbol V_k\tilde{\boldsymbol q}_k$ \If{$\operatorname{mod}(k, z) == 0 || k == w_{\text{init}} + 1$}\label{alg:ABAS:IfSampling} \State Compute $\boldsymbol F[:, k] = \boldsymbol f(\boldsymbol Q[:, k]; \boldsymbol \mu)$\label{alg:ABAS:AdaptSamplingPointsStart} \State $\boldsymbol R_k = \boldsymbol F[:, k - w + 1:k] - \boldsymbol V_k(\boldsymbol V_k[\boldsymbol p_k, :])^{-1}\boldsymbol F[\boldsymbol p_k, k - w + 1:k]$ \State $[\sim, \boldsymbol s_k] = \texttt{sort}(\texttt{sum}(\boldsymbol R_k.\widehat{~~}2, 2), \text{'descend'})$ \State Set $\breve{\boldsymbol s}_k = \boldsymbol s_k[m_s+1:\text{end}]$ and $\boldsymbol s_k = \boldsymbol s_k[1:m_s]$\label{alg:ABAS:AdaptSamplingPointsEnd} \Else \State Set $\boldsymbol s_k = \boldsymbol s_{k-1}$ and $\breve{\boldsymbol s}_k = \breve{\boldsymbol s}_{k - 1}$ \State Take the union of points in $\boldsymbol s_k$ and $\boldsymbol p_k$ to get $\boldsymbol g_k$ and complement $\breve{\boldsymbol g}_k$ \State Compute $\boldsymbol F[\boldsymbol g_k, k] = \boldsymbol f(\boldsymbol Q[\boldsymbol g_k, k]; \boldsymbol \mu)$\label{alg:ABAS:EvalResidualAtSamplingPoints} \State Approximate $\boldsymbol F[\breve{\boldsymbol g}_k, k] = \boldsymbol V_k[\breve{\boldsymbol g}_k, :](\boldsymbol V_k[\boldsymbol g_k, :])^{-1}\boldsymbol F[\boldsymbol g_k, k]$\label{alg:ABAS:ApproxResidualAtSamplingPoints} \EndIf \State Compute update $\boldsymbol \alpha_k, \boldsymbol \beta_k$ by solving \eqref{eq:ADEIMUpdate} for $\boldsymbol F[:, k - w + 1:k]$ and $\boldsymbol V_k$\label{alg:ABAS:CompBasisUpdate} \State Adapt basis $\boldsymbol V_{k + 1} = \boldsymbol V_k + \boldsymbol \alpha_k\boldsymbol \beta_k^T$ and orthogonalize $\boldsymbol V_{k + 1}$\label{alg:ABAS:ApplyBasisUpdate} \State Compute points $\boldsymbol p_{k + 1}$ by applying QDEIM to $\boldsymbol V_{k + 1}$ \label{alg:ABAS:AdaptP} \EndFor\\ \Return trajectory $\boldsymbol Q$ \EndProcedure \end{algorithmic} \end{algorithm} \subsection{Computational procedure and costs} By combining the basis adaptation described in Section~\ref{sec:ADEIMAdaptBasis} and the sampling-points adaptation of Section~\ref{sec:ADEIM:AdaptSamplingPoints}, we obtain the AADEIM reduced model \[ \tilde{\bfq}_k = \tilde{\bff}_k(\tilde{\bfq}_{k-1}; \boldsymbol \mu)\,,\qquad k = 1, \dots, K\,, \] where now the approximation $\tilde{\bff}_k: \mathbb{R}^{n} \times \mathcal{D} \to \mathbb{R}^{n}$ is \[ \tilde{\bff}_k(\tilde{\bfq}_{k-1}; \boldsymbol \mu) = (\boldsymbol P^T_k\boldsymbol V_k)^{\dagger}\boldsymbol P_k^T\boldsymbol f(\boldsymbol V_k\tilde{\bfq}_{k-1}; \boldsymbol \mu)\,, \] which depends on the time step $k = 1, \dots, K$ because the basis $\boldsymbol V_k$ and the points matrix $\boldsymbol P_k$ depend on the time step. The AADEIM algorithm is summarized in Algorithm~\ref{alg:ABAS}.
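For concreteness, the loop of Algorithm~\ref{alg:ABAS} can be sketched in a few dozen lines of numpy/scipy on a toy advecting full model. This is our own simplified illustration, not the PERFORM implementation: there is no parameter $\boldsymbol \mu$, the right-hand side and the residual are evaluated densely (so the sampling-point machinery with $m_s$ and $z$ is omitted), and the rank-one update is computed from the leading singular pair of the residual projected onto the row space of the coefficient matrix.

```python
import numpy as np
from scipy.linalg import qr

def qdeim_points(V):
    # column-pivoted QR of V^T selects the interpolation points (QDEIM)
    _, _, piv = qr(V.T, pivoting=True)
    return piv[: V.shape[1]]

def aadeim(q0, f, n, w_init, w, K):
    """Toy AADEIM loop: the first w_init + 1 trajectory columns come from
    the full model, the remaining ones from the adaptive reduced model."""
    N = q0.shape[0]
    Q = np.zeros((N, K + 1))
    Q[:, 0] = q0
    for k in range(1, w_init + 1):              # initialization with full solves
        Q[:, k] = f(Q[:, k - 1])
    U, _, _ = np.linalg.svd(Q[:, : w_init + 1], full_matrices=False)
    V = U[:, :n]                                # initial POD basis
    p = qdeim_points(V)
    F = Q[:, w_init - w + 2 : w_init + 1].copy()   # data window, w - 1 columns
    qt = V.T @ Q[:, w_init]
    for k in range(w_init + 1, K + 1):
        qt = np.linalg.pinv(V[p, :]) @ f(V @ qt)[p]     # DEIM time step
        Q[:, k] = V @ qt
        F = np.column_stack([F[:, -(w - 1):], f(Q[:, k])])  # slide the window
        # rank-one basis update from the window, then re-orthonormalize
        C = V.T @ F
        R = F - V @ C
        C_pinv = np.linalg.pinv(C)
        U2, s2, Wt = np.linalg.svd(R @ (C_pinv @ C), full_matrices=False)
        V = np.linalg.qr(V + np.outer(s2[0] * U2[:, 0], C_pinv.T @ Wt[0, :]))[0]
        p = qdeim_points(V)                     # adapt the interpolation points
    return Q

# toy full model: a slowly decaying profile advected by one cell per step
x = np.linspace(0.0, 1.0, 128)
q0 = np.exp(-((x - 0.2) / 0.05) ** 2)
f = lambda q: 0.999 * np.roll(q, 1)
Q = aadeim(q0, f, n=6, w_init=10, w=7, K=40)
```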
Inputs to the algorithm are the initial condition $\boldsymbol q_0 \in \mathcal{Q}_0$, the full-model right-hand side function $\boldsymbol f$, the parameter $\boldsymbol \mu$, the reduced dimension $n$, the initial window size $w_{\text{init}}$, the window size $w$, and the frequency $z$ of updating the sampling points. The algorithm returns the trajectory $\boldsymbol Q \in \mathbb{R}^{N \times K}$. Lines~\ref{alg:ABAS:SolveFOM}--\ref{alg:ABAS:EndROMInit} initialize the AADEIM reduced model by first solving the full model for $w_{\text{init}}$ time steps to compute the snapshots, which are stored in the columns of the matrix $\boldsymbol Q$. From these $w_{\text{init}}$ snapshots, a POD basis matrix $\boldsymbol V_k$ for time step $k = w_{\text{init}} + 1$ is constructed. The time-integration loop starts in line~\ref{alg:ABAS:ROMLoop}. In each iteration $k = w_{\text{init}} + 1, \dots, K$, the reduced state $\tilde{\bfq}_{k - 1}$ is propagated forward to obtain $\tilde{\bfq}_k$ by solving the AADEIM reduced model for one time step. The first branch of the \texttt{if} clause in line~\ref{alg:ABAS:IfSampling} is entered if the sampling points are to be updated, which is the case every $z$-th time step. If the sampling points are updated, the full-model right-hand side function is evaluated at all $N$ components to compute the residual matrix $\boldsymbol R_k$. The new sampling points are the indices of the $m_s$ rows of the residual matrix $\boldsymbol R_k$ with the largest 2-norms. If the sampling points are not updated, the full-model right-hand side function $\boldsymbol f$ is evaluated only at the points corresponding to $\boldsymbol s_k$ and $\boldsymbol p_k$. All other components are approximated with empirical interpolation.
In lines~\ref{alg:ABAS:CompBasisUpdate} and~\ref{alg:ABAS:ApplyBasisUpdate}, the basis update $\boldsymbol \alpha_k,\boldsymbol \beta_k$ is computed and then used to obtain the adapted basis matrix $\boldsymbol V_{k + 1} = \boldsymbol V_k + \boldsymbol \alpha_k\boldsymbol \beta_k^T$. The points $\boldsymbol p_k$ are adapted to $\boldsymbol p_{k + 1}$ by applying QDEIM to the adapted basis matrix $\boldsymbol V_{k + 1}$ in line~\ref{alg:ABAS:AdaptP}. The method \texttt{solveFOM()} refers to the full-model solver and the method \texttt{qdeim()} to QDEIM \cite{QDEIM}. \section{Benchmarks of chemically reacting flow problems}\label{sec:PERFORM} A collection of benchmarks for model reduction of transport-dominated problems is provided with PERFORM \cite{chris_wentland_2021_5517532}. Documentation of the code and benchmark problems is available online\footnote{\url{https://perform.readthedocs.io/}}. The benchmarks are motivated by combustion processes and modeled after the General Equations and Mesh Solver (GEMS), which provides a reacting flow solver in three spatial dimensions \cite{10.1007/3-540-31801-1_89}.
\begin{figure}[p] \begin{tabular}{ccc} \resizebox{0.33\columnwidth}{!}{\Huge\input{figures/FOM_snapshot_t7.5e-06_pressure}} & \resizebox{0.33\columnwidth}{!}{\Huge\input{figures/FOM_snapshot_t2e-05_pressure}} & \resizebox{0.33\columnwidth}{!}{\Huge\input{figures/FOM_snapshot_t3e-05_pressure}}\\ (a) pressure, $t = 7.5\times 10^{-6}$ & (b) pressure, $t = 2\times 10^{-5}$ & (c) pressure, $t = 3\times 10^{-5}$\\ \resizebox{0.33\columnwidth}{!}{\Huge\input{figures/FOM_snapshot_t7.5e-06_velocity}} & \resizebox{0.33\columnwidth}{!}{\Huge\input{figures/FOM_snapshot_t2e-05_velocity}} & \resizebox{0.33\columnwidth}{!}{\Huge\input{figures/FOM_snapshot_t3e-05_velocity}}\\ (d) velocity, $t = 7.5\times 10^{-6}$ & (e) velocity, $t = 2\times 10^{-5}$ & (f) velocity, $t = 3\times 10^{-5}$\\ \resizebox{0.33\columnwidth}{!}{\Huge\input{figures/FOM_snapshot_t7.5e-06_temperature}} & \resizebox{0.33\columnwidth}{!}{\Huge\input{figures/FOM_snapshot_t2e-05_temperature}} & \resizebox{0.33\columnwidth}{!}{\Huge\input{figures/FOM_snapshot_t3e-05_temperature}}\\ (g) temperature, $t = 7.5\times 10^{-6}$ & (h) temperature, $t = 2\times 10^{-5}$ & (i) temperature, $t = 3\times 10^{-5}$\\ \resizebox{0.33\columnwidth}{!}{\Huge\input{figures/FOM_snapshot_t7.5e-06_massfrac}} & \resizebox{0.33\columnwidth}{!}{\Huge\input{figures/FOM_snapshot_t2e-05_massfrac}} & \resizebox{0.33\columnwidth}{!}{\Huge\input{figures/FOM_snapshot_t3e-05_massfrac}}\\ (j) mass fraction, $t = 7.5\times 10^{-6}$ & (k) mass fraction, $t = 2\times 10^{-5}$& (l) mass fraction, $t = 3\times 10^{-5}$ \end{tabular} \caption{The states of the full model of the premixed flame problem. 
The sharp temperature and species gradients, and multiscale interactions between acoustics and flame, indicate that traditional static reduced models become inefficient.} \label{fig:FOMVis} \end{figure} \subsection{Numerical solver description} PERFORM numerically solves the one-dimensional Navier--Stokes equations with chemical species transport and a chemical reaction source term: \[ \frac{\partial}{\partial t}q(t, x) + \frac{\partial}{\partial x}\left(f(t, x) - f_v(t, x)\right) = f_s(t, x)\,, \] with \begin{equation} q = \begin{bmatrix}\rho\\\rho u\\\rho h^0 - p\\\rho Y_l\end{bmatrix}\,,\quad f = \begin{bmatrix}\rho u\\\rho u^2 + p\\\rho h^0 u\\\rho Y_l u\end{bmatrix}\,,\quad f_v = \begin{bmatrix}0\\\tau \\u\tau - q\\ -\rho V_l Y_l\end{bmatrix}\,,\quad f_s = \begin{bmatrix}0\\0\\0\\\dot{\omega}_l\end{bmatrix}\,, \label{eq:QuantitiesOfPDE} \end{equation} where $q$ is the conserved state at time $t$ and spatial coordinate $x$, $f$ is the inviscid flux vector, $f_v$ is the viscous flux vector, and $f_s$ is the source term. Additionally, $\rho$ is density, $u$ is velocity, $h^0$ is stagnation enthalpy, $p$ is static pressure, and $Y_l$ is the mass fraction of the $l$th chemical species. The reaction source $\dot{\omega}_l$ corresponds to the reaction model, which is described by an irreversible Arrhenius rate equation. The problem is discretized in the spatial domain with a second-order accurate finite volume scheme. The inviscid flux is computed by the Roe scheme~\cite{Roe1981}. Gradients are limited by the Venkatakrishnan limiter~\cite{VENKATAKRISHNAN1993}. The time derivative is discretized with the first-order backward differentiation formula (i.e., backward Euler). The calculation of the viscous stress $\tau$, the heat flux $q$, and the diffusion velocity $V_l$, as well as additional implementation details, can be found in PERFORM's online documentation.
\subsection{Premixed flame with artificial forcing}\label{sec:PERFORM:Benchmark} We consider a setup corresponding to a model premixed flame with artificial pressure forcing. There are two chemical species: ``reactant'' and ``product''. The reaction is a single-step irreversible mechanism that converts low-temperature reactant to high-temperature product, modeling a premixed combustion process. An artificial sinusoidal pressure forcing is applied at the outlet, which causes an acoustic wave to propagate upstream. The interaction between the different length and time scales of the system acoustics caused by the forcing and of the flame leads to strongly nonlinear system dynamics with multiscale effects. The result is dynamics that evolve in high-dimensional spaces and thus are inefficient to approximate with static reduced models; see~Section~\ref{sec:ProblemFormulation}. The states of the full model and how they evolve over time are shown in Figure~\ref{fig:FOMVis}. \section{Numerical results}\label{sec:NumExp} We demonstrate nonlinear model reduction with online adaptive empirical interpolation on the model premixed flame problem introduced in Section~\ref{sec:PERFORM:Benchmark}.
\begin{figure}[t] \begin{tabular}{cc} \includegraphics[width=0.48\linewidth]{figures/FOM_pressure} & \hspace*{-0.2cm}\includegraphics[width=0.48\linewidth]{figures/FOM_velocity}\\ (a) pressure & (b) velocity\\ \includegraphics[width=0.48\linewidth]{figures/FOM_temperature} & \hspace*{-0.2cm}\includegraphics[width=0.48\linewidth]{figures/FOM_massfraction}\\ (c) temperature & (d) species mass fraction \end{tabular} \caption{Full model: Plot (a) shows oscillations of pressure waves, which is evidence of the transport-dominated dynamics in this benchmark example.} \label{fig:FOM2D} \end{figure} \begin{figure}[t] \begin{tabular}{cc} \resizebox{0.5\columnwidth}{!}{\LARGE\input{figures/singVal}} & \resizebox{0.5\columnwidth}{!}{\LARGE\input{figures/singVal_local}}\\ (a) global in time & (b) local in time \end{tabular} \caption{Singular values: Plot (a) shows the slow decay of the normalized singular values of the snapshots. Plot (b) shows that the normalized singular values computed from local trajectories in time decay faster.} \label{fig:SingVal} \end{figure} \subsection{Numerical setup of full model} Consider the problem described in Section~\ref{sec:PERFORM:Benchmark}. Each of the four conserved quantities $\rho$, $\rho u$, $\rho h^0 - p$, $\rho Y_1$ is discretized on $512$ equidistant grid points in the domain $\Omega = [0, 10^{-2}]$ m, which leads to a total of $N = 4 \times 512 = 2{,}048$ unknowns of the full model. The time-step size is $\delta t =1 \times 10^{-9}$ s and the end time is $T = 3.5 \times 10^{-5}$ s, which is a total of $35{,}000$ time steps. A 10\% sinusoidal pressure perturbation at a frequency of 50 kHz is applied at the outlet. Space-time plots of pressure, velocity, temperature, and species mass fraction are shown in Figure~\ref{fig:FOM2D}. The pressure and velocity fields in plots (a) and (b) of Figure~\ref{fig:FOM2D} exhibit transport-dominated behavior, which is in agreement with the pressure and velocity waves shown in Figure~\ref{fig:FOMVis}.
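The qualitative behavior of globally versus locally computed singular values reported in Figure~\ref{fig:SingVal} can be reproduced with a generic traveling-wave toy example (our own construction, not the PERFORM data): the singular values of the global snapshot matrix decay slowly, while those of a short window in time decay quickly, which is what motivates adapting the reduced space.

```python
import numpy as np

def normalized_singular_values(Q):
    # singular values of the snapshot matrix, normalized by the largest one
    s = np.linalg.svd(Q, compute_uv=False)
    return s / s[0]

# snapshots of a sharp profile advected across the domain
x = np.linspace(0.0, 1.0, 512)
snapshots = np.stack(
    [np.exp(-((x - 0.05 - 0.9 * t) / 0.02) ** 2) for t in np.linspace(0.0, 1.0, 200)],
    axis=1,
)
sv_global = normalized_singular_values(snapshots)         # slow decay
sv_local = normalized_singular_values(snapshots[:, :10])  # fast decay on a window
```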
\begin{figure}[t] \begin{tabular}{cc} \includegraphics[width=0.48\linewidth]{figures/StaticROM_pressure_rdim6} & \hspace*{-0.2cm}\includegraphics[width=0.48\linewidth]{figures/StaticROM_velocity_rdim6}\\ (a) pressure & (b) velocity\\ \includegraphics[width=0.48\linewidth]{figures/StaticROM_temperature_rdim6} & \hspace*{-0.2cm}\includegraphics[width=0.48\linewidth]{figures/StaticROM_massfraction_rdim6}\\ (c) temperature & (d) species mass fraction \end{tabular} \caption{Static reduced model of dimension $n = 6$: Because of the coupling of dynamics over various length scales in the premixed flame example, a static reduced model is insufficient to provide accurate approximations of the full-model dynamics shown in Figure~\ref{fig:FOM2D}. The time-space plot corresponding to the static reduced model with $n = 7$ is not shown here because it provides a comparably poor approximation.} \label{fig:StaticSpaceTimeD6} \end{figure} \subsection{Static reduced models}\label{sec:NumExp:StaticROM} Snapshots are generated with the full model by querying \eqref{eq:FOM} for $35{,}000$ time steps and storing the full-model state every 50 time steps. The singular values of the snapshot matrix, shown in Figure~\ref{fig:SingVal}a, decay slowly, which indicates that static reduced models are inefficient. From the snapshots, we generate POD bases $\boldsymbol V$ of dimension $n = 6$ and $n = 7$ to derive reduced models with empirical interpolation. The interpolation points are selected with QDEIM \cite{QDEIM}, computed separately for each of the four variables in \eqref{eq:QuantitiesOfPDE}. A component of the state $\boldsymbol q$ then serves as an empirical interpolation point if it is selected for at least one variable. The time-space plots of the static reduced approximation of the full-model dynamics are shown in Figure~\ref{fig:StaticSpaceTimeD6} for dimension $n = 6$.
The approximation is poor, which is in agreement with the transport-dominated dynamics and the slow decay of the singular values shown in Figure~\ref{fig:SingVal}a. The time-space plot of the static reduced model of dimension $n = 7$ gives comparably poor approximations and is not shown here. It is important to note that the static reduced model is derived from snapshots over the whole time range from $t = 0$ to end time $T = 3.5 \times 10^{-5}$, which means that the static reduced model has to merely reconstruct the dynamics that were seen during training, rather than predicting unseen dynamics. This is in stark contrast to the adaptive reduced model derived with AADEIM in the following subsection, where the reduced model will predict states far outside of the training window. \begin{figure}[t] \begin{tabular}{cc} \includegraphics[width=0.48\linewidth]{figures/ADEIMROM_pressure_dim6_ae2_uf3_iw15_ws7_res1024_AFDEIM_dt1e-09} & \hspace*{-0.2cm}\includegraphics[width=0.48\linewidth]{figures/ADEIMROM_velocity_dim6_ae2_uf3_iw15_ws7_res1024_AFDEIM_dt1e-09}\\ (a) pressure & (b) velocity\\ \includegraphics[width=0.48\linewidth]{figures/ADEIMROM_temperature_dim6_ae2_uf3_iw15_ws7_res1024_AFDEIM_dt1e-09} & \hspace*{-0.2cm}\includegraphics[width=0.48\linewidth]{figures/ADEIMROM_massfraction_dim6_ae2_uf3_iw15_ws7_res1024_AFDEIM_dt1e-09}\\ (c) temperature & (d) species mass fraction \end{tabular} \caption{The online adaptive reduced model with $n = 6$ dimensions obtained with AADEIM provides accurate predictions of the full-model dynamics shown in Figure~\ref{fig:FOM2D}. The training snapshots used for initializing the AADEIM model cover dynamics up to time $t = 1.6 \times 10^{-8}$ and thus all states later in time up to end time $T = 3.5 \times 10^{-5}$ are predictions outside of the training data.} \label{fig:ADEIM62DPlotHigh} \end{figure} \subsection{Reduced model with AADEIM} We derive a reduced model with AADEIM of dimension $n = 6$. 
The initial window size is $w_{\text{init}} = 15$ and the window size is $w = n + 1 = 7$, as recommended in \cite{P18AADEIM}. Notice that an initial window $w_{\text{init}} = 15$ means that the AADEIM model predicts unseen dynamics (outside of training data) starting at time step $k = 16$, which corresponds to $t = 1.6 \times 10^{-8}$. This is in stark contrast to Section~\ref{sec:NumExp:StaticROM} where the static reduced model only has to reconstruct seen dynamics. The number of sampling points is $m_s = 1{,}024$ and the frequency of adapting the sampling points is $z = 3$. This means that the sampling points are adapted every third time step; see~Algorithm~\ref{alg:ABAS}. The basis matrix $\boldsymbol V_k$ and the points matrix $\boldsymbol P_k$ are adapted every other time step. \begin{figure}[t] \begin{tabular}{cc} \resizebox{0.5\columnwidth}{!}{\huge\input{figures/RelErrBar}} & \resizebox{0.5\columnwidth}{!}{\huge\input{figures/NEvalsBar}}\\ (a) error & (b) costs \end{tabular}\\\centering\begin{minipage}{1.0\columnwidth}\fbox{\begin{tabular}{rl} A: & dimension $n = 6$, update frequency $z = 4$, \#sampling points $m_s = 768 $ \\ B: & dimension $n=6$, update frequency $z = 3$, \#sampling points $m_s = 768$ \\ C: & dimension $n=6$, update frequency $z = 4$, \#sampling points $m_s = 1024$ \\ D: & dimension $n=6$, update frequency $z = 3$, \#sampling points $m_s = 1024$ \\ E: & dimension $n=7$, update frequency $z = 4$, \#sampling points $m_s = 768$ \\ F: & dimension $n=7$, update frequency $z = 3$, \#sampling points $m_s = 768$ \\ G: & dimension $n=7$, update frequency $z = 4$, \#sampling points $m_s = 1024$ \\ H: & dimension $n=7$, update frequency $z = 3$, \#sampling points $m_s = 1024$ \end{tabular}}~\\~\\~\\\end{minipage} \caption{Performance of AADEIM reduced models for dimensions $n = 6$ and $n = 7$ with various combinations of number of sampling points $m_s$ and update frequency $z$. 
One observation is that the AADEIM models outperform the static reduced models in terms of error \eqref{eq:ErrorPlot} over all parameters. Another observation is that a higher number of sampling points $m_s$ tends to lead to lower errors in the AADEIM predictions at the expense of higher costs.} \label{fig:ErrorVsCosts} \end{figure} \begin{figure}[t] \begin{tabular}{cc} \resizebox{0.5\columnwidth}{!}{\huge\input{figures/scatter_dim6}} & \resizebox{0.5\columnwidth}{!}{\huge\input{figures/scatter_dim7}}\\ (a) dimension $n = 6$ & (b) dimension $n = 7$ \end{tabular} \caption{Cost vs.~error of AADEIM models for dimension $n = 6$ and $n = 7$. All AADEIM models achieve a prediction error \eqref{eq:ErrorPlot} of roughly $10^{-3}$, which indicates an accurate prediction of future-state dynamics and which is in contrast to the approximations obtained with static reduced models of the same dimension. See Figure~\ref{fig:ErrorVsCosts} for the legend.} \label{fig:ErrorVsCostsScatter} \end{figure} The time-space plot of the prediction made with the AADEIM model is shown in Figure~\ref{fig:ADEIM62DPlotHigh}. The predicted states obtained with the AADEIM model are in close agreement with the full model (Figure~\ref{fig:FOM2D}), in contrast to the states derived with the static reduced model (Figure~\ref{fig:StaticSpaceTimeD6}). This is also in agreement with the fast decay of the singular values of snapshots in local time windows, as shown in Figure~\ref{fig:SingVal}b. We further consider AADEIM models with dimension $n = 7$, initial window $w_{\text{init}}=12$, $m_s = 768$ sampling points, and sampling-point adaptation frequency $z = 4$. We compute the average relative error as \begin{equation}\label{eq:ErrorPlot} e = \frac{\|\tilde{\boldsymbol Q} - \boldsymbol Q\|_F^2}{\|\boldsymbol Q\|_F^2}\,, \end{equation} where $\boldsymbol Q$ is the trajectory obtained with the full model and $\tilde{\boldsymbol Q}$ is the reduced trajectory obtained with AADEIM from Algorithm~\ref{alg:ABAS}.
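The error measure \eqref{eq:ErrorPlot} is straightforward to evaluate once both trajectories are stored as matrices (a minimal sketch):

```python
import numpy as np

def avg_rel_error(Q_approx, Q):
    # e = ||Q_approx - Q||_F^2 / ||Q||_F^2, cf. the error measure above
    return np.linalg.norm(Q_approx - Q, 'fro')**2 / np.linalg.norm(Q, 'fro')**2
```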
All combinations of models and their performance in terms of average relative error are shown in Figure~\ref{fig:ErrorVsCosts}a. As costs, we count the number of components of the full-model right-hand side function that need to be evaluated and report them in Figure~\ref{fig:ErrorVsCosts}b; see also Figure~\ref{fig:ErrorVsCostsScatter}. All online adaptive reduced models achieve a comparable error of $10^{-3}$, where a higher number of sampling points $m_s$ and a more frequent adaptation $z$ of the sampling points typically lead to lower errors at the expense of higher costs. These numerical observations are in agreement with the principles of AADEIM and the results shown in \cite{Peherstorfer15aDEIM,P18AADEIM,CKMP19ADEIMQuasiOptimalPoints}. We now compare the AADEIM and static reduced model based on probes of the states at $x = 0.0025 $, $x = 0.005$, and $x = 0.0075$. The probes for dimension $n = 6$, update frequency $z = 3$ of the samples, and number of sampling points $m_s = 1{,}024$ are shown in Figure~\ref{fig:ProbeN6High}. The probes obtained with the static reduced model of dimension $n = 6$ and the full model are plotted too. The AADEIM model provides accurate predictions of the full model over all times and probe locations for all quantities, whereas the static reduced model fails to provide meaningful approximations. Note that the mass fraction at probe locations 2 and 3 is zero.
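The probe locations map onto the 512-point grid as follows (a sketch; we assume a grid that includes the domain endpoints, which may differ from the solver's convention):

```python
import numpy as np

L, nx = 1e-2, 512
grid = np.linspace(0.0, L, nx)   # assumed grid; the solver may use cell centers
probes = [0.0025, 0.005, 0.0075]
probe_indices = [int(np.abs(grid - xp).argmin()) for xp in probes]
```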
\begin{figure}[p] \begin{tabular}{ccc} \resizebox{0.33\columnwidth}{!}{\Huge\input{figures/probe_dim6_ae2_uf3_iw15_ws7_res1024_ncycle0_AFDEIM_dt1e-09_Pressure_Probe0}} & \resizebox{0.33\columnwidth}{!}{\Huge\input{figures/probe_dim6_ae2_uf3_iw15_ws7_res1024_ncycle0_AFDEIM_dt1e-09_Pressure_Probe1}} & \resizebox{0.33\columnwidth}{!}{\Huge\input{figures/probe_dim6_ae2_uf3_iw15_ws7_res1024_ncycle0_AFDEIM_dt1e-09_Pressure_Probe2}}\\ (a) pressure, probe 1 & (b) pressure, probe 2 & (c) pressure, probe 3\\ \resizebox{0.33\columnwidth}{!}{\Huge\input{figures/probe_dim6_ae2_uf3_iw15_ws7_res1024_ncycle0_AFDEIM_dt1e-09_Velocity_Probe0}} & \resizebox{0.33\columnwidth}{!}{\Huge\input{figures/probe_dim6_ae2_uf3_iw15_ws7_res1024_ncycle0_AFDEIM_dt1e-09_Velocity_Probe1}} & \resizebox{0.33\columnwidth}{!}{\Huge\input{figures/probe_dim6_ae2_uf3_iw15_ws7_res1024_ncycle0_AFDEIM_dt1e-09_Velocity_Probe2}}\\ (d) velocity, probe 1 & (e) velocity, probe 2 & (f) velocity, probe 3\\ \resizebox{0.33\columnwidth}{!}{\Huge\input{figures/probe_dim6_ae2_uf3_iw15_ws7_res1024_ncycle0_AFDEIM_dt1e-09_Temperature_Probe0}} & \resizebox{0.33\columnwidth}{!}{\Huge\input{figures/probe_dim6_ae2_uf3_iw15_ws7_res1024_ncycle0_AFDEIM_dt1e-09_Temperature_Probe1}} & \resizebox{0.33\columnwidth}{!}{\Huge\input{figures/probe_dim6_ae2_uf3_iw15_ws7_res1024_ncycle0_AFDEIM_dt1e-09_Temperature_Probe2}}\\ (g) temperature, probe 1 & (h) temperature, probe 2 & (i) temperature, probe 3\\ \resizebox{0.33\columnwidth}{!}{\Huge\input{figures/probe_dim6_ae2_uf3_iw15_ws7_res1024_ncycle0_AFDEIM_dt1e-09_MassFraction_Probe0}} & \resizebox{0.33\columnwidth}{!}{\Huge\input{figures/probe_dim6_ae2_uf3_iw15_ws7_res1024_ncycle0_AFDEIM_dt1e-09_MassFraction_Probe1}} & \resizebox{0.33\columnwidth}{!}{\Huge\input{figures/probe_dim6_ae2_uf3_iw15_ws7_res1024_ncycle0_AFDEIM_dt1e-09_MassFraction_Probe2}}\\ (j) mass fraction, probe 1 & (k) mass fraction, probe 2 & (l) mass fraction, probe 3\\ \end{tabular} \caption{This figure compares the 
full-model states at three probe locations with the predictions obtained with the AADEIM and static reduced model of dimension $n = 6$. The AADEIM model provides accurate predictions for all quantities at all probe locations, whereas the static reduced model provides poor approximations. Note that the species mass fraction at probe locations 2 and 3 is zero.} \label{fig:ProbeN6High} \end{figure} \section{Conclusions}\label{sec:Conc} The considered benchmark problem of a model premixed flame with artificial pressure forcing relies on strong simplifications of the physics that are present in more realistic scenarios of chemically reacting flows. However, it preserves the transport-dominated and multiscale nature of the dynamics, which are major challenges for model reduction with linear approximations. We showed numerically that online adaptive model reduction with the AADEIM method provides accurate predictions of the flame dynamics with few degrees of freedom. The AADEIM method leverages two properties of the considered problem. First, the states of the considered problem have a local low-rank structure in the sense that the singular values decay quickly for snapshots in a local time window. Second, the residual of the AADEIM approximation is local in the spatial domain, which means that few sampling points are sufficient to inform the adaptation of the reduced basis. Reduced models based on AADEIM build on these two properties to derive nonlinear approximations of latent dynamics and so enable predictions of transport-dominated dynamics far outside of training regimes. \bibliographystyle{splncs04}
\section{Introduction} Three dimensional (3D) gravity has provided us with many important clues about higher dimensional physics. It helps that this theory with a negative cosmological constant $\Lambda$ has non-trivial solutions, such as the Ba\~nados-Teitelboim-Zanelli black-hole spacetime \cite{Banados:wn}, which provide an important testing ground for quantum gravity and the AdS/CFT correspondence. Many other types of 3D solutions with a negative cosmological constant have also been found by coupling matter fields to gravity in different ways. The present work was partially motivated by the finding that the nonflat part of the famous G\"odel geometry \cite{Godel:1949ga} can be interpreted as resulting from the squashing of three dimensional anti-de Sitter (AdS) lightcones \cite{Rooman:1998xf}. Inspired by these results, we consider in this paper the possibility of squashing the AdS lightcones along a different direction. It is possible in this way to find an exact solution of the 3D Einstein equations which possesses the same amount of symmetry as the G\"odel spacetime, without the causal pathologies of the latter. This configuration is characterized by two parameters $m$ and $n$ and has no curvature singularities. The metric is given in three different forms, indexed by a parameter $k=0,\pm 1$, which are locally equivalent. We present also a four dimensional (4D) interpretation of this solution; in this case it satisfies the Einstein-Maxwell-scalar field equations with a negative cosmological constant. The 3D solution can be seen as arising from four dimensional gravity and provides us with a clear understanding of how 3D and 4D gravity are related to each other. We have to mention that this spacetime is not asymptotically flat, nor AdS. The paper is structured as follows: in Section 2 we present the derivation of the new line element and determine a matter content compatible with this geometry.
We also construct a global algebraic isometric embedding of these metrics in a seven dimensional flat space. In Section 3, a 4D generalization of these solutions is discussed and a matter content compatible with the Einstein field equations is found. The geodesic equations of motion are integrated in Section 4, where the properties of the trajectories are also discussed. Section 5 contains a preliminary discussion of scalar field quantization for a particular parametrization of this geometry. The paper closes with Section 6, where our main conclusions and remarks are presented. We use the same metric and curvature conventions as in \cite{kramer}, and we work in units where $c=G=1$. \section{The geometry and matter content} \subsection{The line element} Following Rooman and Spindel \cite{Rooman:1998xf}, we introduce the triad \begin{eqnarray} \label{triad} \theta^1=dx,~~\theta^2=dy+N(x)dt,~~\theta^3=M(x)dt, \end{eqnarray} where \begin{eqnarray} \label{MN} N=\frac{1}{2}(e^{mx}-ke^{-mx}), ~~~M=\frac{1}{2}(e^{mx}+ke^{-mx}), \end{eqnarray} and $k=0,\pm 1$. We consider also the set of metrics \begin{eqnarray} \label{metric} d\sigma^2= (\theta^1)^2+n^2(\theta^2)^2-(\theta^3)^2. \end{eqnarray} Here $n,~m$ are two real parameters; for a value of the squashing parameter $n=1$, the above metric describes the geometry of $AdS_3$ space (with $\Lambda=-m^2/4$), written in unusual coordinates. For example, the transformation $y=\varphi/m+T/2,~t=\varphi/m-T/2,~x=(2/m)~{\rm arcosh}(mr/2)$ brings the $k=-1$ metric into the more usual form \begin{eqnarray} \label{metric-AdS1} d\sigma^2=(\frac{m^2r^2}{4}-1)^{-1}dr^2+r^2d\varphi^2-(\frac{m^2r^2}{4}-1)dT^2. \end{eqnarray} In this description of (a part of) AdS space, $\varphi$ has to be given the full range, $-\infty<\varphi<\infty$, of a hyperbolic angle.
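This reduction can be verified symbolically: taking $N=\cosh(mx)$ and $M=\sinh(mx)$ from (\ref{MN}) with $k=-1$ and setting $n=1$ in (\ref{metric}), the stated transformation reproduces the line element (\ref{metric-AdS1}) (a sketch using sympy):

```python
import sympy as sp

m, r = sp.symbols('m r', positive=True)
phi, T = sp.symbols('phi T')
dr, dphi, dT = sp.symbols('dr dphi dT')

# transformation y = phi/m + T/2, t = phi/m - T/2, x = (2/m) arcosh(m r/2)
x = (2/m) * sp.acosh(m*r/2)
y = phi/m + T/2
t = phi/m - T/2

def d(f):
    # differential of f as a linear form in (dr, dphi, dT)
    return sp.diff(f, r)*dr + sp.diff(f, phi)*dphi + sp.diff(f, T)*dT

N = sp.cosh(m*x)   # N from (MN) with k = -1
M = sp.sinh(m*x)   # M from (MN) with k = -1
ds2 = d(x)**2 + (d(y) + N*d(t))**2 - M**2 * d(t)**2   # metric with n = 1

ads = dr**2/(m**2*r**2/4 - 1) + r**2*dphi**2 - (m**2*r**2/4 - 1)*dT**2
residual = sp.expand(ds2 - ads)   # should vanish identically
```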
By using the rescaling $y \to y/n$ and $N \to nN$ we rewrite (\ref{metric}) as \begin{eqnarray} \label{3D} d\sigma^2=dx^2+(dy+Ndt)^2-M^2dt^2 \end{eqnarray} where hereafter $N=n (e^{mx}-ke^{-mx})/2$, presenting the particular forms \begin{eqnarray} \label{k=-1} d\sigma^2&=&dx^2+(dy+n\cosh(mx)dt)^2-\sinh^2(mx)dt^2, ~~{\rm for}\ \ k=-1 \\ \label{k=0} d\sigma^2&=&dx^2+(dy+\frac{n}{2}e^{mx}dt)^2-\frac{e^{2mx}}{4}dt^2,~~~~~~~~~~~~~~~~{\rm for}\ \ k=0 \\ \label{k=1} d\sigma^2&=&dx^2+(dy+n\sinh(mx)dt)^2-\cosh^2(mx)dt^2, ~~~{\rm for}\ \ k=1 . \end{eqnarray} At this stage the coordinates $(x,y,t)$ are generic and nothing can be said about the range of values they take. As expected, line elements with the same value of $m$ and $n$ are isometric, as proven by the existence of the coordinate transformation \begin{eqnarray} \label{t1} \nonumber \exp{(mx_0)}&=&\exp{(m x_{-1})} \cosh ^{2} (mt_{-1}/2)-\exp{(-m x_{-1})} \sinh ^{2} (mt_{-1}/2), \\ t_0\exp{(mx_0)}&=&\frac{1}{m}\sinh (mx_{-1})\sinh (m t_{-1}/2) , \\ \nonumber \tanh \big(\frac{m}{2n}(y_0-y_{-1})\big)&=&\exp{(-m x_{-1})}\tanh(mt_{-1}/2), \end{eqnarray} which relates the cases $k=0$ and $k=-1$. Similarly, a straightforward calculation shows that the transformation \begin{eqnarray} \label{t2} \nonumber \exp({mx_0})&=&\cos(mt_1)\cosh(mx_{1})+\cosh(mx_{1}), \\ t_0\exp({mx_0})&=&\frac{1}{m}\cosh (mx_{1})\sin (m t_{1}) , \\ \nonumber \tanh \big(\frac{m}{2n}(y_1-y_0)\big)&=&\exp{(-mx_{1})}\tan(m t_{1}/2). \end{eqnarray} carries the $k=0$ metric into the $k=1$ one. In these relations, the indices of the coordinates correspond to the value of the parameter $k$. A standard calculation shows that these metrics admit at least four Killing vectors.
For $k=0$ we find \begin{eqnarray} \label{killing0} \nonumber K_{1} &=&\frac{\partial}{\partial y}, \\ \nonumber K_{2} &=&\frac{1}{\sqrt{2}} \left(\frac{t}{m}\frac{\partial}{\partial x}+\frac{2ne^{-mx}}{m^2}\frac{\partial}{\partial y} +(1-\frac{1}{2}(t^2+4e^{-2mx}))\frac{\partial}{\partial t}\right), \\ K_{3} &=&\frac{1}{\sqrt{2}} \left(\frac{t}{m}\frac{\partial}{\partial x}+\frac{2ne^{-mx}}{m^2}\frac{\partial}{\partial y} -(1+\frac{1}{2}(t^2+4e^{-2mx}))\frac{\partial}{\partial t}\right), \\ \nonumber K_{4} &=&\frac{1}{m}\frac{\partial}{\partial x}-t\frac{\partial}{\partial t}. \end{eqnarray} The Killing vectors for $k=1$ metrics are \begin{eqnarray} \label{killing1} \nonumber K_{1} &=&\frac{\partial}{\partial y}, \\ \nonumber K_{2} &=&\frac{1}{m} \left( \cos(mt) \frac{\partial}{\partial x} -\frac{n\sin(mt)}{\cosh(mx)}\frac{\partial}{\partial y} -\sin(mt)\tanh(mx)\frac{\partial}{\partial t} \right), \\ K_{3} &=&\frac{1}{m}\frac{\partial}{\partial t}, \\ \nonumber K_{4} &=&\frac{1}{m} \left( \sin(mt) \frac{\partial}{\partial x} +\frac{n\cos(mt)}{\cosh(mx)}\frac{\partial}{\partial y} -\cos(mt)\tanh(mx)\frac{\partial}{\partial t} \right), \end{eqnarray} while for $k=-1$ we find \begin{eqnarray} \label{killing-1} \nonumber K_{1} &=&\frac{\partial}{\partial y}, \\ \nonumber K_{2} &=&\frac{1}{m}\frac{\partial}{\partial t}, \\ K_{3} &=&\frac{1}{m} \left( \sinh(mt) \frac{\partial}{\partial x} +\frac{n\cosh(mt)}{\sinh(mx)}\frac{\partial}{\partial y} -\cosh(mt)\coth(mx)\frac{\partial}{\partial t} \right), \\ \nonumber K_{4} &=&\frac{1}{m} \left( \cosh(mt) \frac{\partial}{\partial x} +\frac{n\sinh(mt)}{\sinh(mx)}\frac{\partial}{\partial y} -\sinh(mt)\coth(mx)\frac{\partial}{\partial t} \right). \end{eqnarray} These vectors obey the algebra \begin{eqnarray} \label{algebra} [K_1,K_i]=0,~~ \lbrack K_2,K_3 \rbrack=K_4, ~~ \lbrack K_2,K_4\rbrack&=&K_3, ~~ [K_3,K_4]=K_2. 
\end{eqnarray} \subsection{The matter content} A standard calculation of the Einstein tensor for the metric (\ref{3D}) in the orthonormal triad (\ref{triad}) yields for the nonvanishing components \begin{eqnarray} G_x^x=G_t^t=\frac{m^2n^2}{4},~~G_y^y=m^2(1-\frac{3n^2}{4}). \end{eqnarray} To find a matter content compatible with this geometry, we couple the Einstein gravity with a negative cosmological constant $\Lambda$, to an electromagnetic field and a perfect fluid. We find that the source free Maxwell equations \begin{eqnarray} \label{Maxwell} \frac{1}{\sqrt{-g}}\frac{\partial}{\partial x^i}(\sqrt{-g}F^{ik})=0 \end{eqnarray} admit the simple solution $F^{xt}=c/M$. However, the corresponding energy momentum tensor \begin{eqnarray} \label{tMaxwell} T_a^{b(em)}=F_{ac}F^{bc}-\frac{1}{4}\delta_a^bF^2 \end{eqnarray} takes a simple form for $k=0$ only. In this case, the expression of the vector potential is $A=A_ydy+A_tdt$, where \begin{eqnarray} \label{A} A_y=cnx,~~A_t=\frac{c(n^2-1)}{2m}e^{mx}. \end{eqnarray} The constant $c$ is determined from the Einstein equations \begin{eqnarray} \label{Einstein} R_a^b-\frac{1}{2}R\delta_a^b+\Lambda\delta_a^b=8\pi T_a^{b}. \end{eqnarray} Here the energy-momentum tensor $T_a^b$ is the sum of the Maxwell field contribution, which in the triad basis (\ref{triad}) reads \begin{eqnarray} \label{tem} T_{x}^{x(em)}=\frac{c^2}{2}(n^2-1),~~T_{y}^{y(em)}=-T_{t}^{t(em)}=\frac{c^2}{2}(n^2+1) \end{eqnarray} and the perfect fluid energy-momentum tensor with the general form \begin{eqnarray} \label{fluid} T_a^{b(f)}=(p+\rho)u_au^b+p\delta_a^b, \end{eqnarray} where $\rho$ is the energy density of the fluid, $p$ the pressure and $u^a$ the three-velocity satisfying $u_a u^a=-1$. For a pressure-free fluid $(p=0)$ we find \begin{eqnarray} \label{c1} 8\pi \rho=-n^2m^2(1-n^2),~~8\pi c^2=m^2(1-n^2),~~ \Lambda=-\frac{m^2}{4}(2n^4-3n^2+2). \end{eqnarray} Clearly the condition $n^2<1$ should be satisfied, which implies a violation of the weak energy condition.
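The relations (\ref{c1}) can be checked algebraically: inserting the triad components of the Einstein tensor, the Maxwell stress (\ref{tem}) and a pressure-free fluid with $u^a$ along the timelike triad leg into (\ref{Einstein}) yields identities (a sympy sketch of this consistency check):

```python
import sympy as sp

m, n = sp.symbols('m n', positive=True)

# Einstein tensor in the orthonormal triad (mixed components)
Gx = Gt = m**2*n**2/4
Gy = m**2*(1 - 3*n**2/4)

# matter content of the pressure-free solution, cf. (c1)
c2 = m**2*(1 - n**2)/(8*sp.pi)             # c^2
rho = -n**2*m**2*(1 - n**2)/(8*sp.pi)
Lam = -m**2/4*(2*n**4 - 3*n**2 + 2)

# total stress: Maxwell part (tem) plus dust, with u_t u^t = -1
Tx = c2*(n**2 - 1)/2
Ty = c2*(n**2 + 1)/2
Tt = -c2*(n**2 + 1)/2 - rho

eqs = [Gx + Lam - 8*sp.pi*Tx,
       Gy + Lam - 8*sp.pi*Ty,
       Gt + Lam - 8*sp.pi*Tt]
```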
The above solution remains the same if we change the sign of $m,~n$. As expected, the case $n=1$ corresponds to the $AdS_3$ solution. Given the existence of the coordinate transformations (\ref{t1})-(\ref{t2}), this is a general solution for every value of $k$. However, for $k \neq 0$, the expression of the vector potential $A_i$ looks very complicated. We note also that the cosmological term in Einstein's equations can be regarded, if one wishes, as an energy momentum tensor for a perfect fluid. In this description the cosmological term does not appear explicitly. The Einstein equations give the relations \begin{eqnarray} 8\pi \rho=\frac{m^2}{4}(2n^4-n^2-2),~~8\pi c^2=m^2(1-n^2),~~ 8\pi p=-\frac{m^2}{4}(2n^4-3n^2+2). \end{eqnarray} Since these solutions are not asymptotically flat, nor AdS, the definition of mass and other conserved quantities is not obvious. \subsection{Global properties} The higher dimensional global flat embedding of a curved spacetime is a subject of interest to physicists as well as mathematicians. We mention only that several authors have shown that the global embedding Minkowski approach, in which a hyperboloid in a higher dimensional flat space corresponds to the original curved space, could provide a unified derivation of the Hawking temperature for a wide variety of curved spaces (see e.g. \cite{Deser:1997ri}). Given the large number of symmetries of the line element (\ref{3D}), its geometry takes a simple form in a large number of coordinate systems, which do not usually cover all of the spacetime.
For $n^2<1$ (as required by the Einstein equations) we found that the spacetimes (\ref{3D}) can be introduced using a 7-dimensional formalism as metrics \begin{eqnarray} \label{emb1} ds^2=-(dz^1)^2-(dz^2)^2+(dz^3)^2+(dz^4)^2+(dz^5)^2-(dz^6)^2-(dz^7)^2 \end{eqnarray} restricted by the constraints \begin{eqnarray} \label{s1} (z^1)^2+(z^2)^2-(z^3)^2-(z^4)^2&=&(\frac{2n}{m})^2, \\ \label{s2} \frac{m}{4n^2}\sqrt{1-n^2}\big((z^1)^2-(z^2)^2-(z^3)^2+(z^4)^2\big)&=&z^7, \\ \label{s3} \frac{m}{2n^2}\sqrt{1-n^2}(z^1z^4+z^2z^3)&=&z^5, \\ \label{s4} \frac{m}{2n^2}\sqrt{1-n^2}(z^1z^2+z^3z^4)&=&z^6. \end{eqnarray} The 7-dimensional coordinates $z^i$ are the embedding functions. Various 3-dimensional parametrizations of these surfaces can be considered. For example \begin{eqnarray} \nonumber z^1&=&\frac{2n}{m}\left( \cosh(\frac{mx}{2}) \cosh (\frac{my}{2n}) +\frac{1}{2}mte^{mx/2}\sinh(\frac{my}{2n}) \right), \\ \nonumber z^2&=&\frac{2n}{m}\left( -\sinh(\frac{mx}{2}) \sinh (\frac{my}{2n}) +\frac{1}{2}mte^{mx/2}\cosh(\frac{my}{2n}) \right), \\ \nonumber z^3&=&\frac{2n}{m}\left( \cosh(\frac{mx}{2}) \sinh (\frac{my}{2n}) +\frac{1}{2}mte^{mx/2}\cosh(\frac{my}{2n} )\right), \\ z^4&=&\frac{2n}{m}\left( \sinh(\frac{mx}{2}) \cosh (\frac{my}{2n}) -\frac{1}{2}mte^{mx/2}\sinh(\frac{my}{2n}) \right), \\ \nonumber z^5&=&\sqrt{1-n^2}\frac{1}{4m}\left( 2t^2m^2e^{mx}+4\sinh (mx) \right), \\ \nonumber z^6&=&\sqrt{1-n^2}te^{mx}, \\ \nonumber z^7&=&\sqrt{1-n^2}\frac{1}{4m}\left( -2t^2m^2e^{mx}+4\cosh (mx) \right), \end{eqnarray} corresponds to the $k=0$ metric form (\ref{k=0}). We remark that these coordinates cover the entire variety.
Another natural parametrization of (\ref{s1})-(\ref{s4}) is \begin{eqnarray} \nonumber z^1&=&\frac{2n}{m}\cosh(\frac{m x}{2})\cosh\left(\frac{m t }{2}+\frac{m y }{2n}\right), \\ \nonumber z^2&=&\frac{2n}{m}\cosh(\frac{m x }{2})\sinh\left(\frac{m t }{2}-\frac{m y }{2n}\right), \\ \nonumber z^3&=&\frac{2n}{m}\sinh(\frac{m x }{2})\sinh\left(\frac{m t }{2}+\frac{m y }{2n}\right), \\ z^4&=&\frac{2n}{m}\sinh(\frac{m x }{2})\cosh\left(\frac{m t }{2}-\frac{m y }{2n}\right), \\ \nonumber z^5&=&\frac{\sqrt{1-n^2}}{m}\sinh (m x )\cosh (m t ), \\ \nonumber z^6&=&\frac{\sqrt{1-n^2}}{m}\sinh (m x )\sinh (m t ), \\ \nonumber z^7&=&\frac{\sqrt{1-n^2}}{m}\cosh (m x ), \end{eqnarray} leading to the line-element (\ref{k=-1}). We remark that in this case the coordinates $(x,~y,~t)$ cover only half of the hyperboloid (\ref{s1}) since $z^1>z^2,~z^3>z^4$. Thus, these coordinates are analogous to the Rindler coordinates of flat space and need to be analytically extended in the usual fashion to cover all of the spacetime. The parametrization corresponding to a metric form with $k=1$ is \begin{eqnarray} \nonumber z^1&=&\frac{2n}{m}\left( \cosh(\frac{m x}{2})\cosh(\frac{m y}{2n})\cos(\frac{m t}{2})+ \sinh(\frac{m x}{2})\sinh(\frac{m y}{2n})\sin(\frac{m t}{2}) \right), \\ \nonumber z^2&=&\frac{2n}{m}\left( \cosh(\frac{m x}{2})\cosh(\frac{m y}{2n})\sin(\frac{m t}{2})- \sinh(\frac{m x}{2})\sinh(\frac{m y}{2n})\cos(\frac{m t}{2}) \right), \\ \nonumber z^3&=&\frac{2n}{m}\left( \cosh(\frac{m x}{2})\sinh(\frac{m y}{2n})\cos(\frac{m t}{2})+ \sinh(\frac{m x}{2})\cosh(\frac{m y}{2n})\sin(\frac{m t}{2}) \right), \\ \nonumber z^4&=&\frac{2n}{m}\left( -\cosh(\frac{m x}{2})\sinh(\frac{m y}{2n})\sin(\frac{m t}{2})+ \sinh(\frac{m x}{2})\cosh(\frac{m y}{2n})\cos(\frac{m t}{2}) \right), \\ z^5&=&\frac{\sqrt{1-n^2}}{m}\sinh (m x), \\ \nonumber z^6&=&\frac{\sqrt{1-n^2}}{m}\cosh (m x)\sin(m t), \\ \nonumber z^7&=&\frac{\sqrt{1-n^2}}{m}\cosh (m x)\cos(m t).
\end{eqnarray} We mention also, without entering into details, the parametrization \begin{eqnarray} \nonumber z^1&=&\frac{2n}{m}\cos(\frac{mx^4}{2})\cosh(\frac{m(x^1+x^2)}{2n}), \\ \nonumber z^2&=&\frac{2n}{m}\sin(\frac{mx^4}{2})\cosh(\frac{m(x^1-x^2)}{2n}), \\ \nonumber z^3&=&\frac{2n}{m}\cos(\frac{mx^4}{2})\sinh(\frac{m(x^1+x^2)}{2n}), \\ z^4&=&\frac{2n}{m}\sin(\frac{mx^4}{2})\sinh(\frac{m(x^1-x^2)}{2n}), \\ \nonumber z^5&=&\frac{\sqrt{1-n^2}}{m}\sin(mx^4)\sinh(\frac{mx^1}{n}), \\ \nonumber z^6&=&\frac{\sqrt{1-n^2}}{m}\sin(mx^4)\cosh(\frac{mx^1}{n}), \\ \nonumber z^7&=&\frac{\sqrt{1-n^2}}{m}\cos(mx^4), \end{eqnarray} which gives a time-dependent line element \begin{eqnarray} \label{par3} d\sigma^2=\frac{\sin^2 (mx^4)}{n^2} ~(dx^1)^2+(dx^2+\cos( mx^4)~dx^1)^2-(dx^4)^2. \end{eqnarray} The transformation rules between the various coordinates may be easily obtained by comparing their definitions in terms of the basic embedding coordinates $z_i$. Note that the embedding presented in this section can easily be extended to $n^2>1$ or $m^2<0$. \section{The solution in D=4} \subsection{The Einstein equations} The metric (\ref{3D}) can be added to a $(D-3)$ dimensional Euclidean metric $d\Sigma^2_{D-3}$ to give a $D-$dimensional generalization of this geometry. This can be achieved by expressing the $D-$dimensional metric as the direct Riemannian sum \begin{equation} \label{D} ds^2=d\sigma^2+d\Sigma^2_{D-3}. \end{equation} The physically interesting case $D=4$ has a particularly simple matter content. The corresponding line element reads \begin{equation} \label{D4} ds^2=dx^2+dy^2+dz^2+2N(x)dydt-(M(x)^2-N(x)^2)dt^2. \end{equation} Here $z$ is a Killing direction in the 4D spacetime and has the natural range $-\infty<z<\infty$. In this case the computations are done in an orthonormal tetrad basis $\omega^a=(\theta^a,dz)$. In this basis, the only new component of the Einstein tensor is $G_z^z=m^2-m^2n^2/4$.
The determinants of the metrics coincide, $g^{(4)}=g^{(3)}$, and so do the Ricci scalars. We find that (\ref{A}) still solves the Maxwell equations. However, in order to satisfy the $(zz)$ Einstein equation we have to include a massless scalar field in the theory, apart from a cosmological constant and a perfect fluid. A particular solution of the field equation for a massless minimally coupled scalar field \begin{eqnarray} \frac{1}{\sqrt{g} } \frac{\partial }{\partial x^{i} } (\sqrt{g} g^{ij} \frac{\partial \Psi }{\partial x^{j} } )=0 \end{eqnarray} is $\Psi =ez$, which implies the nonvanishing components of the energy-momentum tensor \begin{eqnarray} T_x^{x(s)}=T_y^{y(s)}=-T_z^{z(s)}=T_t^{t(s)}=-\frac{e^2}{2}. \end{eqnarray} For $k=0$ and a pressure free perfect fluid, we find from the 4D Einstein equations \begin{eqnarray} \nonumber 8\pi c^2&=&m^2(1-n^2),~8\pi e^2=\frac{m^2n^2}{2}(3-2n^2), \\ 8\pi \rho&=&-m^2 n^2(1-n^2),~~\Lambda=-\frac{m^2}{2}. \end{eqnarray} We remark that the matter content (\ref{c1}) of the 3D solution is obtained through a compactification along the $z-$direction of the 4D solution. The only effect of the scalar field is to shift the value of the cosmological constant, $\Lambda^{(3)}=\Lambda^{(4)}+4\pi e^2$. However, in view of the fact that the cosmological term can be regarded as a perfect fluid, we can again recast the 4D description as a fluid with the energy density given by $8\pi \rho=-m^2(3-2n^2)/2$ and a nonzero pressure $8\pi p=m^2/2$. The values of $c$ and $e$ obtained above are still valid. Again, the matter content for $k=\pm 1$ can be found by using the covariance of the field equations. In four dimensions we also find a different solution of the Maxwell equations, corresponding to a vector potential \begin{eqnarray} A=-\frac{E}{mn} (dy+Ndt), \end{eqnarray} where $E$ has a constant value. Unlike (\ref{A}), this gives a simple matter content compatible with the geometry (\ref{metric}) for every value of $k$.
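The quoted shift of the cosmological constant, $\Lambda^{(3)}=\Lambda^{(4)}+4\pi e^2$, is consistent with the value of $\Lambda$ in (\ref{c1}) and the values of $\Lambda$ and $e^2$ found above, as a short symbolic check confirms:

```python
import sympy as sp

m, n = sp.symbols('m n', positive=True)

Lam3 = -m**2/4*(2*n**4 - 3*n**2 + 2)     # 3D value of Lambda from (c1)
Lam4 = -m**2/2                           # 4D value found above
e2 = m**2*n**2*(3 - 2*n**2)/(16*sp.pi)   # from 8*pi*e^2 = m^2 n^2 (3 - 2n^2)/2

shift = Lam4 + 4*sp.pi*e2                # Lambda^(3) as predicted by the shift
```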
Also, we use the same solution of the Klein-Gordon equation, $\Psi=ez$. In this approach, the matter content is given by a Maxwell and a scalar field ($i.e.$ no perfect fluid), the Einstein equations with cosmological constant implying the following relations \begin{eqnarray} \label{matter} 4\pi E^2=\frac{m^2}{2}(1-n^2),~~ 4\pi e^2 =\frac{m^2n^2}{4},~~ \Lambda = -\frac{m^2}{2}. \end{eqnarray} Clearly the relation $n^2<1$ should again be satisfied. However, this time all three energy conditions are satisfied. To examine the Petrov classification of the four dimensional line element (\ref{D4}), a complex null tetrad was chosen, having the tetrad basis ($i=\sqrt{-1}$) \begin{eqnarray} \nonumber \omega ^{1} &=&\bar{m} _{i} dx^{i} =\frac{1}{\sqrt{2} } (dx-idz), \\ \nonumber \omega ^{2} &=&m_{i} dx^{i} =\frac{1}{\sqrt{2} } (dx+idz), \\ \omega ^{3} &=&\frac{1}{\sqrt{2} } \Big((M(x)-N(x)) dt-dy\Big), \\ \nonumber \omega ^{4} &=&\frac{1}{\sqrt{2} } \Big((M(x)+N(x)) dt+dy\Big). \end{eqnarray} In this null tetrad, the Weyl tensor invariants \cite{kramer} are $ \Psi _{0} =-3\Psi _{2}=\Psi _{4}=m^2(1-n^{2})/4,~~ \Psi _{1} =\Psi _{3}=0. $ Therefore the metric is Petrov type $D$, except for the case $n=1$, which is Petrov type $N$. \subsection{Limiting cases and relation with known solutions} The diagonal limit of this solution is obtained for $n=0$ \begin{eqnarray} ds^2=dx^2+dy^2+dz^2-M(x)^2dt^2. \end{eqnarray} For $k=\pm1$ this corresponds to the scalar worlds discussed in \cite{dariescuk=1} (the case $M=\cosh(mx)$) and \cite{dariescuk=-1} (the case $M=\sinh(mx)$), where the geodesic equations are solved and the pathological features of these solutions are examined. The influence of the global properties on the behaviour of magnetostatic fields in such universes is also studied. We mention the existence of one more symmetry in this case, corresponding to a rotation in the $yz$ plane. The four dimensional solution with $k=1,~n=1$ is well known in the literature.
It corresponds to the Rebou\c{c}as-Tiomno space-time, originally found as the first example of a G\"odel-type homogeneous solution without closed timelike curves \cite{reboucas}. The properties of this line element are discussed in \cite{Reboucas:wa}, where the geodesic equations are integrated. The solution in this case possesses seven isometries and is conformally flat. In fact, the four dimensional line element (\ref{D4}), the two-parameter G\"odel-type homogeneous solution \cite{reboucas} and the static Taub solution \cite{Taub:1951ez} correspond to different analytical continuations of the same Euclidean line element \begin{eqnarray} \label{euclid} ds^{2}_E =dx^2+(dy+\frac{\bar{n}}{2}(e^{mx}-ke^{-mx})d\tau)^2+dz^2 +\frac{1}{4}(e^{mx}+ke^{-mx})^2d\tau^2, \end{eqnarray} where again $k=0,\pm 1$. The global properties of this line element can easily be obtained following the results presented in Section 2.3. The general line element (\ref{D4}) is obtained by the analytical continuation $\tau \to it,~\bar{n} \to in$. The analytical continuation $\tau \to i\varphi/m,~y \to i(T-n\varphi/m)$ of the $k=-1$ line element (\ref{euclid}) yields the most general expression of a homogeneous G\"odel-type space-time in cylindrical coordinates \begin{equation} \label{godel} ds^2=dr^2+\frac{\sinh^2 m r}{m^2}d\varphi^2+dz^2-\Big(\frac{4\Omega}{m^2} \sinh^2(\frac{m r}{2})d\varphi+dT\Big)^2, \end{equation} after the identification $x=r,~n=2\Omega/m$. A similar analytical continuation of the $k=0$ line element (\ref{euclid}) yields a G\"odel-type solution in Cartesian coordinates. The properties of G\"odel-type space-times were discussed by many authors (see e.g. \cite{Obukhov:2000jf} for a large list of references). However, the properties of the solution proposed in this paper differ from those of a homogeneous rotating G\"odel-type space-time, except for the common case $n^2=1$.
Another case which we should mention is the static Taub line element \cite{Taub:1951ez}, corresponding to the analytical continuation $z \to iT$ of the line element (\ref{euclid}). \subsection{Causal properties} One of the features of the AdS spaces is that they admit closed timelike curves (CTC). The usual remedy for this is to consider the covering space instead of AdS itself. Looking at (\ref{s1}) we see that our geometry suffers from the same problem and admits CTC (remember that $z^1$ and $z^2$ are both timelike coordinates). However, these CTC are already present in the $AdS_3$ spacetime and can be removed by considering the covering space. Apart from this property, this solution is geodesically complete and satisfies causality conditions such as global hyperbolicity. Unlike for the G\"odel solution, a cosmic time function can be defined; for the function $f=t$, one has \begin{eqnarray} g^{ij} \frac{\partial f}{\partial x^{i} } \frac{\partial f}{\partial x^{j} } =-\frac{1}{M(x)^{2}} <0 \end{eqnarray} for every finite value of $x$, implying that CTC are not present. Since in $D=4$ a perfect fluid is not necessarily present as a source of curvature, the kinematical parameters of the model are not unambiguously defined. However, we can consider an observer with a four-velocity satisfying $ u^{i} u_{i} =-1 $, with $ u^{i} =\delta_{4}^{i}(M^2-N^2)^{-1/2} $, and find no expansion ($\theta=0$), no shear ($\sigma_{ij}=0$), but a non-null vorticity \begin{eqnarray} \omega ^{i} =\frac{1}{2\sqrt{-g} } \epsilon ^{ijkm} u_{j;k} u_{m} =\frac{1}{2(M^2-N^2)}\left(2NM'-mn(M^2+N^2)\right)\delta_{z}^{i}. \end{eqnarray} For the case $k=0$ we find a vorticity parallel to the $z$ axis of magnitude $mn/2$. From the expressions above it is obvious that the spacetime (\ref{D4}) and its 3D version have no curvature singularities anywhere. The coordinate ranges can be taken $-\infty<x,y,z,t<\infty$, except for $k=-1$.
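The quoted value $mn/2$ for $k=0$ can be verified directly. With the forms of $M$ and $N$ inferred from the limiting cases above ($M=\cosh(mx)$ for $k=1$, $M=\sinh(mx)$ for $k=-1$), i.e. $M(x)=\frac{1}{2}(e^{mx}+ke^{-mx})$ and $N(x)=\frac{n}{2}(e^{mx}-ke^{-mx})$, the case $k=0$ gives $M=\frac{1}{2}e^{mx}$, $N=\frac{n}{2}e^{mx}$, so that \begin{eqnarray} \nonumber 2NM'-mn(M^2+N^2)=\frac{mn}{2}e^{2mx}-\frac{mn(1+n^2)}{4}e^{2mx}=\frac{mn(1-n^2)}{4}e^{2mx}, \end{eqnarray} while $2(M^2-N^2)=\frac{1-n^2}{2}e^{2mx}$; the ratio gives $\omega^{z}=mn/2$, independent of $x$, as stated.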
While the properties of the $k=0$ and $k=1$ line elements are very similar, the case $k=-1$ is rather special. For $k=-1$ the surface $x=0$ presents all the features of an event horizon. In the limit of no matter $(n=0)$ and zero cosmological constant $(m=0)$ we obtain the Rindler spacetime after a rescaling $t \to t/m$. All the properties of the Rindler solution are shared by this spacetime. As a new feature, we remark the occurrence of an ergoregion, induced by the presence of a squashing parameter $n<1$. We can see from (\ref{k=-1}) that in this case, the Killing vector $\partial/\partial t$ is spacelike for $\tanh^2(mx)<n^2$. These properties become manifest when integrating the geodesic equations. \section{Geodesic motion and properties of the trajectories} The study of timelike and null geodesics is an adequate way to visualize the main features of a spacetime. In this section we study the geodesic motion for $n^2<1$ and, in particular, confirm that the spacetime described by (\ref{D4}) is both null and timelike geodesically complete. The metric symmetries enable us to solve directly some of the equations of motion for timelike and null geodesics for every $k$. For the general line element (\ref{D4}), the geodesic equations have the four straightforward first integrals \begin{eqnarray} \nonumber P_{y}&=&\dot{x} _{2} =\dot{y} +N\dot{t} , \\ \nonumber P_{z}&=&\dot{x} _{3} =\dot{z}, \\ \label{int3} E&=&\dot{x} _{4} =N\dot{y}+(N^2-M^2)\dot{t}, \\ \nonumber -\varepsilon&=&\dot{x} ^{2} +\dot{z} ^{2} +(\dot{y}+N\dot{t})^{2} -M^2(x)\dot{t}^{2} \end{eqnarray} where a superposed dot stands for the derivative with respect to the parameter $\tau$, and $\varepsilon=1$ or $0$ for timelike or null geodesics, respectively. Here $\tau$ is an affine parameter along the geodesics; for timelike geodesics, $\tau$ is the proper time. The corresponding relations for $D=3$ are obtained by setting $P_y=0$.
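These first integrals can be combined to eliminate $\dot{y}$ and $\dot{t}$. Substituting $\dot{y}+N\dot{t}=P_y$ into the third relation gives \begin{eqnarray} \nonumber E=N(P_y-N\dot{t})+(N^2-M^2)\dot{t}=NP_y-M^2\dot{t} \quad \Rightarrow \quad \dot{t}=\frac{NP_y-E}{M^2}, \end{eqnarray} and inserting $\dot{y}+N\dot{t}=P_y$, $\dot{z}=P_z$ and this expression for $\dot{t}$ into the fourth relation leaves a closed equation for the radial motion, \begin{eqnarray} \nonumber M^2\dot{x}^2=(NP_y-E)^2-M^2\left(\varepsilon+P_y^2+P_z^2\right). \end{eqnarray}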
The first three integrals in (\ref{int3}) are due to the existence of the Killing vector fields $\partial _{y},\partial _{z},\partial _{t} $, respectively. The fourth one is related to the invariance of the timelike or null character of a given geodesic, while $P_y,~P_z$ and $E$ are the corresponding constants of motion. Note that since the solution is neither asymptotically flat nor asymptotically AdS, these constants cannot be identified as the linear momentum and the energy at infinity. From the above equations we get the simple relations \begin{eqnarray} \label{g1} M^2\dot{x}^2=(NP_y-E)^2-M^2(\varepsilon +P_y^2+P_z^2) \end{eqnarray} and \begin{eqnarray} z=P_{z} (\tau -\tau_{0} ). \end{eqnarray} To integrate Eq. (\ref{g1}) it is convenient to introduce the variable $u=NA+P_yE/A$, where \begin{eqnarray} A=\frac{\sqrt{\varepsilon +P_y^2(1-n^2)+P_z^2}}{n}. \end{eqnarray} This yields the parametric form of the $x$-coordinate \begin{eqnarray} \label{eq-x} N(x) = a+b\sin \alpha(\tau-\tau_0), \end{eqnarray} where \begin{eqnarray} \alpha=mnA,~~a=-\frac{EP_y}{A^2},~~ b=\frac{\sqrt{(\varepsilon+P_y^2+P_z^2)(\frac{E^2}{n^2A^2}-k)}}{A}. \end{eqnarray} Therefore, for $k=1$, the geodesic motion is possible only for $E^2>\varepsilon+P_y^2(1-n^2)+P_z^2$. Equation (\ref{eq-x}) shows that, for any value of the constants of motion, massive particles and photons are always confined in a finite region along the $x$ axis. We can also easily find a closed-form relation between the coordinates $x$ and $t$: \begin{eqnarray} \label{x-t} \nonumber (\frac{NP_y-E}{M})^2&=&m^2E^2(t-t_0)^2+\varepsilon+P_y^2+P_z^2, ~~~~~~~~~~~~~~~~~~~~~~~~~~{\rm for}\ \ k=0 \\ \frac{EN+n^2P_y}{M}&=&n\sqrt{E^2-(\varepsilon+P_y^2(1-n^2)+P_z^2)}\sin(m(t-t_0)),~~~~{\rm for}\ \ k=1 \\ \nonumber \frac{EN-n^2P_y}{M}&=&n\sqrt{E^2+\varepsilon+P_y^2(1-n^2)+P_z^2}\sinh(m(t-t_0)), ~~~~~~{\rm for}\ \ k=-1.
\end{eqnarray} For $k=0,-1$, the equation for $y(\tau)$ gives \begin{eqnarray} \label{y-tau} y-y_0=P_y(1-n^2)(\tau-\tau_0) -\frac{n^3P_y}{2\alpha}(I_{+}(k)-I_{-}(k)) +\frac{n^2E}{2\alpha}(I_{+}(k)+I_{-}(k)), \end{eqnarray} where \begin{eqnarray} \nonumber I_{\pm}(k)= \frac{1}{\sqrt{b^2-(a\pm kn)^2}} \ln\left( { \frac{ (a\pm kn)\tan\big( \alpha(\tau-\tau_0) /2\big) +b-\sqrt{b^2-(a\pm kn)^2} } {(a\pm kn)\tan\big( \alpha(\tau-\tau_0) /2\big) +b+\sqrt{b^2-(a\pm kn)^2}}} \right), \end{eqnarray} while for $t(\tau)$ we find \begin{eqnarray} t-t_0&=&\frac{n(nP_y+E)}{2}I_{-}(-1)+\frac{n(nP_y-E)}{2}I_{+}(-1),~~~~{\rm for}\ \ k=-1 \\ \nonumber t-t_0&=&\frac{2b\alpha}{nm^2E}\frac{\cos(\alpha(\tau-\tau_0))}{a+b\sin(\alpha(\tau-\tau_0))},~~~~{\rm for}\ \ k=0. \end{eqnarray} The expressions of $y(\tau)$ and $t(\tau)$ for $k=1$ can also be obtained from (\ref{int3}), but they are very complicated and we prefer not to present them here, since the conclusions in this case are rather similar to the $k=0$ case. The equation for $y$ can also be read from the following straightforward integral \begin{eqnarray} \label{g11} \int\dot{x}^2 d \tau+(y-y_0)P_y+(z-z_0)P_z+(t-t_0)E= -\varepsilon (\tau-\tau_0). \end{eqnarray} It is obvious from the above relations that for $k=-1$, the surface $x=0$ presents all the characteristics of an event horizon. For a freely falling observer, an infinite time $t$ is required to traverse the finite distance $L_0$ between an exterior point and a point on the horizon, but that destination is reached in a finite proper time. The running backwards of $t$ for some intervals of $\tau$ has nothing to do with going backward in time or time travel; this effect is a mere consequence of the special choice of the time coordinate.
\section{Quantum effects} The line element (\ref{3D}) has another interesting property, being connected to a special class of bubble spacetimes \footnote{A general classification of bubbles in (anti-) de Sitter spacetime can be found in \cite{Astefanesei:2005eq}.}. The four dimensional ``topologically nutty bubbles'' obtained by Ghezelbash and Mann in \cite{Ghezelbash:2002xt} as a suitable analytical continuation of a Taub-NUT-AdS geometry can be written in a compact way as \begin{equation} \label{nut} ds^2=\frac{dr^2}{F(r)}+F(r)\left(d\chi+2\tilde{n}\frac{df_k(\theta)}{d \theta} dt\right)^2 +(r^2+\tilde{n}^2)(d\theta^2-f_k^2(\theta)dt^2), \end{equation} where \begin{eqnarray} F(r)=\frac{r^4+(-\ell^2+6\tilde{n}^2)r^2-2m \ell^2r-\tilde{n}^2(-\ell^2+3\tilde{n}^2)} {\ell^2(r^2+\tilde{n}^2)}. \end{eqnarray} The discrete parameter $k$ takes the values $1, 0$ and $-1$ and determines the form of the function $f_k(\theta)$ \begin{equation} f_k(\theta)=\frac{1}{2}(e^{\theta}+ke^{-\theta}). \end{equation} Here $m$ is the mass parameter, $r$ a radial coordinate, $\tilde{n}$ the NUT charge and $\Lambda=-3/\ell^2$ the cosmological constant. The $\theta$ coordinate is no longer periodic and takes all real values. One can easily see that a hypersurface of constant large radius $r$ in this four-dimensional asymptotically AdS spacetime has a metric which is proportional to the three-dimensional line element (\ref{3D}), after the identifications $x=\theta \ell$; $m=1/\ell$; $n=2\tilde{n}/\ell$. The Maldacena conjecture \cite{Maldacena:1998re} implies that a theory of quantum gravity in a $(D+1)$-dimensional spacetime with a negative cosmological constant can be modeled by a conformal field theory in the fixed $D$-dimensional boundary geometry. Hence the interest in field quantization in the background (\ref{3D}): it encodes information about quantum gravity in a ``topologically nutty bubble''.
Although the corresponding field theory is not known in this case, similar to other situations one may consider the simple case of a nonminimally coupled scalar field. One approach toward field quantization is to work directly in the (Lorentzian signature) spacetime under consideration. This approach has the advantage of yielding a direct, physical interpretation of the results obtained. The line element (\ref{D4}) presents a global $t=const.$ Cauchy surface, and the standard methods of quantization can be directly applied \cite{Birrell:ix}. Due to the high degree of symmetry, it is possible to solve the scalar wave equation in terms of hypergeometric functions for any value of $k$. An alternative approach is to define all quantities on a Euclidean manifold (i.e. one with a positive definite metric). The results on the Lorentzian section are then obtained by analytical continuation of the Euclidean quantities. Here we remark that the ``nutty bubble'' geometry (\ref{nut}) and the general $D=4$ Taub-NUT-AdS family of solutions discussed in \cite{Astefanesei:2004kn} share the same Euclidean section. The Euclidean boundary geometry in both cases corresponds to the line element (\ref{euclid}). Therefore a number of general results found in \cite{Astefanesei:2004kn} by working on the Euclidean section are valid in this case too (in particular the computation of the solutions' action). Concerning the field quantization, the results for (the essential three-dimensional part of) a G\"odel universe and the new 3D solution correspond to different analytic continuations of the same Euclidean quantities (a similar correspondence exists $e.g.$ between the quantization in Rindler spacetime and in a cosmic string background, see e.g. \cite{Moretti:1997qn}).
The Euclidean approach would enable us to use the powerful formalism of ``the direct local $\zeta$-function approach'' \cite{Moretti:1997qn}, and to compute the effective action, the vacuum fluctuations and the one-loop renormalized stress tensor for a quantum field propagating in the background (\ref{3D}). We remark that for $k=1$, the general $\zeta$-computation presents some similarities with the squashed three-sphere case discussed in \cite{Dowker:1998pi}, while the case $k=0$ is approached in Appendix B of Ref. \cite{Astefanesei:2004kn}. A $\zeta$-computation of the effective action for a conformal scalar field propagating in the line element (\ref{3D}) will be presented elsewhere, since a separate analysis is required for every $k$. For the rest of this Section, we present instead some preliminary results concerning the case $k=0$ in four spacetime dimensions, in which case the results have a particularly simple form, focussing on the Green's function computation for a massive scalar field. A 4D Euclidean line element is obtained from (\ref{k=0}) by using the analytical continuation $t\to -i\tau$ and $n \rightarrow i\bar{n } $ \begin{eqnarray} ds^2=dx^2+(dy+\frac{\bar{n}}{2}e^{mx}d\tau)^2+dz^2+\frac{1}{4}e^{2mx}d\tau^2. \end{eqnarray} The Feynman propagator for a massive scalar field is found by taking the (unique) solution bounded on the Euclidean section of the inhomogeneous equation \begin{eqnarray} (\nabla _{a} \nabla ^{a} -M^{2})G_{E} (x,x')=-\frac{\delta ^{4} (x,x')}{g^{1/2} (x)}, \end{eqnarray} the case of a nonminimally coupled scalar field corresponding to a particular value of $M^2$.
If the field is at zero temperature, $G_E(x, x')$ has the form \begin{eqnarray} G_{E} ( x,x')=\frac{1}{8\pi ^{3} } \int\limits_{-\infty }^{+\infty }d\omega \int\limits_{-\infty }^{+\infty }dk_{y} \int\limits_{-\infty }^{+\infty }dk_{z} e^{-i\omega (\tau -\tau' )} e^{ik_{y} (y-y')} e^{ik_{z} (z-z')} f_{k_{y} k_{z} \omega } (x,x'), \end{eqnarray} and the only remaining equation for the propagator is \begin{eqnarray} \label{eq-f} e^{-mx}\frac{d}{dx}\left(e^{mx}\frac{df}{dx}\right)-4\omega^2e^{-2mx}f-4\bar{n}e^{-mx}k_y\omega f \\ \nonumber -\left((1+\bar{n}^2)k_y^2+k_z^2+M^2\right)f\sim -\delta. \end{eqnarray} By using the substitution $ u=4|\omega| e^{-mx}/m, $ the solutions of Eq. (\ref{eq-f}) when the right-hand side is zero are \begin{eqnarray} \nonumber f_{1} =M_{k\mu } (u)=e^{-\frac{u}{2} } u^{\mu +\frac{1}{2} }~_1F_{1} \left(\mu -k+\frac{1}{2},1+2\mu ,u\right), \end{eqnarray} finite as $u \to 0$, and \begin{eqnarray} \nonumber f_{2} =W_{k\mu }(u)=e^{-\frac{u}{2} } u^{\mu +\frac{1}{2} } U\left(\mu -k+\frac{1}{2} ,1+2\mu ,u\right), \end{eqnarray} finite as $u \to \infty$. In the relations above, $\mu =\sqrt{1/4 +\left(k_{z}^{2}+ (1+\bar{n}^2)k_y^2+M^2\right)/m^2}$, $k =-|\omega|\bar{n}k_y/\omega m$, and $_1F_1$ and $U$ are confluent hypergeometric functions. The Whittaker functions satisfy the relation \cite{grad} \begin{eqnarray} \label{transf} W_{k\mu } (z)=\frac{\Gamma (-2\mu )}{\Gamma (\frac{1}{2} -\mu -k)} M_{k\mu } (z)+\frac{\Gamma (2\mu )}{\Gamma (\mu +\frac{1}{2} -k)} M_{k-\mu } (z). \end{eqnarray} Thus the spatial part of the Green's function is given by the usual expression \cite{morse} \begin{eqnarray} f(u,u')=\frac{f_{>} (u_{>} )f_{<} (u_{<} )}{W(f_{>} ,f_{<} )} , \end{eqnarray} where $f_{>}=f_{2}$ satisfies the boundary condition of finiteness at large $u$, $f_{<}=f_{1}$ is similarly finite as $u$ goes to zero, and $W$ is the Wronskian of $f_>$ and $f_<$.
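The two homogeneous solutions and the constancy of their Wronskian can be checked numerically. The sketch below (not part of the paper; the parameter values $k=0.3$, $\mu=0.75$ are generic illustrative choices) builds $f_1=M_{k\mu}$ and $f_2=W_{k\mu}$ from the confluent hypergeometric functions and verifies that $W(f_1,f_2)$ is independent of $u$, as it must be for an equation with no first-derivative term in the variable $u$ (after the substitution, Eq.~(\ref{eq-f}) reduces to the Whittaker equation):

```python
# Sanity check (illustrative, not from the paper): the Whittaker-type
# solutions f1 = M_{k,mu}(u) and f2 = W_{k,mu}(u) built from confluent
# hypergeometric functions, and the u-independence of their Wronskian.
import numpy as np
from scipy.special import hyp1f1, hyperu

k, mu = 0.3, 0.75   # generic illustrative parameter values

def f1(u):  # Whittaker M: finite as u -> 0
    return np.exp(-u / 2) * u ** (mu + 0.5) * hyp1f1(mu - k + 0.5, 1 + 2 * mu, u)

def f2(u):  # Whittaker W: finite as u -> infinity
    return np.exp(-u / 2) * u ** (mu + 0.5) * hyperu(mu - k + 0.5, 1 + 2 * mu, u)

def wronskian(u, h=1e-6):
    # central finite differences for f1' and f2'
    d1 = (f1(u + h) - f1(u - h)) / (2 * h)
    d2 = (f2(u + h) - f2(u - h)) / (2 * h)
    return f1(u) * d2 - d1 * f2(u)

w1, w2 = wronskian(1.0), wronskian(2.0)   # should agree (constant Wronskian)
```

A nonzero, $u$-independent Wronskian is exactly what the denominator $W(f_>,f_<)$ of the Green's function requires.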
Any other combination of the two linearly independent solutions would not satisfy the boundary conditions. However, it is necessary to check that no Euclidean bound states exist. If an everywhere-finite solution of the homogeneous equation did exist, then the freedom one has in adding to the Green's function an arbitrary solution of the homogeneous equation (satisfying the boundary conditions) would make the Green's function nonunique. The condition of existence for a bound state is \begin{eqnarray} k-\mu-\frac{1}{2}={\rm positive~integer}. \end{eqnarray} It is easy to see that this cannot happen in the situation under consideration. Returning to the Lorentzian section by continuing the Euclidean quantities back to their Lorentzian values, we can define a Feynman propagator. The choice of the above analytical continuation fixes the appropriate sign of the timelike Klein-Gordon norms of $f_1$ and $f_2$. The Green's function $G_E(x, x')$ contains all the information about the theory \cite{Birrell:ix}. As a simple application, we use the formalism proposed by Lapedes in \cite{Lapedes:1978rw} to prove that an inertial observer at constant $x$ will not see any particles, where ``what an observer sees'' means ``how a detector reacts when it is coupled linearly to a quantized field propagating freely in our space-time''. In Ref.~\cite{Lapedes:1978rw} it has been proved that the average number of produced pairs detected by an observer at constant $x$ is \begin{eqnarray} \langle n_{\omega k_{y} k_{z} } \rangle=\frac{w}{1-w}, \end{eqnarray} where $w$ is the probability for one pair to be created in a state characterized by the quantum numbers $k_y,~ k_z,~\omega$. This probability can be computed by returning to the Lorentzian section and writing \begin{eqnarray} f_{<} =A\bar{f}_{>} +Bf_{>} . \end{eqnarray} The coefficients $A$ and $B$ can be read from (\ref{transf}). Thus $w=1-\frac{\left| B\right| ^{2} }{\left| A\right| ^{2} } $ and, for the situation discussed here with $n^2<1$, the relative probability vanishes.
\section{Conclusions} This paper was inspired by the finding that the G\"odel geometry can be obtained by squashing the three dimensional anti-de Sitter geometry. Although the initial metric form to be squashed cannot be entirely arbitrary, the variety of possibilities and of the resulting spacetimes is quite large. In this way we obtained a new family of solutions of the 3D Einstein field equations with negative cosmological constant. This solution is characterized by two continuous parameters $m$ and $n$ and a discrete parameter $k$. So far, physically acceptable sources for these solutions have been found only for $n^2<1$. In a four dimensional interpretation, the solution satisfies the Einstein-Maxwell-scalar field equations with cosmological constant. This space-time, of course, is not a viable candidate for describing a physical situation, but it can be a source of insight into the possibilities allowed by relativity theory. We have presented a global description of the spacetime geometries of our solution by isometrically embedding it in a flat spacetime with four extra dimensions. In this way we gained a rather clear picture of the global structure of the geometry. A detailed study of the geodesics of this spacetime showed that the solution is geodesically complete and therefore singularity free. Although this new family possesses the same amount of symmetry as a homogeneous G\"odel-type spacetime (in fact, we proved that it corresponds to a suitable analytical continuation of the latter), there are some important differences. The most obvious ones are the absence of CTC and a different geodesic structure. We also noted the relevance of this 3D geometry within the AdS/CFT correspondence, since it is the boundary of the four dimensional ``topologically nutty bubbles'' with negative cosmological constant discussed in \cite{Ghezelbash:2002xt}.
Since the solution presented here contains three parameters which may lead to spacetimes with rather different properties, it would be interesting to extend the analysis to $n^2>1$ and $-\infty<m^2<\infty$. Another interesting problem is to study the properties of solutions obtained from other parametrizations of the quadratic surfaces (\ref{s1})-(\ref{s4}). \\ \\ {\bf Acknowledgement} \newline The author is grateful to C. Dariescu for useful remarks. This work was performed in the framework of the Enterprise-Ireland Basic Science Research Project SC/2003/390.
\section{Introduction} An orbit following calculation of charged particles is one of the classic problems. It is straightforward, but remains an important tool for studying particle confinement in laboratory plasmas. In high beta magnetic confinement devices, the Larmor radius of energetic particles can be comparable to the characteristic scale length of the experiment (for example, $\alpha$ particles in burning plasmas). The guiding center approximation fails in such cases.\cite{sol96,mik97} In this work, the equation of motion is solved incorporating the Lorentz force term and a Coulomb collisional relaxation term. Since solving the Lorentz force term requires a much shorter time step compared to the guiding center calculation,\cite{mor66,lit83,nis97,nis08} computational efficiency is the key. We introduce a new algorithm to calculate ion beam trajectories in magnetized plasma by applying a perturbation method that regards the Coulomb collisional relaxation term as a small perturbation. We start our analysis by studying the ion orbital behavior (with and without collision effects) in a simple geometry where the magnetic field is axisymmetric. In this paper, we employ a theta pinch plasma\cite{mor69,ish84} in a two dimensional system at plasma equilibrium. The orbit following calculation can be useful in studying the suppression of tilting instabilities\cite{hor90} and rotational instabilities\cite{ish84} by the ion beams. In Sec.\ \ref{s2}, the basic computation model is described. The orbit following calculation is discussed in Sec.\ \ref{s3}. Section~\ref{s4} presents the perturbation method. We summarize this work in Sec.~\ref{s5}. \section{Equation of motion} \label{s2} In this section, the equation of motion is described.
The ion beam equation of motion in MKS units is given by\cite{miy86} \begin{eqnarray} m {d {\bf v} \over dt} &=& q {\bf v} \times {\bf B} \nonumber \\ &-& {q^2 {\bf v} \over 4 \pi \varepsilon_0^2 v^3} \sum^\star {\log \Lambda {q^\star}^2 \over m_r} n^\star \Phi_1 (b^\star v), \end{eqnarray} \begin{equation} {d {\bf x} \over dt} = {\bf v}. \end{equation} Here we recapitulate Ref.~\cite{miy86} as precisely as possible (including the notations), for the transparency of the work. The first and the second term of Eq.(1) are the Lorentz force term (we assume the electric field to be zero) and the Coulomb collisional relaxation term, respectively. The second term reflects the momentum change of the test particle per unit time.\cite{miy86} Here, $m$ and $q$ are the mass and the charge of the beam ions. The magnetic field is given by ${\bf B}$, while the ion beam positions and velocities are given by ${\bf x}$ and ${\bf v}$, respectively. The vacuum permittivity is given by $ \varepsilon_0$, and the Coulomb logarithm (see appendix) is given by $\Lambda$. All the variables with the superscript $\star$ refer to the background plasma species (the ions and the electrons). Here, $m_r = m m^\star/ (m + m^\star) $ is the reduced mass. The function $\Phi_1 $ represents the Gaussian velocity distribution of the background plasma (see appendix), where $T^\star$ is the background plasma temperature and $b^\star = (m^\star / 2 q^\star T^\star)^{1/2}$. Equations (1) and (2) are solved in Cartesian coordinates $x$, $y$, and $z$ using a fourth order Runge-Kutta-Gill method.\cite{kaw89} Equations (1) and (2) hold for ion orbital behavior in three dimensional magnetized plasmas in general. In this paper, as an initial application, a rigid rotor profile of a theta pinch plasma\cite{mor69,ish84} is employed for the two dimensional magnetic field model.
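The collisionless part of this scheme can be illustrated with a minimal sketch (not the paper's code, which uses the Runge-Kutta-Gill variant; here a plain fourth order Runge-Kutta step is used and all numbers are illustrative): a proton in a uniform field ${\bf B}=B_0{\bf z}$ is advanced with the time step of one percent of a gyration period used later in the paper, and the conservation of the speed is monitored.

```python
# Minimal sketch: RK4 integration of dv/dt = (q/m) v x B, dx/dt = v
# for a proton in a uniform magnetic field (collision term switched off).
# The paper itself uses the Runge-Kutta-Gill variant of the RK4 scheme.
import numpy as np

q, m, B0 = 1.602e-19, 1.673e-27, 0.05        # charge (C), mass (kg), field (T)
B = np.array([0.0, 0.0, B0])
omega_c = q * B0 / m                         # cyclotron frequency (rad/s)
dt = 0.01 * 2 * np.pi / omega_c              # 1% of a gyration period

def deriv(state):
    x, v = state[:3], state[3:]
    return np.concatenate([v, (q / m) * np.cross(v, B)])

def rk4_step(state, dt):
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.concatenate([np.zeros(3), [1.0e5, 0.0, 0.0]])  # start at rest point, v_x = 100 km/s
speed0 = np.linalg.norm(state[3:])
for _ in range(1000):                        # ten gyration periods
    state = rk4_step(state, dt)
energy_drift = abs(np.linalg.norm(state[3:]) - speed0) / speed0
```

With this step size the relative drift of the speed (and hence of the kinetic energy) stays far below the percent level over many gyrations, consistent with the conservation accuracies quoted in Sec.~\ref{s3}.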
Denoting $r = (x^2 + y^2)^{1/2} $, the magnetic field is given by \begin{equation} {\bf B} = B_0 \tanh{ \left[ \kappa \left( 2 r^2 / r_s^2 - 1 \right) \right] } {\bf z}, \end{equation} while the background density is given by \begin{equation} n^\star = n_0 \,{\rm sech}^2 { \left[ \kappa \left( 2 r^2 / r_s^2 - 1 \right) \right] }, \end{equation} where $\kappa$ is a constant and $r_s$ is the radius at the separatrix.\cite{tus88} Equations (3) and (4) describe a plasma equilibrium.\cite{mor69} Correspondingly, the magnetic flux $\psi (r) = \int^r_0 B (r') r' dr' $ is given by \begin{eqnarray} \psi (r) &=& \frac{B_0 r_s^2}{4 \kappa} \log{ \left[ \cosh{ \left[ \kappa \left( 2 r^2 / r_s^2 - 1 \right) \right] } \right] } \nonumber \\ &-& \frac{B_0 r_s^2}{4 \kappa} \log{ \left[ \cosh{ \left( \kappa \right) } \right] } . \end{eqnarray} In this paper, the canonical angular momentum is given by\cite{miy86} \begin{equation} P_\theta = m r^2 \dot{\theta} + q \psi (r), \end{equation} where $\dot{\theta}$ is the time derivative of the angular coordinate $\theta$. The kinetic energy is given by $E_k = m {\bf v}^2 /2$. \section{Beam ion orbit} \label{s3} \indent In this section, the ion orbit calculation is presented, employing Eqs.(1) and (2). We study the behavior of beam ions (energetic particles) whose temperature is much larger than that of the background thermal plasma. Figure 1 shows the particle orbits in Cartesian coordinates in the absence of Coulomb collisions. The magnetic configuration reflects the FRC injection experiment (FIX) parameters in the confinement chamber;\cite{shi93} in Eq.(3), the magnetic field strength $B_0$ is $0.05 (T)$ and the separatrix radius $r_s$ is $0.2 (m)$, and thus the magnetic null is at $r_n = 0.141 (m)$. The wall radius is set at $r_w = 0.4 (m)$. In Eq.(3), we set $\kappa = 0.6$.\cite{mor69} The beam ion species is hydrogen.
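The closed form of the flux in Eq.(5) can be checked against a direct numerical quadrature of $\psi(r)=\int_0^r B(r')r'dr'$. The following sketch (illustrative, using the FIX-like numbers quoted above; not part of the paper's code) does so at one test radius and also confirms $\psi(0)=0$:

```python
# Sketch: verify the closed-form flux psi(r) of Eq.(5) against a direct
# numerical integration of B(r') r' dr' for the rigid rotor profile.
import numpy as np
from scipy.integrate import quad

B0, rs, kappa = 0.05, 0.2, 0.6     # field (T), separatrix radius (m), profile constant

def B(r):
    # Eq.(3): axial field of the rigid rotor theta pinch profile
    return B0 * np.tanh(kappa * (2 * r**2 / rs**2 - 1))

def psi_closed(r):
    # Eq.(5): closed-form flux, normalized so that psi(0) = 0
    return B0 * rs**2 / (4 * kappa) * (
        np.log(np.cosh(kappa * (2 * r**2 / rs**2 - 1))) - np.log(np.cosh(kappa)))

r_test = 0.15                      # a test radius between the null and the separatrix
psi_num, _ = quad(lambda rp: B(rp) * rp, 0.0, r_test)
```

Since $d\psi/dr = B_0 r \tanh[\kappa(2r^2/r_s^2-1)] = B(r)\,r$ and the constant term cancels the value at $r=0$, the two evaluations agree to quadrature accuracy.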
Throughout this paper, we assume that the neutral beams are ionized at $x=-0.136 (m)$ and $y=0.147 (m)$, which is on the separatrix (the initial position of the beam ion calculation is given there). In Fig.1(a), the beam ion temperature is given by $T_b = 50 ( eV)$ [the ion orbit is followed for $20 (\mu s)$], while in Fig.1(b), the beam ion temperature is given by $T_b = 4000 (eV)$ [the ion orbit is followed for $2 (\mu s)$]. Naturally, the Fig.1(b) case has a larger Larmor radius. As one can see, the direction of the Larmor precession changes when the trajectory crosses the magnetic null point $r_n$. This is referred to as {\it meandering motion}.\cite{hor90} Since the magnitude of the magnetic field inside $r_n$ is weaker than outside, the Larmor radius is slightly larger inside the separatrix [see Fig.1(c), where the magnetic field strength and the density profile are depicted]. As shown above, the motion is periodic, which can be understood from Noether's theorem (a canonical variable conjugate to a constant momentum undergoes periodic motion). The conservation of the kinetic energy and the angular momentum is verified for the calculation in Fig.1. With single precision, the momentum (energy) is conserved to an accuracy of $6.3 \times 10^{-5} \%$ ($5.5 \times 10^{-4} \%$) of the absolute value after following the orbit for $10000$ steps. Here, the time step in the calculation is given by one percent of $2 \pi / \Omega_c$, where $\Omega_c$ is the beam ion's cyclotron frequency. \begin{figure} \centering \includegraphics[height=7.5cm,angle=+00] {ieee.fig1a.eps} \includegraphics[height=7.5cm,angle=+00] {ieee.fig1b.eps} \includegraphics[height=7.5cm,angle=+00] {ieee.fig1c.eps} \caption{Orbital behavior of (a) 50 (eV) and (b) 4000 (eV) hydrogen ions. The collision term is turned off in Eq.(1). The Larmor precession changes its direction at the magnetic null point (dashed circle). The unit length is normalized by $r_s$ (solid circle).
(c) The magnetic field strength (green curve) and the density profile (red curve) are depicted. The locations of the separatrix and the magnetic null point are indicated.} \label{fig1} \end{figure} Figures 2 and 3 show the particle orbits in the presence of Coulomb collisions. The collision effect is dominated by the electrons (see appendix). The background electron density and temperature are given by $n_0 = 5 \times 10^{19} (m^{-3})$ and $T_e = 20 (eV)$. In Fig.2, the beam ion temperature is given by $T_b = 100 (eV)$. In Fig.3, $T_b = 2000 (eV)$. \begin{figure} \centering \includegraphics[height=7.5cm,angle=+00] {ieee.fig2a.eps} \includegraphics[height=7.5cm,angle=+00] {ieee.fig2b.eps} \caption{(a) Orbital behavior of a 100 eV beam ion in the presence of the collision term. The background electron density and temperature are given by $ 5 \times 10^{19} (m^{-3})$ and $20 (eV)$. (b) The kinetic energy (black curve) and the angular momentum (red curve) versus time.} \label{fig2} \end{figure} Figures 2(b) and 3(b) show the kinetic energy ($E_k$) and the canonical angular momentum ($P_\theta$) versus time. \begin{figure} \centering \includegraphics[height=7.5cm,angle=+00] {ieee.fig3a.eps} \includegraphics[height=7.5cm,angle=+00] {ieee.fig3b.eps} \caption{(a) Orbital behavior of a 2000 eV beam ion in the presence of the collision term. The background electron density and temperature are given by $ 5 \times 10^{19} (m^{-3})$ and $20 (eV)$. (b) The kinetic energy (black curve) and the angular momentum (red curve) versus time.} \label{fig3} \end{figure} In Figs.2 and 3, the kinetic energy and the angular momentum relax. The e-folding times estimated from Fig.2(b) and Fig.3(b) are summarized in Table 1 (we take the logarithm; $\tau_{ie}^{sim} $ for the kinetic energy and $\tau_{\perp}^{sim} $ for the angular momentum).
We now compare the numerical relaxation times [the e-folding times estimated from Fig.2(b) and Fig.3(b)] with the theoretically estimated relaxation times.\cite{miy86,spi62} Following Ref.~\cite{miy86}, the energy relaxation time is given by \begin{equation} \tau_{ie} = \frac{ \left( 2 \pi \right)^{1/2} 3 \pi \varepsilon_0^2 m m_e} { n_e \log \Lambda q^2 q_e^2 } \left( \frac{T_b}{m} + \frac{T_e}{m_e} \right), \end{equation} (which is independent of the beam ion temperature unless ${T_b}/{m} \sim {T_e}/{m_e}$) and the perpendicular momentum relaxation time is given by \begin{equation} \tau_\perp = \frac{ 2 \pi \varepsilon_0^2 m^2 v^3} { n_e \log \Lambda q^2 q_e^2 \Phi \left( b_e v \right) }, \end{equation} respectively. The background plasma is assumed to consist of electrons only. In Eqs.(7) and (8), the charge, the mass, and the temperature of the electrons are given by $q_e$, $m_e$, and $T_e$, respectively. Here, $b_e = \left(m_e / 2 q_e T_e \right)^{1/2}$. The relaxation times for the parameters used in Fig.2 and Fig.3 are summarized in Table 1. In Table 1, the energy relaxation time compares favorably with the numerical estimation, while the momentum relaxation time differs, in particular for the higher energy case. \begin{table} \caption{\label{tab:table1} Comparison of relaxation times.} \begin{tabular}{ccc} \hline $T_b$ & $\tau_{ie}$ & $\tau_{ie}^{sim}$ \\ \hline $100 (eV)$ & $3.9 \times 10^{-5} (s)$ & $4.3 \times 10^{-5} (s)$ \\ $2000 (eV)$ & $3.9 \times 10^{-5} (s)$ & $5.2 \times 10^{-5} (s)$ \\ \hline $T_b$ & $\tau_\perp$ & $\tau_\perp^{sim}$ \\ \hline $100 (eV)$ & $1.3 \times 10^{-4}$ (s) & $1.9 \times 10^{-4} (s)$ \\ $2000 (eV)$ & $2.7 \times 10^{-3}$ (s) & $1.3 \times 10^{-4} (s)$ \\ \hline \end{tabular} \end{table} \section{Perturbation method} \label{s4} In this section, the perturbation method is introduced.
Normalizing Eqs.(1) and (2) by the beam ion cyclotron frequency $\Omega_c = q_b B_0 / m_b$ and the separatrix radius $r_s$, we obtain \begin{equation} {d {\bf V} \over dT} = {\bf V} \times {\bf B} - \epsilon {\bf F}, \end{equation} \begin{equation} {d {\bf X} \over dT} = {\bf V}, \end{equation} where the frictional force is regarded as a small term, employing \begin{equation} \epsilon = {q^4 \log \Lambda n_0 \over 4 \pi \epsilon_0^2 m r_s^3 \Omega_c^4} \sum^\star {\Phi_1 (b^\star) \over m_r } \ll 1 \end{equation} and \begin{equation} {\bf F} = \frac{{\bf V}N(R)}{V^3} \sum^\star {\Phi_1 (b^\star V) \over m_r } \left( \sum^\star {\Phi_1 (b^\star) \over m_r }\right)^{-1}. \end{equation} Expanding ${\bf B} = {\bf B}_0 + \epsilon {\bf B}_1 $ for the rigid rotor profile, we have \begin{equation} {\bf B}_0 = \tanh \left[ \kappa \left( 2 R^2 - 1 \right) \right] {\bf z}, \end{equation} \begin{equation} {\bf B}_1 = 4 \kappa ({\bf X}_0 \cdot {\bf X}_1 ) \,{\rm sech}^2 \left[ \kappa \left( 2 R^2 - 1 \right) \right] {\bf z}, \end{equation} \begin{equation} N(R) = {\rm sech}^2 \left[ \kappa \left( 2 R^2 - 1 \right) \right]. \end{equation} Here, the capital letters ($T,{\bf V},R$, and ${\bf X}$) represent the normalized time, velocity, radius, and position, respectively. From Eqs.(9) and (10), the lowest order equations are given by \begin{equation} {d {\bf V}_0 \over dT} = {\bf V}_0 \times {\bf B}_0, \end{equation} \begin{equation} {d {\bf X}_0 \over dT} = {\bf V}_0, \end{equation} and the first order equations, in order $\epsilon$, are given by \begin{equation} {d {\bf V}_1 \over dT} = {\bf V}_1 \times {\bf B}_0 + {\bf V}_0 \times {\bf B}_1 - {\bf F} \end{equation} \begin{equation} {d {\bf X}_1 \over dT} = {\bf V}_1 . \end{equation} The solution is then given by the sum ${\bf X} = {\bf X}_0 + \epsilon {\bf X}_1 $, ${\bf V} = {\bf V}_0 + \epsilon {\bf V}_1 $.
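The structure of this expansion can be illustrated with a toy version of the scheme (a sketch, not the paper's code): a uniform field ${\bf B}_0={\bf z}$, for which ${\bf B}_1=0$, and a linear drag ${\bf F}={\bf V}_0$ standing in for the friction term. The zeroth and first order systems are advanced together with the same integrator, and ${\bf V}_0+\epsilon{\bf V}_1$ is compared against a direct integration of the full drag equation:

```python
# Toy sketch of the perturbation scheme: uniform B0 = z (so B1 = 0) and a
# linear drag F = V0 standing in for the Coulomb friction. Friction enters
# Eq.(9) as -eps*F, so the first order velocity equation picks up -F.
import numpy as np

B0 = np.array([0.0, 0.0, 1.0])
eps = 1e-3                       # small friction parameter
dt, nsteps = 0.01, 600           # roughly one gyration period in normalized units

def rk4(f, y, dt):
    k1 = f(y); k2 = f(y + 0.5*dt*k1); k3 = f(y + 0.5*dt*k2); k4 = f(y + dt*k3)
    return y + dt/6.0 * (k1 + 2*k2 + 2*k3 + k4)

def direct_rhs(v):               # full equation: dV/dT = V x B0 - eps*V
    return np.cross(v, B0) - eps * v

def pert_rhs(y):                 # zeroth order V0 and first order V1, stacked
    v0, v1 = y[:3], y[3:]
    dv0 = np.cross(v0, B0)               # lowest order equation
    dv1 = np.cross(v1, B0) - v0          # first order equation, with B1 = 0, F = V0
    return np.concatenate([dv0, dv1])

v = np.array([1.0, 0.0, 0.0])            # direct solution
y = np.concatenate([v, np.zeros(3)])     # (V0, V1), with V1(0) = 0
for _ in range(nsteps):
    v = rk4(direct_rhs, v, dt)
    y = rk4(pert_rhs, y, dt)
v_pert = y[:3] + eps * y[3:]             # V = V0 + eps*V1
err = np.linalg.norm(v - v_pert)
```

For this linear toy problem the direct solution decays as $e^{-\epsilon T}$ while the perturbative one reproduces the first order factor $(1-\epsilon T)$, so the mismatch is of order $(\epsilon T)^2/2$, illustrating why the scheme is accurate over times short compared to $1/\epsilon$.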
The crux of Eqs.(18) and (19) is the change in particle velocity (${\bf V}_1 \times {\bf B}_0$) and in the particle's displacement (${\bf V}_0 \times {\bf B}_1$), both induced by the small frictional force (the ${\bf F}$ term). The perturbation method is useful since we only need to change the constant $\epsilon$ when the plasma parameters change, e.g. the background densities and temperatures (and do not need to recalculate the whole trajectories). Figures 4 and 5 show particle trajectories when the perturbation method is employed. Here, the green curve in Fig.5 is obtained by {\it recycling} ${\bf X}_0 $ and $ {\bf X}_1 $ from Fig.4, simply changing the parameter $\epsilon$. In both Figs.4 and 5, the solution from the perturbation method [green curves, solving Eqs.(16)-(19)] matches the direct collision calculation [red curves, solving Eqs.(1) and (2)]. \begin{figure} \centering \includegraphics[height=7.5cm,angle=+00] {ieee.fig4a.eps} \includegraphics[height=7.5cm,angle=+00] {ieee.fig4b.eps} \caption{ (a) The beam ion orbit with a background plasma with $T_{e} = 50$ eV and $n_{e} = 1.0 \times 10^{19}$. The beam ion temperature is $300$ eV. (b) Expansion of the final stage of Fig.4(a). The green (red) curves are from the perturbation method (the direct calculation).} \label{fig4} \end{figure} \begin{figure} \centering \includegraphics[height=7.5cm,angle=+00] {ieee.fig5a.eps} \includegraphics[height=7.5cm,angle=+00] {ieee.fig5b.eps} \caption{ (a) The beam ion orbit with a background plasma with $T_{e} = 50$ eV and $n_{e} = 5.0 \times 10^{19}$. The beam ion temperature is $300$ eV. (b) Expansion of the final stage of Fig.5(a). Here, the green curve in (b) is obtained by {\it recycling} $X_0 $ and $ X_1 $ from (a), simply changing the parameter $\epsilon$. The green (red) curves are from the perturbation method (the direct calculation).} \label{fig5} \end{figure} The lowest-order solution is periodic when the magnetic field is axisymmetric.
Likewise, we expect the first-order solution to be periodic. If that is the case, there is another attractive application of the perturbation method: by storing the first period of both the lowest-order and the first-order solutions, the algorithm can predict the periodic motion in the later phase and thus reduce the computation time. As a demonstration, here we take a simplified case where the Lorentz force and the centrifugal force are balanced at the initial state, \begin{equation} m r \dot{\theta}^2 = q v B \end{equation} [the trajectory is a perfect circle in the absence of collisions; see Fig.6(a)]. Figure 6(b) suggests a periodic motion of the first-order solution from the perturbation method (the time evolution of the Cartesian coordinates $x_1$ and $y_1$ is plotted). \begin{figure} \centering \includegraphics[height=7.5cm,angle=+00] {ieee.fig6a.eps} \includegraphics[height=7.5cm,angle=+00] {ieee.fig6b.eps} \caption{(a) The beam ion orbit (red circle) when the Lorentz force and the centrifugal force are balanced at the initial state. The initial position is at $x=0$ and $y=r_s$. (b) Time evolution of the perturbed quantities $x_1$ (solid) and $y_1$ (dashed).} \label{fig6} \end{figure} \section{Summary} \label{s5} An orbit-following code has been developed to calculate ion beam trajectories in magnetized plasmas. The equation of motion is solved incorporating the Lorentz force term and the Coulomb collisional relaxation term. Conservation of energy and angular momentum is confirmed in the absence of collisions. With collisions, it is shown that the energy relaxation time compares favorably with the theoretical prediction.\cite{miy86} Furthermore, a new algorithm to calculate ion beam trajectories is reported. We have applied a perturbation method, treating the collisional term as small compared to the zeroth-order Lorentz force term. The two numerical solutions, from the perturbation method and from the direct collisional calculation, match.
The perturbation method is useful since, when the background parameters change, only the perturbation parameter needs to be changed to recalculate the trajectories. In general, the algorithm can be applied to periodic motion under perturbative frictional forces, such as guiding center trajectories\cite{mor66,lit83,nis97,nis08} or satellite motion. We have also suggested a reduction in computation time by capturing the periodic motion. A more detailed analysis is left for future work. This work was initiated as a diploma thesis at Osaka University during the years 1990-1991.\cite{nis91} The author would like to thank Dr.~T.~Ishimura and Dr.~S.~Okada for useful discussions. A part of this work was supported by the National Cheng Kung University Top University Project. The author would also like to thank Dr.~C.~Z.~Cheng and Dr.~K.~C.~Shaing. \\
\section{Introduction} For an equilibrium system, adding more transitions to its state space increases the activity. Naively, one would think that this intuitive result also holds for systems that are out of equilibrium. However, unlike in equilibrium systems, the added transitions between states do not have to obey detailed balance, and may lead to an absorbing state and thus decrease the activity. Conceptually, this is similar to several phenomena seen in many-particle out-of-equilibrium systems, such as the faster-is-slower effect \cite{Helbing2000,Parisi2005,Garcimartin2014,Sticco2017,Chen2018}, the slower-is-faster effect \cite{Gershenson2015,Tachet2016}, and motility-induced phase separation \cite{Fily2012,Redner2013,Cates2015,Cugliandolo2017,Digregorio2018,Whitelam2018,Klamser2019,Merrigan2020}, in which the total activity decreases as the activity of the individual particles increases. \begin{figure} \includegraphics[width=\columnwidth]{sketch_v2.pdf} \caption{A sketch of discrete dynamical systems and the transitions between their states: (a-b) Ergodic equilibrium systems, (c) a system which obeys detailed balance but is not ergodic, (d-h) non-equilibrium systems violating detailed balance. The big red arrows between systems represent a removal of transitions, and the big green arrows represent an addition of transitions. In each system, absorbing states are denoted by a red circle, two way transitions by a blue double-sided arrow, and one way transitions by an orange arrow.} \label{sketch} \end{figure} Consider for example an ergodic system in equilibrium, depicted in Fig. \ref{sketch}a. A concrete example of such a system is the symmetric exclusion process \cite{Spitzer1970}, which is a lattice gas in which each particle may hop to a neighboring site if the target site is vacant. Removing transitions together with their reciprocal transitions keeps the system in equilibrium (Fig. \ref{sketch}b), but may cause it to become non-ergodic (Fig. \ref{sketch}c).
A class of models which demonstrates this is that of kinetically constrained models (KCMs) \cite{Jackle1994,Ritort2003,Toninelli2006,Toninelli2007,Jeng2008,Biroli2008,Jeng2010,Garrahan2010,Ghosh2014,Ohta2014,Segall2016}, in which a particle may hop to a neighboring site if the target site is vacant and the neighborhood of the particle satisfies some model-dependent rule, both before and after the move. These models obey detailed balance, since by construction if a transition is allowed, its reverse is also allowed at the same rate. Essentially, adding this kinetic constraint removes some of the bonds from the transition graph of Fig. \ref{sketch}a and transforms the system into the one schematically illustrated in Fig. \ref{sketch}b. At high enough particle density, these models become non-ergodic, as schematically depicted in Fig. \ref{sketch}c. When one-way transitions are added to the system, it is driven out of equilibrium, as shown for instance in going from Fig. \ref{sketch}b to Fig. \ref{sketch}d. In KCMs, this corresponds to allowing some of the moves which are prohibited by the kinetic constraint, but not their reverse moves. If the original KCM is ergodic, these additional one-way transitions increase the activity in the system, as is the case in going from Fig. \ref{sketch}b to Fig. \ref{sketch}d. However, if the original KCM is non-ergodic, these additional one-way transitions may create a path into absorbing states and decrease the long-time activity in the system, as is the case when going from Fig. \ref{sketch}c to Fig. \ref{sketch}e. Transitions can also be added by connecting the system to external reservoirs \cite{Sellitto98,Sellitto2002,Sellitto2002b,Goncalves2009,Teomy2017,Arita2018}. Another way to drive dynamical systems out of equilibrium is to make all transitions one-way only, as shown in going from Fig. \ref{sketch}b to Fig. \ref{sketch}f. This transformation might reduce the long-time activity to zero, but not necessarily.
In lattice gases, this can be achieved by allowing the particles to move only in one direction \cite{Spitzer1970,Derrida1993}. Adding one-way transitions to this system may increase the activity (Fig. \ref{sketch}g) or decrease it (Fig. \ref{sketch}h). In extreme cases, either adding or removing transitions may jam the system, i.e., cause the long-time activity to become zero, or unjam it, i.e., increase the long-time activity from zero to a finite value. In this paper we investigate such extreme behavior and provide concrete examples of this non-intuitive result. A less extreme method is to break detailed balance by biasing the particles to move in a certain direction \cite{Sellitto2000,Levin2001,Fielding2002,Arenzon2003,Fernandes2003,Sellitto2008,Shokef2010,Turci2012}. We consider six related modified KCMs, one of which is the equilibrium Kob-Andersen (KA) model \cite{Kob1993}, while the others are out-of-equilibrium variants of it, obtained by adding or removing one-way transitions. By investigating these models numerically, we demonstrate how in some cases adding transitions increases the activity in the system, while in other cases it counter-intuitively decreases the activity and even jams the system. We also derive a semi-mean-field (SMF) analytical approximation for the activity which qualitatively captures the behavior observed numerically. Although the models we consider here are relatively simple, our results can be generalized to other, more complicated systems driven out of equilibrium by adding transitions. The models are described in Section \ref{sec_models} and their activity is investigated in Section \ref{sec_activity}. Section \ref{sec_conclusion} concludes the paper. The technical derivations of our results are presented in the Appendices. \section{The models} \label{sec_models} In this paper we consider six related models. The first model, from which all the others are derived, is the KA KCM on a 2D square lattice.
In this model, a particle can hop to one of its four neighboring sites if that site is vacant and if, both before and after the move, at least two of the particle's four neighbors are vacant; see Fig. \ref{ka_rules}. This model obeys detailed balance with respect to a trivial Hamiltonian: for each allowed move, the reverse move is also allowed, at the same rate. In the steady state the occupancy of all states is equal and there are no probability currents between the states of the system. In the infinite-size limit the KA model is always ergodic, while in finite systems it jams at some size-dependent density due to finite-size effects \cite{Teomy2012,Teomy2014b,Toninelli2004,Teomy2014}. In a system of size $L\times L$, the critical density in the KA model is given by $\rho^{\rm{KA}}_{c}(L)=1-\lambda(L)/\ln L$, where $\lambda(L)$ depends weakly on $L$, converges to $\pi^{2}/18\approx0.55$ in the $L\rightarrow\infty$ limit, and is approximately $\lambda(L)\approx0.25$ for all system sizes considered in this paper \cite{Holroyd2003,Teomy2014}. \begin{figure} \includegraphics[width=0.3\columnwidth]{ka_rules-1.pdf} \includegraphics[width=0.3\columnwidth]{ka_rules-2.pdf}\\ \includegraphics[width=0.3\columnwidth]{ka_rules-3.pdf} \includegraphics[width=0.3\columnwidth]{ka_rules-4.pdf} \caption{An illustration of the kinetic constraints for a particle (green circle) moving to one of its nearest neighbors. (a) The three sites marked by a purple $\times$ are the Before group, and the three sites marked by a blue $\diamond$ are the After group. In the KA and DKA models at least one of the sites in the Before group and at least one of the sites in the After group need to be vacant in order for the particle to move. In the BKA and DBKA models only the Before group is checked, while in the AKA and DAKA models only the After group is checked. (b) The particle can move in the AKA and DAKA models. (c) The particle can move in the BKA and DBKA models.
(d) The particle can move in all six models.} \label{ka_rules} \end{figure} A system is jammed when it contains particles that will never be able to move, while an unjammed system does not. Note that if a particle cannot move in the current configuration, but will be able to move if some other particles move, then the system is not jammed. For example, the particles marked by an empty circle in Fig. \ref{rattlers} will never be able to move no matter how the three particles marked with a green circle and a purple $\times$ move, and therefore the system depicted there is jammed. We define the activity as the number of moves per unit time, and thus a jammed system may still be active if some of the particles in it can move. We now define two variants of the KA model, namely the After-KA (AKA) and the Before-KA (BKA) models. In the AKA (BKA) model, a particle can hop to an adjacent vacant site if after (before) the hop at least two of its four neighbors are vacant. As opposed to the KA model, the AKA (BKA) model allows a particle to move regardless of the occupancy of the neighbors before (after) the move. Hence, these two models both allow all the moves of the KA model as well as additional moves. These two models are out of equilibrium, since some transitions, namely some of those that are prohibited in the KA model, are allowed here but their inverse transitions are not. The last three models we consider are the driven variants of the three aforementioned models, which we call the DKA, DAKA and DBKA models. In these driven models all particles can move only along one of the four directions, which we designate as down. For such a move to occur, the same kinetic constraints are required to hold as in the KA, AKA, and BKA models, respectively. The DKA model was recently investigated numerically \cite{Bolshak2019} and it was found that the steady-state current vanishes beyond a certain non-trivial critical density.
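The kinetic constraints of the three undriven models can be stated compactly in code. The helper below is our own sketch (not the authors' code), assuming a periodic $L\times L$ NumPy occupancy array with $1$ denoting an occupied site; the driven variants use the same checks with the direction fixed to down.

```python
import numpy as np

DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def can_move(occ, i, j, d, model='KA'):
    """Kinetic constraint check for a particle at (i, j) moving along d.

    'KA' : >= 2 vacant neighbors both before and after the move,
    'AKA': the condition is checked only after the move,
    'BKA': the condition is checked only before the move.
    """
    L = occ.shape[0]
    di, dj = d
    ti, tj = (i + di) % L, (j + dj) % L
    if occ[ti, tj]:
        return False                  # the target site must be vacant
    vac_before = sum(1 - occ[(i + a) % L, (j + b) % L] for a, b in DIRS)
    occ[i, j] = 0                     # tentatively perform the move
    vac_after = sum(1 - occ[(ti + a) % L, (tj + b) % L] for a, b in DIRS)
    occ[i, j] = 1                     # restore the configuration
    if model == 'KA':
        return vac_before >= 2 and vac_after >= 2
    if model == 'AKA':
        return vac_after >= 2
    if model == 'BKA':
        return vac_before >= 2
    raise ValueError(model)
```

A quick check reproduces the hierarchy stated above: every KA move is also an AKA and a BKA move, while a particle with three occupied neighbors may move in the AKA model but not in the KA or BKA models.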
Similar results were found for a variant of the KA model in which the particles can move in all four directions, but are biased in a particular direction \cite{Sellitto2008,Turci2012}. \section{Activity} \label{sec_activity} \subsection{Definition and Mean-Field Approximation} In this section we investigate the activity in the system after it has reached the steady state. We define the activity, $K$, as the number of moves per unit time per lattice site. In the driven models it is equal to the current, and it may be written as $K=\rho P_{\rm{F}}$, where $\rho$ is the fraction of occupied sites and $P_{\rm{F}}$ is the probability that a given particle can move downwards. In the undriven models the activity is equal to \begin{align} K=\frac{\rho}{4}\sum^{4}_{n=1}nP_{\rm{F},n} , \end{align} where $P_{\rm{F},n}$ is the fraction of particles that can move in $n$ of the four directions. Note that while an undriven system may be jammed, i.e., a finite fraction of the particles are permanently frozen and will never be able to move whatever the future dynamics of the system may be, there may still be rattlers, which are particles able to move back and forth inside a confined space, and thus the activity does not vanish in those cases. See Fig. \ref{rattlers} for an illustration of such a case. \begin{figure} \includegraphics[width=\columnwidth]{rattlers.pdf} \caption{An illustration of rattlers trapped inside a cage in the KA or BKA models. The particles on the edges (empty circles) are permanently frozen since at least three of their nearest neighbors are occupied by other permanently frozen particles. Although the $\times$ particle cannot currently move, it will be able to move after the particles marked with a solid green circle move.} \label{rattlers} \end{figure} We start by considering a mean-field (MF) approximation of the activity, in which we ignore all correlations between the occupancies of neighboring sites.
This approximation is exact in the KA model, for which there are no correlations \cite{Kob1993}. The MF approximation for the activity in the various models is \begin{align} &K^{\rm{KA}}_{\rm{MF}}=K^{\rm{DKA}}_{\rm{MF}}=\rho\left(1-\rho\right)\left(1-\rho^{3}\right)^{2} ,\nonumber\\ &K^{\rm{AKA}}_{\rm{MF}}=K^{\rm{BKA}}_{\rm{MF}}=K^{\rm{DAKA}}_{\rm{MF}}=K^{\rm{DBKA}}_{\rm{MF}}=\nonumber\\ &=\rho\left(1-\rho\right)\left(1-\rho^{3}\right) . \end{align} For the KA and DKA models, the terms on the right-hand side correspond, respectively, to the probabilities that a site is occupied, that its neighbor in the chosen direction of motion is vacant, and that at least one of the three sites in each of the Before and After groups is vacant. In the BKA and DBKA (AKA and DAKA) models, the last term corresponds to the probability that at least one of the three sites in the Before (After) group is vacant. Note that under the MF approximation there is no difference between the driven and undriven models, and furthermore the AKA and BKA models have exactly the same MF behavior. Also note that the MF activity in the KA model is lower than the MF activity in the AKA and BKA models, due to the extra constraint in the KA model. The MF activity is finite for all densities and vanishes only either when $\rho=0$, such that there are no particles that can move and contribute to the activity, or when $\rho=1$, such that the system is fully occupied and there are no vacant sites that particles can move into. However, as we will show below, for each of the five non-equilibrium models there is a finite, non-trivial critical density above which the activity in the steady state vanishes. We now derive a SMF approximation for the activity, which accounts for some of the correlations in the system, and then compare it to simulation results. Our SMF approximation predicts a finite, non-trivial value for the critical density at which the activity vanishes and thus qualitatively captures the simulation results.
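The MF expressions above can be evaluated directly; a small sketch (NumPy assumed, function names are ours):

```python
import numpy as np

def k_mf_ka(rho):
    # KA/DKA: occupied x vacant target x (Before group open) x (After group open)
    return rho * (1.0 - rho) * (1.0 - rho**3) ** 2

def k_mf_one_sided(rho):
    # AKA/BKA/DAKA/DBKA: only one group of three sites is checked
    return rho * (1.0 - rho) * (1.0 - rho**3)

rho = np.linspace(0.0, 1.0, 101)
```

On this grid one verifies that both expressions vanish only at $\rho=0$ and $\rho=1$, and that the KA/DKA curve lies below the one-sided curve at every density, reflecting the extra constraint.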
However, the SMF approximation does not capture the numerical values of the critical densities of the different models. \subsection{Semi-Mean Field Approximation} We describe here a sketch of the SMF approximation for the driven models, with the full details given in the appendices. It is straightforward, albeit more lengthy, to follow the same steps and obtain the SMF approximation for the undriven models as well. In the SMF approximation for the driven models, at any moment in time we divide all particles into three groups: free (F), jammed (J), and blocked (B). The particles in the free group are those that can move. The particles in the jammed group are those that have a vacancy in the site below them, but cannot move in their next step solely due to the kinetic constraint. The particles in the blocked group are those whose neighboring site in the direction of the flow is occupied, and which therefore cannot move regardless of the kinetic constraint. We denote the fractions of particles in the free, jammed, and blocked groups by $P_{\rm{F}}$, $P_{\rm{J}}$, and $P_{\rm{B}}$, respectively, where by construction \begin{align} P_{\rm{F}}+P_{\rm{J}}+P_{\rm{B}}=1 .\label{p3e1} \end{align} Now we write a master equation for the rate at which each particle changes its type, \begin{align} &\frac{\partial P_{\alpha}}{\partial t}=\sum_{\beta\neq\alpha}r_{\beta,\alpha}P_{\beta}-\sum_{\beta\neq\alpha}r_{\alpha,\beta}P_{\alpha} ,\label{eveq} \end{align} where $\alpha,\beta={\rm F,J,B}$ and $r_{\alpha,\beta}$ is the rate at which a particle of type $\alpha$ changes into a particle of type $\beta$. The rates themselves depend on $P_{\rm{F}}$, such that Eq. (\ref{eveq}) represents a set of three coupled nonlinear equations, which may be reduced to two equations using Eq. (\ref{p3e1}).
In order to find an approximation for the rates, we assume that each site not accounted for in the type of the state before the transition is occupied with probability $\rho$, and that within each group the probability of being in each of the microscopic states is proportional to its uncorrelated probability. \begin{figure}[t!] \includegraphics[width=0.3\columnwidth]{rates_by_fig_05.pdf} \caption{An illustration of the transition from a blocked state to a free state in the DBKA model. The main particle whose state changes is marked by a full green circle, and the blocking particle by an empty circle. At least one of the $\times$ sites needs to be vacant in order for the blocking particle to move, and at least one of the $\diamond$ sites needs to be vacant in order for the main particle to be free after the blocking particle moves.} \label{ratebf} \end{figure} For example, consider the rate $r_{{\rm B},{\rm F}}$ in the DBKA model, illustrated in Fig. \ref{ratebf}. The configuration before the transition consists of the blocked particle and the blocking particle below it. In order for the blocked particle to change its type to a free particle, two independent conditions should be satisfied. First, the blocking particle needs to move. The blocking particle can move only if it is free itself. The kinetic constraint for the blocking particle to be free is that at least one of its three adjacent sites, other than the site below it, is vacant. Since the site above it is occupied by the blocked particle, we approximate the probability that the blocking particle is free, given that the site above it is occupied, by the uncorrelated fraction of free sites with one of the three neighbors occupied, i.e. $(1-\rho^2)/(1-\rho^3)$. Note that if the blocking particle is free, the site below it must be vacant, and therefore the probability that it is vacant is already included in the probability that the blocking particle is free.
The second condition for the blocked particle to change its type to a free particle is that at least one of its three other neighbors is vacant, the probability of which we approximate by $1-\rho^{3}$. Altogether, the rate $r_{\rm{B},\rm{F}}$ in the DBKA model is given by \begin{align} r_{\rm{B},\rm{F}}=P_{\rm{F}}\frac{1-\rho^{2}}{1-\rho^{3}}\left(1-\rho^{3}\right)=\left(1-\rho^{2}\right)P_{\rm{F}} . \end{align} The other rates are constructed in a similar fashion for the three driven models, as detailed in Appendix \ref{app_driven}. For all six models, the rates $r_{\rm{B},\alpha}$ and $r_{\rm{J},\alpha}$ are proportional to $P_{\rm{F}}$, since they involve the movement of a particle other than the blocked or jammed main particle, while the rates $r_{\rm{F},\alpha}$ are linear in $P_{\rm{F}}$, since they contain terms which correspond to the movement of the main particle and terms which correspond to the movement of other particles. Therefore, we may write the rates as \begin{align} &r_{\rm{B},\alpha}=\omega_{\rm{B},\alpha}P_{\rm{F}} ,\nonumber\\ &r_{\rm{J},\alpha}=\omega_{\rm{J},\alpha}P_{\rm{F}} ,\nonumber\\ &r_{\rm{F},\alpha}=\Omega_{\rm{F},\alpha}+\omega_{\rm{F},\alpha}P_{\rm{F}} ,\label{omega_def} \end{align} with $\omega_{\alpha,\beta}$ and $\Omega_{\alpha,\beta}$ depending only on the density, and obviously different for the six different models. We now look for stationary solutions of Eq. (\ref{eveq}) under the condition $0\leq P_{\rm{F}},P_{\rm{J}},P_{\rm{B}}\leq 1$. The solution $P_{\rm{F}}=0$ is always a stationary solution.
We find that if there is another stationary solution with $P_{\rm{F}}>0$, then it is unique and given by \begin{widetext} \begin{align} P_{\rm{F}}=\frac{\left(\Omega_{\rm{F},\rm{B}} + \omega_{\rm{J},\rm{B}}\right) \left(\omega_{\rm{B},\rm{F}} - \omega_{\rm{J},\rm{F}}\right) - \left(\omega_{\rm{B},\rm{F}} + \omega_{\rm{B},\rm{J}} + \omega_{\rm{J},\rm{B}}\right)\left(\Omega_{\rm{F},\rm{B}} + \Omega_{\rm{F},\rm{J}} - \omega_{\rm{J},\rm{F}}\right)}{\left(\omega_{\rm{F},\rm{B}}+\omega_{\rm{F},\rm{J}}\right)\left(\omega_{\rm{B},\rm{J}}+\omega_{\rm{J},\rm{B}}\right)+\left(\omega_{\rm{B},\rm{J}}+\omega_{\rm{F},\rm{B}}\right)\omega_{\rm{J},\rm{F}}+\omega_{\rm{B},\rm{F}}\left(\omega_{\rm{F},\rm{J}}+\omega_{\rm{J},\rm{B}}+\omega_{\rm{J},\rm{F}}\right)} .\label{pfss} \end{align} \end{widetext} In Appendix \ref{app_stability} we derive Eq. (\ref{pfss}) and show that if this solution exists, it is also stable. In Appendix \ref{app_stabpf0} we investigate the stability of the $P_{\rm{F}}=0$ state under the SMF approximation, and find that for large enough $P_{\rm{B}}$, the solution is stable. For the three driven models, as well as for the BKA model, we find that within the SMF approximation there is some finite, model-dependent critical density $0<\rho_{c}<1$ such that for densities higher than the critical density, $\rho>\rho_{c}$, Eq. (\ref{pfss}) yields a negative $P_{\rm{F}}$ and thus the positive solution does not exist, while for $\rho<\rho_{c}$ we find that $P_{\rm{F}}>0$. Therefore, the critical density is defined as the solution of Eq. (\ref{pfss}) with $P_{\rm{F}}=0$. The critical densities we get from this SMF approximation are $\rho^{{\rm SMF}}_{{\rm DKA}}=0.792$, $\rho^{{\rm SMF}}_{{\rm DAKA}}=0.933$, $\rho^{{\rm SMF}}_{{\rm DBKA}}=0.679$ and $\rho^{{\rm SMF}}_{{\rm BKA}}=0.858$. In the driven models, the SMF approximation involves three different states.
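Eq. (\ref{pfss}) can be checked numerically: for arbitrary positive rate coefficients, the closed-form $P_{\rm F}$, combined with the $P_{\rm J}$ fixed by the stationarity of Eq. (\ref{eveq}) and $P_{\rm B}=1-P_{\rm F}-P_{\rm J}$, must make $\partial P_{\rm F}/\partial t$ vanish. A sketch (the dictionary-style coefficient names are ours, not the paper's):

```python
import numpy as np

def stationary_pf(om, Om):
    """P_F from Eq. (pfss); om[a][b] ~ omega_{a,b}, Om[b] ~ Omega_{F,b}."""
    num = ((Om['B'] + om['J']['B']) * (om['B']['F'] - om['J']['F'])
           - (om['B']['F'] + om['B']['J'] + om['J']['B'])
           * (Om['B'] + Om['J'] - om['J']['F']))
    den = ((om['F']['B'] + om['F']['J']) * (om['B']['J'] + om['J']['B'])
           + (om['B']['J'] + om['F']['B']) * om['J']['F']
           + om['B']['F'] * (om['F']['J'] + om['J']['B'] + om['J']['F']))
    return num / den

def dpf_dt(pf, om, Om):
    """dP_F/dt of Eq. (eveq), with P_J fixed by dP_J/dt = 0 and
    P_B = 1 - P_F - P_J; rates parametrized as in Eq. (omega_def)."""
    a = om['J']['F'] + om['J']['B'] + om['B']['J']
    pj = (om['B']['J'] * (1.0 - pf) + Om['J'] + om['F']['J'] * pf) / a
    pb = 1.0 - pf - pj
    r_fb = Om['B'] + om['F']['B'] * pf
    r_fj = Om['J'] + om['F']['J'] * pf
    return om['B']['F'] * pf * pb + om['J']['F'] * pf * pj - (r_fb + r_fj) * pf
```

For random positive coefficients the residual vanishes to machine precision, which makes this a useful regression test when implementing the model-specific $\omega$'s and $\Omega$'s of the appendices.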
In the undriven models, we need to account for whether, in each of the four directions, the particle is free to move, blocked, or jammed, which gives a total of $3^{4}=81$ states; these reduce to $20$ by rotational and inversion symmetry. In the BKA model, however, the number of states is reduced to six, since a particle is jammed in a certain direction only if it is blocked in the other three directions. We therefore also present in Appendix \ref{app_bka} the derivation of the SMF activity in the BKA model. We leave the derivations for the AKA and KA models, which are straightforward but cumbersome, to future publications. However, we expect that the SMF approximation in the KA model would yield exactly the same result as the MF approximation for that model, since by construction it has no correlations. Thus, it would only be interesting to develop the SMF approximation for the AKA model. \subsection{Numerical Results} We simulated the six models on a $30\times30$ lattice with periodic boundary conditions. For each density we averaged over $100$ realizations, starting from different random initial conditions. We also performed simulations on larger systems, up to $100\times100$ (not shown), and found very small deviations due to finite-size effects \cite{Teomy2012,Teomy2014b,Toninelli2004,Teomy2014}. Figure \ref{activity_plot} compares the steady-state activity evaluated from the simulations with the MF approximation and the SMF approximation. The SMF approximation overestimates the simulated activity for all six models. While the KA, DKA, AKA and DAKA models converge to the steady state rather rapidly, the BKA and DBKA models converge very slowly for an intermediate range of densities ($0.37<\rho<0.43$ for the DBKA model and $0.50<\rho<0.81$ for the BKA model), as shown in Fig. \ref{decay}.
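A simulation of these models fits in a few lines. The sketch below is our own minimal random-sequential-update sweep for the undriven BKA model (not the authors' simulation code), measuring the activity as accepted moves per site per sweep at a low density, where the model is ergodic.

```python
import numpy as np

DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def bka_sweep(occ, rng):
    """One Monte Carlo sweep of the undriven BKA model on a periodic
    lattice; returns the number of accepted moves."""
    L = occ.shape[0]
    moves = 0
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        if not occ[i, j]:
            continue
        di, dj = DIRS[rng.integers(0, 4)]
        ti, tj = (i + di) % L, (j + dj) % L
        vac = sum(1 - occ[(i + a) % L, (j + b) % L] for a, b in DIRS)
        if occ[ti, tj] == 0 and vac >= 2:   # BKA: only the Before check
            occ[i, j], occ[ti, tj] = 0, 1
            moves += 1
    return moves

rng = np.random.default_rng(0)
L, density = 30, 0.3
occ = (rng.random((L, L)) < density).astype(int)
n0 = occ.sum()
activity = [bka_sweep(occ, rng) / L**2 for _ in range(200)]
```

Swapping the acceptance condition for the KA or AKA rule (or restricting the direction to down for the driven variants) turns the same loop into any of the other five models.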
\begin{figure} \includegraphics[width=0.6\columnwidth]{current_driven_l30.pdf}\\ \includegraphics[width=0.6\columnwidth]{activity_undriven_l30.pdf} \caption{Activity as a function of density for the three driven models (a) and the three undriven models (b), both from simulation results (continuous lines) and under the MF (dotted) and SMF (dashed) approximations. Note that the MF approximation is identical for the AKA/BKA/DAKA/DBKA models.} \label{activity_plot} \end{figure} \begin{figure}[t] \includegraphics[width=0.4\columnwidth]{bka_decay.pdf} \includegraphics[width=0.4\columnwidth]{dbka_decay.pdf} \caption{Activity $K$ as a function of time for the BKA and DBKA models for different densities.} \label{decay} \end{figure} \begin{figure}[h!] \includegraphics[width=0.6\columnwidth]{pfrozen.pdf} \caption{Lower bound on the fraction of frozen particles, $P_{{\rm Z}}$, as a function of density $\rho$ after a very long time. Note that in the BKA model in the density range $0.5<\rho<0.81$, the system has not yet reached the steady state.} \label{frozen_fig} \end{figure} We also measure in the simulations a lower bound on the fraction of frozen particles, $P_{{\rm Z}}$, i.e., of those that will never be able to move. For a given configuration, we do this via an iterative culling procedure \cite{Jeng2008,Teomy2015}. This procedure starts by removing all mobile particles. In the new configuration, some particles which could not move before may now be able to move, and we remove them too. We continue this procedure until none of the remaining particles, if any, can be removed. This procedure gives a lower bound, since any particle which remains after this process is necessarily a frozen particle, but it is possible that some frozen particles have been removed \cite{Teomy2015}. Note that this procedure is not performed during the dynamics, but on a snapshot of the system. Except for the BKA model, which did not reach the steady state at intermediate densities ($0.5<\rho<0.81$), we find from Fig.
\ref{frozen_fig} that $P_{{\rm Z}}$ jumps from $0$ to $1$ at some critical density. This critical density is the same one at which the activity vanishes, since zero activity implies $P_{{\rm Z}}=1$. Note that the critical density of the KA model in a system of size $30\times30$ is $\rho^{\rm{KA}}_{c}(30)\approx0.93$ \cite{Teomy2014}, much higher than the numerically obtained critical densities of the other five models. This is in contrast to the MF approximation, which predicts $\rho^{\rm{KA}}_{c}=1$. However, as stated above, the finite value of the critical density in the KA model is strictly a well-understood finite-size effect, which does not significantly affect the critical densities of the other five models. \begin{figure*} \includegraphics[width=2\columnwidth]{snapshots_big.pdf} \caption{A snapshot of $50\times 50$ systems after a long time at density $\rho=0.6$ (top) and $\rho=0.8$ (bottom), for six models.} \label{snapshot_big} \end{figure*} In order to better understand the behavior of the system, we present in Fig. \ref{snapshot_big} typical configurations of a $50\times50$ system after a very long time. In the KA, DKA, AKA, DAKA and DBKA models the system has reached the steady state, while in the BKA model it has not. In the KA model, the system is always in equilibrium and there are no correlations between the occupancies of the sites. In the DKA model, the system is jammed at $\rho=0.8$ and there are scattered structures of vacancies. At $\rho=0.6$ the system is not jammed, but these structures can still be seen. These structures have been investigated in \cite{Bolshak2019,Turci2012}. The AKA and DAKA models appear very similar. At $\rho=0.8$ the system is jammed, and the vacancies tend to be arranged in a checkerboard pattern. At $\rho=0.6$ the system is not jammed, but there are jammed regions with a checkerboard pattern.
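The iterative culling procedure used for $P_{\rm Z}$ is easy to state in code. The sketch below is our own implementation for the undriven KA rule with periodic boundaries (the paper's code is not shown); note that a configuration of two consecutive full rows survives the culling, consistent with such particles being permanently frozen.

```python
import numpy as np

DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def ka_can_move(occ, i, j, di, dj):
    """Undriven KA rule: target vacant, and >= 2 vacant neighbors both
    before and after the move (periodic boundaries)."""
    L = occ.shape[0]
    ti, tj = (i + di) % L, (j + dj) % L
    if occ[ti, tj]:
        return False
    before = sum(1 - occ[(i + a) % L, (j + b) % L] for a, b in DIRS)
    occ[i, j] = 0                     # tentatively perform the move
    after = sum(1 - occ[(ti + a) % L, (tj + b) % L] for a, b in DIRS)
    occ[i, j] = 1                     # restore
    return before >= 2 and after >= 2

def frozen_lower_bound(occ):
    """Iterative culling: repeatedly delete every currently mobile
    particle; whatever survives is certainly frozen (a lower bound)."""
    occ = occ.copy()
    L = occ.shape[0]
    while True:
        mobile = [(i, j) for i in range(L) for j in range(L)
                  if occ[i, j] and any(ka_can_move(occ, i, j, di, dj)
                                       for di, dj in DIRS)]
        if not mobile:
            return occ
        for i, j in mobile:
            occ[i, j] = 0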
Such checkerboard patterns are the sparsest locally jammed structures in the AKA and DAKA models, and due to their symmetry they may be extended indefinitely. Hence, once a checkerboard pattern appears, it is unstable only at its boundary. However, above the critical density the accumulation of particles at its boundary does not allow the pattern to break, but rather causes it to grow. The behavior of the BKA and DBKA models is more interesting. Before investigating the configurations, we note that any particle that is part of two consecutive full rows is permanently frozen, since it is blocked in three directions and jammed in the fourth. In the KA, DKA, AKA and DAKA models such a configuration cannot be generated dynamically, since a particle is prohibited from completing the second row. However, in the BKA and DBKA models such a configuration can be generated dynamically. At high densities in the DBKA model the system is jammed, and there are structures of vacancies reminiscent of those in the DKA model. At lower densities a front develops, which after some time settles into two consecutive full rows. At that point, these two rows cannot move, and thus the system becomes jammed after all the remaining particles drop onto them. The two consecutive rows always form in the direction normal to the driving. A snapshot of a $100\times100$ system in the DBKA model at $\rho=0.6$, before the onset of jamming, is shown in Fig. \ref{snow}. The spontaneous formation of these jammed structures is the cause of the slow relaxation in the DBKA model, since it generally takes a very long time for this event to occur. \begin{figure} \includegraphics[width=0.6\columnwidth]{dbka_l100_r6_snap.pdf} \caption{A snapshot of a $100\times100$ system in the DBKA model at $\rho=0.6$.} \label{snow} \end{figure} In the BKA model at low densities, two consecutive full rows or columns can be generated dynamically, which then behave as an unmovable wall inside the system.
This wall can grow thicker as other particles form full rows or columns adjacent to it. However, outside the wall, the remaining particles are still active. These walls spontaneously form along either of the two axes. As the density increases, the system becomes divided into rectangles with rattlers, which may be thought of as enclosures between two pairs of parallel walls. If there are enough rattlers inside a rectangle, they can decrease the size of the rectangle by forming a full row or column adjacent to the rectangle's edge. If there are not enough rattlers to form a full row or column, they will continue rattling inside the rectangle forever. We hypothesize that in the infinite-size limit, the density of infinitely rattling particles goes to zero. The slow relaxation in the BKA model is due to the rare events of rattlers forming a full row or column and decreasing the size of the rectangle. Since the formation of the wall is a collective effect of $O(L)$ particles, the relaxation time scale increases with system size. \subsection{Temporal behavior of the activity} The SMF approximation can also be used to describe the temporal behavior of the activity. Starting from a random, uncorrelated initial condition, the SMF approximation can give insight into the short time dynamics, namely the temporal derivative of the activity at time $t=0$, before correlations start developing in the system. Since at $t=0$ the sites are uncorrelated, the activity at time $t=0$ is equal to the MF approximation. The temporal derivative, $\partial K/\partial t$, is given by the SMF approximation with the probabilities $P_{\alpha}$ given by their MF values. Using Eq.
(\ref{eveq}), we find that for the driven models \begin{align} &\left.\frac{\partial K^{\rm{DKA}}}{\partial t}\right|_{t=0}=\nonumber\\ &-\rho^{4}\left(1-\rho\right)^{3}\left(1+2\rho-\rho^{3}+2\rho^{4}+3\rho^{5}+\rho^{6}+\rho^{7}\right) ,\nonumber\\ &\left.\frac{\partial K^{\rm{DAKA}}}{\partial t}\right|_{t=0}=-\rho^{5}\left(1-\rho\right)^{2}\left(1+3\rho+\rho^{2}\right) ,\nonumber\\ &\left.\frac{\partial K^{\rm{DBKA}}}{\partial t}\right|_{t=0}=-\rho^{4}\left(1-\rho\right)^{2}\left(2+\rho+\rho^{2}-\rho^{3}\right) . \end{align} For the BKA model we find that \begin{align} \left.\frac{\partial K^{\rm{BKA}}}{\partial t}\right|_{t=0}=-\frac{1}{4}\rho^{4}\left(1-\rho^{2}\right)^{2}\left(7+\rho+6\rho^{2}-4\rho^{3}\right) . \end{align} In these four models, $\partial K/\partial t$ at $t=0$ is negative for all densities. Figure \ref{dkt_t0} shows the excellent agreement between the simulations and the analytical results. In the KA model, $\partial K^{{\rm KA}}/{\partial t}=0$ at all times and for all densities, since correlations never develop there. \begin{figure} \includegraphics[width=0.6\columnwidth]{dkt_t0.pdf} \caption{The rate of change of the activity, $\partial K/\partial t$ at $t=0$ from the simulations (symbols) and the SMF approximation (continuous lines). The numerical results are averages over $10^5$ runs of $100\times100$ systems.} \label{dkt_t0} \end{figure} Numerically, we see in Fig. \ref{ktime_other} that in the DKA, BKA and DBKA models, the activity decreases monotonically with time, while in Fig. \ref{ktime} we see that in the AKA and DAKA models, the activity is not monotonic with time for $\rho<\rho_{{\rm m}}$, with $\rho^{\rm{AKA}}_{{\rm m}}\approx0.66$ and $\rho^{\rm{DAKA}}_{{\rm m}}\approx0.64$. As shown in Fig. 
\ref{ktime}, for $\rho<\rho_{{\rm m}}$ the activity in the AKA and DAKA models first decreases until it reaches a minimum at time $t_{{\rm min}}$, then increases until it reaches the steady state, while for $\rho>\rho_{{\rm m}}$ it is monotonically decreasing. This result is counter-intuitive; one would expect that the activity would either increase or decrease monotonically with time, depending on whether the system becomes less or more restricted. \begin{figure} \includegraphics[width=0.6\columnwidth]{ktime_other.pdf} \caption{The temporal behavior of the activity in the BKA, DBKA and DKA models for density $\rho=0.5$. The dashed lines are the SMF approximation. The numerical results are averages over $10^5$ runs of $100\times100$ systems.} \label{ktime_other} \end{figure} \begin{figure} \includegraphics[width=0.4\columnwidth]{ktime_daka2.pdf} \includegraphics[width=0.4\columnwidth]{ktime_aka2.pdf}\\ \includegraphics[width=0.4\columnwidth]{ktime_daka5.pdf} \includegraphics[width=0.4\columnwidth]{ktime_aka5.pdf}\\ \includegraphics[width=0.4\columnwidth]{ktime_daka8.pdf} \includegraphics[width=0.4\columnwidth]{ktime_aka8.pdf} \caption{The temporal behavior of the activity in the AKA and DAKA models for three different densities. The dotted red line is the SMF approximation. The numerical results are averages over $10^5$ runs of $100\times100$ systems.} \label{ktime} \end{figure} The SMF approximation can qualitatively explain this behavior. 
Consider the evolution equation for $P_{\rm{B}}$ in the three driven models, given explicitly by \begin{align} &\frac{\partial P^{\rm{DKA}}_{\rm{B}}}{\partial t}=\frac{1-\rho^{2}}{1-\rho^{3}}\left(\rho-P^{\rm{DKA}}_{\rm{B}}\right)P^{\rm{DKA}}_{\rm{F}} ,\nonumber\\ &\frac{\partial P^{\rm{DAKA}}_{\rm{B}}}{\partial t}=\left[\frac{\rho\left(1-\rho^{2}\right)}{1-\rho^{3}}-P^{\rm{DAKA}}_{\rm{B}}\right]P^{\rm{DAKA}}_{\rm{F}} ,\nonumber\\ &\frac{\partial P^{\rm{DBKA}}_{\rm{B}}}{\partial t}=\left[\rho-P^{\rm{DBKA}}_{\rm{B}}\frac{1-\rho^{2}}{1-\rho^{3}}\right]P^{\rm{DBKA}}_{\rm{F}} , \end{align} with the initial condition $\left.P_{\rm{B}}\right|_{t=0}=\rho$. In the DKA model, $P^{\rm{DKA}}_{\rm{B}}=\rho$ at all times, while in the DAKA (DBKA) model $\frac{\partial P_{\rm{B}}}{\partial t}$ is negative (positive) for all $\rho$. This means that $P^{\rm{DAKA}}_{\rm{B}}$ ($P^{\rm{DBKA}}_{\rm{B}}$) decreases (increases) monotonically with time from its initial value, $\rho$, to its steady state value. Now consider the evolution equation for $P_{\rm{F}}$ in the three driven models, which has the form \begin{align} \frac{\partial P_{\rm{F}}}{\partial t}=\left(C_{0}+C_{\rm{B}}P_{\rm{B}}+C_{\rm{F}}P_{\rm{F}}\right)P_{\rm{F}} ,\label{pfgen} \end{align} where $C_{0},C_{\rm{B}}$ and $C_{\rm{F}}$ depend only on the density. For the DKA and DBKA models, $\partial P_{\rm{F}}/\partial t$ is negative for all values of $P_{\rm{B}}$ and $P_{\rm{F}}$ between their initial values and the steady state values, for all densities, and thus $P_{\rm{F}}$ in these two models monotonically decreases with time, and so does the activity. In the DAKA model, however, there are values of $P_{\rm{B}}$ and $P_{\rm{F}}$ between their initial and steady state values for which $\partial P_{\rm{F}}/\partial t$ is positive. 
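The sign behavior of the $P_{\rm{B}}$ equations above can be checked with a short numerical sketch. Here $P_{\rm{F}}$ is frozen at an arbitrary positive constant, which is an assumption made only for this illustration (in the full SMF system $P_{\rm{F}}$ evolves jointly with $P_{\rm{B}}$):

```python
def dPB_dt(model, rho, P_B, P_F=0.5):
    """Right-hand side of the SMF evolution equation for P_B in the three
    driven models.  P_F is frozen at an illustrative positive constant,
    an assumption for this sketch only."""
    g = (1 - rho**2) / (1 - rho**3)
    if model == "DKA":
        return g * (rho - P_B) * P_F
    if model == "DAKA":
        return (rho * g - P_B) * P_F
    if model == "DBKA":
        return (rho - P_B * g) * P_F
    raise ValueError(model)

# At t = 0 the initial condition is P_B = rho: the slope is then zero in
# the DKA model, negative in DAKA and positive in DBKA, for all densities.
for rho in (0.1, 0.3, 0.5, 0.7, 0.9):
    assert abs(dPB_dt("DKA", rho, rho)) < 1e-12
    assert dPB_dt("DAKA", rho, rho) < 0
    assert dPB_dt("DBKA", rho, rho) > 0
```

Since the prefactor multiplying $P_{\rm{F}}$ fixes the sign, the same conclusion holds for any positive value of the frozen $P_{\rm{F}}$.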
Furthermore, for $P^{\rm{DAKA}}_{\rm{B}}$ equal to its steady state value, and $\rho<\rho^{\rm{SMF}}_{\rm{DAKA}}\approx0.933$, we find that $\partial P_{\rm{F}}/\partial t$ is positive for all values of $P^{\rm{DAKA}}_{\rm{F}}$ between its initial and steady state values. Figure \ref{tmin} shows the value of $t_{{\rm min}}$ vs. the density. The SMF approximation predicts that $t_{{\rm min}}$ diverges at $\rho^{\rm{DAKA}}_{{\rm m}}\approx 0.827$, while according to the simulations it diverges at $\rho\approx0.64$. \begin{figure} \includegraphics[width=0.6\columnwidth]{tmin.pdf} \caption{The time at which the activity reaches a minimum, $t_{{\rm min}}$, as a function of density $\rho$.} \label{tmin} \end{figure} \section{Discussion} \label{sec_conclusion} In this paper we showed that, counter-intuitively, adding transitions to a dynamical system may decrease its activity. We analyzed this scenario by investigating six related lattice gas models: the equilibrium KA model, and five non-equilibrium variants of it (AKA, BKA, DKA, DAKA, DBKA). In some cases adding transitions increases the activity (DKA$\rightarrow$DAKA, KA$\rightarrow$AKA at small densities), while in other cases it decreases the activity (DKA$\rightarrow$DBKA, KA$\rightarrow$BKA). The difference lies in the topology of the phase space of each model. For example, consider the undriven models KA, AKA and BKA. The phase space of the KA model is composed of a large part which contains states in which none of the particles are permanently frozen, and many small parts, each of which contains states in which a specific subset of the particles cannot move, most of them due to the particles being in two (or more) consecutive rows or columns. The AKA and BKA models add transitions between the different disjoint parts. In the BKA model, the permanently frozen walls cannot be broken, but they can form dynamically.
Therefore, the added transitions in the BKA model between the different parts of the state space lead into more jammed structures. In the AKA model, the permanently frozen walls can be broken, and so the added transitions between the parts allow the system to escape from these jammed parts. At high enough density, there are other jammed structures which are 2D in nature, not quasi-1D like the walls. The AKA model also allows transitions into these 2D jammed structures, but not out of them, and thus at high enough density it also jams. It would be interesting to continue investigating the new models we described in this paper: AKA, DAKA, BKA and DBKA. For example, the critical density we found in the simulations is for a system of size $30\times30$, and there are bound to be finite-size effects. Also, the simulations always started from an uncorrelated initial condition, and an interesting question is how the initial condition affects the dynamics, since initial conditions affect even models which obey detailed balance \cite{Corberi2009}. Other points worth investigating are the correlations and the relaxation time, especially in the BKA and DBKA models. \section*{Acknowledgments} We thank Gregory Bolshak, Rakesh Chatterjee, and Erdal O\u{g}uz for fruitful discussions. This research was supported in part by the Israel Science Foundation Grant No. 968/16 and by the National Science Foundation Grant No. NSF PHY-1748958.
\section{Introduction} Traditional model checking techniques focus on a systematic check of the validity of a temporal logic formula on a precise mathematical model. The answer to the model checking question is either true or false. Although this classic approach suffices to specify and verify Boolean temporal properties, it does not allow one to reason about the stochastic nature of systems. In real-life systems, there are many phenomena that can only be modeled by considering their stochastic characteristics. For this purpose, probabilistic model checking has been proposed as a formal verification technique for the analysis of stochastic systems. In order to model random phenomena, discrete-time Markov chains, continuous-time Markov chains and Markov decision processes are widely used in probabilistic model checking. A linear-time property is a set of infinite paths. We can use linear-time temporal logic (LTL) to express $\omega$-regular properties. Given a finite Markov chain $M$ and an $\omega$-regular property $Q$, the probabilistic model checking problem for LTL is to compute the probability of accepting runs in the product of the Markov chain $M$ and a deterministic Rabin automaton (DRA) for $\neg Q$ \cite{Katoen}. Among linear-time temporal logics, there exist a number of \emph{choppy logics} that are based on the chop ($\CHOP$) operator. Interval Temporal Logic (ITL) \cite{Mos83} is one kind of choppy logic, in which temporal operators such as \emph{chop}, \emph{next} and \emph{projection} are defined. Within the ITL developments, Duan, Koutny and Holt generalized ITL to infinite time intervals by introducing a new projection construct $(p_1,\ldots,p_m) \prj q$. The new interval-based temporal logic is called Projection Temporal Logic (PTL) \cite{ZCL07}. PTL is a useful formalism for reasoning about intervals of time in hardware and software systems.
It can handle both sequential and parallel compositions, and offers useful and practical proof techniques for verifying concurrent systems \cite{ZN08,ZCL07}. Compared with LTL, PTL can describe more linear-time properties. In this paper, we investigate probabilistic model checking for Propositional PTL (PPTL). There are a number of reasons for being interested in the projection temporal logic language. One is that projection temporal logic can express various imperative programming constructs (e.g. while-loops) and has an executable subset \cite{DYK08,YD08}. In addition, projection temporal logic is more expressive than classic point-based temporal logics such as LTL, since temporal logics with the \emph{chop star} ($*$) and \emph{projection} operators are equivalent to $\omega$-regular languages, whereas LTL cannot express all $\omega$-regular properties \cite{wolper}. Furthermore, the key construct used in PTL is the new projection operator $(p_1,\ldots, p_m)~\prj~q$, which can be thought of as a combination of the parallel and the projection operators in ITL. By means of the projection construct, one can define fine- and coarse-grained concurrent behaviors in a flexible and readable way. In particular, the sequence of processes $p_1,\ldots, p_m$ and the process $q$ may terminate at different time points. In previous work \cite{DYK08,YD08,ZCL07}, we have presented a \emph{normal form} for any PPTL formula. Based on the normal form, we can construct a semantically equivalent graph, called a \emph{normal form graph} (NFG). An infinite (finite) interval that satisfies a PPTL formula corresponds to an infinite (finite) path in the NFG. Unlike Buchi automata, an NFG is exactly the model of a PPTL formula. For any unsatisfiable PPTL formula, the NFG reduces to a false node at the end of the construction. An NFG consists of both finite and infinite paths, but for concurrent stochastic systems we only consider the infinite case here.
Therefore, we define $\NFG_{\mathit{inf}}$ to denote an NFG with only infinite paths. To capture the accurate semantics of PPTL formulas with infinite intervals, we adopt the Rabin acceptance condition as accepting states in $\NFG_{\mathit{inf}}$. In addition, since a Markov chain $M$ is a deterministic probabilistic model, in order to guarantee that the product $M\otimes \NFG_{\mathit{inf}}$ is also a Markov chain, we give an algorithm for determinizing $\NFG_{\mathit{inf}}$, in the spirit of Safra's construction for determinizing Buchi automata. To make this idea clear, we now consider a simple example shown in Figure \ref{eg-chop}. The definitions of NFGs and Markov chains are formalized in the subsequent sections. Let $p ~\CHOP~ q$ be a \emph{chop} formula in PPTL, where $p$ and $q$ are atomic propositions. The $\NFG_{\mathit{inf}}$ of $p~\CHOP~q$ is constructed in Figure \ref{eg-chop}(a), where nodes $v_0$, $v_1$ and $v_2$ are temporal formulas, and edges are state formulas (without temporal operators). $v_0$ is the initial node. $v_2$ is an acceptance node recurring infinitely many times, whereas $v_1$ appears finitely many times. Figure \ref{eg-chop}(b) presents a Markov chain with initial state $s$. Let $\mathit{path} = \langle s, s_1, s_3 \rangle$. We can see that $\mathit{path}$ satisfies $p ~\CHOP ~q$ with probability 0.6. Based on the product of the Markov chain and the $\NFG_{\mathit{inf}}$, we can compute the whole probability that the Markov chain satisfies $p~\CHOP~q$. \begin{figure} \includegraphics[width=9cm]{eg-chop}\\ \caption{A Simple Example for Probabilistic Model Checking on PPTL.}\label{eg-chop} \end{figure} Compared with Buchi automata, NFGs have the following advantages, which make them more suitable for the verification of interval-based temporal logics. \\ (i) NFGs are beneficial for unified verification approaches based on the same formal notation.
NFGs can not only be regarded as models of the specification language PTL, but also as models of the Modeling, Simulation and Verification Language (MSVL) \cite{DYK08,YD08}, which is an executable subset of PTL. Thus, programs and their properties can be written in the same language, which avoids the transformation between different notations. \\ (ii) NFGs can accept both finite words and infinite words, whereas Buchi automata can only accept infinite words. Further, the temporal operators \emph{chop} ($p ~\CHOP ~q$), \emph{chop star} ($p^*$), and \emph{projection} can be readily transformed to NFGs. \\ (iii) NFGs and PPTL formulas are semantically equivalent. That is, every path in an NFG corresponds to a model of the PPTL formula. If a formula is false, then its NFG will be a false node. Thus, satisfiability of PPTL formulas can be reduced to NFG construction. But for an LTL formula, the satisfiability problem requires checking emptiness of the corresponding Buchi automaton. The paper is organized as follows. Section 2 introduces PPTL briefly. Section 3 presents (discrete-time) Markov chains. In Section 4, the probabilistic model checking approach for PPTL is investigated. Finally, conclusions are drawn in Section 5. \section{Propositional Projection Temporal Logic} The underlying logic we use is Propositional Projection Temporal Logic (PPTL). It is a variation of Propositional Interval Temporal Logic (PITL). \begin{Def}\rm Let $AP$ be a finite set of atomic propositions. PPTL formulas over $AP$ can be defined as follows: \[ Q ::= \pi \mid \neg Q \mid \bigcirc Q \mid Q_1 \wedge Q_2 \mid (Q_1, \ldots, Q_m) \prj Q \mid Q^+ \] where $\pi \in AP$, $Q, Q_1,\ldots, Q_m$ are PPTL formulas, and $\bigcirc$ (next), $\prj$ (projection) and ${}^+$ (chop-plus) are the basic temporal operators.
\end{Def} A formula is called a \emph{state} formula if it does not contain any temporal operators, i.e., \emph{next} ($\bigcirc$), \emph{projection} ($\prj$) and \emph{chop-plus} (${}^+$); otherwise it is a \emph{temporal} formula. An interval $\sigma = \langle s_0,s_1,\ldots\rangle$ is a non-empty sequence of states, where $s_i~(i \geq 0)$ is a state mapping from $AP$ to $B= \{true, false\}$. The length, $|\sigma|$, of $\sigma$ is $\omega$ if $\sigma$ is infinite, and the number of states minus 1 if $\sigma$ is finite. To have a uniform notation for both finite and infinite intervals, we will use \emph{extended integers} as indices. That is, for the set $N_0$ of non-negative integers and $\omega$, we define $ N_{\omega} = N_0 \cup \{\omega\}$, and extend the comparison operators $=, <, \leq$ to $N_{\omega}$ by considering $\omega = \omega$ and $i < \omega$ for all $i \in N_0$. Moreover, we define $\preceq$ as $\leq-\{(\omega,\omega)\}$. To define the semantics of the projection construct we need an auxiliary operator. Let $\sigma = \langle s_0, s_1,...\rangle$ be an interval and $r_1,\ldots,r_h$ be integers ($h\geq 1$) such that $0\leq r_1\leq \ldots \leq r_h\preceq |\sigma| $. The \emph{projection} of $\sigma$ onto $r_1,\ldots,r_h$ is the interval (called the projected interval) \[ \sigma \downarrow (r_1,\ldots,r_h) \DEF \langle s_{t_1}, s_{t_2},\ldots, s_{t_l}\rangle \] where $t_1,\ldots,t_l$ are obtained from $r_1,\ldots,r_h$ by deleting all duplicates. In other words, $t_1,\ldots,t_l$ is the longest strictly increasing subsequence of $r_1,\ldots,r_h$. For example, $ \langle s_0, s_1, s_2, s_3 \rangle \downarrow (0,2,2,2,3) = \langle s_0, s_2, s_3 \rangle. $ As depicted in Figure \ref{projection}, the projected interval $\langle s_0, s_2, s_3\rangle$ can be obtained by using the $\downarrow$ operator to take the endpoints of each process $\emptyy, len(2), \emptyy, \emptyy, len(1)$.
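The $\downarrow$ operator above amounts to deleting duplicate indices and keeping the states at the surviving positions. A minimal sketch (the list representation of intervals and the state names are illustrative only):

```python
def project(sigma, rs):
    """Compute sigma ↓ (r_1,...,r_h).  Since r_1 <= ... <= r_h is
    nondecreasing, deleting duplicates leaves the longest strictly
    increasing subsequence t_1,...,t_l; the projected interval keeps
    the states at those indices."""
    ts = []
    for r in rs:
        if not ts or r > ts[-1]:  # skip duplicates of the previous index
            ts.append(r)
    return [sigma[t] for t in ts]

# The example from the text: <s0,s1,s2,s3> ↓ (0,2,2,2,3) = <s0,s2,s3>.
sigma = ["s0", "s1", "s2", "s3"]
assert project(sigma, [0, 2, 2, 2, 3]) == ["s0", "s2", "s3"]
```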
\begin{figure}[htbp] \flushleft\includegraphics[width=10cm]{projection.eps}\\ \caption{A projected interval.}\label{projection} \end{figure} An interpretation for a PPTL formula is a tuple $\interp =(\sigma,i,k,j)$, where $\sigma$ is an interval, $i,k$ are integers, and $j$ an integer or $\omega$ such that $i \leq k \preceq j$. Intuitively, $(\sigma,i,k,j)$ means that a formula is interpreted over a subinterval $\sigma_{(i,..,j)}$ with the current state being $s_k$. The satisfaction relation ($\models$) between interpretation $\interp$ and formula $Q$ is inductively defined as follows. \begin{center} \begin{enumerate} \item $\interp \models \pi$ iff $s_k[\pi]= true$ \item $\interp \models \neg Q$ iff $\interp \nvDash Q$ \item $\interp \models Q_1 \wedge Q_2$ iff $\interp \models Q_1$ and $\interp \models Q_2$ \item $\interp \models \bigcirc Q$ iff $k < j$ and $(\sigma, i, k+1, j) \models Q$ \item $\interp \models (Q_1,\ldots,Q_m) \prj Q$ iff there are~ $k=r_0\leq r_1\leq\ldots \leq r_m \preceq j$~\mbox{such that } $(\sigma,i,r_0,r_1)\models Q_1 ~\mbox{and}$ $(\sigma,r_{l-1},r_{l-1}, r_l)\models Q_l~\mbox{for all}~1<l\leq m ~\mbox{and}$ $ (\sigma',0,0,|\sigma'|)$ $\models Q ~\mbox{for}~ \sigma'~\mbox{given by}:$ \\ $ (a) ~r_m<j ~\mbox{and}~ \sigma' = \sigma \downarrow (r_0,\ldots,r_m)~{\cdot}~\sigma_{(r_m+1,..,j)}$ \\ $(b) ~ r_m=j ~ \mbox{and}~ \sigma'=\sigma \downarrow (r_0,\ldots,r_h) ~\mbox{for some}~ 0\leq h\leq m$. \item $\interp \models Q^+$ $\mbox{iff there are finitely many } r_0,\ldots,r_n \mbox{ and }$ $k= r_0\leq r_1 \leq \ldots \leq r_{n-1}\preceq r_n =j ~(n\geq 1)$\\ $\mbox{such that }$ $(\sigma, i, r_0, r_1)\models Q \mbox{ and }$ $(\sigma,r_{l-1}, r_{l-1}, r_l)\models Q~\mbox{for all}~1<l\leq n$ or\\ $j= \omega$ and there are infinitely many integers $k= r_0 \leq r_1 \leq r_2 \leq \ldots$ such that $\lim\limits_{i\rightarrow\infty} r_i = \omega$ and $(\sigma, i, r_0, r_1) \models Q$ and for $l >1, (\sigma, r_{l-1}, r_{l-1}, r_l) \models Q$. 
\end{enumerate} \end{center} A PPTL formula $Q$ is satisfied by an interval $\sigma$, denoted by $\sigma \models Q$, if $(\sigma, 0, 0, |\sigma|) \models Q$. A formula $Q$ is called satisfiable if $\sigma \models Q$ for some interval $\sigma$. A formula $Q$ is valid, denoted by $\models Q$, if $\sigma \models Q$ for all $\sigma$. Sometimes, we denote $\models p\leftrightarrow q$ (resp. $\models p\rightarrow q$) by $p\approx q$ (resp. $p\hookrightarrow q$) and $\models \Box(p \leftrightarrow q)$ (resp. $\models \Box(p \rightarrow q)$) by $p \equiv q$ (resp. $p\supset q$). The former are called \emph{weak equivalence (resp. weak implication)} and the latter \emph{strong equivalence (resp. strong implication)}. Figure~\ref{fig-formulas} below shows some useful formulas derived from elementary PTL formulas. $\emptyy$ represents the final state and $\more$ specifies that the current state is a non-final state; $\Diamond P$ (namely \emph{sometimes} $P$) means that $P$ holds eventually in the future including the current state; $\Box P$ (namely \emph{always} $P$) represents that $P$ holds always in the future from now on; $\bigodot P$ (\emph{weak next}) tells us that either the current state is the final one or $P$ holds at the next state of the present interval; $\Prj (P_1,\ldots,P_m)$ represents a \emph{sequential} computation of $P_1, \ldots, P_m$ since the projected interval is a singleton; and $P\;\CHOP \;Q$ ($P$ \emph{chop} $Q$) represents a computation of $P$ followed by $Q$, where the intervals for $P$ and $Q$ share a common state. That is, $P$ holds from now until some point in the future, and from that time point $Q$ holds. Note that $P \;\CHOP \;Q$ is a strong chop which always requires that $P$ be true on some finite subinterval. $len(n)$ specifies the distance $n$ from the current state to the final state of an interval; $\SKIP$ means that the length of the interval is one unit of time.
$\fin(P)$ is $\true$ as long as $P$ is $\true$ at the final state, while $\keep(P)$ is $\true$ if $P$ is true at every state but the final one. The formula $\halt(P)$ holds if and only if formula $P$ is $\true$ exactly at the final state. \begin{figure}[htpb] \[ \begin{array}{lcl} \emptyy & \DEF & \neg\bigcirc\true \\ len(n) & \DEF & \left\{ \begin{array}{ll} \emptyy & \mbox{if} ~n=0 \\ \bigcirc len(n-1) & \mbox{if} ~n\geq 1 \end{array} \right. \\ \Box P & \DEF & \neg\Diamond\neg P \\ \SKIP & \DEF & len(1) \\ \Prj (P_1,\ldots,P_m) & \DEF & (P_1,\ldots,P_m) \prj \emptyy \\ \fin(P) & \DEF & \Box(\emptyy \rightarrow P) \\ P\;\CHOP \;Q & \DEF & \Prj \;(P,Q) \\ \keep(P) & \DEF & \Box(\neg \emptyy \rightarrow P) \\ \more & \DEF & \neg\emptyy \\ \halt(P) & \DEF & \Box(\emptyy \leftrightarrow P) \\ \Diamond P & \DEF & \Prj(\true, P) \\ \bigodot P & \DEF & \emptyy \vee \bigcirc P \end{array} \] \caption{Derived PPTL formulas.} \label{fig-formulas} \end{figure} \subsubsection*{An Application of Projection Construct} \begin{Expl}\rm We present a simple application of the projection construct: a pulse generator for a variable $x$ which can assume two values: 0 (low) and 1 (high).
We first define two types of processes. The first one is $hold(i)$, which is executed over an interval of length $i$ and ensures that the value of $x$ remains constant in all but the final state, \[ hold(i) \DEF \Frame(x) \wedge \len(i) \] The other is $switch(j)$, which ensures that the value of $x$ is first set to 0 and then changed at every subsequent state, \[ switch(j) \DEF x=0 \wedge \len(j) \wedge \Box(\more \rightarrow \bigcirc x= 1-x) \] Having defined $hold(i)$ and $switch(j)$, we can define pulse generators with varying numbers and lengths of low and high intervals for $x$, \[ pulse(i_1, \ldots, i_k) \DEF (hold(i_1), \ldots, hold(i_k)) \prj switch(k) \] For instance, the pulse generator \[ \begin{array}{lrl} pulse(3, 5, 3, 4) &\DEF& (hold(3), hold(5), hold(3), hold(4)) \prj switch(4) \end{array} \] is shown in Figure \ref{projection-example}. \begin{figure}[t!] \begin{center} {\small \begin{verbatim} |<------------------------- switch(4) ------------------------------->| x=0 1 0 1 0 |--------------|------------------------|--------------|-------------------| t0 t3 t8 t11 t15 t0 t1 t2 t3 t4 t5 t6 t7 t8 t9 t10 t11 t12 t13 t14 t15 |----|----|----|----|----|----|----|----|----|----|----|----|----|----|----| |<--hold(3)--->|<-------hold(5) ------->|<--hold(3)--->|<---- hold(4) ---->| x=0 0 0 1 1 1 1 1 0 0 0 1 1 1 1 0 \end{verbatim} } \caption{A Pulse Generator}\label{projection-example} \end{center} \end{figure} \end{Expl} Let $Q$ be a PPTL formula and $Q_p \subseteq AP$ be the set of atomic propositions appearing in $Q$. The normal form of PPTL formulas can be defined as follows.
\begin{Def}\rm A PPTL formula $Q$ is in \emph{normal form} if \[ Q \equiv (\bigvee\limits_{j=0}^{n_0} Q_{e_j} \wedge \emptyy) \vee ( \bigvee\limits_{i=0}^{n} Q_{c_i} \wedge \bigcirc Q_{f_i}) \] where $Q_{e_j} \equiv \bigwedge\limits_{k=1}^ {m_0} \dot{q_{jk}}, Q_{c_i} \equiv \bigwedge\limits_{h=1}^ m \dot{q_{ih}}$, $|Q_p|=l$, $1 \leq m_0 \leq l$, $1 \leq m \leq l$; $q_{jk}, q_{ih} \in Q_p$; for any $r \in Q_p$, $\dot{r}$ means $r$ or $\neg r$; and $Q_{f_i}$ is a general PPTL formula. For convenience, we often write $Q_e \wedge \emptyy$ instead of $\bigvee\limits_{j=0}^{n_0} Q_{e_j} \wedge \emptyy$ and $\bigvee\limits_{i=0}^{n} Q_{i} \wedge \bigcirc Q_{i}'$ instead of $\bigvee\limits_{i=0}^{n} Q_{c_i} \wedge \bigcirc Q_{f_i}$. Thus, \[ Q \equiv (Q_e \wedge \emptyy) \vee (\bigvee\limits_{i=0}^{n} Q_{i} \wedge \bigcirc Q_{i}') \] where $Q_e$ and $Q_i$ are state formulas. \end{Def} \begin{Thm}\rm For any PPTL formula $Q$, there is a normal form $Q'$ such that $Q \equiv Q'$ \cite{ZCL07}. \end{Thm} \section{Probabilistic System} We model probabilistic systems by \emph{(discrete-time)} \emph{Markov chains} (DTMCs). Without loss of generality, we assume that a DTMC has a unique initial state. \begin{Def}\rm A Markov chain is a tuple $M=(S, \mathit{Prob}, \iota_{init}, AP, L)$, where $S$ is a countable, nonempty set of states; $\mathit{Prob}: S \times S\rightarrow [0,1]$ is the transition probability function such that $\sum\limits_{s' \in S} \mathit{Prob}(s, s') =1$ for all $s \in S$; $ \iota_{init}: S \rightarrow [0,1]$ is the initial distribution such that $\sum\limits_{s\in S}\iota_{init}(s) =1$; and $AP$ is a set of atomic propositions and $L: S \rightarrow 2^{AP}$ a labeling function.
\end{Def} As in the standard theory of Markov processes \cite{KS60}, we formalize a probability space of $M$, defined as $\psi_M =(\Omega, \mathit{Cyl}, Pr)$, where $\Omega$ denotes the set of all infinite sequences of states $\langle s_0, s_1, \ldots \rangle$ such that $\mathit{Prob} (s_i, s_{i+1}) >0$ for all $i \geq 0$, $\mathit{Cyl}$ is the $\sigma$-algebra generated by the \emph{basic cylinder sets}: \[ \mathit{Cyl}(s_0, \ldots, s_n) = \{ path \in \Omega \mid path = s_0, s_1, \ldots, s_n, \ldots\} \] and $Pr$ is the probability measure defined by \begin{eqnarray*} Pr^M (\mathit{Cyl}(s_0, \ldots, s_n))& = & \mathit{Prob} (s_0, \ldots, s_n) \\& =& \prod\limits_{0 \leq i < n} \mathit{Prob} (s_i, s_{i+1}) \end{eqnarray*} If $p$ is a path in a DTMC $M$ and $Q$ a PPTL formula, we often write $p \models Q$ to mean that the path satisfies the given formula $Q$. Let $\mathit{path(s)}$ be the set of paths in the DTMC starting from state $s$. The probability for $Q$ to hold in state $s$ is denoted by $Pr^M(s \models Q)$, where $Pr^M(s \models Q) = Pr^M_s \{p \in \mathit{path(s)} \mid p \models Q\}$. \section{Probabilistic Model Checking for PPTL} In \cite{ZCL07}, it is shown that any PPTL formula can be rewritten into normal form, and a graphic description of the normal form, called the Normal Form Graph (NFG), is presented. The NFG is an important basis of the decision procedure for satisfiability and of model checking for PPTL. The work reported in this paper builds on the NFG to investigate probabilistic model checking for PPTL. However, there are some differences in the NFG between our work and the previous work in \cite{DYK08,YD08,ZCL07}. First, an NFG consists of finite paths and infinite paths. For concurrent stochastic systems, we only consider the verification of $\omega$-regular properties. Thus, we are concerned with all the infinite paths of the NFG. These infinite paths are denoted by $\NFG_{\mathit{inf}}$.
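Returning briefly to the probability space of the previous section: the cylinder-set probability $Pr^M(\mathit{Cyl}(s_0,\ldots,s_n))$ is simply the product of the one-step transition probabilities along the prefix. A minimal sketch, using a small hypothetical chain whose states and numbers are chosen only for illustration:

```python
# Transition probability function Prob of a hypothetical DTMC; each
# row (outgoing probabilities of a state) sums to 1.
Prob = {
    ("s", "s1"): 0.6, ("s", "s2"): 0.4,
    ("s1", "s3"): 1.0, ("s2", "s2"): 1.0,
    ("s3", "s3"): 1.0,
}

def cylinder_prob(prefix):
    """Pr(Cyl(s_0,...,s_n)) = product of Prob(s_i, s_{i+1}) over the
    prefix; a missing transition has probability 0."""
    p = 1.0
    for a, b in zip(prefix, prefix[1:]):
        p *= Prob.get((a, b), 0.0)
    return p

assert cylinder_prob(["s", "s1", "s3"]) == 0.6
assert cylinder_prob(["s", "s2"]) == 0.4
```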
Further, to define the nodes which recur only finitely many times, \cite{ZCL07} uses the Labeled NFG (LNFG) to tag all the nodes in finite cycles with $F$. But this cannot identify all the possible acceptance cases. Following the standard acceptance conditions for $\omega$-automata, we adopt the Rabin acceptance condition to precisely define the infinite paths in $\NFG_{\mathit{inf}}$. In addition, since a Markov chain $M$ is a deterministic probabilistic model, in order to guarantee that the product $M\otimes \NFG_{\mathit{inf}}$ is also a Markov chain, the $\NFG_{\mathit{inf}}$ needs to be deterministic. Thus, following Safra's construction for determinizing automata, we design an algorithm to obtain a deterministic $\NFG_{\mathit{inf}}$. \subsection{Normal Form Graph} In the following, we first give a general definition of the NFG for PPTL formulas. \begin{Def}[Normal Form Graph \cite{DYK08,ZCL07}]\rm For a PPTL formula $P$, the set $V(P)$ of nodes and the set $E(P)$ of edges connecting nodes in $V(P)$ are inductively defined as follows. \begin{enumerate} \item $P \in V(P)$; \item For all $Q \in V(P) \setminus \{\emptyy, \false\}$, if $Q \equiv (Q_e \wedge \emptyy) \vee (\bigvee\limits_{i=0}^{n} Q_{i} \wedge \bigcirc Q_{i}')$, then $\emptyy \in V(P)$, $(Q, Q_e, \emptyy) \in E(P)$; $Q_i' \in V(P)$, $(Q, Q_i, Q_i') \in E(P)$ for all $i$, $1 \leq i \leq n$. \end{enumerate} The NFG of a PPTL formula $P$ is the directed graph $G = (V(P), E(P))$. \end{Def} A finite path for a formula $Q$ in the NFG is a sequence of nodes and edges from the root to the node $\emptyy$, while an infinite path is an infinite sequence of nodes and edges originating from the root. \begin{Thm}[Finiteness of NFG]\label{finiteness}\rm For any PPTL formula $P$, $|V(P)|$ is finite \cite{ZCL07}. \end{Thm} Theorem \ref{finiteness} assures that the number of nodes in an NFG is finite. Thus, each satisfiable formula of PPTL is satisfiable by a finite transition system (i.e., a finite NFG).
Further, by the finite model property, the satisfiability of PPTL is decidable. In \cite{ZCL07}, Duan \emph{et al.} have given a decision procedure for PPTL formulas based on NFGs. To verify $\omega$-regular properties, we need to consider the infinite paths in the NFG. By ignoring all the finite paths, we obtain a subgraph with only infinite paths, denoted $\NFG_{\mathit{inf}}$. \begin{Def}\label{NFG-inf}\rm For a PPTL formula $P$, the set $V_{\mathit{inf}}(P)$ of nodes and the set $E_{\mathit{inf}}(P)$ of edges connecting nodes in $V_{\mathit{inf}}(P)$ are inductively defined as follows. \begin{enumerate} \item $P \in V_{\mathit{inf}}(P)$; \item For all $Q \in V_{\mathit{inf}}(P)$, if $Q \equiv (Q_e \wedge \emptyy) \vee (\bigvee\limits_{i=0}^{n} Q_{i} \wedge \bigcirc Q_{i}')$, then $Q_i' \in V_{\mathit{inf}}(P)$, $(Q, Q_i, Q_i') \in E_{\mathit{inf}}(P)$ for all $i$, $1 \leq i \leq n$. \end{enumerate} Thus, $\NFG_{\mathit{inf}}$ is the directed graph $G' = (V_{\mathit{inf}}(P), E_{\mathit{inf}}(P))$. Precisely, $G'$ is the subgraph of $G$ obtained by deleting all the finite paths from node $P$ to node $\emptyy$. \end{Def} In fact, a finite path in the NFG of a formula $Q$ corresponds to a model (i.e., an interval) of $Q$. However, this result does not hold for the infinite case, since not all of the infinite paths in the NFG are models of $Q$. Note that, in an infinite path, there must exist some nodes which appear infinitely many times, but there may be other nodes that recur only finitely many times. To capture the precise semantics of formula $Q$, we make use of the Rabin acceptance condition as the constraint on nodes that must recur only finitely often.
\begin{Def}\rm For a PPTL formula $P$, an $\NFG_{\mathit{inf}}$ with Rabin acceptance condition is defined as $G_{Rabin} = (V_{\mathit{inf}}(P), E_{\mathit{inf}}(P), v_0, \Omega)$, where $V_{\mathit{inf}}(P)$ is the set of nodes, $E_{\mathit{inf}}(P)$ is the set of directed edges between nodes in $V_{\mathit{inf}}(P)$, $v_0 \in V_{\mathit{inf}}(P)$ is the initial node, and $\Omega=\{(E_1, F_1), \ldots, (E_k, F_k)\}$ with $E_i, F_i \subseteq V_{\mathit{inf}}(P)$ is the Rabin acceptance condition. An infinite path is a model of the formula $P$ if there exists an infinite run $\rho$ on the path such that \[ \exists (E, F) \in \Omega.\ (\mathit{inf}(\rho) \cap E = \emptyset) \wedge (\mathit{inf}(\rho) \cap F \neq \emptyset) \] where $\mathit{inf}(\rho)$ denotes the set of nodes occurring infinitely often in $\rho$. \end{Def} \begin{Expl}\rm Let $Q$ be a PPTL formula. The normal form of $\Diamond Q$ is derived as follows. \begin{eqnarray*} \Diamond Q & \equiv & \true \;\CHOP\; Q\\ & \equiv & (\emptyy \vee \bigcirc \true) \; \CHOP \; Q \\ & \equiv & (\emptyy \; \CHOP \; Q) \vee (\bigcirc \true \; \CHOP \; Q)\\ & \equiv & Q \vee \bigcirc (\true \CHOP Q) \\ & \equiv & (Q \wedge \emptyy) \vee (Q \wedge \bigcirc \true) \vee \bigcirc \Diamond Q\\ & \equiv & (Q \wedge \emptyy) \vee (Q \wedge \bigcirc (\emptyy \vee \bigcirc \true)) \vee \bigcirc \Diamond Q \end{eqnarray*} The NFG and the $\NFG_{\mathit{inf}}$ with Rabin acceptance condition of $\Diamond Q$ are depicted in Figure \ref{NFG-expl}. By the semantics of formula $\Diamond Q$ (see Figure \ref{fig-formulas}), namely that $Q$ holds eventually in the future including the current state, we can see that the node $\Diamond Q$ may cycle only finitely many times, while the node $T$ (i.e., $\true$) cycles infinitely many times.
\begin{figure*}[htpb] \centering\includegraphics[width=15cm]{NFG-expl.eps} \caption{NFG of $\Diamond Q$.} \label{NFG-expl} \end{figure*} \end{Expl} \begin{table*}[thpb] \centering \caption {Algorithm for constructing $\NFG_{\mathit{inf}}$ with Rabin condition for a PPTL formula.} \label{Alg-NFG-INF} \begin{tabular}{|l|} \hline\\[-.7em] \textbf{Function} $\NFG_{\mathit{inf}}(Q)$\\[.2em] /*precondition: Q is a PPTL formula, NF(Q) is the normal form for $Q$ */ \\[.2em] /*postcondition: $\NFG_{\mathit{inf}}(Q)$ outputs $\NFG_{\mathit{inf}}$ with Rabin condition of $Q$, \\[.2em]~~~~~~~~~~~~~~~~~~~~~~ $G_{\mathit{Rabin}} = (V_{\mathit{inf}}(Q), E_{\mathit{inf}}(Q), v_0, \Omega)$ */ \\[.2em] \hline\\[-.7em] \textbf{begin function}\\[.2em] $~~V_{\mathit{inf}} (Q) =\{Q\}; E_{\mathit{inf}}(Q) =\emptyset; \textbf{visit}(Q)=0; v_0 = Q; E=F=\emptyset$; ~~/*initialization*/ \\[.2em] $~~$\textbf{ while} there exists $R \in V_{\mathit{inf}} (Q)$ and \textbf{visit}(R) == 0\\[.2em] $~~~~~$\textbf{do} $P=\mathit{NF}(R)$; $~~\textbf{visit}(R) =1$;\\[.2em] $~~~~~\textbf{switch} (P)$\\[.2em] ~~~~~~~~~~\textbf{case} $P \equiv \bigvee\limits_{j=1}^h P_{ej} \wedge \emptyy$: \textbf{break};\\[.2em] ~~~~~~~~~~\textbf{case} $P \equiv \bigvee\limits_{i=1}^k P_i \wedge \bigcirc P_i'$ or $P \equiv (\bigvee\limits_{j=1}^h P_{ej} \wedge \emptyy) \vee (\bigvee\limits_{i=1}^k P_i \wedge \bigcirc P_i')$:\\[.2em] ~~~~~~~~~~~~~~~~~~~~~~~\textbf{foreach} $i~(1 \leq i \leq k)$ \textbf{do}\\[.2em] ~~~~~~~~~~~~~~~~~~~~~~~~~~ \textbf{if} $~\neg ( P_i' \equiv \false)~$ and $P_i' \not\in V_{\mathit{inf}}(Q)$ \\[.2em] ~~~~~~~~~~~~~~~~~~~~~~~~~~ \textbf{then} $\textbf{visit} (P_i') = 0$; \\[.2em] ~~~~~~~~~~~~~~~~~~~~~~~~~~ /*$P_i$ is not decomposed to normal form*/ \\[.2em] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ $V_{\mathit{inf}}(Q) = V_{\mathit{inf}}(Q) \cup \bigcup\limits_{i=1}^k \{P_i'\}$; \\[.2em] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ $E_{\mathit{inf}}(Q) = E_{\mathit{inf}}(Q) \cup \bigcup\limits_{i=1}^k \{(R, 
P_i, P_i')\}$;\\[.2em] ~~~~~~~~~~~~~~~~~~~~~~~~~~ \textbf{if} $\neg ( P_i' \equiv \false)~$ and $P_i' \in V_{\mathit{inf}}(Q)$\\[.2em] ~~~~~~~~~~~~~~~~~~~~~~~~~~ \textbf{then} $E_{\mathit{inf}}(Q) = E_{\mathit{inf}}(Q) \cup \bigcup\limits_{i=1}^k \{(R, P_i, P_i')\}$;\\[.7em] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\textbf{when} $P_i' = R$~ \textbf{do}~ /*self-loop*/\\[.5em] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \textbf{if} $R$ is $Q_1 \;\CHOP\; Q_2$ \textbf{then} $E= E \cup \{R\}$ \textbf{else} $F =F \cup \{R\}$ \\[.2em] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \textbf{for} some node $R'' \in V_{\mathit{inf}}(Q)$;\\[.2em] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \textbf{let} $NF(R'')= \bigvee\limits_{j=1}^k R_i \wedge \bigcirc R$ or\\[.2em] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ $NF(R'')=(\bigvee\limits_{j=1}^h R_{ej} \wedge \emptyy) \vee (\bigvee\limits_{i=1}^k R_i \wedge \bigcirc R)$;\\[.7em] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /*nodes $R$ and $R''$ form a loop*/\\[.5em] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\textbf{when} $P_i' = R'' ~( R'' \neq R)$~\textbf{do} ~~\\[.2em] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \textbf{if} $R, R'' \not \in E$ \textbf{then} $F= F \cup \{\{R, R''\}\}$ \\[.2em] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \textbf{else} $E =E \cup \{\{R, R''\}\}$; \\[.2em] ~~~~~~~~~ \textbf{break};\\[.2em] ~~\textbf{end while}\\[.2em] ~~\textbf{return} $G_{\mathit{Rabin}}$;\\[.2em] \textbf{end function}\\[.2em] \hline \end{tabular} \end{table*} \subsection{The Algorithms} To investigate the probabilistic model checking problem for interval-based temporal logics, we use Markov chain $M$ as stochastic models and PPTL as a specification language. In the following, we present algorithms for the construction and determinization of $\NFG_{\mathit{inf}}$ with Rabin acceptance condition respectively. 
\subsubsection*{Construction of $\NFG_{\mathit{inf}}$ } In Table \ref{Alg-NFG-INF}, we present algorithm $\NFG_{\mathit{inf}} (Q)$ for constructing the $\NFG_{\mathit{inf}}$ with Rabin acceptance condition for any PPTL formula. Algorithm $\mathit{NF}(Q)$, which transforms a formula $Q$ into its normal form, can be found in \cite{ZCL07}. For any formula $R \in V_{\mathit{inf}} (Q)$ with $visit(R)=0$, we assume that $P=\mathit{NF}(R)$ is in normal form, where $visit(R)=0$ means that formula $R$ has not yet been decomposed into its normal form. When $P \equiv \bigvee_{i=1}^{k} P_i \wedge \bigcirc P_i'$ or $P \equiv (\bigvee_{j=1}^{h} P_{ej} \wedge \emptyy) \vee (\bigvee_{i=1}^{k} P_i \wedge \bigcirc P_i')$, if $P_i'$ is a new formula (node), that is, $P_i' \not\in V_{\mathit{inf}}$, then by Definition \ref{NFG-inf}, we add the new node $P_i'$ to $V_{\mathit{inf}}$ and the edge $(R, P_i, P_i')$ to $E_{\mathit{inf}}$, respectively. On the other hand, if $P_i' \in V_{\mathit{inf}}$, then a loop is formed. In particular, we need to consider the case $R \equiv Q_1 ~\CHOP~ Q_2$. The formula $Q_1 ~\CHOP~ Q_2$ ($Q_1$ \emph{chop} $Q_2$, defined in Fig.~\ref{fig-formulas}) represents a computation of $Q_1$ followed by $Q_2$, where the intervals for $Q_1$ and $Q_2$ share a common state. That is, $Q_1$ holds from now until some point in the future, and from that time point $Q_2$ holds. Note that $Q_1 ~\CHOP~ Q_2$ used here is a \emph{strong chop}, which always requires that $Q_1$ be true on some finite subinterval. Therefore, infinite models of $Q_1$ can cause $R$ to be false. To solve this problem, we employ the Rabin acceptance condition to ensure that a chop formula is not repeated infinitely many times. By Theorem \ref{finiteness}, the set $V(Q)$ of nodes in the NFG is finite. Since $V_{inf}(Q) \subseteq V(Q)$, $V_{\mathit{inf}}(Q)$ is finite as well. This is essential since it guarantees that algorithm $\NFG_{\mathit{inf}} (Q)$ terminates.
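The worklist skeleton of algorithm $\NFG_{\mathit{inf}}(Q)$ can be sketched as follows. This is only an illustration: the normal-form oracle `nf`, the string encoding of formulas, and the omission of the $E$/$F$ bookkeeping for chop formulas are simplifications, not the paper's implementation.

```python
def build_nfg_inf(Q, nf):
    """nf(R) -> list of (P_i, P_next) pairs read off from the normal form
    of R; terminal disjuncts (P_e /\ empty) are dropped, since NFG_inf
    keeps only the edges that can lie on infinite paths."""
    nodes, edges = {Q}, set()
    worklist = [Q]                      # formulas R with visit(R) == 0
    while worklist:
        R = worklist.pop()              # marks visit(R) = 1
        for P_i, P_next in nf(R):
            if P_next == "false":       # unsatisfiable successor: skip
                continue
            if P_next not in nodes:     # new node: schedule its decomposition
                nodes.add(P_next)
                worklist.append(P_next)
            edges.add((R, P_i, P_next))
    return nodes, edges

# Toy oracle mimicking the normal form of <>Q from the example above:
# <>Q = (Q /\ empty) \/ (Q /\ next true) \/ next <>Q, and true = next true.
toy_nf = {"<>Q": [("Q", "true"), ("true", "<>Q")],
          "true": [("true", "true")]}
nodes, edges = build_nfg_inf("<>Q", toy_nf.get)
print(sorted(nodes))                    # -> ['<>Q', 'true']
print(len(edges))                       # -> 3
```

Termination follows exactly as in Theorem \ref{alg-termination}: each formula enters the worklist at most once and $V_{\mathit{inf}}(Q)$ is finite.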
\begin{Thm}\label{alg-termination}\rm Algorithm $\NFG_{\mathit{inf}} (Q)$ always terminates. \end{Thm} \begin{Proof} Let $V_{\mathit{inf}} (Q) =\{v_1,\ldots, v_n\}$. When all nodes in $V_{\mathit{inf}}$ have been transformed into normal form, we have $visit(v_i)==1~(1\leq i\leq n)$. Hence, the while loop always terminates. \end{Proof} We denote the set of infinite paths in an $\NFG_{\mathit{inf}}$ $G$ by $\mathit{path}(G) = \{p_1, \ldots, p_m\}$, where $p_i~(1 \leq i \leq m)$ is an infinite path from the initial node to some accepting node in $F$. The following theorem holds. \begin{Thm}\label{NFG-equiv}\rm $G_{Rabin}$ and $G'_{\mathit{Rabin}}$ are equivalent if and only if $\mathit{path}(G_{\mathit{Rabin}})= \mathit{path}(G'_{\mathit{Rabin}})$. \end{Thm} Let $Q$ be a satisfiable PPTL formula. By unfolding the normal form of $Q$, we obtain a sequence of formulas $\langle Q, Q_1, Q_1', Q_2, Q_2', \ldots \rangle$. Further, by algorithm $\NFG_{\mathit{inf}}$, we can obtain an $\NFG_{\mathit{inf}}$ equivalent to the normal form. In fact, an infinite path in the $\NFG_{\mathit{inf}}$ of $Q$ corresponds to a model of $Q$. We state this fact as Theorem \ref{NFG-inf-thm}. \begin{Thm}\label{NFG-inf-thm}\rm A formula $Q$ can be satisfied by infinite models if and only if there exist infinite paths in the $\NFG_{\mathit{inf}}$ of $Q$ with Rabin acceptance condition. \end{Thm} \subsubsection*{Determinization of $\NFG_{\mathit{inf}}$} B\"uchi automata and $\NFG_{\mathit{inf}}$ both accept $\omega$-words. The former are the basis for the automata-theoretic approach to model checking with linear-time temporal logic, whereas the latter is the basis for the satisfiability checking and model checking of PPTL formulas. Following the idea of Safra's construction for determinizing B\"uchi automata \cite{Wolfgang02}, we can obtain a deterministic $\NFG_{\mathit{inf}}$ with Rabin acceptance condition from a non-deterministic one.
However, unlike the states in B\"uchi automata, each node in an $\NFG_{\mathit{inf}}$ is specified by a PPTL formula. Thus, by eliminating nodes that contain equivalent formulas, we can decrease the number of states in the resulting deterministic $\NFG_{\mathit{inf}}$ to some degree. The construction of the deterministic $\NFG_{\mathit{inf}}$ is shown in Table \ref{Alg-DNFG}. For any $R\in V_{\mathit{inf}}'(Q)$, $R$ is a Safra tree consisting of a set of nodes, and each node $v$ is a set of formulas. By Safra's algorithm \cite{Wolfgang02}, we can compute all Safra trees $R'$ that can be reached from $R$ on input $P_i$. To obtain a deterministic $\NFG_{\mathit{inf}}$, we take all pairs $(E_v, F_v)$ as the acceptance components, where $E_v$ consists of all Safra trees without a node $v$, and $F_v$ of all Safra trees with node $v$ marked '!', which denotes that $v$ will recur infinitely often. Furthermore, we can minimize the number of states in the resulting $\NFG_{\mathit{inf}}$ by finding equivalent nodes. Let $R = \{v_0, \ldots, v_n\}$ and $R'=\{v_0', \ldots, v_n'\}$ be two Safra trees, where $R, R' \in V'_{\mathit{inf}}$, and let each node $v_i= \{Q_1,Q_2, \ldots\}$ and $v_i' =\{Q_1', Q_2', \ldots\}$ be a set of formulas. If $v_i = v_i'$ for all corresponding nodes $v_i$ and $v_i'$, then the two Safra trees are the same. Moreover, $v_i = v_i'$ if and only if $\bigvee_{j=1}^n Q_j \equiv \bigvee_{j=1}^n Q_j'$. The decision procedure for formula equivalence is guaranteed by the satisfiability theorems presented in \cite{ZCL07}. \begin{table}[htpb] \centering \caption {Algorithm for Deterministic $\NFG_{\mathit{inf}}$.} \label{Alg-DNFG} \begin{tabular}{|l|} \hline\\[-.7em] \textbf{Function} DNFG(Q)\\ /*precondition: $G_{\mathit{Rabin}} = (V_{\mathit{inf}}(Q), E_{\mathit{inf}}(Q), v_0, \Omega)$ is an $\NFG_{\mathit{inf}}$ for PPTL formula $Q$.
*/ \\ /*postcondition: DNFG(Q) outputs a deterministic $\NFG_{\mathit{inf}}$ and \\ ~~~$G_{Rabin}'=(V_{\mathit{inf}}'(Q), E_{\mathit{inf}}'(Q), v_0', \Omega')$ */ \\[.2em] \hline\\[-.7em] \textbf{begin function} \\[.2em] ~~$V_{\mathit{inf}}'(Q) = \{Q\}; E_{\mathit{inf}}'(Q)=\emptyset; v_0' = v_0; E_v = F_v = \emptyset;$ /*initialization*/ \\[1em] ~~\textbf{while} $R\in V_{\mathit{inf}}'(Q)$ and there exists an input $P_i$ \textbf{do}\\[.2em] ~~~~\textbf{foreach} node $v \in R$ such that $R \cap F \neq \emptyset$ \\[.2em] ~~~~~~\textbf{do} $v'= v \cap F$; $ R'=R \cup \{v'\}$; /* create a new node $v'$ such that $v'$ is a son of $v$*/\\[.2em] ~~~~\textbf{foreach} node $v$ in $R'$\\[.2em] ~~~~~~\textbf{do} $v = \{P_i' \in V_{inf}(Q)\mid \exists(P, P_i, P_i' )\in E_{inf}(Q), P\in v\}$; /*update $R'$*/\\[.2em] ~~~~\textbf{foreach} $v \in R'$ \textbf{do if} $P_i \in v$ such that $P_i \in $ left sibling of $v$ \textbf{then} remove $P_i$ in $v$; \\[.2em] ~~~~\textbf{foreach} $v \in R'$ \textbf{do if} $v=\emptyset$ \textbf{then} remove $v$;\\[.2em] ~~~~\textbf{foreach} $v\in R'$ \textbf{do if} $u_1,\ldots,u_n$ are all sons of $v$ such that $v=\cup_{i}\{u_i\}~(1\leq i \leq n)$ \\[.2em] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \textbf{then} remove $u_i$; mark $v$ with $!$;\\[.2em] ~~~~$V_{\mathit{inf}}'(Q) = \{R'\} \cup V_{\mathit{inf}}'(Q)$; $E_{\mathit{inf}}'(Q) = (R, P_i, R')\cup E_{\mathit{inf}}'(Q)$;\\[.2em] ~~\textbf{end while}\\[.5em] ~~/*Rabin acceptance components*/ \\[.2em] ~~ $E_v=\{R \in V_{\mathit{inf}}'(Q) \mid \mbox{R is Safra tree without node $v$}\}$; \\[.2em] ~~ $F_v=\{R \in V_{\mathit{inf}}'(Q) \mid \mbox{R is Safra tree with $v$ marked $!$}\}$;\\[.2em] ~~\textbf{return} $G_{\mathit{Rabin}}'$;\\ \textbf{end function}\\[.2em] \hline \end{tabular} \end{table} \subsection{Product Markov Chains} \begin{Def}\label{product}\rm Let $M=(S, \mathit{Prob}, \iota_{init}, AP, L)$ be a Markov chain $M$, and for PPTL formula $Q$, $G_{\mathit{Rabin}} = (V_{\mathit{inf}} (Q), 
E_{\mathit{inf}}(Q), v_0, \Omega)$ be a deterministic $\NFG_{\mathit{inf}}$, where $\Omega =\{(E_1, F_1), \ldots, (E_k, F_k)\}$. The product $M \otimes G_{\mathit{Rabin}}$ is the Markov chain defined as follows. \[ M \otimes G_{\mathit{Rabin}} = (S \times V_{\mathit{inf}}(Q), \mathit{Prob'}, \iota_{init}', \{acc\}, L' ) \] where \[ L'(\langle s, Q'\rangle) = \left\{ \begin{array}{ll} \{ acc \} & \mbox{ if for some $F_i$}, Q' \in F_i, \\& \mbox{ and } Q' \not \in E_j \mbox{ for all } E_j,\\ & ~1 \leq i, j \leq k\\ \emptyset & \mbox{ otherwise} \end{array} \right. \] \[ \iota_{init}' (\langle s, Q'\rangle) = \left\{ \begin{array}{ll} \iota_{init}(s)~ &\mbox{ if } (Q, L(s), Q') \in E_{\mathit{inf}}\\ 0~&\mbox{ otherwise } \end{array} \right. \] and the transition probabilities are given by \[ \begin{array}{lrl} && \mathit{Prob}'(\langle s', Q'\rangle, \langle s'', Q'' \rangle)\\ & = & \left\{ \begin{array}{ll} \mathit{Prob}(s', s'')& \mbox{ if } (Q', L(s''), Q'') \in E_{\mathit{inf}}\\ 0 & \mbox{ otherwise } \end{array} \right. \end{array} \] \end{Def} A bottom strongly connected component (BSCC) of $M \otimes G_{\mathit{Rabin}}$ is accepting if it fulfills the acceptance condition $\Omega$ of $G_{\mathit{Rabin}}$. For a state $s \in M$, we need to compute the probability of the set of paths starting from $s$ in $M$ for which $Q$ holds, that is, the value of $Pr^M(s \models Q)$. By Definition \ref{product}, this reduces to computing the probability of accepting runs in the product Markov chain $M \otimes G_{\mathit{Rabin}}$. \begin{Thm}\rm Let $M$ be a finite Markov chain, $s$ a state in $M$, $G_{\mathit{Rabin}}$ a deterministic $\NFG_{\mathit{inf}}$ for formula $Q$, and let $U$ denote the union of all accepting BSCCs in $M \otimes G_{\mathit{Rabin}}$. Then, we have \[ Pr^M (s \models G_{\mathit{Rabin}})= Pr^{M\otimes G_{\mathit{Rabin}}} (\langle s, Q' \rangle \models \Diamond U) \] where $(Q, L(s), Q') \in E_{\mathit{inf}}$.
\end{Thm} \begin{Cor}\rm All the $\omega$-regular properties specified by PPTL are measurable. \end{Cor} \begin{Expl}\rm We now consider the example in Figure \ref{eg-chop}. Let $M$ denote the Markov chain in Figure \ref{eg-chop}(b). The probability that the \emph{sequential property} $p ~\CHOP ~q$ holds in the Markov chain $M$ can be computed as follows. First, by the two algorithms above, the deterministic $\NFG_{\mathit{inf}}$ with Rabin condition for $p ~\CHOP ~q$ is constructed as in Figure \ref{eg-chop}(a), where the Rabin acceptance condition is $\Omega =\{(\{v_1\}, \{v_2\})\}$. Further, the product of the Markov chain and the $\NFG_{\mathit{inf}}$ for formula $p ~\CHOP ~q$ is given in Figure \ref{product2}. \begin{figure}[htpb] \centering\includegraphics[width=8cm]{eg3.eps} \caption{The Product of Markov chain and $\NFG_{\mathit{inf}}$ in Figure \ref{eg-chop}.} \label{product2} \end{figure} From Figure \ref{product2}, we can see that state $(s_3, v_2)$ forms the unique accepting BSCC. Therefore, we have \[ \begin{array}{lrl} &&Pr^M (s \models G_{\mathit{Rabin}})\\ &=&Pr^{M \otimes G_{\mathit{Rabin}}} ((s, v_1) \models \Diamond (s_3, v_2)) \\ &=&1 \end{array} \] That is, the sequential property $p~\CHOP~q$ is satisfied almost surely by the Markov chain $M$ in Figure \ref{eg-chop}(b). \end{Expl} \section{Conclusions} This paper presents an approach to probabilistic model checking based on PPTL. Both propositional LTL and PPTL can specify linear-time properties. However, unlike probabilistic model checking with propositional LTL, our approach uses NFGs, rather than B\"uchi automata, to characterize the models of logic formulas. NFGs possess merits that make them well suited to model checking for interval-based temporal logics. Recently, some promising formal verification techniques based on NFGs have been developed, such as \cite{CD10,ZN08}.
In the near future, we will extend the existing model checker for PPTL with probabilities and, following the algorithms proposed in this paper, verify regular safety properties of probabilistic systems. \section*{Acknowledgement} Thanks to Huimin Lin and Joost-Pieter Katoen for their helpful suggestions.
\section{Introduction} Let $(Y,\lambda)$ be a contact three manifold and $X_{\lambda}$ be its Reeb vector field. That is, $X_{\lambda}$ is the unique vector field satisfying $i_{X_{\lambda}}\lambda=1$ and $d\lambda(X_{\lambda},\,\,)=0$. A smooth map $\gamma : \mathbb{R}/T\mathbb{Z} \to Y$ is called a Reeb orbit with period $T$ if $\dot{\gamma} = X_{\lambda}(\gamma)$, and simple if $\gamma$ is an embedding. In this paper, two Reeb orbits are considered equivalent if they differ by reparametrization. The three-dimensional Weinstein conjecture, which states that every closed contact three manifold $(Y,\lambda)$ has at least one simple periodic orbit, was shown by C. H. Taubes using Seiberg-Witten Floer (co)homology \cite{T1}; after that, D. Cristofaro-Gardiner and M. Hutchings showed that every closed contact three manifold $(Y,\lambda)$ has at least two simple periodic orbits by using embedded contact homology (ECH) in \cite{CH}. ECH was introduced by M. Hutchings in several papers (for example, it is briefly explained in \cite{H2}). Let $\gamma : \mathbb{R}/T\mathbb{Z} \to Y$ be a Reeb orbit with period $T$. If its return map $d\phi^{T}|_{\mathrm{Ker}\lambda=\xi} :\xi_{\gamma(0)} \to \xi_{\gamma(0)}$ has no eigenvalue $1$, we call it a non-degenerate Reeb orbit, and we call a contact manifold $(Y,\lambda)$ non-degenerate if all its Reeb orbits are non-degenerate. According to the eigenvalues of their return maps, non-degenerate periodic orbits are classified into three types. A periodic orbit is negative hyperbolic if $d\phi^{T}|_{\xi}$ has eigenvalues $h,h^{-1} < 0$, positive hyperbolic if $d\phi^{T}|_{\xi}$ has eigenvalues $h,h^{-1} > 0$, and elliptic if $d\phi^{T}|_{\xi}$ has eigenvalues $e^{\pm i2\pi\theta}$ for some $\theta \in \mathbb{R}\backslash \mathbb{Q}$. For a non-degenerate contact three manifold $(Y,\lambda)$, the following theorems were proved by essentially using ECH.
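Since the linearized return map $d\phi^{T}|_{\xi}$ preserves the area form $d\lambda|_{\xi}$, it can be represented in a trivialization by a matrix in $SL(2,\mathbb{R})$, so the three types above are distinguished by the trace $t$: $|t|<2$ gives eigenvalues on the unit circle (elliptic), $t>2$ gives $h,h^{-1}>0$ (positive hyperbolic), and $t<-2$ gives $h,h^{-1}<0$ (negative hyperbolic). A small illustrative sketch of this standard trace criterion (not from the paper):

```python
import math

def classify_return_map(a, b, c, d):
    """Classify the 2x2 matrix (a b; c d), assumed to have determinant 1,
    by the trace criterion; t = +-2 is the degenerate borderline case."""
    assert abs(a * d - b * c - 1.0) < 1e-9, "return map must have det 1"
    t = a + d
    if abs(t) < 2:
        return "elliptic"
    if t > 2:
        return "positive hyperbolic"
    if t < -2:
        return "negative hyperbolic"
    return "degenerate"

theta = 0.3  # rotation by angle theta: eigenvalues e^{+-i theta}
print(classify_return_map(math.cos(theta), -math.sin(theta),
                          math.sin(theta), math.cos(theta)))  # -> elliptic
print(classify_return_map(-2.0, 0.0, 0.0, -0.5))  # -> negative hyperbolic
```

(For a genuinely non-degenerate elliptic orbit, $\theta$ must in addition be an irrational multiple of $2\pi$ so that no iterate has eigenvalue $1$.)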
\begin{them}[\cite{HT3}]\label{exacttwo} Let $(Y,\lambda)$ be a closed non-degenerate contact three manifold. Assume that there exist exactly two simple Reeb orbits; then both of them are elliptic and $Y$ is a lens space (possibly $S^{3}$). \end{them} \begin{them}[\cite{HCP}] Let $(Y,\lambda)$ be a non-degenerate contact three manifold and let $\mathrm{Ker}\lambda=\xi$. Then \item[1.] if $c_{1}(\xi)$ is torsion, there exist infinitely many periodic orbits, or there exist exactly two elliptic simple periodic orbits and $Y$ is diffeomorphic to a lens space (that is, it reduces to \cite{HT3}); \item [2.] if $c_{1}(\xi)$ is not torsion, there exist at least four periodic orbits. \end{them} \begin{them}[\cite{HCP}]\label{posihyp} If $b_{1}(Y)>0$, there exists at least one positive hyperbolic orbit. \end{them} In general, ECH splits into two parts, $\mathrm{ECH}_{\mathrm{even}}$ and $\mathrm{ECH}_{\mathrm{odd}}$. In particular, $\mathrm{ECH}_{\mathrm{odd}}$ is the part which detects the existence of a positive hyperbolic orbit; moreover, if $b_{1}(Y)>0$, we can see directly from the isomorphism between Seiberg-Witten Floer homology and ECH (Theorem \ref{test}) that $\mathrm{ECH}_{\mathrm{odd}}$ does not vanish. Theorem \ref{posihyp} was proved by using these facts. In contrast to the case $b_{1}(Y)>0$, if $b_{1}(Y)=0$, $\mathrm{ECH}_{\mathrm{odd}}$ may vanish, so such an approach does not work. As a generalization of this phenomenon, D. Cristofaro-Gardiner, M. Hutchings and D. Pomerleano asked the next question in the same paper. \begin{Que}[\cite{HCP}]\label{quest} Let $Y$ be a closed connected three-manifold which is not $S^3$ or a lens space, and let $\lambda$ be a nondegenerate contact form on $Y$. Does $\lambda$ have a positive hyperbolic simple Reeb orbit?
\end{Que} The reason why the cases of $S^3$ and lens spaces are excluded in Question \ref{quest} is that they admit contact forms with exactly two simple elliptic orbits, as stated in Theorem \ref{exacttwo} (for example, see \cite{HT3}). So in general, we can change the assumption of Question \ref{quest} to the one that $(Y,\lambda)$ is not a lens space or $S^3$ with exactly two elliptic orbits (this is a generic condition; see \cite{Ir}). For this purpose, the author proved the next theorem in \cite{S}. \begin{them}[\cite{S}]\label{elliptic} Let $(Y,\lambda)$ be a nondegenerate contact three manifold with $b_{1}(Y)=0$. Suppose that $(Y,\lambda)$ has infinitely many simple periodic orbits (that is, $(Y,\lambda)$ is not a lens space with exactly two simple Reeb orbits) and has at least one elliptic orbit. Then, there exists at least one simple positive hyperbolic orbit. \end{them} By Theorem \ref{elliptic}, for answering Question \ref{quest}, it is enough to consider the next problem. \begin{Que} Let $Y$ be a closed connected three manifold with $b_{1}(Y)=0$. Does $Y$ admit a non-degenerate contact form $\lambda$ such that all simple orbits are negative hyperbolic? \end{Que} The next theorem is the main theorem of this paper. \begin{them}\label{maintheorem} Let $L(p,q)$ $(p\neq \pm 1)$ be a lens space with odd $p$. Then $L(p,q)$ cannot admit a non-degenerate contact form $\lambda$ all of whose simple periodic orbits are negative hyperbolic. \end{them} Immediately, we have the next corollary. \begin{cor}\label{periodic} Let $(L(p,q),\lambda)$ $(p\neq \pm 1)$ be a lens space with a non-degenerate contact form $\lambda$. Suppose that $p$ is odd and there are infinitely many simple Reeb orbits. Then, $(L(p,q),\lambda)$ has a simple positive hyperbolic orbit. \end{cor} In addition to Theorem \ref{maintheorem}, the next Theorem \ref{z2act} and Corollary \ref{allact} hold. Their proofs are shorter than that of Theorem \ref{maintheorem}.
We prove them at the end of this paper. \begin{them}\label{z2act} Let $(S^{3},\lambda)$ be a non-degenerate contact three sphere with a free $\mathbb{Z}/2\mathbb{Z}$ action. Suppose that $(S^{3},\lambda)$ has infinitely many simple periodic orbits. Then $(S^{3},\lambda)$ has a simple positive hyperbolic orbit. \end{them} \begin{cor}\label{allact} Let $(S^{3},\lambda)$ be a non-degenerate contact three sphere with a nontrivial finite free group action. Suppose that $(S^{3},\lambda)$ has infinitely many simple periodic orbits. Then $(S^{3},\lambda)$ has a simple positive hyperbolic orbit. \end{cor} \subsection*{Acknowledgement} The author would like to thank his advisor Professor Kaoru Ono for helpful discussions, and Suguru Ishikawa for a series of discussions. This work was supported by JSPS KAKENHI Grant Number JP21J20300. \section{Preliminaries} For a non-degenerate contact three manifold $(Y,\lambda)$ and $\Gamma \in H_{1}(Y;\mathbb{Z})$, embedded contact homology $\mathrm{ECH}(Y,\lambda,\Gamma)$ is defined. First, we define the chain complex $(\mathrm{ECC}(Y,\lambda,\Gamma),\partial)$. In this paper, we consider ECH over $\mathbb{Z}/2\mathbb{Z}=\mathbb{F}$. \begin{dfn} [{\cite[Definition 1.1]{H1}}]\label{qdef} An orbit set $\alpha=\{(\alpha_{i},m_{i})\}$ is a finite set of pairs of distinct simple periodic orbits $\alpha_{i}$ and positive integers $m_{i}$. If $m_{i}=1$ whenever $\alpha_{i}$ is a hyperbolic orbit, then $\alpha=\{(\alpha_{i},m_{i})\}$ is called an admissible orbit set. \end{dfn} Set $[\alpha]=\sum m_{i}[\alpha_{i}] \in H_{1}(Y)$. For two orbit sets $\alpha=\{(\alpha_{i},m_{i})\}$ and $\beta=\{(\beta_{j},n_{j})\}$ with $[\alpha]=[\beta]$, we define $H_{2}(Y,\alpha,\beta)$ to be the set of relative homology classes of 2-chains $Z$ in $Y$ with $\partial Z =\sum_{i}m_{i} \alpha_{i}-\sum_{j}n_{j}\beta_{j}$. This is an affine space over $H_{2}(Y)$. \begin{dfn}[{\cite[Definition 2.2]{H1}}]\label{representative} Let $Z \in H_{2}(Y;\alpha,\beta)$.
A representative of $Z$ is an immersed oriented compact surface $S$ in $[0,1]\times Y$ such that: \item[1.] $\partial S$ consists of positively oriented (resp. negatively oriented) covers of $\{1\}\times \alpha_{i}$ (resp. $\{0\}\times \beta_{j}$) whose total multiplicity is $m_{i}$ (resp. $n_{j}$). \item[2.] $[\pi (S)]=Z$, where $\pi:[0,1]\times Y \to Y$ denotes the projection. \item[3.] $S$ is embedded in $(0,1)\times Y$, and $S$ is transverse to $\{0,1\}\times Y$. \end{dfn} From now on, we fix a trivialization of $\xi$ over every simple orbit $\gamma$ and denote it by $\tau$. For a non-degenerate Reeb orbit $\gamma$, $\mu_{\tau}(\gamma)$ denotes its Conley-Zehnder index with respect to the trivialization $\tau$ in this paper. If $\gamma$ is hyperbolic (that is, not elliptic), then $\mu_{\tau}(\gamma^{p})=p\mu_{\tau}(\gamma)$ for every positive integer $p$, where $\gamma^{p}$ denotes the $p$-fold cover of $\gamma$ (for example, see \cite[Proposition 2.1]{H1}). \begin{dfn}[{\cite[{\S}8.2]{H1}}]\label{intersection} Let $\alpha_{1}$, $\beta_{1}$, $\alpha_{2}$ and $\beta_{2}$ be orbit sets with $[\alpha_{1}]=[\beta_{1}]$ and $[\alpha_{2}]=[\beta_{2}]$. For a fixed trivialization $\tau$, we can define \begin{equation} Q_{\tau}:H_{2}(Y;\alpha_{1},\beta_{1}) \times H_{2}(Y;\alpha_{2},\beta_{2}) \to \mathbb{Z} \end{equation} by $Q_{\tau}(Z_{1},Z_{2})=-l_{\tau}(S_{1},S_{2})+\#(S_{1}\cap{S_{2}})$, where $S_{1}$, $S_{2}$ are representatives of $Z_{1}\in H_{2}(Y;\alpha_{1},\beta_{1})$ and $Z_{2} \in H_{2}(Y;\alpha_{2},\beta_{2})$ respectively, $\#(S_{1}\cap{S_{2}})$ is their algebraic intersection number, and $l_{\tau}$ is a kind of crossing number (see {\cite[{\S}8.3]{H1}} for details).
\end{dfn} \begin{dfn}[{\cite[Definition 1.5]{H1}}] For $Z\in H_{2}(Y,\alpha,\beta)$, we define the ECH index by \begin{equation} I(\alpha,\beta,Z):=c_{1}(\xi|_{Z},\tau)+Q_{\tau}(Z)+\sum_{i}\sum_{k=1}^{m_{i}}\mu_{\tau}(\alpha_{i}^{k})-\sum_{j}\sum_{k=1}^{n_{j}}\mu_{\tau}(\beta_{j}^{k}). \end{equation} Here, $c_{1}(\xi|_{Z},\tau)$ is a relative Chern number and $Q_{\tau}(Z)=Q_{\tau}(Z,Z)$. Moreover, this is independent of $\tau$ (see \cite{H1} for more details). \end{dfn} \begin{prp}[{\cite[Proposition 1.6]{H1}}]\label{indexbasicprop} The ECH index $I$ has the following properties. \item[1.] For orbit sets $\alpha, \beta, \gamma$ with $[\alpha]=[\beta]=[\gamma]=\Gamma\in H_{1}(Y)$ and $Z\in H_{2}(Y,\alpha,\beta)$, $Z'\in H_{2}(Y,\beta,\gamma)$, \begin{equation}\label{adtiv} I(\alpha,\beta,Z)+I(\beta,\gamma,Z')=I(\alpha,\gamma,Z+Z'). \end{equation} \item[2.] For $Z, Z'\in H_{2}(Y,\alpha,\beta)$, \begin{equation}\label{homimi} I(\alpha,\beta,Z)-I(\alpha,\beta,Z')=\langle c_{1}(\xi)+2\mathrm{PD}(\Gamma),Z-Z'\rangle. \end{equation} \item[3.] If $\alpha$ and $\beta$ are admissible orbit sets, \begin{equation}\label{mod2} I(\alpha,\beta,Z)=\epsilon(\alpha)-\epsilon(\beta) \,\,\,\mathrm{mod}\,\,2. \end{equation} Here, $\epsilon(\alpha)$, $\epsilon(\beta)$ are the numbers of positive hyperbolic orbits in $\alpha$, $\beta$ respectively. \end{prp} For $\Gamma \in H_{1}(Y)$, we define $\mathrm{ECC}(Y,\lambda,\Gamma)$ as the free module over $\mathbb{F}$ generated by the admissible orbit sets $\alpha$ such that $[\alpha]=\Gamma$. That is, \begin{equation} \mathrm{ECC}(Y,\lambda,\Gamma):= \bigoplus_{\alpha:\mathrm{admissible\,\,orbit\,\,set\,\,with\,\,}{[\alpha]=\Gamma}}\mathbb{F}\langle \alpha \rangle. \end{equation} To define the differential $\partial:\mathrm{ECC}(Y,\lambda,\Gamma)\to \mathrm{ECC}(Y,\lambda,\Gamma)$, we pick a generic $\mathbb{R}$-invariant almost complex structure $J$ on $\mathbb{R}\times Y$ which satisfies $J(\frac{d}{ds})=X_{\lambda}$ and $J\xi=\xi$.
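To make Definition \ref{qdef} and property (3) of Proposition \ref{indexbasicprop} concrete before turning to the differential, here is a toy sketch (the encoding of orbits as name/type pairs is purely illustrative, not anything from the ECH literature): admissibility only restricts the multiplicities of hyperbolic orbits, and $\epsilon$ counts positive hyperbolic orbits, which controls the parity of the ECH index.

```python
# Sketch (illustrative encoding): an orbit set is a list of
# ((name, type), multiplicity) pairs.

HYPERBOLIC = {"positive hyperbolic", "negative hyperbolic"}

def is_admissible(orbit_set):
    """m_i = 1 is required whenever alpha_i is hyperbolic (Definition 1.1)."""
    return all(m == 1 for (name, typ), m in orbit_set if typ in HYPERBOLIC)

def eps(orbit_set):
    """epsilon(alpha): number of positive hyperbolic orbits in alpha."""
    return sum(1 for (name, typ), m in orbit_set
               if typ == "positive hyperbolic")

alpha = [(("e1", "elliptic"), 3), (("h1", "positive hyperbolic"), 1)]
beta = [(("h2", "positive hyperbolic"), 2)]   # m = 2 on a hyperbolic orbit
print(is_admissible(alpha), is_admissible(beta))  # -> True False
print(eps(alpha))  # -> 1 (the mod-2 index contribution of alpha)
```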
We consider $J$-holomorphic curves $u:(\Sigma,j)\to (\mathbb{R}\times Y,J)$ where the domain $(\Sigma, j)$ is a punctured compact Riemann surface. Here the domain $\Sigma$ is not necessarily connected. Let $\gamma$ be a (not necessarily simple) Reeb orbit. If a puncture of $u$ is asymptotic to $\mathbb{R}\times \gamma$ as $s\to \infty$, we call it a positive end of $u$ at $\gamma$, and if a puncture of $u$ is asymptotic to $\mathbb{R}\times \gamma$ as $s\to -\infty$, we call it a negative end of $u$ at $\gamma$ (for more details, see \cite{H1}). Let $u:(\Sigma,j)\to (\mathbb{R}\times Y,J)$ and $u':(\Sigma',j')\to (\mathbb{R}\times Y,J)$ be two $J$-holomorphic curves. If there is a biholomorphic map $\phi:(\Sigma,j)\to (\Sigma',j')$ with $u'\circ \phi= u$, we regard $u$ and $u'$ as equivalent. Let $\alpha=\{(\alpha_{i},m_{i})\}$ and $\beta=\{(\beta_{i},n_{i})\}$ be orbit sets. Let $\mathcal{M}^{J}(\alpha,\beta)$ denote the set of somewhere injective $J$-holomorphic curves with positive ends at covers of $\alpha_{i}$ with total covering multiplicity $m_{i}$, negative ends at covers of $\beta_{j}$ with total covering multiplicity $n_{j}$, and no other punctures. Moreover, in $\mathcal{M}^{J}(\alpha,\beta)$, we consider two $J$-holomorphic curves to be equivalent if they represent the same current in $\mathbb{R}\times Y$. For $u \in \mathcal{M}^{J}(\alpha,\beta)$, we naturally have $[u]\in H_{2}(Y;\alpha,\beta)$ and we set $I(u)=I(\alpha,\beta,[u])$. Moreover we define \begin{equation}\label{just} \mathcal{M}_{k}^{J}(\alpha,\beta):=\{\,u\in \mathcal{M}^{J}(\alpha,\beta)\,|\,I(u)=k\,\,\}. \end{equation} With this notation, we can define $\partial_{J}:\mathrm{ECC}(Y,\lambda,\Gamma)\to \mathrm{ECC}(Y,\lambda,\Gamma)$ as follows.
For an admissible orbit set $\alpha$ with $[\alpha]=\Gamma$, we define \begin{equation} \partial_{J} \langle \alpha \rangle=\sum_{\beta:\mathrm{admissible\,\,orbit\,\,set\,\,with\,\,}[\beta]=\Gamma} \# (\mathcal{M}_{1}^{J}(\alpha,\beta)/\mathbb{R})\cdot \langle \beta \rangle. \end{equation} Note that the above counting is well-defined and $\partial_{J} \circ \partial_{J}=0$. The former can be seen from Proposition \ref{ind}, and the latter was proved in \cite{HT1} and \cite{HT2}. Moreover, the homology defined by $\partial_{J}$ does not depend on $J$ (see Theorem \ref{test}, or see \cite{T1}). For $u\in \mathcal{M}^{J}(\alpha,\beta)$, its (Fredholm) index is defined by \begin{equation} \mathrm{ind}(u):=-\chi(u)+2c_{1}(\xi|_{[u]},\tau)+\sum_{k}\mu_{\tau}(\gamma_{k}^{+})-\sum_{l}\mu_{\tau}(\gamma_{l}^{-}). \end{equation} Here $\{\gamma_{k}^{+}\}$ is the set of all (not necessarily simple) positive ends of $u$ and $\{\gamma_{l}^{-}\}$ is that of all negative ends. Note that for generic $J$, if $u$ is connected and somewhere injective, then the moduli space of $J$-holomorphic curves near $u$ is a manifold of dimension $\mathrm{ind}(u)$ (see \cite[Definition 1.3]{HT1}). Let $\alpha=\{(\alpha_{i},m_{i})\}$ and $\beta=\{(\beta_{i},n_{i})\}$. Any $u\in \mathcal{M}^{J}(\alpha,\beta)$ can be uniquely written as $u=u_{0}\cup{u_{1}}$, where $u_{0}$ is the union of all components of $u$ which map to $\mathbb{R}$-invariant cylinders and $u_{1}$ is the rest of $u$. \begin{prp}[{\cite[Proposition 7.15]{HT1}}]\label{ind} Suppose that $J$ is generic and $u=u_{0}\cup{u_{1}}\in \mathcal{M}^{J}(\alpha,\beta)$. Then \item[1.] $I(u)\geq 0$ \item[2.] If $I(u)=0$, then $u_{1}=\emptyset$ \item[3.] If $I(u)=1$, then $u_{1}$ is embedded and $u_{0}\cap{u_{1}}=\emptyset$. Moreover $\mathrm{ind}(u_{1})=1$. \item[4.] If $I(u)=2$ and $\alpha$ and $\beta$ are admissible, then $u_{1}$ is embedded and $u_{0}\cap{u_{1}}=\emptyset$. Moreover $\mathrm{ind}(u_{1})=2$.
\end{prp} If $c_{1}(\xi)+2\mathrm{PD}(\Gamma)$ is torsion, there is a relative $\mathbb{Z}$-grading, and we write \begin{equation} \mathrm{ECH}(Y,\lambda,\Gamma):= \bigoplus_{*:\,\,\mathbb{Z}\mathrm{-grading}}\mathrm{ECH}_{*}(Y,\lambda,\Gamma). \end{equation} Let $Y$ be connected. Then there is a degree $-2$ map $U$, \begin{equation}\label{Umap} U:\mathrm{ECH}_{*}(Y,\lambda,\Gamma) \to \mathrm{ECH}_{*-2}(Y,\lambda,\Gamma). \end{equation} To define this, choose a base point $z\in Y$ which is not on the image of any Reeb orbit and let $J$ be generic. Then define a map \begin{equation} U_{J,z}:\mathrm{ECC}_{*}(Y,\lambda,\Gamma) \to \mathrm{ECC}_{*-2}(Y,\lambda,\Gamma) \end{equation} by \begin{equation} U_{J,z} \langle \alpha \rangle=\sum_{\beta:\mathrm{admissible\,\,orbit\,\,set\,\,with\,\,}[\beta]=\Gamma} \# \{\,u\in \mathcal{M}_{2}^{J}(\alpha,\beta)/\mathbb{R}\,|\,(0,z)\in u\,\}\cdot \langle \beta \rangle. \end{equation} The above map $U_{J,z}$ is a chain map, and we define the $U$ map as the induced map on homology. Under the above assumption, this map is independent of $z$ (for a generic $J$). See \cite[{\S}2.5]{HT3} for more details. Moreover, for the same reason as for $\partial_{J}$, the map $U_{J,z}$ does not depend on $J$ (see Theorem \ref{test}, and see \cite{T1}). In this paper, we choose a suitable generic $J$ as necessary. The next isomorphism is important. \begin{them}[\cite{T1}]\label{test} For each $\Gamma\in H_{1}(Y)$, there is an isomorphism \begin{equation} \mathrm{ECH}_{*}(Y,\lambda,\Gamma) \cong \reallywidecheck{HM}_{*}(-Y,\mathfrak{s}(\xi)+2\mathrm{PD}(\Gamma)) \end{equation} of relatively $\mathbb{Z}/d\mathbb{Z}$-graded abelian groups. Here $d$ is the divisibility of $\mathfrak{s}(\xi)+2\mathrm{PD}(\Gamma)$ in $H_{1}(Y)$ mod torsion and $\mathfrak{s}(\xi)$ is the spin-c structure associated to the oriented 2-plane field as in \cite{KM}.
Moreover, the above isomorphism interchanges the map $U$ in (\ref{Umap}) with the map \begin{equation} U_{\dag}: \reallywidecheck{HM}_{*}(-Y,\mathfrak{s}(\xi)+2\mathrm{PD}(\Gamma)) \longrightarrow \reallywidecheck{HM}_{*-2}(-Y,\mathfrak{s}(\xi)+2\mathrm{PD}(\Gamma)) \end{equation} defined in \cite{KM}. \end{them} Here $\reallywidecheck{HM}_{*}(-Y,\mathfrak{s}(\xi)+2\mathrm{PD}(\Gamma))$ is a version of Seiberg-Witten Floer homology with $\mathbb{Z}/2\mathbb{Z}$ coefficients defined by Kronheimer-Mrowka \cite{KM}. The action of an orbit set $\alpha=\{(\alpha_{i},m_{i})\}$ is defined by \begin{equation} A(\alpha)=\sum m_{i}A(\alpha_{i})=\sum m_{i}\int_{\alpha_{i}}\lambda. \end{equation} Note that if two admissible orbit sets $\alpha=\{(\alpha_{i},m_{i})\}$ and $\beta=\{(\beta_{i},n_{i})\}$ satisfy $A(\alpha)\leq A(\beta)$, then the coefficient of $\beta$ in $\partial \alpha$ is $0$. This is because of the positivity of $J$-holomorphic curves with respect to $d\lambda$ and the fact that $A(\alpha)-A(\beta)$ equals the integral of $d\lambda$ over any $J$-holomorphic punctured curve which is asymptotic to $\alpha$ at $+\infty$ and to $\beta$ at $-\infty$. Suppose that $b_{1}(Y)=0$. In this situation, for any orbit sets $\alpha$ and $\beta$ with $[\alpha]=[\beta]$, $H_{2}(Y;\alpha,\beta)$ consists of exactly one element since $H_{2}(Y)=0$. So we may omit the homology class from the notation of the ECH index $I$; that is, $I(\alpha,\beta)$ simply denotes the ECH index. Furthermore, for an orbit set $\alpha$ with $[\alpha]=0$, we set $I(\alpha):=I(\alpha,\emptyset)$. \begin{prp}\label{liftinglinear} Let $(Y,\lambda)$ be a non-degenerate connected contact three-manifold with $b_{1}(Y)=0$. Let $\rho: \Tilde{Y}\to Y$ be a $p$-fold cover with $b_{1}(\Tilde{Y})=0$ and let $(\Tilde{Y},\Tilde{\lambda})$ be the non-degenerate contact three-manifold induced by the covering map. Suppose that $\alpha$ and $\beta$ are admissible orbit sets in $(Y,\lambda)$ consisting only of hyperbolic orbits.
Then \begin{equation} I(\rho^{*}\alpha,\rho^{*}\beta)=pI(\alpha,\beta), \end{equation} where $\rho^{*}\alpha$ and $\rho^{*}\beta$ are the inverse images of $\alpha$ and $\beta$, and thus admissible orbit sets in $(\Tilde{Y},\Tilde{\lambda})$. \end{prp} \begin{proof}[\bf Proof of Proposition \ref{liftinglinear}] Let $\tau$ be a fixed trivialization of $\xi$ defined over every simple orbit $\gamma$ in $(Y,\lambda)$ and let $\Tilde{\tau}$ be the induced trivialization in $(\Tilde{Y},\Tilde{\lambda})$; see just before Definition \ref{intersection}. For every hyperbolic orbit $\gamma$ in $(Y,\lambda)$, $\mu_{\tau}(\gamma^p)=p\mu_{\tau}(\gamma)$ and so $p\mu_{\tau}(\gamma)=\mu_{\Tilde{\tau}}(\rho^{*}\gamma)$ (on the right hand side, if several orbits appear in $\rho^{*}\gamma$, we add up their Conley-Zehnder indices). Moreover, since the terms $c_{1}(\xi|_{Z},\tau)$ and $Q_{\tau}(Z)$ in the ECH index can be defined by counting certain intersection numbers, the induced quantities $c_{1}(\xi|_{\Tilde{Z}},\Tilde{\tau})$ and $Q_{\Tilde{\tau}}(\Tilde{Z})$ in $(\Tilde{Y},\Tilde{\lambda})$, where $\{\Tilde{Z}\}=H_{2}(\Tilde{Y};\rho^{*}\alpha,\rho^{*}\beta)$, become $p$ times the original ones. Under the assumptions, these properties imply that each term of the ECH index in $(Y,\lambda)$ becomes $p$ times larger in $(\Tilde{Y},\Tilde{\lambda})$. This completes the proof of Proposition \ref{liftinglinear}. \end{proof} \section{Proof of the results} There are isomorphisms as follows: \begin{equation}\label{isomors3} \mathrm{ECH}(S^{3},\lambda,0)=\mathbb{F}[U^{-1},U]/U\mathbb{F}[U] \end{equation} and, for any $\Gamma\in H_{1}(L(p,q))$, \begin{equation} \mathrm{ECH}(L(p,q),\lambda,\Gamma)=\mathbb{F}[U^{-1},U]/U\mathbb{F}[U]. \end{equation} These come from the isomorphisms between ECH, Seiberg-Witten Floer homology and Heegaard Floer homology. See Theorem \ref{test}, \cite{KM}, \cite{T2}, \cite{OZ} and \cite{KLT}. \begin{lem}\label{meces3} Suppose that all simple orbits in $(S^{3}, \lambda)$ are negative hyperbolic.
Then, there is a sequence of admissible orbit sets $\{\alpha_{i}\}_{i=0,1,2,...}$ satisfying the following conditions. \begin{itemize} \item[1.] Every admissible orbit set $\alpha$ appears in $\{\alpha_{i}\}_{i=0,1,2,...}$. \item[2.] $A(\alpha_{i})<A(\alpha_{j})$ if and only if $i<j$. \item[3.] $I(\alpha_{i},\alpha_{j})=2(i-j)$ for any $i,j$. \end{itemize} \end{lem} \begin{proof}[\bf Proof of Lemma \ref{meces3}] The assumption that there is no simple positive hyperbolic orbit implies $\partial= 0$ because of the fourth statement in Proposition \ref{indexbasicprop}. So the ECH is isomorphic to the free module over $\mathbb{F}$ generated by all admissible orbit sets. Moreover, from (\ref{isomors3}), we can see that for every two admissible orbit sets $\alpha$ and $\beta$ with $A(\alpha)>A(\beta)$, $U^{k} \langle \alpha \rangle= \langle \beta \rangle$ for some $k>0$. So for every non-negative even number $2i$, there is exactly one admissible orbit set $\alpha_{i}$ whose ECH index relative to $\emptyset$ is equal to $2i$. Combining these arguments, we obtain Lemma \ref{meces3}. \end{proof} In the same way as before, we also obtain the next lemma. \begin{lem}\label{mecelens} Suppose that all simple orbits in $(L(p,q), \lambda)$ are negative hyperbolic. Then, for any $\Gamma\in H_{1}(L(p,q))$, there is a sequence of admissible orbit sets $\{\alpha_{i}^{\Gamma}\}_{i=0,1,2,...}$ satisfying the following conditions. \begin{itemize} \item[1.] For any $i=0,1,2,...$, $[\alpha_{i}^{\Gamma}]=\Gamma$ in $H_{1}(L(p,q))$. \item[2.] Every admissible orbit set $\alpha$ with $[\alpha]=\Gamma$ appears in $\{\alpha_{i}^{\Gamma}\}_{i=0,1,2,...}$. \item[3.] $A(\alpha_{i}^{\Gamma})<A(\alpha_{j}^{\Gamma})$ if and only if $i<j$. \item[4.] $I(\alpha_{i}^{\Gamma},\alpha_{j}^{\Gamma})=2(i-j)$ for any $i,j$. \end{itemize} \end{lem} \begin{lem}\label{noncontractible} Suppose that all simple orbits in $(L(p,q), \lambda)$ are negative hyperbolic. Then there is no contractible simple orbit.
\end{lem} \begin{proof}[\bf Proof of Lemma \ref{noncontractible}] Let $\rho:(S^{3},\Tilde{\lambda})\to (L(p,q), \lambda)$ be the covering map, where $\Tilde{\lambda}$ is the contact form induced from $\lambda$ by $\rho$. Suppose that there is a contractible simple orbit $\gamma$ in $(L(p,q),\lambda)$. Then the inverse image of $\gamma$ under $\rho$ consists of $p$ simple negative hyperbolic orbits. By symmetry, they all have the same ECH index relative to $\emptyset$. This contradicts Lemma \ref{meces3}. \end{proof} Recall the covering map $\rho:(S^{3},\Tilde{\lambda})\to (L(p,q), \lambda)$. By Lemma \ref{noncontractible}, we can see that there is a one-to-one correspondence between periodic orbits in $(S^{3},\Tilde{\lambda})$ and those in $(L(p,q), \lambda)$ under the assumptions. To simplify notation, we distinguish orbits in $(S^{3},\Tilde{\lambda})$ from those in $(L(p,q), \lambda)$ by adding a tilde. That is, for each orbit $\gamma$ in $(L(p,q), \lambda)$, $\Tilde{\gamma}$ denotes the corresponding orbit in $(S^{3},\Tilde{\lambda})$. We use the same convention for orbit sets; that is, for each orbit set $\alpha=\{(\alpha_{i},m_{i})\}$ over $(L(p,q), \lambda)$, we set $\Tilde{\alpha}=\{(\Tilde{\alpha}_{i},m_{i})\}$. \begin{lem}\label{s3lift} Under the assumptions and notations of Lemma \ref{mecelens}, there is a labelling $\{\Gamma_{0},\,\Gamma_{1},....,\,\Gamma_{p-1}\}=H_{1}(L(p,q))$ satisfying the following conditions. \begin{itemize} \item[1.] $\Gamma_{0}=0$ in $H_{1}(L(p,q))$. \item[2.] If $A(\Tilde{\alpha}_{i}^{\Gamma_{j}})<A(\Tilde{\alpha}_{i'}^{\Gamma_{j'}})$, then $i<i'$ or $j<j'$. \item[3.] For any $i=0,1,2,...$ and $\Gamma_{j} \in \{\Gamma_{0},\,\Gamma_{1},....,\,\Gamma_{p-1}\}$, $\frac{1}{2}I(\Tilde{\alpha}_{i}^{\Gamma_{j}})=j$ in $\mathbb{Z}/p\mathbb{Z}$. \end{itemize}
\end{lem} \begin{proof}[\bf Proof of Lemma \ref{s3lift}] By Proposition \ref{liftinglinear}, for each $\Gamma\in H_{1}(L(p,q))$, the value $\frac{1}{2}I(\Tilde{\alpha}^{\Gamma}_{i})$ in $\mathbb{Z}/p\mathbb{Z}$ is independent of $i$. Moreover, by Lemma \ref{noncontractible}, every admissible orbit set of $(S^{3},\Tilde{\lambda})$ comes from one of $(L(p,q), \lambda)$, and so, in the notation of Lemma \ref{meces3} and Lemma \ref{mecelens}, the set $\{\Tilde{\alpha}_{i}^{\Gamma}\}_{i=0,1,...,\,\,\Gamma\in H_{1}(L(p,q))}$ coincides with $\{ \alpha_{i}\}$. These arguments imply Lemma \ref{s3lift} (see the diagram below). \begin{equation}\label{thediagram} \xymatrix{ \langle \emptyset = \Tilde{\alpha}_{0}^{\Gamma_{0}} \rangle & \langle \Tilde{\alpha}_{0}^{\Gamma_{1}} \rangle\ar[l]_{U}&\langle \Tilde{\alpha}_{0}^{\Gamma_{2}} \rangle\ar[l]_{U} & \ar[l]_{U}&\ar@{.}[l]&\ar[l]_{U}\langle \Tilde{\alpha}_{0}^{\Gamma_{p-1}} \rangle \\ \langle \Tilde{\alpha}_{1}^{\Gamma_{0}} \rangle \ar[urrrrr]^{U} & \langle \Tilde{\alpha}_{1}^{\Gamma_{1}} \rangle\ar[l]_{U}&\langle \Tilde{\alpha}_{1}^{\Gamma_{2}} \rangle\ar[l]_{U} & \ar[l]_{U}&\ar@{.}[l]&\ar[l]_{U}\langle \Tilde{\alpha}_{1}^{\Gamma_{p-1}} \rangle \\\langle \Tilde{\alpha}_{2}^{\Gamma_{0}} \rangle \ar[urrrrr]^{U} & \langle \Tilde{\alpha}_{2}^{\Gamma_{1}} \rangle\ar[l]_{U}&\langle \Tilde{\alpha}_{2}^{\Gamma_{2}} \rangle\ar[l]_{U} & \ar[l]_{U}&\ar@{.}[l] } \end{equation} \end{proof} \begin{lem}\label{isomorphismcyclic} Suppose that all simple orbits in $(L(p,q), \lambda)$ are negative hyperbolic and that $p$ is prime. For $\Gamma_{j}\in H_{1}(L(p,q))$, we set $f(\Gamma_{j}) := \frac{1}{2}I(\Tilde{\alpha}_{i}^{\Gamma_{j}})=j \in \mathbb{Z}/p\mathbb{Z}$ for some $i\geq 0$. Then this map is an isomorphism of cyclic groups. Note that by Lemma \ref{s3lift}, this map is well-defined and bijective from $H_{1}(L(p,q))$ to $\mathbb{Z}/p\mathbb{Z}$.
\end{lem} \begin{proof}[\bf Proof of Lemma \ref{isomorphismcyclic}] Since under the assumption there are infinitely many simple orbits and $|H_{1}(L(p,q))|<\infty$, we can pick $p$ distinct simple periodic orbits $\{\gamma_{1},\,\gamma_{2},....,\gamma_{p}\}$ in $(L(p,q), \lambda)$ with $[\gamma_{1}]=[\gamma_{2}]=...=[\gamma_{p}]=\Gamma$ for some $\Gamma\in H_{1}(L(p,q))$. Since there is no contractible simple orbit (Lemma \ref{noncontractible}), $\Gamma \neq 0$ and so $f(\Gamma)\neq 0$. For $i=1,2,...,p$, let $\Tilde{\gamma}_{i}$ be the orbit in $(S^{3},\Tilde{\lambda})$ corresponding to $\gamma_{i}$ and let $C_{\Tilde{\gamma}_{i}}$ be a representative of $Z_{\Tilde{\gamma}_{i}}$, where $\{Z_{\Tilde{\gamma}_{i}}\}=H_{2}(S^{3};\Tilde{\gamma}_{i},\emptyset)$ (see Definition \ref{representative}). \begin{cla}\label{claimonly} Suppose that $1\leq i,j \leq p$ and $i \neq j$. Then the intersection number $\#([0,1]\times \Tilde{\gamma}_{i}\cap{C_{\Tilde{\gamma}_{j}}})$ in $\mathbb{Z}/p\mathbb{Z}$ does not depend on the choice of $i,j$, where $\#([0,1]\times \Tilde{\gamma}_{i}\cap{C_{\Tilde{\gamma}_{j}}})$ is the algebraic intersection number in $[0,1]\times Y$ (see Definition \ref{intersection}).
\end{cla} \begin{proof}[\bf Proof of Claim \ref{claimonly}] By definition, we have \begin{equation} \frac{1}{2}I(\Tilde{\gamma}_{i}\cup{\Tilde{\gamma}_{j}},\Tilde{\gamma}_{i})=\frac{1}{2}I(\Tilde{\gamma}_{j})+\#([0,1]\times \Tilde{\gamma}_{i}\cap{C_{\Tilde{\gamma}_{j}}}). \end{equation} So in $\mathbb{Z}/p\mathbb{Z}$, \begin{equation} \begin{split} \#([0,1]\times \Tilde{\gamma}_{i}\cap{C_{\Tilde{\gamma}_{j}}})&=\frac{1}{2}I(\Tilde{\gamma}_{i}\cup{\Tilde{\gamma}_{j}},\Tilde{\gamma}_{i})-\frac{1}{2}I(\Tilde{\gamma}_{j})\\ &=\frac{1}{2}I(\Tilde{\gamma}_{i}\cup{\Tilde{\gamma}_{j}})-\frac{1}{2}I(\Tilde{\gamma}_{i})-\frac{1}{2}I(\Tilde{\gamma}_{j}) = f(2\Gamma)-2f(\Gamma). \end{split} \end{equation} This implies that the value $ \#([0,1]\times \Tilde{\gamma}_{i}\cap{C_{\Tilde{\gamma}_{j}}})$ in $\mathbb{Z}/p\mathbb{Z}$ depends only on $f(2\Gamma)$ and $f(\Gamma)$. This completes the proof of Claim \ref{claimonly}. \end{proof} Returning to the proof of Lemma \ref{isomorphismcyclic}, we set $l:= \#([0,1]\times \Tilde{\gamma}_{i}\cap{C_{\Tilde{\gamma}_{j}}}) \in \mathbb{Z}/p\mathbb{Z}$ for $i \neq j$. In the same way as in Claim \ref{claimonly}, for $1\leq n \leq p$, we have \begin{equation} \frac{1}{2}I(\bigcup_{1\leq i \leq n }\Tilde{\gamma}_{i},\bigcup_{1\leq i \leq n-1}\Tilde{\gamma}_{i})=\frac{1}{2}I(\Tilde{\gamma}_{n})+\sum_{1\leq i \leq n-1} \#([0,1]\times \Tilde{\gamma}_{i}\cap{C_{\Tilde{\gamma}_{n}}}) \end{equation} and so \begin{equation} f(n\Gamma)-f((n-1)\Gamma)=f(\Gamma)+(n-1)l \,\,\,\,\,\, \mathrm{in}\,\,\mathbb{Z}/p\mathbb{Z}. \end{equation} Suppose that $l \neq 0$. Since $p$ is prime, there is $1\leq k \leq p$ such that $f(\Gamma)+(k-1)l=0$. This implies that $f(k\Gamma)-f((k-1)\Gamma)=0$. But this contradicts the bijectivity of $f$. So $l=0$ and therefore $f(n\Gamma)=nf(\Gamma)$. Since $f(\Gamma)\neq 0$, we conclude that $f$ is an isomorphism. \end{proof} \begin{lem}\label{minimalandsecond} Suppose that all simple orbits in $(L(p,q), \lambda)$ are negative hyperbolic.
Let $\gamma_{\mathrm{min}}$ and $\gamma_{\mathrm{sec}}$ be the orbits with the smallest and second-smallest actions in $(L(p,q), \lambda)$, respectively. Then \begin{equation}\label{index2minsec} I(\Tilde{\gamma}_\mathrm{min})=I(\Tilde{\gamma}_{\mathrm{sec}},\Tilde{\gamma}_{\mathrm{min}})=2 \end{equation} and moreover \begin{equation}\label{keyindex} 6< I(\Tilde{\gamma}_{\mathrm{min}}\cup{\Tilde{\gamma}_{\mathrm{sec}}}) \leq 2p. \end{equation} \end{lem} \begin{proof}[\bf Proof of Lemma \ref{minimalandsecond}] Consider the diagram (\ref{thediagram}) and Lemma \ref{meces3}. As admissible orbit sets, $\Tilde{\gamma}_{\mathrm{min}}$ and $\Tilde{\gamma}_{\mathrm{sec}}$ correspond to $\Tilde{\alpha}_{0}^{\Gamma_{1}}$ and $\Tilde{\alpha}_{0}^{\Gamma_{2}}$, respectively. This implies (\ref{index2minsec}). Next, we show the inequality (\ref{keyindex}). Consider $\Tilde{\alpha}_{1}^{\Gamma_{0}}$ in the diagram (\ref{thediagram}). By the diagram, $I(\Tilde{\alpha}_{1}^{\Gamma_{0}})=2p$. Moreover, this orbit set comes from $\alpha_{1}^{\Gamma_{0}}$ with $[\alpha_{1}^{\Gamma_{0}}]=0\in H_{1}(L(p,q))$. By Lemma \ref{noncontractible}, $\alpha_{1}^{\Gamma_{0}}$ has to consist of at least two negative hyperbolic orbits. This implies that $A(\Tilde{\gamma}_{\mathrm{min}}\cup{\Tilde{\gamma}_{\mathrm{sec}}})\leq A(\Tilde{\alpha}_{1}^{\Gamma_{0}})$ and so, by Lemma \ref{meces3}, $I(\Tilde{\gamma}_{\mathrm{min}}\cup{\Tilde{\gamma}_{\mathrm{sec}}})\leq I(\Tilde{\alpha}_{1}^{\Gamma_{0}})=2p$. Considering the above argument and Lemma \ref{meces3}, it is enough to show that $I(\Tilde{\gamma}_{\mathrm{min}}\cup{\Tilde{\gamma}_{\mathrm{sec}}}) \neq 6$. We prove this by contradiction. Suppose that $I(\Tilde{\gamma}_{\mathrm{min}}\cup{\Tilde{\gamma}_{\mathrm{sec}}}) = 6$. Since $\Tilde{\gamma}_{\mathrm{sec}}$ corresponds to $\Tilde{\alpha}_{0}^{\Gamma_{2}}$, we have $I(\Tilde{\gamma}_{\mathrm{sec}})=4$ and so $I(\Tilde{\gamma}_{\mathrm{min}}\cup{\Tilde{\gamma}_{\mathrm{sec}}}, \Tilde{\gamma}_{\mathrm{sec}})=2$.
To consider the $U$-map, fix a generic almost complex structure $J$ on $\mathbb{R}\times S^{3}$. Consider the $U$-map relation $U\langle \Tilde{\alpha}_{0}^{\Gamma_{1}}=\Tilde{\gamma}_{\mathrm{min}} \rangle=\langle \emptyset \rangle$. This implies that for each generic point $z\in S^{3}$, there is an embedded $J$-holomorphic curve $C_{z}\in \mathcal{M}^{J}(\Tilde{\gamma}_{\mathrm{min}},\emptyset)$ through $(0,z)\in \mathbb{R}\times S^{3}$. By using this $C_{z}$, we have \begin{equation}\label{rhs} I(\Tilde{\gamma}_{\mathrm{min}}\cup{\Tilde{\gamma}_{\mathrm{sec}}}, \Tilde{\gamma}_{\mathrm{sec}})=I(\mathbb{R}\times \Tilde{\gamma}_{\mathrm{sec}}\cup{C_{z}}). \end{equation} Note that the right hand side of (\ref{rhs}) is the ECH index of the holomorphic curve $\mathbb{R}\times \Tilde{\gamma}_{\mathrm{sec}}\cup{C_{z}}$ (see just before (\ref{just})). Since $ I(\Tilde{\gamma}_{\mathrm{min}}\cup{\Tilde{\gamma}_{\mathrm{sec}}},\Tilde{\gamma}_{\mathrm{sec}})=2$, Proposition \ref{ind} yields $\mathbb{R}\times \Tilde{\gamma}_{\mathrm{sec}}\cap{C_{z}}=\emptyset$. Consider a sequence of holomorphic curves $C_{z}$ as $z \to \Tilde{\gamma}_{\mathrm{sec}}$. By a compactness argument, this sequence has a subsequence converging to a limiting holomorphic curve $C_{\infty}$, which may a priori split into more than one floor. But in this case $C_{\infty}$ cannot split, because the action of the positive end of $C_{\infty}$ is the smallest value $A(\Tilde{\gamma}_{\mathrm{min}})$. This implies that $\mathbb{R}\times \Tilde{\gamma}_{\mathrm{sec}}\cap{C_{\infty}} \neq \emptyset$. Since $I(\Tilde{\gamma}_{\mathrm{min}}\cup{\Tilde{\gamma}_{\mathrm{sec}}}, \Tilde{\gamma}_{\mathrm{sec}})=I(\mathbb{R}\times \Tilde{\gamma}_{\mathrm{sec}}\cup{C_{\infty}}) =2$, the fact that $\mathbb{R}\times \Tilde{\gamma}_{\mathrm{sec}}\cap{C_{\infty}} \neq \emptyset$ contradicts the fourth statement in Proposition \ref{ind}.
Therefore, we have $I(\Tilde{\gamma}_{\mathrm{min}}\cup{\Tilde{\gamma}_{\mathrm{sec}}}) \neq 6$, which completes the proof of Lemma \ref{minimalandsecond}. \end{proof} \begin{proof}[\bf Proof of Theorem \ref{maintheorem}] We may assume that $p$ is prime, because the condition that all simple periodic orbits are negative hyperbolic is preserved under taking odd-fold coverings. By Lemma \ref{isomorphismcyclic} and (\ref{index2minsec}), we have $[\gamma_{\mathrm{sec}}]=2[\gamma_{\mathrm{min}}]$ and so $[\gamma_{\mathrm{min}}\cup{\gamma_{\mathrm{sec}}}]=3[\gamma_{\mathrm{min}}]$ in $H_{1}(L(p,q))$. Since $f$ is an isomorphism, we have $\frac{1}{2}I(\Tilde{\gamma}_{\mathrm{min}}\cup{\Tilde{\gamma}_{\mathrm{sec}}})=3$ in $\mathbb{Z}/p\mathbb{Z}$. But this cannot occur in the range of (\ref{keyindex}): indeed, (\ref{keyindex}) gives $3<\frac{1}{2}I(\Tilde{\gamma}_{\mathrm{min}}\cup{\Tilde{\gamma}_{\mathrm{sec}}})\leq p$, and no integer in $(3,p]$ is congruent to $3$ modulo $p$. This is a contradiction, and we complete the proof of Theorem \ref{maintheorem}. \end{proof} \begin{proof}[\bf Proof of Theorem \ref{z2act}] We prove this by contradiction. By Theorem \ref{elliptic}, we may assume that there is no elliptic orbit, so that all simple orbits are negative hyperbolic. In the same way as in Lemma \ref{meces3}, there is exactly one admissible orbit set $\alpha_{i}$ whose ECH index relative to $\emptyset$ is equal to $2i$. If there were a non-$\mathbb{Z}/2\mathbb{Z}$-invariant orbit $\gamma$, then by symmetry there would be two orbit sets with the same ECH index relative to $\emptyset$, which is a contradiction. So we may assume that all simple orbits are $\mathbb{Z}/2\mathbb{Z}$-invariant. Let $(\mathbb{RP}^3,\lambda')$ be the non-degenerate contact three-manifold obtained as the quotient space of $(S^{3},\lambda)$ and let $\gamma$ be a $\mathbb{Z}/2\mathbb{Z}$-invariant periodic orbit. Then this orbit corresponds to a double covering of a non-contractible orbit $\gamma'$ in $(\mathbb{RP}^3,\lambda')$. This implies that the eigenvalues of the return map of $\gamma$ are the squares of those of $\gamma'$.
This means that the eigenvalues of the return map of $\gamma$ are both positive, and so $\gamma$ is positive hyperbolic. This is a contradiction, and so we complete the proof of Theorem \ref{z2act}. \end{proof} \begin{proof}[\bf Proof of Corollary \ref{allact}] Note that if $(L(p,q), \lambda)$ has a simple positive hyperbolic orbit, then so does its covering space $(S^{3},\Tilde{\lambda})$. Considering a non-trivial cyclic subgroup acting on the contact three-sphere, together with Corollary \ref{periodic} and Theorem \ref{z2act}, we complete the proof of Corollary \ref{allact}. \end{proof}
\section{Introduction} Counting processes constitute a mathematical framework for modeling specific random events within a time series. The nature of the application and the structural features of a given time series may call for sophisticated models whose complexity goes beyond the standard Poisson process. For instance, the modeling of neurons' spikes in Neurosciences (see \textit{e.g.} \cite{Bremaud_Massoulie,Delattre_2016}) or the frequency of claims that may result in a cyber insurance contract (see \cite{Hillairet_Reveillac_Rosenbaum} for a short review of the literature) requires counting processes with stochastic intensity whose event frequency depends on the past values of the system. Hawkes processes, initially introduced in \cite{Hawkes}, have become the paradigm of such processes. The so-called linear Hawkes process is a counting process $H$ (on a filtered probability space) with intensity process $\lambda$ satisfying: \begin{equation} \label{eq:introlambdaHawkes} \lambda_t = \mu + \int_{(0,t)} \Phi(t-s) dH_s, \quad t\geq 0, \end{equation} where the constant $\mu>0$ is the baseline intensity and $\Phi:\mathbb{R}_+ \to \mathbb{R}_+$ models the self-exciting feature of the process. Naturally, conditions on $\Phi$ are required for a well-posed formulation. The term well-posedness is accurate here since, from this formulation, it appears that the pair $(H,\lambda)$ solves a two-dimensional SDE driven by a Poisson measure, as we will make precise below. The so-called Poisson imbedding provides a way to formulate this equation. Let $(\Omega,\mathcal F,\mathbb P)$ be a probability space and $N$ a random Poisson measure on $\mathbb{R}_+^2$ with intensity $d\lambda(t,\theta):=dt d\theta$ the Lebesgue measure on $\mathbb{R}_+^2$.
If $\lambda$ denotes a non-negative process which is predictable with respect to the natural history of $N$ (whose definition will be recalled in Section \ref{section:preliminaries}), then the process $H$ defined as \begin{equation} \label{eq:introimbedding} H_t = \int_{(0,t]\times \mathbb{R}_+} \ind{\theta \leq \lambda_s} N(ds,d\theta), \quad t\geq 0, \end{equation} is a counting process with intensity $\lambda$, that is, $H-\int_0^\cdot \lambda_s ds$ is a martingale. The Poisson imbedding refers to Representation (\ref{eq:introimbedding}) for counting processes. As mentioned, in the case of a linear Hawkes process for instance, the Poisson imbedding representation (\ref{eq:introimbedding}) captures the equation feature of this process. Indeed, combining (\ref{eq:introlambdaHawkes}) and (\ref{eq:introimbedding}) implies that the linear Hawkes process can be seen as a system of weakly coupled SDEs with respect to $N$: $$ \left\lbrace \begin{array}{l} H_t = \int_{(0,t]\times \mathbb{R}_+} \ind{\theta \leq \lambda_s} N(ds,d\theta) \\ \hspace{20em} t\geq 0, \\ \lambda_t = \mu + \int_{(0,t)} \Phi(t-s) \ind{\theta \leq \lambda_s} N(ds,d\theta). \end{array} \right. $$ Under mild conditions on $\Phi$, the second equation (and thus the system) can be proved to be well-posed (see \textit{e.g.} \cite{Bremaud_Massoulie,Costa_etal,Hillairet_Reveillac_Rosenbaum}).\\\\ \noindent This representation also opens the way to a new line of research. Indeed, with this representation at hand, a counting process can be seen as a functional of the two-dimensional Poisson measure $N$, for which stochastic analysis such as the Malliavin calculus is available. Recently, by combining the Malliavin calculus with Stein's method according to the Nourdin-Peccati methodology, quantitative limit theorems for Hawkes functionals have been derived in \cite{torrisi,HHKR,Khabou_Privault_Reveillac}.
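The thinning mechanism behind the Poisson imbedding can be turned into a simulation scheme: candidate atoms $(t,\theta)$ of a dominating Poisson measure are accepted exactly when $\theta \leq \lambda_t$. The following sketch (our own illustration, not part of the paper; the function name and the choice of exponential kernel $\Phi(u)=\alpha e^{-\beta u}$ are assumptions) implements this Ogata-style thinning for the linear Hawkes system above.

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Sample the jump times of a linear Hawkes process on (0, T] by thinning:
    a candidate atom (t, theta) of the dominating Poisson measure is accepted
    iff theta <= lambda_t, mirroring the Poisson-imbedding representation
    H_t = int 1{theta <= lambda_s} N(ds, dtheta).
    Kernel (illustrative choice): Phi(u) = alpha * exp(-beta * u)."""
    rng = random.Random(seed)
    events = []
    t = 0.0
    s = 0.0  # s = sum over past events of alpha * exp(-beta * (t - t_i))
    while True:
        ubound = mu + s                    # lambda is non-increasing until the next jump
        w = rng.expovariate(ubound)        # waiting time of the next candidate atom
        t += w
        if t > T:
            return events
        s *= math.exp(-beta * w)           # decay the excitation up to the candidate time
        theta = rng.uniform(0.0, ubound)   # second coordinate of the candidate atom
        if theta <= mu + s:                # thinning test: theta <= lambda_t
            events.append(t)
            s += alpha                     # self-excitation: lambda jumps by Phi(0) = alpha

# subcritical example: alpha / beta = 0.5 < 1
jumps = simulate_hawkes(mu=1.0, alpha=0.5, beta=1.0, T=50.0)
```

The subcriticality condition $\alpha/\beta<1$ (i.e. $\|\Phi\|_{L^1}<1$) keeps the simulated intensity from exploding, in line with the well-posedness conditions on $\Phi$ mentioned above.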
The specific Malliavin calculus developed in \cite{HHKR} for linear Hawkes processes follows from a Mecke formula provided in \cite{Hillairet_Reveillac_Rosenbaum}, with application to the risk analysis of a class of cyber insurance contracts. Another main ingredient (not exploited so far in this context, up to our knowledge) when dealing with Gaussian and Poisson functionals is the so-called chaotic expansion (also called Wiener-It\^o expansion). Let $t \geq 0$ and let $H_t$ be as in (\ref{eq:introimbedding}). The chaotic expansion of $H_t$ with respect to $N$ reads: \begin{equation} \label{eq:introchaos} H_t = \mathbb E[H_t] + \sum_{j=1}^{+\infty} \frac{1}{j!} \int_{[0,t]\times \mathbb{R}_+} \cdots \int_{[0,t]\times \mathbb{R}_+} f_j(x_1,\ldots,x_{j}) (N(dx_{1})-dx_1) \cdots (N(dx_{j})-dx_j), \end{equation} where $x_i \in \mathbb{R}_+^2$ and $f_j$ is a symmetric function on $(\mathbb{R}_+^2)^j$ defined in terms of the $j$th Malliavin derivative of $H_t$. This expression requires some details on its definition that will be given in Section \ref{section:preliminaries} below; but roughly speaking, it allows one to expand the random variable into iterated integrals with respect to the compensated Poisson measure $N(dx_{j})-dx_j := N(dt_{j},d\theta_j)-dt_{j} d\theta_j$ (with $x_j:=(t_j,\theta_j)$). Such a decomposition proves useful (for example in the context of Brownian SPDEs) provided that the coefficients $f_j$ can be computed or can be characterized by an equation. In the case of a linear Hawkes process, we will show in Section \ref{section:Hawkes} that the coefficients can be computed but are far from explicit.\\\\\noindent In this paper we prove, as Theorem \ref{th:pseudochaoticcounting}, that counting processes satisfy an alternative representation of the chaotic expansion that we name the pseudo-chaotic expansion.
This pseudo-chaotic expansion takes the form \begin{equation} \label{eq:intropseudochaos} H_t = \sum_{j=1}^{+\infty} \frac{1}{j!} \int_{[0,t]\times \mathbb{R}_+} \cdots \int_{[0,t]\times \mathbb{R}_+} c_j(x_1,\ldots,x_{j}) N(dx_{1}) \cdots N(dx_{j}), \end{equation} involving iterated integrals of the counting measure $N$ only (and not of its compensated version). Whereas any square integrable random variable $F$ admits a chaotic expansion of the form (\ref{eq:introchaos}), we characterize in Theorem \ref{th:characpseudo} those random variables for which a pseudo-chaotic expansion of the form (\ref{eq:intropseudochaos}) is valid. As mentioned, any variable $F=H_t$ with $H$ a counting process belongs to this set. \\\\ \noindent In the case where $H$ is a linear Hawkes process, in contradistinction to the coefficients $f_j$ of the classical chaotic expansion (\ref{eq:introchaos}) of $H_t$ at some time $t$, which cannot be computed explicitly, the coefficients $c_j$ of the pseudo-chaotic expansion (\ref{eq:intropseudochaos}) are explicit and given in Theorem \ref{th:explicitHawkes} (see Discussion \ref{discussion:avantageforHawkes}). This then provides a closed-form expression for linear Hawkes processes. Finally, in Section \ref{section:almostHawkes} we study further the structure of linear Hawkes processes by constructing an example of a process in pseudo-chaotic form that satisfies the stochastic intensity equation (\ref{eq:introlambdaHawkes}) but fails to be a counting process (see Theorem \ref{th:almostHawkes} and Discussion \ref{discussion:finale}).\\\\ \noindent The paper is organized as follows. Notations and the description of the Poisson imbedding, together with elements of Malliavin calculus, are presented in Section \ref{section:preliminaries}. The notion of pseudo-chaotic expansion is presented in Section \ref{section:pseudochaotic}. The application to linear Hawkes processes and their explicit representation is given in Section \ref{section:Hawkes}.
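The contrast between the two expansions can be seen on the simplest example (a toy illustration of our own, not taken from the results above): a unit-intensity counting process, for which both expansions are available in closed form.

```latex
% Toy example: \lambda_s \equiv 1, so that H_t = N([0,t]\times[0,1]) is a
% standard Poisson process. Since D_x H_t = \mathbf 1_{\{s\le t,\,\theta\le 1\}}
% is deterministic, all higher Malliavin derivatives vanish and the chaotic
% expansion terminates at j = 1:
\[
  H_t \;=\; \underbrace{t}_{\mathbb{E}[H_t]}
        \;+\; \int_{[0,t]\times \mathbb{R}_+} \mathbf 1_{\{\theta \leq 1\}}\,
              \big(N(dx)-dx\big), \qquad x=(s,\theta).
\]
% The pseudo-chaotic expansion consists of a single iterated integral against
% the non-compensated measure N, with c_1(x) = \mathbf 1_{\{\theta \leq 1\}}
% and c_j = 0 for j \geq 2:
\[
  H_t \;=\; \int_{[0,t]\times \mathbb{R}_+} \mathbf 1_{\{\theta \leq 1\}}\, N(dx).
\]
```

In this degenerate case both expansions are finite; the point of the paper is that for a genuine Hawkes process the $f_j$ are intractable while the $c_j$ remain explicit.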
Finally, Section \ref{section:almostHawkes} is dedicated to the construction of an example of a process in pseudo-chaotic form that satisfies the stochastic intensity equation but fails to be a counting process. \section{Preliminaries and notations} \label{section:preliminaries} \subsection{General conventions and notations} We set $\mathbb{N}^*:=\mathbb{N} \setminus \{0\}$ the set of positive integers. We make use of the following convention: \begin{convention} \label{convention:sums} For $a, b \in \mathbb Z$ with $a > b$, and for any map $\rho : \mathbb Z \to \mathbb{R}$, $$ \prod_{i=a}^b \rho(i) :=1; \quad \sum_{i=a}^b \rho(i) :=0.$$ \end{convention} We set \begin{equation} \label{eq:X} \mathbb X:= \mathbb{R}_+\times \mathbb{R}_+ = \{x=(t,\theta), \; t \in \mathbb{R}_+, \; \theta\in \mathbb{R}_+\}. \end{equation} Throughout this paper we will use the notation $(t,\theta)$ to refer to the first and second coordinates of an element in $\mathbb X$. \begin{notation} \label{notation:ordered} Let $k\in \mathbb N^*$ and $(x_1,\ldots,x_k)=((t_1,\theta_1),\ldots,(t_k,\theta_k))$ in $\mathbb X^k$. We set $(x_{(1)},\ldots,x_{(k)})$ the reordering of $(x_1,\ldots,x_k)$ by the $t$-component, with \; $0 \leq t_{(1)} \leq \cdots \leq t_{(k)} ,$\; and write $x_{(i)}:=(t_{(i)},\theta_{(i)})$. \end{notation} \noindent We simply write $dx:=dt \, d\theta$ for the Lebesgue measure on $\mathbb X$. We also denote by $\mathcal B(\mathbb X)$ the set of Borel subsets of $\mathbb X$. \subsection{Poisson imbedding and elements of Malliavin calculus} Our approach relies on the so-called Poisson imbedding representation, which allows one to represent a counting process with respect to a baseline random Poisson measure on $\mathbb X$.
Most of the elements presented in this section are taken from \cite{Privault_2009,Last2016}.\\ \noindent We define $\Omega$ the space of configurations on $\mathbb X$ as $$ \Omega:=\left\{\omega=\sum_{i=1}^{n} \delta_{x_i}, \; x_i:=(t_{i},\theta_i) \in \mathbb X,\; i=1,\ldots,n,\; 0=t_0 < t_1 < \cdots < t_n, \; \theta_i \in \mathbb{R}_+, \; n\in \mathbb{N}\cup\{+\infty\} \right\}.$$ Each path of a counting process is represented as an element $\omega$ in $\Omega$, which is a $\mathbb N$-valued $\sigma$-finite measure on $\mathbb X =\mathbb{R}_+^2$. Let $\mathcal F$ be the $\sigma$-field associated to the vague topology on $\Omega$. Let $\mathbb P$ be the Poisson measure on $ \Omega$, under which the canonical process $N$ on $\Omega$, given by $$ N([0,t]\times[0,b])(\omega):=\omega([0,t]\times[0,b]), \quad t \geq 0, \; b \in \mathbb{R}_+,$$ is a Poisson random measure with intensity one ($N([0,t]\times[0,b])$ is a Poisson random variable with parameter $ b t$ for any $(t,b) \in \mathbb X$). We set $\mathbb F^N:=({\cal F}_t^N)_{t\geq 0}$ the natural history of $N$, that is, $\mathcal{F}_t^N:=\sigma\{N( \mathcal T \times B), \; \mathcal T \in \mathcal{B}([0,t]), \; B \in \mathcal{B}(\mathbb{R}_+)\}$. The expectation with respect to $\mathbb P$ is denoted by $\mathbb E[\cdot]$. We also set $\mathcal{F}_\infty^N:=\lim_{t \to +\infty} \mathcal{F}_t^N$. \subsubsection{Add-points operators and the Malliavin derivative} We introduce some elements of Malliavin calculus on the Poisson space.\\ \noindent For $n\in \mathbb N^*$, a map $f:\mathbb X^n \to \mathbb R$ is symmetric if for any permutation $\sigma$ on $\{1,\ldots,n\}$ and for any $(x_1,\ldots,x_n) \in \mathbb X^n$, $f(x_1,\ldots,x_n) = f(x_{\sigma(1)},\ldots,x_{\sigma(n)})$.
We set: $$ L^0(\Omega):=\left\{ F:\Omega \to \mathbb{R}, \; \mathcal{F}_\infty^N-\textrm{measurable}\right\},$$ $$ L^2(\Omega):=\left\{ F \in L^0(\Omega), \; \mathbb E[|F|^2] <+\infty\right\}.$$ For $j\in \mathbb{N}^*$, let \begin{equation} \label{definition:L2j} L^2(\mathbb X^j) := \left\{f:\mathbb{X}^j \to \mathbb{R}, \; \int_{\mathbb{X}^j} |f(x_1,\cdots,x_j)|^2 dx_1 \cdots dx_j <+\infty\right\}, \end{equation} and \begin{equation} \label{definition:symm2} L^2_s(\mathbb X^j) := \left\{f \textrm{ symmetric and } f \in L^2(\mathbb X^j) \right\} \end{equation} the set of symmetric square integrable functions $f$ on $\mathbb{X}^j$. \\\\ \noindent For $h \in L^2(\mathbb X)$ and $j\geq 1$ we set $h^{\otimes j} \in L_s^2(\mathbb X^j)$ defined as: \begin{equation} \label{eq:otimes} h^{\otimes j}(x_1,\ldots,x_j) := \prod_{i=1}^j h(x_i), \quad (x_1,\ldots,x_j)\in \mathbb X^j. \end{equation} \noindent The main ingredients we will make use of are the add-points operators on the Poisson space $\Omega$.
\begin{definition}$[$Add-points operators$]$\label{definitin:shifts} \begin{itemize} \item[(i)] For $k$ in $\mathbb N^*$, and any subset of $\mathbb X$ of cardinality $k$ denoted $\{x_i, \; i\in \{1,\ldots,k\}\} \subset \mathbb X$, we set the measurable mapping: \begin{eqnarray*} \varepsilon_{(x_1,\ldots,x_k)}^{+,k} : \Omega & \longrightarrow & \Omega \\ \omega & \longmapsto & \omega + \sum_{i=1}^k \delta_{x_i}; \end{eqnarray*} with the convention that given a representation of $\omega$ as $\omega=\sum_{i=1}^{n} \delta_{y_i}$ (for some $n\in \mathbb N^*$, $y_i \in \mathbb X$), $\omega + \sum_{i=1}^k \delta_{x_i}$ is understood as follows\footnote{Note that given fixed atoms $(x_1,\ldots,x_k)$, as $\mathbb P$ is the Poisson measure on $\Omega$, with $\mathbb P$-probability one, the marks $x_i$ do not belong to the representation of $\omega$.}: \begin{equation} \label{eq:addjumpsum} \omega + \sum_{i=1}^k \delta_{x_i} := \sum_{i=1}^{n} \delta_{y_i} + \sum_{i=1}^k \delta_{x_i} \ind{x_i \notin \{y_1,\ldots,y_n\}}. \end{equation} \item[(ii)] When $k=1$ we simply write $\varepsilon_{x_1}^{+}:=\varepsilon_{x_1}^{+,1}$. \end{itemize} \end{definition} \noindent We now define the Malliavin derivative operator. \begin{definition} \label{definition:Dn} For $F$ in $L^2(\Omega)$, $n\in \mathbb N^*$, and $(x_1,\ldots,x_n) \in \mathbb X^n$, we set \begin{equation} \label{eq:Dn} D_{(x_1,\ldots,x_n)}^n F:= F\circ \varepsilon_{(x_1,\ldots,x_n)}^{+,n} - F. \end{equation} When $n=1$, we write $D_x F := D_x^1 F$, which is the difference operator (also called add-one cost operator\footnote{see \cite[p.~5]{Last2016}}). Note that with this definition, for any $\omega$ in $\Omega$, the mapping $$ (x_1,\ldots,x_n) \mapsto D_{(x_1,\ldots,x_n)}^n F (\omega) $$ is symmetric and belongs to $L^2_s(\mathbb X^n)$ defined in (\ref{definition:symm2}). \end{definition} \noindent This operator coincides with the iterated Malliavin derivative, as we now recall.
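For intuition, the add-points operator and the difference operator $D_x F = F\circ \varepsilon_{x}^{+} - F$ can be evaluated exactly on finite configurations. The following sketch is our own illustrative code (all names are ours), taking $F = N([0,2]\times[0,1])$ as a test functional and checking the symmetry of the iterated derivative in the added atoms:

```python
def eps_plus(omega, *atoms):
    """Add-points operator: return the configuration omega (a list of
    points of X = R_+ x R_+) augmented with the given atoms, skipping
    atoms already present (convention of the add-points definition)."""
    return omega + [x for x in atoms if x not in omega]

def D(F, x):
    """Difference (add-one-cost) operator: omega |-> F(eps_x^+ omega) - F(omega)."""
    return lambda omega: F(eps_plus(omega, x)) - F(omega)

# Test functional: F = N([0,2] x [0,1]), the number of atoms in a rectangle.
F = lambda omega: sum(1 for (t, th) in omega if t <= 2.0 and th <= 1.0)

omega = [(0.5, 0.3), (1.5, 2.0), (3.0, 0.2)]   # a finite configuration
x, y = (1.0, 0.5), (4.0, 0.9)                  # x in the rectangle, y outside

print(D(F, x)(omega))   # adding x creates one extra atom in the rectangle: 1
print(D(F, y)(omega))   # y lies outside the rectangle: 0
# The iterated derivative D_y D_x F is symmetric in (x, y); for this linear
# functional of the configuration it vanishes identically.
print(D(D(F, x), y)(omega), D(D(F, y), x)(omega))
```

Since $F$ here is linear in the configuration, all second-order differences vanish; a nonlinear functional such as $N(A)^2$ would produce non-zero iterated derivatives, still symmetric in the added atoms.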
\begin{prop}(See \textit{e.g.} \cite[Relation (15)]{Last2016}) \label{prop:Dnaltenative} Let $F$ in $L^2(\Omega)$, $n\in \mathbb N^*$, and $(x_1,\ldots,x_n) \in \mathbb X^n$. We set the $n$th iterated Malliavin derivative operator $D^n$ as $$ D^n F = D (D^{n-1} F), \quad n\geq 1; \quad D^0 F:=F.$$ It holds that $$ D^n_{(x_1,\ldots,x_n)} F (\omega)= \sum_{J\subset \{1,\cdots,n\}} (-1)^{n-|J|} F\left(\omega + \sum_{j\in J} \delta_{x_j}\right), \quad \textrm{ for a.e. } \omega \in \Omega,$$ where the sum runs over all subsets $J$ of $\{1,\cdots,n\}$ and $|J|$ denotes the cardinality of $J$. \end{prop} \begin{remark} Note that $\omega + \sum_{j\in J} \delta_{x_j}$ is understood according to (\ref{eq:addjumpsum}). \end{remark} \subsubsection{Iterated integrals and the chaotic expansion} The decompositions we are going to deal with take the form of iterated stochastic integrals, whose definition is made precise in this section. \begin{notation} \label{notation:simplex} For $j\in \mathbb N^*$ we set: $$\Delta_j:=\left\{(x_1,\cdots,x_j) \in \mathbb X^j, \; x_i \neq x_k, \; \forall i\neq k \in \{1,\cdots,j\}\right\}.$$ \end{notation} \begin{definition} \label{definition:interated} Let $j \in \mathbb{N}^*$ and $f_j$ an element of $L_s^2(\mathbb X^j)$. We set $I_j(f_j)$ the $j$th iterated integral of $f_j$ against the compensated Poisson measure, defined as: \begin{align} \label{eq:In} & \hspace{-2em} I_j(f_j) \nonumber \\ &\hspace{-2em}:= \int_{\Delta_j} f_j(x_1,\ldots,x_{j}) (N(dx_{1})-dx_1) \cdots (N(dx_j)-dx_j) \nonumber \\ &\hspace{-2em}=j! \int_{\mathbb X} \int_{[0,t_{j})\times \mathbb{R}_+} \cdots \int_{[0,t_{2})\times \mathbb{R}_+} f_j(x_1,\ldots,x_{j}) (N(dx_{1})-dx_1) \cdots (N(dx_j)-dx_j) \nonumber \\ &\hspace{-2em}=j! \int_{\mathbb X} \int_{[0,t_{j})\times \mathbb{R}_+} \cdots \int_{[0,t_{2})\times \mathbb{R}_+} f_j((t_1,\theta_1),\ldots,(t_j,\theta_j)) (N(d t_1,d\theta_1)-dt_1 d\theta_1) \cdots (N(d t_j,d\theta_j)-dt_j d\theta_j) \end{align} where we recall the notation $x_i=(t_i,\theta_i)$ and $dx_i=dt_i \, d\theta_i$; the outermost integral bears on $x_j=(t_j,\theta_j)$ and the innermost one on $x_1=(t_1,\theta_1)$. Recall that all the integrals are defined pathwise. \end{definition} \noindent These iterated integrals naturally appear in the chaotic expansion recalled below. \begin{theorem}[See \textit{e.g.} Theorem 2 in \cite{Last2016}] \label{th:chaoticgeneral} Let $F$ in $L^2(\Omega)$. Then $$ F = \mathbb E[F] + \sum_{j=1}^{+\infty} \frac{1}{j!} I_j(f_j^F),$$ where the convergence of the series holds in $L^2(\Omega,\mathbb P)$ and where the coefficients $f_j^F$ are the elements of $L_s^2(\mathbb X^j)$ (see (\ref{definition:symm2})) given as \begin{eqnarray*} f_j^F : \mathbb X^j & \longrightarrow & \mathbb{R} \\ (x_1,\ldots,x_j) & \longmapsto & \mathbb E\left[D_{(x_1,\cdots,x_j)}^j F\right]. \end{eqnarray*} In addition the decomposition is unique in the sense that: if there exist elements $(g_j)_{j\geq 1}$ with $g_j \in L_s^2(\mathbb X^j)$ such that $$ F = \mathbb E[F] + \sum_{j=1}^{+\infty} \frac{1}{j!} I_j(g_j),$$ then $g_j = f_j^F, \; dx-a.e., \; \forall j\geq 1.$ \end{theorem} \noindent This decomposition is similar to the Wiener-It\^o decomposition on Gaussian spaces. We conclude this section by recalling the link between the iterated Malliavin derivative and the iterated integrals. \begin{theorem} \label{theorem:generalizedIPP} Let $j\in \mathbb N^*$, $g_j$ in $L^2_s(\mathbb X^j)$ and $F$ in $L^2(\Omega)$. Then: $$ \mathbb E\left[\int_{\mathbb X^j} g_j(x_1,\ldots,x_j) \, D_{(x_1,\ldots,x_j) }^j F dx_1\ldots dx_j \right] = \mathbb E\left[F I_j(g_j)\right].$$ \end{theorem} \begin{proof} This result is standard; however, we could not find a precise reference fitting our framework, so we provide a sketch of the proof.
To the Malliavin derivative one can associate its dual operator, named the divergence operator $\delta$, defined on a subset of the measurable elements $u:\mathbb X\times \Omega \to \mathbb{R}$. More precisely, for any such $u$ in $\textrm{Dom(}\delta\textrm{)}$ (the domain of the operator; see for instance \cite{Nualart_Vives_90,Privault_2009,Last2016}), $\delta(u)$ denotes the unique element in $L^2(\Omega)$ such that: \begin{equation} \label{eq:IPPgeneralized} \mathbb E[F \delta(u)] = \int_{\mathbb X}\mathbb E[u(x) D_x F] dx, \quad \forall F \in L^2(\Omega). \end{equation} By uniqueness, $\delta(u)$ coincides with the It\^o stochastic integral when $u$ is a predictable process, which in turn equals $I_1(u)$ when $u$ is deterministic. Similarly, the iterated divergence of order $j$, denoted $\delta^j$, can be defined as the dual operator of the $j$th Malliavin derivative $D^j$. In case of a deterministic element $g_j$, $I_j(g_j) = \delta^j(g_j)$. To see this, note that for any $j \geq 2$ and any $g \in L^2_s(\mathbb X^j)$ it holds that $$ I_j(g) = \delta(I_{j-1}(g(\cdot,\bullet))), \quad \cdot \in \mathbb X, \; \bullet \in \mathbb X^{j-1}, \; \textrm{ in } L^2(\Omega).$$ Using then the Malliavin integration by parts formula (\ref{eq:IPPgeneralized}) one gets the result by induction. \end{proof} \noindent We also recall the well-known relation between the Malliavin derivatives and the iterated integrals, which can be found for example in \cite{Last2016,Privault_2009}. \begin{prop} \label{prop:derinIj} \begin{itemize} \item[(i)] Let $j\in \mathbb N^*$, $k\in \mathbb N^*$, with $k\leq j$ and $h \in L^2(\mathbb X)$. Then: $$ D_{(x_1,\cdots,x_k)}^k I_j(h^{\otimes j}) = \frac{j!}{(j-k)!} I_{j-k}(h^{\otimes (j-k)}) \prod_{i=1}^k h(x_i), \quad \forall (x_1,\cdots,x_k) \in \mathbb X^k.$$ \item[(ii)] In addition, for $k>j$, $$ D_{(x_1,\cdots,x_k)}^k I_j(h^{\otimes j}) =0, \quad \forall (x_1,\cdots,x_k) \in \mathbb X^k.
$$ \end{itemize} \end{prop} \section{Notion of pseudo-chaotic expansion} \label{section:pseudochaotic} We now present an alternative decomposition that we name the pseudo-chaotic expansion. Recall that according to Theorem \ref{th:chaoticgeneral}, any $F$ in $L^2(\Omega)$ admits a chaotic expansion: $$ F = \mathbb E[F] + \sum_{j=1}^{+\infty} \frac{1}{j!} I_j(f_j^F),$$ with $f_j^F(x_1,\cdots,x_j)=\mathbb E\left[D_{(x_1,\cdots,x_j)}^j F\right]$.\\\\ \noindent For technical reasons we will also consider the same property for a baseline Poisson measure on a given bounded subset of $\mathbb X$. \begin{definition} \label{definition:truncatedstuff} For $(T,M)\in \mathbb X$ we set: \begin{itemize} \item[(i)] $ \Delta^{T,M}_j:=\left\{(x_1,\cdots,x_j) \in ([0,T]\times[0,M])^j, \; x_i \neq x_k, \; \forall i\neq k \in \{1,\cdots,j\}\right\},$ \item[(ii)] $N^{T,M}$ the truncated Poisson measure defined as: $$ N^{T,M}(A):=\int_{A} \ind{[0,T]\times[0,M]}(x) N(dx), \quad A \in \mathcal B(\mathbb X). $$ \item[(iii)] $L^{2,T,M}(\Omega)$ the set of random variables $F$ in $L^2(\Omega)$ such that there exists $(f_j)_{j\geq 1}$ with $f_j \in L_s^2(([0,T]\times[0,M])^j)$ such that $$F= \mathbb E[F] + \sum_{j=1}^{+\infty} \frac{1}{j!} I_j(f_j),$$ that is the set of random variables admitting the chaotic expansion with $N$ replaced by $N^{T,M}$. \end{itemize} \end{definition} \begin{definition}$[$Pseudo-chaotic expansion$]$ \label{eq:pseudo} \begin{enumerate} \item A random variable $F$ in $L^2(\Omega)$ is said to have a pseudo-chaotic expansion with respect to the counting process $N$ if there exists $(g_j)_{j \geq 1}$, $g_j \in L^2_s(\mathbb X^j)$ for all $j\in \mathbb N^*$, such that: \begin{equation} \label{eq:definitionpseudochaotic} F =\sum_{j=1}^{+\infty} \frac{1}{j!} \int_{\mathbb X^j} g_j(x_1,\ldots,x_j) N(dx_1) \cdots N(dx_j), \end{equation} where the series converges in $L^2(\Omega)$. \item Fix $(T,M) \in \mathbb X$.
A random variable $F$ in $L^{2,T,M}(\Omega)$ is said to have a pseudo-chaotic expansion with respect to the truncated measure $N^{T,M}$ if there exists $(g_j)_{j\geq 1}$, $g_j \in L^2_s(([0,T]\times[0,M])^j)$ for all $j\in \mathbb N^*$, such that: \begin{align} \label{eq:psueudochaoticTM} F &=\sum_{j=1}^{+\infty} \frac{1}{j!} \int_{\mathbb X^j} g_j(x_1,\ldots,x_j) N^{T,M}(dx_1) \cdots N^{T,M}(dx_j)\nonumber \\ &=\sum_{j=1}^{+\infty} \frac{1}{j!} \int_{([0,T]\times[0,M])^j} g_j(x_1,\ldots,x_j) N(dx_1) \cdots N(dx_j) \end{align} where the series converges in $L^2(\Omega)$. \end{enumerate} \end{definition} \begin{remarks}\mbox{} \begin{itemize} \item[-] Recall that with the notations above (and Notation \ref{notation:simplex}), the symmetry of the functions $g_j$ entails that \begin{align*} & \int_{\mathbb X^j} g_j(x_1,\ldots,x_j) N(dx_1) \cdots N(dx_j) \\ &= \int_{\Delta_j} g_j(x_1,\ldots,x_j) N(dx_1) \cdots N(dx_j) \\ &= \int_{\Delta_j} g_j((t_1,\theta_1),\ldots,(t_j,\theta_j)) N(dt_1,d\theta_1) \cdots N(dt_j,d\theta_j) \\ &= j! \int_{\mathbb X} \int_{[0,t_{j})\times \mathbb{R}_+} \cdots \int_{[0,t_{2})\times \mathbb{R}_+} g_j((t_1,\theta_1),\ldots,(t_j,\theta_j)) N(dt_1,d\theta_1) \cdots N(dt_j,d\theta_j).
\end{align*} \item[-] Note that in each term $\int_{\mathbb X^j} g_j(x_1,\ldots,x_j) N(dx_1) \cdots N(dx_j)$ in (\ref{eq:definitionpseudochaotic}), the multiple integration coincides with the one with respect to the so-called factorial measures presented in \cite[Appendix]{Last2016}.\\ \end{itemize} \end{remarks} \begin{definition} \label{eq:setofpseudo} We set $$\mathcal P:=\{F \in L^2(\Omega), \textrm{ which admits a pseudo-chaotic expansion with respect to }N\},$$ and for $(T,M) \in \mathbb X$, $$\mathcal P^{T,M}:=\{F \in L^2(\Omega), \textrm{ which admits a pseudo-chaotic expansion with respect to }N^{T,M}\}.$$ \end{definition} \noindent Before studying and characterizing those random variables which admit a pseudo-chaotic expansion we need some preliminary results, collected in the section below. \subsection{Some preliminary results} \begin{definitionprop} \label{definition:truncatedstuffbis} Let $(T,M)\in \mathbb X$. \begin{itemize} \item[(i)] Define the random variable $L^{T,M}$ on $\Omega$ by \begin{equation} \label{eq:L} L^{T,M}(\omega) := \exp\left(MT\right) \ind{N([0,T]\times[0,M])(\omega)=0} = \exp\left(MT\right) \ind{\omega([0,T]\times[0,M])=0}, \quad \omega \in \Omega. \end{equation} It holds that $$ L^{T,M} = 1 +\sum_{j=1}^{+\infty} \frac{1}{j!} I_j((-\ind{[0,T]\times[0,M]})^{\otimes j}),$$ and \begin{equation} \label{eq:derivL} D_{(x_1,\cdots,x_j)}^j L^{T,M} = L^{T,M} \prod_{i=1}^j (-\ind{[0,T]\times [0,M]}(x_i)), \quad \forall (x_1,\cdots,x_j) \in \mathbb X^j. \end{equation} \item[(ii)] $\mathbb Q^{T,M}$ defined on $(\Omega,\mathcal F_T^N)$ by $\frac{ d \mathbb Q^{T,M}}{d\mathbb P} := L^{T,M}$ is a probability measure.\\\\ \noindent Since the support of $\mathbb Q^{T,M}$ is contained in $\{N([0,T]\times[0,M])=0\}$, we name $\mathbb Q^{T,M}$ the vanishing Poisson measure: it brings the intensity of $N$ to $0$ on the rectangle $[0,T]\times[0,M]$, and $$ \mathbb E^{\mathbb Q^{T,M}}[N^{T,M}(A)] = 0, \quad \forall A \in \mathcal B(\mathbb X).
$$ \end{itemize} \end{definitionprop} \begin{proof} Set $$ L= 1 +\sum_{j=1}^{+\infty} \frac{1}{j!} I_j((-\ind{[0,T]\times[0,M]})^{\otimes j}). $$ Following \cite[Proposition 6.3.1]{Privault_2009}, this expression is the chaotic expansion of the stochastic exponential at time $T$ of the deterministic function $x\mapsto -\ind{[0,T]\times[0,M]}(x) $ against the compensated Poisson measure, that is \begin{align*} L &= \exp\left(\int_{\mathbb X} -\ind{[0,T]\times [0,M]}(x) \tilde{N}(dx)\right) \prod_{x, \; N(\{x\})=1} \left[(1-\ind{[0,T]\times [0,M]}(x)) \exp\left(\ind{[0,T]\times [0,M]}(x)\right)\right] \\ &= \exp\left(M T \right) \ind{N([0,T]\times[0,M])=0}\\ &= L^{T,M}. \end{align*} In addition, by definition of the Malliavin derivative $D$ and the exponential structure of $L$ we have that $$D_{x_1} L = -L \ind{[0,T]\times [0,M]}(x_1).$$ Relation (\ref{eq:derivL}) then results from the fact that $-\ind{[0,T]\times [0,M]}$ is deterministic and that $D^j = D^{j-1} D$.\\ \noindent Finally, as $\mathbb E[L^{T,M}] = 1$, the measure $\mathbb Q^{T,M}$ in (ii) is well defined and is a probability measure (not equivalent to $\mathbb P$). \end{proof} \begin{remark} Our analysis relies on the quantity $L^{T,M}$, or equivalently on the vanishing Poisson measure $\mathbb Q^{T,M}$, which is properly defined only on bounded subsets of $\mathbb X$; this explains why we derive our results for the truncated Poisson measures $N^{T,M}$ and not for $N$. \end{remark} \begin{prop} \label{prop:degatsQTM} Let $(T,M)\in \mathbb X$ and let $F$ be of the form (\ref{eq:psueudochaoticTM}) (that is, $F \in \mathcal P^{T,M}$).
Then $$ \mathbb E^{\mathbb Q^{T,M}}\left[F\right] = \mathbb E\left[L^{T,M} F\right] =0.$$ \end{prop} \begin{proof} The result is an immediate consequence of the definition of $\mathbb Q^{T,M}$, which is supported on the set $\{N([0,T]\times[0,M])=0\}$, and of the form of $F$ as a sum (\ref{eq:psueudochaoticTM}) involving only integrals against $N$ on $[0,T]\times[0,M]$. \end{proof} \noindent Our first main result, which motivates the definition of the notion of pseudo-chaotic expansion, is the theorem below. \begin{theorem} \label{th:countingexpectation} Let $T,M>0$ and $F$ in $L^{2,T,M}(\Omega)$ (see (iii) of Definition \ref{definition:truncatedstuff}). Then $$ \sum_{j=1}^{+\infty} \frac{(-1)^j}{j!} \int_{\mathbb X^j} \mathbb E\left[D_{(x_1,\cdots,x_j)}^j F\right] dx_1 \cdots dx_j =\mathbb E\left[F(L^{T,M}-1)\right].$$ \end{theorem} \begin{proof} Since $F \in L^{2,T,M}(\Omega)$, the coefficients $\mathbb E\left[D_{(x_1,\cdots,x_j)}^j F\right]$ vanish $dx$-a.e. outside $([0,T]\times[0,M])^j$; using the Malliavin integration by parts it then holds that: \begin{align*} &\sum_{j=1}^{+\infty} \frac{(-1)^j}{j!} \int_{\mathbb X^j} \mathbb E\left[D_{(x_1,\cdots,x_j)}^j F\right] dx_1 \cdots dx_j \\ &=\sum_{j=1}^{+\infty} \frac{(-1)^j}{j!} \int_{([0,T]\times [0,M])^j} \mathbb E\left[D_{(x_1,\cdots,x_j)}^j F\right] dx_1 \cdots dx_j \\ &=\sum_{j=1}^{+\infty} \frac{(-1)^j}{j!} \mathbb E\left[ \int_{([0,T]\times [0,M])^j} D_{(x_1,\cdots,x_j)}^j F \; dx_1 \cdots dx_j\right]\\ &=\sum_{j=1}^{+\infty} \frac{(-1)^j}{j!} \mathbb E\left[ \int_{\mathbb X^j} \prod_{i=1}^j \ind{[0,T]\times [0,M]}(x_i) D_{(x_1,\cdots,x_j)}^j F \; dx_1 \cdots dx_j\right]\\ &=\sum_{j=1}^{+\infty} \frac{1}{j!} \mathbb E\left[\int_{\mathbb X^j} \prod_{i=1}^j (-\ind{[0,T]\times [0,M]}(x_i)) D_{(x_1,\cdots,x_j)}^j F dx_1 \cdots dx_j\right]\\ &=\sum_{j=1}^{+\infty} \frac{1}{j!} \mathbb E\left[\int_{\mathbb X^j} (-\ind{[0,T]\times [0,M]})^{\otimes j}(x_1,\cdots,x_j) D_{(x_1,\cdots,x_j)}^j F dx_1 \cdots dx_j\right]\\ &=\sum_{j=1}^{+\infty} \frac{1}{j!} \mathbb E\left[F \; I_j\left((-\ind{[0,T]\times [0,M]})^{\otimes j}\right)\right], \quad \textrm{ by 
Theorem \ref{theorem:generalizedIPP}}\\ &=\mathbb E\left[F (L^{T,M}-1)\right]. \end{align*} \end{proof} \noindent The previous result is of particular interest when $F=H_T$ for $H$ a counting process and $T>0$. We now make the definition of such processes precise. \begin{definition}$[$Counting process with bounded intensity by Poisson imbedding$]$\\ Let $\lambda$ be an $\mathbb F^N$-predictable process such that $$\exists M>0, \textrm{ such that }\; \lambda_t \leq M, \quad \forall t\geq 0, \; \mathbb P-a.s.$$ The process $H$ defined below is a counting process with intensity $\lambda$: \begin{equation} \label{eq:counting} H_t = \int_{(0,t]\times \mathbb{R}_+} \ind{\theta \leq \lambda_s} N(ds,d\theta) = \int_{(0,t]\times [0,M]} \ind{\theta \leq \lambda_s} N(ds,d\theta), \quad t\geq 0. \end{equation} \\\\\noindent Using the chaotic expansion we have for any $T>0$ that: \begin{equation} \label{eq:chaostemp} H_T = \mathbb E[H_T] + \sum_{j\geq 1} \frac{1}{j!} I_j(f_j^{H_T}), \quad \textrm{ with } \quad f_j^{H_T}(x_1,\cdots,x_j) = \mathbb E\left[D_{(x_1,\cdots,x_j)}^j H_T\right], \; x_i \in [0,T] \times [0,M]. \end{equation} \end{definition} \noindent We now apply Theorem \ref{th:countingexpectation} to a counting process. \begin{corollary} \label{cor:expectcounting} Let $T>0$ and $H$ a counting process whose intensity $\lambda$ is bounded by some $M>0$, so that $H_T$ is given by (\ref{eq:counting}). Then \begin{align*} &\sum_{j=1}^{+\infty} \frac{(-1)^j}{j!} \int_{\mathbb X^j} f_j^{H_T}(x_1,\cdots,x_j) dx_1 \cdots dx_j \\ &= \sum_{j=1}^{+\infty} \frac{(-1)^j}{j!} \int_{\mathbb X^j} \mathbb E\left[D_{(x_1,\cdots,x_j)}^j H_T\right] dx_1 \cdots dx_j \\ &=-\mathbb E[H_T].
\end{align*} \end{corollary} \begin{proof} By Theorem \ref{th:countingexpectation}, \begin{align*} &\sum_{j=1}^{+\infty} \frac{(-1)^j}{j!} \int_{([0,T]\times[0,M])^j} f_j^{H_T}(x_1,\cdots,x_j) dx_1 \cdots dx_j \\ &= \sum_{j=1}^{+\infty} \frac{(-1)^j}{j!} \int_{([0,T]\times[0,M])^j} \mathbb E\left[D_{(x_1,\cdots,x_j)}^j H_T\right] dx_1 \cdots dx_j \\ &=\mathbb E\left[H_T (L^{T,M}-1)\right]. \end{align*} Proposition \ref{prop:degatsQTM} then entails that $$\mathbb E\left[H_T L^{T,M}\right]=\exp(MT) \mathbb E\left[H_T \ind{N([0,T]\times[0,M])=0}\right]=0,$$ which concludes the proof. \end{proof} \begin{remark} \label{rk:core} The previous result is at the core of our analysis. It means that for a counting process with bounded intensity, the only term involving only Lebesgue integrals ($dx$) in Expansion (\ref{eq:chaostemp}) cancels with $\mathbb E[H_T]$. In other words, all the terms in Expansion (\ref{eq:chaostemp}) involve at least one integral against $N$. \end{remark} \noindent We conclude this section with a generalized version of Theorem \ref{th:countingexpectation}. \begin{lemma} \label{lemma:technicalsumderivatives} Fix $(T,M) \in \mathbb X$ and $F$ in $L^{2,T,M}(\Omega)$. For any $k\in \mathbb N^*$ and any $(x_1,\cdots,x_k) \in ([0,T]\times[0,M])^k$, it holds that: \begin{align*} &\mathbb E\left[D_{(x_1,\ldots,x_k)}^k F\right] + \sum_{j=k+1}^{+\infty} \frac{(-1)^{j-k}}{(j-k)!} \int_{([0,T]\times[0,M])^{j-k}} \mathbb E\left[D_{(x_1,\ldots,x_k,x_{k+1},\ldots,x_{j})}^j F\right] dx_j \cdots dx_{k+1} \\ &= \mathbb E\left[ D_{(x_1,\ldots,x_k)}^k F L^{T,M} \right].
\end{align*} \end{lemma} \begin{proof} The property $D^j = D^{j-k} D^k$ (for the first equality) and Theorem \ref{theorem:generalizedIPP} applied to $D_{(x_1,\ldots,x_k)}^k F$ (for the second equality) imply \begin{align*} & \mathbb E\left[D_{(x_1,\ldots,x_k)}^k F\right] + \sum_{j=k+1}^{+\infty} \frac{(-1)^{j-k}}{j!} \frac{j!}{(j-k)!} \int_{([0,T]\times[0,M])^{j-k}} \mathbb E\left[D_{(x_1,\ldots,x_k,x_{k+1},\ldots,x_{j})}^j F\right] dx_j \cdots dx_{k+1} \\ &= \mathbb E\left[D_{(x_1,\ldots,x_k)}^k F\right] + \sum_{j=k+1}^{+\infty} \frac{(-1)^{j-k}}{j!} \frac{j!}{(j-k)!} \int_{([0,T]\times[0,M])^{j-k}} \mathbb E\left[D_{(x_{k+1},\ldots,x_{j})}^{j-k} D_{(x_1,\ldots,x_k)}^k F\right] dx_j \cdots dx_{k+1}\\ &= \mathbb E\left[D_{(x_1,\ldots,x_k)}^k F\right] + \sum_{j=k+1}^{+\infty} \frac{1}{j!} \frac{j!}{(j-k)!} \mathbb E\left[I_{j-k}\left((-\ind{[0,T]\times [0,M]})^{\otimes (j-k)}\right) D_{(x_1,\ldots,x_k)}^k F\right] \\ &= \mathbb E\left[D_{(x_1,\ldots,x_k)}^k F\right] + \mathbb E\left[ \sum_{j=k+1}^{+\infty} \frac{1}{j!} \frac{j!}{(j-k)!} I_{j-k}\left((-\ind{[0,T]\times [0,M]})^{\otimes (j-k)}\right) D_{(x_1,\ldots,x_k)}^k F\right].
\end{align*} Then (i) of Proposition \ref{prop:derinIj} entails that \begin{align*} & \mathbb E\left[D_{(x_1,\ldots,x_k)}^k F\right] + \sum_{j=k+1}^{+\infty} \frac{(-1)^{j-k}}{j!} \frac{j!}{(j-k)!} \int_{([0,T]\times[0,M])^{j-k}} \mathbb E\left[D_{(x_1,\ldots,x_k,x_{k+1},\ldots,x_{j})}^j F\right] dx_j \cdots dx_{k+1} \\ &= \mathbb E\left[D_{(x_1,\ldots,x_k)}^k F\right] + \mathbb E\left[ \sum_{j=k+1}^{+\infty} \frac{1}{j!} (-1)^k \left(D_{(x_1,\ldots,x_k)}^k I_{j}\left((-\ind{[0,T]\times [0,M]})^{\otimes j}\right)\right)D_{(x_1,\ldots,x_k)}^k F\right] \\ &=\mathbb E\left[D_{(x_1,\ldots,x_k)}^k F\right] + (-1)^k \mathbb E\left[ D_{(x_1,\ldots,x_k)}^k F \left(D_{(x_1,\ldots,x_k)}^k \sum_{j=k+1}^{+\infty} \frac{1}{j!} I_{j}\left((-\ind{[0,T]\times [0,M]})^{\otimes j}\right) \right)\right] \\ &= \mathbb E\left[D_{(x_1,\ldots,x_k)}^k F\right] + (-1)^k\mathbb E\left[ D_{(x_1,\ldots,x_k)}^k F \left(D_{(x_1,\ldots,x_k)}^k \left(L^{T,M}-1-\sum_{j=1}^k \frac{1}{j!} I_{j}\left((-\ind{[0,T]\times [0,M]})^{\otimes j}\right) \right)\right)\right], \end{align*} where we have used Proposition-Definition \ref{definition:truncatedstuffbis}. Thus (i) and (ii) of Proposition \ref{prop:derinIj} give \begin{align*} &\mathbb E\left[D_{(x_1,\ldots,x_k)}^k F\right] + \sum_{j=k+1}^{+\infty} \frac{(-1)^{j-k}}{j!} \frac{j!}{(j-k)!} \int_{([0,T]\times[0,M])^{j-k}} \mathbb E\left[D_{(x_1,\ldots,x_k,x_{k+1},\ldots,x_{j})}^j F\right] dx_j \cdots dx_{k+1} \\ &= \mathbb E\left[D_{(x_1,\ldots,x_k)}^k F\right] + (-1)^k \mathbb E\left[ D_{(x_1,\ldots,x_k)}^k F \left(D_{(x_1,\ldots,x_k)}^k L^{T,M} +(-1)^{k+1}\right)\right] \\ &= \mathbb E\left[D_{(x_1,\ldots,x_k)}^k F\right] + (-1)^k \mathbb E\left[ D_{(x_1,\ldots,x_k)}^k F \left((-1)^{k} L^{T,M} +(-1)^{k+1}\right)\right] \\ &= \mathbb E\left[D_{(x_1,\ldots,x_k)}^k F\right] + \mathbb E\left[ D_{(x_1,\ldots,x_k)}^k F \left(L^{T,M} -1 \right)\right] \\ &= \mathbb E\left[L^{T,M} D_{(x_1,\ldots,x_k)}^k F \right]. 
\end{align*} \end{proof} \subsection{Characterization of $\mathcal P^{T,M}$} Throughout this section we fix $(T,M)$ in $\mathbb X$. Corollary \ref{cor:expectcounting} suggests that random variables of the form $H_T$, with $H$ a counting process, satisfy the pseudo-chaotic expansion. We now make this point precise and characterize the set $\mathcal P^{T,M}$. \begin{theorem} \label{th:characpseudo} An element $F$ in $L^2(\Omega)$ admits a pseudo-chaotic expansion with respect to $N^{T,M}$ (that is, $F \in \mathcal P^{T,M}$) if and only if \begin{equation} \label{eq:expectationsumagain} \mathbb E[F] = \sum_{j=1}^{+\infty} \frac{ (-1)^{j+1}}{j!} \int_{([0,T]\times[0,M])^j} \mathbb E\left[D_{(x_1,\cdots,x_j)}^j F\right] dx_1 \cdots dx_j. \end{equation} In that case the pseudo-chaotic expansion of $F$ is given by \begin{equation} \label{eq:pseudodecompositionck} F = \sum_{k=1}^{+\infty} \int_{([0,T]\times[0,M])^k} \frac{1}{k!} c_k(x_1,\ldots,x_k) N(dx_1) \cdots N(dx_k), \end{equation} with \begin{equation} \label{eq:ck} c_k(x_1,\ldots,x_k) := \mathbb E\left[ L^{T,M} \, D_{(x_1,\ldots,x_k)}^k F \right] = \mathbb E^{\mathbb{Q}^{T,M}}\left[D_{(x_1,\ldots,x_k)}^k F \right], \quad \forall (x_1,\ldots,x_k)\in ([0,T]\times[0,M])^k. \end{equation} \end{theorem} \begin{proof} Let $F$ in $\mathcal P^{T,M}$. Then, according to Theorem \ref{th:countingexpectation} and Proposition \ref{prop:degatsQTM}, \begin{align*} &\sum_{j=1}^{+\infty} \frac{(-1)^j}{j!} \int_{([0,T]\times[0,M])^j} \mathbb E\left[D_{(x_1,\cdots,x_j)}^j F\right] dx_1 \cdots dx_j\\ &=\mathbb E\left[F (L^{T,M}-1)\right]\\ &=\mathbb E^{\mathbb Q^{T,M}}[F] - \mathbb E[F]\\ &=-\mathbb E[F]. \end{align*} So $$ \sum_{j=1}^{+\infty} \frac{ (-1)^{j+1}}{j!} \int_{([0,T]\times[0,M])^j} \mathbb E\left[D_{(x_1,\cdots,x_j)}^j F\right] dx_1 \cdots dx_j = \mathbb E\left[F\right].$$ Conversely, assume that $F\in L^2(\Omega)$ satisfies Relation (\ref{eq:expectationsumagain}).
The chaotic expansion (see Theorem \ref{th:chaoticgeneral}) allows one to write $$ F = \mathbb E[F] + \sum_{j=1}^{+\infty} \frac{1}{j!} I_j(\mathbb E[D^j F]).$$ The definition of the iterated integrals $I_j$ together with Relation (\ref{eq:expectationsumagain}) then implies that \begin{align*} F &= \mathbb E[F] + \sum_{j=1}^{+\infty} \frac{1}{j!} I_j(f_j^F), \quad \textrm{ with } \quad f_j^F(x_1,\cdots,x_j) = \mathbb E\left[D_{(x_1,\cdots,x_j)}^j F\right] \\ &= \sum_{k=1}^{+\infty} \frac{1}{k!} \int_{([0,T]\times[0,M])^k} c_k(x_1,\ldots,x_k) N(dx_1) \cdots N(dx_k), \end{align*} with \begin{align*} c_k(x_1,\ldots,x_k)&:= f_k^F(x_1,\ldots,x_k) \\ &+ {k!} \sum_{j=k+1}^{+\infty} \frac{(-1)^{j-k}}{j!} \frac{j!}{{k!}(j-k)!}\int_{([0,T]\times[0,M])^{j-k}} f_j^F(x_1,\ldots,x_k,x_{k+1},\ldots,x_{j}) dx_{k+1} \cdots dx_j. \end{align*} Here $\frac{j!}{{k!}(j-k)!}$, the number of $k$-combinations among $j$ elements, counts the number of times the integral $\int_{([0,T]\times[0,M])^{j-k}} f_j^F(x_1,\ldots,x_k,x_{k+1},\ldots,x_{j}) dx_{k+1} \cdots dx_j$ of the symmetric function $f_j^F$ appears in the expansion of $I_j(f_j^F)$. Note also that we normalise by factoring $\frac{1}{k!}$ out of $c_k$, which explains the $k!$ factor in front of the sum. We now compute each of these terms. Using the definition of the $f_j^F$ functions and Lemma \ref{lemma:technicalsumderivatives} we get \begin{align*} &c_k(x_1,\ldots,x_k)\\ &= f_k^F(x_1,\ldots,x_k) + \sum_{j\geq k+1} \frac{(-1)^{j-k}}{j!} \frac{j!}{(j-k)!}\int_{([0,T]\times[0,M])^{j-k}} f_j^F(x_1,\ldots,x_k,x_{k+1},\ldots,x_{j}) dx_{k+1} \cdots dx_j\\ &= \mathbb E\left[D_{(x_1,\ldots,x_k)}^k F\right] + \sum_{j\geq k+1} \frac{(-1)^{j-k}}{j!} \frac{j!}{(j-k)!} \int_{([0,T]\times[0,M])^{j-k}} \mathbb E\left[D_{(x_1,\ldots,x_k,x_{k+1},\ldots,x_{j})}^j F\right] dx_{k+1} \cdots dx_j \\ &= \mathbb E\left[L^{T,M} D_{(x_1,\ldots,x_k)}^k F \right].
\end{align*} \end{proof} \begin{remark} Note that the uniqueness of the coefficients in the chaotic expansion transfers to the uniqueness of the pseudo-chaotic expansion when it exists; it is then given by the coefficients $c_k$ in (\ref{eq:ck}). \end{remark} \noindent We now apply this result to counting processes with bounded intensity. \begin{theorem} \label{th:pseudochaoticcounting} Let $T>0$ and $H$ a counting process whose intensity $\lambda$ is bounded by $M>0$, so that $H_T$ is given by (\ref{eq:counting}). Then $H_T$ admits a pseudo-chaotic expansion with respect to $N^{T,M}$ with \begin{equation} \label{eq:pseudodecompositionckcounting} H_T = \sum_{k=1}^{+\infty} \int_{([0,T]\times[0,M])^k} \frac{1}{k!} c_k(x_1,\ldots,x_k) N(dx_1) \cdots N(dx_k), \end{equation} \begin{equation} \label{eq:ckcounting} \hspace*{-0.5cm}c_k(x_1,\ldots,x_k) := \mathbb E\left[L^{T,M} D_{(x_{(1)},\ldots,x_{(k-1)})}^{k-1} \ind{\theta_{(k)}\leq \lambda_{t_{(k)}}}\right], \quad \forall (x_1,\ldots,x_k)\in ([0,T]\times[0,M])^k, \end{equation} where according to Notation \ref{notation:ordered}, \; $0 \leq t_{(1)} \leq \cdots \leq t_{(k)} \leq T$ \; are the ordered elements of $(t_1,\ldots,t_k)$ and $x_{(i)}:=(t_{(i)},\theta_{(i)})$. \end{theorem} \begin{proof} Corollary \ref{cor:expectcounting} and Theorem \ref{th:characpseudo} give that $H_T$ admits a pseudo-chaotic expansion and $$H_T = \sum_{k=1}^{+\infty} \int_{([0,T]\times[0,M])^k} \frac{1}{k!} c_k(x_1,\ldots,x_k) N(dx_1) \cdots N(dx_k),$$ with $c_k(x_1,\ldots,x_k) := \mathbb E\left[L^{T,M} D_{(x_1,\ldots,x_k)}^k H_T \right], \quad \forall (x_1,\ldots,x_k)\in ([0,T]\times[0,M])^k.$ Let $k\geq 1$ and $(x_1,\ldots,x_k)$ in $([0,T]\times[0,M])^k$.
As $(x_1,\cdots,x_k) \mapsto D_{(x_1,\ldots,x_k)}^k H_T$ is symmetric, $$D_{(x_1,\ldots,x_k)}^k H_T = D_{(x_{(1)},\ldots,x_{(k)})}^k H_T =D_{x_1} \cdots D_{x_k} H_T.$$ Using the definition of $D$ and the fact that $$ H_T = \int_{(0,T]\times [0,M]} \ind{\theta \leq \lambda_s} N(ds,d\theta), $$ one gets that \begin{equation} \label{eq:DH} D_{(x_{(1)},\ldots,x_{(k)})}^k H_T = D_{(x_{(1)},\ldots,x_{(k-1)})}^{k-1} \ind{\theta_{(k)}\leq \lambda_{t_{(k)}}} + \int_{[0,T]\times\mathbb{R}_+} D_{(x_{(1)},\ldots,x_{(k)})}^k \ind{\theta\leq \lambda_t} N(dt,d\theta). \end{equation} As $L^{T,M}$ annihilates the Poisson process on $[0,T]\times[0,M]$, it holds that $$ \mathbb E\left[L^{T,M} D_{(x_1,\ldots,x_k)}^k H_T \right] = \mathbb E\left[L^{T,M} D_{(x_{(1)},\ldots,x_{(k-1)})}^{k-1} \ind{\theta_{(k)}\leq \lambda_{t_{(k)}}}\right].$$ \end{proof} \section{Application to linear Hawkes processes} \label{section:Hawkes} Throughout this section $\Phi:\mathbb{R}_+\to\mathbb{R}_+$ denotes a map in $L^1(\mathbb{R}_+;dt)$. \subsection{Generalities on linear Hawkes processes} \begin{assumption} \label{assumption:Phi} The mapping $\Phi : \mathbb{R}_+ \to \mathbb{R}_+$ belongs to $L^1(\mathbb{R}_+;dt)$ with $$\|\Phi\|_1:=\int_{\mathbb{R}_+} \Phi(t) dt < 1.$$ \end{assumption} \noindent For $f,g$ in $L^1(\mathbb{R}_+;dt)$ we define the convolution of $f$ and $g$ by $$(f\ast g)(t):=\int_0^t f(t-u) g(u) du, \quad t \geq 0.$$ \begin{prop}[See \textit{e.g.} \cite{Bacryetal2013}] \label{prop:Phin} Assume $\Phi$ satisfies Assumption \ref{assumption:Phi}. Let \begin{equation} \label{eq:Phin} \Phi_1:=\Phi, \quad \Phi_n(t):=\int_0^t \Phi(t-s) \Phi_{n-1}(s) ds, \quad t \in \mathbb{R}_+, \; n\geq 2. \end{equation} For every $n\geq 1$, $\|\Phi_n\|_1 = \|\Phi\|_1^n$, and the mapping $\Psi:=\sum_{n=1}^{+\infty} \Phi_n$ is well-defined as a limit in $L^1(\mathbb{R}_+;dt)$, with $\|\Psi\|_1 = \frac{\|\Phi\|_1}{1-\|\Phi\|_1}$.
\end{prop} \begin{definition}[Linear Hawkes process, \cite{Hawkes}] \label{def:standardHawkes} Let $(\Omega,\mathcal F,\mathbb P,\mathbb F:=(\mathcal F_t)_{t\geq 0})$ be a filtered probability space, $\mu>0$ and $\Phi:\mathbb{R}_+ \to \mathbb{R}_+$ satisfying Assumption \ref{assumption:Phi}. A linear Hawkes process $H:=(H_t)_{t\geq 0}$ with parameters $\mu$ and $\Phi$ is a counting process such that \begin{itemize} \item[(i)] $H_0=0,\quad \mathbb P-a.s.$, \item[(ii)] its ($\mathbb{F}$-predictable) intensity process is given by $$\lambda_t:=\mu + \int_{(0,t)} \Phi(t-s) dH_s, \quad t\geq 0,$$ that is, for any $0\leq s \leq t$ and $A \in \mathcal{F}_s$, $$ \mathbb E\left[\textbf{1}_A (H_t-H_s) \right] = \mathbb E\left[\int_{(s,t]} \textbf{1}_A \lambda_r dr \right].$$ \end{itemize} \end{definition} \subsection{Pseudo-chaotic expansion of linear Hawkes processes and explicit representation} We aim at providing the coefficients in the pseudo-chaotic expansion of a linear Hawkes process. We start with some general facts regarding linear Hawkes processes. \begin{prop} \label{prop:systemHawkesdeterministe} Let $\Phi$ as in Assumption \ref{assumption:Phi}, $\mu>0$, and $(H,\lambda)$ the Hawkes process defined as the unique solution to the SDE $$ \left\lbrace \begin{array}{l} H_t = \int_{(0,t]\times \mathbb{R}_+} \ind{\theta\leq \lambda_s} N(ds,d\theta),\\\\ \lambda_t = \mu +\int_{(0,t)} \Phi(t-s) dH_s,\quad t\geq 0. \end{array} \right. $$ Let $n \in \mathbb N^*$ and $\{y_1,\ldots,y_n\} = \{(s_1,\theta_1),\ldots,(s_n,\theta_n)\} \subset \mathbb X$ with $ 0 \leq s_1\leq \cdots \leq s_n \leq t$.
\\ We set $(a_1^{\{y_1,\ldots,y_n\}},\ldots,a_n^{\{y_1,\ldots,y_n\}})$ to be the solution to the system \begin{equation} \label{eq:systemHawkes1} \left\lbrace \begin{array}{l} a_1^{\{y_1,\ldots,y_n\}} = \mu,\\\\ a_j^{\{y_1,\ldots,y_n\}} = \mu + \displaystyle{\sum_{i=1}^{j-1} \Phi(s_j-s_i) \ind{\theta_i \leq a_i^{\{y_1,\ldots,y_n\}}}}, \quad j \in \{2,\ldots,n\}, \end{array} \right. \end{equation} which is the triangular system \begin{equation} \label{eq:systemHawkes2} \left\lbrace \begin{array}{l} a_1^{\{y_1,\ldots,y_n\}} = \mu,\\\\ a_2^{\{y_1,\ldots,y_n\}} = \mu + \displaystyle{\Phi(s_2-s_1) \ind{\theta_1 \leq a_1^{\{y_1,\ldots,y_n\}}}},\\\\ \hspace{5em}\vdots\\\\ a_n^{\{y_1,\ldots,y_n\}} = \mu + \displaystyle{\sum_{i=1}^{n-1} \Phi(s_n-s_i) \ind{\theta_i \leq a_i^{\{y_1,\ldots,y_n\}}}}. \end{array} \right. \end{equation} \noindent Let $\varpi_{\{y_1,\ldots,y_n\}} := \sum_{i=1}^n \delta_{y_i} \in \Omega.$ Then the values of the deterministic path $\lambda(\varpi_{\{y_1,\ldots,y_n\}})$ (resulting from the evaluation of $\lambda$ at the specific $\omega=\varpi_{\{y_1,\ldots,y_n\}}$) at times $s_1,\ldots,s_n$ are given by $$ (\lambda_{s_1}(\varpi_{\{y_1,\ldots,y_n\}}),\ldots,\lambda_{s_n}(\varpi_{\{y_1,\ldots,y_n\}}))=(a_1^{\{y_1,\ldots,y_n\}},\ldots,a_n^{\{y_1,\ldots,y_n\}}).$$ In addition, \begin{equation} \label{eq:lambdaspecial} \lambda_t (\varpi_{\{y_1,\ldots,y_n\}}) = \mu + \sum_{i=1}^{n} \Phi(t-s_i) \ind{\theta_i \leq a_i^{\{y_1,\ldots,y_n\}}} \ind{s_i < t}, \quad \forall t\geq s_n. \end{equation} \end{prop} \begin{proof} Let $t \geq 0$.
By definition of $\lambda$, and with the convention $s_{n+1}:=+\infty$, \begin{align*} & \lambda_t (\varpi_{\{y_1,\ldots,y_n\}}) \\ &:= \mu + \left(\int_{(0,t)} \Phi(t-u) dH_u\right)(\varpi_{\{y_1,\ldots,y_n\}}) \\ &=\mu + \left(\int_{(0,t)} \Phi(t-u) \ind{\theta \leq \lambda_u} N(du,d\theta)\right)(\varpi_{\{y_1,\ldots,y_n\}}) \\ &=\mu + \int_{(0,t)} \Phi(t-u) \ind{\theta \leq \lambda_u(\varpi_{\{y_1,\ldots,y_n\}})} (N(du,d\theta)(\varpi_{\{y_1,\ldots,y_n\}})) \\ &=\mu + \int_{(0,t)} \Phi(t-u) \ind{\theta \leq \lambda_u(\varpi_{\{y_1,\ldots,y_n\}})} (\varpi_{\{y_1,\ldots,y_n\}})(du,d\theta)\\ &=\mu + \int_{(0,s_1)} \Phi(t-u) \ind{\theta \leq \lambda_u(\varpi_{\{y_1,\ldots,y_n\}})} \ind{u < t} (\varpi_{\{y_1,\ldots,y_n\}})(du,d\theta)\\ &+\sum_{i=1}^{n} \int_{[s_i,s_{i+1})} \Phi(t-u) \ind{\theta \leq \lambda_u(\varpi_{\{y_1,\ldots,y_n\}})} \ind{u < t} (\varpi_{\{y_1,\ldots,y_n\}})(du,d\theta) \\ &=\mu + \sum_{i=1}^{n} \int_{[s_i,s_{i+1})} \Phi(t-u) \ind{\theta \leq \lambda_u(\varpi_{\{y_1,\ldots,y_n\}})} \ind{u < t} (\varpi_{\{y_1,\ldots,y_n\}})(du,d\theta) \\ &=\mu + \sum_{i=1}^{n} \Phi(t-s_i) \ind{\theta_i \leq \lambda_{s_i}(\varpi_{\{y_1,\ldots,y_n\}})} \ind{s_i < t}. \end{align*} In addition, by definition of $\lambda$, for any $i$, $\lambda_{s_i}(\varpi_{\{y_1,\ldots,y_n\}}) = \lambda_{s_i}(\varpi_{\{y_1,\ldots,y_{i-1}\}})$. Hence, the evaluation of $\lambda$ at the specific path $\varpi_{\{y_1,\ldots,y_n\}}$ is the deterministic path completely determined by its values at the dates $s_1,\ldots,s_n$. Indeed, $$ \lambda_t (\varpi_{\{y_1,\ldots,y_n\}}) = \mu, \quad \forall t\in [0,s_1],$$ in particular $a_1:=\lambda_{s_1} (\varpi_{\{y_1,\ldots,y_n\}}) = \mu$.
From this we deduce that for $t\in (s_1,s_2]$, $$ \lambda_t (\varpi_{\{y_1,\ldots,y_n\}}) = \mu + \Phi(t-s_1) \ind{\theta_1 \leq \lambda_{s_1}(\varpi_{\{y_1,\ldots,y_n\}})} = \mu + \Phi(t-s_1) \ind{\theta_1 \leq \mu}.$$ In particular $a_2:=\lambda_{s_2} (\varpi_{\{y_1,\ldots,y_n\}}) = \mu + \Phi(s_2-s_1) \ind{\theta_1 \leq \mu} = \mu + \Phi(s_2-s_1) \ind{\theta_1 \leq a_1}$. By induction we get that for $t\in (s_j,s_{j+1}]$ ($j\in \{1,\ldots,n\}$, with the convention $s_{n+1}:=+\infty$), $$ \lambda_t (\varpi_{\{y_1,\ldots,y_n\}}) = \mu + \sum_{i=1}^{j} \Phi(t-s_i) \ind{\theta_i \leq a_i} \ind{s_i < t},$$ with $a_i:=\lambda_{s_i}(\varpi_{\{y_1,\ldots,y_n\}})$. In other words, $(a_1,\ldots,a_n)$ solves the triangular system of the statement. \end{proof} \begin{theorem}$[$Pseudo-chaotic expansion for linear Hawkes processes$]$ \label{th:explicitHawkes}\\ Let $\Phi$ be as in Assumption \ref{assumption:Phi} and $\mu>0$. Assume in addition that\footnote{with classical notations $\|\Phi\|_\infty:=\sup_{t \geq 0} \Phi(t)$} $\|\Phi\|_\infty < +\infty$. Let $(H,\lambda)$ be the unique solution of \begin{equation} \label{eq:Hawkes} \left\lbrace \begin{array}{l} H_t = \int_{(0,t]\times \mathbb{R}_+} \ind{\theta\leq \lambda_s} N(ds,d\theta),\\\\ \lambda_t = \mu +\int_{(0,t)} \Phi(t-s) dH_s,\quad t\geq 0. \end{array} \right. \end{equation} Then $H$ is a linear Hawkes process with intensity $\lambda$ in the sense of Definition \ref{def:standardHawkes}.
For any $T>0$, $H_T$ admits the pseudo-chaotic expansion below: \begin{equation} \label{eq:pseudochaosHawkes} H_T = \sum_{k=1}^{+\infty} \int_{\mathbb X^k} \frac{1}{k!} c_k(x_1,\ldots,x_k) N(dx_1) \cdots N(dx_k), \end{equation} $$ \left\lbrace \begin{array}{l} c_1(x_1) =\ind{\theta_{1}\leq \mu},\\\\ c_k(x_1,\ldots,x_k) =\displaystyle{(-1)^{k-1} \ind{\theta_{k}\leq \mu} + \sum_{n=1}^{k-1} \sum_{\{y_1,\ldots,y_n\} \subset \{x_{(1)},\ldots,x_{(k-1)}\}} (-1)^{k-1-n} \ind{\theta_{k}\leq \lambda_{t_k} (\varpi_{\{y_1,\ldots,y_n\}})}}, \quad k\geq 2, \end{array} \right. $$ where: \begin{itemize} \item[-] we recall Notation \ref{notation:ordered} for $(x_{(1)},\ldots,x_{(k)})$; \item[-] the notation $\sum_{\{y_1,\ldots,y_n\} \subset \{x_{(1)},\ldots,x_{(k-1)}\}}$ stands for the sum over all subsets $\{y_1,\ldots,y_n\}$ of cardinality $n$ of $\{x_{(1)},\ldots,x_{(k-1)}\}$; \item[-] $\lambda_t (\varpi_{\{y_1,\ldots,y_n\}})$ is given by (\ref{eq:lambdaspecial}) in Proposition \ref{prop:systemHawkesdeterministe}. \end{itemize} \end{theorem} \begin{proof} First, by \cite{Bremaud_Massoulie,Costa_etal,Hillairet_Reveillac_Rosenbaum}, the system of SDEs (\ref{eq:Hawkes}) admits a unique solution which is a Hawkes process (we refer to \cite{Hillairet_Reveillac_Rosenbaum} for more details on the construction with pathwise uniqueness). Fix $T>0$. Then for $M > \mu$, we set $$ \Omega^M:=\left\{\sup_{t\in [0,T]} \lambda_t \leq M\right\} \subset \Omega.
$$ By Markov's inequality $$ \mathbb P[\Omega \setminus \Omega^M] \leq \frac{\mathbb E\left[\sup_{t\in [0,T]} \lambda_t\right]}{M} \leq \frac{\mu+\|\Phi\|_\infty \mathbb E\left[H_T\right]}{M} \leq \frac{\mu+\|\Phi\|_\infty \, \mu T (1-\|\Phi\|_1)^{-1}}{M}.$$ Letting $\bar{\Omega}:=\lim_{M \to +\infty} [\Omega\setminus \Omega^M]$, where the limit is understood along a decreasing sequence of sets, $\mathbb P[\bar{\Omega}]=0$.\\\\ \noindent Fix $M\geq \mu$ and let $(H^M,\lambda^M)$ be the unique solution to \begin{equation} \label{eq:HawkesM} \left\lbrace \begin{array}{l} H_t^M = \int_{(0,t]\times [0,M]} \ind{\theta\leq \lambda_s^M} N(ds,d\theta),\\\\ \lambda_t^M = \mu +\int_{(0,t)} \Phi(t-s) dH_s^M,\quad t\in [0,T]. \end{array} \right. \end{equation} By construction $H^M$ is a counting process with intensity $\lambda^M \wedge M$, and $(H^M,\lambda^M) = (H,\lambda)$ on $\Omega^M$ by uniqueness of the solution to the SDE. Hence, by Theorem \ref{th:pseudochaoticcounting}, $H_T^M$ admits a pseudo-chaotic expansion with respect to $N^{T,M}$ and \begin{equation} \label{eq:pseudodecompositionckcountingtemp} H_T^M = \sum_{k=1}^{+\infty} \int_{([0,T]\times[0,M])^k} \frac{1}{k!} c_k(x_1,\ldots,x_k) N(dx_1) \cdots N(dx_k), \end{equation} \begin{equation} \label{eq:ckcountingtemp} c_k(x_1,\ldots,x_k) := \mathbb E\left[L^{T,M} D_{(x_{(1)},\ldots,x_{(k-1)})}^{k-1} \ind{\theta_{(k)}\leq \lambda_{(t_k)}}\right], \quad \forall (x_1,\ldots,x_k)\in ([0,T]\times[0,M])^k, \end{equation} with the ordering convention of Notation \ref{notation:ordered}. We should mention that the only dependence on $M$ in the coefficients $c_k$ lies in the domain of the variables $(x_1,\ldots,x_k)$, namely $([0,T]\times[0,M])^k$.
For such $k$ and $(x_1,\ldots,x_k)$ (where for simplicity we assume that $(x_{(1)},\ldots,x_{(k)}) = (x_{1},\ldots,x_{k})$) we have, using Proposition \ref{prop:Dnaltenative}, that for $\omega \in \Omega^M$, \begin{align*} &L^{T,M}(\omega) (D_{(x_{1},\ldots,x_{k-1})}^{k-1} \ind{\theta_{k}\leq \lambda_{t_k}})(\omega) \\ &=L^{T,M}(\omega) \sum_{J \subset \{1,\cdots,k-1\}} (-1)^{k-1-|J|} \ind{\theta_{k}\leq \lambda_{t_k}(\omega+ \sum_{j\in J} \delta_{x_j})}, \end{align*} where the sum is over all subsets $J$ of $\{1,\cdots,k-1\}$, including the empty set which is of cardinality $0$. Hence, \begin{align*} &L^{T,M}(\omega) (D_{(x_{1},\ldots,x_{k-1})}^{k-1} \ind{\theta_{k}\leq \lambda_{t_k}})(\omega) \\ &=\exp(MT) \ind{(N([0,T]\times[0,M])(\omega))=0} \sum_{J \subset \{1,\cdots,k-1\}} (-1)^{k-1-|J|} \ind{\theta_{k}\leq \lambda_{t_k}(\omega + \sum_{j\in J} \delta_{x_j})} \\ &=\exp(MT) \ind{\omega([0,T]\times[0,M])=0} \sum_{J \subset \{1,\cdots,k-1\}} (-1)^{k-1-|J|} \ind{\theta_{k}\leq \lambda_{t_k}(\omega + \sum_{j\in J} \delta_{x_j})} \\ &=\exp(MT) \ind{\omega([0,T]\times[0,M])=0} \sum_{J \subset \{1,\cdots,k-1\}} (-1)^{k-1-|J|} \ind{\theta_{k}\leq \lambda_{t_k}(\sum_{j\in J} \delta_{x_j})}. \end{align*} Recall that $\mathbb E[\exp(MT) \ind{N([0,T]\times[0,M])=0}] =1$. In other words, the effect of $L^{T,M}$ is to freeze the evaluation of the intensity process $\lambda$ on a specific outcome given by the atoms $(x_{1},\ldots,x_{k-1})$.
Taking the expectation and reorganizing the sum above we get \begin{align*} c_k(x_1,\ldots,x_k) &= \mathbb E\left[L^{T,M} D_{(x_{1},\ldots,x_{k-1})}^{k-1} \ind{\theta_{k}\leq \lambda_{t_k}} \right] \\ &=\sum_{J \subset \{1,\cdots,k-1\}} (-1)^{k-1-|J|} \ind{\theta_{k}\leq \lambda_{t_k}(\sum_{j\in J} \delta_{x_j})}\\ &=(-1)^{k-1} \ind{\theta_{k}\leq \mu}+\sum_{n=1}^{k-1} \sum_{\{y_1,\ldots,y_n\} \subset \{x_1,\ldots,x_{k-1}\}} (-1)^{k-1-n} \ind{\theta_{k}\leq \lambda_{t_k}(\varpi_{\{y_1,\ldots,y_n\}})}\\ &=(-1)^{k-1} \ind{\theta_{k}\leq \mu}+\sum_{n=1}^{k-1} \sum_{\{y_1,\ldots,y_n\} \subset \{x_1,\ldots,x_{k-1}\}} (-1)^{k-1-n} \ind{\theta_{k}\leq a_{n+1}^{\{y_1,\ldots,y_n,x_k\}}}. \end{align*} Note that in each term $\ind{\theta_{k}\leq \lambda_{t_k}(\sum_{j\in J} \delta_{x_j})}$, $\sum_{j\in J} \delta_{x_j}$ is deterministic and $\lambda_{t_k}(\sum_{j\in J} \delta_{x_j})$ is explicitly given by the triangular system of Proposition \ref{prop:systemHawkesdeterministe} (applied to the configuration $\{y_1,\ldots,y_n,x_k\}$). For $k=1$, the previous expression just reduces to $$c_1(x_1) = \mathbb E\left[L^{T,M} \ind{\theta_{1}\leq \lambda_{t_1}}\right] = \ind{\theta_{1}\leq \mu}.$$ Finally, as $$ H_T(\omega) = H_T^M(\omega), \quad \textrm{ on } \Omega^M,$$ for any $k\geq 1$ \begin{align*} &\int_{([0,T]\times[0,M])^k} \frac{1}{k!} c_k(x_1,\ldots,x_k) N(dx_1) \cdots N(dx_k) \\ &= \int_{\mathbb X^k} \frac{1}{k!} c_k(x_1,\ldots,x_k) N(dx_1) \cdots N(dx_k), \; \textrm{ on } \Omega^M, \end{align*} and thus the expansion holds true on $\Omega \setminus \bar{\Omega}$, with $\mathbb P[\bar{\Omega}]=0$. \end{proof} \begin{remark} The boundedness assumption on $\Phi$ in Theorem \ref{th:explicitHawkes} is not sharp and can be replaced with any assumption ensuring that for any $T>0$, $\mathbb E[\sup_{t\in[0,T]} \lambda_t] <+\infty$.
\end{remark} \begin{remark} At the price of more cumbersome notations, the previous result can be extended to non-linear Hawkes processes; that is, counting processes $H$ with intensity process of the form \begin{equation} \label{eq:HawkesGeneral} \left\lbrace \begin{array}{l} H_t = \int_{(0,t]\times \mathbb{R}_+} \ind{\theta\leq \lambda_s} N(ds,d\theta),\\\\ \lambda_t = h\left(\mu +\int_{(0,t)} \Phi(t-s) dH_s\right),\quad t\in [0,T], \end{array} \right. \end{equation} where $h:\mathbb{R} \to \mathbb{R}_+$ is Lipschitz with constant $\alpha$, $\Phi:\mathbb{R}_+ \to \mathbb{R}$ and $\alpha \|\Phi\|_1 < 1$. Indeed, when computing the coefficients in the expansion, the Poisson measure $N$ is cancelled and the computation involves evaluations of the intensity process at specific configurations of the form $\varpi_{\{y_1,\ldots,y_n\}}$. This evaluation can be done by a straightforward extension of Proposition \ref{prop:systemHawkesdeterministe} to a non-linear Hawkes process; in other words, for both linear and non-linear processes the intensity process $\lambda$ is a deterministic function of the fixed configuration of the form $\varpi_{\{y_1,\ldots,y_n\}}$. \end{remark} \begin{discussion} \label{discussion:avantageforHawkes} We would like to comment on the advantage of the pseudo-chaotic expansion compared to the usual one for the value $H_T$ of a linear Hawkes process at any time $T$. Recall the two decompositions \begin{align*} H_T &= \mathbb E[H_T] + \sum_{j=1}^{+\infty} \frac{1}{j!} I_j(f_j^{H_T}), \quad \textrm{ with } \quad f_j^{H_T}(x_1,\cdots,x_j) = \mathbb E\left[D_{(x_1,\cdots,x_j)}^j H_T\right] \\ &= \sum_{k=1}^{+\infty} \int_{\mathbb X^k} \frac{1}{k!} c_k(x_1,\ldots,x_k) N(dx_1) \cdots N(dx_k). \end{align*} For the chaotic expansion, in order to determine each coefficient $f_j^{H_T}$ one has to compute $f_j^{H_T}(x_1,\cdots,x_j) = \mathbb E\left[D_{(x_1,\cdots,x_j)}^j H_T\right]$, which turns out to be quite implicit for a general kernel $\Phi$.
Indeed, already for the first coefficient, using Relation (\ref{eq:DH}) we have \begin{align*} f_1^{H_T}(x_1) &= \mathbb E\left[D_{x_1} H_T\right] \\ & = \mathbb E\left[\ind{\theta_1 \leq \lambda_{t_1}} \right] + \int_{(t_1,T]\times \mathbb{R}_+} \mathbb E\left[D_{x_{1}} \ind{\theta\leq \lambda_s} \right] d \theta ds \\ & = \mathbb P\left[\theta_1 \leq \lambda_{t_1}\right] + \int_{(t_1,T]} \mathbb E\left[D_{x_{1}} \lambda_s \right] ds. \end{align*} The quantity $\int_{(t_1,T]} \mathbb E\left[D_{x_{1}} \lambda_s \right] ds$ has been computed in \cite{HHKR}; however, a closed-form expression for $\mathbb P\left[\theta_1 \leq \lambda_{t_1}\right]$ for any kernel $\Phi$ satisfying Assumption \ref{assumption:Phi} is unknown to the authors. \\\\ \noindent In contradistinction, Theorem \ref{th:explicitHawkes} gives an explicit expression for the coefficients $c_k$. In that sense, the pseudo-chaotic expansion (\ref{eq:pseudochaosHawkes}) is an exact representation and an explicit solution to the Hawkes equation as formulated in Definition \ref{def:standardHawkes}. \end{discussion} \section{The pseudo-chaotic expansion and the Hawkes equation} \label{section:almostHawkes} The aim of this section is to investigate further the link between a decomposition of the form (\ref{eq:pseudodecompositionckcountingtemp}), which we named the pseudo-chaotic expansion, and the characterization of a Hawkes process as in Definition \ref{def:standardHawkes}. First, let us emphasize that both the standard chaotic expansion and the pseudo-chaotic expansion characterize a given random variable and not a stochastic process. For instance, in Theorem \ref{th:explicitHawkes}, the coefficients $c_k$ for the expansion of $H_T$ depend on the time $T$.
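To make the explicitness of the coefficients concrete, the $c_k$ of Theorem \ref{th:explicitHawkes} can be evaluated numerically. The following Python sketch is our own illustration (the baseline $\mu$, the exponential kernel and the helper names \texttt{intensity\_at} and \texttt{c\_k} are illustrative choices, not part of the paper): it evaluates $\lambda_{t_k}(\varpi_J)$ through the triangular system of Proposition \ref{prop:systemHawkesdeterministe} (with $a_1=\mu$, as in its proof) and assembles $c_k$ by inclusion--exclusion over subsets $J$ of $\{x_{(1)},\ldots,x_{(k-1)}\}$.

```python
import numpy as np
from itertools import combinations

# Illustrative choices (not from the paper): baseline mu and an
# exponential kernel Phi(t) = a*b*exp(-b*t), so that ||Phi||_1 = a < 1.
mu, a, b = 1.0, 0.5, 2.0
phi = lambda t: a * b * np.exp(-b * t)

def intensity_at(t, config):
    # lambda_t(varpi_config) via the triangular system: a_1 = mu and
    # a_j = mu + sum_{i<j} Phi(s_j - s_i) 1{theta_i <= a_i}.
    config = sorted(config)
    acc = []
    for j, (s, th) in enumerate(config):
        acc.append(mu + sum(phi(s - config[i][0])
                            for i in range(j) if config[i][1] <= acc[i]))
    return mu + sum(phi(t - s) for (s, th), aj in zip(config, acc)
                    if th <= aj and s < t)

def c_k(points):
    # c_k(x_1,...,x_k) = sum over subsets J of {x_(1),...,x_(k-1)} of
    # (-1)^{k-1-|J|} 1{theta_k <= lambda_{t_k}(varpi_J)}.
    *rest, (t_k, th_k) = sorted(points)
    k = len(points)
    return sum((-1) ** (k - 1 - n) * (th_k <= intensity_at(t_k, list(J)))
               for n in range(k) for J in combinations(rest, n))

# c_1(x_1) = 1{theta_1 <= mu}
assert c_k([(1.0, 0.5)]) == 1 and c_k([(1.0, 1.5)]) == 0
```

For $k=1$ the inclusion--exclusion sum reduces to the single term $\ind{\theta_1\leq\mu}$, in agreement with the first line of the system of coefficients.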
In this section, we consider once again the linear Hawkes process, which is essentially described as a counting process with a specific stochastic intensity as in (\ref{eq:introlambdaHawkes}), and we adopt a different point of view based on population dynamics and the branching representation of \cite{hawkes1974cluster} or \cite{boumezoued2016population}. Inspired by this branching representation, we build in Theorem \ref{th:almostHawkes} below, \textit{via} its pseudo-chaotic expansion, an integer-valued, piecewise-constant and non-decreasing stochastic process with the specific intensity form of a Hawkes process. Nevertheless, although this stochastic process satisfies the stochastic self-exciting intensity equation which determines a Hawkes process, it fails to be a counting process, as it may exhibit jumps larger than one. This leaves open further developments for studying the pseudo-chaotic expansion of processes; we refer to Discussion \ref{discussion:finale}. \subsection{A pseudo-chaotic expansion and branching representation} \noindent Throughout this section we will make use of classical stochastic analysis tools; hence we describe elements of $\mathbb X$ as $(t,\theta)$ instead of $x$ to avoid any confusion. The branching representation viewpoint consists in counting the number of individuals in generation $n$, where generation $1$ corresponds to the migrants.
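Before formalizing this, the branching picture can be sketched by a small Monte Carlo simulation. This is a hedged illustration with our own choices (exponential kernel $\Phi(t)=ab\,e^{-bt}$ with $\|\Phi\|_1=a<1$, finite horizon $T$; \texttt{offspring} and \texttt{total\_population} are hypothetical helper names): migrants arrive as a Poisson process of rate $\mu$ on $[0,T]$, and an individual born at time $v$ produces children on $(v,T]$ at rate $\Phi(\cdot-v)$, sampled by thinning.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, a, b, T = 1.0, 0.5, 2.0, 10.0   # illustrative parameters

def offspring(v):
    # Children of an individual born at v: thin a homogeneous Poisson
    # process of rate Phi(0) = a*b on (v, T], keeping a candidate at t
    # with probability Phi(t - v)/Phi(0) = exp(-b*(t - v)).
    out, t = [], v
    while True:
        t += rng.exponential(1.0 / (a * b))
        if t > T:
            return out
        if rng.uniform() <= np.exp(-b * (t - v)):
            out.append(t)

def total_population():
    # Generation 1: the migrants, a Poisson process of rate mu on [0, T].
    gen = list(rng.uniform(0.0, T, rng.poisson(mu * T)))
    count = 0
    while gen:
        count += len(gen)
        gen = [c for v in gen for c in offspring(v)]   # next generation
    return count

est = np.mean([total_population() for _ in range(2000)])
```

The empirical mean of the total population over $[0,T]$ matches the expectation of a Hawkes process, $\mu\int_0^T\big(1+\int_0^{T-v}\Psi(u)du\big)dv$ (about $19$ for these parameter choices).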
We therefore define a family of counting processes $X^{(n)}:=(X_t^{(n)})_{t\geq 0}$ (where $n$ stands for the generation) as follows. \begin{definitionprop} \label{defintion:definitinHawkes} (i) Let, for any $t\geq 0$, \begin{equation} \label{eq:X1} X_t^{(1)}:=\int_{(0,t]\times \mathbb{R}_+} \ind{\theta_1 \leq \mu} N(dv_1,d\theta_1), \end{equation} and for $n \geq 2$, \begin{equation} \label{eq:Xn} X_t^{(n)} := \int_{(0,t]\times \mathbb{R}_+} \int_{(0,v_{n}]\times \mathbb{R}_+} \cdots \int_{(0,v_{2}]\times \mathbb{R}_+} \ind{\theta_1 \leq \mu} \prod_{i=2}^n \ind{\theta_i \leq \Phi(v_i-v_{i-1})} N(dv_1,d\theta_1) \cdots N(dv_n,d\theta_n). \end{equation} We set in addition \begin{equation} \label{eq:X} X_t:=\sum_{n=1}^{+\infty} X_t^{(n)},\end{equation} where the series converges uniformly (in $t$) on compact sets; that is, for any $T>0$, $$ \lim_{p\to+\infty} \mathbb E\left[\sup_{t\in [0,T]} \left|X_t-\sum_{n=1}^{p} X_t^{(n)}\right|\right] =0.$$ (ii) We set the $\mathbb F^X$-predictable process \begin{equation} \label{eq:lambda} \ell_t := \mu + \int_{(0,t)} \Phi(t-r) dX_r =\mu + \sum_{n=1}^{+\infty} \int_{(0,t)} \Phi(t-r) dX_r^{(n)}, \quad t\geq 0, \end{equation} where $\mathbb F^X:=(\mathcal F^X_t)_{t\geq 0}$, with $\mathcal F^X_t:=\sigma(X_s, \; s\leq t)$. \end{definitionprop} \noindent The proof of the convergence of the series \eqref{eq:X} is postponed to Section \ref{section:technicallemmata}. The resulting process $X$ aims at counting the number of individuals in the population, while the predictable process $\ell$ is the candidate to be the self-exciting intensity of the process $X$.
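To see how the iterated integrals in \eqref{eq:Xn} act on a single realization, the following sketch (our own illustration; the atoms and the exponential kernel are arbitrary choices) evaluates the contribution of each atom to each generation on a fixed finite configuration, restricting to strictly increasing chains: $m_n(j)$ counts the chains of length $n$ ending at atom $j$, so the jump of $X$ at $v_j$ is $\sum_n m_n(j)$ and may exceed one.

```python
import numpy as np

mu = 1.0
phi = lambda t: 0.5 * 2.0 * np.exp(-2.0 * t)   # illustrative kernel

# Fixed configuration of atoms (v_j, theta_j) of N, ordered in time.
atoms = [(0.5, 0.8), (1.0, 0.9), (1.5, 0.05)]

# Generation 1 (migrants): atom j contributes 1{theta_j <= mu}.
m = [[1.0 if th <= mu else 0.0 for (_, th) in atoms]]
while sum(m[-1]) > 0:
    prev = m[-1]
    # m_n(j) = sum over earlier atoms i of m_{n-1}(i) * 1{theta_j <= Phi(v_j - v_i)}
    m.append([
        sum(prev[i] for i in range(j) if th <= phi(v - atoms[i][0]))
        for j, (v, th) in enumerate(atoms)
    ])

# Jump of X at atom j: its total contribution over all generations.
jumps = [sum(gen[j] for gen in m) for j in range(len(atoms))]
```

Here the third atom is accepted as a migrant and as an offspring of both earlier atoms, so $X$ jumps by $3$ at $t=1.5$: exactly the phenomenon pointed out in Discussion \ref{discussion:finale} below.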
This intensity reads as follows: \begin{align*} &\ell_t \\ &= \mu + \int_{(0,t)} \Phi(t-r) dX_r \\ &=\mu + \int_{(0,t) \times \mathbb{R}_+} \Phi(t-v_1) \ind{\theta_1 \leq \mu} N(dv_1,d\theta_1) \\ &+ \sum_{n=2}^{+\infty} \int_{(0,t)\times \mathbb{R}_+} \Phi(t-v_n) \int_{(0,v_{n}]\times \mathbb{R}_+} \cdots \int_{(0,v_{2}]\times \mathbb{R}_+} \ind{\theta_1 \leq \mu} \prod_{i=2}^n \ind{\theta_i \leq \Phi(v_i-v_{i-1})} N(dv_1,d\theta_1) \cdots N(dv_n,d\theta_n) \\ &=\mu + \sum_{n=1}^{+\infty} \int_{(0,t)\times \mathbb{R}_+} \Phi(t-v_n) \int_{(0,v_{n}]\times \mathbb{R}_+} \cdots \int_{(0,v_{2}]\times \mathbb{R}_+} \ind{\theta_1 \leq \mu} \prod_{i=2}^n \ind{\theta_i \leq \Phi(v_i-v_{i-1})} N(dv_1,d\theta_1) \cdots N(dv_n,d\theta_n). \end{align*} The main result of this section is stated below. Its proof, based on Lemmata \ref{eq:CalculX} and \ref{lemma:calcullambda}, is postponed to Section \ref{section:proofmain}. \begin{theorem} \label{th:almostHawkes} Let $\mu>0$ and $\Phi$ satisfying Assumption \ref{assumption:Phi}. Recall the stochastic processes $X$ and $\ell$ defined in Definition-Proposition \ref{defintion:definitinHawkes}. We define the process $M:=(M_t)_{t\geq 0}$ by \begin{equation} \label{eq:M} M_t:= X_t-\int_{(0,t)} \ell_u du, \quad t\geq 0. \end{equation} Then $X$ is an $\mathbb{N}$-valued, non-decreasing, piecewise-constant process with predictable intensity process $\ell$, in the sense that the process $M$ is an $\mathbb F^N$-martingale (and so an $\mathbb F^X$-martingale, as $M$ is $\mathbb F^X$-adapted and $\mathcal F^X_\cdot \subset \mathcal F^N_\cdot $). \end{theorem} \begin{discussion} \label{discussion:finale} In other words, $X$ would be a Hawkes process if it were a counting process; it nevertheless has the same expectation as a Hawkes process.
Indeed \begin{enumerate} \item some atoms generate simultaneous jumps: any atom $(t_0,\theta_0)$ of $N$ with $\theta_0 \leq \mu$ will generate a jump for $X^{(1)}$ and for every $X^{(n)}$ admitting $t_0$ as an ancestor, so that $X_{t_0}-X_{{t_0}-}$ may be larger than one. \item some atoms are ignored by $X$: by construction any atom $(t_0,\theta_0)$ of $N$ with $\theta_0 > \max(\mu,\|\Phi\|_\infty)$ is ignored by the process, whereas the decision region $\ind{\theta\leq \lambda_t}$ is unbounded in the $\theta$-variable for a linear Hawkes process. \end{enumerate} This example leads to a question. More specifically, the intricate structure of the coefficients in Theorem \ref{th:explicitHawkes} for the pseudo-chaotic expansion of a Hawkes process, as sums and differences of indicator functions, suggests that some algebraic structure with respect to the time variable is required on the coefficients of the expansion to guarantee the counting feature of the process. We leave this issue for future research. \end{discussion} \noindent Before handling in Section \ref{section:proofmain} the proof of Theorem \ref{th:almostHawkes}, we start with some useful technical lemmata. \subsection{Technical results and proofs}\label{section:technicallemmata} \begin{lemma} \label{lemma:magic} Let $f$ be in $L^1(\mathbb{R}_+;dt)$. For any $n\in \mathbb{N}$ with $n \geq 3$, and for any $0\leq s \leq t$, \begin{equation} \label{eq:magic} \int_s^t \int_s^u \Phi_{n-1}(u-r) f(r) dr du = \int_s^{t} \int_s^{v_n} \int_s^{v_{n-1}} \cdots \int_s^{v_2} \prod_{i=2}^{n} \Phi(v_{i}-v_{i-1}) f({v_1}) dv_1 \cdots dv_{n}. \end{equation} \end{lemma} \begin{proof} For $g$ a mapping, let $\mathfrak F(g)$ denote the Fourier transform of $g$. Fix $s\geq 0$ and $n\geq 3$. Let $$ F(u):=\int_s^u \Phi_{n-1}(u-r) f(r) dr, \; u\geq s,$$ so that $F = \Phi_{n-1} \ast \tilde f$, with $\tilde f(v):= f(v) \ind{v\geq s}$.
We have that $$ \mathfrak F(F) = \mathfrak F(\Phi_{n-1}) \mathfrak F(\tilde f) = \mathfrak F(\Phi)^{n-1} \mathfrak F(\tilde f).$$ In addition, by definition of the mappings $\Phi_i$ (see Relation (\ref{eq:Phin})), each $\Phi_{i}$ is the $i$-fold convolution of $\Phi$ with itself; hence $\mathfrak F(\Phi_{n-1}) = (\mathfrak F(\Phi))^{n-1}$. Let $$ G(u):= \int_s^{u} \Phi(u-v_{n-1}) \int_s^{v_{n-1}} \Phi(v_{n-1}-v_{n-2}) \cdots \int_s^{v_2} \Phi(v_{2}-v_1) f({v_1}) dv_1 \cdots dv_{n-1};$$ we immediately get that $\mathfrak F(G) = (\mathfrak F(\Phi))^{n-2} \mathfrak F(\Phi) \mathfrak F(\tilde f) = (\mathfrak F(\Phi))^{n-1} \mathfrak F(\tilde f) = \mathfrak F(F)$. Using the inverse Fourier transform (on the left) we get that $ F(u) = G(u)$ for a.e. $u$, leading to $ \int_s^t F(u) du = \int_s^t G(u) du, $ which is Relation (\ref{eq:magic}). \end{proof} \noindent Lemma \ref{lemma:magic} allows one to prove Definition-Proposition \ref{defintion:definitinHawkes}, namely to prove that the series\\ $ X_t=\sum_{n=1}^{+\infty} X_t^{(n)}$ converges uniformly (in $t$) on compact sets; that is, for any $T>0$, $$ \lim_{p\to+\infty} \mathbb E\left[\sup_{t\in [0,T]} \left|X_t-\sum_{n=1}^{p} X_t^{(n)}\right|\right] =0.$$ \begin{proof} Set $T>0$. For $p\geq 2$, let $S_p:=\sum_{n=1}^p X^{(n)}$.
As each process $X^{(n)}$ is non-negative (being a counting process), we have: \begin{align*} &\mathbb E\left[\sup_{t \in[0,T]} \left|X_t-S_p(t)\right|\right] \\ &= \mathbb E\left[\sup_{t \in[0,T]} \left|\sum_{n=p+1}^{+\infty} X^{(n)}_t \right|\right] \\ &= \sum_{n=p+1}^{+\infty} \mathbb E\left[X^{(n)}_T \right] \\ &= \mu \sum_{n=p+1}^{+\infty} \int_0^T \int_0^{v_{n}} \cdots \int_0^{v_{2}} \prod_{i=2}^n \Phi(v_i-v_{i-1}) dv_1 \cdots dv_n \\ &= \mu \sum_{n=p+1}^{+\infty} \int_0^T \int_0^t \Phi_{n-1}(t-r) dr dt, \quad \textrm{ by Lemma \ref{lemma:magic}}\\ &\leq \mu T \sum_{n=p}^{+\infty} \|\Phi_{n}\|_1 = \mu T \frac{\|\Phi\|_1^p}{1-\|\Phi\|_1} \underset{p\to+\infty}{\longrightarrow} 0, \quad \textrm{ by Proposition \ref{prop:Phin}}. \end{align*} \end{proof} \begin{lemma} \label{lemma:Qnu} For $n\geq 3$ and $0 \leq s \leq u$, we set (with $\mathbb E_{s-}[\cdot] := \mathbb E[\cdot \,\vert\, \mathcal F_{s-}^N]$) \begin{align*} &\hspace{-3em} Q(n,u)\\ &\hspace{-3em} :=\mathbb E_{s-}\left[\int_{(0,u]} \Phi(u-r) dX_r^{(n-1)} \right]\\ &\hspace{-3em} =\mathbb E_{s-}\left[\int_{(0,u]\times \mathbb{R}_+} \hspace{-3em} \Phi(u-v_{n-1}) \int_{(0,v_{n-1}]\times \mathbb{R}_+} \hspace{-1em} \cdots \int_{(0,v_{2}]\times \mathbb{R}_+} \ind{\theta_1 \leq \mu} \prod_{i=2}^{n-1} \ind{\theta_i \leq \Phi(v_i-v_{i-1})} N(dv_1,d\theta_1) \cdots N(dv_{n-1},d\theta_{n-1}) \right]. \end{align*} We have \begin{align} \label{eq:Qnu} Q(n,u)&=h_u^{s,(n-1)} \nonumber\\ &+ \sum_{i=1}^{n-2} \int_s^u \cdots \int_s^{v_{n-i+1}^*} \Phi(u-v_{n-1}) \prod_{k=n-i+1}^{n-1} \Phi(v_k-v_{k-1}) h_{v_{n-i}}^{s,(n-i-1)} dv_{n-i} \ldots dv_{n-1} \nonumber\\ &+ \mu \int_s^u \int_s^{v_{n-1}} \cdots \int_s^{v_2} \Phi(u-v_{n-1}) \prod_{k=2}^{n-1} \Phi(v_k-v_{k-1}) dv_1\ldots dv_{n-1}, \end{align} where $v_{n-i+1}^* := v_{n-i+1}$ for $i\neq 1$ and $v_{n-i+1}^* := u$ for $i=1$, and where the mappings $h^{s,(n)}_\cdot$ are introduced in Notation \ref{eq:hn} below. An explicit computation gives that Relation (\ref{eq:Qnu}) is also valid for $n=2$ using Convention \ref{convention:sums}.
\end{lemma} \begin{proof} Using Fubini's theorem\footnote{here Fubini's theorem is used pathwise, as the integrals against $N$ are a.e. finite} to separate the atoms in $(0,s]$ from those in $(s,u]$ (the indicator in $\theta_{n-1}$ being integrated out against the compensator on $(s,u]$), we have \begin{align*} &\hspace{-3em} Q(n,u)\\ &\hspace{-3em} = \int_{(0,s]\times \mathbb{R}_+} \hspace{-3em} \Phi(u-v_{n-1}) \int_{(0,v_{n-1}]\times \mathbb{R}_+} \hspace{-1em} \cdots \int_{(0,v_{2}]\times \mathbb{R}_+} \ind{\theta_1 \leq \mu} \prod_{i=2}^{n-1} \ind{\theta_i \leq \Phi(v_i-v_{i-1})} N(dv_1,d\theta_1) \cdots N(dv_{n-1},d\theta_{n-1}) \\ &\hspace{-3em} + \int_s^u \Phi(u-v_{n-1}) \mathbb E_{s-}\left[\int_{(0,v_{n-1}]\times \mathbb{R}_+} \Phi(v_{n-1}-v_{n-2}) \int_{(0,v_{n-2}]\times \mathbb{R}_+} \hspace{-1em} \cdots \int_{(0,v_{2}]\times \mathbb{R}_+} \right.\\ &\left. \hspace{10em} \ind{\theta_1 \leq \mu} \prod_{i=2}^{n-2} \ind{\theta_i \leq \Phi(v_i-v_{i-1})} N(dv_1,d\theta_1) \cdots N(dv_{n-2},d\theta_{n-2}) \right] dv_{n-1} \\ &\hspace{-3em} = \int_{(0,s]\times \mathbb{R}_+} \hspace{-3em} \Phi(u-v_{n-1}) \int_{(0,v_{n-1}]\times \mathbb{R}_+} \hspace{-1em} \cdots \int_{(0,v_{2}]\times \mathbb{R}_+} \ind{\theta_1 \leq \mu} \prod_{i=2}^{n-1} \ind{\theta_i \leq \Phi(v_i-v_{i-1})} N(dv_1,d\theta_1) \cdots N(dv_{n-1},d\theta_{n-1}) \\ &\hspace{-3em} + \int_s^u \Phi(u-v_{n-1}) Q(n-1,v_{n-1}) dv_{n-1} \\ &\hspace{-3em} =h_u^{s,(n-1)} + \int_s^u \Phi(u-v_{n-1}) Q(n-1,v_{n-1}) dv_{n-1}.
\end{align*} So we have proved that $$ Q(n,u) = h_u^{s,(n-1)} + \int_s^u \Phi(u-v_{n-1}) Q(n-1,v_{n-1}) dv_{n-1}. $$ We then deduce by induction that \begin{align*} Q(n,u)&=h_u^{s,(n-1)} \\ &+ \sum_{i=1}^{n-2} \int_s^u \cdots \int_s^{v_{n-i+1}^*} \Phi(u-v_{n-1}) \prod_{k=n-i+1}^{n-1} \Phi(v_k-v_{k-1}) h_{v_{n-i}}^{s,(n-i-1)} dv_{n-i} \ldots dv_{n-1} \\ &+ \mu \int_s^u \int_s^{v_{n-1}} \cdots \int_s^{v_2} \Phi(u-v_{n-1}) \prod_{k=2}^{n-1} \Phi(v_k-v_{k-1}) dv_1\ldots dv_{n-1}. \end{align*} Indeed, assuming the previous relation is true for $Q(n,u)$ for a given $n\geq 3$ and for any $u$, we have \begin{align*} &Q(n+1,u) \\ &= h_u^{s,(n)} + \int_s^u \Phi(u-v_{n}) Q(n,v_{n}) dv_{n} \\ &= h_u^{s,(n)} + \int_s^u \Phi(u-v_{n}) h_{v_n}^{s,(n-1)} dv_{n} \\ &+ \int_s^u \Phi(u-v_{n}) \left[ \sum_{i=1}^{n-2} \int_s^{v_n} \cdots \int_s^{v_{n-i+1}} \Phi(v_n-v_{n-1}) \prod_{k=n-i+1}^{n-1} \Phi(v_k-v_{k-1}) h_{v_{n-i}}^{s,(n-i-1)} dv_{n-i} \ldots dv_{n-1} \right] dv_{n} \\ &+ \mu \int_s^u \Phi(u-v_{n}) \int_s^{v_n} \int_s^{v_{n-1}} \cdots \int_s^{v_2} \Phi(v_n-v_{n-1}) \prod_{k=2}^{n-1} \Phi(v_k-v_{k-1}) dv_1\ldots dv_{n-1} dv_{n} \\ &= h_u^{s,(n)} + \int_s^u \Phi(u-v_{n}) h_{v_n}^{s,(n-1)} dv_{n} \\ &+ \sum_{i=1}^{n-2} \int_s^u \Phi(u-v_{n}) \int_s^{v_n} \cdots \int_s^{v_{n-i+1}} \prod_{k=n-i+1}^{n} \Phi(v_k-v_{k-1}) h_{v_{n-i}}^{s,(n-i-1)} dv_{n-i} \ldots dv_{n-1} dv_{n} \\ &+ \mu \int_s^u \int_s^{v_n} \int_s^{v_{n-1}} \cdots \int_s^{v_2} \Phi(u-v_{n}) \prod_{k=2}^{n} \Phi(v_k-v_{k-1}) dv_1\ldots dv_{n-1} dv_{n} \\ &= h_u^{s,(n)} \\ &+ \sum_{i=0}^{n-2} \int_s^u \Phi(u-v_{n}) \int_s^{v_n} \cdots \int_s^{v_{n-i+1}} \prod_{k=n-i+1}^{n} \Phi(v_k-v_{k-1}) h_{v_{n-i}}^{s,(n-i-1)} dv_{n-i} \ldots dv_{n-1} dv_{n} \\ &+ \mu \int_s^u \int_s^{v_n} \int_s^{v_{n-1}} \cdots \int_s^{v_2} \Phi(u-v_{n}) \prod_{k=2}^{n} \Phi(v_k-v_{k-1}) dv_1\ldots dv_{n-1} dv_{n}, \end{align*} where for $i=0$ we use Convention \ref{convention:sums}.
Hence \begin{align*} &Q(n+1,u) \\ &= h_u^{s,(n)} \\ &+ \sum_{j=1}^{(n+1)-2} \int_s^u \Phi(u-v_{n}) \int_s^{v_n} \cdots \int_s^{v_{(n+1)-j+1}^*} \prod_{k=(n+1)-j+1}^{n} \Phi(v_k-v_{k-1}) h_{v_{(n+1)-j}}^{s,((n+1)-j-1)} dv_{(n+1)-j} \ldots dv_{n-1} dv_{n} \\ &+ \mu \int_s^u \int_s^{v_n} \int_s^{v_{n-1}} \cdots \int_s^{v_2} \Phi(u-v_{n}) \prod_{k=2}^{(n+1)-1} \Phi(v_k-v_{k-1}) dv_1\ldots dv_{(n+1)-1}, \end{align*} which gives the result. \end{proof} \subsection{Proof of Theorem \ref{th:almostHawkes}} \label{section:proofmain} The proof consists in showing that the process $M$ defined by (\ref{eq:M}) is an $\mathbb F^N$-martingale, that is, for any $0\leq s\leq t$, \begin{equation} \label{eq:martingale} \mathbb E_{s-}\left[X_t-X_s\right] = \int_s^t \mathbb E_{s-}[\ell_r] dr, \end{equation} where for simplicity $\mathbb E_{s-}[\cdot] := \mathbb E[\cdot \vert \mathcal F_{s-}^N]$. This result is a direct consequence of Lemmata \ref{eq:CalculX} and \ref{lemma:calcullambda} below, in which we compute both terms in (\ref{eq:martingale}). To this end we introduce the following notation. \begin{notation} \label{eq:hn} For $s \geq 0$, $v\geq s$ and $n\geq 1$, we set $$ h_v^s:=\sum_{n=1}^{+\infty} h_v^{s,(n)},$$ with \begin{align*} &h_v^{s,(n)}\\ &:= \int_{(0,s)} \Phi(v-v_n) dX_{v_n}^{(n)} \\ &=\int_{(0,s)\times \mathbb{R}_+} \Phi(v-v_n) \int_{(0,v_{n}]\times \mathbb{R}_+} \cdots \int_{(0,v_{2}]\times \mathbb{R}_+} \ind{\theta_1 \leq \mu} \prod_{i=2}^n \ind{\theta_i \leq \Phi(v_i-v_{i-1})} N(dv_1,d\theta_1) \cdots N(dv_n,d\theta_n). \end{align*} \end{notation} \noindent Lemma \ref{eq:CalculX} first computes the left-hand side of (\ref{eq:martingale}). \begin{lemma} \label{eq:CalculX} For any $0 \leq s \leq t$ we have \begin{align} \mathbb E_{s-}\left[\int_{(s,t]} dX_r \right] = \int_s^t (\mu+h_u^s) du + \int_s^t \int_s^{u} \Psi(u-r) (\mu + h_{r}^{s}) dr du.
\end{align} \end{lemma} \begin{proof} We have \begin{equation} \label{eq:cond0} \mathbb E_{s-}\left[\int_{(s,t]} dX_r^{(1)} \right] = \mathbb E_{s-}\left[\int_{(s,t]\times \mathbb{R}_+} \ind{\theta_1 \leq \mu} N(dv_1,d\theta_1)\right]=\mu (t-s). \end{equation} Let $n\geq 2$. \begin{align*} &\hspace{-5em}\mathbb E_{s-}\left[\int_{(s,t]} dX_r^{(n)} \right] \\ &\hspace{-5em}=\mathbb E_{s-}\left[\int_{(s,t]\times \mathbb{R}_+} \int_{(0,v_{n}]\times \mathbb{R}_+} \cdots \int_{(0,v_{2}]\times \mathbb{R}_+} \ind{\theta_1 \leq \mu} \prod_{i=2}^n \ind{\theta_i \leq \Phi(v_i-v_{i-1})} N(dv_1,d\theta_1) \cdots N(dv_n,d\theta_n) \right] \\ &\hspace{-5em} =\int_s^t \mathbb E_{s-}\left[\int_{(0,v_{n}]\times \mathbb{R}_+} \hspace{-3em} \Phi(v_n-v_{n-1}) \int_{(0,v_{n-1}]\times \mathbb{R}_+} \hspace{-1em} \cdots \int_{(0,v_{2}]\times \mathbb{R}_+} \ind{\theta_1 \leq \mu} \prod_{i=2}^{n-1} \ind{\theta_i \leq \Phi(v_i-v_{i-1})} N(dv_1,d\theta_1) \cdots N(dv_{n-1},d\theta_{n-1}) \right] dv_n\\ &\hspace{-5em} =\int_s^t Q(n,v_n) dv_n. \end{align*} Hence, by Lemma \ref{lemma:Qnu}, \begin{align} \label{eq:cond1} \mathbb E_{s-}\left[\int_{(s,t]} dX_r^{(n)} \right] &=\int_s^t h_{v_n}^{s,(n-1)} dv_n \nonumber\\ &+\sum_{i=1}^{n-2} \int_s^t \int_s^{v_n} \cdots \int_s^{v_{n-i+1}} \prod_{k=n-i+1}^{n} \Phi(v_k-v_{k-1}) h_{v_{n-i}}^{s,(n-i-1)} dv_{n-i} \ldots dv_n \nonumber\\ &+\mu \int_s^t \int_s^{v_n} \cdots \int_s^{v_2} \prod_{k=2}^{n} \Phi(v_k-v_{k-1}) dv_1\ldots dv_n.
\end{align} Using Lemma \ref{lemma:magic}, \begin{align*} &\sum_{i=1}^{n-2} \int_s^t \int_s^{v_n} \cdots \int_s^{v_{n-i+1}} \prod_{k=n-i+1}^{n} \Phi(v_k-v_{k-1}) h_{v_{n-i}}^{s,(n-i-1)} dv_{n-i} \ldots dv_n \\ &=\sum_{i=1}^{n-2} \int_s^t \int_s^{u} \Phi_i(u-r) h_{r}^{s,(n-i-1)} dr du, \end{align*} and $$\mu \int_s^t \int_s^{v_{n}} \cdots \int_s^{v_2} \prod_{k=2}^{n} \Phi(v_k-v_{k-1}) dv_1\ldots dv_n=\mu \int_s^t \int_s^{u} \Phi_{n-1}(u-r) dr du.$$ Plugging back these expressions in (\ref{eq:cond1}) we get \begin{align} \label{eq:cond2} &\mathbb E_{s-}\left[\int_{(s,t]} dX_r^{(n)} \right] \nonumber\\ &=\int_s^t h_{u}^{s,(n-1)} du +\sum_{i=1}^{n-2} \int_s^t \int_s^{u} \Phi_i(u-r) h_{r}^{s,(n-i-1)} dr du +\mu \int_s^t \int_s^{u} \Phi_{n-1}(u-r) dr du. \end{align} We now sum the previous quantity over $n \geq 2$; the main term is the second one, which we treat separately below. Note also that, using Convention \ref{convention:sums} for $n=2$, we get \begin{align} \label{eq:tempdoublesum} &\sum_{n=2}^{+\infty} \sum_{i=1}^{n-2} \int_s^t \int_s^{u} \Phi_i(u-r) h_{r}^{s,(n-i-1)} dr du \nonumber\\ &=\sum_{n=3}^{+\infty} \sum_{i=1}^{n-2} \int_s^t \int_s^{u} \Phi_i(u-r) h_{r}^{s,(n-i-1)} dr du \nonumber\\ &= \sum_{n=3}^{+\infty} \sum_{j=1}^{n-2} \int_s^t \int_s^{u} \Phi_{n-j-1}(u-r) h_{r}^{s,(j)} dr du \nonumber\\ &= \sum_{j=1}^{+\infty} \int_s^t \int_s^{u} h_{r}^{s,(j)} \left(\sum_{n=3}^{+\infty} \ind{j\leq n-2} \Phi_{n-j-1}(u-r) \right)dr du \nonumber\\ &= \int_s^t \int_s^{u} h_{r}^{s,(1)} \left(\sum_{n=3}^{+\infty} \Phi_{n-2}(u-r) \right)dr du + \sum_{j=2}^{+\infty} \int_s^t \int_s^{u} h_{r}^{s,(j)} \left(\sum_{n=j+2}^{+\infty} \Phi_{n-j-1}(u-r) \right)dr du \nonumber\\ &= \int_s^t \int_s^{u} h_{r}^{s,(1)} \left(\sum_{k=1}^{+\infty} \Phi_{k}(u-r) \right)dr du + \sum_{j=2}^{+\infty} \int_s^t \int_s^{u} h_{r}^{s,(j)} \left(\sum_{k=1}^{+\infty} \Phi_{k}(u-r) \right)dr du \nonumber\\ &= \int_s^t \int_s^{u} h_{r}^{s,(1)} \Psi(u-r) dr du + \sum_{j=2}^{+\infty} \int_s^t
\int_s^{u} h_{r}^{s,(j)} \Psi(u-r) dr du \nonumber\\ &=\sum_{j=1}^{+\infty} \int_s^t \int_s^{u} h_{r}^{s,(j)} \Psi(u-r) dr du \nonumber\\ &=\int_s^t \int_s^{u} \Psi(u-r) h_{r}^{s} dr du. \end{align} With these computations at hand, Relations (\ref{eq:cond1}) and (\ref{eq:cond2}) lead to \begin{align*} &\mathbb E_{s-}\left[\int_{(s,t]} dX_r \right] \\ &=\mathbb E_{s-}\left[\int_{(s,t]} dX_r^{(1)} \right] + \sum_{n=2}^{+\infty} \mathbb E_{s-}\left[\int_{(s,t]} dX_r^{(n)} \right] \\ &= \int_s^t \mu du + \sum_{n=2}^{+\infty} \int_s^t h_{u}^{s,(n-1)} du + \sum_{n=2}^{+\infty} \sum_{i=1}^{n-2} \int_s^t \int_s^{u} \Phi_i(u-r) h_{r}^{s,(n-i-1)} dr du +\mu \sum_{n=2}^{+\infty} \int_s^t \int_s^{u} \Phi_{n-1}(u-r) dr du \\ &= \int_s^t (\mu+h_u^s) du + \int_s^t \int_s^{u} \Psi(u-r) (\mu + h_{r}^{s}) dr du, \end{align*} which concludes the proof. \end{proof} \noindent We now compute the right-hand side in (\ref{eq:martingale}). \begin{lemma} \label{lemma:calcullambda} For any $0 \leq s \leq t$ we have: \begin{align} \mathbb E_{s-}\left[\int_s^t \ell_r dr \right] = \int_s^t (\mu+h_u^s) du + \int_s^t \int_s^{u} \Psi(u-r) (\mu + h_{r}^{s}) dr du. \end{align} \end{lemma} \begin{proof} The proof is rather similar to that of Lemma \ref{eq:CalculX}; we provide it for the sake of completeness.\\ Let $0 \leq s \leq r \leq t$. Recall Notation \ref{eq:hn}. We have \begin{align} \label{eq:calcultemplambda} \mathbb E_{s-}\left[\ell_r\right] &= \mathbb E_{s-}\left[\mu + \int_{(0,r)} \Phi(r-u) dX_u \right] \nonumber\\ &= \mu + \sum_{n=1}^{+\infty} \mathbb E_{s-}\left[\int_{(0,r)} \Phi(r-u) dX_u^{(n)} \right] \nonumber\\ &= \mu + \sum_{n=1}^{+\infty} \int_{(0,s)} \Phi(r-u) dX_u^{(n)} +\sum_{n=1}^{+\infty} \mathbb E_{s-}\left[\int_{(s,r)} \Phi(r-u) dX_u^{(n)} \right] \nonumber\\ &= \mu + h_r^s + \sum_{n=1}^{+\infty} \mathbb E_{s-}\left[\int_{(s,r)} \Phi(r-u) dX_u^{(n)} \right]. 
\end{align} Let $n\geq 2$. \begin{align*} &\mathbb E_{s-}\left[\int_{(s,r)} \Phi(r-u) dX_u^{(n)} \right]\\ &= \mathbb E_{s-}\left[\int_{(s,r)} \Phi(r-v_n) \int_{(0,v_n]\times \mathbb{R}_+} \hspace{-1em} \cdots \int_{(0,v_{2}]\times \mathbb{R}_+} \ind{\theta_1 \leq \mu} \prod_{i=2}^{n} \ind{\theta_i \leq \Phi(v_i-v_{i-1})} N(dv_1,d\theta_1) \cdots N(dv_{n},d\theta_{n}) \right] \\ &= \int_s^r \Phi(r-v_n) \mathbb E_{s-}\left[\int_{(0,v_n]\times \mathbb{R}_+} \Phi(v_n-v_{n-1}) \int_{(0,v_{n-1}]\times \mathbb{R}_+} \cdots \int_{(0,v_{2}]\times \mathbb{R}_+} \right.\\ &\left. \hspace{5em} \ind{\theta_1 \leq \mu} \prod_{i=2}^{n-1} \ind{\theta_i \leq \Phi(v_i-v_{i-1})} N(dv_1,d\theta_1) \cdots N(dv_{n-1},d\theta_{n-1}) \right] dv_n \\ &=\int_s^r \Phi(r-v_n) Q(n,v_n) dv_n\\ &=\int_s^r \Phi(r-v_{n}) h_{v_{n}}^{s,(n-1)} dv_{n}\\ &+\sum_{i=1}^{n-2} \int_s^r \Phi(r-v_{n}) \int_s^{v_{n}} \cdots \int_s^{v_{n-i+1}} \prod_{k=n-i+1}^{n} \Phi(v_k-v_{k-1}) h_{v_{n-i}}^{s,(n-i-1)} dv_{n-i} \ldots dv_{n-1} dv_{n}\\ &+ \mu \int_s^r \Phi(r-v_{n}) \int_s^{v_{n}} \int_s^{v_{n-1}} \cdots \int_s^{v_2} \prod_{k=2}^{n} \Phi(v_k-v_{k-1}) dv_1\ldots dv_{n-1} dv_{n} \end{align*} where the last equality follows from Lemma \ref{lemma:Qnu}. 
Integrating the previous expression in $r$ on $(s,t]$ and using Lemma \ref{lemma:magic} one gets \begin{align*} &\int_s^t \mathbb E_{s-}\left[\int_{(s,r)} \Phi(r-u) dX_u^{(n)} \right] dr\\ &=\int_s^t \int_s^{v_{n}} \Phi(v_n-v_{n+1}) h_{v_{n+1}}^{s,(n-1)} dv_{n+1} dv_n\\ &+\sum_{i=1}^{n-2} \int_s^t \int_s^{v_{n+1}} \int_s^{v_{n}} \cdots \int_s^{v_{n-i+1}} \prod_{k=n-i+1}^{n+1} \Phi(v_k-v_{k-1}) h_{v_{n-i}}^{s,(n-i-1)} dv_{n-i} \ldots dv_{n-1} dv_{n} dv_{n+1}\\ &+ \mu \int_s^t \int_s^{v_{n+1}} \int_s^{v_{n}} \int_s^{v_{n-1}} \cdots \int_s^{v_2} \prod_{k=2}^{n+1} \Phi(v_k-v_{k-1}) dv_1\ldots dv_{n-1} dv_{n} dv_{n+1}\\ &=\int_s^t \int_s^u \Phi(u-r) h_{r}^{s,(n-1)} dr du\\ &+\sum_{i=1}^{n-2} \int_s^t \int_s^u \Phi_{i+1}(u-r) h_{r}^{s,(n-i-1)} dr du \\ &+ \mu \int_s^t \int_s^u \Phi_n(u-r) dr du. \end{align*} Using the same computations as in (\ref{eq:tempdoublesum}) we deduce (using Convention \ref{convention:sums}) that \begin{align*} &\sum_{n=1}^{+\infty} \int_s^t \mathbb E_{s-}\left[\int_{(s,r)} \Phi(r-u) dX_u^{(n)} \right] dr\\ &=\int_s^t \mathbb E_{s-}\left[\int_{(s,r)} \Phi(r-u) dX_u^{(1)} \right] dr + \sum_{n=2}^{+\infty} \int_s^t \mathbb E_{s-}\left[\int_{(s,r)} \Phi(r-u) dX_u^{(n)} \right] dr\\ &=\mu \int_s^t \int_s^u \Phi(u-r) dr du+\sum_{n=2}^{+\infty} \int_s^t \int_s^u \Phi(u-r) h_{r}^{s,(n-1)} dr du\\ &+\sum_{n=3}^{+\infty} \sum_{i=1}^{n-2} \int_s^t \int_s^u \Phi_{i+1}(u-r) h_{r}^{s,(n-i-1)} dr du + \sum_{n=2}^{+\infty} \mu \int_s^t \int_s^u \Phi_n(u-r) dr du \\ &=\mu \int_s^t \int_s^u \Psi(u-r) dr du \\ &+\sum_{n=2}^{+\infty} \int_s^t \int_s^u \Phi(u-r) h_{r}^{s,(n-1)} dr du\\ &+\sum_{n=3}^{+\infty} \sum_{j=2}^{(n+1)-2} \int_s^t \int_s^u \Phi_{j}(u-r) h_{r}^{s,(n+1-j-1)} dr du \\ &=\mu \int_s^t \int_s^u \Psi(u-r) dr du +\sum_{n=2}^{+\infty} \int_s^t \int_s^u \Phi(u-r) h_{r}^{s,(n-1)} dr du +\sum_{n=2}^{+\infty} \sum_{j=2}^{n-2} \int_s^t \int_s^u \Phi_{j}(u-r) h_{r}^{s,(n-j-1)} dr du\\ &=\mu \int_s^t \int_s^u \Psi(u-r) dr du +\sum_{n=2}^{+\infty} 
\sum_{j=1}^{n-2} \int_s^t \int_s^u \Phi_{j}(u-r) h_{r}^{s,(n-j-1)} dr du\\ &=\mu \int_s^t \int_s^u \Psi(u-r) dr du + \int_s^t \int_s^u \Psi(u-r) h_{r}^{s} dr du \\ &=\int_s^t \int_s^u \Psi(u-r) (\mu+h_{r}^{s}) dr du, \end{align*} where we have used Relation (\ref{eq:tempdoublesum}). Thus we have proved that \begin{align*} \int_s^t \mathbb E_{s-}\left[\int_{(s,r)} \Phi(r-u) dX_u \right] dr &= \sum_{n=1}^{+\infty} \int_s^t \mathbb E_{s-}\left[\int_{(s,r)} \Phi(r-u) dX_u^{(n)} \right] dr \\ &=\int_s^t \int_s^u \Psi(u-r) (\mu+h_{r}^{s}) dr du. \end{align*} Hence, coming back to Relation (\ref{eq:calcultemplambda}), we obtain $$\mathbb E_{s-}\left[\int_s^t \ell_r dr\right] = \int_s^t (\mu + h_u^s) du + \int_s^t \int_s^u \Psi(u-r) (\mu+h_{r}^{s}) dr du.$$ \end{proof}
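As a numerical sanity check of the resolvent identity used throughout, namely that $\Psi=\sum_{k\geq 1}\Phi_k$ solves the renewal equation $\Psi=\Phi+\Phi*\Psi$ (with $\Phi_k$ the $k$-fold convolution of $\Phi$), one can discretize the convolution. The sketch below assumes an exponential kernel $\Phi(t)=\alpha\beta e^{-\beta t}$ with branching ratio $\alpha<1$, for which $\Psi$ is known in closed form, $\Psi(t)=\alpha\beta e^{-\beta(1-\alpha)t}$; the kernel choice and parameter values are illustrative and not taken from the text.

```python
import numpy as np

# Exponential Hawkes kernel Phi(t) = alpha*beta*exp(-beta*t), branching ratio alpha < 1.
alpha, beta = 0.5, 2.0
dt = 1e-3
t = np.arange(0.0, 5.0, dt)

phi = alpha * beta * np.exp(-beta * t)
# Closed-form resolvent Psi = sum_{k>=1} Phi^{*k} for the exponential kernel.
psi = alpha * beta * np.exp(-beta * (1.0 - alpha) * t)

# Renewal equation Psi = Phi + Phi * Psi, with the convolution on [0, t]
# discretized by a left Riemann sum.
conv = np.convolve(phi, psi)[: len(t)] * dt
residual = np.max(np.abs(psi - (phi + conv)))
print(residual)
```

The residual is of the order of the step size $dt$; the same check applies to any sub-critical kernel with $\int_0^\infty \Phi < 1$, replacing the closed-form $\Psi$ by a truncation of the series $\sum_k \Phi_k$.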
\section{Introduction and main results} In [9], Hawkins showed that a deformation of the differential graded algebra of differential forms $\Omega^*(P)$ on a manifold $P$ gives rise to a Poisson tensor on $P$ (which characterizes the deformation) and a torsion-free and flat contravariant connection whose metacurvature vanishes. In the Riemannian case, Hawkins showed that if a Riemannian manifold $P$ is deformed into a real spectral triple, the Riemannian metric and the Poisson tensor (which characterizes the deformation) are compatible in the following sense: \begin{enumerate} \item the metric contravariant connection ${\cal D}$ associated to the metric and the Poisson tensor is flat. \item The metacurvature of ${\cal D}$ vanishes. \item The Poisson tensor $\pi$ is compatible with the Riemannian volume $\epsilon$: $$d(i_{\pi}\epsilon)=0.$$\end{enumerate} The metric contravariant connection is a torsion-free contravariant connection associated naturally to any pair consisting of a Riemannian metric and a Poisson tensor. It appeared first in [3]. The metacurvature, introduced by Hawkins in [9], is a $(2,3)$-tensor field (symmetric in the contravariant indices and antisymmetric in the covariant indices) associated naturally to any torsion-free and flat contravariant connection. In this paper, we will construct a large class of smooth manifolds which satisfy the necessary conditions (presented by Hawkins in [9]) to deform the differential graded algebra of differential forms. We will also give a large class of (pseudo-)Riemannian manifolds with a Poisson tensor compatible with the metric in the sense of Hawkins. In order to state the main results of this paper, let us recall some classical results on symplectic Lie groups and on solutions of the classical Yang-Baxter equation. Let $G$ be a Lie group and ${\cal G}$ its Lie algebra. The group $G$ is a symplectic Lie group if there exists on $G$ a left invariant symplectic form. 
It is clear that $G$ is a symplectic Lie group if and only if there exists on ${\cal G}$ a non-degenerate 2-form $\omega$ such that $$\omega([x,y],z)+\omega([y,z],x)+\omega([z,x],y)=0,\quad x,y,z\in{\cal G}.\eqno(1)$$ A Lie algebra with such a 2-form is called a symplectic Lie algebra. It has been well known since the end of the 1960s (see e.g. [5]) that, if $({\cal G},\omega)$ is a symplectic Lie algebra, the formula $$\omega(A_xy,z)=-\omega(y,[x,z])\eqno(2)$$defines a product $A:{\cal G}\times {\cal G}\longrightarrow {\cal G}$ which verifies:\begin{enumerate} \item $A_xy-A_yx=[x,y];$\item $A_{[x,y]}z-(A_xA_yz-A_yA_xz)=0.$\end{enumerate}Thus $A$ defines on $G$ a flat and torsion-free linear (covariant) connection (or equivalently an affine structure) which is left invariant. Let ${\cal G}$ be a Lie algebra and $r\in{\cal G}\wedge{\cal G}$. We will also denote by $r:{\cal G}^*\longrightarrow{\cal G}$ the linear map induced by $r$. The bi-vector $r$ satisfies the classical Yang-Baxter equation if $$[r,r]=0,\eqno(Y-B)$$where $[r,r]\in{\cal G}\wedge{\cal G}\wedge{\cal G}$ is defined by $$[r,r](\alpha,\beta,\gamma)=\alpha([r(\beta),r(\gamma)])+\beta([r(\gamma),r(\alpha)])+ \gamma([r(\alpha),r(\beta)]).$$ Solutions of $(Y-B)$ are strongly related to symplectic Lie algebras and hence to left invariant affine structures. Let us explain this relation. One can observe that giving $r\in {\cal G}\wedge{\cal G}$ is equivalent to giving a vector subspace $S_r\subset\cal G$ and a non-degenerate 2-form $\omega_r\in\wedge^2S^*_r$. Indeed, for $r\in {\cal G}\wedge{\cal G}$, we put $S_r=Imr$ and $\omega_r(u,v)=r(r^{-1}(u),r^{-1}(v))$ where $u,v\in S_r$ and $r^{-1}(u)$ is any preimage of $u$ under $r$. Conversely, let $(S,\omega)$ be a vector subspace of $\cal G$ with a non-degenerate 2-form. 
The 2-form $\omega$ defines an isomorphism $\omega^b:S\longrightarrow S^*$ by $\omega^b(u)=\omega(u,.)$; we denote by $\omega^\#:S^*\longrightarrow S$ its inverse, and we put $r=\omega^\#\circ i^*$ where $i^*:{\cal G}^*\longrightarrow S^*$ is the dual of the inclusion $i:S\hookrightarrow\cal G$. With this observation in mind, the following classical proposition gives another description of the solutions of $(Y-B)$. \begin{pr} Let $r\in {\cal G}\wedge{\cal G}$ and $(Imr,\omega_r)$ its associated subspace. The following assertions are equivalent: \begin{enumerate}\item $r$ is a solution of $(Y-B)$. \item $(Imr,\omega_r) $ is a symplectic subalgebra of $\cal G$.\end{enumerate} \end{pr} A solution $r$ of $(Y-B)$ will be called abelian, unimodular, etc., if $Imr$ is an abelian subalgebra, a unimodular subalgebra, etc. \bigskip {\bf Remark.}\begin{enumerate}\item Let ${\cal G}$ be a Lie algebra and $S$ an even-dimensional abelian subalgebra of ${\cal G}$. Any non-degenerate 2-form $\omega$ on $S$ satisfies assertion 2 in Proposition 1.1 and hence $(S,\omega)$ defines a solution of $(Y-B)$. \item Note that any solution of $(Y-B)$ on a compact Lie algebra is abelian.\end{enumerate} Let $P$ be a smooth manifold and ${\cal G}$ a Lie algebra which acts on $P$, i.e., there exists a morphism of Lie algebras $\Gamma:{\cal G}\longrightarrow{\cal X}(P)$ from ${\cal G}$ to the Lie algebra of vector fields on $P$. Let $r\in\wedge^2{\cal G}$ be a solution of $(Y-B)$. If $r=\sum_{i<j}a_{ij}u_i\wedge u_j$, put $${\cal D}^{\Gamma}_\alpha\beta:=\sum_{i<j}a_{ij}\left(\alpha(U_i)L_{U_j}\beta-\alpha(U_j)L_{U_i}\beta \right),\eqno(3)$$where $\alpha,\beta$ are two differential 1-forms on $P$ and $U_i=\Gamma(u_i)$. The restriction of $\Gamma$ to $Imr$ defines on $P$ a singular foliation which coincides with the symplectic foliation of the Poisson tensor $\Gamma(r)$ if the action of $Imr$ is locally free. 
In this case, since $Imr$ is a symplectic Lie algebra, the product given by $(2)$ defines, on any symplectic leaf of $\Gamma(r)$, a flat and torsion-free covariant connection. Thus any symplectic leaf of $\Gamma(r)$ becomes an affine manifold. In fact, our main result in this paper asserts that ${\cal D}^\Gamma$ given by $(3)$ is a flat and torsion-free contravariant connection associated to $\Gamma(r)$. Hence ${\cal D}^\Gamma$ induces on any symplectic leaf an affine structure which is the affine structure described above. Let us now state our main results. \begin{th} Let $P$ be a differentiable manifold, ${\cal G}$ a Lie algebra with $\Gamma:{\cal G}\longrightarrow {\cal X}(P)$ a Lie algebra morphism from ${\cal G}$ to the Lie algebra of vector fields and $r\in\wedge^2{\cal G}$ a solution of the classical Yang-Baxter equation such that $Imr$ is a unimodular Lie algebra. Then, for any volume form $\epsilon$ on $P$ such that $L_{\Gamma(u)}\epsilon=0$ for each $u\in Imr$, $\Gamma(r)$ is compatible with $\epsilon$, i.e., $$d(i_{\Gamma(r)}\epsilon)=0.$$\end{th} \begin{th} Let $P$ be a differentiable manifold, ${\cal G}$ a Lie algebra with $\Gamma:{\cal G}\longrightarrow {\cal X}(P)$ a Lie algebra morphism from ${\cal G}$ to the Lie algebra of vector fields and $r\in\wedge^2{\cal G}$ a solution of the classical Yang-Baxter equation. Then: $i)$ ${\cal D}^\Gamma$ given by (3) defines a torsion-free and flat contravariant connection ${\cal D}^{\Gamma}$ associated to the Poisson tensor $\Gamma(r)$ which depends only on $r$ and $\Gamma$. $ii)$ If $P$ is a Riemannian manifold and $\Gamma(u)$ is a Killing vector field for any $u\in Imr$, then ${\cal D}^{\Gamma}$ is the metric contravariant connection associated to the metric and $\Gamma(r)$. 
$iii)$ If the restriction of $\Gamma$ to $Imr$ is a locally free action, then the metacurvature of ${\cal D}^\Gamma$ vanishes.\end{th} There are some interesting implications of Theorem 1.1 and Theorem 1.2: \begin{enumerate} \item Let $G$ be a Lie group and ${\cal G}$ its Lie algebra identified to $T_eG$. Let $\Gamma^l:{\cal G}\longrightarrow{\cal X}(G)$ and $\Gamma^r:{\cal G}\longrightarrow{\cal X}(G)$ be respectively the left and the right action of ${\cal G}$ on $G$. For any $\alpha\in{\cal G}^*$, we denote by $\alpha^l$ (resp. $\alpha^r$) the left-invariant (resp. right-invariant) differential 1-form on $G$ associated to $\alpha$. For any solution $r\in\wedge^2{\cal G}$ of $(Y-B)$, one can easily check that ${\cal D}^{\Gamma^l}$ and ${\cal D}^{\Gamma^r}$ are given by $${\cal D}^{\Gamma^l}_{\alpha^l}\beta^l=-(ad^*_{r(\alpha)}\beta)^l\quad \mbox{and}\quad {\cal D}^{\Gamma^r}_{\alpha^r}\beta^r=(ad^*_{r(\alpha)}\beta)^r.$$ Let $<,>^l$ (resp. $<,>^r$) be a left-invariant (resp. right-invariant) pseudo-Riemannian metric on $G$. If $r$ is unimodular, then the Poisson tensor $\Gamma^r(r)$ (resp. $\Gamma^l(r)$) is compatible in the sense of Hawkins with $<,>^l$ (resp. $<,>^r$). \item Since any Lie algebra admits a non-trivial solution of $(Y-B)$ (see [6]), according to Theorem 1.2, any locally free action of ${\cal G}$ on a manifold $P$ gives rise to a non-trivial Poisson tensor with a non-trivial, torsion-free and flat contravariant connection whose metacurvature vanishes; so the necessary conditions to deform the differential graded algebra of differential forms $\Omega^*(P)$ are satisfied (see [9]). \item Any locally free action by isometries of a unimodular symplectic Lie group $G$ on a pseudo-Riemannian manifold $P$ gives rise on $P$ to a Poisson tensor which is compatible with the metric in the sense of Hawkins. 
\item Any symplectic nilmanifold is a quotient of a nilpotent symplectic Lie group by a discrete co-compact subgroup (see [1]), so any symplectic nilmanifold admits a torsion-free and flat contravariant connection whose metacurvature vanishes. Symplectic nilpotent groups were classified by Medina and Revoy [14]. \item The affine group $K^n\times GL(K^n)$ ($K= {\rm I}\!{\rm R} $ or $ \;{}^{ {}_\vert }\!\!\!{\rm C} $) admits many left invariant symplectic forms (see [2]), which implies that its Lie algebra carries many invertible solutions of $(Y-B)$. So any locally free action of the affine group on a manifold gives rise to a Poisson tensor with a non-trivial, torsion-free and flat contravariant connection whose metacurvature vanishes.\item Let $G$ be a Lie group with a bi-invariant pseudo-Riemannian metric and $r$ a unimodular solution of $(Y-B)$. For any discrete, co-compact subgroup $\Lambda$ of $G$, $G$ acts on the compact manifold $P:=G/\Lambda$ by isometries, so we get a Poisson tensor on the compact pseudo-Riemannian manifold $P$ compatible with the pseudo-Riemannian metric in the sense of Hawkins. A connected Lie group $G$ admits a bi-invariant Riemannian metric if and only if it is isomorphic to the Cartesian product of a compact group and a commutative group (see [14]). Any solution of $(Y-B)$ on the Lie algebra of such a group is abelian. So any triple $(G,\Lambda,S)$, where $G$ is a Lie group with a bi-invariant Riemannian metric, $\Lambda$ is a discrete and co-compact subgroup of $G$ and $S$ an even-dimensional subalgebra of the Lie algebra of $G$, gives rise to a compact Riemannian manifold with a Poisson tensor compatible with the metric in the sense of Hawkins. Connected Lie groups which admit a bi-invariant pseudo-Riemannian metric were classified in [11]. 
We will now give an example of a compact Lorentzian manifold with a Poisson tensor compatible with the Lorentzian metric in the sense of Hawkins and such that the Poisson tensor cannot be constructed locally by commuting Killing vector fields. This shows that Theorem 6.6 in [9] is false in the Lorentzian case. For $\lambda=(\lambda_1,\lambda_2)\in {\rm I}\!{\rm R} ^2$, $0<\lambda_1\leq\lambda_2$, the oscillator group of dimension 6 is the connected and simply connected Lie group $G_\lambda$ whose Lie algebra is $${\cal G}_\lambda:=vect\{e_{-1},e_0,e_1,e_2,\check{e}_1,\check{e}_2\}$$with brackets $$[e_{-1},e_j]=\lambda_j\check{e}_j,\qquad [e_j,\check{e}_j]=e_0,\qquad [e_{-1},\check{e}_j]=-\lambda_j e_j$$and the unspecified brackets are either zero or given by antisymmetry. These groups were introduced by Medina (see [12]) as the only non-commutative simply connected solvable Lie groups which have a bi-invariant Lorentzian metric. Their discrete co-compact subgroups were classified in [12]. These groups have discrete co-compact subgroups if and only if the set $\{\lambda_1,\lambda_2\}$ generates a discrete subgroup of $( {\rm I}\!{\rm R} ,+)$. Let $\Lambda$ be a discrete co-compact subgroup of $G_\lambda$. The bi-invariant Lorentzian metric on $G_\lambda$ defines a Lorentzian metric on $P:=G_\lambda/\Lambda$ and we get an action of $G_\lambda$ on $P$ by isometries. Consider now $r\in\wedge^2{\cal G}_\lambda$ given by $$r=e_0\wedge e_1+e_2\wedge \check{e}_1.$$It is easy to check that $r$ is a solution of the Yang-Baxter equation and that $Imr$ is a nilpotent, hence unimodular, Lie algebra. According to Theorem 1.1, we get a Poisson tensor on $P$ which is compatible with the Lorentzian metric. 
But the Poisson tensor is not parallel with respect to the metric contravariant connection and hence it cannot be constructed locally from commuting Killing vector fields.\end{enumerate} {\bf Remark.} \begin{enumerate}\item In Theorem 1.2, if the action of $Imr$ is not locally free, the metacurvature does not vanish in general. For instance, consider the 2-dimensional Lie algebra ${\cal G}=Vect\{e_1,e_2\}$ with $[e_1,e_2]=e_1$ and the action $\Gamma:{\cal G}\longrightarrow{\cal X}( {\rm I}\!{\rm R} ^2)$ given by $$\Gamma(e_1)=\frac{\partial}{\partial x}\quad\mbox{and}\quad \Gamma(e_2)=x\frac{\partial}{\partial x}.$$ For $r=e_1\wedge e_2$, $\Gamma(r)$ is the trivial Poisson tensor and $${\cal D}^\Gamma_\alpha\beta=\alpha\left(\frac{\partial}{\partial x}\right) \beta\left(\frac{\partial}{\partial x}\right)dx.$$ A differential 1-form $\gamma$ is parallel with respect to ${\cal D}^\Gamma$ if and only if $\gamma=f(x,y)dy$. In this case, for any $\alpha$ and $\beta$, we have $${\cal D}^\Gamma_\alpha{\cal D}^\Gamma_\beta d\gamma= \alpha\left(\frac{\partial}{\partial x}\right) \beta\left(\frac{\partial}{\partial x}\right) \frac{\partial f}{\partial x}dx\wedge dy.$$This shows that the metacurvature does not vanish (see (7)).\end{enumerate} Section 2 is devoted to a complete proof of Theorem 1.1 and Theorem 1.2.\bigskip {\bf Notations.} For a smooth manifold $P$, $C^\infty(P)$ will denote the space of smooth functions on $P$, $\Gamma(P,V)$ will denote the space of smooth sections of a vector bundle $V$, $\Omega^p(P):=\Gamma(P,\wedge^pT^*P)$ and ${\cal X}^p(P):=\Gamma(P,\wedge^pTP)$. Lowercase Greek characters $\alpha,\beta,\gamma$ will mostly denote 1-forms. However, $\pi$ will denote a Poisson bivector field and $\omega$ will denote a symplectic form. 
For a manifold $P$ with a Poisson tensor $\pi$, ${\pi_{\#}}:T^*P\longrightarrow TP$ will denote the anchor map given by $\beta({\pi_{\#}}(\alpha))=\pi(\alpha,\beta),$ and $[\;,\;]_\pi$ will denote the Koszul bracket given by $$[\alpha,\beta]_\pi= L_{\pi_{\#}(\alpha)}\beta-L_{\pi_{\#}(\beta)}\alpha-d(\pi(\alpha,\beta)).$$ We will denote by $P^{reg}$ the dense open set where the rank of $\pi$ is locally constant. \section{Proof of Theorem 1.1 and Theorem 1.2} \subsection{Preliminaries} Contravariant connections associated to a Poisson structure have recently turned out to be useful in several areas of Poisson geometry. Contravariant connections were defined by Vaisman [16] and were analyzed in detail by Fernandes [7]. This notion appears extensively in the context of noncommutative deformations; see [8], [9], [15]. One can consult [7] for the general properties of contravariant connections. In this subsection, I will give some general properties of contravariant connections. Namely, I will recall the definition of the metacurvature of a flat and torsion-free contravariant connection ${\cal D}$, and I will give a necessary and sufficient condition for the vanishing of the metacurvature in the case where ${\cal D}$ is an ${\cal F}$-connection (see [7]). Let $(P,\pi)$ be a Poisson manifold and $V\stackrel{p}\longrightarrow P$ a vector bundle over $P$. 
A contravariant connection on $V$ with respect to $\pi$ is a map ${\cal D}:\Omega^1(P)\times \Gamma(P,V)\longrightarrow\Gamma(P,V)$, $(\alpha,s)\mapsto{\cal D}_\alpha s$ satisfying the following properties:\begin{enumerate} \item ${\cal D}_\alpha s$ is linear over $C^\infty(P)$ in $\alpha$: $${\cal D}_{f\alpha+h\beta}s=f{\cal D}_\alpha s+h{\cal D}_\beta s,\quad f,h\in C^\infty(P);$$ \item ${\cal D}_\alpha s$ is linear over $ {\rm I}\!{\rm R} $ in $s$: $${\cal D}_{\alpha}(as_1+bs_2)=a{\cal D}_\alpha s_1+b{\cal D}_\alpha s_2,\quad a,b\in {\rm I}\!{\rm R} ;$$ \item ${\cal D}$ satisfies the following product rule: $${\cal D}_\alpha(fs)=f{\cal D}_\alpha s+{\pi_{\#}}(\alpha)(f)s,\quad f\in C^\infty(P).$$\end{enumerate} The curvature of a contravariant connection ${\cal D}$ is formally identical to the usual definition $$K(\alpha,\beta)={\cal D}_\alpha{\cal D}_\beta-{\cal D}_\beta{\cal D}_\alpha-{\cal D}_{[\alpha,\beta]_\pi}.$$We will call ${\cal D}$ flat if $K$ vanishes identically. A contravariant connection ${\cal D}$ will be called an ${\cal F}$-connection if it satisfies the following property: $${\pi_{\#}}(\alpha)=0\qquad\Rightarrow \qquad{\cal D}_\alpha=0.$$We will call ${\cal D}$ an ${\cal F}^{reg}$-connection if the restriction of ${\cal D}$ to $P^{reg}$ is an ${\cal F}$-connection. If $V=T^*P$, one can define the torsion $T$ of ${\cal D}$ by $$T(\alpha,\beta)={\cal D}_\alpha\beta-{\cal D}_\beta\alpha-[\alpha,\beta]_\pi.$$ Let us now define an interesting class of contravariant connections, namely contravariant connections associated naturally to a Poisson tensor and a pseudo-Riemannian metric. Let $P$ be a pseudo-Riemannian manifold and $\pi$ a Poisson tensor on $P$. The metric contravariant connection associated naturally to $(\pi,<\;,\;>)$ is the unique contravariant connection $D$ such that: \begin{enumerate}\item the metric $<,>$ is parallel with respect to $D$, i.e., $$\pi_{\#}(\alpha).<\beta,\gamma>=<D_\alpha\beta,\gamma>+<\beta,D_\alpha\gamma>;$$ \item $D$ is torsion-free. 
\end{enumerate} One can define $D$ by the Koszul formula \begin{eqnarray*} 2<D_\alpha\beta,\gamma>&=&\pi_{\#}(\alpha).<\beta,\gamma>+\pi_{\#}(\beta).<\alpha,\gamma>- \pi_{\#}(\gamma).<\alpha,\beta>\\ &+&<[\gamma,\alpha]_\pi,\beta>+<[\gamma,\beta]_\pi,\alpha>+<[\alpha,\beta]_\pi,\gamma>. \qquad(4)\end{eqnarray*} Let us briefly recall the definition of the metacurvature. For details see [9]. Let $(P,\pi)$ be a Poisson manifold and ${\cal D}$ a torsion-free and flat contravariant connection with respect to $\pi$. Then, there exists a bracket $\{\;,\;\}$ on the differential graded algebra of differential forms $\Omega^*(P)$ such that: \begin{enumerate} \item $\{\;,\;\}$ is $ {\rm I}\!{\rm R} $-bilinear, degree 0 and antisymmetric, i.e. $$\{\sigma,\rho\}=-(-1)^{deg\sigma deg\rho}\{\rho,\sigma\}. $$ \item The differential $d$ is a derivation with respect to $\{\;,\;\}$, i.e. $$d\{\sigma,\rho\}=\{d\sigma,\rho\}+(-1)^{deg\sigma}\{\sigma,d\rho\}. $$ \item $\{\;,\;\}$ satisfies the product rule $$\{\sigma,\rho\wedge\lambda\}=\{\sigma,\rho\}\wedge\lambda+(-1)^{deg\sigma deg\rho}\rho\wedge\{\sigma,\lambda\}.$$ \item For any $f,h\in C^\infty(P)$ and for any $\sigma\in\Omega^*(P)$, the bracket $\{f,h\}$ coincides with the initial Poisson bracket and $$\{f,\sigma\}={\cal D}_{df}\sigma.$$\end{enumerate} Hawkins called this bracket a generalized Poisson bracket and showed that there exists a $(2,3)$-tensor ${\cal M}$ such that the following assertions are equivalent:\begin{enumerate}\item The generalized Poisson bracket satisfies the graded Jacobi identity $$\{\{\sigma,\rho\},\lambda\}=\{\sigma,\{\rho,\lambda\}\} -(-1)^{deg\sigma deg\rho}\{\rho,\{\sigma,\lambda\}\}. $$\item The tensor ${\cal M}$ vanishes identically.\end{enumerate} ${\cal M}$ is called the metacurvature and is given by $${\cal M}(df,\alpha,\beta)=\{f,\{\alpha,\beta\}\}-\{\{f,\alpha\},\beta\}- \{\{f,\beta\},\alpha\}.\eqno(5)$$ Hawkins pointed out in $[9]$, p. 
9, that for any parallel 1-form $\alpha$ and any 1-form $\beta$, the generalized Poisson bracket of $\alpha$ and $\beta$ is given by $$\{\alpha,\beta\}=-{\cal D}_{\beta}d\alpha.\eqno(6)$$Then, one can deduce from $(5)$ that for any parallel 1-form $\alpha$ and for any $\beta,\gamma$, we have $${\cal M}(\alpha,\beta,\gamma)=-{\cal D}_\beta{\cal D}_\gamma d\alpha.\eqno(7)$$ The definition of a contravariant connection is similar to the definition of an ordinary (covariant) connection, except that cotangent vectors have taken the place of tangent vectors. So one can translate many definitions, identities and proofs for covariant connections to contravariant connections simply by exchanging the roles of tangent and cotangent vectors and replacing the Lie bracket with the Koszul bracket. Nevertheless, there are some differences between these two notions. Fernandes pointed out in [7] that the equation $D\alpha=0$ cannot be solved locally for a general flat contravariant connection $D$. However, he showed that for a flat ${\cal F}$-connection this equation can be solved locally. We will now give a proof of this fact which differs from Fernandes's proof. \begin{pr} Let $(P,\pi)$ be a Poisson manifold and ${\cal D}$ a flat contravariant connection with respect to $\pi$. Let $p$ be a regular point of $\pi$ and $(p_1,\ldots,p_r,q_1,\ldots,q_r,z_1,\ldots,z_l)$ a Darboux coordinate system on a neighborhood ${\cal U}$ of $p$. We denote by $N$ the submanifold given by $z_i=0$, $i=1,\ldots,l$. Suppose that the restriction of ${\cal D}$ to ${\cal U}$ is an ${\cal F}$-connection. 
Then, for any $\beta\in \Gamma(N,T_{|N}^*P)$ there exists a unique 1-form $\widetilde\beta\in\Omega^1({\cal U})$ such that ${\cal D}\widetilde\beta=0$ and $\widetilde\beta_{|N}=\beta$.\end{pr} {\bf Proof.} In the Darboux coordinate system $(p_1,\ldots,p_r,q_1,\ldots,q_r,z_1,\ldots,z_l)$, we have $$\pi=\sum_{i=1}^r\frac{\partial }{\partial p_i}\wedge\frac{\partial }{\partial q_i}.$$ The connection ${\cal D}$ is entirely determined by the Christoffel symbols $\Gamma_{p_ip_j}^{p_k},\Gamma_{p_ip_j}^{q_k},\Gamma_{p_ip_j}^{z_k}$ and so on. We are looking for $$\widetilde\beta=\sum_{i=1}^r(a_idp_i+b_idq_i)+\sum_{i=1}^lc_idz_i$$ such that ${\cal D}\widetilde\beta=0$ and $\widetilde\beta$ coincides with $\beta$ along $N$. Since ${\cal D}$ is a torsion-free ${\cal F}$-connection, and since each $dz_i$, $i=1,\ldots,l$, lies in the center of $(\Omega^1({\cal U}),[\;,\;]_\pi)$, one has ${\cal D}_{dz_i}={\cal D}dz_i=0$. Hence ${\cal D}\widetilde\beta=0$ is equivalent to the following two systems: $$\left\{\begin{array}{lll} \displaystyle\frac{\partial a_k}{\partial q_i}&=&-\displaystyle\sum_{l=1}^r(a_l\Gamma_{p_ip_l}^{p_k}+b_l\Gamma_{p_iq_l}^{p_k}),\\ \displaystyle\frac{\partial a_k}{\partial p_i}&=&\displaystyle\sum_{l=1}^r(a_l\Gamma_{q_ip_l}^{p_k}+b_l\Gamma_{q_iq_l}^{p_k}),\\ \displaystyle\frac{\partial b_k}{\partial q_i}&=&-\displaystyle\sum_{l=1}^r(a_l\Gamma_{p_ip_l}^{q_k}+b_l\Gamma_{p_iq_l}^{q_k}),\\ \displaystyle\frac{\partial b_k}{\partial p_i}&=&\displaystyle\sum_{l=1}^r(a_l\Gamma_{q_ip_l}^{q_k}+b_l\Gamma_{q_iq_l}^{q_k}). \end{array}\right.\eqno(*)$$ $$\left\{\begin{array}{lll}\displaystyle \frac{\partial c_k}{\partial q_i}&=&-\displaystyle\sum_{l=1}^r(a_l\Gamma_{p_ip_l}^{z_k}+b_l\Gamma_{p_iq_l}^{z_k}),\\ \displaystyle\frac{\partial c_k}{\partial p_i}&=&\displaystyle\sum_{l=1}^r(a_l\Gamma_{q_ip_l}^{z_k}+b_l\Gamma_{q_iq_l}^{z_k}). 
\end{array}\right.\eqno(**)$$ One can view the functions $(a_1,\ldots,a_r,b_1,\ldots,b_r)$ in $(*)$ as functions of the variables $(q_i,p_i)$ with parameters $(z_1,\ldots,z_l)$. The vanishing of the curvature gives the necessary integrability conditions for $(*)$, and hence for any initial value and any value of the parameters there exists a unique solution $(a_1,\ldots,a_r,b_1,\ldots,b_r)$ of $(*)$, which depends smoothly on the parameters. For $k=1,\ldots,l$, consider the 1-form with variables $(q_i,p_i)$ and parameters $(z_1,\ldots,z_l)$ $$\alpha_k=\sum_{i=1}^r \left(-\left(\sum_{l=1}^r(a_l\Gamma_{p_ip_l}^{z_k}+b_l\Gamma_{p_iq_l}^{z_k}) \right)dq_i+\left(\sum_{l=1}^r(a_l\Gamma_{q_ip_l}^{z_k}+b_l\Gamma_{q_iq_l}^{z_k})\right)dp_i \right).$$The vanishing of the curvature implies that $d\alpha_k=0$ and hence there exists a function $c_k$ such that $dc_k=\alpha_k$. This solves $(**)$. $\Box$ By combining $(7)$ and Proposition 2.1, we get the following useful proposition. \begin{pr} Let $(P,\pi)$ be a Poisson manifold and ${\cal D}$ a torsion-free and flat ${\cal F}^{reg}$-connection with respect to $\pi$. Then the metacurvature of ${\cal D}$ vanishes if and only if, for any local parallel 1-form $\alpha$ on $P^{reg}$, ${\cal D}^2d\alpha=0$.\end{pr} \subsection{Proof of Theorem 1.1} Let $P$ be a differentiable manifold, ${\cal G}$ a Lie algebra with $\Gamma:{\cal G}\longrightarrow {\cal X}(P)$ a Lie algebra morphism from ${\cal G}$ to the Lie algebra of vector fields and $r\in\wedge^2{\cal G}$ a solution of the classical Yang-Baxter equation such that $Imr$ is a unimodular Lie algebra. There exists a basis $(e_1,\ldots,e_n,f_1,\ldots,f_n)$ of $Imr$ such that the symplectic form $\omega_r$ is given by $$\omega_r=\sum_{i=1}^ne_i^*\wedge f_i^*.$$ Since $Imr$ is unimodular, for any $z\in Imr$ the trace of $ad_z$ vanishes. 
This is equivalent to $$\sum_{i=1}^n\omega_r([z,e_i],f_i)+\omega_r(e_i,[z,f_i])=0.$$ From $(1)$, one gets that this relation is equivalent to $$\sum_{i=1}^n\omega_r(z,[e_i,f_i])=0$$and hence to $$\sum_{i=1}^n[e_i,f_i]=0.$$Now let $\epsilon$ be a volume form on $P$ such that $L_{\Gamma(e_i)}\epsilon=L_{\Gamma(f_i)}\epsilon=0$ for $i=1,\ldots,n$. We have \begin{eqnarray*} d(i_{\Gamma(r)}\epsilon)&=&d\left(\sum_{i=1}^ni_{\Gamma(e_i)\wedge\Gamma(f_i)}\epsilon\right)\\ &=&\sum_{i=1}^n\left(i_{[\Gamma(e_i),\Gamma(f_i)]}\epsilon- i_{\Gamma(e_i)}L_{\Gamma(f_i)}\epsilon-i_{\Gamma(f_i)}L_{\Gamma(e_i)}\epsilon\right)\\ &=&i_{\Gamma(\sum_{i=1}^n[e_i,f_i])}\epsilon=0.\end{eqnarray*} This gives a proof of Theorem 1.1.$\Box$ \subsection{Proof of Theorem 1.2} Let $P$ be a differentiable manifold, ${\cal G}$ a Lie algebra with $\Gamma:{\cal G}\longrightarrow {\cal X}(P)$ a Lie algebra morphism from ${\cal G}$ to the Lie algebra of vector fields and $r\in\wedge^2{\cal G}$ a solution of the classical Yang-Baxter equation. If a basis $(u_1,\ldots,u_n)$ of ${\cal G}$ is chosen, then we can write $r=\sum_{i<j}a_{ij}u_i\wedge u_j$. For any two 1-forms $\alpha,\beta$ on $P$, we put $${\cal D}^{\Gamma}_\alpha\beta=\sum_{i<j}a_{ij}\left(\alpha(U_i)L_{U_j}\beta-\alpha(U_j)L_{U_i}\beta \right),$$where $U_i=\Gamma(u_i)$. One can easily check that this formula defines a contravariant connection with respect to $\Gamma(r)$ which depends only on $r$ and $\Gamma$ and does not depend on the basis $(u_1,\ldots,u_n)$. One can also check that ${\cal D}^\Gamma$ is torsion-free. If, for any $u\in Imr$, $\Gamma(u)$ is a Killing vector field, then ${\cal D}^\Gamma$ is the metric contravariant connection associated to the metric and the Poisson tensor $\Gamma(r)$. Let us now compute the curvature of ${\cal D}^\Gamma$ and show that it vanishes identically. 
There exists a basis $(u_1,\ldots,u_p,v_1,\ldots,v_p)$ of $Imr$ such that $r=\sum_{i=1}^pu_i\wedge v_i.$ We denote $U_i=\Gamma(u_i)$ and $V_i=\Gamma(v_i)$ for $i=1,\ldots,p$. We have $${\cal D}^\Gamma_\alpha\beta=L_{{\pi_{\#}}(\alpha)}\beta+\sum_{i=1}^pA^i(\alpha,\beta)$$where $A^i(\alpha,\beta)=\beta(U_i)d\left(\alpha(V_i)\right)-\beta(V_i)d\left(\alpha(U_i)\right)$ and ${\pi_{\#}}$ is the anchor map associated to $\Gamma(r)$. With this in mind, we get for any $f,g,h\in C^\infty(P)$, \begin{eqnarray*} K(df,dg,dh)&=&\sum_{i=1}^p\left(A^i(df,d\{g,h\})-A^i(dg,d\{f,h\})\right.\\ &+&\left. L_{{\pi_{\#}}(df)}A^i(dg,dh)-L_{{\pi_{\#}}(dg)}A^i(df,dh)-A^i(d\{f,g\},dh)\right)\\ &+&\sum_{i,j=1}^p\left(A^j(df,A^i(dg,dh))-A^j(dg,A^i(df,dh)).\right) \end{eqnarray*} A straightforward computation gives{\small \begin{eqnarray*} K(df,dg,dh)&&=\\\sum_{i,j=1}^p&\left\{\right.& \displaystyle\left(U_j(g)[U_i,V_j](h)-U_j(h)[U_i,V_j](g)+ V_j(g)[U_j,U_i](h)-V_j(h)[U_j,U_i](g)\right)d(V_i(f))\\ &+&\left(U_j(g)[V_j,V_i](h)-U_j(h)[V_j,V_i](g)+V_j(g)[V_i,U_j](h)- V_j(h)[V_i,U_j](g)\right)d(U_i(f))\\ &-&\left(U_j(f)[U_i,V_j](h)-U_j(h)[U_i,V_j](f)+ V_j(f)[U_j,U_i](h)-V_j(h)[U_j,U_i](f)\right)d(V_i(g))\\ &-&\left(U_j(f)[V_j,V_i](h)-U_j(h)[V_j,V_i](f)+V_j(f)[V_i,U_j](h)-V_j(h) [V_i,U_j](f)\right)d(U_i(g))\\ &+&U_i(h)V_j(g)d\left([U_j,V_i](f)\right)-U_i(h)V_j(f)d\left([U_j,V_i](g)\right)\\ &+&U_i(h)U_j(g)d\left([V_i,V_j](f)\right)-U_i(h)U_j(f)d\left([V_i,V_j](g)\right)\\ &+&V_i(h)U_j(g)d\left([V_j,U_i](f)\right)-V_i(h)U_j(f)d\left([V_j,U_i](g)\right)\\ &+&\left.V_i(h)V_j(g)d\left([U_i,U_j](f)\right)-V_i(h)V_j(f)d\left([U_i,U_j](g) \right)\right\}.\end{eqnarray*}} The vanishing of $K$ is a consequence of the equation $[r,r]=0$ which is equivalent to $$\omega_r([x,y],z)+\omega_r([y,z],x)+\omega_r([z,x],y)=0\quad\forall x,y,z\in Imr. \eqno(*)$$ Now $\omega_r=\sum_{i=1}^pu_i^*\wedge v_i^*$ where $(u_1^*,\ldots,u_p^*,v_1^*,\ldots,v_p^*)$ is the dual basis of $(u_1,\ldots,u_p,v_1,\ldots,v_p)$. 
If one writes $[u_i,u_j]=\sum_{k=1}^p(C_{u_iu_j}^{u_k}u_k+C_{u_iu_j}^{v_k}v_k)$ and so on, one can see easily that the condition $(*)$ is equivalent to $$\left\{\begin{array}{ccc} C_{v_jv_k}^{u_i}+C_{v_kv_i}^{u_j}+C_{v_iv_j}^{u_k}&=&0,\\ C_{u_ju_k}^{v_i}+C_{u_ku_i}^{v_j}+C_{u_iu_j}^{v_k}&=&0,\\ C_{u_ju_k}^{u_i}-C_{u_kv_i}^{v_j}-C_{v_iu_j}^{v_k}&=&0,\\ C_{v_iv_j}^{v_k}-C_{u_kv_i}^{u_j}-C_{v_ju_k}^{u_i}&=&0.\end{array}\right.\qquad\forall i,j,k.$$ For $i=1,\ldots,p$, the coefficients of $d(V_i(f))$ and $d(U_i(f))$ in the expression of $K(df,dg,dh)$ are respectively \begin{eqnarray*} \sum_{j,k=1}^p\left(C_{u_iv_j}^{u_k}-C_{u_iv_k}^{u_j}+C_{v_kv_j}^{v_i}\right)U_j(g)U_k(h) +\left(C_{u_iv_j}^{v_k}-C_{u_ku_i}^{u_j}+C_{v_ju_k}^{v_i}\right)U_j(g)V_k(h)\\ +\left(-C_{u_iv_j}^{v_k}+C_{u_ku_i}^{u_j}+C_{u_kv_j}^{v_i}\right)U_j(h)V_k(g)+ \left(C_{u_ju_i}^{v_k}-C_{u_ku_i}^{v_j}+C_{u_ku_j}^{v_i}\right)V_j(g)V_k(h),\end{eqnarray*} \begin{eqnarray*} \sum_{j,k=1}^p\left(C_{v_jv_i}^{u_k}-C_{v_kv_i}^{u_j}+C_{v_kv_j}^{u_i}\right)U_j(g)U_k(h) +\left(C_{v_jv_i}^{v_k}-C_{v_iu_k}^{u_j}+C_{v_ju_k}^{u_i}\right)U_j(g)V_k(h)\\ +\left(-C_{v_jv_i}^{v_k}+C_{v_iu_k}^{u_j}+C_{u_kv_j}^{u_i}\right)U_j(h)V_k(g)+ \left(C_{v_iu_j}^{v_k}-C_{v_iu_k}^{v_j}+C_{u_ku_j}^{u_i}\right)V_j(g)V_k(h).\end{eqnarray*} These coefficients vanish according to the relations above. The same holds for the coefficients of $d(V_i(g))$ and $d(U_i(g))$. This shows that $K$ vanishes identically. Suppose now that the action of $Imr$ on $P$ is locally free. This obviously implies that ${\cal D}^\Gamma$ is an ${\cal F}$-connection and that a 1-form $\beta$ is parallel with respect to ${\cal D}^\Gamma$ if and only if $L_{\Gamma(u)}\beta=0$ for all $u\in Imr$. For any parallel 1-form $\beta$, we have $L_{\Gamma(u)}d\beta=0$ and hence ${\cal D}^\Gamma d\beta=0$.
According to Proposition 2.2, this implies that the metacurvature of ${\cal D}^\Gamma$ vanishes identically, which completes the proof of Theorem 1.2.$\Box$ \eject {\bf References}\bigskip [1] {\bf Benson C., Gordon C.,} {\it K\"ahler and symplectic structures on nilmanifolds,} Topology {\bf 27,4} (1988), 513-518. [2] {\bf Bordemann M., Medina A., Ouadfel A.,} {\it Le groupe affine comme vari\'et\'e symplectique,} T\^{o}hoku Math. J. {\bf 45} (1993), 423-436. [3] {\bf Boucetta M.,} {\it Compatibilit\'e des structures pseudo-riemanniennes et des structures de Poisson,} C. R. Acad. Sci. Paris {\bf t. 333}, S\'erie I (2001), 763--768. [4] {\bf Boucetta M.,} {\it Poisson manifolds with compatible pseudo-metric and pseudo-Riemannian Lie algebras,} Differential Geometry and its Applications {\bf Vol. 20, Issue 3} (2004), 279--291. [5] {\bf Chu B. Y.,} {\it Symplectic homogeneous spaces,} Trans. Am. Math. Soc. {\bf 197} (1974), 145-159. [6] {\bf De Smedt V.,} {\it Existence of a Lie bialgebra structure on every Lie algebra,} Letters in Mathematical Physics {\bf 31} (1994), 225-231. [7] {\bf Fernandes R. L.,} {\it Connections in Poisson geometry I: Holonomy and invariants,} J. of Diff. Geometry {\bf 54} (2000), 303-366. [8] {\bf Hawkins E.,} {\it Noncommutative rigidity,} Commun. Math. Phys. {\bf 246} (2004), 211-235. math.QA/0211203. [9] {\bf Hawkins E.,} {\it The structure of noncommutative deformations,} arXiv:math.QA/0504232. [10] {\bf Lichnerowicz A., Medina A.,} {\it On Lie groups with left-invariant symplectic or K\"ahlerian structures,} Letters in Mathematical Physics {\bf 16} (1988), 225--235. [11] {\bf Medina A., Revoy Ph.,} {\it Alg\`ebres de Lie et produit scalaire invariant,} Ann. Ec. Norm. Sup., 4\`eme s\'erie, {\bf t. 18} (1985), 553--561. [12] {\bf Medina A., Revoy Ph.,} {\it Les groupes oscillateurs et leurs r\'eseaux,} Manuscripta Math. {\bf 52} (1985), 81-95.
[13] {\bf Medina A., Revoy Ph.,} {\it Groupes de Lie \`a structure symplectique invariante,} Symplectic geometry, groupoids and integrable systems, in ``S\'eminaire Sud Rhodanien'', M.S.R.I., New York/Berlin: Springer-Verlag, (1991), 247-266. [14] {\bf Milnor J.,} {\it Curvature of left invariant metrics on Lie groups,} Adv. in Math. {\bf 21} (1976), 283-329. [15] {\bf Reshetikhin N., Voronov A., Weinstein A.,} {\it Semiquantum geometry,} Algebraic geometry, J. Math. Sci. {\bf 82(1)} (1996), 3255-3267. [16] {\bf Vaisman I.,} {\it Lectures on the geometry of Poisson manifolds,} Progress in Math. {\bf Vol. 118}, Birkh\"auser, Basel, (1994). \vskip3cm M. Boucetta\\ Facult\'e des Sciences et Techniques \\ BP 549 Marrakech\\ Morocco \\ Email: {\it boucetta@fstg-marrakech.ac.ma, mboucetta2@yahoo.fr} \end{document} The main result in [H2] is the following: if a compact Riemannian manifold is endowed with a Poisson tensor compatible with the metric in the sense above, then the Poisson tensor can be constructed locally from commuting Killing vector fields. This implies that the Poisson tensor is parallel with respect to the metric contravariant connection. All pseudo-Riemannian manifolds considered in this paper are orientable. For a pseudo-Riemannian manifold $P$, $g$ will denote the pseudo-Riemannian metric when it measures the length of tangent vectors, and $<,>$ will denote the metric when it measures the length of covectors. $\nabla$ will denote the Levi-Civita covariant connection associated to the metric, $\epsilon$ will denote the Riemannian volume form, and $\#:T^*P\longrightarrow TP$ will denote the musical isomorphism associated to the metric. {\bf Remark.} \begin{enumerate}\item From (4), one can easily see that, if $f$ is a Casimir function, i.e. $\pi_{\#}(df)=0$, then $(\pi,<,>)$ and $(\pi,e^{f}<,>)$ have the same metric contravariant connection. \item If $\pi_{\#}$ is invertible, i.e.
the Poisson structure comes from a symplectic structure, then the metric contravariant connection associated to $(\pi,<,>)$ is given by $$D_\alpha\beta={\pi_{\#}}^{-1}\left(\widetilde\nabla_{{\pi_{\#}}(\alpha)}{\pi_{\#}}(\beta)\right),$$where $\widetilde\nabla$ is the Levi-Civita covariant connection associated to the metric $g^\pi$ given by $$g^\pi(u,v)=<{\pi_{\#}}^{-1}(u),{\pi_{\#}}^{-1}(v)>.$$ \end{enumerate}
\section{Introduction} \label{sec:intro} Extended ``halos'' of diffuse {Lyman-alpha (Ly$\alpha$)} emission, extending to many times larger radii than starlight, are nearly ubiquitous around rapidly-star-forming galaxies at redshift $z > 2$ \citep[e.g.][]{steidel11, wisotzki16, leclercq17, wisotzki18}. While there is not yet consensus on the dominant physical mechanism giving rise to diffuse \ensuremath{\rm Ly\alpha}\ halos, there is general agreement on the list of potential sources, all of which depend on substantial cool hydrogen gas in the circumgalactic medium (CGM; e.g., \citealt{ouchi20}): (1) resonant scattering of \ensuremath{\rm Ly\alpha}\ produced by recombination of gas photoionised by massive stars or {active galactic nuclei (AGN), i.e., a central source}, where \ensuremath{\rm Ly\alpha}\ photons are subsequently scattered until they find optically-thin channels to escape; (2) {\it in situ} photoionization of \ion{H}{I} by the metagalactic {ultra-violet (UV)} ionising radiation field combined with local sources, followed by recombination (sometimes called ``fluorescence''); (3) accreting gas losing energy via collisional excitation of \ensuremath{\rm Ly\alpha}\ (sometimes referred to as ``gravitational cooling''); and (4) emission from unresolved satellite galaxies in the halos of larger central galaxies. In principle, the observed surface brightness, spatial distribution, and kinematics of \ensuremath{\rm Ly\alpha}\ emission can discriminate between the various mechanisms and, perhaps more importantly, can provide direct information on the degree to which gas in the CGM is accreting, outflowing, or quiescent. \ensuremath{\rm Ly\alpha}\ emission from the CGM, if interpreted correctly, can provide a detailed map of the cool component of the dominant baryon reservoir associated with forming galaxies, as well as constraints on large-scale gas flows that are an essential part of the current galaxy formation paradigm.
Because a \ensuremath{\rm Ly\alpha}\ photon is produced by nearly every photoionisation of hydrogen, the intrinsic \ensuremath{\rm Ly\alpha}\ luminosity of a rapidly star-forming galaxy can be very high, and thus easily detected \citep{partridge67}. However, due to its very high transition probability, \ensuremath{\rm Ly\alpha}\ is resonantly scattered until the last scattering event gives it an emitted frequency and direction such that the optical depth remains low along a trajectory that allows it to escape from the host galaxy. When the \ensuremath{\rm Ly\alpha}\ optical depth is high in all directions, the vastly increased effective path length -- due to large numbers of scattering events during the time the photon is radiatively trapped -- increases the probability that the photon is destroyed by dust or emitted via the two-photon mechanism. But \ensuremath{\rm Ly\alpha}\ photons that are not absorbed by dust grains or converted to two-photon radiation must eventually escape, with the final scattering resulting in a frequency and direction such that the photon can freely stream without further interaction with a hydrogen atom. The radiative transfer of \ensuremath{\rm Ly\alpha}\ thus depends in a complex way on the distribution, clumpiness, and kinematics of \ion{H}{I} within the host galaxy, as well as on where and how the photon was produced initially. But the added complexity is counter-balanced by the availability of a great deal of information about the scattering medium itself that is otherwise difficult or impossible to observe directly: i.e., the neutral hydrogen distribution and kinematics in the CGM.
Since the commissioning of sensitive integral-field spectrometers on large ground-based telescopes -- {the Multi Unit Spectroscopic Explorer (MUSE; \citealt{bacon10}) on the Very Large Telescope (VLT) of the European Southern Observatory (ESO) and the Keck Cosmic Web Imager (KCWI; \citealt{morrissey18})} at the Keck Observatory -- it has become possible to routinely detect diffuse \ensuremath{\rm Ly\alpha}\ emission halos around individual galaxies at high redshift {(e.g., \citealt{wisotzki16, leclercq17})}, and to simultaneously measure the spatially-resolved \ensuremath{\rm Ly\alpha}\ kinematics {(e.g., \citealt{erb18, claeyssens19, leclercq20})}. Such observations can then be interpreted in terms of simple expectations based on \ensuremath{\rm Ly\alpha}\ radiative transfer; for example, a generic expectation is that most \ensuremath{\rm Ly\alpha}\ emission involving scattered photons (i.e., those that must pass through an \ion{H}{I} gas distribution before escaping their host) will exhibit a ``double-peaked'' spectral profile, where the relative strength of the blue-shifted and red-shifted peaks may be modulated by the net velocity field of the emitting gas. In the idealised case of a spherical shell of outflowing (infalling) gas, one predicts that an external observer will measure a dominant red (blue) peak {\citep{verhamme06}}. The fact that most ($\sim 90$\%) of star-forming galaxy ``down the barrel'' (DTB) spectra with \ensuremath{\rm Ly\alpha}\ in emission in the central portions exhibit dominant red peaks (e.g., \citealt{pettini01, steidel10, kulas12, trainor15, verhamme18, matthee21}) has led to the conclusion that outflowing gas dominates \ensuremath{\rm Ly\alpha}\ radiative transfer, at least at small galactocentric distances.
Essentially every simulation of galaxy formation (\citealt{fg10}) predicts that gaseous accretion is also important -- particularly at high redshifts ($z \lower.5ex\hbox{\gtsima} 2$) -- and this has focused attention on systems in which a double-peaked \ensuremath{\rm Ly\alpha}\ profile with a blue-dominant peak is observed, often cited as evidence for on-going accretion of cool gas (e.g., \citealt{vanzella17, martin14, martin16, ao20}). Quantitative predictions of \ensuremath{\rm Ly\alpha}\ emission from accreting baryons depend sensitively on the thermal state and the small-scale structure of the gas (e.g., \citealt{kollmeier10,fg10,goerdt10}), leading to large uncertainties in the predictions. The role played by ``local'' sources of ionising photons over and above that of the metagalactic ionising radiation field is likely to be substantial for regions near QSOs (\citealt{cantalupo14, borisova16, cai19, osullivan20}) but much more uncertain for star-forming galaxies, where the escape of scattered \ensuremath{\rm Ly\alpha}\ photons is much more likely than that of ionising photons. Models of \ensuremath{\rm Ly\alpha}\ radiative transfer have attempted to understand the dominant physics responsible for producing \ensuremath{\rm Ly\alpha}\ halos around galaxies. Using photon-tracing algorithms with Monte Carlo simulations, \citet{verhamme06, dijkstra14,gronke16a, gronke16b} have explored the effects of resonant scattering on the emergent central \ensuremath{\rm Ly\alpha}\ line profile using various idealised \ion{H}{I} geometries and velocity fields; {in most cases simple models can be made to fit the observed 1-D profiles \citep{gronke17, song20}.
There have also been attempts to model or predict spatially-resolved \ensuremath{\rm Ly\alpha}\ emission, which almost certainly depends on a galaxy's immediate environment (\citealt{zheng11, kakiichi18}), including both outflows and accretion flows, as well as the radiative transfer of \ensuremath{\rm Ly\alpha}\ photons from the site of initial production to escape \citep[e.g.][]{fg10, lake15, smith19, byrohl20}. However, the conclusions reached as to the dominant process responsible for the extended \ensuremath{\rm Ly\alpha}\ emission have not converged, indicating that more realistic, high resolution, cosmological zoom-in simulations may be required to capture all of the physical processes.} Despite the variety of \ensuremath{\rm Ly\alpha}\ radiative transfer models to date, as far as we are aware, no specific effort has been made to statistically compare full 2-D model predictions (simultaneous spatial and kinematic) to observed \ensuremath{\rm Ly\alpha}\ halos. Some insight into the relationship between galaxy properties and the kinematics and spatial distribution of cool gas in the CGM has been provided by studies at lower redshifts, where galaxy morphology is more easily measured. Statistical studies using absorption line probes have clearly shown that the strength of low-ionization metal lines such as \ion{Mg}{II} and \ion{Fe}{II} depends on where the line of sight passes through the galaxy CGM relative to the projected major axis of the galaxy -- the ``azimuthal angle'' -- \citep[e.g.][]{bordoloi11, bouche12, kacprzak12, nielsen15, lan18, martin19}, and the inclination of the galaxy disk relative to the line of sight (e.g., \citealt{steidel02, kacprzak11}). More recently, clear trends have also been observed for high ions (\ion{O}{VI}) \citep{kacprzak15}.
In general, these trends support a picture of star-forming galaxies in which high-velocity, collimated outflows perpendicular to the disk are responsible for the strongest absorption lines in both low and high ions, with low ions also being strong near the disk plane. Theoretically at least, accretion flows might also be quasi-collimated in the form of cold streams of gas that would tend to deposit cool gas near the disk plane (see, e.g., \citealt{tumlinson17}). It is less clear how such a geometry for gas flows in the CGM would manifest as emission in a resonantly-scattered line like \ensuremath{\rm Ly\alpha}. One might expect that \ensuremath{\rm Ly\alpha}\ photons would escape most readily along the minor axis, since the large velocity gradients and lower \ion{H}{I} optical depths of outflowing material both favour \ensuremath{\rm Ly\alpha}\ escape from the host galaxy. This picture is consistent with \citet{verhamme12}, who showed that \ensuremath{\rm Ly\alpha}\ escape is enhanced when the simulated galaxies are viewed face-on. Observations of low-redshift, spatially resolved \ensuremath{\rm Ly\alpha}\ emission have so far been limited to small samples -- e.g., in the local universe ($z<0.2$), using the Hubble Space Telescope (HST), the ``\ensuremath{\rm Ly\alpha}\ reference sample'' (LARS; \citealt{ostlin14}) has obtained images probing \ensuremath{\rm Ly\alpha}\ emission around galaxies in great spatial detail, reaffirming the complex nature of \ensuremath{\rm Ly\alpha}\ radiative transfer and its relation to the host galaxies. In most cases LARS found evidence that extended \ensuremath{\rm Ly\alpha}\ emission is most easily explained by photons produced by active star formation that then diffuse into the CGM before a last scattering event allows escape in the observer's direction. 
Although there are small-scale enhancements associated with outflows, even for galaxies observed edge-on, the \ensuremath{\rm Ly\alpha}\ emission is perhaps smoother than expected \citep{duval16}. At $z > 2$, \ensuremath{\rm Ly\alpha}\ emission is more readily observed but detailed analyses are challenged by the relatively small galaxy sizes (both physical and angular) and the need for high spatial resolution to determine the morphology of the stellar light. In this paper, we present a statistical sample of $z > 2$ galaxies drawn from a survey using KCWI of selected regions within the Keck Baryonic Structure Survey (KBSS; \citealt{rudie12a, steidel14, strom17}). Since the commissioning of KCWI in late 2017, we have obtained deep IFU data ($\sim 5$ hour integrations) for $> 100$ KBSS galaxies with $z = 2 - 3.5$, so that the \ensuremath{\rm Ly\alpha}\ line is covered within the KCWI wavelength range; some initial results from the survey have been presented by \citet{erb18, law18}. The 59 galaxies included in our current analysis are those that, in addition to the KCWI data, have also been observed at high spatial resolution by either {HST} or adaptive-optics-assisted near-IR spectroscopy using Keck/OSIRIS. The overarching goal of the study is to evaluate the spatial and spectral distribution of \ensuremath{\rm Ly\alpha}\ emission within $\simeq 5$ arcseconds of each galaxy, as compared to the principal axes defined on smaller angular scales by each galaxy's UV/optical continuum morphology. In particular, we seek to use the observed kinematics and spatial distribution of \ensuremath{\rm Ly\alpha}\ emission to evaluate whether the cool gas in the CGM of forming galaxies shows evidence for directional dependence -- e.g., inflows or outflows along preferred directions -- with respect to the central galaxy. This paper is organised as follows.
In \S\ref{sec:sample}, we describe the KBSS-KCWI sample; \S\ref{sec:obs} introduces the high-resolution imaging and IFU dataset; \S\ref{sec:pa} covers the details of the measurement of the galaxy principal axes, which provide the definition of the galactic azimuthal angle; \S\ref{sec:analyses} presents results on the connection between Ly$\alpha$ halos and galactic azimuthal angle. \S\ref{sec:az_halo} looks into the connection between the \ensuremath{\rm Ly\alpha}\ azimuthal asymmetry and the overall \ensuremath{\rm Ly\alpha}\ emission properties. \S\ref{sec:three_bins} checks higher-order azimuthal asymmetry of \ensuremath{\rm Ly\alpha}\ emission by dividing the sample into finer azimuthal bins. \S\ref{sec:discussions} discusses the implications of the results, with a summary in \S\ref{sec:summary}. Throughout the paper, we assume a $\Lambda$CDM cosmology with $\Omega_m = 0.3$, $\Omega_\Lambda =0.7$, and $h=0.7$. Distances are given in proper units, i.e., physical kpc (pkpc). \section{The KBSS-KCWI Galaxy Sample} \label{sec:sample} In late 2017, we began using the recently-commissioned Keck Cosmic Web Imager (KCWI; \citealt{morrissey18}) on the Keck \RNum{2} 10m telescope to target selected regions within the survey fields of the Keck Baryonic Structure Survey (KBSS; \citealt{rudie12a,steidel14,strom17}). The main goal of the KCWI observations has been to detect diffuse emission from the CGM (within impact parameter, $D_{\rm tran} \lower.5ex\hbox{\ltsima}{} 100$ pkpc) of a substantial sample of rapidly star-forming galaxies and optically-faint AGN host galaxies, reaching surface brightness sensitivity of $\sim 5\times 10^{-20}$ ergs s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ ($1\sigma$) for unresolved emission lines.
Such limiting surface brightness would allow detection of the extended \ensuremath{\rm Ly\alpha}\ halos of individual galaxies at redshifts $z \sim 2-3$ (e.g., \citealt{steidel11}), and would be capable of detecting extended diffuse UV metallic cooling line emission as predicted by simulations of galaxies with comparable mass and redshift (e.g., \citealt{sravan16}). KCWI offers three selectable image slicer scales, each of which can be used with three different regimes of spectral resolving power, $R \equiv \lambda/\Delta \lambda$. All of the observations used in the present study were obtained using the ``medium'' slicer scale and low resolution grating (BL), providing integral field spectra over a contiguous field of view (FoV) of 20\secpoint3$\times$16\secpoint5 covering the common wavelength range 3530-5530 \AA\ with resolving power $\langle R \rangle = 1800$. Given the relatively small solid angle of the KCWI FoV, and the total integration time desired for each pointing of $\sim 5$ hours, it was necessary to choose the KCWI pointings carefully. In general, we chose KCWI pointings to maximize the number of previously-identified KBSS galaxies with $2 \lower.5ex\hbox{\ltsima} z \lower.5ex\hbox{\ltsima} 3.4$ within the field of view, so that the KCWI spectra would include the \ensuremath{\rm Ly\alpha}\ line as well as many other rest-frame far-UV transitions. Most of the targeted galaxies within each pointing were observed as part of KBSS in both the optical (Keck/LRIS; \citealt{oke95,steidel04}) and the near-IR (Keck/MOSFIRE; \citealt{mclean12, steidel14}). The pointings were chosen so that the total sample of KBSS catalog galaxies observed would span the full range of galaxy properties represented in the KBSS survey in terms of stellar mass (M$_{\ast}$), star formation rate (SFR), \ensuremath{\rm Ly\alpha}\ emission strength, and rest-optical nebular properties. 
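As a quick aid to interpreting the quoted resolving power, the sketch below converts $\langle R \rangle = 1800$ into the corresponding velocity and wavelength resolution; evaluating at \ensuremath{\rm Ly\alpha}\ redshifted to $z=3$ is our illustrative choice, not a number taken from the survey description.

```python
# Resolution implied by R = lambda / d_lambda for the quoted KCWI medium/BL setup.
c_kms = 299792.458               # speed of light [km/s]
R = 1800                         # mean resolving power quoted in the text

dv_fwhm = c_kms / R              # velocity FWHM [km/s], independent of wavelength
lam_lya = 1215.67 * (1 + 3.0)    # Lya observed at z = 3 [Angstrom] (illustrative)
dlam = lam_lya / R               # wavelength FWHM at that wavelength [Angstrom]

print(f"{dv_fwhm:.0f} km/s, {dlam:.2f} A")  # 167 km/s, 2.70 A
```

So a single KCWI resolution element spans roughly 170 km/s, comparable to the velocity widths of interest for \ensuremath{\rm Ly\alpha}\ peak separations.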
Most of the KCWI pointings were also directed within the regions of the KBSS survey fields that have been observed at high spatial resolution by {HST}. As of this writing, the KBSS-KCWI survey comprises 39 pointings of KCWI, including observations of 101 KBSS galaxies, of which 91 have $2 \le z \le 3.4$, placing \ensuremath{\rm Ly\alpha}\ within the KCWI wavelength range. {In this work, we focus only on galaxies without obvious spectroscopic or photometric evidence for the presence of an AGN, and have therefore excluded 14 objects with spectroscopic evidence for the presence of AGN, to be discussed elsewhere. The remaining objects show no sign of significant AGN activity -- e.g., they lack emission lines of high ionisation species in the rest-UV KCWI and existing LRIS spectra, their nebular line ratios in the rest-frame optical are consistent with stellar excitation based on spectra taken with Keck/MOSFIRE (see e.g. \citealt{steidel14}), they lack power-law SEDs in the NIR-MIR, etc. Because measurements of galaxy morphology are important to the analysis, we considered only the subset of the star-forming (non-AGN) galaxies that have also been observed at high spatial resolution using HST imaging or Keck/OSIRIS IFU spectroscopy behind adaptive optics (\S\ref{sec:image}).} In addition to the known KBSS targets, many of the KCWI pointings include continuum-detected serendipitous galaxies whose KCWI spectra constitute the first identification of their redshifts. A total of 50 galaxies with $z > 2$ (most of which are fainter in the optical continuum than the KBSS limit of ${\cal R}=25.5$) have been identified. We have included 10 such objects in our analysis sample, based on their having HST observations with sufficient S/N for determination of morphology. A minimum total integration of 2.5 hours (5 hours is more typical) at the position of a galaxy in the KCWI data cube was also imposed, in order to ensure the relative uniformity of the data set.
Ultimately, after inspection of the high-resolution images (\S\ref{sec:image}), six galaxies were removed because of source ambiguity or obvious contamination from nearby unrelated objects in the images, and two were excluded because they were not sufficiently resolved by HST to measure their position angle reliably (see \S\ref{sec:pa}). \begin{figure} \includegraphics[width=8cm]{plots/z_distribution.pdf} \caption{ Redshift distribution of the galaxy sample. The blue histogram represents the full sample of 59 galaxies. The orange histogram shows the distribution for the 38 galaxies with nebular redshift measurements from MOSFIRE near-IR spectra, while the rest are calibrated based on \citet{chen20} using rest-UV absorption lines or \ensuremath{\rm Ly\alpha}\ emission from Keck/LRIS and/or KCWI spectra. The mean (median) redshift of the full sample is 2.42 (2.29). } \label{fig:zdistribution} \end{figure} The final sample to be considered in this work contains 59 galaxies, listed in Table~\ref{tab:sample}. The redshifts given in Table~\ref{tab:sample} are based on MOSFIRE nebular spectra for 38 of the 59 galaxies, which have a precision of $\sim \pm 20$ km~s\ensuremath{^{-1}\,}\ and should accurately reflect the galaxy systemic redshift. {In the remaining cases, features in the rest-frame UV spectra, including \ensuremath{\rm Ly\alpha}\ emission and strong interstellar absorption features (e.g., \ion{Si}{II}, \ion{Si}{IV}, \ion{C}{II}, \ion{C}{IV}, \ion{O}{I}), were used to estimate the systemic redshift of the galaxy, using the calibration described by \citet{chen20}.
Briefly, the method uses the statistics -- based on several hundred KBSS galaxies with both nebular and UV observations -- of the velocity offsets between nebular redshifts and redshifts defined by UV spectral features for samples divided by their UV spectral morphology, i.e., \ensuremath{\rm Ly\alpha}\ emission only, \ensuremath{\rm Ly\alpha}\ emission and interstellar absorption, or interstellar absorption only. The mean offsets for the appropriate sub-sample were applied to the UV-based redshifts in cases where nebular redshifts are not available; systemic redshifts obtained using such calibrations have an uncertainty of $\simeq 100$ km~s\ensuremath{^{-1}\,}\ when the rest-UV spectra are of high quality (see, e.g., \citealt{steidel18}). Figure~\ref{fig:zdistribution} shows the redshift distribution of the KCWI sample, which has $z_\mathrm{med} =2.29 \pm 0.40$ (median and standard deviation), for which the conversion between angular and physical scales is 8.21 pkpc per arcsecond with our assumed cosmology. } {Reliable SED fits are available for 56 of the 59 galaxies using the BPASSv2.2 stellar population synthesis model \citep{stanway18} and an SMC extinction curve. This choice of SED model has been shown to predict internally consistent stellar mass ($M_{\ast}$) and star-formation rate (SFR) for high-redshift galaxies having properties similar to those in our sample (see, e.g., \citealt{steidel16, strom17, theios19}), i.e., $8.5 \lower.5ex\hbox{\ltsima} {\rm log}~(M_{\ast}/M_{\odot})\lower.5ex\hbox{\ltsima} 11$, and ${\rm 1 \lower.5ex\hbox{\ltsima} SFR/(M_{\odot}~yr^{-1}) \lower.5ex\hbox{\ltsima} 100}$.
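The angular-to-physical conversion quoted above can be reproduced with a short numerical integration of the assumed flat $\Lambda$CDM cosmology ($\Omega_m=0.3$, $\Omega_\Lambda=0.7$, $h=0.7$). The self-contained sketch below is ours (not the authors' pipeline) and recovers $\simeq 8.21$ pkpc per arcsecond at the median redshift $z=2.29$.

```python
import math

def pkpc_per_arcsec(z, Om0=0.3, Ol0=0.7, h=0.7):
    """Proper kpc subtended by 1 arcsec at redshift z in flat LCDM."""
    c = 299792.458             # speed of light [km/s]
    H0 = 100.0 * h             # Hubble constant [km/s/Mpc]
    E = lambda zz: math.sqrt(Om0 * (1.0 + zz) ** 3 + Ol0)
    # trapezoidal integration of dz / E(z) gives the comoving distance
    n = 20000
    dz = z / n
    s = 0.5 * (1.0 / E(0.0) + 1.0 / E(z)) + sum(1.0 / E(i * dz) for i in range(1, n))
    d_c = (c / H0) * s * dz    # comoving distance [Mpc]
    d_a = d_c / (1.0 + z)      # angular-diameter distance [Mpc]
    rad_per_arcsec = math.pi / (180.0 * 3600.0)
    return d_a * rad_per_arcsec * 1000.0   # proper kpc per arcsec

print(round(pkpc_per_arcsec(2.29), 2))  # 8.21
```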
The distributions of $M_*$ and SFR (Figure \ref{fig:mstar_sfr}) are similar to those of the full KBSS galaxy sample, albeit with a slight over-representation of ${\rm log}~(M_{\ast}/M_{\odot}) \lower.5ex\hbox{\ltsima} 9$ galaxies\footnote{For direct comparison of our sample with the so-called ``star-formation main-sequence'' (SFMS), we note that SED fits that assume solar metallicity SPS models from \citet{bruzual03} and the attenuation curve from \citet{calzetti00} result in a distribution of SFR vs. $M_{\ast}$ entirely consistent with the published SFMS at $z \sim 2$ (e.g., \citealt{whitaker14}).}.} \begin{figure} \includegraphics[width=8cm]{plots/mstar_sfr.pdf} \caption{ Distribution of SFR and $M_*$ of 56/59 galaxies in this sample; the remaining 3 galaxies have insufficient photometric measurements for reliable SED fitting. The normalised distributions of the parent KBSS sample are shown in the orange 1-D histograms. The SFR and $M_*$ of galaxies used in this work are similar to those of the parent KBSS sample; the values are all based on the BPASS-v2.2-binary spectral synthesis models \citep{stanway18}, assuming stellar metallicity $Z=0.002$, SMC extinction as described by \citet{theios19}, and a \citet{chabrier03} stellar initial mass function.} \label{fig:mstar_sfr} \end{figure} \begin{figure} \includegraphics[width=8cm]{plots/ew_distribution.pdf} \caption{ Distribution of \ensuremath{W_{\lambda}(\lya)}\ for the sample (blue histogram). The orange skeletal histogram shows the normalised \ensuremath{W_{\lambda}(\lya)}\ distribution from \citet{reddy09}, which is a subset of the current KBSS sample large enough to be representative. The sample discussed in this work is slightly biased toward Ly$\alpha$-emitting galaxies compared to the parent sample of $z \sim 2-3$ KBSS galaxies.
} \label{fig:ew_distribution} \end{figure} In order to facilitate comparison of the \ensuremath{\rm Ly\alpha}\ emission line strength of sample galaxies with those in the literature, Table~\ref{tab:sample} includes the rest-frame \ensuremath{\rm Ly\alpha}\ equivalent width (\ensuremath{W_{\lambda}(\lya)}) based on extraction of 1-D spectra from the KCWI data cubes over a spatial aperture defined by the extent of the UV continuum light of each galaxy\footnote{In general, the \ensuremath{\rm Ly\alpha}\ emission evaluated within a central aperture represents only a fraction of the total that would be measured in a large aperture that accounts for the diffuse \ensuremath{\rm Ly\alpha}\ halos with spatial extent well beyond that of the FUV continuum (see, e.g., \citealt{steidel11,wisotzki16}); however, the central \ensuremath{W_{\lambda}(\lya)}\ is a closer approximation to values measured in most spectroscopic galaxy surveys.}. The values of \ensuremath{W_{\lambda}(\lya)}\ in Table~\ref{tab:sample} were measured using the method described in \citet{kornei10} (see also \citealt{reddy09}), where positive values indicate net emission and negative values net absorption. Aside from a slight over-representation of galaxies with the strongest \ensuremath{\rm Ly\alpha}\ emission (\ensuremath{W_{\lambda}(\lya)}$\lower.5ex\hbox{\gtsima} 40$ \AA), the sample in Table~\ref{tab:sample} is otherwise typical of UV-continuum-selected galaxies in KBSS, by construction. The \ensuremath{W_{\lambda}(\lya)}\ distribution for the sample in Table~\ref{tab:sample} is shown in Figure~\ref{fig:ew_distribution}. 
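The sign convention for \ensuremath{W_{\lambda}(\lya)}\ (positive for net emission, negative for net absorption) and the rest-frame correction by $(1+z)$ can be illustrated with a minimal sketch; the function and the toy Gaussian line below are ours for illustration and are not the measurement code of \citet{kornei10}.

```python
import numpy as np

def rest_frame_ew(wave_obs, flux, cont, z):
    """Rest-frame equivalent width [Angstrom]; positive = net emission."""
    excess = (flux - cont) / cont     # fractional deviation from the continuum
    # trapezoidal integral over observed-frame wavelength
    w_obs = float(np.sum(0.5 * (excess[1:] + excess[:-1]) * np.diff(wave_obs)))
    return w_obs / (1.0 + z)          # de-redshift to the rest frame

# toy example: Gaussian emission line (amplitude 5, sigma 1 A) on a flat continuum
wave = np.linspace(3990.0, 4010.0, 4001)
cont = np.ones_like(wave)
line = 5.0 * np.exp(-0.5 * ((wave - 4000.0) / 1.0) ** 2)
print(round(rest_frame_ew(wave, cont + line, cont, z=2.29), 2))  # 3.81
```

The observed-frame integral ($5\sqrt{2\pi} \approx 12.5$ Å here) is divided by $(1+z)=3.29$, which is why rest-frame equivalent widths at these redshifts are several times smaller than the directly measured ones.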
In addition to \ensuremath{W_{\lambda}(\lya)}, we also measured the total \ensuremath{\rm Ly\alpha}\ flux (\ensuremath{F_{\mathrm{Ly}\alpha}}) and the ratio between the blue- and red-shifted components of emission for the entire \ensuremath{\rm Ly\alpha}\ halo ($\ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{blue})} / \ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{red})}$), which are discussed further in \S\ref{sec:az_halo}. \begin{sidewaystable*} \centering \caption{Summary of the galaxy sample and the observations. \label{tab:sample}} \begin{threeparttable} \begin{tabular}{lccclrrrcrc} \hline\hline Object\tnote{a} & RA & DEC & $t_\mathrm{exp}$ (KCWI) & \multicolumn{1}{c}{Redshift\tnote{b}} & \multicolumn{1}{c}{\ensuremath{W_{\lambda}(\lya)}\tnote{c}} & \multicolumn{1}{c}{$\ensuremath{F_{\mathrm{Ly}\alpha}}(\mathrm{tot})$\tnote{d}} & \multirow{2}{*}{$\frac{\ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{blue})}}{\ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{red})}}$\tnote{e}} & Imaging\tnote{f} & \multicolumn{1}{c}{$\ensuremath{\rm PA}_0$\tnote{g}} & $\ensuremath{\rm PA}_0$\tnote{h} \\ Identifier & (J2000.0) & (J2000.0) & (hr) && \multicolumn{1}{c}{(\AA)} & \multicolumn{1}{c}{$10^{-17}~\mathrm{erg~s}^{-1}\mathrm{cm}^{-2}$} && Data & \multicolumn{1}{c}{(deg)} & Method \\ \hline Q0100-BX210 & 01:03:12.02 & +13:16:18.5 & 5.4 & 2.2769 & $-3 \pm 2$ & $6.3 \pm 0.5$ & $0.08 \pm 0.05$ & F160W & 37 & (\rnum{1}), (\rnum{2}) \\ Q0100-BX212 & 01:03:12.51 & +13:16:23.2 & 5.2 & 2.1063 & $-4 \pm 3$ & $2.3 \pm 0.7$ & $0.5 \pm 0.29$ & F160W & -57 & (\rnum{1}), (\rnum{2}) \\ Q0100-C7 & 01:03:08.24 & +13:16:30.1 & 5.1 & 3.0408 & $12 \pm 1$ & $9.0 \pm 0.4$ & $0.15 \pm 0.03$ & F160W & 16 & (\rnum{1}), (\rnum{2}) \\ Q0100-D11 & 01:03:08.25 & +13:16:37.6 & 5.1 & 2.5865 & $6 \pm 1$ & $1.7 \pm 0.4$ & $0.11 \pm 0.14$ & F160W & -12 & (\rnum{1}), (\rnum{2}) \\ Q0142-BX165 & 01:45:16.87 & -09:46:03.5 & 5.0 & 2.3576 & $51 \pm 3$ & $37.5 \pm 0.5$ & $0.28 \pm 0.01$ & F160W & 14 & (\rnum{2}) \\ Q0142-BX188 & 
01:45:17.79 & -09:45:05.6 & 5.3 & 2.0602 & $-6 \pm 1$ & $4.6 \pm 0.7$ & $0.47 \pm 0.15$ & F160W & 66 & (\rnum{1}), (\rnum{2}) \\ Q0142-BX195-CS10 & 01:45:17.11 & -09:45:06.0 & 4.5 & 2.7382 (UV) & $-1 \pm 3$ & $2.2 \pm 0.5$ & $0.47 \pm 0.24$ & F160W & 29 & (\rnum{2}) \\ Q0142-NB5859 & 01:45:17.54 & -09:45:01.2 & 5.8 & 2.7399 (UV) & $20 \pm 2$ & $3.7 \pm 0.4$ & $0.65 \pm 0.14$ & F160W & 42 & (\rnum{1}), (\rnum{2}) \\ Q0207-BX144 & 02:09:49.21 & -00:05:31.7 & 5.0 & 2.1682 & $25 \pm 3$ & $44.9 \pm 0.6$ & $0.29 \pm 0.01$ & F140W & -74 & (\rnum{1}), (\rnum{2}) \\ Q0207-MD60 & 02:09:53.69 & -00:04:39.8 & 4.3 & 2.5904 & $-27 \pm 2$ & $1.4 \pm 0.5$ & $-0.16 \pm 0.21$ & F140W & -64 & (\rnum{2}) \\ Q0449-BX88 & 04:52:14.94 & -16:40:49.3 & 6.3 & 2.0086 & $-7 \pm 3$ & $3.3 \pm 0.8$ & $0.16 \pm 0.16$ & F160W & 10 & (\rnum{2}) \\ Q0449-BX88-CS8 & 04:52:14.79 & -16:40:58.6 & 5.2 & 2.0957 (UV) & $4 \pm 1$ & $3.9 \pm 0.8$ & $0.34 \pm 0.17$ & F160W & -39 & (\rnum{1}), (\rnum{2}) \\ Q0449-BX89 & 04:52:14.80 & -16:40:51.1 & 6.3 & 2.2570 (UV) & $22 \pm 2$ & $7.6 \pm 0.4$ & $0.05 \pm 0.03$ & F160W & -32 & (\rnum{1}), (\rnum{2}) \\ Q0449-BX93 & 04:52:15.41 & -16:40:56.8 & 6.3 & 2.0070 & $-7 \pm 2$ & $7.4 \pm 0.8$ & $0.6 \pm 0.13$ & F160W, OSIRIS & -18 & (\rnum{1}), (\rnum{2}), (\rnum{3}) \\ Q0449-BX110 & 04:52:17.20 & -16:39:40.6 & 5.0 & 2.3355 & $35 \pm 2$ & $20.4 \pm 0.6$ & $0.47 \pm 0.03$ & F160W & -48 & (\rnum{1}), (\rnum{2}) \\ Q0821-MD36 & 08:21:11.41 & +31:08:29.4 & 2.7 & 2.583 & $75 \pm 10$ & $23.1 \pm 0.6$ & $0.16 \pm 0.02$ & F140W & 37 & (\rnum{1}), (\rnum{2}) \\ Q0821-MD40 & 08:21:06.96 & +31:07:22.8 & 4.3 & 3.3248 & $27 \pm 3$ & $11.0 \pm 0.4$ & $0.34 \pm 0.03$ & F140W & -7 & (\rnum{2}) \\ Q1009-BX215 & 10:11:58.71 & +29:41:55.9 & 4.0 & 2.5059 & $1 \pm 1$ & $3.7 \pm 0.8$ & $0.34 \pm 0.15$ & F160W & -20 & (\rnum{1}), (\rnum{2}) \\ Q1009-BX218 & 10:11:58.96 & +29:42:07.5 & 5.3 & 2.1091 & $-6 \pm 3$ & $4.0 \pm 0.6$ & $0.18 \pm 0.11$ & F160W & -43 & (\rnum{1}), (\rnum{2}) \\ 
Q1009-BX222 & 10:11:59.09 & +29:42:00.5 & 5.3 & 2.2031 & $-4 \pm 1$ & $4.7 \pm 0.5$ & $0.28 \pm 0.08$ & F160W & -83 & (\rnum{1}), (\rnum{2}) \\ Q1009-BX222-CS9 & 10:11:58.92 & +29:42:02.6 & 5.3 & 2.6527 (UV) & $-1 \pm 4$ & $1.0 \pm 0.4$ & \multicolumn{1}{c}{---} & F160W & 76 & (\rnum{1}), (\rnum{2}) \\ Q1009-D15 & 10:11:58.73 & +29:42:10.5 & 5.3 & 3.1028 (UV) & $-18 \pm 6$ & $4.2 \pm 0.4$ & $0.28 \pm 0.08$ & F160W & 28 & (\rnum{1}), (\rnum{2}) \\ Q1549-BX102 & 15:51:55.98 & +19:12:44.2 & 5.0 & 2.1934 & $50 \pm 3$ & $19.5 \pm 0.5$ & $0.51 \pm 0.03$ & F606W & -87 & (\rnum{1}), (\rnum{2}) \\ Q1549-M17 & 15:51:56.06 & +19:12:52.7 & 3.3 & 3.2212 (UV) & $27 \pm 5$ & $4.3 \pm 0.5$ & $0.0 \pm 0.05$ & F606W & 72 & (\rnum{2}) \\ Q1623-BX432 & 16:25:48.74 & +26:46:47.1 & 3.6 & 2.1825 & $17 \pm 1$ & $10.8 \pm 0.7$ & $0.46 \pm 0.06$ & F160W & 16 & (\rnum{2}) \\ Q1623-BX436 & 16:25:49.10 & +26:46:53.4 & 3.6 & 2.0515 (UV) & $-12 \pm 2$ & $2.6 \pm 1.0$ & \multicolumn{1}{c}{---} & F160W & 12 & (\rnum{2}) \\ Q1623-BX453 & 16:25:50.85 & +26:49:31.2 & 4.8 & 2.1821 & $10 \pm 2$ & $2.8 \pm 0.5$ & $0.33 \pm 0.14$ & F160W, OSIRIS & 31 & (\rnum{3}) \\ Q1623-BX453-CS3 & 16:25:50.35 & +26:49:37.1 & 4.7 & 2.0244 (UV) & $17 \pm 2$ & $5.6 \pm 0.9$ & $0.3 \pm 0.11$ & F160W & -66 & (\rnum{1}), (\rnum{2}) \\ Q1623-C52 & 16:25:51.20 & +26:49:26.3 & 4.8 & 2.9700 (UV) & $4 \pm 1$ & $8.5 \pm 0.4$ & $0.22 \pm 0.03$ & F160W & -13 & (\rnum{2}) \\ Q1700-BX490 & 17:01:14.83 & +64:09:51.7 & 4.3 & 2.3958 & $-3 \pm 4$ & $12.8 \pm 0.5$ & $0.39 \pm 0.03$ & F814W, OSIRIS & 86 & (\rnum{1}), (\rnum{2}), (\rnum{3}) \\ Q1700-BX561 & 17:01:04.18 & +64:10:43.8 & 5.0 & 2.4328 & $-3 \pm 3$ & $8.5 \pm 0.6$ & $0.51 \pm 0.08$ & F814W & 9 & (\rnum{2}) \\ Q1700-BX575 & 17:01:03.34 & +64:10:50.9 & 5.0 & 2.4334 & $0 \pm 2$ & $5.2 \pm 0.6$ & $0.14 \pm 0.08$ & F814W & -34 & (\rnum{2}) \\ Q1700-BX581 & 17:01:02.73 & +64:10:51.3 & 4.7 & 2.4022 & $9 \pm 4$ & $10.6 \pm 0.7$ & $0.28 \pm 0.05$ & F814W & 27 & (\rnum{1}), (\rnum{2}) \\ 
Q1700-BX710 & 17:01:22.13 & +64:12:19.3 & 5.0 & 2.2946 & $-10 \pm 3$ & $8.1 \pm 0.6$ & $0.51 \pm 0.09$ & F814W & -19 & (\rnum{1}), (\rnum{2}) \\ Q1700-BX729 & 17:01:27.77 & +64:12:29.5 & 4.7 & 2.3993 & $14 \pm 2$ & $12.9 \pm 0.5$ & $0.17 \pm 0.03$ & F814W & -3 & (\rnum{1}), (\rnum{2}) \\ Q1700-BX729-CS4 & 17:01:28.95 & +64:12:32.4 & 3.0 & 2.2921 (UV) & $14 \pm 3$ & $5.8 \pm 1.1$ & $0.21 \pm 0.12$ & F814W & 40 & (\rnum{2}) \\ Q1700-BX729-CS9 & 17:01:27.49 & +64:12:25.1 & 4.7 & 2.4014 (UV) & $35 \pm 2$ & $1.1 \pm 0.5$ & $-0.19 \pm 0.21$ & F814W & -59 & (\rnum{1}), (\rnum{2}) \\ Q1700-MD103 & 17:01:00.21 & +64:11:55.6 & 5.0 & 2.3151 & $-24 \pm 1$ & $-0.5 \pm 0.5$ & \multicolumn{1}{c}{---} & F814W & 55 & (\rnum{2}) \\ Q1700-MD104 & 17:01:00.67 & +64:11:58.3 & 5.0 & 2.7465 (UV) & $6 \pm 2$ & $8.2 \pm 0.4$ & $0.11 \pm 0.03$ & F814W & 7 & (\rnum{1}), (\rnum{2}) \\ Q1700-MD115 & 17:01:26.68 & +64:12:31.7 & 4.7 & 2.9081 (UV) & $33 \pm 7$ & $7.8 \pm 0.5$ & $0.05 \pm 0.03$ & F814W & 3 & (\rnum{1}), (\rnum{2}) \\ Q2206-MD10 & 22:08:52.21 & -19:44:13.9 & 5.0 & 3.3269 & $5 \pm 2$ & $4.0 \pm 0.5$ & $0.07 \pm 0.08$ & F160W & 41 & (\rnum{1}), (\rnum{2}) \\ \hline \end{tabular} \end{threeparttable} \end{sidewaystable*} \setcounter{table}{0} \begin{sidewaystable*} \centering \caption{----\textit{continued.}} \begin{threeparttable} \begin{tabular}{lccclrrrcrc} \hline\hline Object\tnote{a} & RA & DEC & $t_\mathrm{exp}$ (KCWI) & \multicolumn{1}{c}{Redshift\tnote{b}} & \multicolumn{1}{c}{\ensuremath{W_{\lambda}(\lya)}\tnote{c}} & \multicolumn{1}{c}{$\ensuremath{F_{\mathrm{Ly}\alpha}}(\mathrm{tot})$\tnote{d}} & \multirow{2}{*}{$\frac{\ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{blue})}}{\ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{red})}}$\tnote{e}} & Imaging\tnote{f} & \multicolumn{1}{c}{$\ensuremath{\rm PA}_0$\tnote{g}} & $\ensuremath{\rm PA}_0$\tnote{h} \\ Identifier & (J2000.0) & (J2000.0) & (hr) && \multicolumn{1}{c}{(\AA)} & 
\multicolumn{1}{c}{$10^{-17}~\mathrm{erg~s}^{-1}\mathrm{cm}^{-2}$} && Data & \multicolumn{1}{c}{(deg)} & Method \\ \hline DSF2237b-MD38 & 22:39:35.64 & +11:50:27.5 & 3.7 & 3.3258 & $2 \pm 2$ & $3.0 \pm 0.5$ & $0.44 \pm 0.16$ & F606W & 52 & (\rnum{1}), (\rnum{2}) \\ Q2343-BX379 & 23:46:28.96 & +12:47:26.0 & 4.0 & 2.0427 (UV) & $-7 \pm 2$ & $6.4 \pm 1.3$ & $0.64 \pm 0.23$ & F140W & -60 & (\rnum{1}), (\rnum{2}) \\ Q2343-BX389 & 23:46:28.90 & +12:47:33.5 & 4.0 & 2.1712 & $-16 \pm 1$ & $1.6 \pm 0.6$ & $0.65 \pm 0.49$ & F140W & -50 & (\rnum{2}) \\ Q2343-BX391 & 23:46:28.07 & +12:47:31.8 & 4.0 & 2.1738 & $-16 \pm 1$ & $3.3 \pm 0.7$ & \multicolumn{1}{c}{---} & F140W & 21 & (\rnum{1}), (\rnum{2}) \\ Q2343-BX417 & 23:46:26.27 & +12:47:46.7 & 3.0 & 2.2231 (UV) & $-9 \pm 6$ & $4.8 \pm 0.7$ & $0.49 \pm 0.15$ & F160W & -38 & (\rnum{2}) \\ Q2343-BX418 & 23:46:18.57 & +12:47:47.4 & 4.9 & 2.3054 & $46 \pm 3$ & $40.5 \pm 0.6$ & $0.46 \pm 0.01$ & F140W, OSIRIS & 2 & (\rnum{1}), (\rnum{2}), (\rnum{3}) \\ Q2343-BX418-CS8 & 23:46:18.73 & +12:47:51.6 & 5.1 & 2.7234 (UV) & $66 \pm 15$ & $6.7 \pm 0.4$ & $0.15 \pm 0.04$ & F140W & -67 & (\rnum{1}), (\rnum{2}) \\ Q2343-BX429 & 23:46:25.26 & +12:47:51.2 & 5.2 & 2.1751 & $-8 \pm 5$ & $3.0 \pm 0.5$ & $0.31 \pm 0.15$ & F160W & 28 & (\rnum{1}), (\rnum{2}) \\ Q2343-BX442 & 23:46:19.36 & +12:47:59.7 & 4.6 & 2.1754 & $-18 \pm 4$ & $1.6 \pm 0.7$ & $0.38 \pm 0.52$ & OSIRIS & -12 & (\rnum{4}) \\ Q2343-BX513 & 23:46:11.13 & +12:48:32.1 & 4.8 & 2.1082 & $10 \pm 1$ & $18.6 \pm 0.7$ & $0.71 \pm 0.05$ & F140W, OSIRIS & -9 & (\rnum{1}), (\rnum{2}), (\rnum{3}) \\ Q2343-BX513-CS7 & 23:46:10.55 & +12:48:30.9 & 4.8 & 2.0144 (UV) & $-22 \pm 2$ & $4.2 \pm 1.0$ & $0.53 \pm 0.26$ & F140W & 71 & (\rnum{1}), (\rnum{2}) \\ Q2343-BX587 & 23:46:29.17 & +12:49:03.4 & 5.2 & 2.2427 & $-4 \pm 3$ & $7.2 \pm 0.5$ & $0.22 \pm 0.05$ & F160W & 44 & (\rnum{1}), (\rnum{2}) \\ Q2343-BX587-CS3 & 23:46:28.24 & +12:49:07.2 & 4.8 & 2.5727 (UV) & $2 \pm 2$ & $1.3 \pm 0.4$ & $0.17 \pm
0.26$ & F140W & -53 & (\rnum{1}), (\rnum{2}) \\ Q2343-BX587-CS4 & 23:46:28.62 & +12:49:04.8 & 5.2 & 2.8902 (UV) & $-13 \pm 1$ & $2.8 \pm 0.4$ & $0.08 \pm 0.08$ & F140W & -59 & (\rnum{1}), (\rnum{2}) \\ Q2343-BX610 & 23:46:09.43 & +12:49:19.2 & 5.3 & 2.2096 & $8 \pm 2$ & $13.5 \pm 0.7$ & $0.26 \pm 0.04$ & F140W & 22 & (\rnum{2}) \\ Q2343-BX660 & 23:46:29.43 & +12:49:45.6 & 5.0 & 2.1742 & $20 \pm 3$ & $25.7 \pm 0.6$ & $0.26 \pm 0.02$ & F140W, OSIRIS & 37 & (\rnum{1}), (\rnum{2}), (\rnum{3}) \\ Q2343-BX660-CS7 & 23:46:29.82 & +12:49:38.7 & 4.2 & 2.0788 (UV) & $41 \pm 1$ & $2.3 \pm 0.9$ & \multicolumn{1}{c}{---} & F140W & 89 & (\rnum{2}) \\ Q2343-MD80 & 23:46:10.80 & +12:48:33.2 & 4.8 & 2.0127 & $-26 \pm 3$ & $2.8 \pm 0.8$ & $0.22 \pm 0.24$ & F140W & 9 & (\rnum{1}), (\rnum{2}) \\ \hline\addlinespace[1ex] \end{tabular} \begin{tablenotes}\footnotesize \item[a]{The ``CS'' objects are continuum serendipitous objects discussed in \S\ref{sec:sample}. Their names follow those of previously known KBSS galaxies nearby, which may not be physically associated with the CS objects.} \item[b]{If marked as ``UV'', the systemic redshift was estimated from features in the rest-UV spectra, calibrated as described by \citet{chen20}. The typical uncertainties on UV-estimated systemic redshifts are $\delta v \equiv c\delta z_{\rm sys}/(1+z_{\rm sys})\simeq 100~{\rm km~s^{-1}}$. Otherwise, $z_{\rm sys}$ was measured from nebular emission lines in rest-optical (MOSFIRE) spectra, with $\delta v \simeq 20~{\rm km~s^{-1}}$. } \item[c]{Rest-frame \ensuremath{\rm Ly\alpha}\ equivalent width. Details discussed in \S\ref{sec:sample}. } \item[d]{Total \ensuremath{\rm Ly\alpha}\ flux. See \S\ref{sec:az_halo} for more details. } \item[e]{Flux ratio between the blueshifted and redshifted components of \ensuremath{\rm Ly\alpha}\ emission. Details in \S\ref{sec:az_halo}.} \item[f]{F140W and F160W images obtained using HST-WFC3-IR, F606W and F814W images from HST-ACS.
} \item[g]{Typical uncertainty: $\pm 10^\circ$. } \item[h]{Methods used to measure $\ensuremath{\rm PA}_0$: (\rnum{1}) using GALFIT on HST images; (\rnum{2}) using the pixel intensity second moment on HST images; (\rnum{3}) pixel intensity second moment on OSIRIS H$\alpha$ map; (\rnum{4}) kinematics of H$\alpha$ emission. Details of the methods are given in \S\ref{sec:pa_methods}.} \end{tablenotes} \end{threeparttable} \end{sidewaystable*} \section{Observations and Reductions} \label{sec:obs} \subsection{KCWI} \label{sec:kcwi} The KCWI data discussed in the present work were obtained between 2017 September and 2020 November, in all cases using the medium-scale slicer made up of 24 slices of width 0\secpoint69 and length 20\secpoint3 on the sky. The instrumental setup uses the BL volume phase holographic (VPH) grating with an angle of incidence that optimises the diffraction efficiency near 4200 \AA, with the camera articulation angle set to record the spectra of each slice with a central wavelength of $\sim 4500$ \AA. A band-limiting filter was used to suppress wavelengths outside of the range 3500--5600 \AA\ to reduce scattered light; the useful common wavelength range recorded for all 24 slices in this mode is 3530--5530 \AA, with a spectral resolving power ranging from $R \simeq 1400$ at 3530 \AA\ to $R \simeq 2200$ at 5530 \AA. At the mean redshift of the sample ($\langle z \rangle = 2.42$), \ensuremath{\rm Ly\alpha}\ falls at an observed wavelength of $\sim 4160$ \AA, where $R\sim 1650$. The E2V 4k$\times$4k detector was binned 2$\times$2, which provides spatial sampling along slices of 0\secpoint29 pix$^{-1}$. Because each slice samples 0\secpoint69 in the dispersion direction, the effective spatial resolution element is rectangular on the sky, with an aspect ratio of $\sim 2.3:1$.
We adopted the following approach to the observations, designed to ensure that the slicer geometry with respect to the sky is unique on each 1200~s exposure so that the effective spatial resolution on the final stacked data cube is close to isotropic. Typically, a total integration of $\sim 5$ hours is obtained as a sequence of 15 exposures of 1200~s, each obtained with the sky {position angle (PA)} of the instrument rotated by 10--90 degrees with respect to adjacent exposures. Each rotation of the instrument field of view is accompanied by a small offset of the telescope pointing before the guide star is reacquired. In this way, a given sky position is sampled in 15 different ways by the slicer. \subsubsection{KCWI Data Reduction} \label{sec:kcwi_reduction} Each 1200~s exposure with KCWI was initially reduced using the data reduction pipeline (DRP) maintained by the instrument team and available via the Keck Observatory website\footnote{\href{https://github.com/Keck-DataReductionPipelines/KcwiDRP}{https://github.com/Keck-DataReductionPipelines/KcwiDRP}}. The DRP assembles the 2D spectra of all slices into a 3D data cube (with spaxels of 0\secpoint29$\times$0\secpoint69, the native scale) using a suite of procedures that can be customised to suit particular applications. The procedures include cosmic-ray removal, overscan subtraction and scattered light subtraction, wavelength calibration, flat-fielding, sky-subtraction, differential atmospheric refraction (DAR) correction, and flux-calibration. Wavelength calibration was achieved using ThAr arc spectra obtained using the internal calibration system during the afternoon prior to each observing night. Flat-fielding was accomplished using spectra of the twilight sky at the beginning or end of each night, after dividing by a b-spline model of the solar spectrum calculated using the information from all slices.
For each frame, the sky background was subtracted using the sky-modeling feature in the DRP, after which the sky-subtracted image (still in the 2-D format) was examined in order to mask pixels, in all 24 slices, that contain significant light from sources in the field. The frame was then used to make a new 2-D sky model using only unmasked pixels, and the sky-subtracted image was reassembled into a wavelength-calibrated (rebinned to 1 \AA\ per wavelength bin) data cube, at which time a variance cube, an exposure cube, and a mask cube were also produced. Next, we removed any remaining low frequency residuals from imperfect sky background subtraction by forming a median-filtered cube after masking obvious continuum and extended emission line sources using a running 3D boxcar filter. The typical dimensions of the filter are 100 \AA\ (100 pixels) in the wavelength direction, 16 pixels (4\secpoint6) along slices, and 1 pixel (0\secpoint69) perpendicular to the slices, with the last ensuring slice-to-slice independence. Minor adjustments to the filter dimensions were made as needed. If the running boxcar encountered a large region with too few unmasked pixels to allow a reliable median determination, then the pixel values in the filtered cube were interpolated from the nearest adjacent regions for which the median was well-determined. Finally, the median-filtered cube for each observed frame was subtracted from the data. We found that this method proved effective for removing scattered light along slices caused by bright objects within the KCWI field of view. Because neither Keck \RNum{2} nor KCWI has an atmospheric dispersion corrector, each cube was corrected for differential atmospheric refraction (DAR) (i.e., the variation of the apparent position of an object with wavelength) using the elevation and parallactic angle at the midpoint of the exposure and a model of the atmosphere above Maunakea.
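The residual-sky removal step described above can be sketched in a few lines. The following is an illustrative reimplementation (not the DRP's own code), assuming a cube ordered as (wavelength, along-slice, across-slice) and a boolean mask flagging source pixels; the function name and the small filter window (chosen so a toy example runs quickly, in place of the $\sim$100-pixel $\times$ 16-pixel $\times$ 1-pixel window quoted in the text) are ours.

```python
import numpy as np
from scipy.ndimage import generic_filter

def subtract_sky_residuals(cube, source_mask, size=(11, 5, 1)):
    """Remove low-frequency residual sky from a (wavelength, along-slice,
    across-slice) cube with a running median filter that ignores voxels
    flagged as astronomical sources (cf. the masked 3D boxcar in the text)."""
    work = cube.astype(float).copy()
    work[source_mask] = np.nan          # exclude sources from the median
    sky = generic_filter(work, np.nanmedian, size=size, mode="nearest")

    # Where too few unmasked voxels left the median undefined, interpolate
    # along the wavelength axis from the nearest well-determined values.
    for j in range(sky.shape[1]):
        for k in range(sky.shape[2]):
            col = sky[:, j, k]
            bad = np.isnan(col)
            if bad.any() and not bad.all():
                w = np.arange(col.size)
                col[bad] = np.interp(w[bad], w[~bad], col[~bad])
    return cube - sky
```

Because the median is computed from unmasked voxels only, a bright source survives the subtraction while the smooth residual background beneath and around it is removed.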
Finally, each cube was flux-calibrated using observations of one or more spectrophotometric standard stars, obtained with the same instrument configuration and selected from a list recommended by the KCWI documentation. Prior to stacking, the reduced data cubes of individual exposures covering the same sky region must be aligned spatially and rotated to account for differences in the PA of the instrument with respect to the sky on different exposures. To accomplish this, we averaged the DAR-corrected cubes along the wavelength axis to create pseudo-white-light images, rotated each to the nominal sky orientation (N up and E left) based on the World Coordinate System (WCS) recorded in the header, and cross-correlated the results to determine the relative offsets in RA and Dec, which we found to have a precision of $\simeq 0\secpoint03$ ({root mean square,} RMS). The offsets were applied to the WCS in the header of each cube, and all cubes for a given pointing were then resampled to a common spatial grid with spatial sampling of $0\secpoint3 \times 0\secpoint3$ using a 2D \textit{drizzle} algorithm in the \textit{Montage} package\footnote{\href{http://montage.ipac.caltech.edu/}{http://montage.ipac.caltech.edu/}}, with a drizzle factor of 0.7. If the wavelength grids differed among cubes, the spectrum in each spaxel was resampled to a common grid using a cubic spline. The resampled cubes were then stacked by averaging, weighted by exposure time. A white light image of the final stacked data cube was used to determine small corrections to the fiducial RA and Dec to align with other multiwavelength data of the same region. \subsection{High Spatial Resolution Imaging} \label{sec:image} For galaxies in the mass range of our sample at $z\sim 2.5$, the typical half-light diameter is $\sim 4$ pkpc, which corresponds to $\simeq 0\secpoint5$ \citep{law12c}.
To obtain reliable measurements of the orientation of the galaxy's projected major and minor axes (see \S\ref{sec:pa}), high resolution images are crucial. We gathered existing data from two sources: space-based optical or near-IR images from HST/ACS or HST/WFC3, and H$\alpha$ maps obtained using the Keck/OSIRIS integral field spectrometer (\citealt{larkin06}) behind the WMKO laser guide star adaptive optics (LGSAO) system, which typically provides spatial resolution of $0\secpoint11-0\secpoint15$ (\citealt{law07,law09,law12b}). \begin{table} \caption{Summary of HST Observations \label{tab:hst}} \begin{threeparttable} \centering \begin{tabular}{lccccc} \hline\hline Field & Inst. & Filter & Prog. ID & PI & $t_\mathrm{exp}$ (s)\tnote{a} \\\hline DSF2237b & ACS & F606W & 15287 & A. Shapley\tnote{b} & 6300 \\ Q0100 & WFC3 & F160W & 11694 & D. Law\tnote{c} & $~8100$ \\ Q0142 & WFC3 & F160W & 11694 & D. Law\tnote{c} & $~8100$ \\ Q0207 & WFC3 & F140W & 12471 & D. Erb & $~~800$ \\ Q0449 & WFC3 & F160W & 11694 & D. Law\tnote{c} & $~8100$ \\ Q0821 & WFC3 & F140W & 12471 & D. Erb & $~~800$ \\ Q1009 & WFC3 & F160W & 11694 & D. Law\tnote{c} & $~8100$ \\ Q1549 & ACS & F606W & 12959 & A. Shapley\tnote{d} & $12000$ \\ Q1623 & WFC3 & F160W & 11694 & D. Law\tnote{c} & $~8100$ \\ Q1700 & ACS & F814W & 10581 & A. Shapley\tnote{e} & $12500$ \\ Q2206 & WFC3 & F160W & 11694 & D. Law\tnote{c} & $~8100$ \\ Q2343 & WFC3 & F140W & 14620 & R. Trainor & $~5200$ \\ Q2343 & WFC3 & F160W & 11694 & D.
Law\tnote{c} & $~8100$ \\\hline \end{tabular} \begin{tablenotes}\footnotesize \item[a]{Defined as the median exposure time across the whole FoV.} \item[b]{\citet{pahl21}.} \item[c]{\citet{law12a,law12c}.} \item[d]{\citet{mostardi15}.} \item[e]{\citet{peter07}.} \end{tablenotes} \end{threeparttable} \end{table} The {\it HST/WFC3-IR} images in this work were obtained using either the F140W or F160W filters, from the programs listed in Table~\ref{tab:hst}, with spatial resolution of $\sim 0\secpoint16$ and $\sim 0\secpoint18$, respectively. The HST/ACS images were taken in either the F606W or F814W filters, with spatial resolution of $\sim 0\secpoint09$ {(full width at half maximum; FWHM)}. In all cases, overlapping exposures were aligned and combined using \textit{DrizzlePac}\footnote{\href{https://www.stsci.edu/scientific-community/software/drizzlepac}{https://www.stsci.edu/scientific-community/software/drizzlepac}}. For the Q2343 field, where galaxies have been observed with comparably deep observations in two filters, we selected the image with the smaller estimated uncertainty in the measured position angle (see \S\ref{sec:pa_methods}). The Keck/OSIRIS \ensuremath{\rm H\alpha}\ maps of six galaxies included in the sample are presented by \citet{law07, law09, law12b,law18}; details of the observations and data reduction can be found in those references. \section{Galaxy Azimuthal Angle} \label{sec:pa} \subsection{Motivation} \begin{figure} \centering \includegraphics[width=6cm]{plots/azimuth.pdf} \caption{ A schematic diagram of how the galaxy azimuthal angle ($\phi$) is defined in this work and how it might be related to the origin and kinematics of gas in the CGM under common assumptions of a bi-conical outflow with accretion along the disk plane.
For a galaxy viewed in projection on the sky, as in this diagram, one would naively expect inflows to align with the projected galaxy major axis and outflows to align with the minor axis. The impact parameter, $D_\mathrm{tran}$, is defined as the projected distance from the center of the galaxy. The galaxy azimuthal angle, $\phi$, is the projected angle on the sky measured with respect to the center of the galaxy, starting from the projected galaxy major axis. } \label{fig:def_az} \end{figure} The physical state of the CGM is controlled by the competition between accretion of new material onto the galaxy, and outflows driven by processes originating at small galactocentric radii. In what one might call the ``classic'' picture {(see \citealt{veilleux05, tumlinson17} for review articles)}, based on observations of nearby starburst galaxies, inflowing and outflowing gas occurs preferentially along the major and minor axes, respectively, as in the schematic diagram in Figure~\ref{fig:def_az}. It is well-established around star-forming galaxies at $z \lesssim 1$ that outflows driven by energy or momentum from stellar feedback (radiation pressure from massive stars, supernovae) originating near the galaxy center tend to escape along the direction that minimizes the thickness of the ambient {interstellar medium (ISM)} that would tend to slow or prevent outflows from escaping the galactic disk. Meanwhile, accretion of cool gas from the {intergalactic medium (IGM)} is believed to occur in quasi-collimated filamentary flows that carry significant angular momentum and thus would tend to approach the inner galaxy in a direction parallel to the disk plane {\citep{nelson19, peroux20}}. If feedback processes are sufficiently vigorous, the accretion is also most likely along directions that do not encounter strong outflows.
Projected onto the plane of the sky, these considerations imply that sightlines along the polar direction (the projected minor axis) would tend to intersect outflowing material, while sightlines whose azimuthal angle (see Figure~\ref{fig:def_az}) lies closer to the PA of the projected major axis would tend to intersect accreting material. At redshifts $z \lower.5ex\hbox{\ltsima} 1$, there is strong support for this general geometric picture from the statistics of the incidence and strength of rest-UV absorption lines observed in the spectra of background sources, as a function of the azimuthal angle of the vector connecting the sightline and the center of the galaxy. As might be expected from a picture similar to Figure~\ref{fig:def_az}, sightlines with $\phi$ close to the minor axis intersect gas with a large range of velocities, which kinematically broadens the observed absorption complexes comprising many saturated components (e.g., \citealt{bordoloi11, bouche12, kacprzak12, schroetter19}). For azimuthal angles $\phi$ close to the major axis, the absorption features are strong due to the high covering fraction and column density of low-ionization gas near the disk plane, and broadened by the kinematics of differential rotation. The same picture has been supported by cosmological simulations, in which the gas metallicity, radial velocity, and the amount of outflowing mass all show significant differences along the galaxy major and minor axes \citep{nelson19, peroux20}.
It remains unclear, however, whether this geometric picture should be applied to galaxies at $z \sim 2$, where a large fraction of galaxies, particularly those with ${\rm log}(M_{\ast}/M_{\odot}) \lower.5ex\hbox{\ltsima} 10$, appear to be ``dispersion dominated'', in which the rotational component of dynamical support ($V_{\rm rot}$) is significantly smaller than the apparently random component ($\sigma$) (\citealt{law09, forster09}), and where the central dynamical mass of the galaxy may be dominated by cold gas rather than stars -- in which case any disk would be highly unstable. For such galaxies, there is not always a clear connection between the morphology of starlight and the principal kinematic axes (e.g., \citealt{erb04, law12c}). Meanwhile, a WFC3 imaging survey of similar galaxies by \citet{law12c} strongly favours 3-D triaxial morphologies, rather than inclined disks, as the most suitable model for describing these galaxies. The fact that the vast majority of galaxies with ``down the barrel'' spectra at $z > 2$ have systematically blueshifted interstellar absorption lines and systematically redshifted \ensuremath{\rm Ly\alpha}\ emission suggests that outflows cannot be confined to a small range of azimuth (e.g., \citealt{shapley03, steidel10, jones12}). The spatial and spectral distribution of the \ensuremath{\rm Ly\alpha}\ emission surrounding galaxies may provide crucial insight into the degree of axisymmetry and the dominant magnitude and direction of gas flows in the CGM. In any case, IFU observations of \ensuremath{\rm Ly\alpha}, where both the geometry and kinematics of the extended emission can be mapped, complement information available from absorption line studies.
In the analysis below, we define the galaxy azimuthal angle ($\phi$) as the absolute angular difference between $\ensuremath{\rm PA}_0$, the PA of the galaxy major axis measured from the galaxy stellar continuum light, and the PA of the vector connecting the galaxy centroid with a sky position at projected angular distance $\theta_{\rm tran}$ (or projected physical distance $D_{\rm tran}$), as illustrated schematically in Figure \ref{fig:def_az}, \begin{eqnarray} \label{eq:azimuthal_angle} \phi = |\mathrm{PA} - \ensuremath{\rm PA}_0|. \end{eqnarray} The zero point for measurements of position angle is arbitrary, but for definiteness we measure PA in degrees {east (E) of north (N)}, so that angles increase in the counter-clockwise direction when N is up and E to the left. The use of equation~\ref{eq:azimuthal_angle} to define $\phi$ implies that $0 \le \phi/{\rm deg} \le 90$, and that $\phi = 0$ (90) deg corresponds to the projected major (minor) axis. \begin{figure*} \includegraphics[width=18cm]{plots/hst_gal_mmt.pdf} \caption{ HST images of the 35 galaxies whose $\ensuremath{\rm PA}_0$ were determined using methods (\rnum{1}) and (\rnum{2}). For each image, the four corners show, clockwise from top-left, the KBSS identifier, the FWHM of the PSF, the redshift, and the instrument and filter used. The dashed red line, the dash-dotted yellow line, and the solid white line indicate the direction of $\ensuremath{\rm PA}_0$ measured from GALFIT, from the pixel moment, and the average of the two, respectively.} \label{fig:hst_gal_mmt} \end{figure*} \begin{figure*} \includegraphics[width=18cm]{plots/hst_mmtonly.pdf} \caption{ Same as Figure \ref{fig:hst_gal_mmt}, except that these galaxies do not have a clear central SB peak; their $\ensuremath{\rm PA}_0$ were therefore determined only from the pixel moment (white solid line).
} \label{fig:hst_mmt_only} \end{figure*} \begin{figure*} \includegraphics[width=14cm]{plots/hst_osiris.pdf} \caption{ Similar to Figure \ref{fig:hst_gal_mmt}, showing the $\ensuremath{\rm PA}_0$ of galaxies with both Keck/OSIRIS H$\alpha$ maps from \citet{law09} and HST continuum images. For each panel, the left image shows the OSIRIS H$\alpha$ map, in which the cyan dashed line is the $\ensuremath{\rm PA}_0$ measured from this map using the second pixel moment. The right image shows the HST image, in which the red dashed line and the yellow dash-dotted line show the $\ensuremath{\rm PA}_0$ measured from GALFIT and from the second moment of this image, when available. The white solid lines are the final $\ensuremath{\rm PA}_0$ determined for each galaxy by averaging the OSIRIS and HST measurements\protect\footnotemark. } \label{fig:hst_osiris} \end{figure*} \footnotetext{Since the HST WFC3-IR/F160W image of Q1623-BX453 is unresolved, its $\ensuremath{\rm PA}_0$ was derived only from the OSIRIS H$\alpha$ map.} \begin{figure} \centering \includegraphics[width=8cm]{plots/q2343_bx442.pdf} \caption{ Left: HST WFC3-F160W image of Q2343-BX442. Right: H$\alpha$ velocity map of Q2343-BX442 from \citet{law12b}. The $\ensuremath{\rm PA}_0$ (white solid line) is defined to be perpendicular to its rotational axis. } \label{fig:q2343_bx442} \end{figure} \subsection{Methods} \label{sec:pa_methods} In order to obtain reliable measurements of $\phi$, it is important to measure $\ensuremath{\rm PA}_0$ accurately and consistently. We used up to four different methods to determine $\ensuremath{\rm PA}_0$ for each galaxy. The choice of method depends on the information available; the methods are briefly summarised as follows: \begin{itemize} \item { (\rnum{1}) \it S\'ersic profile fitting of HST images:} In this method, we fit a 2D S\'ersic profile \citep{sersic63} to the host galaxy using \textit{GALFIT} \citep{peng02, peng10}, and determined $\ensuremath{\rm PA}_0$ from the best-fit model parameters.
The point-spread function (PSF) was measured by selecting stellar sources over the full HST pointing using the star classifier in \textit{SExtractor}, which calculates a ``stellarity index'' for each object based on a neural network. We then examined sources with the highest 3\% stellarity indices by eye, and normalised and stacked them to form an empirical PSF. {Our fiducial model consists of a 2D elliptical S\'ersic profile convolved with the PSF. We also included the first-order Fourier mode to handle the asymmetric morphology in most cases. However, over- or under-fitting can cause the fit to fail to converge, or produce unreasonably large fitting errors and residuals. In most cases, the cause of the failure and the required adjustment were evident from the original galaxy image and the model residual. For example, if the residual revealed an additional source, we would add an additional S\'ersic component or a simple scaled PSF to the model, depending on the size of the additional source. Meanwhile, if the primary source showed a triangular morphology, we would add the third-order Fourier mode\footnote{The second-order Fourier mode is degenerate with the ellipticity.} associated with the S\'ersic profile. In certain cases, however, obtaining a successful fit required experimenting with the model by adding or removing certain degrees of freedom. As a rule of thumb, we added or removed degrees of freedom one at a time, adopting the adjustment if it made the fit converge, or significantly diminished the reduced $\chi^2$ and the fitted error of $\ensuremath{\rm PA}_0$. In the end, 23 galaxies were fit with the fiducial model. Nine galaxies (Q0100-C7, Q0821-MD36, Q1009-BX222, Q1009-D15, Q1623-BX453-CS3, Q1700-BX729-CS9, Q2343-BX418, Q2343-BX418-CS8, Q2343-BX660) were fit without the Fourier modes. Five galaxies (Q1009-BX218, Q1700-BX490, Q1700-BX710, Q1700-BX729, Q2343-BX513-CS7) were fit with additional sources.
Three galaxies (Q2343-BX391, Q2343-BX587-CS3, Q2343-BX587-CS4) were fit with third-order Fourier modes.} Galaxies with unsuccessful fits required the alternative methods detailed below. Successful S\'ersic fits were obtained for 40 of 59 galaxies, shown in Figures~\ref{fig:hst_gal_mmt} and \ref{fig:hst_osiris}; galaxies with successful applications of this method tend to be isolated and to have a dominant central high-surface-brightness component.\\ \item { (\rnum{2}) \it Second moment of pixel intensity on HST images:} This method determines the flux-weighted PA from the second moments of the pixel intensity, \begin{eqnarray} \tan (2 {\ensuremath{\rm PA}_0}) = \frac{2 \langle xy \rangle}{\langle x^2 \rangle - \langle y^2 \rangle}, \end{eqnarray} where $x$ and $y$ are the pixel positions relative to the center of the galaxy in the x- and y-directions, and ``$\langle\textrm{...}\rangle$'' indicates the arithmetic mean of the pixel coordinates weighted by pixel flux. Galaxy centers were defined as the point where $\langle x \rangle = 0$ and $\langle y \rangle = 0$. Especially for galaxies that are morphologically complex, we found that the $\ensuremath{\rm PA}_0$ measurements using this method are sensitive to the surface brightness threshold used to define the outer isophotes, and to the spatial resolution of the HST image. Therefore, all ACS images (which have higher spatial resolution than those taken with WFC3-IR) were convolved with a 2D Gaussian kernel to match the PSF to that of the WFC3-IR/F160W images. We tested various surface brightness thresholds, and found that a threshold of 50\% of the peak SB after convolution provides the most consistent measurements: the difference between the $\ensuremath{\rm PA}_0$ values measured using (\rnum{1}) and (\rnum{2}) has an RMS of 10.8 degrees.
\\ \item { (\rnum{3}) \it Second moment of pixel intensity of OSIRIS H$\alpha$ maps:} This method uses the same algorithm as in (\rnum{2}), applied to Keck/OSIRIS H$\alpha$ maps rather than HST continuum images. The SB threshold follows \citet{law09}. There are 6 galaxies whose major axes were determined using this method, including one (Q1623-BX453) whose F160W image is unresolved. For the remaining 5 galaxies, the RMS difference between $\ensuremath{\rm PA}_0$ measured from the HST images and from this method is $\sim 15$ degrees. All 6 galaxies are shown in Figure \ref{fig:hst_osiris}. \\ \item {(\rnum{4}) \it H$\alpha$ Kinematics:} Q2343-BX442 is the only galaxy in this sample that demonstrates not only rotational kinematics from H$\alpha$ emission, but also clear disk morphology \citep{law12b}; its major axis is therefore defined to be perpendicular to its rotational axis (Figure \ref{fig:q2343_bx442}). Because of its complex morphology, we did not apply method (\rnum{1}) to this galaxy. Method (\rnum{2}) is likely dominated by the inner spiral structure and leads to a $\ensuremath{\rm PA}_0$ that differs by $\sim 90$ deg from that determined from kinematics, while method (\rnum{3}) is consistent with this method within 15 deg. \end{itemize} {Table~\ref{tab:sample} lists the adopted measurement of $\ensuremath{\rm PA}_0$ for each galaxy in the sample, as well as the methods used to determine it. In cases where multiple methods were applied, we used the average $\ensuremath{\rm PA}_0$ of the OSIRIS and HST measurements, where the latter value is an average of the results obtained using methods (\rnum{1}) and (\rnum{2}). This way, more weight is given to the method (\rnum{3}) results when available, since method (\rnum{3}) utilises entirely independent data from a different instrument.
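For concreteness, the flux-weighted second-moment estimator used in methods (\rnum{2}) and (\rnum{3}) can be sketched in a few lines of Python. This is an illustrative sketch rather than the actual measurement code; it assumes a background-subtracted cutout image in which pixels below the adopted surface-brightness threshold have been set to zero.

```python
import numpy as np

def second_moment_pa(img):
    """Flux-weighted position angle from second pixel moments.

    img : 2D array of pixel fluxes (thresholded: values below the
    surface-brightness cut set to zero). Returns the PA in degrees,
    measured counterclockwise from the x-axis of the array.
    """
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx]
    total = img.sum()
    # flux-weighted centroid: the point where <x> = <y> = 0
    xc = (x * img).sum() / total
    yc = (y * img).sum() / total
    dx, dy = x - xc, y - yc
    # flux-weighted second moments
    xx = (dx * dx * img).sum() / total
    yy = (dy * dy * img).sum() / total
    xy = (dx * dy * img).sum() / total
    # tan(2 PA) = 2<xy> / (<x^2> - <y^2>)
    return 0.5 * np.degrees(np.arctan2(2.0 * xy, xx - yy))
```

Using \texttt{arctan2} (rather than \texttt{arctan} of the ratio) resolves the quadrant ambiguity of $\tan(2\,\ensuremath{\rm PA}_0)$ and remains well defined when $\langle x^2 \rangle = \langle y^2 \rangle$.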
The robustness of $\ensuremath{\rm PA}_0$ measurements is discussed in the following two subsections.} \subsection{Relationship between the Kinematic and Morphological Major Axes} An important issue for the interpretation of morphological measurements of $\ensuremath{\rm PA}_0$ is the extent to which they are likely to be a proxy for the {\it kinematic} major axis. Two of the 17 galaxies in Figure~\ref{fig:hst_mmt_only} (Q2343-BX389 and Q2343-BX610) were studied as part of the SINS IFU survey and subsequently re-observed with the benefit of adaptive optics by \citet[hereafter FS18]{fs18}; the latter found that both have clear rotational signatures, and that the inferred differences between their morphological major axis $\ensuremath{\rm PA}_0$ and the kinematic major axis $\ensuremath{\rm PA}_{\rm kin}$ are $\Delta\ensuremath{\rm PA} \equiv |\ensuremath{\rm PA}_0 - \ensuremath{\rm PA}_{\rm kin}| = 2\pm5$ deg and $27\pm10$ deg for Q2343-BX389 and Q2343-BX610, respectively. From their full sample of 38 galaxies observed at AO resolution, FS18 found $\langle \Delta {\rm PA} \rangle = 23$ deg (mean), with median $\Delta {\rm PA}_{\rm med} =13$ deg. Among the 6 galaxies which also have H$\alpha$ velocity maps from \citet{law09} (used for method (\rnum{3}) above), two (Q0449-BX93 and Q1623-BX453) show no clear rotational signature, so that $\ensuremath{\rm PA}_{{\rm kin}}$ is indeterminate. Small amounts of velocity shear were observed for Q2343-BX513 and Q2343-BX660, both having $\ensuremath{\rm PA}_{\rm kin}$ consistent with $\ensuremath{\rm PA}_0$\footnote{Q2343-BX513 was also observed by FS18, who found $\ensuremath{\rm PA}_{\rm kin} = -35$ deg and $\Delta {\rm PA}=40\pm12$ deg; our adopted $\ensuremath{\rm PA}_0=-9$ deg differs by 26 deg from their $\ensuremath{\rm PA}_{\rm kin}$ value.}. However, for Q2343-BX418, the implied $\ensuremath{\rm PA}_{\rm kin}$ is nearly perpendicular to its morphology-based $\ensuremath{\rm PA}_0$.
For Q1700-BX490, the kinematic structure is complicated by the presence of two distinct components: the brighter, western component appears to have $\ensuremath{\rm PA}_{\rm kin} \simeq 20$ deg, which would be consistent with $\ensuremath{\rm PA}_0$ measured from its \ensuremath{\rm H\alpha}\ intensity map; however, if the eastern component, which has a slightly blue-shifted velocity of $\sim 100 \rm km~s\ensuremath{^{-1}\,}$, is included as part of the same galaxy, we find $\ensuremath{\rm PA}_0 = 86$ deg. As summarised in Table~\ref{tab:sample} and Figures~\ref{fig:hst_gal_mmt}-\ref{fig:hst_osiris}, most of the galaxy sample has $\ensuremath{\rm PA}_0$ measured using 2 or more of the methods described above, and the morphologically-determined values of $\ensuremath{\rm PA}_0$ generally agree with one another to within $\sim 10$ deg. We caution that no high-spatial-resolution {\it kinematic} information is available for most of the sample; based on the subset of 9 galaxies that do have such measurements, approximately two-thirds show reasonable agreement between $\ensuremath{\rm PA}_0$ and $\ensuremath{\rm PA}_{\rm kin}$. \subsection{Distribution and robustness of $\ensuremath{\rm PA}_0$ measurements} \label{sec:pa_discussion} \begin{figure} \centering \includegraphics[width=8cm]{plots/pa_correlation.pdf} \caption{ {Comparison of $\ensuremath{\rm PA}_0$ values measured using different methods. Blue points compare methods (\rnum{1}) and (\rnum{2}), while orange points compare methods (\rnum{1}) and (\rnum{3}). The red shaded regions indicate where the absolute differences between the two measurements would be greater than 90 degrees, which is forbidden by the rotational symmetry. The overall RMS $= 11.4$ deg. }} \label{fig:pa_correlation} \end{figure} {To estimate the systematic uncertainty of the $\ensuremath{\rm PA}_0$ measurements, we compare values measured using different methods for the same objects in Figure \ref{fig:pa_correlation}.
Between methods (\rnum{1}) and (\rnum{2}), which are both based on HST images, the measured $\ensuremath{\rm PA}_0$ for 40 galaxies are well-centred on the 1-to-1 relation, with RMS $\simeq 10.8$ deg. Values of $\ensuremath{\rm PA}_0$ measured using method (\rnum{3}) are based on spectral line maps from an IFU rather than continuum light in a direct image, so that the 5 $\ensuremath{\rm PA}_0$ values measured using methods (\rnum{1}) and (\rnum{3}) exhibit larger scatter, with RMS $\simeq 15.6$ deg relative to a 1:1 ratio. Therefore, we conclude that the uncertainty in the final $\ensuremath{\rm PA}_0$ measurements is $\lesssim 15$ deg. } \begin{figure} \centering \includegraphics[width=8cm]{plots/pa_kde.pdf} \caption{ The kernel density estimate (KDE; blue shaded region) of $\ensuremath{\rm PA}_0$ for the galaxy sample, normalised so that a uniform distribution would have a constant $\mathrm{KDE} = 1$. The KDE was constructed using Gaussian kernels of fixed $\sigma = 10^\circ$, corresponding to the opening angle represented by the black block at the top-right. The orange solid lines indicate the values of $\ensuremath{\rm PA}_0$ for the individual galaxies. There is an apparent excess in the KDE of galaxies with $\ensuremath{\rm PA}_0 \simeq 10-40^\circ$, which we attribute to sample variance.} \label{fig:pa_kde} \end{figure} Figure \ref{fig:pa_kde} shows the normalised kernel density estimate (KDE) and the individual measurements of $\ensuremath{\rm PA}_0$. There is an apparent excess in the occurrence rate of values between $\ensuremath{\rm PA}_0 \sim 10-40^\circ$. To evaluate its possible significance, we conducted 1000 Monte-Carlo realisations of a sample of 58 galaxies with randomly assigned $\ensuremath{\rm PA}_0$; we find a $\simeq 5$\% probability that a similar excess arises by chance. The apparent excess is therefore not statistically significant, and is consistent with expected sample variance.
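The Monte-Carlo significance test above can be sketched as follows. This is a minimal illustration: the KDE is wrapped to respect the $180^\circ$ periodicity of position angles, and \texttt{observed\_peak} is a placeholder value standing in for the peak of the measured $\ensuremath{\rm PA}_0$ KDE.

```python
import numpy as np

def pa_kde(pa, sigma=10.0):
    """Wrapped Gaussian KDE of position angles (degrees),
    normalised so that a uniform distribution gives KDE = 1.
    PA is periodic with period 180 deg."""
    grid = np.arange(0.0, 180.0, 1.0)
    d = np.abs(grid[:, None] - np.asarray(pa)[None, :])
    d = np.minimum(d, 180.0 - d)                       # wrap at 180 deg
    kde = np.exp(-0.5 * (d / sigma) ** 2).sum(axis=1)
    kde *= 180.0 / (len(pa) * sigma * np.sqrt(2.0 * np.pi))
    return grid, kde

# chance probability of a KDE peak at least as strong as observed,
# from mock samples of 58 uniformly random position angles
rng = np.random.default_rng(0)
observed_peak = 1.5                                    # placeholder value
peaks = [pa_kde(rng.uniform(0.0, 180.0, 58))[1].max()
         for _ in range(1000)]
p_chance = np.mean(np.array(peaks) >= observed_peak)
```

The normalisation factor is chosen so that a dense uniform sample yields $\mathrm{KDE} \simeq 1$ everywhere, matching the convention of Figure \ref{fig:pa_kde}.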
Furthermore, because the HST and OSIRIS images used to measure $\ensuremath{\rm PA}_0$ were rotated to the nominal North up and East left orientation prior to measurement, we tested whether a significant bias might result from the choice of pixel grid by re-sampling all images with pixel grids oriented in five different directions. We found that the values measured for $\ensuremath{\rm PA}_0$ were consistent to within 10 deg (RMS). \begin{figure*} \centering \subfloat[\label{fig:pa_slicer}]{\includegraphics[width=4.5cm]{plots/pa_slicer.pdf}} \subfloat[\label{fig:kcwi_hst_profile}]{\includegraphics[width=11.5cm]{plots/kcwi_hst_cont.pdf}} \caption{ (a) Histogram of the relative contribution of measurements made at different slicer azimuthal angles ($\phi_\mathrm{slicer}$) in units of total exposure time. The distribution of $\phi_\mathrm{slicer}$ is relatively uniform, with a small excess near $\phi_\mathrm{slicer} \sim 10^\circ$. (b) Stacks of the galaxy continuum images for which the major and minor axes of each galaxy were aligned with the X and Y axes prior to averaging. Each panel shows (\rnum{1}) the pseudo-narrow-band image (rest frame $1230\pm6$ \AA) of the KCWI galaxy continuum, (\rnum{2}) the stacked HST continuum image, after aligning the principal axes in the same way, (\rnum{3}) the HST image convolved with a Gaussian kernel of $\mathrm{FWHM} = 1\secpoint02$ to match the KCWI continuum, (\rnum{4}) the residual between the KCWI continuum and the HST image convolved with the KCWI PSF, (\rnum{5}) the best-fit circular 2D Gaussian profile ($\mathrm{FWHM} = 1.21~\mathrm{arcsec}$) from a direct fit to the KCWI continuum image, and (\rnum{6}) the residual between the KCWI continuum and the isotropic Gaussian model in (\rnum{5}). {The colour maps of (\rnum{1}), (\rnum{2}), (\rnum{3}), and (\rnum{5}) are in log scale, with linear red contours in decrements of 0.17.
The colour maps of (\rnum{4}) and (\rnum{6}) are in linear scale.} The residual map in panel (\rnum{6}) shows a clear dipole residual in the Y (minor axis) direction that is not present in (\rnum{4}). The RMS values in panels (\rnum{4}) and (\rnum{6}) were calculated within $|\Delta x| < 1~\mathrm{arcsec}$ and $|\Delta y| < 1~\mathrm{arcsec}$ to reflect the dipole residual. The ``boxiness'' of the KCWI stack is likely due to the undersampling of KCWI in the spatial direction. Taken together, (b) demonstrates that the KCWI PSF is axisymmetric (with ${\rm FWHM} = 1\secpoint02$), and that the KCWI continuum image is capable of distinguishing the galaxy major and minor axes. } \label{fig:symmetry} \end{figure*} The possible systematic bias introduced by an uneven $\ensuremath{\rm PA}_0$ distribution is mitigated by our observational strategy of rotating the KCWI instrument PA between individual exposures. We define $\ensuremath{\rm PA}_\mathrm{Slicer}$ as the position angle along the slices for each 1200 s exposure, and \begin{eqnarray} \phi_\mathrm{slicer} = |\ensuremath{\rm PA}_\mathrm{Slicer} - \ensuremath{\rm PA}_0|. \end{eqnarray} Figure \ref{fig:pa_slicer} shows the distribution of $\phi_\mathrm{slicer}$ in units of total exposure time. There is a slight tendency for the observations to align with the galaxy major axis (i.e., $\phi_\mathrm{slicer} \sim 0$), which results from our usual practice of beginning a sequence of exposures of a given pointing with one at $\ensuremath{\rm PA}_\mathrm{Slicer} = 0$ deg, together with the aforementioned excess of galaxies with $\ensuremath{\rm PA}_0 \sim 20^\circ$. In order to characterise the PSF of our final KCWI data cubes, and to show that it is effectively axisymmetric, Figure~\ref{fig:symmetry}b shows various composite images of the galaxy sample. Prior to forming the stack, each individual data cube was rotated to align the galaxy major (minor) axis with the X (Y) coordinate of the stacked image.
The KCWI stacked galaxy continuum image [panel (\rnum{1})] was made by integrating each aligned data cube along the wavelength axis over the range $1224 \le \lambda_0/{\rm \AA} \le 1236$ in the galaxy rest frame (i.e., $2000 < \Delta v/\rm km~s\ensuremath{^{-1}\,} \le 5000$). This integration window was chosen to be representative of the UV continuum near \ensuremath{\rm Ly\alpha}\ without including the \ensuremath{\rm Ly\alpha}\ line itself, and to be unaffected by \ensuremath{\rm Ly\alpha}\ absorption from the IGM. {The center of each galaxy is determined with high confidence (see \S\ref{sec:spatial_profile}) by fitting a 2D Gaussian function to the individual galaxy continuum image.} A similar approach -- aligning the principal axes in the high-resolution HST images prior to stacking -- was used to produce the HST stacked continuum image shown in panel (\rnum{2}){, except that the centers in the HST images were measured using the first moment of pixel intensity.} Both stacks were performed in units of observed surface brightness. The FWHM values of the stacked HST image along the major and minor axes are $0\secpoint55$ and $0\secpoint35$, respectively. The stacked KCWI continuum image is well reproduced by the convolution of the $\ensuremath{\rm PA}_0$-aligned stacked HST image [panel (\rnum{2}) of Fig.~\ref{fig:symmetry}b] with an axisymmetric 2-D Gaussian profile with $\mathrm{FWHM} = 1\secpoint02$ [see panels (\rnum{3}) and (\rnum{4}) of Fig.~\ref{fig:symmetry}b]. Comparing the HST image convolved to the KCWI resolution [panel (\rnum{3})] with the best direct fit of a symmetric Gaussian profile to the KCWI continuum image ($\rm FWHM = 1\secpoint21$) [panel (\rnum{5})] shows that even at the $\simeq 1\secpoint02$ resolution of KCWI one can clearly distinguish the major-axis elongation. {The residual map assuming a symmetric Gaussian profile [panel (\rnum{6})] shows a clear dipole residual compared to panel (\rnum{4}).
} Thus, Figure~\ref{fig:symmetry} shows that (1) the PSF of the KCWI cubes is axisymmetric, and thus has not introduced a bias to the azimuthal light distribution measurements, and (2) the spatial resolution is sufficient to recognise non-axisymmetry in the continuum light even on sub-arcsec angular scales. \section{Analyses} \label{sec:analyses} \subsection{\ensuremath{\rm Ly\alpha}\ Spatial Profile} \label{sec:spatial_profile} \begin{figure} \centering \includegraphics[width=8cm]{plots/img_cont_lya.pdf} \caption{ Stacked images of the galaxy continuum (Left) and the continuum-subtracted \ensuremath{\rm Ly\alpha}\ emission (Right) with the X- and Y-axes aligned with the galaxy major and minor axes, respectively. The colour coding is on a log scale, while the contours are linear. The intensity scales have been normalised to have the same peak surface brightness at the center. The \ensuremath{\rm Ly\alpha}\ emission is more extended than the continuum emission. } \label{fig:img_cont_lya} \end{figure} \begin{figure} \centering \includegraphics[width=8cm]{plots/lya_profile.pdf} \caption{ Top panel: The average \ensuremath{\rm Ly\alpha}\ surface brightness profile of the continuum-subtracted composite \ensuremath{\rm Ly\alpha}\ image shown in the righthand panel of Figure \ref{fig:img_cont_lya}. Red points represent the median surface brightness evaluated over all azimuthal angles ($0^\circ < \phi \le 90^\circ$) as a function of projected distance from the galaxy center. Orange and purple curves show the profiles evaluated over $0^\circ < \phi \le 45^\circ$ (major axis) and $45^\circ < \phi < 90^\circ$ (minor axis) azimuthal angles. {The dashed cyan curve shows the best-fit profile of the two-component exponential model. Dotted cyan curves show the two components separately.} The grey profile shows the normalised continuum for comparison.
Bottom: The residual surface brightness profile formed by subtracting the all-azimuth average from the major and minor axis profiles. The residuals are consistent with zero aside from a marginally-significant difference at $\theta_{\rm tran} < 1\secpoint0$, where the \ensuremath{\rm Ly\alpha}\ emission is slightly stronger along the major axis. Unless otherwise noted, the conversion between $\theta_\mathrm{tran}$ and $D_\mathrm{tran}$ for this and later figures assumes a redshift of 2.3, the median redshift of the sample.} \label{fig:lya_profile} \end{figure} To study the dependence of the \ensuremath{\rm Ly\alpha}\ emission profile on galaxy azimuthal angle, we first analyse the \ensuremath{\rm Ly\alpha}\ surface brightness (SB) profile as a function of the impact parameter (or transverse distance, $D_\mathrm{tran}$). Figure \ref{fig:img_cont_lya} compares the stacked continuum and continuum-subtracted narrow-band \ensuremath{\rm Ly\alpha}\ emission, composed in the same way as in Figure \ref{fig:symmetry}. The integration window of the \ensuremath{\rm Ly\alpha}\ image is $-700 < \Delta v / (\mathrm{km~s}^{-1}) \le 1000$ (1213 \AA\ -- 1220 \AA) in order to include most of the \ensuremath{\rm Ly\alpha}\ emission (as shown later in Figure \ref{fig:intro_cylindrical}). Continuum subtraction throughout this work was done spaxel-by-spaxel in the data cube of each target galaxy by subtracting the average flux density in two windows flanking the position of \ensuremath{\rm Ly\alpha}, with $2000 < |\Delta v_{\rm sys}| / \mathrm{km~s}^{-1} < 5000$, where $\Delta v_{\rm sys}$ is the velocity separation relative to rest-frame \ensuremath{\rm Ly\alpha}\ (i.e., two windows each of width $\simeq 12$ \AA\ in the rest frame, [1195-1207] and [1224-1236] \AA). {Similar to \S\ref{sec:pa_discussion}, the center of the galaxy in the KCWI data was determined by fitting a 2D Gaussian profile to the KCWI continuum image.
The fitting box was chosen by eye to include all of the signal from the galaxy while excluding nearby contamination. Although the box size is somewhat arbitrary, we found that the derived centroid is very robust: varying the box size by 4 pixels ($1\secpoint2$) changes the fit result by no more than $0\secpoint01$. The typical fitting error propagated from the reduced $\chi^2$ is also $\sim 0\secpoint01$ (median), i.e., much smaller than the seeing disk. } A 2D Gaussian fit to the profiles finds that the FWHM values are {$1\secpoint279 \pm 0\secpoint003 (\textrm{major axis}) \times 1\secpoint182 \pm 0\secpoint003 (\textrm{minor axis})$ (a $\sim 8\%$ difference) for the continuum emission and $1\secpoint689 \pm 0\secpoint005 \times 1\secpoint705 \pm 0\secpoint005$ ($< 1\%$ difference between major and minor axes)} for the \ensuremath{\rm Ly\alpha}\ emission. Therefore, the \ensuremath{\rm Ly\alpha}\ emission in the stacked image is both more symmetric and more spatially extended than the continuum emission. Figure \ref{fig:lya_profile} shows the median \ensuremath{\rm Ly\alpha}\ SB as a function of $D_\mathrm{tran}$ (red). Each point represents the median SB of a bin of pixels with $\Delta D_\mathrm{tran} = 0.1~\mathrm{pkpc}$. The \ensuremath{\rm Ly\alpha}\ surface brightness profile falls off much more slowly than that of the continuum (grey). {Following \citet{wisotzki16}, we fit the \ensuremath{\rm Ly\alpha}\ SB profile with a two-component model -- a compact ``core'' component and an extended ``halo'' component. Both components are exponential profiles convolved with the KCWI PSF, with the amplitudes, exponential radii, and a uniform background term as free parameters. Further details of the model fitting will be described in a future work (R. Trainor \& N. Lamb, in prep.). The best-fit exponential radii are $r_\mathrm{exp} = 3.71^{+0.06}_{-0.04}$ pkpc and $15.6^{+0.5}_{-0.4}$ pkpc for the core and halo components, respectively.
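A simplified version of this two-component fit can be sketched as follows. This is an illustrative sketch only: it omits the PSF convolution used in the actual fit, and the synthetic data, parameter values, and use of \texttt{scipy.optimize.curve\_fit} are stand-ins for the real fitting machinery.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp(r, a_core, r_core, a_halo, r_halo, bkg):
    """Core + halo exponential surface-brightness model with a
    uniform background term (PSF convolution omitted here)."""
    return (a_core * np.exp(-r / r_core)
            + a_halo * np.exp(-r / r_halo)
            + bkg)

# sanity check on synthetic, noiseless data
r = np.linspace(0.5, 40.0, 80)                 # projected radius, pkpc
truth = (5.0, 3.7, 0.5, 15.6, 0.01)            # illustrative parameters
sb = two_exp(r, *truth)
popt, _ = curve_fit(two_exp, r, sb, p0=(1.0, 2.0, 0.1, 10.0, 0.0))
```

In practice the core and halo components can be partially degenerate, so reasonable initial guesses (the \texttt{p0} argument) matter for convergence.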
The $r_\mathrm{exp}$ of the halo component is close to that of \citet{steidel11} (for KBSS galaxies observed with narrow-band \ensuremath{\rm Ly\alpha}\ imaging), who found that the median-stacked \ensuremath{\rm Ly\alpha}\ profile has $r_\mathrm{exp} = 17.5$ pkpc, but slightly more extended than the profiles found by \citet{wisotzki18} for SF galaxies at $z>3$ (see also \citealt{matsuda12, momose14, leclercq17}).} Dividing the SB profiles into two subsamples with $0^\circ \le \phi < 45^\circ$ (orange) and $45^\circ \le \phi \le 90^\circ$ (purple), representing the galaxy major and minor axes respectively, one can see that the resulting profiles are consistent with one another to within 1$\sigma$, or within $\lesssim 2\times 10^{-19} \mathrm{~erg~s}^{-1}\mathrm{cm}^{-2}\mathrm{arcsec}^{-2}$. The possible exception is at the smallest projected distances ($\theta_\mathrm{tran} < 1~\mathrm{arcsec}$, or $D_\mathrm{tran} \lesssim 8$ pkpc), where the \ensuremath{\rm Ly\alpha}\ emission is marginally enhanced along the major axis; if real, the difference in profiles (the asymmetry) represents $< 2\%$ of the total \ensuremath{\rm Ly\alpha}\ flux. Thus, the composite \ensuremath{\rm Ly\alpha}\ intensity is remarkably symmetric, suggesting an overall lack of a strong statistical connection between the morphology of the starlight and that of the extended \ensuremath{\rm Ly\alpha}\ emission surrounding individual star-forming galaxies at $z \sim 2-3$. \subsection{Cylindrical Projection of 2D Spectra} \label{sec:2dspec} The similarity of the \ensuremath{\rm Ly\alpha}\ surface brightness profiles along the galaxy major and minor axes suggests that extended \ensuremath{\rm Ly\alpha}\ emission depends little on the galaxy orientation. However, the KCWI data cubes allow for potentially finer discrimination through examination of both the surface brightness and kinematics of \ensuremath{\rm Ly\alpha}\ emission as a function of projected galactocentric distance.
To facilitate such comparison, we introduce ``cylindrical projections'' of 2D \ensuremath{\rm Ly\alpha}\ emission. The basic idea behind cylindrical projection, illustrated in Figure~\ref{fig:intro_cylindrical}, is to provide an intuitive visualisation of spatial and spectral information simultaneously. \begin{figure*} \centering \includegraphics[width=6.592cm]{plots/cylindrical1.png} \includegraphics[width=9.408cm]{plots/cylindrical2.pdf} \caption{ {\it Left}: A schematic diagram explaining cylindrically projected 2D (CP2D) spectra. Spaxels with similar $D_\mathrm{tran}$ are averaged to create the emission map in $D_\mathrm{tran}$-$\Delta v$ space. {\it Right}: The composite CP2D spectra of the continuum-subtracted \ensuremath{\rm Ly\alpha}\ emission line map averaged over all 59 galaxies, at all azimuthal angles ($\phi$). The colour-coding of the \ensuremath{\rm Ly\alpha}\ surface intensity is on a log scale to show the full extent of the emission, whereas the contours are spaced linearly and marked as white lines in the colourbar. The stack was formed by shifting the wavelengths of each galaxy data cube to the rest frame, leaving the surface brightness in observed units. The black ellipse at the top right shows the effective resolution of the stacked maps, with principal axes corresponding to the spectral resolution FWHM and the spatial resolution FWHM (see \S\ref{sec:pa_discussion}). Pixels with $\theta_\mathrm{tran} < 0.1~\mathrm{arcsec}$ have been omitted to suppress artifacts owing to the singularity in the cylindrical projection.} \label{fig:intro_cylindrical} \end{figure*} Compared to the standard 2D spectrum one obtains from slit spectroscopy, the cylindrical 2D spectrum replaces the 1D spatial axis (i.e., distance along a slit) with projected distance, by averaging spaxels in bins of $D_\mathrm{tran}$ or, equivalently, $\theta_{\rm tran}$. 
When projected as in the righthand panel of Figure~\ref{fig:intro_cylindrical}, it can also be viewed as the \ensuremath{\rm Ly\alpha}\ spectrum at every projected radial distance (averaged, in this case, over all azimuthal angles), or as the average radial profile at each slice of wavelength or velocity. Figure \ref{fig:intro_cylindrical} shows the stacked cylindrical 2D spectrum formed by averaging the continuum-subtracted data cubes at wavelengths near rest-frame \ensuremath{\rm Ly\alpha}\ for all 59 galaxies in Table~\ref{tab:sample}. This composite cylindrical 2D spectrum, analogous to a ``down-the-barrel'' \ensuremath{\rm Ly\alpha}\ spectrum in 1D, but evaluated as a function of galactocentric distance, shows that the \ensuremath{\rm Ly\alpha}\ emission line is comprised of distinct redshifted and blueshifted components extending to $\pm 1000$ \rm km~s\ensuremath{^{-1}\,}\ with respect to $v_{\rm sys} = 0$, with a minimum close to $v_{\rm sys} = 0$. The vast majority of individual galaxies, and therefore also the average in the stacked profile, have $F_{\ensuremath{\rm Ly\alpha}}(\mathrm{blue}) / F_{\ensuremath{\rm Ly\alpha}}(\mathrm{red}) \simeq 0.3$, and are thus ``red peak dominated''. The two-component spectral morphology extends to at least $\theta_{\rm tran}\simeq 3~\mathrm{arcsec}$ or $D_{\rm tran} \simeq 25~\mathrm{pkpc}$. This overall spectral morphology is most readily explained by \ensuremath{\rm Ly\alpha}\ photons being resonantly scattered by outflowing material, whereby redshifted photons scattered from the receding (opposite) side are more likely to escape in the observer's direction than blue-shifted photons \citep[e.g.][]{pettini01,steidel10,dijkstra14}. As $D_\mathrm{tran}$ increases and the \ensuremath{\rm Ly\alpha}\ SB decreases exponentially, the two \ensuremath{\rm Ly\alpha}\ peaks become less distinct and merge into a symmetric ``halo'' centered on $\Delta v = 0$.
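The construction of a CP2D spectrum from a data cube amounts to averaging spaxels in annular bins of projected radius; a minimal sketch follows (our own variable names and conventions, not those of the reduction pipeline).

```python
import numpy as np

def cp2d(cube, xc, yc, pix_scale, nbins=30, rmax=3.0):
    """Cylindrically projected 2D spectrum.

    cube      : continuum-subtracted data cube, shape (nw, ny, nx)
    xc, yc    : galaxy centroid in pixel coordinates
    pix_scale : arcsec per spatial pixel
    Averages all spaxels in annular bins of projected radius out to
    rmax arcsec; returns (bin centres, array of shape (nbins, nw)).
    """
    nw, ny, nx = cube.shape
    y, x = np.mgrid[0:ny, 0:nx]
    r = np.hypot(x - xc, y - yc) * pix_scale     # projected radius, arcsec
    edges = np.linspace(0.0, rmax, nbins + 1)
    out = np.full((nbins, nw), np.nan)
    for i in range(nbins):
        sel = (r >= edges[i]) & (r < edges[i + 1])
        if sel.any():
            out[i] = cube[:, sel].mean(axis=1)   # mean spectrum of annulus
    return 0.5 * (edges[:-1] + edges[1:]), out
```

Restricting a row of the output to a single annulus gives the azimuthally averaged \ensuremath{\rm Ly\alpha}\ spectrum at that projected radius; a column gives the radial profile at fixed velocity.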
The vast majority of the \ensuremath{\rm Ly\alpha}\ emission is within $-700 < \Delta v / \mathrm{km~s}^{-1} < 1000$. However, we caution that the apparent blue edge at $\Delta v \sim -700~\mathrm{km~s}^{-1}$ of the \ensuremath{\rm Ly\alpha}\ emission in this composite 2D spectrum is likely caused by continuum over-subtraction resulting from the relatively simple technique used in this work. The continuum subtraction assumed a linear interpolation of the continuum spectrum underneath the \ensuremath{\rm Ly\alpha}\ emission (see \S\ref{sec:spatial_profile}), which tends to over-estimate the continuum flux blueward of the systemic redshift of the galaxy due to intrinsic \ensuremath{\rm Ly\alpha}\ absorption in the stellar spectrum and residual effects of the often-strong \ensuremath{\rm Ly\alpha}\ damping wings on which the \ensuremath{\rm Ly\alpha}\ emission is superposed. Improving on this would require a more sophisticated continuum-subtraction method in the inner $\simeq 1\secpoint0$ of the galaxy profile; however, since most of the remainder of this work involves comparison of 2D cylindrical projections with one another, the imperfections in continuum subtraction at small $\theta_{\rm tran}$ are unlikely to affect the results. \subsection{Dependence on azimuthal angle of cylindrically projected 2D (CP2D) spectra} \label{sec:2dspec_azimuthal} To investigate how \ensuremath{\rm Ly\alpha}\ emission depends on the galaxy azimuthal angle, we averaged the CP2D spectrum of each galaxy over two independent bins of azimuthal angle ($\phi$) with respect to the galaxy's major axis: $0^\circ \le \phi < 45^\circ$ (``major axis'') and $45^\circ \le \phi \le 90^\circ$ (``minor axis''), as in \S\ref{sec:spatial_profile} and Figure \ref{fig:lya_profile}. The CP2D spectra covering these azimuth ranges were then combined separately to form the composites that we refer to as ``Major Axis'' and ``Minor Axis'' stacks.
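The azimuthal binning can be sketched as follows. This is an illustrative helper with simplified angle conventions relative to the on-sky definition of PA (the array x-axis stands in for the reference direction); it folds angles using the $180^\circ$ symmetry of position angles.

```python
import numpy as np

def azimuth_fold(x, y, xc, yc, pa0_deg):
    """Azimuthal angle of spaxels relative to the galaxy major axis
    (position angle pa0_deg), folded into [0, 90] deg using the
    180-deg symmetry of position angles."""
    theta = np.degrees(np.arctan2(np.asarray(y) - yc,
                                  np.asarray(x) - xc))
    phi = np.abs(theta - pa0_deg) % 180.0
    return np.minimum(phi, 180.0 - phi)

# spaxels with phi < 45 deg enter the "major axis" bin;
# spaxels with phi >= 45 deg enter the "minor axis" bin
```

Folding into $[0^\circ, 90^\circ]$ ensures that spaxels on opposite sides of the galaxy contribute to the same azimuthal bin.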
To reveal subtle differences in surface brightness and/or velocity along these two directions, we subtracted one from the other -- Figure \ref{fig:2dspec} shows the result. \begin{figure*} \centering \includegraphics[width=16cm]{plots/sb_2d.pdf} \caption{ {\it Left}: The stacked CP2D spectra along the galaxy major ($0^\circ \le \phi < 45^\circ$; top) and minor ($45^\circ \le \phi \le 90^\circ$; bottom) axes. Both the colour-coding and the contours are on linear scales. {\it Right}: The residual CP2D maps: the top panel shows the difference between the Major axis and Minor axis maps, in the same units of surface intensity as in the lefthand panels, where blue colours indicate regions with excess \ensuremath{\rm Ly\alpha}\ surface intensity along the Major axis; orange colours indicate regions where \ensuremath{\rm Ly\alpha}\ is brighter in the Minor axis map. The bottom panel shows the same residual map in units of the local noise level. The most prominent feature is excess \ensuremath{\rm Ly\alpha}\ emission along the galaxy major axis relative to that along the minor axis, at $\Delta v \sim +300~\mathrm{km~s}^{-1}$, extending to $\theta_{\rm tran} \sim 2\secpoint0$ or $D_{\rm tran} \sim 15$ pkpc. } \label{fig:2dspec} \end{figure*} The difference between the CP2D spectra along the major and minor axes (top right of Figure \ref{fig:2dspec}) shows excess emission along the galaxy major axis at $\Delta v \simeq +300~\mathrm{km~s}^{-1}$ -- consistent with the velocity of the peak of the redshifted component in the full composite CP2D spectrum -- within $\theta_{\rm tran} \lower.5ex\hbox{\ltsima} 2$$''~$\ ($D_\mathrm{tran} \lesssim 15~\mathrm{pkpc}$). The \ensuremath{\rm Ly\alpha}\ flux of this asymmetric component amounts to $\lower.5ex\hbox{\ltsima} 2$\% of the total, and has a peak intensity $\simeq 5$\% of that of the peak of the redshifted \ensuremath{\rm Ly\alpha}\ component shown in Fig.~\ref{fig:lya_profile}.
The significance map in the bottom-right panel of Figure~\ref{fig:2dspec} is based on the standard deviation of 100 independent mock CP2D stacks, each made by assigning random $\ensuremath{\rm PA}_0$ values to the galaxies in our sample before combining. The mock stacks were then used to produce a 2D map of the RMS residuals, evaluated in the same way as the observed data. Considering the effective resolution, the overall significance (compared to the standard deviation) of the most prominent feature in the top-right panel of Figure~\ref{fig:2dspec} is $\simeq 2-2.5 \sigma$ per resolution element. Thus, while the residual (excess) feature may be marginally significant statistically, the level of asymmetry relative to the total \ensuremath{\rm Ly\alpha}\ flux is in fact very small. \subsection{The robustness of the residual \ensuremath{\rm Ly\alpha}\ asymmetry} \label{sec:lya_robustness} Despite the marginally significant detection of excess \ensuremath{\rm Ly\alpha}\ emission along galaxy major axes, its robustness warrants scrutiny. In particular, we would like to determine whether the apparent detection is typical of the population or is caused by a few outlier objects having very asymmetric \ensuremath{\rm Ly\alpha}\ emission as a function of azimuthal angle. \begin{figure} \centering \includegraphics[width=8cm]{plots/hist_excess_lya.pdf} \caption{ Distribution of the difference in \ensuremath{\rm Ly\alpha}\ flux integrated over velocity and angular distance in the bins of azimuthal angle corresponding to the ``major'' and ``minor'' axes. Positive (negative) values indicate that \ensuremath{\rm Ly\alpha}\ emission is stronger along the major (minor) axis. The integration is conducted within $0 < \Delta v / (\mathrm{km~s}^{-1}) < 500$ and $\theta_\mathrm{tran} \le 2~\mathrm{arcsec}$ (top) and $0 < \Delta v / (\mathrm{km~s}^{-1}) < 1000$ and $\theta_\mathrm{tran} \le 3~\mathrm{arcsec}$ (bottom).
There are two outliers in the first integration (top panel), while one remains in the second (bottom panel). } \label{fig:hist_excess_lya} \end{figure} Figure \ref{fig:hist_excess_lya} shows histograms of the difference in integrated \ensuremath{\rm Ly\alpha}\ flux between the major and minor axis bins of azimuthal angle. We calculated the differences integrated over two different ranges of $\Delta v$ and $\theta_{\rm tran}$: (1) the range where the excess shown in Figure~\ref{fig:2dspec} is most prominent, $\theta_{\rm tran} \le 2$$''~$\ and $0 \le (\Delta v)/\rm km~s\ensuremath{^{-1}\,} \le 500$, shown in the top panel, and (2) the range which encapsulates most of the redshifted component of \ensuremath{\rm Ly\alpha}, $\theta_{\rm tran} \le 3$$''~$\ and $0 \le (\Delta v)/\rm km~s\ensuremath{^{-1}\,} \le 1000$, shown in the bottom panel. Two galaxies -- Q0142-BX165 and Q2343-BX418 -- are clearly outliers in (1), while only Q0142-BX165 stands out in (2). The distribution of $\Delta F_{\ensuremath{\rm Ly\alpha}}$ for the other 56 galaxies in the sample is relatively symmetric around $\Delta F_{\mathrm{Ly}\alpha} = 0$. \begin{figure*} \centering \includegraphics[width=16cm]{plots/sb_2d_nooutlier1.pdf} \caption{ Same as Figure \ref{fig:2dspec}, but without Q0142-BX165, which is the strongest outlier in terms of excess \ensuremath{\rm Ly\alpha}\ emission along the galaxy major axis. } \label{fig:2dspec_nooutlier1} \end{figure*} \begin{figure*} \centering \includegraphics[width=16cm]{plots/sb_2d_nooutlier2.pdf} \caption{ Same as Figure \ref{fig:2dspec}, but without Q0142-BX165 and Q2343-BX418, the two most significant outliers in the top panel of Figure~\ref{fig:hist_excess_lya}.
No significant excess emission larger than a resolution element remains for the redshifted peak.} \label{fig:2dspec_nooutlier2} \end{figure*} Figures~\ref{fig:2dspec_nooutlier1}~and~\ref{fig:2dspec_nooutlier2} show the stacked \ensuremath{\rm Ly\alpha}\ profiles as in Figure~\ref{fig:2dspec}, but with the strongest outliers removed from the stack. After removing both outliers (Figure~\ref{fig:2dspec_nooutlier2}), the excess \ensuremath{\rm Ly\alpha}\ emission along the galaxy major axis at $\Delta v \simeq 300~\mathrm{km~s}^{-1}$ becomes consistent with noise. Although Q2343-BX418 is not an extreme outlier in terms of the overall \ensuremath{\rm Ly\alpha}\ asymmetry of the integrated redshifted component of \ensuremath{\rm Ly\alpha}\ emission (bottom panel of Fig.~\ref{fig:hist_excess_lya}), when only Q0142-BX165 is removed from the stack (Fig.~\ref{fig:2dspec_nooutlier1}) the composite 2D spectra still show obvious excess \ensuremath{\rm Ly\alpha}\ emission along the galaxy major axis, albeit with slightly reduced significance. Meanwhile, when both outliers are removed from the stack (Figure~\ref{fig:2dspec_nooutlier2}), an excess of \ensuremath{\rm Ly\alpha}\ emission emerges along the {\it minor} axis for the {\it blue} peak, with $-700 \lesssim \Delta v / (\mathrm{km~s}^{-1}) \lesssim -200$ and $\theta_{\rm tran} \lower.5ex\hbox{\ltsima} 2\secpoint5$, at an integrated significance of $\sim 2 \sigma$. The flux of the excess blueshifted emission comprises $\sim 10$\% of the total blueshifted \ensuremath{\rm Ly\alpha}\ flux, with a peak amplitude $\sim 1$\% of the peak \ensuremath{\rm Ly\alpha}\ intensity (i.e., the redshifted peak). We conducted an analysis on the \textit{blueshifted} emission similar to that done for the redshifted asymmetry, with results summarised in Figure \ref{fig:hist_excess_lya_blue}.
There is no obvious outlier in the difference in integrated \ensuremath{\rm Ly\alpha}\ flux between the major and minor axis azimuth bins except for Q0142-BX165, for which the excess again favors the {\it major} axis (i.e., it is in the direction opposite to the apparent blueshifted asymmetry identified in Figure~\ref{fig:2dspec_nooutlier2}). We also successively removed from the \ensuremath{\rm Ly\alpha}\ stack galaxies with extreme excess emission along the minor axis, and found no sudden and significant changes in the composite spectra. Evidently, the blueshifted excess along the minor axis, while of about the same significance as the redshifted excess in the major axis direction, is a general property of the full sample rather than a result of a small number of outliers. \begin{figure} \centering \includegraphics[width=8cm]{plots/hist_excess_lya_blue.pdf} \caption{ Same as Figure \ref{fig:hist_excess_lya}, but with a different velocity range of $-700 < \Delta v / (\mathrm{km~s}^{-1}) \le -200$ and $\theta_\mathrm{tran} \le 2.5~\mathrm{arcsec}$ that focuses on the blueshifted component of \ensuremath{\rm Ly\alpha}\ emission. No individual galaxy is an extreme outlier in terms of excess blueshifted \ensuremath{\rm Ly\alpha}\ along the minor axis. } \label{fig:hist_excess_lya_blue} \end{figure} In summary, we found excess emission along the galaxy major axis for the redshifted component of \ensuremath{\rm Ly\alpha}\ near $\Delta v \simeq 300~\mathrm{km~s}^{-1}$. However, this particular excess appears to be caused by a small number of outlier galaxies with extreme emission along the major axis. After removing them from the composite CP2D spectra, we found excess emission along the galaxy minor axis for the blue peak within $-700 \lesssim \Delta v / (\mathrm{km~s}^{-1}) \lesssim -200$ that does not appear to be driven by individual outliers. Neither detection is particularly significant, with both at the $\sim 2\sigma$ level.
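The mock-stack significance estimate introduced at the start of this section (randomising $\ensuremath{\rm PA}_0$ before stacking) can be sketched as follows; the per-galaxy maps, and the reduction of a random $\ensuremath{\rm PA}_0$ to a random major/minor assignment, are illustrative simplifications rather than our actual procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
n_gal, shape = 58, (100, 40)

# Placeholder per-galaxy CP2D extractions: each galaxy is reduced to a
# pair of maps (its two azimuth ranges). Assigning a random PA_0 is
# simplified here to randomly deciding which map counts as "major".
maps = rng.normal(0.0, 1.0, size=(n_gal, 2) + shape)

def stack_residual(assignment):
    """Mean major-axis stack minus mean minor-axis stack."""
    idx = np.arange(n_gal)
    return (maps[idx, assignment].mean(axis=0)
            - maps[idx, 1 - assignment].mean(axis=0))

# 100 mock stacks with randomised orientations -> per-pixel RMS map.
mocks = np.stack([stack_residual(rng.integers(0, 2, n_gal))
                  for _ in range(100)])
rms_map = mocks.std(axis=0)

# Observed residual expressed in units of the local noise level.
observed = stack_residual(np.zeros(n_gal, dtype=int))
significance = observed / rms_map
```

Randomising the orientations scrambles any real azimuthal signal, so the scatter of the mocks measures the noise expected under the null hypothesis of azimuthally symmetric halos.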
\subsection{A Closer Look at the Extreme Cases} \label{sec:closer_look} \begin{figure} \centering \includegraphics[width=8cm]{plots/q0142-BX165_2d}\\ \includegraphics[width=8cm]{plots/q0142-BX165_img} \caption{ {\it Top}: Same as the top-right panel of Figure \ref{fig:2dspec}, but for a single galaxy, Q0142-BX165, which has the strongest excess \ensuremath{\rm Ly\alpha}\ emission along the galaxy major axis. Note that the colour scale is 10 times that of Figure \ref{fig:2dspec}. {\it Bottom}: The HST F160w image of Q0142-BX165, overlaid with contours from the KCWI continuum image (left) and the narrow-band \ensuremath{\rm Ly\alpha}\ image (right). \label{fig:q0142-BX165}} \end{figure} Figure \ref{fig:q0142-BX165} shows the residual between the cylindrically projected 2D spectra extracted along the major and minor axes of Q0142-BX165, as well as continuum images from both HST and KCWI. Q0142-BX165 has two comparably bright components separated by $\simeq 3.5~\mathrm{pkpc}$ ($\simeq 0\secpoint4$). Careful inspection of Keck/LRIS and Keck/MOSFIRE spectra of this system revealed no sign of an object at a different redshift. There is no significant offset between the KCWI continuum and \ensuremath{\rm Ly\alpha}\ centroids ($\le 0.5~{\rm pix}\sim 0\secpoint15$ separation), indicating that the apparent directional asymmetry in the CP2D spectra is not caused by a spatial shift between the continuum and \ensuremath{\rm Ly\alpha}\ emission. Instead, the narrow-band \ensuremath{\rm Ly\alpha}\ map shows that the \ensuremath{\rm Ly\alpha}\ emission is elongated approximately along the N-S direction. However, after aligning the KCWI and HST astrometry with reference to a nearby compact galaxy, we found that both the KCWI stellar continuum near \ensuremath{\rm Ly\alpha}\ and the narrow-band \ensuremath{\rm Ly\alpha}\ emission are centered near the SW component (see Figure \ref{fig:q0142-BX165}). 
It is possible that the SW component alone is responsible for the \ensuremath{\rm Ly\alpha}\ emission, in which case its $\ensuremath{\rm PA}_0$ would be $-59^\circ$,~ $\sim 70^\circ$ off from what was determined in \S\ref{sec:pa}. However, adopting $\ensuremath{\rm PA}_0 = -59^{\circ}$ would cause BX165 to become an outlier with excess \ensuremath{\rm Ly\alpha}\ emission along the {\it minor} axis. Meanwhile, the elongation of the KCWI continuum aligns with the direction of the separation of the two components, and is roughly consistent with the direction of the \ensuremath{\rm Ly\alpha}\ elongation as well. This seems to suggest that the \ensuremath{\rm Ly\alpha}\ elongation simply reflects the asymmetry of the continuum source, albeit on a larger angular scale; however, as shown in Figure \ref{fig:hst_mmt_only}, many galaxies in the sample possess similar morphologies, but Q0142-BX165 is the only one that shows extraordinary asymmetry in \ensuremath{\rm Ly\alpha}\ emission. In any case, Q0142-BX165 has a uniquely asymmetric \ensuremath{\rm Ly\alpha}\ halo, possibly due to source confusion. Consequently, we exclude it from most of the analysis that follows. \begin{figure} \centering \includegraphics[width=8cm]{plots/q2343-BX418_2d}\\ \includegraphics[width=8cm]{plots/q2343-BX418_img} \caption{ Same as Figure \ref{fig:q0142-BX165} but for Q2343-BX418, the object with the second strongest major axis \ensuremath{\rm Ly\alpha}\ asymmetry. \label{fig:q2343-BX418}} \end{figure} The KCWI data cube for Q2343-BX418 has been analysed previously by \citet{erb18}; here, we consider it in the context of the analysis of Q0142-BX165 above (see Figure~\ref{fig:q2343-BX418}). The difference in peak SB between the major and minor axis CP2D spectra is nearly equal to that of Q0142-BX165 ($2.4 \times 10^{-18}~\mathrm{erg~s}^{-1}\mathrm{cm}^{-2}\mathrm{arcsec}^{-2}\textrm{\AA}^{-1}$ for both). 
However, the spatial extent of the excess emission is significantly smaller for Q2343-BX418. The HST/WFC3, KCWI continuum, KCWI \ensuremath{\rm Ly\alpha}\ images, and OSIRIS-H$\alpha$ images all show that Q2343-BX418 comprises a single component whose centroids in the various images are consistent with one another. Despite its extreme SB asymmetry in \ensuremath{\rm Ly\alpha}, Q2343-BX418 exhibits no other obviously peculiar property compared to the rest of the sample. \section{Integrated Line Flux and Azimuthal Asymmetry} \label{sec:az_halo} \begin{figure*} \centering \includegraphics[width=16cm]{plots/df_halo_corr.pdf} \caption{ Relationship between the flux measurements of anisotropic (excess) \ensuremath{\rm Ly\alpha}\ emission ($\Delta \ensuremath{F_{\mathrm{Ly}\alpha}} = F_\mathrm{major} - F_\mathrm{minor}$) of the blueshifted component (left) and redshifted component (right) of \ensuremath{\rm Ly\alpha}\ emission and properties of the integrated \ensuremath{\rm Ly\alpha}\ halo [{\it Top}: central \ensuremath{\rm Ly\alpha}\ equivalent width, \ensuremath{W_{\lambda}(\lya)}; {\it Middle}: total \ensuremath{\rm Ly\alpha}\ flux, $\ensuremath{F_{\mathrm{Ly}\alpha}}(\mathrm{tot})$; {\it Bottom}: the ratio between the total blueshifted and redshifted components, $\ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{blue})} / \ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{red})}$]. Galaxies without reliable $F_\mathrm{red}$ are omitted in the bottom panel since their $\ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{blue})} / \ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{red})}$ are dominated by noise. {The pink lines and shaded regions show the results, with 1$\sigma$ uncertainties, of a linear regression accounting for the errors in both x- and y-directions.} The vertical dashed line in each panel marks the median value of the (x-axis) property for the full sample.
The yellow diamond in each panel marks the location of Q2343-BX418, the outlier that caused the excess emission of the redshifted peak along the galaxy major axis as discussed in \S\ref{sec:closer_look}.} \label{fig:df_halo} \end{figure*} As shown in the previous sections, the degree of \ensuremath{\rm Ly\alpha}\ halo azimuthal asymmetry varies from case to case in our $z \simeq 2.3$ sample, but the correlation with the morphology of the central galaxy is sufficiently weak that, on average, \ensuremath{\rm Ly\alpha}\ halos are remarkably symmetric and appear to be nearly independent -- both kinematically and spatially -- of the apparent orientation of the galaxy at the center. Thus far we have treated the blueshifted and redshifted components of \ensuremath{\rm Ly\alpha}\ emission separately. However, the overall \ensuremath{\rm Ly\alpha}\ profile is expected to provide clues to the geometry and velocity field of circumgalactic \ion{H}{I}. In this section, we examine the dependence of excess \ensuremath{\rm Ly\alpha}\ emission on \ensuremath{W_{\lambda}(\lya)}, total \ensuremath{\rm Ly\alpha}\ flux $\ensuremath{F_{\mathrm{Ly}\alpha}}(\mathrm{tot})$, and the ratio of the total flux of blueshifted and redshifted components of \ensuremath{\rm Ly\alpha}\ emission [$\ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{blue})} / \ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{red})}$]. The integration windows used to compute the values in Table~\ref{tab:sample} are $\ensuremath{F_{\mathrm{Ly}\alpha}}(\mathrm{tot})$: $\theta_\mathrm{tran} \le 3~\mathrm{arcsec}$ and $-700 < \Delta v / (\mathrm{km~s}^{-1}) \le 1000$; \ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{blue})}: $\theta_\mathrm{tran} \le 2.5~\mathrm{arcsec}$ and $-700 < \Delta v / (\mathrm{km~s}^{-1}) \le 0$; \ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{red})}: $\theta_\mathrm{tran} \le 3~\mathrm{arcsec}$ and $0 < \Delta v / (\mathrm{km~s}^{-1}) \le 1000$.
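In outline, each of these quantities is a sum of the CP2D map over a rectangular window in $(\Delta v, \theta_\mathrm{tran})$; the grids and map below are placeholders, not our calibrated data:

```python
import numpy as np

# Axes of a CP2D map: velocity (km/s) and angular distance (arcsec).
dv = np.linspace(-1000.0, 1500.0, 251)   # velocity channels, 10 km/s steps
theta = np.linspace(0.0, 5.0, 51)        # angular bins, 0.1 arcsec steps
sb = np.ones((dv.size, theta.size))      # placeholder surface-brightness map

def integrate_window(sb, dv, theta, vmin, vmax, theta_max):
    """Sum a CP2D map over a rectangular (Delta v, theta_tran) window."""
    sel_v = (dv > vmin) & (dv <= vmax)
    sel_t = theta <= theta_max
    return sb[np.ix_(sel_v, sel_t)].sum()

# The three windows quoted in the text:
f_tot = integrate_window(sb, dv, theta, -700.0, 1000.0, 3.0)
f_blue = integrate_window(sb, dv, theta, -700.0, 0.0, 2.5)
f_red = integrate_window(sb, dv, theta, 0.0, 1000.0, 3.0)
```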
{Different integration windows were used for the two components in order to optimise the S/N of the integral; they were chosen based on the detected extent of each component in Figure~\ref{fig:2dspec_nooutlier1}, in both $\theta_{\rm tran}$ and $\Delta v$.} {Figure \ref{fig:df_halo} examines whether or not there is a connection between major axis/minor axis asymmetry in \ensuremath{\rm Ly\alpha}\ flux and the overall \ensuremath{\rm Ly\alpha}\ halo properties mentioned above. For each pair of variables in Figure~\ref{fig:df_halo}, we indicate the value of the Pearson coefficient ($r$) and the corresponding probability $p$ that the observed data set could be drawn from an uncorrelated parent sample. We also performed a linear regression using Orthogonal Distance Regression (ODR) in \textit{SciPy}, which accounts for the estimated uncertainty in both x- and y-variables. As can be seen from the figure, most of the Pearson tests show no significant correlation, and the linear regressions yield slopes consistent with zero. However, the Pearson test for the relation between $F_\mathrm{major}(\mathrm{blue}) - F_\mathrm{minor}(\mathrm{blue})$ and $\ensuremath{F_{\mathrm{Ly}\alpha}}(\mathrm{tot})$ (middle left panel of Figure \ref{fig:df_halo}) yields $p = 0.02$, suggesting a marginally significant trend in which the asymmetry of the blueshifted component of \ensuremath{\rm Ly\alpha}\ favors the minor axis when $F_{\ensuremath{\rm Ly\alpha}}(\rm tot)$ is weak, and the major axis when $F_{\ensuremath{\rm Ly\alpha}}({\rm tot})$ is strong. } The second relationship that stands out is that between $F_\mathrm{major}(\mathrm{red}) - F_\mathrm{minor}(\mathrm{red})$ and $\ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{blue})} / \ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{red})}$ (bottom right panel of Figure \ref{fig:df_halo}), for which a non-parametric test for correlation is not significant.
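The two statistics quoted above can be computed in outline with \textit{SciPy}; the data arrays here are synthetic stand-ins, not the measurements plotted in Figure \ref{fig:df_halo}:

```python
import numpy as np
from scipy import odr, stats

# Synthetic stand-ins: x = a halo property (e.g. total Ly-alpha flux),
# y = Delta F = F_major - F_minor, with measurement errors on both axes.
rng = np.random.default_rng(2)
x = rng.normal(10.0, 3.0, 56)
y = 0.5 * x + rng.normal(0.0, 2.0, 56)
xerr = np.full_like(x, 0.5)
yerr = np.full_like(y, 1.0)

# Pearson r, and the probability p that the data could be drawn from an
# uncorrelated parent sample.
r, p = stats.pearsonr(x, y)

# Orthogonal Distance Regression: a straight-line fit that accounts for
# the estimated uncertainties in both coordinates.
linear = odr.Model(lambda beta, x: beta[0] * x + beta[1])
data = odr.RealData(x, y, sx=xerr, sy=yerr)
fit = odr.ODR(data, linear, beta0=[0.0, 0.0]).run()
slope, intercept = fit.beta
slope_err, intercept_err = fit.sd_beta
```

Unlike ordinary least squares, ODR minimises the error-weighted perpendicular distances to the line, which is appropriate when neither variable is error-free.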
The linear regression results in a marginally significant positive slope, indicating that as the blueshifted component of \ensuremath{\rm Ly\alpha}\ approaches the strength of the redshifted component, there is a tendency for excess emission along the major axis; for galaxies with $\ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{blue})} / \ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{red})} \lower.5ex\hbox{\ltsima} 0.3$ (i.e., smaller than the median value for the sample), the tendency is for excess \ensuremath{\rm Ly\alpha}\ emission along the minor axis. \begin{figure*} \centering \includegraphics[width=16cm]{plots/sb2d_halo.pdf} \caption{ The difference between the CP2D spectra of \ensuremath{\rm Ly\alpha}\ emission for the major and minor axes. The maps show the residual for CP2D stacks for two sub-samples representing those below (left) and above (right) the sample median. From top to bottom, the \ensuremath{\rm Ly\alpha}\ halo properties are the central \ensuremath{\rm Ly\alpha}\ equivalent width (\ensuremath{W_{\lambda}(\lya)}), the integrated \ensuremath{\rm Ly\alpha}\ flux ($\ensuremath{F_{\mathrm{Ly}\alpha}} ({\rm tot})$), and the flux ratio between the blueshifted and redshifted components ($\ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{blue})} / \ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{red})}$). This figure confirms that the blueshifted excess \ensuremath{\rm Ly\alpha}\ emission favours weak \ensuremath{\rm Ly\alpha}\ emitting galaxies.} \label{fig:sb2d_halo} \end{figure*} To further explore the reliability of the correlations, we split the sample into two halves according to the overall halo properties, and compared the subtracted CP2D spectra between the major and minor axes. Figure \ref{fig:sb2d_halo} shows the result.
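The median-split stacks can be sketched as follows, with placeholder per-galaxy maps and a generic scalar halo property standing in for \ensuremath{W_{\lambda}(\lya)}, $\ensuremath{F_{\mathrm{Ly}\alpha}}({\rm tot})$, or the flux ratio:

```python
import numpy as np

rng = np.random.default_rng(3)
n_gal, shape = 57, (100, 40)

# Per-galaxy major- and minor-axis CP2D maps, plus one scalar halo
# property per galaxy (e.g. the central Ly-alpha equivalent width).
major = rng.normal(0.0, 1.0, size=(n_gal,) + shape)
minor = rng.normal(0.0, 1.0, size=(n_gal,) + shape)
prop = rng.normal(0.0, 1.0, n_gal)

# Split at the sample median and stack each half separately;
# each residual map is (mean major stack) - (mean minor stack).
low = prop < np.median(prop)
residual_low = major[low].mean(axis=0) - minor[low].mean(axis=0)
residual_high = major[~low].mean(axis=0) - minor[~low].mean(axis=0)
```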
As discussed in \S\ref{sec:closer_look}, the excess \ensuremath{\rm Ly\alpha}\ emission along the galaxy major axis for the red peak at $\Delta v \simeq 300~\mathrm{km~s}^{-1}$ within $\theta_\mathrm{tran} \lesssim 1~\mathrm{arcsec}$ can be attributed to a single outlier (Q2343-BX418), which happens to fall above the median in all three quantities considered in Figure~\ref{fig:sb2d_halo} (i.e., on the righthand panels of the figure). The subtracted CP2D spectra also indicate that essentially the entire excess \ensuremath{\rm Ly\alpha}\ emission for the blue peak along the minor axis -- as identified earlier (\S\ref{sec:closer_look}) -- is contributed by galaxies below the median \ensuremath{W_{\lambda}(\lya)}\ and $F_{\ensuremath{\rm Ly\alpha}}({\rm tot})$ (top and middle lefthand panels of Figure~\ref{fig:sb2d_halo}). In particular, the integrated significance within $-700 < \Delta v / (\rm km~s\ensuremath{^{-1}\,}) \le -200$ and $\theta_\mathrm{tran} \le 2\secpoint5$ exceeds $2.5\sigma$ for the $\ensuremath{W_{\lambda}(\lya)} < \mathrm{Median}$ bin. Comparison of the top two panels also illustrates the same trends of $F_\mathrm{major}(\mathrm{blue}) - F_\mathrm{minor}(\mathrm{blue})$ vs. \ensuremath{W_{\lambda}(\lya)}\ and \ensuremath{F_{\mathrm{Ly}\alpha}}\ suggested by Figure~\ref{fig:df_halo}.
For the subsamples divided based on the value of $\ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{blue})}/\ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{red})}$, the differences between the major-axis and minor-axis ranges of azimuthal angle are less significant: there is a marginally significant excess of \ensuremath{\rm Ly\alpha}\ emission, more noticeable in the redshifted component, extending over the range $\theta_{\rm tran} \simeq 1-3$ arcsec, and the bin with higher $\ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{blue})}/\ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{red})}$ appears to have a major-axis excess over approximately the same range of angular distances. If Q2343-BX418 is removed from the stack of larger $\ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{blue})}/\ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{red})}$ galaxies, the residual remains, showing that it is not attributable to a single outlier. However, none of the residuals in the bottom panels of Figure~\ref{fig:sb2d_halo} reaches a threshold of $2\sigma$ per resolution element. In summary, a small statistical azimuthal asymmetry of \ensuremath{\rm Ly\alpha}\ halos persists when the galaxy sample is divided into two according to central \ensuremath{W_{\lambda}(\lya)}, total $F_{\ensuremath{\rm Ly\alpha}}$, and the flux ratio of blueshifted and redshifted emission. Perhaps most intriguing is that galaxies with small or negative central \ensuremath{W_{\lambda}(\lya)}\ have a tendency to exhibit excess \ensuremath{\rm Ly\alpha}\ emission along galaxy minor axes extending over a fairly large range of both $\theta_{\rm tran}$ and velocity ($-700 < (\Delta v/\rm km~s\ensuremath{^{-1}\,}) < -200$).
The apparent excess along the major axis of redshifted \ensuremath{\rm Ly\alpha}\ for the subsample with stronger \ensuremath{\rm Ly\alpha}\ emission and larger $\ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{blue})}/\ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{red})}$, on the other hand, is confined to a smaller range of (redshifted) velocities, again roughly coincident with the typical location of the ``red peak''. \section{Finer Division of Galaxy Azimuthal Angles} \label{sec:three_bins} At lower redshifts ($z < 1$), the covering fraction and column density of gas in various ionisation stages are commonly found to be related to the orientation of the gaseous disk relative to the line of sight. Many authors have used background QSO or galaxy sightlines to detect strong \ion{Mg}{II} absorbers associated with galaxies at redshifts low enough that the foreground galaxy orientations can be measured (e.g., \citealt{steidel02,bordoloi11,bouche12,schroetter19, lundgren21}). A common conclusion is that the sightlines giving rise to strong absorption tend to be those located at azimuthal angles corresponding to both the galaxy major and minor axes, but fewer (strong) absorbers are found at intermediate angles, $30^\circ < \phi < 60^\circ$. The kinematics of the absorbing gas also appear to be related to $\phi$ \citep[e.g., ][]{ho17,martin19}, with the broadest (and therefore strongest) systems found along the minor axis, presumably due to fast bi-conical outflows oriented perpendicular to the disk, followed by major axis sightlines sampling accreting or galactic fountain gas sharing the halo's angular momentum and thus exhibiting disk-like rotation. \begin{figure*} \centering \includegraphics[width=16cm]{plots/sb_dpa30.pdf} \caption{ Similar to the right panel of Figure \ref{fig:2dspec}, but showing residual maps between the major and minor (left), major and intermediate (middle), and intermediate and minor (right) axes, with bins of width $\Delta \phi = 30^\circ$.
The strong residual beyond $\theta_\mathrm{tran} > 4~\mathrm{arcsec}$ is caused by a contaminating source near a single object. No sign of a bimodal distribution of \ensuremath{\rm Ly\alpha}\ emission is present. The sample in this figure is the same as in Figure \ref{fig:2dspec_nooutlier1} (Q0142-BX165: discarded; Q2343-BX418: included). } \label{fig:sb30} \end{figure*} If the \ensuremath{\rm Ly\alpha}\ emission around $z = 2$--3 galaxies arises in CGM gas with properties similar to those of the low-ionization metallic absorbers at lower redshift, one might expect to see similar evidence for asymmetries along the two principal axes relative to intermediate azimuthal angles. We tested this possibility by repeating the analysis of \S\ref{sec:2dspec} with $\phi$ divided into three azimuthal bins -- major axis ($0^\circ \le \phi < 30^\circ$), minor axis ($60^\circ \le \phi \le 90^\circ$), and intermediate ($30^\circ \le \phi \le 60^\circ$). CP2D difference spectra among these three bins are shown in Figure~\ref{fig:sb30}. Figure \ref{fig:sb30} shows no obvious sign of a bimodal distribution of \ensuremath{\rm Ly\alpha}\ emission with respect to $\phi$; were such a bimodality present, the middle and right panels would show residuals of opposite sign. Instead, the residual maps of ``Major $-$ Intermediate'' and ``Intermediate $-$ Minor'' suggest a gradual transition of the small asymmetries between major and minor axes in the larger bins of $\phi$ shown previously. In any case, the main conclusion to draw from Figure~\ref{fig:sb30} is once again that \ensuremath{\rm Ly\alpha}\ emission halos are remarkably similar in all directions with respect to the projected principal axes of $z \sim 2-3$ galaxies. \section{Discussion} \label{sec:discussions} \subsection{Comparison to previous work} The morphology of \ensuremath{\rm Ly\alpha}\ emission from the CGM and its relation to the host galaxies has been analysed in various works between $z = 0$ and $z \lesssim 4$.
However, analyses quantifying the \ensuremath{\rm Ly\alpha}\ emission with respect to the host-galaxy orientation are limited. In this section, we attempt to compare the existing research on \ensuremath{\rm Ly\alpha}\ halo morphology with our findings. At very low redshifts ($z = 0.02 - 0.2$), \citet{guaita15} studied the \ensuremath{\rm Ly\alpha}\ halos of 14 \ensuremath{\rm Ly\alpha}-emitting galaxies in the Lyman Alpha Reference Sample (LARS; \citealt{ostlin14}) and their connection to the host-galaxy morphologies. They found that the \ensuremath{\rm Ly\alpha}\ halos for the subset of the sample that would be considered LAEs are largely axisymmetric, and that there is no single galaxy morphological property that can be easily connected to the overall shape of the \ensuremath{\rm Ly\alpha}\ emitting regions. Indeed, the \ensuremath{\rm Ly\alpha}\ images of the individual galaxies in \citet{hayes14} appear to be independent of galaxy orientation beyond $D_{\rm tran} \sim 2$ pkpc. Meanwhile, the stacked \ensuremath{\rm Ly\alpha}\ image of the LARS galaxies is elongated in the same direction as the far-UV continuum, i.e., along the major axis. However, this stack could be significantly affected by sample variance from galaxies that are particularly bright in rest-UV and \ensuremath{\rm Ly\alpha}, as we saw for the KBSS sample before removing outliers. Interestingly, \citet{duval16} studied a special galaxy in the LARS sample which is almost perfectly edge-on -- they found two small \ensuremath{\rm Ly\alpha}\ emitting components (each of extent $< 1$ pkpc) near the disk, consistent with \ensuremath{\rm Ly\alpha}\ emission escaping from the ISM through ``holes'' in the galactic disk, akin to that expected in a classical Galactic fountain \citep{bregman80}.
While this is likely to be driven by stellar feedback allowing \ensuremath{\rm Ly\alpha}\ to escape in the direction perpendicular to the galaxy disk, the observed \ensuremath{\rm Ly\alpha}\ features are far closer to the galaxy than could be measured in our $z \sim 2-3$ sample. Moreover, most of the galaxies in our sample do not have organised thin disks, and so likely have much lower dust column densities obscuring active star forming regions. At $z > 3$, \citet{bacon17} (and subsequent papers from the same group) conducted a systematic survey of \ensuremath{\rm Ly\alpha}\ emitting galaxies in the Hubble Ultra Deep Field using VLT/MUSE. For example, \citet{leclercq17} found significant variation of \ensuremath{\rm Ly\alpha}\ halo morphology among 145 galaxies, and identified correlations between the halo size and the size and brightness of the galaxy UV continuum. However, its connection with the galaxy morphological orientation remains to be investigated. Meanwhile, \citet{mchen20} examined a case of a strongly-lensed pair of galaxies with extended \ensuremath{\rm Ly\alpha}\ emission at $z > 3$. The continuum image in the reconstructed source plane of their system A has at least three subcomponents extending over $> 1$ arcsec, which may be similar to the subsample of galaxies shown in our Figure \ref{fig:hst_mmt_only}, in terms of morphological complexity. In the context of the analysis we describe in the present work, this arrangement of \ensuremath{\rm Ly\alpha}\ with respect to continuum emission would be classified as excess minor axis \ensuremath{\rm Ly\alpha}\ emission. Unfortunately, the \ensuremath{\rm Ly\alpha}\ halo of a second $z > 3$ system is truncated in the source plane reconstruction.
In summary, we compared our result with galaxies and their \ensuremath{\rm Ly\alpha}\ emission morphology at $z\sim 0$ and $z > 3$ in previous work, finding that although our results are qualitatively consistent with earlier results, the comparison is hampered by limited sample sizes, as well as by differences in redshift and intrinsic galaxy properties represented in each sample. \subsection{Theoretical predictions} Many existing studies of the distribution and kinematics of CGM gas have focused on simulated galaxies within cosmological hydrodynamic simulations. \citet{peroux20} analysed how inflowing and outflowing gas might distribute differently as a function of the galactic azimuthal angle within the EAGLE and IllustrisTNG simulations, finding significant angular dependence of the flow rate and direction, as well as the CGM metallicity, with outflows of higher metallicity gas favoring the galaxy minor axis, and accretion of more metal-poor gas tending to occur along the major axis, at $z < 1$. Although \citet{peroux20} focused on $z \sim 0.5$ for their study, they made clear that the predicted trends would weaken significantly with increasing redshift. At much higher redshifts ($z = 5-7$) around galaxies in the FIRE suite of simulations, \citet{smith19} found that \ensuremath{\rm Ly\alpha}\ escape is highly correlated with the direction of the \ion{H}{I} outflow. Naively, one might expect that more \ensuremath{\rm Ly\alpha}\ would be found along the galaxy minor axis, which is the direction along which gaseous outflows would encounter the least resistance to propagation to large galactocentric radii. However, the galaxies experiencing the most active star formation at these redshifts tend to be altered on short timescales ($\sim 10^7$ yrs) by episodic accretion, star formation, and feedback events. 
Thus, the direction of outflows may change on similar timescales, while the CGM will evolve on a longer timescale, possibly erasing any clear signatures of alignment of outflows and \ensuremath{\rm Ly\alpha}\ emission. For the same reason, rapidly star-forming galaxies at $z \sim 2-3$, most of which have not yet established stable stellar disks, are likely to be surrounded by gas that is similarly turbulent and disordered. Meanwhile, analytic or semi-analytic models of \ensuremath{\rm Ly\alpha}\ resonant scattering for idealised outflow geometries have focused primarily on the integrated \ensuremath{\rm Ly\alpha}\ emission profile from the entire galaxy or \ensuremath{\rm Ly\alpha}\ halo. Although many models account for the impact of the geometry and kinematics of gaseous outflows or accretion on the integrated \ensuremath{\rm Ly\alpha}\ emission profiles, there have been fewer efforts to predict the two-dimensional spatial and spectral profiles for detailed comparison to IFU observations. For example, \citet{carr18} constructed a model to predict the \ensuremath{\rm Ly\alpha}\ spectral morphology assuming biconical \ion{H}{I} outflows with resonant scattering. The model predicts the integrated spectral profile without spatial information. In the context of the biconical outflow model, an integrated profile resembling our observation is predicted when the minor axis is perpendicular to the line of sight, with a large outflow having a small opening angle. However, this particular configuration would likely give rise to a highly asymmetric {\it spatial} distribution of \ensuremath{\rm Ly\alpha}\ emission, which we have shown is unlikely to be consistent with our observations. Our results highlight the need for 3-D models of the cool gas in the CGM around rapidly star-forming galaxies at high redshifts, prior to the development of stable disk configurations, for which the assumption of axisymmetry of outflows, at least on average, may be closer to reality.
In any case, predicting the spatial and spectral properties of \ensuremath{\rm Ly\alpha}\ {\it emission} will require realistic treatment of the kinematics, small-scale structure, and radiative transfer of \ensuremath{\rm Ly\alpha}\ photons from their sites of production to their last scattering in the CGM toward an observer. \citet{gronke16b} has devised a model that assumes a two-phase CGM, composed of optically-thick clumps embedded in a highly-ionised diffuse ``inter-clump'' medium. This method has been used to fit the \ensuremath{\rm Ly\alpha}\ profiles at multiple locations within a spatially-resolved ``\ensuremath{\rm Ly\alpha}\ blob'' (LAB) at $z \simeq 3.1$ \citep{li20}. More recently, Li et al. (in prep.) have shown that the clumpy outflow models can be applied successfully to fit multiple regions within a spatially resolved \ensuremath{\rm Ly\alpha}\ halo simultaneously, i.e., using a central source producing \ensuremath{\rm Ly\alpha}\ which then propagates through a clumpy medium with an axisymmetric outflow (see also \citealt{steidel11}, who showed that \ensuremath{\rm Ly\alpha}\ emission halos similar in extent to those presented in this paper are predicted naturally given the observed velocity fields of outflows viewed ``down the barrel'' to the galaxy center and the same radial dependence of clump covering fraction inferred from absorption line studies of background objects). \subsection{Implications for \ion{H}{I} kinematics and \ensuremath{\rm Ly\alpha}\ Radiative Transfer} A spectral profile with a dominant redshifted component of the \ensuremath{\rm Ly\alpha}\ emission line and a weaker blueshifted component -- with peaks shifted by similar $|\Delta v|$ relative to the systemic redshift -- is a typical signature of an expanding geometry, a central \ensuremath{\rm Ly\alpha}\ source function, and resonant scattering.
It has also been shown that, for an ensemble of galaxies drawn from the same KBSS redshift survey, the mean \ensuremath{\rm Ly\alpha}\ {\it absorption} signature measured in the spectra of background objects within $D_{\rm tran} \lower.5ex\hbox{\ltsima} 50$ pkpc ($\theta_{\rm tran} \lower.5ex\hbox{\ltsima} 6\mbox{$''\mskip-7.6mu.\,$}$) is outflow-dominated \citep{chen20}. However, although the observed \ensuremath{\rm Ly\alpha}\ halos and their dependence on galaxy properties may be naturally explained by central \ensuremath{\rm Ly\alpha}\ sources scattering through the CGM, many authors have emphasized that collisionally-excited \ensuremath{\rm Ly\alpha}\ emission from accreting gas (i.e., gravitational cooling -- see e.g., \citealt{fg10,goerdt10,lake15}) and {\it in situ} photoionization by the UV background and/or local sources of ionizing photons (e.g., \citealt{leclercq20}) may also contribute significantly to extended \ensuremath{\rm Ly\alpha}\ halos. Due to the complex nature of \ensuremath{\rm Ly\alpha}\ radiative transfer, our results cannot resolve this issue definitively. However, the fact that the stacked CP2D \ensuremath{\rm Ly\alpha}\ spectra show asymmetry of $< 2$\% between the major and minor axis for the ensemble, combined with (1) the empirical correlation between the central \ensuremath{\rm Ly\alpha}\ line strength ($\ensuremath{W_{\lambda}(\lya)}$) and the \ensuremath{\rm Ly\alpha}\ flux integrated within the entire halo, and (2) the consistently red-peak-dominated kinematics of both the central and integrated \ensuremath{\rm Ly\alpha}\ line, all favor scattering of \ensuremath{\rm Ly\alpha}\ photons produced near the galaxy center through an outflowing, clumpy medium -- at least within $D_\mathrm{tran} \lesssim 30$ pkpc.
The remarkable statistical symmetry of the full 2D profiles, both spatially and spectrally, suggests that most of the galaxies in our sample lack persistent disk-like configurations, an inference supported also by the ubiquity of blue-shifted absorption profiles in DTB spectra of similar galaxies and their lack of dependence on HST morphology (\citealt{law12b,law12c}). As a consequence, outflows do not behave in the manner expected for central starbursts in disk galaxies; in other words, $z \sim 2-3$ galaxies on average appear to be more axisymmetric than their lower-redshift counterparts. This may have important implications for the cycling of gas and metals into and out of forming galaxies. On the other hand, we have detected marginal ($2\sigma$) excess \ensuremath{\rm Ly\alpha}\ emission along the galaxy major axis for the red peak, and along the galaxy minor axis for the blue peak. While most of the excess emission along the major axis can be easily explained by the relatively small sample size and the presence of one or two extreme cases, the excess blueshifted emission along the galaxy minor axis cannot be. While it is possible that the observed asymmetries indicate the prevalence of outflows along the major axis and inflows along the minor axis (i.e., the opposite of the behavior of galaxies at $z < 1$), we regard such an interpretation as unlikely. An important clue may be that most of the blueshifted, minor-axis excess is contributed by galaxies with weaker than the median \ensuremath{\rm Ly\alpha}\ emission strength -- many in that subset have central $\ensuremath{W_{\lambda}(\lya)} < 0$, meaning that \ensuremath{\rm Ly\alpha}\ photons must scatter from higher-velocity material or in directions with a more porous distribution of optically thick gas to have a high probability of escaping. 
Since we can only observe the photons that escape, lower overall \ensuremath{\rm Ly\alpha}\ escape fractions mean that the photons that do escape must, on average, have taken more extreme paths. As shown in \S\ref{sec:az_halo}, the galaxies with relatively weak emission also tend to have lower values of $\ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{blue})} / \ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{red})}$, which makes smaller absolute differences in emission strength vs. azimuthal angle more noticeable. {Finally, we would like to emphasise that the lack of a strong statistical correlation between the morphology of extended \ensuremath{\rm Ly\alpha}\ emission and the galaxy orientation does {\it not} imply that individual \ensuremath{\rm Ly\alpha}\ halos are symmetric. In fact, as shown in Figures \ref{fig:hist_excess_lya}, \ref{fig:hist_excess_lya_blue}, and \ref{fig:df_halo}, individual \ensuremath{\rm Ly\alpha}\ halos are often asymmetric, particularly for objects with weak central \ensuremath{\rm Ly\alpha}\ emission. Rather, our results indicate that morphological variations of \ensuremath{\rm Ly\alpha}\ halos are uncorrelated with the apparent orientation of the host galaxy starlight. } \section{Summary} \label{sec:summary} In this paper, we have presented the first statistical results of an IFU survey of star-forming galaxies at $\langle z \rangle =2.43$ drawn from the Keck Baryonic Structure Survey and observed with the Keck Cosmic Web Imager on the Keck II telescope. The 59 galaxies, with stellar mass and SFR typical of the full KBSS galaxy sample, comprise the subset of the KBSS-KCWI survey with both deep KCWI observations (typical exposure times of $\sim 5$ hours) and existing high-spatial-resolution images from Hubble Space Telescope and/or Keck/OSIRIS.
The high resolution images were used to determine the direction of the projected major axis of the stellar continuum light of each galaxy; {the KCWI IFU data cubes were used to detect spatially- and spectrally-resolved \ensuremath{\rm Ly\alpha}\ emission from the CGM around each galaxy to a limiting surface brightness of $\lesssim 1\times 10^{-19}$ ergs s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ in the composite data (Figure \ref{fig:lya_profile}),} enabling detection of diffuse \ensuremath{\rm Ly\alpha}\ emission halos to projected distances of $\theta_{\rm tran} \simeq 4''$ ($D_{\rm tran}\simeq 30$ pkpc). Our major findings are summarised below: \begin{enumerate} \item We introduced ``cylindrically projected 2D spectra'' (CP2D) in order to visualise and quantify \ensuremath{\rm Ly\alpha}\ spectra as a function of projected galactocentric distance $D_{\rm tran}$. The CP2D spectra are averages of spaxels over a specified range of azimuthal angle ($\phi$) at a common galactocentric distance, enabling statistical analyses of \ensuremath{\rm Ly\alpha}\ spectral profiles and their spatial variation simultaneously. The CP2D spectra clearly show distinct redshifted and blueshifted components of \ensuremath{\rm Ly\alpha}\ emission that remain distinct out to projected distances of at least 25 pkpc, with rest-frame velocity extending over $-700 \lower.5ex\hbox{\ltsima} (\Delta v_{\rm sys}/\rm km~s\ensuremath{^{-1}\,}) \lower.5ex\hbox{\ltsima} 1000$ with respect to the galaxy systemic redshift, with blue and red peaks at $\simeq -300$ \rm km~s\ensuremath{^{-1}\,}\ and $\simeq +300$ \rm km~s\ensuremath{^{-1}\,}, respectively. (\S\ref{sec:2dspec}) \item We stacked the CP2D spectra of individual galaxies after aligning their continuum major axes, in bins of azimuthal angle $\phi$ measured with respect to the major axis.
By creating difference images of the CP2D projections in independent ranges of $\phi$, we showed that residual differences between ``major axis'' and ``minor axis'' -- which would reflect asymmetries in either the spatial or spectral dimension along different ranges of $\phi$ -- are very small, with amplitude $\lower.5ex\hbox{\ltsima} 2\times 10^{-20}~\mathrm{erg~s}^{-1}~\mathrm{cm}^{-2}~\mathrm{arcsec}^{-2}~\textrm{\AA}^{-1}$, corresponding to asymmetries in \ensuremath{\rm Ly\alpha}\ flux amounting to $\le 2\%$ of the total, between galaxy major and minor axis directions. (\S\ref{sec:2dspec_azimuthal}) \item We found little evidence of statistically significant asymmetry of the \ensuremath{\rm Ly\alpha}\ emission, except for an excess ($\simeq 2\sigma$) of \ensuremath{\rm Ly\alpha}\ emission along galaxy major axes for the redshifted component of \ensuremath{\rm Ly\alpha}\ emission, with a peak near $\sim +300~\mathrm{km~s}^{-1}$. However, closer inspection revealed that most of the signal was caused by two galaxies with unusually asymmetric \ensuremath{\rm Ly\alpha}\ halos. After discarding these outliers, another excess emission feature, with integrated significance $\simeq 2\sigma$, manifests as excess emission in the {\it blueshifted} component of \ensuremath{\rm Ly\alpha}\ along galaxy {\it minor} axes. This feature extends over a large range of velocity, and appears to be contributed primarily by galaxies with weaker than the sample median \ensuremath{\rm Ly\alpha}\ emission, and central \ensuremath{\rm Ly\alpha}\ equivalent width $\ensuremath{W_{\lambda}(\lya)} < 0$. The same weak-\ensuremath{\rm Ly\alpha}\ subsample includes many of the highest $M_{\ast}$ galaxies in the sample, as well as many of the galaxies with the smallest flux ratio between blueshifted and redshifted components ($\ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{blue})}/\ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{red})}$).
We speculate that this asymmetry may indicate that significant azimuthal variation of \ensuremath{\rm Ly\alpha}\ emission morphology exists only for galaxies with the smallest \ensuremath{\rm Ly\alpha}\ escape fractions within the sample, i.e., that weaker \ensuremath{\rm Ly\alpha}-emitting galaxies possess more developed rotational structure. Evidently, one sees only the photons that manage to find rare low-\ensuremath{N_{\rm HI}}\ holes or that scatter from the highest-velocity material, both of which are more likely along the minor axis (i.e., similar to expectations based on the standard picture of biconical starburst-driven outflows from disk-like systems). (\S\ref{sec:closer_look}, \S\ref{sec:az_halo}) \item Taken together, the results show that, statistically, the \ensuremath{\rm Ly\alpha}\ halos around galaxies in this sample (and, by extension, around the population of relatively massive star-forming galaxies at $z \sim 2-3$) have remarkably little correlation -- either kinematically or spatially -- with the morphological distribution of stellar continuum light of the host galaxy. The observations suggest that most of the galaxies do not conform to expectations in which outflows are bi-conical and oriented along the minor axis of disk-like configurations, with accretion occurring preferentially along the major axis, as suggested by observations of CGM gas in star-forming galaxies with $z < 1$. Instead, the lack of systematic variation in the kinematics and spatial extent of \ensuremath{\rm Ly\alpha}\ emission with azimuthal angle, together with the fact that $\ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{blue})} / \ensuremath{F_{\mathrm{Ly}\alpha}(\mathrm{red})}$ is universally smaller than unity, suggests that the bulk of \ensuremath{\rm Ly\alpha}\ at galactocentric distances $\lower.5ex\hbox{\ltsima} 30$ pkpc is scattered from the inside out.
The vast majority of scattered photons propagate through a scattering medium whose kinematics are dominated by outflows that are statistically symmetric with respect to the apparent morphology of a galaxy's starlight. (\S\ref{sec:discussions}) \end{enumerate} This paper marks the first attempt to understand the relationship between Ly$\alpha$ emission in the CGM and host-galaxy properties in the KBSS-KCWI sample. As the central DTB \ensuremath{\rm Ly\alpha}\ emission was shown to be correlated with the host galaxy properties for the KBSS galaxies \citep[e.g.,][]{trainor15, trainor16, trainor19}, in forthcoming work we will utilise the CP2D spectra to further investigate the connection between the \ensuremath{\rm Ly\alpha}\ halo and observable properties of the host galaxies (e.g., stellar mass, star-formation rate, star-formation rate surface density, etc.), to understand whether galaxies at $z = 2 - 3$ significantly impact the \ion{H}{I} distribution in the CGM and \textit{vice versa}, and to place additional constraints on the source functions and radiative transfer of \ensuremath{\rm Ly\alpha}\ in forming galaxies. With increased sample size and improved data reduction processes, we are also pushing to higher sensitivity in the stacked spectral cubes, which will allow us to probe the \ensuremath{\rm Ly\alpha}\ spectrum at larger galactocentric distances with high fidelity. \section*{Acknowledgements} This work has included data from Keck/KCWI \citep{morrissey18}, Keck/OSIRIS \citep{larkin06}, Keck/MOSFIRE \citep{mclean12}, Keck/LRIS-B \citep{steidel04}, HST/WFC3-IR and HST/ACS. We appreciate the contribution from the staff of the W. M. Keck Observatory and the Space Telescope Science Institute.
The following software packages have been crucial to the results presented: Astropy \citep{astropy18}, the SciPy and NumPy system \citep{scipy20, numpy20}, QFitsView\footnote{https://www.mpe.mpg.de/~ott/QFitsView/}, CWITools \citep{osullivan20b}, Montage\footnote{http://montage.ipac.caltech.edu/}, GALFIT \citep{peng02,peng10}, and DrizzlePac\footnote{https://www.stsci.edu/scientific-community/software/drizzlepac.html}. This work has been supported in part by grant AST-2009278 from the US NSF, by NASA through grant HST-GO15287.001, and the JPL/Caltech President's and Director's Program (YC, CS). The authors wish to recognise and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. {We would like to thank the anonymous referee for providing constructive feedback.} We would like to acknowledge Kurt Adelberger, Milan Bogosavljevi\'{c}, Max Pettini, and Rachel Theios for their contribution to the KBSS survey. It is a great pleasure for us to thank Don Neill, Mateusz Matuszewski, Luca Rizzi, Donal O'Sullivan, and Sebastiano Cantalupo for their help in handling the KCWI data, and Cameron Hummels and Max Gronke for insightful discussions. YC would like to acknowledge his grandfather, Chen Yizong, who passed away during the preparation of this manuscript. \section*{Data Availability} The composite CP2D spectra and the Python program used to generate figures in this article are available upon reasonable request. \bibliographystyle{mnras}
\section{Introduction} \label{SecIntro} \begin{figure}[t] \centering \includegraphics[width=0.85\linewidth]{figures/intro.pdf} \caption{ Comparative results of four popular point cloud classifiers, \ie PointNet \cite{qi2016pointnet}, PointNet++ \cite{qi2017pointnetplusplus}, DGCNN \cite{Wang2019DynamicGC} and Spherical CNN \cite{Esteves2018LearningSE}, under three settings. Compared to the ideal rotation-free (in blue) and rotation-only (in green) settings, our single-view partial setting (in orange) is more challenging in view of partially visible point clouds and pose variations, which decreases the classification accuracy of all four classifiers by a significant margin. Results are reported on the ModelNet40 for the rotation-free and rotation-only settings and on its partial variant -- PartialModelNet40 -- for the proposed single-view partial setting. }\vspace{-0.5cm} \label{fig:intro} \end{figure} The problem of object classification in the 3D domain aims to categorize object shapes into semantic classes according to their global topological configuration of local geometric primitives. A point cloud is a popular shape representation in 3D object classification owing to its simple structure and easy acquisition as the raw output of 3D scanners such as the LiDAR in real-world applications. A large number of deep networks such as PointNet \cite{qi2016pointnet}, PointNet++ \cite{qi2017pointnetplusplus}, and DGCNN \cite{Wang2019DynamicGC} have been developed to handle data-specific challenges -- sparsity and irregularity of the point cloud representation -- under an ideal condition in which point clouds are uniformly sampled from the whole surface of object CAD instances aligned in category-level canonical poses\footnote{In the ModelNet40 \cite{Wu20153DSA}, object instances are tolerant of having any arbitrary rotation only along the $Z$-axis.} \cite{Wu20153DSA,Chang2015ShapeNetAI,Sedaghat2017OrientationboostedVN}.
Towards a practical setting, \ie relaxing the strict rotation-free assumption on object instances to an arbitrarily posed one, most of the existing convolutional operations on point clouds face a fundamental challenge: their output feature representations are sensitive to rotation changes. In light of this, recent rotation-agnostic works are concerned with eliminating the negative effects of such pose variations. Rotation invariance can be achieved by learning rotation-invariant feature representations \cite{chen2019clusternet,zhang-riconv-3dv19,You2020PointwiseRN}, learning rotation-equivariant features via the Spherical Fourier Transform \cite{Cohen2018SphericalC,Esteves2018LearningSE}, or explicit shape alignment via weakly supervised spatial transformation \cite{qi2016pointnet,Wang2019DynamicGC,Yuan2018IterativeTN}. Although these methods all agree that the poses of object models, from which point clouds are obtained, are often uncontrollable in practice, their point cloud representation is typically complete and uniformly sampled via Farthest Point Sampling (FPS) \cite{qi2017pointnetplusplus}. {From a more practical perspective, point clouds residing in real-world environments are incomplete due to interaction with cluttered contextual objects and self-occlusion restricted by limited observation viewpoints.} In other words, only a \textit{partial} surface can be observed, and thus point clouds can only be scanned from those visible regions. In this work, we are motivated to extend the recent interest in rotation-only point set classification to such a more practical scenario. Specifically speaking, \emph{partial} and \emph{unaligned} point clouds in our setting can significantly increase the difficulty of semantic classification in two respects -- inter-class similarity and intra-class dissimilarity of local shapes -- in comparison with the existing settings.
On one hand, different semantic classes can share common parts, which can lead to geometrically similar point clouds from local regions belonging to different categories. On the other hand, due to limited observation angles, the partially visible surface spatially varies within the whole shape. Therefore, the shape of the partial surface may not uniquely characterize object semantics. It is worth pointing out that classifying aligned partial point clouds is very similar to the rotation-free setting of complete point clouds, as both settings have consistent intra-class geometries from the unique canonical viewing angle. We compare a number of existing representative point set classifiers under the proposed single-view partial setting\footnote{We denote the proposed setting of classifying partially observed and arbitrarily posed point clouds as the single-view partial setting for simplicity.} as well as the existing settings on well-aligned (\ie rotation-free) and unaligned (\ie rotation-only) point sets from a complete object surface, whose results, visualized in Figure \ref{fig:intro}, verify that their performance can be degraded drastically due to partial observations compounded with unconstrained poses. As the category of a surface shape is defined by a global, topological configuration of local shape primitives, point cloud classification under the partial, arbitrarily posed setting is less ambiguous only when the distribution of the partial surface on the complete one is clearly specified. To this end, the problem of partial point cloud classification demands localization of partial shapes on the whole surface as an auxiliary target. Evidently, such a localization problem can be formulated as a supervised regression problem of estimating a rigid transformation (\ie 6D pose) to a category-level canonical rotation and translation.
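The alignment implied by such a pose regression can be sketched in a few lines: if the observed partial cloud is posed as $p_{\rm obs} = \bm{R}p_{\rm can} + \bm{t}$, applying the inverse rigid transform recovers the canonical frame. The NumPy sketch below is illustrative only and not the paper's implementation; the function and variable names are our own.

```python
import numpy as np

def align_to_canonical(points, R_pred, t_pred):
    """Undo an estimated rigid pose: if p_obs = R @ p_can + t,
    then p_can = R^T @ (p_obs - t). `points` has shape (N, 3)."""
    return (points - t_pred) @ R_pred  # row-vector form of R^T (p - t)

# Round-trip check with a known pose: pose a toy cloud, then align it
# back to the canonical frame using the ground-truth rotation/translation.
rng = np.random.default_rng(0)
canonical = rng.standard_normal((128, 3))
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R = Q if np.linalg.det(Q) > 0 else -Q      # a proper rotation in SO(3)
t = np.array([0.3, -0.2, 1.0])
observed = canonical @ R.T + t             # p_obs = R p_can + t (row form)
assert np.allclose(align_to_canonical(observed, R, t), canonical)
```

In the actual pipeline the pose $[\bm{R}|\bm{t}]$ is predicted by a network rather than known, so the recovered cloud is only approximately canonical.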
Consequently, partial point cloud representations after the transformation can readily associate with spatially-distributed object parts to alleviate inter-class ambiguities as well as large intra-class variations. Note that we assume that object instances have been detected from their contexts, which differentiates our setting from amodal 3D object detection \cite{Geiger2012CVPR,Song2015SUNRA} by focusing more on learning semantics from object surface geometries instead of precisely locating objects in the 3D space. This paper proposes an end-to-end learning pipeline which consists of two modules -- 6D pose estimation based shape alignment and point cloud classification. The former is fed with partial point clouds as input and predicts their 6D poses to align the input point set into a canonical space, making subsequent point cloud classification easier. The latter adopts any off-the-shelf point set classifier (\eg PointNet \cite{qi2016pointnet}, PointNet++ \cite{qi2017pointnetplusplus}, DGCNN \cite{Wang2019DynamicGC}) to digest the transformed point clouds and output their class predictions. To benchmark our method and alternative solutions extended from the popular classifiers, we adapt the synthetic ModelNet40 \cite{Wu20153DSA} and the realistic ScanNet \cite{Dai2017ScanNetR3} to the single-view partial setting, where our method consistently achieves superior classification performance. The main contributions of this paper are as follows: \begin{packed_itemize} \item This paper introduces a realistic single-view partial setting of point cloud classification, which poses additional challenges for point set classification, \ie inter-class similarity and large intra-class variations of partially observed and arbitrarily posed geometries.
\item This paper reveals that specifying the distribution of partial point clouds on the object surface can alleviate semantic ambiguities, which encourages us to propose a new classification algorithm with an auxiliary task of object pose estimation. \item Experimental results on two partial point cloud classification datasets verify our motivation and our superior performance relative to several alternative solutions based on popular complete point cloud classifiers. \end{packed_itemize} Datasets, source codes, and pre-trained models will be released at {\href{url}{https://github.com/xzlscut/PartialPointClouds}}. \section{Related Works} \label{SecRelatedWorks} \vspace{0.1cm}\noindent\textbf{Object Classification on Point Sets --} {As a pioneer, PointNet \cite{qi2016pointnet} started the trend of designing deep networks that operate on irregular point-based surfaces, but it does not consider geometric patterns in local regions that benefit semantic analysis, which has encouraged a number of recent works including point-wise MLP-based methods \cite{qi2017pointnetplusplus,Li2018SO,Yang2019ModelingPC,zhao2019pointweb,Duan2019StructuralRR,Yan2020PointASNLRP}, convolutional algorithms \cite{Li2018PointCNNCO,Esteves2018LearningSE,Komarichev2019ACNNAC,thomas2019KPConv,Lan_2019_CVPR}, and graph-based methods \cite{Wang2019DynamicGC,Shen2018MiningPC,chen2019clusternet}.} Recently, the problem of rotation-agnostic classification on point clouds has attracted wide attention; it imposes rotation equivariance or invariance on feature learning with respect to pose transformations.
A number of methods have been proposed to either exploit local geometric features invariant to rigid rotation transformations \cite{chen2019clusternet,zhang-riconv-3dv19,You2020PointwiseRN}, exploit group transformation actions to learn rotation-equivariant features \cite{Cohen2018SphericalC,Esteves2018LearningSE}, or learn with explicit transformations \cite{qi2016pointnet,Wang2019DynamicGC,Yuan2018IterativeTN}. All these rotation-agnostic methods can be readily applied to our single-view partial setting (more technical details are given in Sec. \ref{SecAlternativeSolution}), but cannot perform well without any knowledge about the configuration of the observed shape on the whole object surface (see Table \ref{tab:evaluation} in Sec. \ref{SecExps}). \vspace{0.1cm}\noindent\textbf{Point Classification Towards Real-World Data --} Compared to synthetic point clouds sampled from object CAD models, real-world point clouds are typically incomplete and unaligned, in addition to having an irregular distribution of points, such as the presence of holes. Very few existing works \cite{Uy2019RevisitingPC,Yuan2018IterativeTN} have explored handling the compound challenges of rotation variations and partially visible shapes. Uy \etal \cite{Uy2019RevisitingPC} propose a realistic point cloud classification benchmark, whose setting is similar to ours, but the main differences lie in two respects. On one hand, object shapes in the ScanObjectNN \cite{Uy2019RevisitingPC} only have arbitrary rotations along the vertical axis, instead of the $SO(3)$ rotation group containing all possible rotation transformations in $\mathbb{R}^3$ as in our setting.
{On the other hand, point clouds in the ScanNet \cite{Dai2017ScanNetR3} and SceneNN \cite{Hua2016SceneNNAS} were obtained via fusion of a depth image sequence with multiple viewpoints, and thus objects segmented by the ScanObjectNN \cite{Uy2019RevisitingPC} in these two datasets cannot reflect single-view partiality in real data.} Consequently, although both methods are interested in semantic analysis on partial point clouds, their method focuses more on robust classification performance with noisy point cloud input, while our method treats pose variations as the main challenge. Yuan \etal \cite{Yuan2018IterativeTN} share a similar observation on realistic point cloud classification, handling partial and unaligned point clouds by designing an iterative transformation network (ITN) to align partial point clouds before they are fed into a classification module. The ITN \cite{Yuan2018IterativeTN} in principle searches the $SO(3)$ space for a set of optimal centers in pose space that makes partial shapes after transformation easier to classify, and therefore its transformation module is directly supervised by the final classification goal. The weakly supervised pose transformation in the ITN can be less effective in coping with visually similar partial point clouds from different classes due to the lack of their specific distribution on the object surface. In contrast, our method focuses on localizing the partial shape on the whole surface, which can be clearly defined in a category-level canonical space. As a result, supervised regression of object pose is treated as a proxy task to specify the locations of shape primitives, whose aligned shape can be readily obtained via rigid pose transformation. Experiments in Sec. \ref{SecExps} verify the superior performance of our method over the ITN.
\vspace{0.1cm}\noindent\textbf{6D Pose Estimation --} The problem of 6D pose estimation aims to predict object poses (\ie a rotation and translation) in the camera space with respect to a canonical pose space. Existing 6D pose estimation methods can be categorized into two groups -- instance-level \cite{Xiang2018PoseCNNAC,Li2019DeepIMDI,Wang2019DenseFusion6O,Xu2019WPoseNetDC} and category-level \cite{Wang2019NormalizedOC, Chen2020LearningCS, Tian2020ShapePD}. In instance-level 6D pose estimation \cite{Xiang2018PoseCNNAC,Li2019DeepIMDI,Wang2019DenseFusion6O,Xu2019WPoseNetDC}, a typical assumption is that CAD models of the object instances to be estimated are available. In this sense, these methods focus more on learning to match the partial observation to the CAD model without considering shape changes across instances. {In category-level 6D pose estimation \cite{Wang2019NormalizedOC, Chen2020LearningCS, Tian2020ShapePD}, without CAD models of each instance during testing, the problem becomes more challenging, as it must cope with shape variations of unseen object instances and thus relies on a high-quality category-level mean shape.} The object pose estimator in our method falls into the latter group but does not estimate the size of object instances, in contrast to existing category-level pose estimators. The rationale is that the main task of partial point cloud classification normalizes all input point clouds into a unit ball, rather than performing object detection as in \cite{Wang2019NormalizedOC, Chen2020LearningCS, Tian2020ShapePD}. Existing works on category-level 6D pose employ MLP-based feature encoding, which is less effective at capturing rotation changes. This observation encourages us to use the Spherical CNN \cite{Esteves2018LearningSE} as the backbone encoder for object pose estimation in our scheme, which performs better (see Table \ref{tab:backbone} in Sec. \ref{SecExps}).
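For reference, a standard way to quantify how well a predicted rotation matches the ground truth is the geodesic distance on $SO(3)$, \ie the angle of the relative rotation $\bm{R}_{\rm pred}^{\top}\bm{R}_{\rm gt}$. The sketch below illustrates this common error metric; it is not necessarily the training loss used in this work.

```python
import numpy as np

def rotation_geodesic_error(R_pred, R_gt):
    """Angle (radians) of the relative rotation R_pred^T @ R_gt,
    i.e. the geodesic distance between two rotations on SO(3)."""
    cos = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))  # clip guards round-off

# A 90-degree rotation about the z-axis is 90 degrees from the identity.
Rz90 = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
assert np.isclose(rotation_geodesic_error(np.eye(3), Rz90), np.pi / 2)
assert np.isclose(rotation_geodesic_error(Rz90, Rz90), 0.0)
```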
\section{Problem Definition} \label{SecProbDefinition} For the conventional point set classification with $\mathcal{X}_c$ and $\mathcal{Y}$ denoting the input and output space, given $N_s$ training samples, each consisting of a complete point cloud $\mathcal{P}_c= \{ \bm{p}_{i} \in \mathbb{R}^3 \}_{i=1}^{N_c} \in \mathcal{X}_c $ of $N_c$ points and its corresponding category label $y \in \mathcal{Y}$, where the number of object classes $|\mathcal{Y}| = K$, the goal is to learn a mapping function $\Phi_c: \mathcal{X}_c \rightarrow \mathcal{Y}$ that classifies $\mathcal{P}_c$ into one of the $K$ categories. Note that each object CAD instance, from which a point cloud is generated, can be either well aligned to the canonical pose as in \cite{qi2017pointnetplusplus,Wang2019DynamicGC} or arbitrarily posed at $[\bm{R}|\bm{t}]$ \wrt a category-level canonical pose as in \cite{chen2019clusternet,Esteves2018LearningSE}, where $\bm{R} \in SO(3)$ and $\bm{t} \in \mathbb{R}^3$. Plenty of recent methods propose deep models to classify point clouds of complete object surfaces \cite{qi2016pointnet,qi2017pointnetplusplus,Wang2019DynamicGC,Li2018SO}, but they focus on objects under canonical poses. Although scale and translation variations can be eliminated by normalizing the input point cloud to a unit sphere, these methods are by design rotation-dependent, and as such, arbitrarily rotated object point clouds would degrade their performance. This is certainly undesirable considering the fact that complete surface shapes of an object captured at different poses represent the same geometric and semantic pattern. This issue motivates recent proposals of rotation-agnostic methods \cite{chen2019clusternet,Esteves2018LearningSE,Yuan2018IterativeTN}, whose designs are generally based on the following three strategies. Technical details of these strategies are given in Sec. \ref{SecAlternativeSolution}.
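The normalization step mentioned above can be sketched as follows: centering and scaling remove translation and scale variation, but any rotation of the input passes through unchanged, which is precisely why rotation-dependent classifiers degrade. This is an illustrative NumPy sketch with names of our own choosing.

```python
import numpy as np

def normalize_to_unit_sphere(points):
    """Center a point cloud at the origin and scale it into the unit
    sphere; removes translation and scale, but not rotation."""
    centered = points - points.mean(axis=0)
    return centered / np.linalg.norm(centered, axis=1).max()

rng = np.random.default_rng(1)
P = 5.0 * rng.standard_normal((256, 3)) + np.array([10.0, -3.0, 2.0])
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R = Q if np.linalg.det(Q) > 0 else -Q      # random rotation in SO(3)

Pn = normalize_to_unit_sphere(P)
assert np.allclose(Pn.mean(axis=0), 0.0)
assert np.isclose(np.linalg.norm(Pn, axis=1).max(), 1.0)
# Rotation commutes with normalization, so the pose variation survives.
assert np.allclose(normalize_to_unit_sphere(P @ R.T), Pn @ R.T)
```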
Although work on rotated point cloud classification has considered alleviating the negative effects of pose changes, these methods are not directly tailored to the single-view partial setting considered in this work, which extends the recent interest in classifying point clouds from a \emph{complete} object surface \cite{Wu20153DSA} to a more practical scenario, \ie the point representation is incomplete due to self- and inter-object occlusions and unaligned (\ie under any rotation in the group $SO(3)$). In this sense, each training pair consists of a point cloud $\mathcal{P} = \{ \bm{p}_{i} \in \mathbb{R}^3 \}_{i=1}^{N}\in \mathcal{X}$ of $N$ points covering a \emph{partially visible} surface of an object arbitrarily posed at $[\bm{R}|\bm{t}]$, together with its class label $y \in \mathcal{Y}$. We study learning a prediction function $\Phi: \mathcal{X} \rightarrow \mathcal{Y}$ to categorize $\mathcal{P}$ into one of $K$ semantic classes. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{figures/problem.png} \caption{Illustration of ambiguous examples of inter-class similarity of partially observed shapes on the ModelNet40 \cite{Wu20153DSA}. Three pairs of shape primitives from different classes on the left are visually similar to each other, but they are distinguishable after pose transformation to a canonical space on the right. }\vspace{-0.5cm} \label{fig:problem} \end{figure} As noted in Sec. \ref{SecIntro}, the category of a surface shape is defined by a global, topological configuration of local shape primitives; more precisely, the category of a surface shape can be determined only when both the geometric patterns of its local shape primitives and their global configuration in the 3D space are specified.
In Figure \ref{fig:problem}, we illustrate a number of ambiguous example pairs; the shapes in each pair come from different object classes and can be visually similar under specific viewing angles, yet are easily distinguished from other poses (\eg the pre-defined category-level canonical poses). Moreover, it is difficult to assign a unique object category to an unknown partial surface, except in special cases in which surface parts unique to specific categories are observed. As such, our partial setting drastically increases the difficulty of recognizing geometric patterns compared with the complete setting considered in recent research \cite{Wu20153DSA}. As shown in Figure \ref{fig:intro}, when arbitrary rotation and partial observation are compounded, the classification accuracy of four recent benchmarking baselines decreases by at least 14.5\%. Meanwhile, existing rotation-agnostic methods perform robustly against rotation variations, and we are thus interested in applying these methods in the considered partial setting by investigating their efficacy in coping with pose variations (see Sec. \ref{SecExps}). The above analysis suggests that, given a partially observed object surface, its category is less ambiguous only when the location of the observed surface part on the whole surface is clear. This requirement translates into predicting the object pose from the partial surface observation. We thus argue for \emph{supervised pose regression} as an auxiliary task to benefit the subsequent task of classifying partial point sets. The specifics are given in Sec. \ref{SecOurSolutions}. \begin{figure*}[t] \centering \includegraphics[width=0.8\linewidth]{figures/pipeline.pdf} \caption{Pipeline of our partial point cloud classification with an auxiliary prediction of object pose in an alignment-classification manner.
}\vspace{-0.5cm} \label{fig:our_method} \end{figure*} \section{Limitations of Alternative Solutions} \label{SecAlternativeSolution} Rotation-agnostic methods achieve invariance to rotation using the following strategies. Denote $\Phi_{fea}: \mathbb{R}^{N\times 3} \rightarrow \mathbb{R}^d$ as the feature encoding module, which produces $d$-dimensional feature embedding $\Phi_{fea}(\mathcal{P})$ for any input $\mathcal{P}$, and $\Phi_{cls}: \mathbb{R}^d \rightarrow [0, 1]^K$ as the final classifier typically constructed by fully-connected layers. These methods implement point set classification as a cascaded function $\Phi = \Phi_{cls} \circ \Phi_{fea}$. We briefly discuss how they instantiate $\Phi_{fea}$. \vspace{0.1cm} \noindent\textbf{Learning Rotation-Invariant Feature Representations --} Rotation-invariant learning requires that $\Phi_{fea}(\bm{R}\mathcal{P}) = \Phi_{fea}(\mathcal{P})$ for any $\bm{R} \in SO(3)$. It can be easily shown that the geometric quantities of point-wise norm and angle between a pair of points are rotation-invariant, \ie $\|\bm{R}\bm{p}\|_2 = \|\bm{p}\|_2$ and $\langle\bm{R}\bm{p}, \bm{R}\bm{p}'\rangle = \langle \bm{p}, \bm{p}'\rangle$, $\forall \bm{p}, \bm{p}' \in \mathbb{R}^3$. Higher-order geometric quantities invariant to rotation can be obtained by constructing local neighborhoods around each $\bm{p} \in \mathcal{P}$, which altogether provide rotation-invariant input features for subsequent learning via graph networks \cite{qi2017pointnetplusplus,Wang2019DynamicGC}. Rotation-invariant feature learning methods \cite{chen2019clusternet,zhang-riconv-3dv19,You2020PointwiseRN,Zhao2019RotationIP} thus implement $\Phi_{fea}$ by learning point-wise features from these invariant inputs, followed by pooling in a hierarchy of local neighborhoods constructed by graph networks; as such, they produce $\Phi_{fea}(\mathcal{P}) \in \mathbb{R}^d$. 
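The rotation invariance of point-wise norms and pairwise inner products quoted above is easy to verify numerically. The following sketch (our illustration, not taken from any cited implementation) checks both identities against a random $SO(3)$ rotation:

```python
import numpy as np

def random_rotation(rng):
    # QR decomposition of a Gaussian matrix yields a random orthogonal
    # matrix; fix the sign so that det(R) = +1, i.e. R lies in SO(3).
    R, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(R) < 0:
        R[:, 0] *= -1
    return R

rng = np.random.default_rng(0)
R = random_rotation(rng)
p, p2 = rng.standard_normal(3), rng.standard_normal(3)

# ||Rp|| = ||p|| and <Rp, Rp'> = <p, p'> for any R in SO(3).
assert np.isclose(np.linalg.norm(R @ p), np.linalg.norm(p))
assert np.isclose(np.dot(R @ p, R @ p2), np.dot(p, p2))
```

These invariant scalars are exactly the raw inputs from which the rotation-invariant feature learning methods above build their point-wise descriptors.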
Pre-aligning $\mathcal{P}$ globally to a canonical pose via alignment of PCA axes may also be used to improve the invariance \cite{Zhao2019RotationIP}. However, using handcrafted geometric quantities as point-wise descriptors may not be optimal, and the repeatability of the reference axis or the local reference frame directly affects the invariance and robustness of the input descriptor \cite{2011On, Petrelli2012A}. \vspace{0.1cm}\noindent\textbf{Achieving Rotation Invariance via Learning Rotation-Equivariant Deep Features --} Rotation invariance can also be achieved by first learning rotation-equivariant deep features and then pooling the features over the rotation group $SO(3)$. Typical methods of this kind are spherical CNNs \cite{Cohen2018SphericalC,Esteves2018LearningSE}. Denote a rotation-equivariant layer as $\Psi: SO(3) \times \mathbb{R}^{d_{in}} \rightarrow SO(3) \times \mathbb{R}^{d_{out}}$; it processes an input signal $\bm{f}: SO(3) \rightarrow \mathbb{R}^{d_{in}}$ with $d_{out}$ layer filters, each of which is defined as $\bm{\psi}: SO(3) \rightarrow \mathbb{R}^{d_{in}}$ \footnote{Note that both $\bm{f}$ and $\bm{\psi}$, and the filtered responses are defined on the domain of the rotation group $SO(3)$; when the layer is the network input, $\bm{f}$ and $\bm{\psi}$ are defined on the domain of a sphere $S^2$.}. A rotation-equivariant $\Psi$ has the property $[\bm{\psi} \star [\bm{\mathcal{T}}_{\bm{R}} \bm{f}]](\bm{Q}) = [\bm{\mathcal{T}}_{\bm{R}}' [\bm{\psi} \star \bm{f}]](\bm{Q})$, where $\bm{Q} \in SO(3)$ and $\bm{\mathcal{T}}_{\bm{R}}$ (or $\bm{\mathcal{T}}_{\bm{R}}'$) denotes a rotation operator that rotates the feature function $\bm{f}$ as $[\bm{\mathcal{T}}_{\bm{R}}\bm{f}](\bm{Q}) = \bm{f}(\bm{R}^{-1}\bm{Q})$, and $\star$ denotes convolution on the rotation group; the spherical convolution defined in \cite{Cohen2018SphericalC,Esteves2018LearningSE} guarantees this property.
A rotation-invariant $\Phi_{fea}$ can thus be constructed by cascading multiple layers of $\Psi$, followed by pooling the obtained features over the domain of $SO(3)$. To implement $\Phi_{fea}(\mathcal{P})$, one may first cast each point $\bm{p} \in \mathcal{P}$ onto the unit sphere, with the accompanying point-wise geometry features (\eg the length $\|\bm{p}\|_2$), and then quantize the features to the closest grid of discrete sampling on the sphere. Due to numerical approximations and the use of nonlinearities between layers, the Spherical CNN encoder is not perfectly rotation-equivariant, which directly affects the rotation invariance of features obtained by global pooling \cite{spezialetti2019learning}. Alternative approaches \cite{Thomas2018TensorFN,Weiler20183DSC} directly learn rotation-equivariant point-wise filters in the Euclidean domain. \vspace{0.1cm}\noindent\textbf{Weakly Supervised Learning of Spatial Transformation --} Given an arbitrarily posed $\mathcal{P}$, a module $\Phi_{trans}$ parallel to $\Phi_{fea}$ is also considered in \cite{qi2016pointnet,Wang2019DynamicGC,Yuan2018IterativeTN} to predict a transformation/pose $\bm{T} = [\bm{R}|\bm{t}]$, which is then applied to $\mathcal{P}$ to reduce the pose variation. Learning of $\Phi_{trans}$ is enabled by the weak supervision of classification imposed on the network output of the classifier $\Phi_{cls}$, which propagates supervision signals back to $\Phi_{trans}$. The design can be further improved via iterative updating \cite{Yuan2018IterativeTN,Wang2019DenseFusion6O}, similar to point set alignment via iterative closest point \cite{Besl1992AMF}; it predicts a transformation update $\Delta\bm{T}$ per iteration, and the overall transformation at iteration $t$ is obtained as the composition $\bm{T}_t = \prod_{i=1}^t \Delta\bm{T}_i$.
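The iterative composition $\bm{T}_t = \prod_{i=1}^t \Delta\bm{T}_i$ can be sketched with $4\times4$ homogeneous matrices, where appending a new update amounts to a left matrix multiplication. This is an illustrative sketch of the composition rule only, not the implementation of \cite{Yuan2018IterativeTN}:

```python
import numpy as np

def to_homogeneous(R, t):
    # Pack rotation R (3x3) and translation t (3,) into a 4x4 matrix.
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def compose_updates(deltas):
    # T_t = dT_t @ ... @ dT_1: each update acts on the already
    # transformed point cloud, hence multiplies on the left.
    T = np.eye(4)
    for dT in deltas:
        T = dT @ T
    return T

# Example: translate by (1, 0, 0), then rotate 90 degrees about z.
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
T = compose_updates([to_homogeneous(np.eye(3), [1.0, 0.0, 0.0]),
                     to_homogeneous(Rz, [0.0, 0.0, 0.0])])
```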
However, due to the weakly supervised nature, the predicted $\bm{T}$ is not guaranteed to correspond to a transformation that aligns the input $\mathcal{P}$ to its canonical pose; consequently, classification results depend empirically on data and problem settings. \vspace{0.1cm}\noindent\textbf{Limitations with Single-View Partial Setting --} The aforementioned algorithms for rotation invariance can be applied to the partial point set classification task but suffer from the following limitations. On one hand, point sets sampled from partial object surfaces lack the configuration of the observed partial surface on the whole surface, and thus cannot effectively disambiguate similar partial surfaces across categories. On the other hand, rotation-invariant features learned from partial point sets can be less consistent: missing parts of object surfaces can lead to large feature variations, especially under different camera coordinate systems. Note that the mean coordinate of all points in the observed partial surface no longer represents the geometric center of the whole surface, as it does for a point cloud of the whole surface, which requires the network to account for the negative effect of this center deviation. Both disadvantages inhibit classification performance on partial point clouds and thus motivate our solution in the following section.
\begin{figure*}[t] \centering \includegraphics[width=0.85\linewidth]{figures/dataset.pdf} \caption{Illustration of the PartialModelNet40 and PartialScanNet10 datasets adapted from the ModelNet40 \cite{Wu20153DSA} and ScanNet \cite{Dai2017ScanNetR3}.}\vspace{-0.5cm} \label{fig:dataset} \end{figure*} \section{Classification of Partial Point Clouds via an Auxiliary Object Pose Estimation} \label{SecOurSolutions} In this section, we introduce a novel method for classifying partial point clouds via an auxiliary prediction of object pose in an alignment-classification manner (AlgCls for short), whose pipeline is illustrated in Figure \ref{fig:our_method}. Specifically, the method consists of two parts -- a 6D pose estimation based shape alignment module and a typical point set classifier. Instead of learning a direct mapping function $\Phi: \mathcal{X} \rightarrow \mathcal{Y}$ from a partial point cloud $\mathcal{P} \in \mathcal{X}$ to the label $y\in \mathcal{Y}$, the alignment module in our method first learns a regression mapping $\Phi_\text{pos}: \mathcal{X} \rightarrow \mathcal{R}$ from $\mathcal{P}$ to the object pose $[\bm{R}|\bm{t}] \in \mathcal{R}$, whose predictions $[\bm{\hat{R}}|\bm{\hat{t}}]$ are utilized to transform $\mathcal{P}$ into the aligned partial point cloud $\mathcal{P'}=\{ \bm{p}'_{i} \in \mathbb{R}^3 \}_{i=1}^{N}\in \mathcal{X'}$; the latter module learns a classification function $\Phi_\text{cls}: \mathcal{X}' \rightarrow \mathcal{Y}$. Similar to the cascade function of a deep model in Sec. \ref{SecAlternativeSolution}, the overall mapping can be written as $\Phi = \Phi_\text{cls} \circ \Phi_\text{pos}$. During testing, a new partial point cloud is first fed into the shape alignment module, whose output is then used as the input of the classification module for a final class prediction.
\vspace{0.1cm}\noindent\textbf{Object Pose Estimation Based Shape Alignment --} Existing algorithms for instance-level 6D pose estimation \cite{Xiang2018PoseCNNAC,Li2019DeepIMDI,Wang2019DenseFusion6O} typically have prior knowledge about the topological structure of object instances (\eg CAD models of the object instances are provided). In our setting, category-level object pose estimation is more difficult in view of shape variations. Encouraged by the recent success of Spherical CNNs \cite{Esteves2018LearningSE,Cohen2018SphericalC} in coping with rotation variations of point cloud representations, this paper introduces the spherical convolution operation on partial point clouds to estimate their object poses in 6 degrees of freedom; its rotation-equivariant features model $SO(3)$ rotations more readily than those of the popular multi-layer perceptrons (MLPs). Note that, since the input is not a watertight mesh but a sparse and partial point cloud, we cast $W\times H$ equiangular rays from the origin and divide the whole spherical space $S^2$ into $W\times H$ regions. For each spherical region, we take the distance of the farthest point from the origin as the input, generating a spherical signal as $f(\theta_j, \phi_k) = d_{jk}, 1\leqslant j\leqslant W, 1\leqslant k\leqslant H$, where $\theta_j$ and $\phi_k$ denote the longitude and latitude of region $(j, k)$. As shown in Figure \ref{fig:intro}, the Spherical CNN performs more robustly than the other methods in our partial setting, which also supports its choice as the feature encoder for pose regression. As shown in Figure \ref{fig:our_method}, we adopt four feature encoding blocks, where the first three consist of two spherical convolution layers followed by weighted average pooling \cite{Esteves2018LearningSE}, and the last block contains only two spherical convolution layers, as suggested by \cite{Esteves2018LearningSE}.
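The construction of the spherical signal described above can be sketched as follows; the binning conventions (angle origins and index order) are our assumptions for illustration and may differ from the exact implementation:

```python
import numpy as np

def spherical_signal(points, W=64, H=64):
    """Quantize an (N, 3) partial point cloud into a W x H equiangular
    grid, keeping the distance of the farthest point in each region."""
    d = np.linalg.norm(points, axis=1)                   # distance from origin
    theta = np.arctan2(points[:, 1], points[:, 0])       # longitude in [-pi, pi]
    phi = np.arccos(np.clip(points[:, 2] / np.maximum(d, 1e-9), -1.0, 1.0))
    # Map angles to grid indices j (longitude) and k (colatitude).
    j = np.clip(((theta + np.pi) / (2 * np.pi) * W).astype(int), 0, W - 1)
    k = np.clip((phi / np.pi * H).astype(int), 0, H - 1)
    f = np.zeros((W, H))
    np.maximum.at(f, (j, k), d)                          # farthest point per cell
    return f
```

Empty regions are left at zero; `np.maximum.at` performs the per-cell maximum even when several points fall into the same region.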
The feature map output of the final spherical convolution block is reshaped into a vector as the input of a fully-connected layer, whose output is fed into two 3-layer MLPs to predict the quaternion-based rotation $\hat{\bm{R}}$ and the translation $\hat{\bm{t}}$, respectively. Given the pose prediction $[\bm{\hat{R}}|\bm{\hat{t}}]$, the original partial point cloud $\mathcal{P} = \{ \bm{p}_{i} \}_{i=1}^{N}$ can be transformed into the aligned point set $\mathcal{P}' = \{ \bm{p}'_{i} \}_{i=1}^{N}$ via $ \bm{p}' ={\hat{\bm{R}}}\bm{p}+{\hat{\bm{t}}}$. \vspace{0.1cm}\noindent\textbf{Classification of Aligned Partial Point Clouds --} The aligned partial point cloud is first normalized into a unit ball, as in existing classifiers on complete point clouds, before being fed into the point classification module. It is worth mentioning that the classification module is not limited to a specific point cloud classifier, so any off-the-shelf classifier such as PointNet \cite{qi2016pointnet}, PointNet++ \cite{qi2017pointnetplusplus}, DGCNN \cite{Wang2019DynamicGC}, \etc can be adopted. In this way, the proposed alignment-classification structure can readily adapt any existing point-based network from complete point cloud classification to the single-view partial setting. \vspace{0.1cm}\noindent\textbf{Loss Functions --} In our scheme, we have two types of supervision signals -- the class label $y$ and the pose label $[\bm{R}|\bm{t}] \in \mathcal{R}$. For pose estimation, the loss between the ground truth and the pose predictions is based on the Euclidean norm: \begin{equation} L_{\text{pos}} = \|\bm{q} - \hat{\bm{q}}\| + \alpha \|\bm{t} - \hat{\bm{t}}\|\label{pose_loss} \end{equation} where $\alpha$ is a trade-off parameter between the two terms, and $\bm{q}$ and $\bm{\hat{q}}$ are the quaternions converted from $\bm{R}$ and $\bm{\hat{R}}$, respectively.
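A minimal sketch of the alignment step and the losses of Eqs. (\ref{pose_loss}) and (\ref{total_loss}), assuming quaternions in $(w,x,y,z)$ order and the trade-off values reported in the experiments ($\alpha = \lambda = 10$); this is an illustration, not the training code:

```python
import numpy as np

def quat_to_mat(q):
    # Unit quaternion (w, x, y, z) -> 3x3 rotation matrix.
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def align(points, q_hat, t_hat):
    # p' = R_hat p + t_hat, applied row-wise to an (N, 3) array.
    return points @ quat_to_mat(q_hat).T + np.asarray(t_hat)

def pose_loss(q, q_hat, t, t_hat, alpha=10.0):
    # L_pos = ||q - q_hat|| + alpha * ||t - t_hat||.
    return np.linalg.norm(q - q_hat) + alpha * np.linalg.norm(t - t_hat)

def total_loss(l_pos, l_cls, lam=10.0):
    # L_total = L_pos + lambda * L_cls.
    return l_pos + lam * l_cls
```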
For the classification loss $L_{\text{cls}}$, the typical cross entropy loss \cite{qi2016pointnet,Wang2019DynamicGC} is used. In general, the total loss can be written as: \begin{equation} L_{\text{total}} = L_{\text{pos}} + \lambda L_{\text{cls}}\label{total_loss} \end{equation} where $\lambda$ is a trade-off parameter between the pose and classification losses. \begin{table*}[thbb] \centering \caption{Comparative evaluation on classification accuracy (\%) with the PartialModelNet40 and the PartialScanNet10.} \resizebox{0.7\linewidth}{!} { \begin{tabular}{l|c|l|l|c|c} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Input (size)} & \multicolumn{3}{c|}{PartialModelNet40} & \multirow{2}{*}{PartialScanNet10} \\ \cline{3-5} & & \multicolumn{1}{c|}{1 view} & \multicolumn{1}{c|}{5 views} & 10 views & \\ \hline oracle \cite{qi2017pointnetplusplus} & pc $(1024\times 3)$ & 87.7 $\pm$ 0.1 & 89.9 $\pm$ 0.1 & 90.2 $\pm$ 0.2 & 91.5 $\pm$ 0.3 \\ \hline PointNet++ (baseline) \cite{qi2017pointnetplusplus} & pc $(1024\times 3)$ & 65.6 $\pm$ 0.6 & 67.6 $\pm$ 1.1 & 69.6 $\pm$ 1.0 & 67.7 $\pm$ 1.6 \\ RRI \cite{chen2019clusternet} & pc $(1024\times 3)$ & 71.6 $\pm$ 0.1 & 76.2 $\pm$ 0.4 & 76.7 $\pm$ 0.1 & 71.4 $\pm$ 0.2 \\ Spherical CNN \cite{Esteves2018LearningSE} & voxel $(1\times 64^2)$ & 71.3 $\pm$ 0.1 & 75.0 $\pm$ 0.4 & 75.1 $\pm$ 0.3 & 76.5 $\pm$ 0.1 \\ STN \cite{qi2016pointnet} & pc $(1024\times 3)$ & 66.2 $\pm$ 1.3 & 69.5 $\pm$ 1.2 & 71.5 $\pm$ 1.0 & 73.1 $\pm$ 0.5 \\ ITN \cite{Yuan2018IterativeTN} & pc $(1024\times 3)$ & 67.5 $\pm$ 0.6 & 70.1 $\pm$ 0.9 & 72.2 $\pm$ 0.4 & 66.9 $\pm$ 0.5 \\ AlgCls (ours) & pc $(1024\times 3)$ & \textbf{73.0 $\pm$ 0.2} & \textbf{77.7 $\pm$ 0.3} & \textbf{79.7 $\pm$ 0.2} & \textbf{82.1 $\pm$ 0.3} \\ \hline \end{tabular} }\label{tab:evaluation}\vspace{-0.5cm} \end{table*} \section{Single-View Partial Point Cloud Datasets}\label{SecData} Few works have explored generating and releasing benchmarks for partial point cloud classification, and the
available datasets \cite{Wu20153DSA,Uy2019RevisitingPC} lack object pose annotations and thus cannot provide the specific configuration of partial shapes. In light of this, we adapt the popular synthetic ModelNet40 \cite{Wu20153DSA} and realistic ScanNet \cite{Dai2017ScanNetR3} to the partial setting, with several examples illustrated in Figure \ref{fig:dataset}. \vspace{0.1cm}\noindent\textbf{PartialModelNet40 --} The ModelNet40 \cite{Wu20153DSA} contains 12,311 CAD models belonging to 40 semantic categories, split into 9,843 for training and 2,468 for testing. For the PartialModelNet40 adapted from the ModelNet40, we first generate a set of pose candidates, whose rotations are sampled on $SO(3)$ and whose translations are sampled as $t_x,t_y\sim\mathcal U(-2.0, 2.0), t_z\sim\mathcal U(2.0, 5.0)$ to simulate the operating range of a typical depth camera. We render 1, 5, or 10 depth images from the CAD model of each instance in the training set of the ModelNet40 \cite{Wu20153DSA}, with the corresponding pose candidates serving as pose supervision signals; the depth images are then converted into partial point clouds using the intrinsic parameters of a pinhole camera. Consequently, the three training splits contain 9,843, 49,215, and 98,430 training samples, respectively. The testing set is constructed by randomly sampling one view of each testing instance in the ModelNet40, yielding 2,468 testing samples. All objects are normalized to the same size to get rid of scale variations. \vspace{0.1cm}\noindent\textbf{PartialScanNet10 --} The ScanNet \cite{Dai2017ScanNetR3} contains 1,513 scanned and reconstructed real-world indoor scenes. Ten common semantic classes shared by the ModelNet40 and ScanNet are selected to segment partially visible instances from the scenes with bounding box annotations and semantic labels.
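The pose-candidate sampling for the PartialModelNet40 above can be sketched as follows; drawing a uniform random rotation via a normalized Gaussian quaternion is our choice for illustration, as the text does not specify the sampler:

```python
import numpy as np

def sample_pose(rng):
    # Uniform random rotation as a normalized Gaussian quaternion;
    # translation ranges follow the dataset construction in the text.
    q = rng.standard_normal(4)
    q /= np.linalg.norm(q)
    t = np.array([rng.uniform(-2.0, 2.0),   # t_x ~ U(-2, 2)
                  rng.uniform(-2.0, 2.0),   # t_y ~ U(-2, 2)
                  rng.uniform(2.0, 5.0)])   # t_z ~ U(2, 5)
    return q, t
```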
Point cloud segments are first aligned to the canonical space using the pose annotations provided by Scan2CAD \cite{Avetisyan2019Scan2CADLC}, and five viewpoints are then randomly sampled for each object instance to transform it into the camera coordinate system. Since the original ScanNet is generated by fusing a sequence of depth scans, object shapes segmented from the ScanNet scenes contain redundant surfaces observed from multiple views; hidden point removal \cite{Katz2007DirectVO} is therefore employed to filter out points invisible from a single perspective due to self-occlusion. In total, our PartialScanNet10 contains 9,840 training samples and 1,695 test samples of ten categories. The PartialScanNet10 is challenging in view of realistic sensor noise, nonuniform densities, various poses, and partiality due to self-occlusion and occlusions in the cluttered context. \section{Experiments}\label{SecExps} \vspace{0.1cm}\noindent\textbf{Comparative Methods --} For comparative evaluation, we choose a number of recent methods according to the three types of rotation-agnostic methods in Sec. \ref{SecAlternativeSolution}. For learning rotation-invariant feature representations, we choose the rigorously rotation-invariant (RRI) representation proposed in ClusterNet \cite{chen2019clusternet}, which achieves state-of-the-art performance with a solid theoretical guarantee. We use the Spherical CNN \cite{Esteves2018LearningSE} as a competing rotation-equivariant method, which takes the distance of the farthest point from the center in each equiangular sampling area of the bounding sphere as the spherical map feature. For a fair comparison, we select the one-branch Spherical CNN without the angular feature that relies on normal vectors.
For explicit spatial transformation, we take the spatial transformer network (STN) \cite{qi2016pointnet} and the iterative transformer network (ITN) \cite{Yuan2018IterativeTN} as baseline methods. The input point clouds for all methods are randomly sampled to 1024 points and normalized within the unit sphere, except for the Spherical CNN \cite{Esteves2018LearningSE}, which uses all observed points to generate the spherical signals. We adopt identical data preprocessing and augmentation (\ie random $SO(3)$ rotation, random shift, and per-point jitter) for all methods and report the mean and variance of classification accuracy over five repetitions of each experiment. \vspace{0.1cm}\noindent\textbf{Implementation Details --} We follow the default end-to-end training strategies suggested in the papers of the different classifiers and insert the additional shape alignment module between the input point clouds and the classifiers. The hyper-parameter $\alpha$ between rotation and translation errors in Eq. (\ref{pose_loss}) was empirically set to $10.0$, while $\lambda$ in Eq. (\ref{total_loss}) was selected from $\{1,5,10,20\}$. For symmetric objects, \eg generalized cylinders and cuboids, we use the method proposed in \cite{Pitteri2019OnOS} to map ambiguous rotations to a canonical one. \subsection{Comparative Evaluation} We compare our AlgCls method with alternative schemes using different rotation-invariance strategies on the PartialModelNet40 and PartialScanNet10. The results are shown in Table \ref{tab:evaluation}; all methods except the Spherical CNN adopt the PointNet++ as the classifier. We report the results of our method and other competitors on the three training splits mentioned in Sec. \ref{SecData}; the proposed method consistently performs best.
Both the STN and ITN perform only comparably to, or slightly better than, the baseline PointNet++, indicating that weak supervision on pose transformation can hardly tackle pose variation. In addition, their performance has a large variance, owing to feature inconsistency in the absence of a specific distribution of the partial shape on the object surface. Moreover, with increasing training samples (\eg from 5 views to 10 views), the performance improvement of the RRI and Spherical CNN tends to saturate with a limited margin, while our method gains a stable improvement. Our explanation for this performance gap is that the shape alignment module in our scheme learns to transform single-view partial point clouds from arbitrary poses to canonical ones; the aligned clouds thus have consistent geometries under a unique perspective, mitigating feature inconsistency as more training samples become available, whereas the RRI and Spherical CNN cannot in principle benefit from characterizing semantics with large geometric variations. Similar results are observed on the more challenging PartialScanNet10, where the proposed method gains an even larger improvement over the other rotation-agnostic methods as well as the baseline PointNet++, further demonstrating the effectiveness of supervised pose regression in alleviating the semantic ambiguity of single-view partial point clouds. In the top row, we also report the results of our AlgCls method using the ground truth pose to transform partial point clouds, which serves as the upper bound of our alignment-classification algorithm based on the PointNet++ and reveals a promising direction. \subsection{Ablation Study} \begin{figure}[t] \centering \includegraphics[width=0.996\linewidth]{figures/confusion_matrix.pdf} \caption{Confusion matrix of (a) PointNet, (b) PointNet++, (c) DGCNN, (d) PointNet based AlgCls, (e) PointNet++ based AlgCls, and (f) DGCNN based AlgCls on the PartialScanNet10.}
\label{fig:confusion_matrix} \end{figure} \begin{table} \centering \caption{Effects of different classifiers on accuracy (\%).} \resizebox{1.0\linewidth}{!} { \begin{tabular}{l|ccc|ccc} \hline Dataset & \multicolumn{3}{c|}{PartialModelNet40 (10 views)} & \multicolumn{3}{c}{PartialScanNet10} \\ \hline Classifier & PN & PN++ & DGCNN & PN & PN++ & DGCNN \\ \hline Oracle & 87.1 & 90.2 & 90.3 & 90.1 & 91.5 & 91.0 \\ \hline baseline & 55.4 & 69.6 & 67.6 & 61.9 & 67.7 & 65.5 \\ AlgCls (ours) & \textbf{68.2} & \textbf{79.7} & \textbf{79.2} & \textbf{78.0} & \textbf{82.1} & \textbf{79.9} \\ \hline \end{tabular}}\label{tab:classifier}\vspace{-0.5cm} \end{table} \vspace{0.1cm}\noindent\textbf{Effects of Different Classifiers --} In Table \ref{tab:classifier} and Figure \ref{fig:confusion_matrix}, we evaluate the effects of different off-the-shelf classifiers, \ie the PointNet (PN), PointNet++ (PN++), and DGCNN, as the final classification module $\Phi_\text{cls}$ in our AlgCls on both datasets. The proposed AlgCls method consistently outperforms its corresponding baseline, owing to effectively mitigating inter-class similarity, \eg among the desk, dresser, and nightstand classes in Figure \ref{fig:confusion_matrix}. Given the identical classifiers adopted for $\Phi_\text{cls}$ and otherwise identical settings, the performance gain can only be credited to the introduction of the shape alignment module. Moreover, among the three classification methods, the PointNet++ is consistently superior to the PointNet and DGCNN, and is thus adopted as the default classifier of our AlgCls.
\begin{table}[h] \centering \caption{Effects of different feature encoders in pose estimation in terms of the $10cm10^\circ$ metric on the PartialModelNet40 (10 views).} \resizebox{0.75\linewidth}{!} { \begin{tabular}{c|c|c|c} \toprule[1pt] Backbone & PointNet & PointNet++ & Spherical CNN \\ \hline Accuracy (\%) & 6.08 & 11.59 & \textbf{26.78} \\ \bottomrule[1pt] \end{tabular}}\label{tab:backbone}\vspace{-0.5cm} \end{table} \vspace{0.1cm}\noindent\textbf{Effects of Different Feature Encoding Backbones in Shape Alignment --} As mentioned in Sec. \ref{SecOurSolutions}, we select the Spherical CNN in favor of the rotation-equivariant properties of the spherical convolution in \cite{Esteves2018LearningSE}, whereas the PointNet architecture is adopted in the explicit spatial transformation based methods -- the STN and ITN. In Table \ref{tab:backbone}, we evaluate the performance of the shape alignment module in our AlgCls when using the PointNet, PointNet++, and Spherical CNN as the backbone. We report the average precision over object instances for which the error is less than $10\,cm$ for translation and $10^\circ$ for rotation, similar to \cite{Li2019DeepIMDI, Wang2019NormalizedOC}, and do not penalize rotation errors about the axis of symmetry, following \cite{Wang2019NormalizedOC}. The results show that the rotation-equivariant features of the Spherical CNN are more conducive to supervised pose estimation.
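A sketch of the $10cm10^\circ$ metric, assuming stacked rotation matrices and translations (in meters) and omitting the symmetry-axis handling of \cite{Wang2019NormalizedOC}:

```python
import numpy as np

def pose_accuracy(R_gt, t_gt, R_pred, t_pred, max_t=0.10, max_deg=10.0):
    # Fraction of instances whose translation error is below 10 cm
    # AND whose geodesic rotation error is below 10 degrees.
    t_err = np.linalg.norm(t_gt - t_pred, axis=-1)
    # trace(R_gt^T R_pred) per instance, then the geodesic angle
    # arccos((trace - 1) / 2), clipped for numerical safety.
    tr = np.einsum('nij,nij->n', R_gt, R_pred)
    ang = np.degrees(np.arccos(np.clip((tr - 1.0) / 2.0, -1.0, 1.0)))
    return float(np.mean((t_err < max_t) & (ang < max_deg)))
```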
\begin{table}[h] \centering \caption{Effects of different losses in pose estimation in terms of the $10cm10^\circ$ metric on the PartialModelNet40 (10 views).} \resizebox{0.65\linewidth}{!} { \begin{tabular}{c|c|c|c} \toprule[1pt] Loss & PMLoss & GeoLoss & RegLoss \\ \hline Accuracy (\%) & 6.36 & 13.82 & \textbf{26.78} \\ \bottomrule[1pt] \end{tabular}}\label{tab:loss}\vspace{-0.3cm} \end{table} \vspace{0.1cm}\noindent\textbf{Effects of Different Pose Losses --} Beyond the RegLoss, \ie the ordinary L2 norm between pose labels and predictions, we also consider the geodesic distance (GeoLoss) \cite{Hartley2012RotationA, Huynh2009MetricsF3} and the point matching loss (PMLoss) of \cite{Li2019DeepIMDI}. Table \ref{tab:loss} reports our investigation of using these losses to supervise pose estimation; the L2-norm-based RegLoss performs best. \begin{table}[h] \caption{Classification accuracy for different $\lambda$ in Eq. (\ref{total_loss}).} \centering \resizebox{0.55\linewidth}{!}{% \begin{tabular}{c|c|c|c|c} \toprule[1pt] $\lambda$ & 1.0 & 5.0 & 10.0 & 20.0\\ \hline Accuracy (\%) & 79.2 &79.4 &\textbf{79.7} &78.9 \\ \bottomrule[1pt] \end{tabular}}\label{tab:lambda}\vspace{-0.3cm} \end{table} \vspace{0.1cm}\noindent\textbf{Hyper-parameter $\lambda$ Tuning --} Table \ref{tab:lambda} shows the classification accuracy for different $\lambda$ on the PartialModelNet40 (with the 10 views split). Since the classification of partial point clouds depends on the quality of the shape alignment output yet can under-fit the classification task when $\lambda$ is small, a relatively large $\lambda$ is preferable. According to Table \ref{tab:lambda}, $\lambda=10.0$ is used in all experiments. \section{Conclusions} This paper introduces the novel task of single-view partial point cloud classification from a practical perspective.
To specify the distribution of the partial shape on the whole surface, we propose an alignment-classification algorithm that effectively copes with the additional semantic ambiguity of local primitives and readily adapts other point classifiers to the single-view partial setting. Experimental results on two new partial point cloud datasets show that the Spherical CNN is preferred for pose estimation based shape alignment and that the PointNet++ classifier consistently performs best. \section*{Acknowledgements} This work is supported in part by the National Natural Science Foundation of China (Grant No.: 61771201, 61902131), the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (Grant No.: 2017ZT07X183), the Fundamental Research Funds for the Central Universities (Grant No.: 2019MS022), and the Xinhua Scholar Program of the South China University of Technology (Grant No.: D6192110). {\small \bibliographystyle{ieee_fullname}
\section{Playing Field} Let $M$ be a connected 4-manifold without boundary. We will work with 2-columns $v:M\to\mathbb{C}^2$ of complex-valued half-densities (a half-density is a quantity which transforms as the square root of a density under changes of local coordinates). The inner product on such 2-columns is defined as $\langle v,w\rangle:=\int_M w^*v\,dx$, where $x=(x^1,x^2,x^3,x^4)$ are local coordinates, $dx=dx^1dx^2dx^3dx^4$ and the star stands for Hermitian conjugation. Let $L$ be a formally self-adjoint first order linear differential operator acting on 2-columns of complex-valued half-densities. Our initial objective will be to examine the geometric content of the operator $L$. In order to pursue this objective we first need to provide an invariant analytic description of the operator. In local coordinates our operator reads \begin{equation} \label{operator L in local coordinates} L=F^\alpha(x)\frac\partial{\partial x^\alpha}+G(x), \end{equation} where $F^\alpha(x)$, $\alpha=1,2,3,4$, and $G(x)$ are some $2\times 2$ matrix-functions. The principal and subprincipal symbols of the operator $L$ are defined as \begin{equation} \label{definition of the principal symbol} L_\mathrm{prin}(x,p):=iF^\alpha(x)\,p_\alpha\,, \end{equation} \begin{equation} \label{definition of the subprincipal symbol} L_\mathrm{sub}(x):=G(x) +\frac i2(L_\mathrm{prin})_{x^\alpha p_\alpha}(x)\,, \end{equation} where $p=(p_1,p_2,p_3,p_4)$ is the dual variable (momentum); see Ref.~\refcite{mybook}. The principal and subprincipal symbols are invariantly defined $2\times 2$ Hermitian matrix-functions on $T^*M$ and $M$ respectively which uniquely determine the operator $L$. Further on in this paper we assume that the principal symbol of our operator satisfies the following non-degeneracy condition: \begin{equation} \label{definition of non-degeneracy} L_\mathrm{prin}(x,p)\ne0,\qquad\forall(x,p)\in T^*M\setminus\{0\}. 
\end{equation} Condition \eqref{definition of non-degeneracy} means that the elements of the $2\times2$ matrix-function $L_\mathrm{prin}(x,p)$ do not vanish simultaneously for any $x\in M$ and any nonzero momentum $p$. \section{Lorentzian Metric and Orthonormal Frame} \label{Lorentzian Metric and Orthonormal Frame} Observe that the determinant of the principal symbol is a quadratic form in the dual variable (momentum) $p$\,: \begin{equation} \label{definition of metric} \det L_\mathrm{prin}(x,p)=-g^{\alpha\beta}(x)\,p_\alpha p_\beta\,. \end{equation} We interpret the real coefficients $g^{\alpha\beta}(x)=g^{\beta\alpha}(x)$, $\alpha,\beta=1,2,3,4$, appearing in formula \eqref{definition of metric} as components of a (contravariant) metric tensor. The following result was established in Ref.~\refcite{nongeometric}. \begin{lemma} \label{Lemma about Lorentzian metric} Our metric is Lorentzian, i.e.~it has three positive eigenvalues and one negative eigenvalue. \end{lemma} Furthermore, the principal symbol of our operator defines an orthonormal frame $e_j{}^\alpha(x)$. Here the Latin index $j=1,2,3,4$ enumerates the vector fields, the Greek index $\alpha=1,2,3,4$ enumerates the components of a given vector $e_j$ and orthonormality is understood in the Lorentzian sense: \begin{equation} \label{orthonormality of the frame} g_{\alpha\beta}\,e_j{}^\alpha e_k{}^\beta= \begin{cases} 0\quad&\text{if}\quad j\ne k, \\ 1\quad&\text{if}\quad j=k\ne4, \\ -1\quad&\text{if}\quad j=k=4. \end{cases} \end{equation} The orthonormal frame is recovered from the principal symbol as follows. 
Decomposing the principal symbol with respect to the standard basis \begin{equation*} \label{standard basis} s^1= \begin{pmatrix} 0&\,1\,\\ \,1\,&0 \end{pmatrix}, \quad s^2= \begin{pmatrix} 0&-i\\ i&0 \end{pmatrix}, \quad s^3= \begin{pmatrix} 1&0\\ 0&-1 \end{pmatrix}, \quad s^4= \begin{pmatrix} \,1\,&0\\ 0&\,1\, \end{pmatrix} \end{equation*} in the real vector space of $2\times2$ Hermitian matrices, we get $L_\mathrm{prin}(x,p)=s^j c_j(x,p)$. Each coefficient $c_j(x,p)$ is linear in momentum $p$, so $c_j(x,p)=e_j{}^\alpha(x)\,p_\alpha\,$. The existence of an orthonormal frame implies that our manifold $M$ is parallelizable. We see that our analytic non-degeneracy condition \eqref{definition of non-degeneracy} has far reaching geometric consequences. \section{Gauge Transformations and Covariant Subprincipal Symbol} Let us consider the action (variational functional) $\,\int_M v^*(Lv)\,dx\,$ associated with our operator. Take an arbitrary smooth matrix-function \begin{equation} \label{SL2C matrix-function R} R:M\to\mathrm{SL}(2,\mathbb{C}) \end{equation} and consider the following transformation of our 2-column of unknowns: \begin{equation} \label{SL2C transformation of the unknowns} v\mapsto Rv. \end{equation} We interpret \eqref{SL2C transformation of the unknowns} as a gauge transformation because we are looking here at a change of basis in our vector space of unknowns $v:M\to\mathbb{C}^2$. The transformation \eqref{SL2C transformation of the unknowns} of the 2-column $v$ induces the following transformation of the action: $\,\int_M v^*(Lv)\,dx \,\mapsto \int_M v^*(R^*LRv)\,dx\,$. This means that our $2\times2$ differential operator $L$ experiences the transformation \begin{equation} \label{SL2C transformation of the operator} L\mapsto R^*LR\,. \end{equation} This section is dedicated to the analysis of the transformation \eqref{SL2C transformation of the operator}. 
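As a first sanity check, the effect of \eqref{SL2C transformation of the operator} on the metric can be verified numerically. The following sketch is our own illustration (not taken from the references): it assembles $L_\mathrm{prin}$ for the flat frame $e_j{}^\alpha=\delta_j{}^\alpha$, verifies that $\det L_\mathrm{prin}=-g^{\alpha\beta}p_\alpha p_\beta$ with $g^{\alpha\beta}=\mathrm{diag}(1,1,1,-1)$, and confirms that conjugation by a matrix $R$ with $\det R=1$ leaves this determinant, and hence the metric \eqref{definition of metric}, unchanged, since $\det(R^*L_\mathrm{prin}R)=|\det R|^2\det L_\mathrm{prin}$.

```python
import numpy as np

# Standard basis s^1, s^2, s^3, s^4 of 2x2 Hermitian matrices (Pauli matrices
# plus the identity), as in the text.
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex),
     np.eye(2, dtype=complex)]

def principal_symbol(p, frame):
    """L_prin = sum_j s^j e_j^alpha p_alpha for a given frame e_j^alpha."""
    return sum(s[j] * np.dot(frame[j], p) for j in range(4))

rng = np.random.default_rng(0)
frame = np.eye(4)                        # flat frame e_j^alpha = delta_j^alpha
p = rng.standard_normal(4)
g_inv = np.diag([1.0, 1.0, 1.0, -1.0])   # contravariant Minkowski metric

L = principal_symbol(p, frame)
# det L_prin = -g^{ab} p_a p_b, so the metric has Lorentzian signature here
assert np.isclose(np.linalg.det(L).real, -(p @ g_inv @ p))

# A gauge transformation L_prin -> R* L_prin R with det R = 1 preserves
# the determinant, since det(R* L R) = |det R|^2 det L.
R = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
R = R / np.sqrt(np.linalg.det(R))        # normalise so that det R = 1
L_gauged = R.conj().T @ L @ R
assert np.isclose(np.linalg.det(L_gauged), np.linalg.det(L))
```

The same computation with $|\det R|\ne1$ rescales the determinant by $|\det R|^2$, which is why the restriction \eqref{SL2C matrix-function R} to determinant one is the natural one for preserving the metric.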
\begin{remark} We chose to restrict our analysis to matrix-functions $R(x)$ of determinant one, see formula \eqref{SL2C matrix-function R}, because we want to preserve our Lorentzian metric defined in accordance with formula \eqref{definition of metric}. \end{remark} \begin{remark} In non-relativistic theory one normally looks at the transformation \begin{equation} \label{SU2 transformation of the operator} L\mapsto R^{-1}LR \end{equation} rather than at \eqref{SL2C transformation of the operator}. The reason we chose to go along with \eqref{SL2C transformation of the operator} is that we are thinking in terms of actions and corresponding Euler--Lagrange equations rather than operators as such. We believe that this point of view makes more sense in the relativistic setting. If one were consistent in promoting such a point of view, then one would have had to deal with actions throughout the paper rather than with operators. We did not adopt this `consistent' approach because this would have made the paper difficult to read. Therefore, throughout the paper we use the concept of an operator, having in mind that we are really interested in the action and corresponding Euler--Lagrange equation. \end{remark} \begin{remark} The transformations \eqref{SL2C transformation of the operator} and \eqref{SU2 transformation of the operator} coincide if the matrix-function $R(x)$ is special unitary. Applying special unitary transformations is natural in the non-relativistic 3-dimensional setting when dealing with an elliptic system, see Ref.~\refcite{arxiv}, but in the relativistic 4-dimensional setting when dealing with a hyperbolic system special unitary transformations are too restrictive. 
\end{remark} The transformation \eqref{SL2C transformation of the operator} of the differential operator $L$ induces the following transformations of its principal \eqref{definition of the principal symbol} and subprincipal \eqref{definition of the subprincipal symbol} symbols: \begin{equation} \label{SL2C transformation of the principal symbol} L_\mathrm{prin}\mapsto R^*L_\mathrm{prin}R\,, \end{equation} \begin{equation} \label{SL2C transformation of the subprincipal symbol} L_\mathrm{sub}\mapsto R^*L_\mathrm{sub}R +\frac i2 \left( R^*_{x^\alpha}(L_\mathrm{prin})_{p_\alpha}R - R^*(L_\mathrm{prin})_{p_\alpha}R_{x^\alpha} \right). \end{equation} Comparing formulae \eqref{SL2C transformation of the principal symbol} and \eqref{SL2C transformation of the subprincipal symbol} we see that, unlike the principal symbol, the subprincipal symbol does not transform in a covariant fashion due to the appearance of terms with the gradient of the matrix-function $R(x)$. It turns out that one can overcome the non-covariance in \eqref{SL2C transformation of the subprincipal symbol} by introducing the \emph{covariant subprincipal symbol} $\,L_\mathrm{csub}(x)\,$ in accordance with formula \begin{equation} \label{definition of covariant subprincipal symbol} L_\mathrm{csub}:= L_\mathrm{sub} +\frac i{16}\, g_{\alpha\beta} \{ L_\mathrm{prin} , \operatorname{adj}L_\mathrm{prin} , L_\mathrm{prin} \}_{p_\alpha p_\beta}, \end{equation} where $ \{F,G,H\}:=F_{x^\alpha}GH_{p_\alpha}-F_{p_\alpha}GH_{x^\alpha} $ is the generalised Poisson bracket on matrix-functions and $\,\operatorname{adj}\,$ is the operator of matrix adjugation \begin{equation} \label{definition of adjugation} F=\begin{pmatrix}a&b\\ c&d\end{pmatrix} \mapsto \begin{pmatrix}d&-b\\-c&a\end{pmatrix} =:\operatorname{adj}F \end{equation} from elementary linear algebra. The following result was established in Ref.~\refcite{nongeometric}. 
\begin{lemma} \label{Lemma about covariant subprincipal symbol} The transformation \eqref{SL2C transformation of the operator} of the differential operator induces the transformation $\,L_\mathrm{csub}\mapsto R^*L_\mathrm{csub}R\,$ of its covariant subprincipal symbol. \end{lemma} Comparing formulae \eqref{definition of the subprincipal symbol} and \eqref{definition of covariant subprincipal symbol} we see that the standard subprincipal symbol and covariant subprincipal symbol have the same structure, only the covariant subprincipal symbol has a second correction term designed to `take care' of special linear transformations in the vector space of unknowns $v:M\to\mathbb{C}^2$. The standard subprincipal symbol \eqref{definition of the subprincipal symbol} is invariant under changes of local coordinates (its elements behave as scalars), whereas the covariant subprincipal symbol~\eqref{definition of covariant subprincipal symbol} retains this feature but gains an extra $\mathrm{SL}(2,\mathbb{C})$ covariance property. In other words, the covariant subprincipal symbol \eqref{definition of covariant subprincipal symbol} behaves `nicely' under a wider group of transformations. \section{Electromagnetic Covector Potential} \label{Electromagnetic Covector Potential} The covariant subprincipal symbol can be uniquely represented in the form \begin{equation} \label{decomposition of covariant subprincipal symbol} L_\mathrm{csub}(x)=L_\mathrm{prin}(x,A(x)), \end{equation} where $A=(A_1,A_2,A_3,A_4)$ is some real-valued covector field. We interpret this covector field as the electromagnetic covector potential. Lemma~\ref{Lemma about covariant subprincipal symbol} and formulae \eqref{SL2C transformation of the principal symbol} and \eqref{decomposition of covariant subprincipal symbol} tell us that the electromagnetic covector potential is invariant under gauge transformations \eqref{SL2C transformation of the operator}. 
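The representation \eqref{decomposition of covariant subprincipal symbol} can be inverted explicitly: decomposing the Hermitian matrix $L_\mathrm{csub}$ in the basis $s^j$ gives coefficients $c_j$ with $c_j=e_j{}^\alpha A_\alpha$, and $A$ follows by inverting the frame. A minimal numerical sketch of this recovery (our own illustration; the helper name \texttt{potential\_from\_csub} is ours):

```python
import numpy as np

# Pauli basis s^1..s^4 as in the text.
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex),
     np.eye(2, dtype=complex)]

def potential_from_csub(L_csub, frame):
    """Recover the covector A from L_csub = L_prin(x, A).

    Since tr(s^j s^k) = 2 delta_{jk}, the coefficients of the Hermitian
    matrix L_csub in the basis s^j are c_j = tr(L_csub s^j)/2, and then
    e_j^alpha A_alpha = c_j is a linear system for A.
    """
    c = np.array([np.trace(L_csub @ s[j]).real / 2 for j in range(4)])
    return np.linalg.solve(frame, c)

rng = np.random.default_rng(1)
frame = rng.standard_normal((4, 4))   # any invertible frame e_j^alpha
A_true = rng.standard_normal(4)       # electromagnetic covector potential
L_csub = sum(s[j] * np.dot(frame[j], A_true) for j in range(4))
assert np.allclose(potential_from_csub(L_csub, frame), A_true)
```

Uniqueness of $A$ corresponds to the invertibility of the frame matrix, which is guaranteed by the non-degeneracy condition \eqref{definition of non-degeneracy}.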
\section{Adjugate Operator} \begin{definition} The adjugate of a formally self-adjoint non-degenerate first order $2\times2$ linear differential operator $L$ is the formally self-adjoint non-degenerate first order $2\times2$ linear differential operator $\operatorname{Adj}L$ whose principal and covariant subprincipal symbols are matrix adjugates of those of the operator $L$. \end{definition} We denote matrix adjugation by $\,\operatorname{adj}\,$, see formula \eqref{definition of adjugation}, and operator adjugation by $\,\operatorname{Adj}\,$. Of course, the coefficients of the adjugate operator can be written down explicitly in local coordinates via the coefficients of the original operator \eqref{operator L in local coordinates}; see Ref.~\refcite{nongeometric} for details. Applying the analysis from Sections \ref{Lorentzian Metric and Orthonormal Frame}--\ref{Electromagnetic Covector Potential} to the differential operator $\operatorname{Adj}L$, it is easy to see that the metric and electromagnetic covector potential encoded within the operator $\operatorname{Adj}L$ are the same as in the original operator $L$. Thus, the metric and electromagnetic covector potential are invariant under operator adjugation. It is also easy to see that $\operatorname{Adj}\operatorname{Adj}L=L$, so operator adjugation is an involution. \section{Main Result} We define the Dirac operator as the differential operator \begin{equation} \label{analytic definition of the Dirac operator} D:= \begin{pmatrix} L&mI\\ mI&\operatorname{Adj}L \end{pmatrix} \end{equation} acting on 4-columns $\,\psi=\begin{pmatrix} \,v_1\,&v_2\,&w_1\,&w_2\, \end{pmatrix}^T\,$ of complex-valued half-densities. Here $m$ is the electron mass and $I$ is the $2\times2$ identity matrix.
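Since the mass term in \eqref{analytic definition of the Dirac operator} is of order zero, the principal symbol of $D$ is the block-diagonal matrix $\mathrm{diag}(L_\mathrm{prin},\operatorname{adj}L_\mathrm{prin})$, and because $\det(\operatorname{adj}F)=\det F$ for $2\times2$ matrices one gets $\det D_\mathrm{prin}=(\det L_\mathrm{prin})^2=(g^{\alpha\beta}p_\alpha p_\beta)^2$, the expected relativistic dispersion relation. A quick flat-frame numerical check (our own illustration):

```python
import numpy as np

# Pauli basis s^1..s^4 as in the text.
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex),
     np.eye(2, dtype=complex)]

def adj2(F):
    """Adjugate of a 2x2 matrix: swap diagonal entries, negate the others."""
    return np.array([[F[1, 1], -F[0, 1]], [-F[1, 0], F[0, 0]]])

rng = np.random.default_rng(2)
p = rng.standard_normal(4)
L_prin = sum(s[j] * p[j] for j in range(4))   # flat frame e_j^alpha = delta

# F adj(F) = (det F) I, the defining property of the adjugate
assert np.allclose(L_prin @ adj2(L_prin), np.linalg.det(L_prin) * np.eye(2))

# The mass term is of order zero, so the principal symbol of D is block
# diagonal; its determinant is the square of g^{ab} p_a p_b.
D_prin = np.block([[L_prin, np.zeros((2, 2))],
                   [np.zeros((2, 2)), adj2(L_prin)]])
g_pp = p[0]**2 + p[1]**2 + p[2]**2 - p[3]**2
assert np.isclose(np.linalg.det(D_prin).real, g_pp**2)
```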
The `traditional' Dirac operator $D_\mathrm{trad}$ is written down in Appendix A of Ref.~\refcite{nongeometric} and acts on bispinor fields $\,\psi_\mathrm{trad}=\begin{pmatrix} \,\xi^1\,&\xi^2\,&\eta_{\dot 1}\,&\eta_{\dot 2}\,\end{pmatrix}^T\,$. Here we assume, without loss of generality, that the orthonormal frame used in the construction of the operator $D_\mathrm{trad}$ is the one from Section~\ref{Lorentzian Metric and Orthonormal Frame}. Our main result is the following theorem established in Ref.~\refcite{nongeometric}. \begin{theorem} \label{main theorem} The two operators, our analytically defined Dirac operator \eqref{analytic definition of the Dirac operator} and geometrically defined Dirac operator $D_\mathrm{trad}\,$, are related by the formula \begin{equation} \label{main theorem formula} D = |\det g_{\kappa\lambda}|^{1/4} \, D_\mathrm{trad} \, |\det g_{\mu\nu}|^{-1/4}\,. \end{equation} \end{theorem} Consider now the two Dirac equations \begin{equation} \label{our Dirac equation} D\psi=0, \end{equation} \begin{equation} \label{traditional Dirac equation} D_\mathrm{trad}\psi_\mathrm{trad}=0. \end{equation} Formula \eqref{main theorem formula} implies that the solutions of equations \eqref{our Dirac equation} and \eqref{traditional Dirac equation} differ only by a prescribed scaling factor: $\,\psi=|\det g_{\mu\nu}|^{1/4}\,\psi_\mathrm{trad}\,$. This means that for all practical purposes equations \eqref{our Dirac equation} and \eqref{traditional Dirac equation} are equivalent. \section{Spin Structure} Let us consider all possible formally self-adjoint non-degenerate first order $2\times2$ linear differential operators $L$ corresponding, in the sense of formula \eqref{definition of metric}, to the prescribed Lorentzian metric. In this section our aim is to classify all such operators~$L$. 
Let us fix a reference operator ${\mathbf L}$ and let ${\mathbf e}_j$ be the corresponding orthonormal frame (see Section~\ref{Lorentzian Metric and Orthonormal Frame}). Let $L$ be another operator and let $e_j$ be the corresponding orthonormal frame. We define the following two real-valued scalar fields \[ \mathbf{c}(L):= -\,\frac1{4!}\,( {\mathbf e}_1 \wedge{\mathbf e}_2 \wedge{\mathbf e}_3 \wedge{\mathbf e}_4 )_{\kappa\lambda\mu\nu} \,(e_1\wedge e_2\wedge e_3\wedge e_4)^{\kappa\lambda\mu\nu}\,, \quad \mathbf{t}(L):= -\,{\mathbf e}_{4\alpha}\,e_4{}^\alpha\,. \] Observe that these scalar fields do not vanish; in fact, $\mathbf{c}(L)$ can take only two values, $+1$ or $-1$. This observation gives us a primary classification of operators $L$ into four classes determined by the signs of $\mathbf{c}(L)$ and $\mathbf{t}(L)$. The four classes correspond to the four connected components of the Lorentz group. Note that \begin{eqnarray*} &\mathbf{c}(-L)=\mathbf{c}(L), \qquad &\mathbf{t}(-L)=-\mathbf{t}(L), \\ &\ \ \ \ \ \,\mathbf{c}(\operatorname{Adj}L)=-\mathbf{c}(L), \qquad &\mathbf{t}(\operatorname{Adj}L)=\mathbf{t}(L), \end{eqnarray*} which means that by applying the transformations $L\mapsto-L$ and $L\mapsto\operatorname{Adj}L$ to a given operator $L$ one can reach all four classes of our primary classification. Further on we work with operators $L$ such that $\mathbf{c}(L)>0$ and $\mathbf{t}(L)>0$. We say that the operators $L$ and $\tilde L$ are equivalent if there exists a smooth matrix-function \eqref{SL2C matrix-function R} such that $\tilde L_\mathrm{prin}=R^*L_\mathrm{prin}R$. The equivalence classes of operators obtained this way are called \emph{spin structures}. The above 4-dimensional Lorentzian definition of spin structure is an extension of the 3-dimensional Riemannian definition from Ref.~\refcite{arxiv}. 
The difference is that we have now dropped the condition $\operatorname{tr}L_\mathrm{prin}(x,p)=0$, replaced the ellipticity condition by the weaker non-degeneracy condition \eqref{definition of non-degeneracy}, and extended our group of transformations from special unitary to special linear. One would hope that for a connected Lorentzian 4-manifold admitting a global orthonormal frame (see \eqref{orthonormality of the frame} for the definition of orthonormality) our analytic definition of spin structure would be equivalent to the traditional geometric one. Unfortunately, we do not currently have a rigorous proof of equivalence in the 4-dimensional Lorentzian setting. \section*{Acknowledgments} The author is grateful to Zhirayr Avetisyan and Nikolai Saveliev for helpful advice.
\section{Introduction} Let $q$ be a prime power and $\mathbb{F}_q$ be a finite field of $q$ elements. The problem of estimating the number of irreducible polynomials of degree $d$ over the finite field ${\mathbb F}_{q}$ with some prescribed coefficients has been widely studied; see surveys by S. D. Cohen such as \cite{Cohen1} and Section 3.5 in \cite{Handbook} and references therein for more details. Asymptotic results were obtained in the greatest generality by Cohen \cite{Cohen0}. As for exact formulae or expressions, Carlitz~\cite{Car52} and Kuz'min~\cite{Kuz90} gave the number of monic irreducible polynomials with the first coefficient prescribed and the first two coefficients prescribed, respectively; see~\cite{CMRSS03, RMS01} for a similar result over ${\mathbb F}_{2}$, and~\cite{MoiRan08, Ri14} for more general results. Yucas and Mullen~\cite{YucMul04} and Fitzgerald and Yucas~\cite{fityuc} considered the number of irreducible polynomials of degree $d$ over ${\mathbb F}_{2}$ with the first three coefficients prescribed. Over any finite field ${\mathbb F}_{q}$, Yucas~\cite{Yuc06} gave the number of irreducible polynomials with prescribed first or last coefficient. In \cite{YucMul04} Yucas and Mullen studied the number of irreducible polynomials over ${\mathbb F}_2$ with the first three coefficients prescribed, and they stated: ``It would be interesting to know whether the methods and techniques of \cite{Kuz90} could be extended and used to generalize both our formulas and those of [4] to formulas for arbitrary finite fields, and/or to the case over $\mathbb{F}_2$ where more than three coefficients are specified in advance."
Recently, Lal\'{i}n and Larocque \cite{LalLar16} used elementary combinatorial methods, together with the theory of quadratic forms over finite fields, to obtain the formula, originally due to Kuz'min \cite{Kuz90}, for the number of monic irreducible polynomials of degree $n$ over a finite field ${\mathbb F}_{q}$ with the first two coefficients prescribed. Also, an explicit expression for the number of irreducible polynomials over $\mathbb{F}_{2^r}$ with the first three coefficients prescribed to be zero was given by Ahmadi et al.~\cite{Ahmadi16}; the proofs involve counting points on certain algebraic curves over finite fields which are supersingular. More recently, Granger \cite{Gra19} carried out a systematic study of the problem with several prescribed leading coefficients. By transforming the problem of counting elements of $\mathbb{F}_{q^n}$ with prescribed traces into that of counting elements for which linear combinations of trace functions evaluate to $1$, he reduced the problem to counting points on Artin--Schreier curves of smaller genus and then computed the corresponding zeta functions using the Lauder--Wan algorithm \cite{LW}. In particular, he presented an efficient deterministic algorithm which outputs exact expressions, in terms of the degree $n$, for the number of monic degree-$n$ irreducible polynomials over ${\mathbb F}_{q}$ of characteristic $p$ for which the first $\ell < p$ coefficients are prescribed, provided that $n$ is coprime to $p$. In this paper we use the generating function approach initiated in \cite{GKW21} to study the problem with several prescribed leading and/or ending coefficients. We study the group of equivalence classes for these polynomials with prescribed coefficients and extend ideas from Hayes \cite{Hay65} and Kuz'min \cite{Kuz90}.
We also note that a similar idea was used by Fomenko \cite{Fom96} to study the $L$-functions for the number of irreducible polynomials over $\mathbb{F}_2$ with three prescribed coefficients, as well as the case of $\ell$ prescribed coefficients with $\ell < p$. Using primitive idempotent decomposition for finite abelian group algebras, we can obtain general expressions for the generating functions over group algebras. This provides us with a recipe for obtaining explicit formulae for the number of monic irreducible polynomials with prescribed leading coefficients, as well as prescribed ending coefficients. We demonstrate our method by computing these numbers for several concrete examples. Our method is also computationally simpler than that of Granger \cite{Gra19} in the case of prescribed leading coefficients only, and it produces simpler formulas in some cases. The rest of the paper is organized as follows. In Section~\ref{main} we describe our generating function method and derive our main results. In Section~\ref{TypeI} we apply our main theorem to obtain new compact expressions for some examples with prescribed leading and ending coefficients. In Section~\ref{TypeII} we apply our main theorem to obtain compact expressions for some examples with prescribed leading coefficients and compare them with previously known results. The conclusion is in Section~\ref{conclusion}. \section{Combinatorial framework for counting irreducible polynomials with prescribed coefficients} \label{main} In this section, we describe our general combinatorial framework for counting irreducible polynomials with prescribed coefficients, using generating functions with coefficients from a group algebra. This extends ideas of Kuz'min and Hayes \cite{Kuz90, Hay65} for polynomials with two prescribed leading coefficients. Fix positive integers $\ell$ and $t$.
Given a polynomial $f=x^m+f_1x^{m-1}+\cdots +f_{m-1}x+f_m$, we shall call $f_1,\ldots, f_{\ell}$ the first $\ell$ leading coefficients, and $f_m,\ldots,f_{m-t+1}$ the ending coefficients. When we read the leading coefficients from left to right, missing coefficients are interpreted as zero. Similarly, we read the ending coefficients from right to left, and interpret the missing coefficients as zero. Thus the leading and ending coefficients of $f$ are the same as those of \[ \sum_{j=0}^{\ell}f_jx^{\ell+t-j}+\sum_{j=0}^{t-1}f_{m-j}x^j, \] where $f_0:=1$, and $f_j:=0$ if $j<0$ or $j>m$. We shall treat the following two different types. Let ${\cal M}$ denote the set of monic polynomials over ${\mathbb F}_q$, ${\cal M}_d$ the subset consisting of those monic polynomials of degree $d$, and let $\deg(f)$ denote the degree of a polynomial $f$. \medskip \noindent Type~I. We wish to prescribe $\ell$ leading coefficients $a_1,\ldots,a_{\ell}$, and $t$ ending coefficients $b_0,\ldots,b_{t-1}$ with the constant term $b_0\ne 0$. Two monic polynomials $f,g\in {\cal M}$, with $f(0)g(0)\ne 0$, are said to be equivalent if they have the same $\ell$ leading and $t$ ending coefficients. Thus the polynomial $f=1$ is equivalent to $x^{\ell+t}+1$, and each equivalence class is represented by a unique monic polynomial of degree $\ell+t$. We recall that the reciprocal of $f$ is the polynomial $x^{\deg(f)} f(x^{-1})$, and we denote the reciprocals of $f$ and $g$ by $\tilde{f}$ and $\tilde{g}$ respectively. The type~I equivalence relation $f\sim g$ can be written as \begin{align} \tilde{f} \equiv \tilde{g} \pmod{x^{\ell+1}}, \quad\mbox{and}\quad f\equiv g \pmod{x^{t}}. \label{eq:II} \end{align} We emphasize that $\tilde{f}(0) = \tilde{g}(0)= 1$ and $f(0)g(0)\ne 0$. \medskip \noindent Type~II. We wish to prescribe $\ell$ leading coefficients. Two monic polynomials $f,g\in {\cal M}$ are said to be equivalent if they have the same $\ell$ leading coefficients.
We may write the type~II equivalence relation $f\sim g$ as \[ \tilde{f} \equiv \tilde{g} \pmod{x^{\ell+1}}. \] In this case the polynomial $f=1$ is equivalent to $x^{d}$ for any $d>0$, and each equivalence class is represented by a unique monic polynomial of degree $\ell$. It is not difficult to see that the multiplication $\langle f\rangle \langle g\rangle :=\langle fg\rangle$ is well defined (independent of the choice of representatives) because the leading and ending coefficients of $fg$ are determined by those of $f$ and $g$. We note that the set ${\cal E}$ of type~II equivalence classes is a group under the usual multiplication; see, e.g., \cite{Fom96}. \begin{prop}\label{group} For both types, ${\cal E}$ is an abelian group under the multiplication $\langle f\rangle \langle g\rangle =\langle fg\rangle$ with $\langle 1\rangle$ being the identity element. Moreover, the following holds. \begin{itemize} \item[(I)] For type~I, we have $|{\cal E}|=(q-1)q^{\ell+t-1}$. Also, for each $d\ge \ell+t$ and each ${\varepsilon}\in {\cal E}$, there are exactly $q^{d-\ell-t}$ polynomials in ${\cal M}_d$ which are equivalent to ${\varepsilon}$. \item[(II)] For type~II, we have $|{\cal E}|=q^{\ell}$. Also, for each $d\ge \ell$ and each ${\varepsilon}\in {\cal E}$, there are exactly $q^{d-\ell}$ polynomials in ${\cal M}_d$ which are equivalent to ${\varepsilon}$. \end{itemize} \end{prop} \noindent {\bf Proof } The multiplication is well defined because the product is independent of the choices of representatives in the equivalence classes. For type~I, we only need to show that for each monic polynomial $f$ of degree $\ell+t$, there exists a unique monic polynomial $g$ of degree $\ell+t$ such that $\langle fg\rangle=\langle 1\rangle$. Writing \begin{align*} f& = \sum_{j=0}^{\ell+t}f_jx^{\ell+t-j},\\ g&=\sum_{j=0}^{\ell+t}g_jx^{\ell+t-j}, \end{align*} where $f_0=g_0=1$ and $f_{\ell+t}g_{\ell+t}\ne 0$.
We have \begin{align}\label{eq:multiply} fg=x^{2\ell+2t}+\sum_{d=1}^{2\ell+2t-1}\sum_{j=0}^df_jg_{d-j}x^{2\ell+2t-d} +f_{\ell+t}g_{\ell+t}. \end{align} Thus $\langle fg\rangle=\langle 1\rangle$ iff \begin{align} \sum_{j=0}^df_jg_{d-j}&=0,~~1\le d\le \ell, \label{eq:start}\\ \sum_{j=0}^{d}f_{\ell+t-j}g_{\ell+t-d+j}&=0,~~1\le d\le t-1, \label{eq:end}\\ g_{\ell+t}&=1/f_{\ell+t}. \end{align} The above system uniquely determines the values of $g_1,\ldots,g_{\ell+t}$ via $g_{\ell+t}=1/f_{\ell+t}$ and the recursion \begin{align} g_d&=-\sum_{j=1}^d f_jg_{d-j},~~1\le d\le \ell,\\ g_{\ell+t-d}&=\frac{-1}{f_{\ell+t}}\sum_{j=1}^{d}f_{\ell+t-j}g_{\ell+t-d+j},~~1\le d\le t-1. \end{align} This completes the proof for type~I.\\ For type~II, we note that $\langle x^k\rangle=\langle x^{\ell}\rangle=\langle 1\rangle$ for all $k\ge 0$. Thus only equation~\eqref{eq:start} is used for obtaining the unique inverse. ~~\vrule height8pt width4pt depth0pt We shall use $0$ to denote the zero element of the group algebra $\mathbb{C}{\cal E}$ generated by the group ${\cal E}$ over the complex field $\mathbb{C}$. For the type~I equivalence, it is convenient to define $\langle f\rangle:= 0$ if $f(0)=0$. Define the following generating function \begin{align} F(z)=\sum_{f\in {\cal M}} \langle f\rangle z^{\deg(f)}. \end{align} We note that $F(z)$ is a formal power series with coefficients in the group algebra $\mathbb{C}{\cal E}$. Let ${\cal I}_d$ be the set of irreducible polynomials in ${\cal M}_d$ and for each ${\varepsilon} \in {\cal E}$ we define \[ I_d({\varepsilon})=\#\{f\in {\cal I}_d: \langle f\rangle={\varepsilon}\}. \] A monic polynomial factors uniquely into a multiset of monic irreducible polynomials.
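Before turning to the counting argument, the group ${\cal E}$ is easy to experiment with. The sketch below is our own illustration for type~II over a prime field $\mathbb{F}_p$: a class is stored as its $\ell$ leading coefficients, multiplication follows the convolution in \eqref{eq:multiply}, and the inverse uses the recursion from the proof of Proposition~\ref{group}.

```python
# Type II equivalence classes over F_p (p prime), stored as the list of
# leading coefficients (f_1, ..., f_l); the top coefficient f_0 = 1 is implicit.
p, l = 5, 4

def multiply(f, g):
    """Leading coefficients of the product class <f><g> = <fg>."""
    f, g = [1] + list(f), [1] + list(g)
    return [sum(f[j] * g[d - j] for j in range(d + 1)) % p
            for d in range(1, l + 1)]

def inverse(f):
    """Inverse class via the recursion g_d = -sum_{j=1}^d f_j g_{d-j}."""
    f = [1] + list(f)
    g = [1]
    for d in range(1, l + 1):
        g.append(-sum(f[j] * g[d - j] for j in range(1, d + 1)) % p)
    return g[1:]

f = [2, 0, 3, 1]
assert multiply(f, inverse(f)) == [0, 0, 0, 0]   # <f><f^{-1}> = <1>
```

For type~I one would additionally store the $t$ ending coefficients and apply the second recursion from the proof.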
The standard counting argument (see \cite{FlaSed09}, for example) leads to \begin{align*} F(z)=\prod_{d\ge 1}\prod_{f\in {\cal I}_d}\left(\langle 1\rangle-z^d\langle f\rangle\right)^{-1} =\prod_{d\ge 1}\prod_{{\varepsilon}\in {\cal E}}\left(\langle 1\rangle-z^d{\varepsilon}\right)^{-I_d({\varepsilon})}. \end{align*} Consequently \begin{align} \ln F(z) &=\sum_{d\ge 1}\sum_{f\in {\cal I}_d}\sum_{k\ge 1}\frac{1}{k}z^{kd}\langle f\rangle^k \label{eq:Irre} \\ &=\sum_{d\ge 1}\sum_{{\varepsilon}\in {\cal E}}I_d({\varepsilon})\ln\left(\langle 1\rangle- z^d{\varepsilon}\right)^{-1}\nonumber\\ &=\sum_{d\ge 1}\sum_{k\ge 1}\sum_{{\varepsilon}\in {\cal E}}\frac{1}{k}I_d({\varepsilon})z^{kd}{\varepsilon}^k\nonumber\\ &=\sum_{m\ge 1}\sum_{k|m}\sum_{{\varepsilon}\in {\cal E}}\frac{1}{k}I_{m/k}({\varepsilon})z^m{\varepsilon}^k. \label{eq:Ifactor} \end{align} For each ${\varepsilon}\in {\cal E}$, define \begin{align}\label{eq:N1} N_d({\varepsilon})&=d\left[z^d{\varepsilon}\right]\ln F(z). \end{align} \begin{prop} \label{prop:IN} \begin{align} I_d({\varepsilon})=\frac{1}{d}\sum_{k|d}\mu(k)\sum_{{\varepsilon}_1\in {\cal E}} N_{d/k}({\varepsilon}_1)\left\llbracket {\varepsilon}_1^k={\varepsilon} \right\rrbracket. \end{align} \end{prop} \noindent {\bf Proof } Extracting the coefficient of $z^m{\varepsilon}'$ on both sides of (\ref{eq:Ifactor}), we obtain \begin{align} \label{NofI} N_m({\varepsilon}')&=m\sum_{k|m}\sum_{{\varepsilon}\in {\cal E}}\frac{1}{k}I_{m/k}({\varepsilon})\left\llbracket {\varepsilon}^k={\varepsilon}' \right\rrbracket.
\end{align} Thus \begin{align*} & \frac{1}{d}\sum_{k|d}\mu(k)\sum_{{\varepsilon}_1} N_{d/k}({\varepsilon}_1)\left\llbracket {\varepsilon}_1^k={\varepsilon} \right\rrbracket\\ &=\frac{1}{d}\sum_{k|d}\mu(k)\sum_{{\varepsilon}_1}\frac{d}{k}\sum_{j|(d/k)}\sum_{{\varepsilon}_2}\frac{1}{j}I_{d/kj}({\varepsilon}_2)\left\llbracket {\varepsilon}_2^j={\varepsilon}_1 \right\rrbracket\left\llbracket {\varepsilon}_1^k={\varepsilon} \right\rrbracket\\ &=\sum_{t|d}\left(\sum_{k|t}\mu(k)\right)\frac{1}{t}\sum_{{\varepsilon}_2}I_{d/t}({\varepsilon}_2)\left\llbracket {\varepsilon}_2^t={\varepsilon} \right\rrbracket ~~\hbox{(set $t=jk$)}\\ &=\sum_{t|d}\llbracket t=1\rrbracket \frac{1}{t}\sum_{{\varepsilon}_2}I_{d/t}({\varepsilon}_2)\left\llbracket {\varepsilon}_2^t={\varepsilon} \right\rrbracket\\ &=I_d({\varepsilon}). ~~\vrule height8pt width4pt depth0pt \end{align*} The following result from \cite[Proposition~3.1]{EM08} will be useful. More information on primitive idempotent decomposition can be found in \cite{Jespers}. \begin{prop}\label{prop:cyclic} Let $\xi$ be a generator of the cyclic group $C_r$, and ${\omega}_r=\exp(2\pi i/r)$. For $1\le s\le r$, define \begin{align}\label{eq:JofXi} J_{r,s}=\frac{1}{r}\sum_{j=1}^r{\omega}_r^{-sj}\xi^j. \end{align} Then $\{J_{r,1},\ldots,J_{r,r}\}$ form an orthogonal basis of $\mathbb{C}C_r$, and $J_{r,s}^2=J_{r,s}$ for each $1\le s\le r$. We also have \begin{align}\label{eq:XiofJ} \xi^s=\sum_{j=1}^r{\omega}_r^{js}J_{r,j}, ~1 \leq s \leq r. \end{align} \end{prop} \noindent {\bf Proof } Define the $r\times r$ symmetric matrix $M_r$ whose $(s,j)$th entry is given by $M_r(s,j)=\frac{1}{r}{\omega}_r^{-sj}$. It is easy to check that $ M_rM_r^*=\frac{1}{r}I_r $, where $I_r$ denotes the identity matrix of order $r$. Thus \eqref{eq:JofXi} is equivalent to \begin{align} [J_{r,1},J_{r,2},\ldots, J_{r,r}]&=[\xi,\xi^2,\ldots,\xi^r]M_r. 
\end{align} Multiplying by $rM_r^{*}$ on both sides, we obtain \[ [\xi,\xi^2,\ldots,\xi^r]=[J_{r,1},J_{r,2},\ldots, J_{r,r}](rM_r^*), \] and \eqref{eq:XiofJ} follows. ~~\vrule height8pt width4pt depth0pt Define \begin{align}\label{eq:E} E=\frac{1}{|{\cal E}|}\sum_{{\varepsilon}\in {\cal E}}{\varepsilon}. \end{align} It is easy to verify that \begin{align} E{\varepsilon}&=E, \hbox{ for each }{\varepsilon}\in {\cal E}, \\ E^2&=E. \end{align} It is well known that a finite abelian group is isomorphic to a direct product of cyclic groups. Thus we may write \[ {\cal E}\cong C_{r_1}\times C_{r_2}\times \cdots \times C_{r_f}, \] where $C_{r_i}$ is the cyclic group of order $r_i$ and $r_1r_2\cdots r_f=|{\cal E}|$. We let $\xi_i$ be a generator of $C_{r_i}$ for $1\leq i \leq f$, and denote the primitive $r_i$-th root of unity by $${\omega}_{r_i}=\exp(2\pi i/r_i).$$ Let $[n]$ denote the set $\{1,2,\ldots, n\}$. For convenience, we denote \[ {\cal R}:= [r_1]\times [r_2] \times \cdots \times [r_f], \] and \[ {\cal J}:=\{\vec{j}\in {\cal R}:\vec{j}\ne \vec{r}\}, \] where $\vec{j} = (j_1, j_2, \ldots, j_f)$ and $\vec{r} = (r_1, r_2, \ldots, r_f)$. Denote \begin{align} B_{\vec{s}}:=J_{r_1,s_1}\times J_{r_2,s_2} \times \cdots \times J_{r_f,s_f} \label{eq:Bj} \end{align} for each $\vec{s} = (s_1, s_2, \ldots, s_f) \in {\cal R}$. It follows from Proposition~\ref{prop:cyclic} that the set $ \{B_{\vec{s}}:\vec{s}\in {\cal R}\} $ forms an orthogonal basis of $\mathbb{C}{\cal E}$ and $B_{\vec{s}}^2=B_{\vec{s}}$ for all $\vec{s}\in {\cal R}$. In particular, \[ B_{\vec{r}}=E. \] Next we consider some subsets of the group ${\cal E}$ refined by the parameter $d$. For type~I, let $${\cal E}_d = \{ \langle f \rangle : f \in {\cal M}_d, f(0) \neq 0 \}$$ with $1 \leq d \leq \ell+t-1$.
For type~II, we let $${\cal E}_d = \{ \langle f \rangle : f \in {\cal M}_d\}$$ with $1 \leq d \leq \ell-1$. We note that, counted as a multiset, $\{ \langle f \rangle : f \in {\cal M}_d, f(0)\neq 0\}$ consists of $q^{d-\ell-t}$ copies of ${\cal E}$ for $d \geq \ell+t$ in type~I, and $\{ \langle f \rangle : f \in {\cal M}_d\}$ consists of $q^{d-\ell}$ copies of ${\cal E}$ for $d \geq \ell$ in type~II, by Proposition~\ref{group}. For both types, we define \begin{align} c_{d,\vec{j}} &= \sum_{{\varepsilon} \in {\cal E}_d }\left\llbracket {\varepsilon}= \prod_{i=1}^f \xi_i^{v_i}\right\rrbracket \prod_{i=1}^f {\omega}_{r_i}^{v_i j_i},\label{eq:cdB} \end{align} where, for each ${\varepsilon}$, $(v_1,\ldots,v_f)\in{\cal R}$ denotes the exponent vector with ${\varepsilon}=\prod_{i=1}^f \xi_i^{v_i}$. The following result expresses $\sum_{{\varepsilon}\in {\cal E}_d}{\varepsilon}$ in terms of the above orthogonal basis. \begin{prop} \label{prop:basis} Let $c_{d,\vec{j}}$ be defined in \eqref{eq:cdB}. Then we have \begin{align} \sum_{{\varepsilon} \in {\cal E}_d} {\varepsilon} &=(q-1)q^{d-1}E+\sum_{\vec{j}\in {\cal J}}c_{d,\vec{j}}B_{\vec{j}} ~\hbox{ for type~I and $1\le d\le \ell+t-1$}, \label{I:AofB}\\ \sum_{{\varepsilon} \in {\cal E}_d} {\varepsilon} &=q^{d}E+\sum_{\vec{j}\in {\cal J}}c_{d,\vec{j}}B_{\vec{j}} ~\hbox{ for type~II and $1\le d\le \ell-1$}. \label{II:AofB} \end{align} \end{prop} \noindent {\bf Proof } Since $\{E\}\cup \{B_{\vec{j}} : \vec{j}\in {\cal J}\}$ is a basis, we may write \begin{align}\label{eq:LinExp} \sum_{{\varepsilon} \in {\cal E}_d} {\varepsilon}&=aE+\sum_{\vec{j}\in {\cal J}}a_{\vec{j}}B_{\vec{j}} \end{align} for some complex numbers $a$ and $a_{\vec{j}}$. Since the basis is orthogonal and the basis elements are idempotent, we have \begin{align} a&=[E]\left(\sum_{{\varepsilon} \in {\cal E}_d} {\varepsilon}\right) =|{\cal E}_d|, \end{align} which is equal to $(q-1)q^{d-1}$ for type~I, and $q^{d}$ for type~II.
Similarly, we have \begin{align} a_{\vec{j}}&=[B_{\vec{j}}]\left(\sum_{{\varepsilon} \in {\cal E}_d} {\varepsilon} \right)\nonumber\\ &=[B_{\vec{j}}]\left(\sum_{{\varepsilon} \in {\cal E}_d }\left\llbracket {\varepsilon}= \prod_{i=1}^f \xi_i^{v_i}\right\rrbracket \prod_{i=1}^f \xi_i^{v_i} \right)\nonumber\\ &=[B_{\vec{j}}]\left(\sum_{{\varepsilon} \in {\cal E}_d }\left\llbracket {\varepsilon}= \prod_{i=1}^f \xi_i^{v_i}\right\rrbracket \prod_{i=1}^f \sum_{u_i=1}^{r_i}{\omega}_{r_i}^{u_iv_i}J_{r_i,u_i} \right)\nonumber\\ &=\sum_{{\varepsilon} \in {\cal E}_d }\left\llbracket {\varepsilon}= \prod_{i=1}^f \xi_i^{v_i}\right\rrbracket \prod_{i=1}^f {\omega}_{r_i}^{j_iv_i},\nonumber \end{align} which is $c_{d,\vec{j}}$ defined in \eqref{eq:cdB}. ~~\vrule height8pt width4pt depth0pt Now we are ready to prove our main result. \begin{thm} \label{thm:main} Let $c_{d,\vec{j}}$ be defined in \eqref{eq:cdB}, and let $\tau=\ell+t-1$ for type~I and $\tau=\ell-1$ for type~II. Define \begin{align} P_{\vec{j}}(z)&=1+\sum_{k=1}^{\tau}c_{k,\vec{j}}z^k. \label{eq:PB} \end{align} With ${\varepsilon}=\xi_1^{t_1}\cdots \xi_f^{t_f}$, the following hold. \begin{itemize} \item[(I)] For type~I, we have \begin{align} \ln F&=E\ln \frac{1-z}{1-qz}+\sum_{\vec{j}\in {\cal J}}B_{\vec{j}}\,\ln P_{\vec{j}}(z), \label{eq:LN1}\\ [{\varepsilon}]\ln F&=\frac{1}{q-1}q^{1-\ell-t}\left( \ln \frac{1-z}{1-qz}+\sum_{\vec{j}\in {\cal J}}\prod_{i=1}^f\omega_{r_i}^{-j_it_i}\ln P_{\vec{j}}(z)\right). \label{eq:N1} \end{align} \item[(II)] For type~II, we have \begin{align} \ln F&=E\ln \frac{1}{1-qz}+\sum_{\vec{j}\in {\cal J}}B_{\vec{j}}\,\ln P_{\vec{j}}(z), \label{eq:LN2}\\ [{\varepsilon}]\ln F&=q^{-\ell}\left( \ln \frac{1}{1-qz}+\sum_{\vec{j}\in {\cal J}}\prod_{i=1}^f\omega_{r_i}^{-j_it_i}\ln P_{\vec{j}}(z)\right).
\label{eq:N2} \end{align} \end{itemize} \end{thm} \noindent {\bf Proof } For type~I, by Proposition~\ref{prop:basis}, \eqref{eq:E} and definition of ${\cal E}_d$, we have \begin{align*} \sum_{d\ge 1}z^d\sum_{{\varepsilon}\in {\cal E}_d}{\varepsilon} &=\sum_{d=1}^{\ell+t-1}\left((q-1)q^{d-1}E+\sum_{\vec{j}\in {\cal J}}c_{d,\vec{j}}B_{\vec{j}}\right)z^d+\sum_{d\ge \ell+t}(q-1)q^{d-1}z^dE \\ &=E\sum_{d\ge 1}(q-1)q^{d-1}z^d+\sum_{\vec{j}\in {\cal J}}B_{\vec{j}}\sum_{d=1}^{\ell+t-1}c_{d,\vec{j}}z^d \\ &=E\frac{(q-1)z}{1-qz}+\sum_{\vec{j}\in {\cal J}}B_{\vec{j}}\sum_{d=1}^{\ell+t-1}c_{d,\vec{j}}z^d. \end{align*} Hence \begin{align*} \ln F&=\sum_{m\ge 1}\frac{(-1)^{m-1}}{m}\left(\sum_{d\ge 1}z^d\sum_{{\varepsilon}\in {\cal E}_d}{\varepsilon} \right)^m \\ &=\sum_{m\ge 1}\frac{(-1)^{m-1}}{m}\left(E\left(\frac{(q-1)z}{1-qz}\right)^m + \sum_{\vec{j}\in {\cal J}}B_{\vec{j}}\left(\sum_{d=1}^{\ell+t-1}c_{d,\vec{j}}z^d\right)^m\right)\\ &=E\ln\left(1+ \frac{(q-1)z}{1-qz}\right)+\sum_{\vec{j}\in {\cal J}}B_{\vec{j}}\ln \left(1+\sum_{d=1}^{\ell+t-1}c_{d,\vec{j}}z^d\right), \end{align*} which gives \eqref{eq:LN1}. Using Proposition~\ref{prop:cyclic} and extracting the coefficient of ${\varepsilon}$ from \eqref{eq:LN1}, we obtain \eqref{eq:N1}. Similarly for type~II, we have \begin{align*} \sum_{d\ge 1}z^d\sum_{{\varepsilon}\in {\cal E}_d}{\varepsilon} &=\sum_{d=1}^{\ell-1}\left(q^dE+\sum_{\vec{j}\in {\cal J}}c_{d,\vec{j}}B_{\vec{j}}\right)z^d+\sum_{d\ge \ell}q^{d}z^dE \\ &=E\sum_{d\ge 1}q^{d}z^d+\sum_{\vec{j}\in {\cal J}}B_{\vec{j}}\sum_{d=1}^{\ell-1}c_{d,\vec{j}}z^d\\ &=E\frac{qz}{1-qz}+\sum_{\vec{j}\in {\cal J}}B_{\vec{j}}\sum_{d=1}^{\ell-1}c_{d,\vec{j}}z^d. 
\end{align*} Therefore \begin{align*} \ln F&=\sum_{m\ge 1}\frac{(-1)^{m-1}}{m}\left(\sum_{d\ge 1}z^d\sum_{{\varepsilon}\in {\cal E}_d}{\varepsilon} \right)^m \\ &=\sum_{m\ge 1}\frac{(-1)^{m-1}}{m}\left(E\left(\frac{qz}{1-qz} \right)^m + \sum_{\vec{j}\in {\cal J}}B_{\vec{j}}\left(\sum_{d=1}^{\ell-1}c_{d,\vec{j}}z^d\right)^m\right)\\ &=E\ln\left(1+ \frac{qz}{1-qz}\right)+\sum_{\vec{j}\in {\cal J}}B_{\vec{j}}\ln \left(1+\sum_{d=1}^{\ell-1}c_{d,\vec{j}}z^d\right), \end{align*} which gives \eqref{eq:LN2}. Equation~\eqref{eq:N2} follows in the same way as for type~I. ~~\vrule height8pt width4pt depth0pt \medskip For the rest of the paper, we shall also use $N_n(\vec{t})$ to denote $N_n({\varepsilon})$ etc.\ when ${\varepsilon}=\xi_1^{t_1}\cdots\xi_f^{t_f}$ and $\vec{t} = (t_1, t_2, \ldots, t_f)$. The following corollary is immediate. \begin{cor}\label{cor:trivial} Let $P(z)=\prod_{\vec{j}\in {\cal J}}P_{\vec{j}}(z)$, and let ${\cal S}$ be the multiset of all complex roots of $P(z)$. Then \begin{itemize} \item[Type~I:] \begin{align} N_n(\vec{0})&=\frac{1}{q-1}q^{1-\ell-t}\left(q^n-1\right) +\frac{1}{q-1}q^{1-\ell-t}n[z^n]\ln P(z) \\ &=\frac{1}{q-1}q^{1-\ell-t}\left(q^n-1\right)-\frac{1}{q-1}q^{1-\ell-t} \sum_{\rho\in {\cal S}}\rho^{-n}. \end{align} \item[Type~II:] \begin{align} N_n(\vec{0})&=q^{n-\ell} +q^{-\ell}n[z^n]\ln P(z) \\ &=q^{n-\ell}-q^{-\ell} \sum_{\rho\in {\cal S}}\rho^{-n}. \end{align} \end{itemize} \end{cor} \medskip For a polynomial $g(z)\in \mathbb{C}[z]$, we use ${\bar g}(z)$ to denote the polynomial obtained from $g(z)$ by replacing each coefficient with its complex conjugate. From \eqref{eq:cdB} it is clear that \begin{align}\label{eq:conjugate} P_{j_1,\ldots,j_f}(z)={\bar P}_{r_1-j_1,\ldots,r_f-j_f}(z). \end{align} This equation will be used in the next two sections. Recall that the characteristic polynomial of ${\alpha}\in\mathbb{F}_{q^n}$ over ${\mathbb F}_q$ is defined as \begin{align*} Q_{{\alpha}}(x)=\prod_{j=0}^{n-1}(x-{\alpha}^{q^j}).
\end{align*} For ${\varepsilon}\in {\cal E}$, define \begin{align} \label{eq:variety} F_q(n,{\varepsilon})=\#\{{\alpha}\in\mathbb{F}_{q^n}:\langle Q_{{\alpha}}\rangle={\varepsilon}\}. \end{align} An application of the multinomial theorem and a generalized M\"{o}bius inversion-type argument gives the enumeration of irreducible polynomials with prescribed coefficients; this follows the approach of Miers and Ruskey \cite{Miers}. A method used by Hayes \cite{Hay65}, Hsu \cite{Hsu}, Voloch \cite{Voloch}, and others relates the enumeration of irreducible polynomials of degree $n$ with prescribed coefficients (equivalently, formulae for $F_q(n,{\varepsilon})$) to the number of points over $\mathbb{F}_{q^n}$ of certain curves defined over $\mathbb{F}_q$ whose function fields are subfields of the so-called cyclotomic function fields. Granger \cite{Gra19} studied $F_q(n,{\varepsilon})$ for the type~II equivalence classes in detail and used it to count irreducible polynomials with prescribed leading coefficients. By transforming the problem of counting the elements of $\mathbb{F}_{q^n}$ with prescribed traces into that of counting the elements for which linear combinations of the trace functions evaluate to $1$, he reduced the varieties in \eqref{eq:variety} to Artin-Schreier curves of smaller genus and then computed the corresponding zeta functions using the Lauder-Wan algorithm \cite{LW}. Our next theorem connects $N_n({\varepsilon})$ with $F_q(n,{\varepsilon})$. This gives an alternative way of computing these zeta functions. \begin{thm}\label{thm:Zeta} For both types, we have \begin{equation} \label{eq:NF} N_n({\varepsilon})=F_q(n,{\varepsilon}).
\end{equation} Moreover, with ${\varepsilon}=\xi_1^{t_1}\cdots\xi_f^{t_f}$, we have \begin{itemize} \item[(I)] the logarithm of the Hasse-Weil zeta function of $N_n({\varepsilon})$ for type~I is equal to \begin{align*} \frac{1}{q-1}q^{1-\ell-t}\left(\ln \frac{1-z}{1-qz}+\sum_{ \vec{j}\in {\cal J}}\prod_{i=1}^f\omega_{r_i}^{-j_it_i}\ln P_{\vec{j}}(z)\right),\quad\text{and} \end{align*} \item[(II)] the logarithm of the Hasse-Weil zeta function of $N_n({\varepsilon})$ for type~II is equal to \begin{align*} q^{-\ell}\left(\ln \frac{1}{1-qz}+\sum_{ \vec{j}\in {\cal J}}\prod_{i=1}^f\omega_{r_i}^{-j_it_i}\ln P_{\vec{j}}(z)\right). \end{align*} \end{itemize} \end{thm} \noindent {\bf Proof } Equation~\eqref{eq:Irre} can be written as \begin{align*} \ln F=\sum_{m\ge1}\sum_{k\mid m}\sum_{f\in {\cal I}_{m/k}}\frac{\langle f^k \rangle}{k}z^m. \end{align*} For $k\mid n$, each $f\in {\cal I}_{n/k}$ has $n/k$ roots ${\alpha}\in\mathbb{F}_{q^n}$. For each ${\alpha}\in\mathbb{F}_{q^n}$ with $f({\alpha})=0$, we have $Q_{\alpha}=f^k$. Therefore, \begin{align*} n[z^n]\ln F&=\sum_{k\mid n}\sum_{f\in {\cal I} _{n/k}}\frac{n}{k}{\langle f^k \rangle}\\ &=\sum_{k\mid n}\sum_{f\in {\cal I}_{n/k}}\sum_{{\alpha}\in\mathbb{F}_{q^n}:f({\alpha})=0}{\langle f^k \rangle}\\ &=\sum_{k\mid n}\sum_{f\in {\cal I}_{n/k}}\sum_{{\alpha}\in\mathbb{F}_{q^n}:f({\alpha})=0}{\langle Q_{\alpha} \rangle}. \end{align*} Each ${\alpha}\in\mathbb{F}_{q^n}$ is a root of a unique irreducible polynomial $f$ over ${\mathbb F}_q$ of degree $n/k$ with $k\mid n$ and $f^k=Q_{\alpha}$; hence we have \begin{align*} n[z^n]\ln F&=\sum_{{\alpha}\in\mathbb{F}_{q^n}}\langle Q_{\alpha} \rangle, \end{align*} which gives \eqref{eq:NF}. The claim about the zeta functions follows from the definition of the zeta function and Theorem~\ref{thm:main}. ~~\vrule height8pt width4pt depth0pt \section{Type~I examples} \label{TypeI} Let $\ell \geq 1$ and $t\geq 1$ be fixed integers.
In this section, we use Theorem~\ref{thm:main} to derive formulas for the number of monic irreducible polynomials over ${\mathbb F}_{q}$ with prescribed $\ell$ leading coefficients and $t$ ending coefficients. Throughout this section, we let $\tau=\ell+t-1$. \begin{thm}\label{thm:typeI} Let $q$ be a prime power, and let $\ell$ and $t$ be positive integers. Let $\xi_1, \ldots, \xi_f$ be generators of the type~I group ${\cal E}$ with orders $r_1, \ldots, r_f$, respectively. Let $\omega_{r_i} = \exp(2\pi i/r_i)$ for $1\leq i \leq f$. Let $c_{d, \vec{j}}$ be defined by \eqref{eq:cdB} and let $P_{\vec{j}}(z)$ be defined in \eqref{eq:PB}. Suppose each polynomial $P_{\vec{j}}(z)$ is factored into linear factors, and let ${\cal S}_{\vec{j}}$ be the multiset of all the complex roots of $P_{\vec{j}}(z)$. Then the number of monic irreducible polynomials over $\mathbb{F}_q$ with the first $\ell$ coefficients and the last $t$ coefficients prescribed by ${\varepsilon}=\xi_1^{t_1}\cdots \xi_f^{t_f}$ is \begin{align} I_d(\vec{t})=\frac{1}{d}\sum_{k|d}\mu(k)\sum_{\vec{s}\in {\cal R}} N_{d/k}(\vec{s})\left\llbracket k\vec{s}\equiv \vec{t} ~(\bmod{~\vec{r}})\right\rrbracket, \end{align} where \begin{align} N_n(\vec{s})&=\frac{1}{q-1}q^{-\tau}\left(q^n-1\right)+\frac{1}{q-1}q^{-\tau}\sum_{\vec{j}\in {\cal J}} \prod_{i=1}^{f}{\omega}_{r_i}^{-j_is_i}n[z^n]\ln P_{\vec{j}}(z)\label{eq:factorI1}\\ &=\frac{1}{q-1}q^{-\tau}\left(q^n-1\right)-\frac{1}{q-1}q^{-\tau} \sum_{\vec{j}\in {\cal J}}\prod_{i=1}^f\omega_{r_i}^{-j_is_i}\sum_{\rho\in {\cal S}_{\vec{j}}}\rho^{-n}.\label{eq:factorI} \end{align} \end{thm} \begin{proof} Equation~\eqref{eq:factorI1} follows immediately from Theorem~\ref{thm:main}. Now \eqref{eq:factorI} follows from \[ \ln P_{\vec{j}}(z)=\sum_{\rho\in {\cal S}_{\vec{j}}}\ln(1-z/\rho), \] and the expansion \[ \ln(1-z/\rho)=-\sum_{n\ge 1}\rho^{-n}z^n/n. \] \end{proof} \begin{example} Let $q=2,\ell=t=2$. In this case, $|{\cal E}|=2^3$.
The generators are $\xi_1=\langle x+1\rangle$ and $\xi_2=\langle x^4+x+1\rangle$ of orders 4 and 2, respectively. We have ${\cal E}_1=\{\xi_1\}$, ${\cal E}_2=\{\xi_1^2,\xi_1^3\}$, ${\cal E}_3=\{\langle 1\rangle,\xi_1\xi_2,\xi_1^2\xi_2,\xi_1^3\}$. Using \eqref{eq:cdB}, we have \begin{align*} c_{1, (j_1,j_2)}&=i^{j_1},\\ c_{2, (j_1,j_2)}&=i^{2j_1}+i^{3j_1},\\ c_{3, (j_1,j_2)}&=1+i^{j_1}(-1)^{j_2}+i^{2j_1}(-1)^{j_2}+i^{3j_1},\\ P_{j_1,j_2}(z)&=1+c_{1, (j_1,j_2)}z+c_{2, (j_1,j_2)}z^2+c_{3, (j_1,j_2)}z^3, \end{align*} and hence $P_{1,j}(z)={\bar P}_{3,j}(z)$, and \begin{align*} P_{1,1}(z)&=1+iz-(1+i)z^2+2(1-i)z^3,\\ P_{1,2}(z)&=(1-z)(1+(1+i)z),\\ P_{2,1}(z)&=P_{2,2}(z)=1-z,\\ P_{4,1}(z)&=1+z+2z^2,\\ P_{1,1}(z)P_{3,1}(z)&=1-z^2+2z^3-2z^4+8z^6,\\ P_{1,2}(z)P_{3,2}(z)&=(1-z)^2(1+2z+2z^2). \end{align*} For ${\varepsilon}=\xi_1^{2s_1}\xi_2^{s_2}$, we may use \eqref{eq:conjugate} to combine the conjugate pairs to obtain \begin{align} N_n(\xi_1^{2s_1}\xi_2^{s_2})&=\frac{1}{8}\left(2^n-1\right) -\frac{1}{8}\left(1+(-1)^{s_2}+2(-1)^{s_1}\right)\nonumber\\ &~+(-1)^{s_2}\frac{n}{8}[z^n]\ln(1+z+2z^2)\nonumber\\ &~+(-1)^{s_1}\frac{n}{8}[z^n]\ln (1+2z+2z^2)\\ &~+(-1)^{s_1+s_2}\frac{n}{8}[z^n]\ln (1-z^2+2z^3-2z^4+8z^6). \nonumber \end{align} Using Maple to expand the logarithmic functions, we obtain Table~\ref{table222}. 
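The expansion can also be checked without Maple. Comparing coefficients in $P'(z)=P(z)\,(\ln P(z))'$ gives the recurrence $nc_n=\sum_{k=1}^{n}a_kc_{n-k}$ (with $c_0=1$) for $a_n=n[z^n]\ln P(z)$. The following Python sketch (the helper names \texttt{nlog} and \texttt{N\_n} are ours) reproduces the small-degree rows of Table~\ref{table222}:

```python
# a[n] = n * [z^n] ln P(z) for P(z) = 1 + c_1 z + ... ; comparing coefficients
# in P'(z) = P(z) (ln P(z))' gives n c_n = sum_{k=1}^{n} a_k c_{n-k}.
def nlog(coeffs, N):
    c = dict(enumerate(coeffs))               # c[k] = [z^k] P(z), c[0] = 1
    a = [0] * (N + 1)
    for n in range(1, N + 1):
        a[n] = n * c.get(n, 0) - sum(a[k] * c.get(n - k, 0) for k in range(1, n))
    return a

Nmax = 8
A = nlog([1, 1, 2], Nmax)                     # n [z^n] ln(1 + z + 2z^2)
B = nlog([1, 2, 2], Nmax)                     # n [z^n] ln(1 + 2z + 2z^2)
C = nlog([1, 0, -1, 2, -2, 0, 8], Nmax)       # n [z^n] ln(1 - z^2 + 2z^3 - 2z^4 + 8z^6)

def N_n(n, s1, s2):
    # N_n(xi_1^{2 s1} xi_2^{s2}) for q = 2, ell = t = 2, per the display above
    val = (2**n - 1) - (1 + (-1)**s2 + 2 * (-1)**s1) \
          + (-1)**s2 * A[n] + (-1)**s1 * B[n] + (-1)**(s1 + s2) * C[n]
    assert val % 8 == 0                       # the formula always yields integers
    return val // 8

# rows n = 4, 5, 6 for the classes 1, xi_2, xi_1^2, xi_1^2 xi_2:
# prints [1, 4, 2, 0], [5, 0, 5, 5], [9, 6, 4, 12], matching the table
for n in (4, 5, 6):
    print(n, [N_n(n, s1, s2) for (s1, s2) in ((0, 0), (0, 1), (1, 0), (1, 1))])
```

The root-free recurrence avoids factoring the polynomials numerically; it is equivalent to summing $-\rho^{-n}$ over the roots as in \eqref{eq:factorI}.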
\end{example} \begin{table} \begin{center} \begin{tabular}{|r|r|r|r|r|} \hline $n$&$N_n(\xi_1^0\xi_2^0)$&$N_n(\xi_2)$&$N_n(\xi_1^2)$&$N_n(\xi_1^2\xi_2)$\\ \hline 1 & 0& 0&0 &0 \\ \hline 2 &0 &0 & 1& 0\\ \hline 3 & 0& 0 & 0& 3\\ \hline 4 &1&4 & 2& 0\\ \hline 5 &5& 0 & 5& 5\\ \hline 6&9 & 6 & 4& 12\\ \hline 7 & 21&14 &7 & 21\\ \hline 8 &31 & 24& 40& 32 \\ \hline 9 & 63& 72 & 63& 57\\ \hline 10 & 125& 130 &116 & 140\\ \hline 11 & 253& 242 & 275& 253\\ \hline 12 & 523& 532 & 512&480\\ \hline 13 & 923& 1092 & 1079&1001\\ \hline 14 & 2065&2030 &2052 & 2044\\ \hline 15 & 4145& 4110 &4115 &4013\\ \hline 16 & 8143& 8112 &8128 & 8384\\ \hline 17 &16303& 16592 &16439 &16201\\ \hline 18 &33093& 32442 &32692 &32844\\ \hline 19 & 65493&65322 &65379 & 65949\\ \hline 20 & 131731 & 130924&130112 & 131520\\ \hline \end{tabular} \end{center} \caption{Values of $N_n(\xi_1^{2s_1}\xi_2^{s_2})$ for $q=2,\ell =2, t=2$.} \label{table222} \end{table} \begin{example} Let $q=3$, $\ell=2$ and $t=1$. In this case, $|{\cal E}|=18$. The generators are $\xi_1=\langle x+1\rangle$ and $\xi_2=\langle x+2\rangle$ of orders 3 and 6, respectively. We have \begin{align} {\cal E}_1&=\{\xi_1,\xi_2\},\nonumber\\ {\cal E}_2&=\{\xi_2^2,\xi_1\xi_2,\xi_1\xi_2^5,\xi_1^2,\xi_1^2\xi_2,\xi_1^2\xi_2^2\}. \end{align} Using \eqref{eq:cdB}, we have \begin{align*} c_{1, (j_1,j_2)}&={\omega}_3^{j_1}+{\omega}_6^{j_2},\\ c_{2, (j_1,j_2)}&={\omega}_6^{2j_2}+{\omega}_3^{j_1}{\omega}_6^{j_2}+{\omega}_3^{j_1}{\omega}_6^{5j_2}+{\omega}_3^{2j_1}+{\omega}_3^{2j_1}{\omega}_6^{j_2} +{\omega}_3^{2j_1}{\omega}_6^{2j_2},\\ P_{j_1,j_2}(z)&=1+c_{1, (j_1,j_2)}z+c_{2,(j_1,j_2)}z^2. 
\end{align*} We have $P_{1,j}(z)={\bar P}_{2,6-j}(z)$, $P_{3,j}(z)={\bar P}_{3,6-j}(z)$, and \begin{align*} P_{1,1}(z)&=1+i\sqrt{3}z,\\ P_{1,2}(z)&=(1-z)(1+i\sqrt{3}z),\\ P_{1,3}(z)&=(1+i\sqrt{3}z)\left(1+i\sqrt{3}{\omega}_3z\right),\\ P_{1,4}(z)&=1-z,\\ P_{1,5}(z)&=1-3z^2,\\ P_{1,6}(z)&=(1-z)\left(1-i\sqrt{3}{\omega}_3z\right),\\ P_{3,1}(z)&=(1+i\sqrt{3}z)\left(1-i\sqrt{3}{\omega}_6z\right),\\ P_{3,2}(z)&=(1-z)\left(1-i\sqrt{3}{\omega}_3z\right),\\ P_{3,3}(z)&=1. \end{align*} Applying \eqref{eq:factorI} and combining conjugate pairs, we obtain \begin{align} N_n(\xi_1^{t_1}\xi_2^{t_2}) &=\frac{1}{18}\left(3^{n}-1\right) -\frac{2}{18}3^{n/2}\Re\left({\omega}_6^{-2t_1-t_2}(-i)^n\right) -\frac{2}{18}3^{n/2}\Re\left({\omega}_3^{-t_1-t_2}(-i)^n\right)\nonumber\\ &~~-\frac{2}{18}\Re\left({\omega}_3^{-t_1-t_2}\right) -\frac{2}{18}3^{n/2}\Re\left({\omega}_6^{-2t_1-3t_2}\left((-i)^n+(-i{\omega}_3)^n\right)\right) -\frac{2}{18}\Re\left({\omega}_3^{-t_1-2t_2}\right)\nonumber\\ &~~-\frac{4\llbracket 2\mid n\rrbracket}{18}3^{n/2}\Re\left({\omega}_6^{-2t_1-5t_2}\right) -\frac{2}{18}3^{n/2}\Re\left({\omega}_3^{-t_1}(i{\omega}_3)^n\right) -\frac{2}{18}\Re\left({\omega}_3^{-t_1}\right)\nonumber\\ &~~-\frac{2}{18}3^{n/2}\Re\left({\omega}_6^{-t_2}\left((-i)^n+(i{\omega}_6)^n\right)\right) -\frac{2}{18}\Re\left({\omega}_6^{-2t_2}\right) -\frac{2}{18}3^{n/2}\Re\left({\omega}_6^{-2t_2}(i{\omega}_3)^n\right)\nonumber\\ &=\frac{1}{18}\left(3^{n}-1\right)-\frac{4\llbracket 2\mid n\rrbracket}{18}3^{n/2}\cos\left(\frac{\pi(2t_1+5t_2)}{3}\right) \\ &~~-\frac{2}{18}\left(\cos\left(\frac{2\pi(t_1+t_2)}{3}\right)+ \cos\left(\frac{2\pi (t_1+2t_2)}{3}\right)+\cos\left(\frac{2\pi t_1}{3}\right)+\cos\left(\frac{2\pi t_2}{3}\right) \right)\nonumber\\ &~~-\frac{2}{18}3^{n/2}\left(\cos\left(\frac{3n\pi}{2}-\frac{(2t_1+t_2)\pi}{3}\right)+ \cos\left(\frac{3n\pi}{2}-\frac{2(t_1+t_2)\pi}{3}\right) \right)\nonumber\\ &~~-\frac{2}{18}3^{n/2}(-1)^{t_2}\left(\cos\left(\frac{3n\pi}{2}-\frac{2t_1\pi}{3}\right)+ 
\cos\left(\frac{3n\pi}{2}+\frac{2(n-t_1)\pi}{3}\right) \right)\nonumber\\ &~~-\frac{2}{18}3^{n/2}\left(\cos\left(\frac{n\pi}{2}+\frac{2(n-t_1)\pi}{3}\right)+ \cos\left(\frac{3n\pi}{2}-\frac{t_2\pi}{3}\right) \right)\nonumber\\ &~~-\frac{2}{18}3^{n/2}\left(\cos\left(\frac{n\pi}{2}+\frac{(n-t_2)\pi}{3}\right)+ \cos\left(\frac{n\pi}{2}+\frac{2(n-t_2)\pi}{3}\right) \right).\nonumber \end{align} Using Maple, we obtain Tables~\ref{table321I}, \ref{table321II} and \ref{table321III}.
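As with the previous example, the counts can be reproduced numerically from \eqref{eq:factorI1}: rebuild every $P_{j_1,j_2}(z)$ from the displayed formulas for $c_{1,(j_1,j_2)}$ and $c_{2,(j_1,j_2)}$, extract $n[z^n]\ln P_{j_1,j_2}(z)$ by the coefficient recurrence $nc_n=\sum_{k}a_kc_{n-k}$, and sum over ${\cal J}=\{1,2,3\}\times\{1,\ldots,6\}\setminus\{(3,6)\}$. A Python sketch (helper names ours):

```python
import cmath

w3 = cmath.exp(2j * cmath.pi / 3)
w6 = cmath.exp(2j * cmath.pi / 6)

def P_coeffs(j1, j2):
    # c_{1,(j1,j2)} and c_{2,(j1,j2)} exactly as displayed above
    c1 = w3**j1 + w6**j2
    c2 = (w6**(2*j2) + w3**j1 * w6**j2 + w3**j1 * w6**(5*j2)
          + w3**(2*j1) + w3**(2*j1) * w6**j2 + w3**(2*j1) * w6**(2*j2))
    return [1, c1, c2]

def nlog(c, N):
    # a[n] = n [z^n] ln P(z), from n c_n = sum_{k=1}^{n} a_k c_{n-k}
    c = c + [0] * (N + 1 - len(c))
    a = [0] * (N + 1)
    for n in range(1, N + 1):
        a[n] = n * c[n] - sum(a[k] * c[n - k] for k in range(1, n))
    return a

Nmax = 8
J = [(j1, j2) for j1 in range(1, 4) for j2 in range(1, 7) if (j1, j2) != (3, 6)]
a = {j: nlog(P_coeffs(*j), Nmax) for j in J}

def N_n(n, t1, t2):
    # N_n(xi_1^{t1} xi_2^{t2}) for q = 3, ell = 2, t = 1, via the theorem above
    s = sum(w3**(-j1 * t1) * w6**(-j2 * t2) * a[(j1, j2)][n] for (j1, j2) in J)
    val = ((3**n - 1) + s) / 18
    assert abs(val.imag) < 1e-6 and abs(val.real - round(val.real)) < 1e-6
    return round(val.real)

# degree 1: only x+1 (class xi_1) and x+2 (class xi_2) occur
print([N_n(1, t1, t2) for t1 in range(3) for t2 in range(6)])
```

As sanity checks, the degree-$1$ counts pick out exactly the classes $\xi_1=\langle x+1\rangle$ and $\xi_2=\langle x+2\rangle$, and for every $n$ the counts over all $18$ classes sum to $3^n-1$, consistent with Theorem~\ref{thm:Zeta}, since each nonzero element of $\mathbb{F}_{3^n}$ lies in exactly one class.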
\end{example} \begin{table} \begin{center} \begin{tabular}{|r|r|r|r|r|r|r|} \hline $n$&$N_n(\xi_2^0)$&$N_n(\xi_2^1)$&$N_n(\xi_2^2)$&$N_n(\xi_2^3)$&$N_n(\xi_2^4)$&$N_n(\xi_2^5)$\\ \hline 1 & 0&1& 0& 0& 0 &0\\ \hline 2 &0 &0 &1&0 &0 & 0 \\ \hline 3 & 1& 0& 0& 1 &3& 3\\ \hline 4 &0&4 & 8&8&5 & 4\\ \hline 5 &10&15&15& 10 &15&6 \\ \hline 6&58 &36& 45& 40& 45 &36\\ \hline 7 & 112&99& 126&112& 126 &126\\ \hline 8 &328 &360&369& 400 &396 &360\\ \hline 9 & 1093&1134& 1134& 1093&1053 &1053 \\ \hline 10 & 3280& 3240& 3240&3280 &3321 &3240 \\ \hline 11 & 9922& 9801&9801& 9922&9801 &10044\\ \hline 12 & 28714& 29484& 29565& 29848&29565& 29484 \\ \hline 13 & 88816& 89181 &88452&88816&88452&88452 \\ \hline 14 & 265720 &265356& 266085&265720&265356& 265356 \\ \hline 15 & 797161& 796068&796068&797161&798255&798255 \\ \hline 16 &2388568 &2391120&2394036& 2394400 &2391849& 2391120\\ \hline 17 &7172266 & 7175547&7175547& 7172266&7175547& 7168986\\ \hline 18 &21536482& 21520080& 21526641& 21523360&21526641&21520080 \\ \hline 19 &64563520&64553679&64573362&64563520& 64573362&64573362 \\ \hline 20 & 193684000 & 193706964&193713525& 193736488&193733208& 193706964\\ \hline \end{tabular} \end{center} \caption{Values of $N_n(\xi_2^j)$ for $q=3,\ell =2, t=1$.} \label{table321I} \end{table} \begin{table} \begin{center} \begin{tabular}{|r|r|r|r|r|r|r|} \hline $n$&$N_n(\xi_1)$&$N_n(\xi_1\xi_2)$&$N_n(\xi_1\xi_2^2)$&$N_n(\xi_1\xi_2^3)$&$N_n(\xi_1\xi_2^4)$&$N_n(\xi_1\xi_2^5)$\\ \hline 1 & 1&0& 0& 0& 0 &0\\ \hline 2 &0 &0 &0&0 &0 & 2 \\ \hline 3 & 0& 3& 0& 3 &3& 3\\ \hline 4 &5&4 & 2&4&2 & 4\\ \hline 5 &15&15&15& 15 &15&15 \\ \hline 6&45 &36& 27& 36& 36 &54\\ \hline 7 & 99&126& 126&126& 126 &126\\ \hline 8 &396 &360&341& 360 &396 &360\\ \hline 9 & 1134&1053& 1134& 1053&1053 &1053 \\ \hline 10 & 3321& 3240& 3240&3240 &3402 &3402 \\ \hline 11 & 9801& 9801&9801& 9801&9801 &9801\\ \hline 12 & 29565& 29484& 29565&29484&29808& 29484 \\ \hline 13 & 89181& 88452 &88452&88452&88452&88452 \\ \hline 14 & 265356 
&265356& 265356&265356 &265356& 266814 \\ \hline 15 & 796068& 798255&796068&798255&798255&798255 \\ \hline 16 &2391849 &2391120& 2389662 & 2391120 & 2389662& 2391120\\ \hline 17 & 7175547& 7175547&7175547& 7175547&7175547& 7175547\\ \hline 18 &21526641& 21520080& 21513519& 21520080&21520080&21533202\\ \hline 19 &64553679&64573362&64573362&64573362& 64573362&64573362 \\ \hline 20 & 193733208 & 193706964 & 193693842& 193706964&193733208& 193706964\\ \hline \end{tabular} \end{center} \caption{Values of $N_n(\xi_1\xi_2^j)$ for $q=3,\ell =2, t=1$.} \label{table321II} \end{table} \begin{table} \begin{center} \begin{tabular}{|r|r|r|r|r|r|r|} \hline $n$&$N_n(\xi_1^2)$&$N_n(\xi_1^2\xi_2)$&$N_n(\xi_1^2\xi_2^2)$&$N_n(\xi_1^2\xi_2^3)$&$N_n(\xi_1^2\xi_2^4)$&$N_n(\xi_1^2\xi_2^5)$\\ \hline 1 &0&0& 0& 0& 0 &0\\ \hline 2 &1 &2 &2&0 &0 & 0 \\ \hline 3 & 3& 0& 0& 0 &3& 0\\ \hline 4 &8&4 & 8&4&2 & 4\\ \hline 5 &6&15&15& 15 &15&15 \\ \hline 6&45 &54& 36& 36&27 &36\\ \hline 7 & 126& 126 & 126&126& 126 &126\\ \hline 8 &369 &360&342& 360 &342 &360\\ \hline 9 & 1053& 1134 & 1134& 1134&1053 &1134 \\ \hline 10 & 3240& 3402& 3240&3240 &3240 &3240 \\ \hline 11 & 10044& 9801&9801& 9801&9801 &9801\\ \hline 12 & 29565& 29484& 29808&29484 &29565 & 29484 \\ \hline 13 & 88452& 88452 &88452&88452&88452&88452 \\ \hline 14 & 266085 &266814& 266814&265356 &265356& 265356 \\ \hline 15 & 798255& 796068 &796068&796068&798255&796068 \\ \hline 16 &2394036 &2391120&2394036& 2391120 & 2389662& 2391120\\ \hline 17 &7168986& 7175547&7175547& 7175547&7175547& 7175547\\ \hline 18 &21526641& 21533202 & 21520080& 21520080&21513519&21520080\\ \hline 19 &64573362&64573362&64573362&64573362& 64573362&64573362 \\ \hline 20 & 193713525 & 193706964&193693842& 193706964&193693842& 193706964\\ \hline \end{tabular} \end{center} \caption{Values of $N_n(\xi_1^2\xi_2^j)$ for $q=3,\ell =2, t=1$.} \label{table321III} \end{table} \section{Type~II examples} \label{TypeII} The number of irreducible polynomials with prescribed 
leading coefficients (i.e., trace and subtraces) is treated in detail by Granger \cite{Gra19}. Our generating function approach is connected to Granger's approach through \eqref{eq:NF}. We would like to point out that our Theorem~\ref{thm:Zeta} gives an alternative way of computing $N_d({\varepsilon})$, and this will be demonstrated through examples below. We first note that the group ${\cal E}$ is an abelian group of order $q^{\ell}=p^{r\ell}$, and thus it is a direct product of cyclic $p$-groups. For these type~II groups, for any $q$ and $\ell$, we suppose that the generators of the group ${\cal E}$ are $\xi_1, \xi_2, \ldots, \xi_f$, with orders $r_i$, $1\leq i \leq f$, respectively. Here $r_1 \cdots r_f = q^{\ell}$. Let $\tau = \ell -1$. The following result is analogous to Theorem~\ref{thm:typeI}, and its proof is essentially the same. \begin{thm} \label{thm:typeII} Let $q$ be a prime power and $\ell$ be a positive integer. Let $\xi_1, \ldots, \xi_f$ be generators of the type~II group ${\cal E}$ with orders $r_1, \ldots, r_f$, respectively. Let $\omega_{r_i} = \exp(2\pi i/r_i)$ for $1\leq i \leq f$. Let $c_{d, \vec{j}}$ be defined as in \eqref{eq:cdB} and let $P_{\vec{j}}(z)$ be defined in \eqref{eq:PB}. Suppose each polynomial $P_{\vec{j}}(z)$ is factored into linear factors, and let ${\cal S}_{\vec{j}}$ be the multiset of all the complex roots of $P_{\vec{j}}(z)$. For ${\varepsilon} = \xi_1^{t_1} \cdots \xi_f^{t_f}$, we shall use $I_d(\vec{t})$ to denote $I_d({\varepsilon})$ and so on.
Then the number of monic irreducible polynomials over $\mathbb{F}_q$ with prescribed $\ell$ leading coefficients ${\varepsilon}=\xi_1^{t_1}\cdots \xi_f^{t_f}$ is \begin{align} I_d(\vec{t})=\frac{1}{d}\sum_{k|d}\mu(k)\sum_{\vec{s}\in {\cal R}} N_{d/k}(\vec{s})\left\llbracket k\vec{s}\equiv \vec{t} ~(\bmod{~\vec{r}})\right\rrbracket, \end{align} where \begin{align} N_n(\vec{t})&=q^{n-\ell}+q^{-\ell}\sum_{ \vec{j}\in {\cal J}}\prod_{i=1}^f\omega_{r_i}^{-j_it_i}n[z^n]\ln P_{\vec{j}}(z) \\ &=q^{n-\ell}-q^{-\ell}\sum_{ \vec{j}\in {\cal J}}\prod_{i=1}^f\omega_{r_i}^{-j_it_i}\sum_{\rho\in{\cal S}_{\vec{j}}} \rho^{-n}.\label{eq:factorII} \end{align} \end{thm} The following lemma is useful for working out the examples below. \begin{lemma}\label{lemma:generator} Let $q=p$ be a prime number. The generators of ${\cal E}$ are $\langle x^j +1 \rangle$ for all $j$ such that $(p, j) =1$ and $j \leq \ell$. \end{lemma} \begin{proof} The order of $\langle x^j +1 \rangle$ is $p^{s_j}$, where $s_j$ is the smallest positive integer such that $jp^{s_j} > \ell$. In other words, $p^{s_j-1}$ is the largest power of $p$ such that $j p^{s_j-1} \leq \ell$. Hence the classes $\langle x^{jp^{t}} +1 \rangle$ are all distinct for $t=0, 1, \ldots, s_j-1$. Since each integer in $[\ell]$ can be written uniquely as $jp^{t_j}$ for some $j$ and $t_j$ satisfying $j\leq \ell$, $(j,p)=1$ and $t_j<s_j$, we have $\sum_{(p, j)=1, j \leq \ell} s_j = \ell$. Moreover, each $\langle x^j +1 \rangle$ generates a different subgroup of order $p^{s_j}$. \end{proof} \medskip \begin{example}\label{ex:Q2L3} Consider $q=2,\ell=3$. This case is also treated in \cite{YucMul04}, where some complicated expressions are given. By Lemma~\ref{lemma:generator}, the group ${\cal E}$ is isomorphic to $C_4\times C_2$, where $C_4$ is generated by $\xi_1=\langle x+1\rangle$ of order 4 and $C_2$ by $\xi_2=\langle x^3+1\rangle$ of order 2.
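The order computations from Lemma~\ref{lemma:generator} can be checked by direct calculation. Assuming the natural identification of a type~II class $\langle x^d+a_1x^{d-1}+\cdots\rangle$ with the truncated series $1+a_1u+\cdots+a_\ell u^\ell$ in $\mathbb{F}_p[u]/(u^{\ell+1})$, so that $\langle x^j+1\rangle$ corresponds to $1+u^j$ and $(1+u^j)^{p^s}=1+u^{jp^s}$ in characteristic $p$, a short Python sketch (helper names ours) confirms the orders used in this and the later examples:

```python
from itertools import product

def mul(a, b, p, ell):
    """Multiply a, b in F_p[u]/(u^(ell+1)); both are coefficient lists of length ell+1."""
    out = [0] * (ell + 1)
    for i, ai in enumerate(a):
        if ai:
            for k in range(ell + 1 - i):
                out[i + k] = (out[i + k] + ai * b[k]) % p
    return out

def gen(j, ell):
    """The class <x^j + 1>, viewed as 1 + u^j."""
    g = [0] * (ell + 1)
    g[0] = 1
    g[j] = 1
    return g

def order(g, p, ell):
    one = [1] + [0] * ell
    h, n = g[:], 1
    while h != one:
        h, n = mul(h, g, p, ell), n + 1
    return n

# q = 2, ell = 3: orders 4 and 2, as in this example
print(order(gen(1, 3), 2, 3), order(gen(3, 3), 2, 3))   # 4 2

# q = 2, ell = 5: generators 1+u, 1+u^3, 1+u^5 of orders 8, 2, 2
p, ell = 2, 5
gens = [gen(j, ell) for j in (1, 3, 5)]
orders = [order(g, p, ell) for g in gens]

# the 8*2*2 products are pairwise distinct, so <xi_1, xi_2, xi_3> = E has size 2^5
elems = set()
for exps in product(range(8), range(2), range(2)):
    e = [1] + [0] * ell
    for g, k in zip(gens, exps):
        for _ in range(k):
            e = mul(e, g, p, ell)
    elems.add(tuple(e))
print(orders, len(elems))   # [8, 2, 2] 32
```

For $q=2$, $\ell=5$ the $8\cdot2\cdot2$ products are pairwise distinct, so the three generators generate ${\cal E}$ as a direct product $C_8\times C_2\times C_2$ of order $2^5$, as claimed.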
In this case, ${\cal E}_1 = \{ \langle 1\rangle, \xi_1 \}$ and ${\cal E}_2 = \{ \langle 1\rangle, \xi_1, \xi_1^2, \xi_1^3 \xi_2 \}$. Using \eqref{eq:cdB}, we have \begin{align*} c_{1, \vec{j}}&=1+i^{j_1},\\ c_{2, \vec{j}}&=1+i^{j_1}+i^{2j_1}+i^{3j_1}(-1)^{j_2},\\ P_{j_1,j_2}(z)&=1+c_{1, \vec{j}}z+c_{2, \vec{j}}z^2. \end{align*} We have $P_{j_1,j_2}(z)={\bar P}_{4-j_1,j_2}(z)$, and \begin{align*} P_{1,1}(z)&=1+(1+i)z+2iz^2,&\\ P_{1,2}(z)&=1+(1+i)z,&\\ P_{2,1}(z)&=1+2z^2,&(\delta_{2,2})\\ P_{2,2}(z)&=1,&\\ P_{4,1}(z)&=1+2z+2z^2, &(\delta_{2,1})\\ P_{1,2}(z)P_{3,2}(z)&=1+2z+2z^2,&(\delta_{2,1}) \\ P_{1,1}(z)P_{3,1}(z)&=1+2z+2z^2+4z^3+4z^4.&(\delta_{4,1}) \end{align*} We note that the polynomials $\delta_{i,j}(X)$ in \cite[Theorem~6]{Gra19} are the reciprocals of our corresponding polynomials. For comparison purposes, we list the corresponding $\delta_{i, j}$ on the right. When the exponent of $\xi_1$ is even, we may multiply conjugate pairs together to obtain (using Granger's notation) \begin{align} N_n(\xi_1^{2s_1}\xi_2^{s_2}) &=2^{n-3}+(-1)^{s_2}\frac{n}{8}[z^n]\ln(1+2z^2)\nonumber\\ &~+\left((-1)^{s_2}+(-1)^{s_1}\right)\frac{n}{8}[z^n]\ln(1+2z+2z^2)\nonumber\\ &~+(-1)^{s_1+s_2}\frac{n}{8}[z^n]\ln (1+2z+2z^2+4z^3+4z^4)\nonumber\\ &=2^{n-3}-\frac{\llbracket 2\mid n\rrbracket}{4}(-1)^{s_2} (-2)^{n/2}\\ &~-\frac{1}{8}\left((-1)^{s_2}+(-1)^{s_1}\right)\rho_n(\delta_{2,1})-\frac{1}{8}(-1)^{s_1+s_2}\rho_n(\delta_{4,1}).\nonumber \end{align} More explicitly, we have \begin{align*} N_n(\xi_1^{0}\xi_2^{0})&=F_2(n,0,0,0)\\ &=2^{n-3}-\frac{\llbracket 2\mid n\rrbracket}{4}(-2)^{n/2}-\frac{1}{8}\left(2\rho_n(\delta_{2,1})+\rho_n(\delta_{4,1})\right), \\ N_n(\xi_1^{0}\xi_2^{1})&=F_2(n,0,0,1)\\ &=2^{n-3}+\frac{\llbracket 2\mid n\rrbracket}{4}(-2)^{n/2}+\frac{1}{8}\rho_n(\delta_{4,1}), \\ N_n(\xi_1^{2}\xi_2^{0})&=F_2(n,0,1,0)\\ &=2^{n-3}-\frac{\llbracket 2\mid n\rrbracket}{4}(-2)^{n/2}+\frac{1}{8}\rho_n(\delta_{4,1}), \\ N_n(\xi_1^{2}\xi_2^{1})&=F_2(n,0,1,1)\\
&=2^{n-3}+\frac{\llbracket 2\mid n\rrbracket}{4}(-2)^{n/2}-\frac{1}{8}\left(-2\rho_n(\delta_{2,1})+\rho_n(\delta_{4,1})\right). \end{align*} This immediately implies the corresponding four expressions in \cite[Theorem~6]{Gra19}. Moreover, we obtain expressions for even degrees. Since we have simple expressions for all the roots, we may obtain a more compact expression for $N_n(\xi_1^{t_1}\xi_2^{t_2})$ as follows. \begin{align*} N_n(\xi_1^{t_1}\xi_2^{t_2})&=2^{n-3}-2^{-3}\sum_{ (j_1,j_2)\ne (4,2)}i^{-j_1t_1}(-1)^{-j_2t_2}\sum_{\rho\in {\cal S}_{j_1,j_2}}\rho^{-n}\\ &=2^{n-3}+ \frac{ (-1)^{n-1}}{4}\Re\left({i}^{-t_1}(1+i)^n\right)\\ &+\frac{1}{8}(-1)^{t_2}\sum_{j=1}^4{i}^{-jt_1}f_n(j), \end{align*} where \begin{align} f_n(j)&=n[z^n]\ln P_{j,1}(z). \end{align} Note that \begin{align*} f_n(1)&=n[z^n]\ln(1+(1+i)z+2iz^2) \\&=n[z^n]\ln\left(\frac{1-(1+i)^3z^3}{1-(1+i)z}\right) \\&=n[z^n]\left(\ln\left(\frac{1}{1-(1+i)z}\right)-\ln\left(\frac{1}{1-(1+i)^3z^3}\right)\right) \\&=(1+i)^n(1-3\llbracket 3\mid n \rrbracket),\\ f_n(2)&=n[z^n]\ln(1+2z^2)=2(-1)(-2)^{n/2}\llbracket 2\mid n\rrbracket, \\f_n(3)&=\overline{f_n(1)}=(1-i)^n(1-3\llbracket 3\mid n \rrbracket),\\ f_n(4)&=n[z^n]\ln(1+2z+2z^2) \\&=n[z^n]\ln((1+(1+i)z)(1+(1-i)z)) \\&=n[z^n](\ln(1+(1+i)z)+\ln(1+(1-i)z)) \\&=2(-1)^{n-1}\Re\left((1+i)^n\right).
\end{align*} Thus \begin{align} N_n(\xi_1^{t_1}\xi_2^{t_2})&=2^{n-3}+\frac{ 1}{4}(-1)^{n-1}\Re\left({i}^{-t_1}(1+i)^n\right)\nonumber\\ &~~+\frac{\llbracket 2\mid n\rrbracket}{4}(-1)^{t_2+t_1-1}(-2)^{n/2}\nonumber\\ &~~+\frac{1-3\llbracket 3\mid n\rrbracket}{4}(-1)^{t_2}\Re\left(i^{-t_1}(1+i)^n\right)\nonumber\\ &~~+\frac{1}{4}(-1)^{t_2+n-1}\Re\left((1+i)^n\right)\nonumber\\ &=2^{n-3}+\frac{\llbracket 2\mid n\rrbracket}{4}(-1)^{t_2+t_1-1}(-2)^{n/2}\label{eq:ex2}\\ &~~+\frac{1}{4}(-1)^{t_2+n-1}2^{n/2}\cos\left(\frac{n\pi}{4}\right)\nonumber\\ &~~+\left(\frac{1}{4}(-1)^{n-1}+\frac{1-3\llbracket 3\mid n\rrbracket}{4 }(-1)^{t_2} \right)2^{n/2}\cos\left(\frac{n\pi}{4}-\frac{t_1\pi}{2}\right).\nonumber \end{align} Expression~\eqref{eq:ex2} gives a complete solution to the case $q=2$ and $\ell=3$, which improves \cite[Theorem~6]{Gra19}. \end{example} \begin{example}\label{ex:Q2L4} Consider $q=2,\ell=4$. By Lemma~\ref{lemma:generator}, the generators are $\xi_1=\langle x+1\rangle$ and $\xi_2=\langle x^3+1\rangle$ of orders 8 and 2, respectively. Hence \begin{align*} {\cal E}_1 &= \{ \langle 1\rangle, \xi_1 \},\\ {\cal E}_2 &={\cal E}_1 \cup \{\xi_1^2,\xi_1^7\xi_2\},\\ {\cal E}_3 &= {\cal E}_2 \cup \{\xi_1^2\xi_2, \xi_1^3, \xi_1^5\xi_2 ,\xi_2\}. \end{align*} Using \eqref{eq:cdB}, we have \begin{align*} c_{1, (j_1,j_2)}&=1+{\omega}_8^{j_1},\\ c_{2, (j_1,j_2)}&=c_{1, (j_1,j_2)}+{\omega}_8^{2j_1}+{\omega}_8^{7j_1}(-1)^{j_2},\\ c_{3, (j_1,j_2)}&=c_{2, (j_1,j_2)}+{\omega}_8^{2j_1}(-1)^{j_2}+{\omega}_8^{3j_1}+{\omega}_8^{5j_1}(-1)^{j_2}+(-1)^{j_2},\\ P_{j_1,j_2}(z)&=1+c_{1, (j_1,j_2)}z+c_{2,(j_1,j_2)}z^2+c_{3,(j_1,j_2)}z^3.
\end{align*} Using Maple, we obtain (for comparison purposes, we list the corresponding $\delta_{j, k}$ from \cite{Gra19}) \begin{align*} P_{1,1}(z)&=1+\frac{2+(1+i)\sqrt{2}}{2}z+(1+i(1+\sqrt{2}))z^2+2i\sqrt{2}z^3,&\\ P_{2,1}(z)&=1+(1+i)z+2iz^2,&\\ P_{3,1}(z)&=1+\frac{2+(-1+i)\sqrt{2}}{2}z+(1-i(1+\sqrt{2}))z^2+2i\sqrt{2}z^3,&\\ P_{4,1}(z)&=1+2z^2,&(\delta_{2,2})\\ P_{8,1}(z)&=1+2z+2z^2,&(\delta_{2,1})\\ P_{1,2}(z)&=1+\frac{2+(1+i)\sqrt{2}}{2}z+(1+i+\sqrt{2})z^2+(2+2i)z^3,&\\ P_{2,2}(z)&=1+(1+i)z,&\\ P_{3,2}(z)&=1+\frac{2+(-1+i)\sqrt{2}}{2}z+(-1+i+\sqrt{2})z^2+(2-2i)z^3,&\\ P_{4,2}(z)&=1,&\\ P_{2,1}(z)P_{6,1}(z)&=1+2z+2z^2+4z^3+4z^4,&(\delta_{4,1})\\ P_{2,2}(z)P_{6,2}(z)&=1+2z+2z^2,&(\delta_{2,1}) \end{align*} and \begin{align*} &~~~P_{1,1}(z)P_{7,1}(z)P_{3,1}(z)P_{5,1}(z)&\\ &=(16z^8 + 8z^6 + 8z^5 + 2z^4 + 4z^3 + 2z^2 + 1)(2z^2 + 2z + 1)^2,&(\delta_{8,2},\delta_{2,1}^2)\\ &~~~P_{1,2}(z)P_{7,2}(z)P_{3,2}(z)P_{5,2}(z)&\\ &=(16z^8 +32z^7 +24z^6 +8z^5+2z^4 + 4z^3 +6z^2 +4z+ 1)(2z^2 + 1)^2. &(\delta_{8,1}, \delta_{2,2}^2) \end{align*} When the exponent of $\xi_1$ is a multiple of 4, we may combine the conjugate pairs together to obtain (using Granger's notation) \begin{align} &~~~~N_n(\xi_1^{4s_1}\xi_2^{s_2})-2^{n-4}\nonumber\\ &=-\frac{1}{16}\left(1+2(-1)^{s_1+s_2}+(-1)^{s_2}\right)\rho_n(\delta_{2,1}) -\frac{1}{16}\left(2(-1)^{s_1}+(-1)^{s_2}\right)\rho_n(\delta_{2,2})\nonumber\\ &~~~-\frac{1}{16}(-1)^{s_2}\rho_n(\delta_{4,1}) -\frac{1}{16}(-1)^{s_1+s_2}\rho_n(\delta_{8,2}) -\frac{1}{16}(-1)^{s_1}\rho_n(\delta_{8,1}). \end{align} Setting $(s_1,s_2)=(0,0),(0,1),(1,0),(1,1)$, we obtain \cite[Theorem~8]{Gra19}. \end{example} \medskip \begin{example} Consider $q=3, \ell=3$. It is easy to check that $\xi_1=\langle x+1\rangle$ has order 9 and $\xi_2=\langle x^2+1\rangle$ has order 3. The group ${\cal E}$ is isomorphic to $C_9\times C_3$ with generators $\xi_1$ and $\xi_2$.
We have \begin{align*} {\cal E}_1&=\{\langle 1\rangle,\xi_1,\xi_1^8\xi_2^2\},\\ {\cal E}_2&={\cal E}_1 \cup \{\xi_1^2,\xi_2,\xi_2^2,\xi_1^4\xi_2^2,\xi_1^5\xi_2,\xi_1^7\xi_2\}. \end{align*} Using ${\omega}_3={\omega}_9^3$, we obtain \begin{align*} c_{1,(j_1,j_2)}&=1+{\omega}_9^{j_1}+{\omega}_9^{8j_1+6j_2}, \\ c_{2,(j_1,j_2)}&=c_{1,(j_1,j_2)} +{\omega}_9^{2j_1}+{\omega}_9^{3j_2}+{\omega}_9^{6j_2}+{\omega}_9^{4j_1+6j_2} +{\omega}_9^{5j_1+3j_2}+{\omega}_9^{7j_1+3j_2},\\ P_{j_1,j_2}(z)&=1+c_{1,(j_1,j_2)}z+c_{2,(j_1,j_2)}z^2. \end{align*} We have $P_{j_1,j_2}(z)={\bar P}_{9-j_1,3-j_2}(z)$ and \begin{align*} P_{1,1}(z)&=P_{5,1}(z)=1+(1+{\omega}_9+{\omega}_9^5)z+3{\omega}_9z^2,\\ P_{2,1}(z)&=P_{4,1}(z)=1+(1+{\omega}_9^2+{\omega}_9^4)z+3{\omega}_9^4z^2,\\ P_{3,1}(z)&=1+i\sqrt{3}z,\\ P_{6,1}(z)&=P_{9,1}(z)=1+\frac{3-i\sqrt{3}}{2}z,\\ P_{7,1}(z)&=P_{8,1}(z)=1+(1+{\omega}_9^7+{\omega}_9^8)z+3{\omega}_9^7z^2,\\ P_{1,3}(z)&=1+(1+{\omega}_9+{\omega}_9^8)z+3z^2,\\ P_{2,3}(z)&=1+(1+{\omega}_9^2+{\omega}_9^7)z+3z^2,\\ P_{3,3}(z)&=1,\\ P_{4,3}(z)&=1+(1+{\omega}_9^4+{\omega}_9^5)z+3z^2. \end{align*} Using Maple, we obtain \begin{align*} &~~~\prod_{(j_1,j_2)\ne(9,3)}P_{j_1,j_2}(z)&\\ &=(1+3z^2)\left(1+3z+3z^2\right)^2&({\varepsilon}_{2,2}, {\varepsilon}^2_{2,3})\\ &~\times\left(1+3z+9z^2+15z^3+27z^4+27z^5+27z^6\right)^2&({\varepsilon}^2_{6})\\ &~\times \Big(1+6z+18z^2+39z^3+63z^4+81z^5+117z^6+243z^7& \\ &~~~+567z^8+1053z^9+1458z^{10}+1458z^{11}+729z^{12}\Big)^2. &({\varepsilon}^2_{12,4}) \end{align*} It follows from Corollary~\ref{cor:trivial} that \begin{align}\label{eq:Q3L3} N_n(\vec{0})&=3^{n-3}-\frac{1}{27}\Big(\rho_n({\varepsilon}_{2,2})+2\rho_n({\varepsilon}_{2,3}) +2\rho_n({\varepsilon}_{6}) +2\rho_n({\varepsilon}_{12,4})\Big). \end{align} We note that this formula is much simpler than the one given in \cite[Theorem~14]{Gra19}. \end{example} Finally, we include an example with three generators. \begin{example} Consider $q=2, \ell=5$.
By Lemma~\ref{lemma:generator}, the generators are $\xi_1=\langle x+1\rangle$, $\xi_2=\langle x^3+1\rangle$ and $\xi_3=\langle x^5+1\rangle$, of orders 8, 2, and 2, respectively. We have \begin{align} {\cal E}_1&=\{\langle 1\rangle,\xi_1\}, \nonumber \\ {\cal E}_2&={\cal E}_1\cup \{ \xi_1^2,\xi_1^7\xi_2\},\nonumber \\ {\cal E}_3&={\cal E}_2 \cup\{\xi_2,\xi_1^2\xi_2\xi_3,\xi_1^3,\xi_1^5\xi_2\xi_3\},\nonumber \\ {\cal E}_4&={\cal E}_3\cup\{\xi_1\xi_2,\xi_1^3\xi_2\xi_3,\xi_1^4,\xi_1^4\xi_2,\xi_1^5\xi_3,\xi_1^6,\xi_1^6\xi_2\xi_3, \xi_1^7\xi_3\}. \end{align} Using \eqref{eq:cdB}, we have \begin{align*} c_{1, (j_1,j_2,j_3)}&=1+{\omega}_8^{j_1},\\ c_{2, (j_1,j_2,j_3)}&=c_{1, (j_1,j_2,j_3)}+{\omega}_8^{2j_1}+{\omega}_8^{7j_1}(-1)^{j_2},\\ c_{3, (j_1,j_2,j_3)}&=c_{2, (j_1,j_2,j_3)}+{\omega}_8^{2j_1}(-1)^{j_2+j_3}+{\omega}_8^{3j_1}+{\omega}_8^{5j_1}(-1)^{j_2+j_3}+(-1)^{j_2},\\ c_{4, (j_1,j_2,j_3)}&=c_{3, (j_1,j_2,j_3)}+{\omega}_8^{j_1}(-1)^{j_2} +{\omega}_8^{3j_1}(-1)^{j_2+j_3}+{\omega}_8^{4j_1}+{\omega}_8^{4j_1}(-1)^{j_2}\\ &~~~+{\omega}_8^{5j_1}(-1)^{j_3}+{\omega}_8^{6j_1}+{\omega}_8^{6j_1}(-1)^{j_2+j_3}+{\omega}_8^{7j_1}(-1)^{j_3},\\ P_{j_1,j_2,j_3}(z)&=1+c_{1, (j_1,j_2,j_3)}z+c_{2,(j_1,j_2,j_3)}z^2+c_{3,(j_1,j_2,j_3)}z^3+c_{4,(j_1,j_2,j_3)}z^4.
\end{align*} Using Maple, we obtain \begin{align*} P_{8,0,1}(z)&=(1+2z^2)(1+2z+2z^2),&(\delta_{2,2},\delta_{2,1})\\ P_{8,1,0}(z)&=1+2z+2z^2,&(\delta_{2,1})\\ P_{8,1,1}(z)&=1+2z+2z^2+4z^3+4z^4, &(\delta_{4,1})\\ P_{4,0,0}(z)&=1,&\\ P_{4,0,1}(z)&=(1+2z+2z^2)(1-2z+2z^2),&(\delta_{2,1},\delta_{2,3})\\ P_{4,1,0}(z)&=1+2z^2,&(\delta_{2,2})\\ P_{4,1,1}(z)&=1+2z^2+4z^4,&(\delta_{4,2})\\ P_{2,0,0}(z)P_{6,0,0}(z)&=1+2z+2z^2,&(\delta_{2,1})\\ P_{2,0,1}(z)P_{6,0,1}(z)&=(1+2z+2z^2)(1-2z+2z^2)(1+2z+2z^2+4z^3+4z^4),&(\delta_{4,1},\delta_{2,1},\delta_{2,3})\\ P_{2,1,0}(z)P_{6,1,0}(z)&=1+2z+2z^2+4z^3+4z^4,&(\delta_{4,1})\\ P_{2,1,1}(z)P_{6,1,1}(z)&=16z^8 + 16z^7 + 8z^6 - 4z^4 + 2z^2 + 2z + 1,&(\delta_{8,3}) \end{align*} and \begin{align*} &~~~P_{1,0,0}(z)P_{7,0,0}(z)P_{3,0,0}(z)P_{5,0,0}(z) &\\ &=(16z^8 +32z^7 +24z^6 +8z^5+2z^4 + 4z^3 +6z^2 +4z+ 1)(2z^2 + 1)^2, &(\delta_{8,1},\delta_{2,2}^2)\\ &~~~P_{1,0,1}(z)P_{7,0,1}(z)P_{3,0,1}(z)P_{5,0,1}(z) &\\ &=(16z^8 +8z^6-8z^5+2z^4 -4z^3 +2z^2 + 1)(1+2z+2z^2+4z^3+4z^4)^2,&(\delta_{8,4},\delta_{4,1}^2)\\ &~~~P_{1,1,0}(z)P_{7,1,0}(z)P_{3,1,0}(z)P_{5,1,0}(z)&\\ &=(16z^8 + 8z^6 + 8z^5 + 2z^4 + 4z^3 + 2z^2 + 1)(1+2z+2z^2)^2,&(\delta_{8,2},\delta_{2,1}^2),\\ &~~~P_{1,1,1}(z)P_{7,1,1}(z)P_{3,1,1}(z)P_{5,1,1}(z) &\\ &=(16z^8 +32z^7 +24z^6 +8z^5+2z^4 + 4z^3 +6z^2 +4z+ 1)(1+2z^2 +4z^4)^2. 
&(\delta_{8,1},\delta_{4,2}^2) \end{align*} By combining conjugate pairs as before, we obtain (using the notation from \cite{Gra19})\\ \begin{align} &~~~~N_n(\xi_1^{4s_1}\xi_2^{s_2}\xi_3^{s_3})-2^{n-5}\nonumber\\ &=-\frac{1}{32}(-1)^{s_3}\Big(\rho_n(\delta_{2,2})+\rho_n(\delta_{2,1})\Big)- \frac{1}{32}(-1)^{s_2}\rho_n(\delta_{2,1})-\frac{1}{32}(-1)^{s_2+s_3}\rho_n(\delta_{4,1})\nonumber\\ &~~~-\frac{1}{32}(-1)^{s_3}\Big(\rho_n(\delta_{2,3})+\rho_n(\delta_{2,1})\Big)- \frac{1}{32}(-1)^{s_2}\rho_n(\delta_{2,2})- \frac{1}{32}(-1)^{s_2+s_3}\rho_n(\delta_{4,2})\nonumber\\ &~~~- \frac{1}{32}\rho_n(\delta_{2,1}) -\frac{1}{32}(-1)^{s_3}\Big(\rho_n(\delta_{4,1})+\rho(\delta_{2,3})+\rho_n(\delta_{2,1})\Big) - \frac{1}{32}(-1)^{s_2}\rho_n(\delta_{4,1})\nonumber\\ &~~~- \frac{1}{32}(-1)^{s_2+s_3}\rho_n(\delta_{8,3}) -\frac{1}{32}(-1)^{s_1}\Big(\rho_n(\delta_{8,1})+2\rho_n(\delta_{2,2})\Big)\nonumber\\ &~~~-\frac{1}{32}(-1)^{s_1+s_3}\Big(\rho_n(\delta_{8,4})+2\rho_n(\delta_{4,1})\Big) -\frac{1}{32}(-1)^{s_1+s_2}\Big(\rho_n(\delta_{8,2})+2\rho_n(\delta_{2,1})\Big)\nonumber\\ &~~~-\frac{1}{32}(-1)^{s_1+s_2+s_3}\Big(\rho_n(\delta_{8,1})+2\rho_n(\delta_{4,2})\Big). \end{align} Thus \begin{align*} &~~~~N_n(\xi_1^{4s_1}\xi_2^{s_2}\xi_3^{s_3})-2^{n-5}\\ &=-\frac{1}{32}\left(1+(-1)^{s_3}+(-1)^{s_2}+2(-1)^{s_3}+2(-1)^{s_1+s_2}\right)\rho_n(\delta_{2,1})\\ &~~~-\frac{1}{32}\left((-1)^{s_3}+(-1)^{s_2}+2(-1)^{s_1}\right)\rho_n(\delta_{2,2}) -\frac{2}{32}(-1)^{s_3}\rho_n(\delta_{2,3})\\ &~~~-\frac{1}{32}\left((-1)^{s_2+s_3}+(-1)^{s_3}+(-1)^{s_2}+2(-1)^{s_1+s_3}\right)\rho_n(\delta_{4,1})\\ &~~~-\frac{1}{32}\left((-1)^{s_2+s_3}+2(-1)^{s_1+s_2+s_3}\right)\rho_n(\delta_{4,2})\\ &~~~-\frac{1}{32}\left((-1)^{s_1}+ (-1)^{s_1+s_2+s_3}\right)\rho_n(\delta_{8,1}) -\frac{1}{32}(-1)^{s_1+s_2}\rho_n(\delta_{8,2})\\ &~~~-\frac{1}{32}(-1)^{s_2+s_3}\rho_n(\delta_{8,3}) -\frac{1}{32}(-1)^{s_1+s_3}\rho_n(\delta_{8,4}). 
\end{align*} This immediately implies \cite[Theorem~12]{Gra19} by taking $(s_1,s_2,s_3)=(0,0,0)$, $(1,0,0)$, $(0,0,1)$, $(1,0,1)$, respectively. \end{example} \section{Conclusion} \label{conclusion} Through the study of the group of equivalence classes of monic irreducible polynomials with prescribed coefficients, we obtain general expressions for the generating functions of the number of monic irreducible polynomials with prescribed coefficients over finite fields. Explicit formulae can be obtained accordingly. We demonstrate our recipe using several concrete examples and compare our expressions with previously known results.
\section{Introduction} The exploration of the phase diagram of strongly interacting matter and the search for signals of the phase transition from the hadronic to the quark-gluon phase are very important in both theory and experiment. As a fundamental tool, lattice QCD provides the best framework for the investigation of non-perturbative phenomena such as confinement and quark-gluon plasma formation at finite temperature and vanishing~(small) chemical potential \cite{Karsch01, Karsch02, Allton02, Kaczmarek05, Cheng06, YAoki99, Borsanyi10}. However, lattice QCD suffers from the serious sign problem of the fermion determinant with three colors at finite $\mu$. Although several approximation methods have been adopted to evade this problem \cite{Fodor02,Fodor03,Elia09,Ejiri08,Clark07}, the validity of lattice simulations at finite chemical potential is still limited to the region $\mu_q/T<1$~\cite{Fukushima11}. The results obtained with $\mu_q/T>1$ should be taken with care. On the other hand, many phenomenological models \cite{Nambu61,Toublan03,Werth05,Abuki06}, as well as the more microscopic Dyson-Schwinger equations (DSEs) approach \cite{Qin11}, have been proposed to derive a complete description of the QCD phase diagram. Among these effective models, the Nambu--Jona-Lasinio model~(NJL) is a prominent one, since it offers a simple illustration of chiral symmetry breaking and restoration, a key feature of QCD~\cite{Volkov84,Hatsuda84,Klevansky92,Hatsuda94, Alkofer96,Buballa05, Rehberg95}. Moreover, it provides a complicated phase diagram of color superconductivity at high density~\cite{ Shovkovy03,Huang03,Alford08}. One deficiency of the standard NJL model is that quarks are not confined. Recently, an improved version of the NJL model coupled to Polyakov-Loop fields (PNJL) has been proposed~\cite{Fukushima04}.
The PNJL model takes into account both the chiral symmetry and the (de)confinement effect, giving a good interpretation of lattice data at zero chemical potential and finite temperature. At the same time it is able to make predictions in regions that cannot be presently reached in lattice calculations~\cite{Ratti06,Costa10,Schaefer10,Herbst11,Kashiwa08,Abuki08,Fu08}. Most effective models, including the PNJL model, describe the hadron--quark-gluon phase transition based on quark degrees of freedom. As a matter of fact, at low temperature and small chemical potential, QCD dynamics should be governed by hadrons. Therefore, it is natural to describe the strongly interacting matter with hadronic degrees of freedom at low $T$ and small $\mu$ and with quarks at high $T$ and large $\mu$. This picture can be easily realized following a two equation of state (Two-EoS) model, where hadronic and quark phases are connected by the Gibbs (Maxwell) criteria. Such an approach is widely used in describing the phase transition in the interior of compact stars in beta-equilibrium~(e.g.,\,\cite{Glendenning92, Glendenning98,Burgio02,Maruyama07,Yang08,Shao10,Xu10} ). Recently, it has also been adopted to explore the phase diagram of the hadron-quark transition at finite temperature and density related to Heavy-Ion Collisions (HIC) ~\cite{Muller97, Toro06, Torohq09, Pagliara10, Cavagnoli10}. Moreover, in these studies more attention was paid to isospin asymmetric matter, and some observable effects have been suggested to be seen in the charged meson yield ratio and in the onset of quark number scaling of the meson/baryon elliptic flows in Ref.~\cite{Toro06, Torohq09}. Such a heavy-ion connection provides us a new orientation to investigate the hadron-quark phase transition, and it can stimulate new relevant research in this field.
We have previously studied the hadron-quark phase transition in the Two-EoS model by using the MIT-Bag model~\cite{Torohq09} and the NJL model~\cite{Shao11} to describe quark matter, respectively. In particular, a kind of Critical-End-Point (CEP) of a first order transition has been found at about $T=80$ MeV and $\mu=900$ MeV when the NJL model is considered for the quark matter. In this paper, in order to obtain more reliable results and predict possible observables in the experiments, an improved calculation, within the Two-EoS approach, has been performed. We take the PNJL Lagrangian to describe the properties of quark matter, with the interaction between quarks and the Polyakov loop, where both the chiral and (de)confinement dynamics are included simultaneously. We are not considering here color pairing correlations, which affect the isospin asymmetry \cite{Pagliara10}, since in heavy ion collisions the high density system will always be formed at rather large temperatures \cite{Toro06}. We obtain the phase diagrams of the hadron--quark-gluon phase transition in the $ T-\rho_{B}^{}$ and $T-\mu_B^{}$ planes. We compare the obtained results with those given in \cite{Shao11}, where the NJL model is used to describe the quark phase. The calculation shows that the phase-transition curves are greatly modified when both the chiral dynamics and the (de)confinement effect are considered, in particular in the high temperature and low chemical potential region. We still see a first order transition, but the CEP is now at much higher temperature and lower chemical potential. In fact the CEP temperature is much closer to the critical temperature (for a crossover) given by lattice calculations at vanishing chemical potential. Our results seem to stress the importance of an extension of lattice calculations up to quark chemical potentials around $\mu_q/T_c \simeq 1$.
In addition we address the discussion about the non-coincidence of chiral and deconfinement phase transitions at large chemical potential and low temperature, relevant to the formation of quarkyonic matter. Finally, the calculation confirms that the onset density of the hadron-quark phase transition is much smaller in isospin asymmetric matter than in symmetric matter, and therefore it will be easier to probe the mixed phase in experiments. The paper is organized as follows. In Section II, we describe briefly the Two-EoS approach and give the relevant formulae of the hadronic non-linear Walecka model and the PNJL effective theory. In Section III, we discuss the expected effects of the confinement dynamics. The quark matter phase transition is presented in Section IV for the NJL as well as the PNJL models. Section V is devoted to the phase diagrams within the Two-EoS frame and to the comparison with the results obtained using the pure quark PNJL model to describe both phases. Moreover, we present some discussions and conclusions about the phase transition, as well as some suggestions for further study. Finally, a summary is given in Section VI. \vskip 0.5cm \section{ hadron matter, quark matter and the mixed phase} In our Two-EoS approach, the hadron matter and quark matter are described by the non-linear Walecka model and by the PNJL model, respectively. For the mixed phase between pure hadronic and quark matter, the two phases are connected to each other through the Gibbs conditions deduced from thermal, chemical and mechanical equilibrium. In this section, we will first give a short introduction of the nonlinear Walecka model for the hadron matter and the PNJL model for quark matter, then we construct the mixed phase with the Gibbs criteria based on baryon and isospin charge conservation during the transition.
For hadron phase, the non-linear Relativistic Mean Field (RMF) approach is used, which provides an excellent description of nuclear matter and finite nuclei as well as of compressed matter properties probed with high energy HIC ~\cite{Muller97, Toro06, Torohq09, Liubo11, Toro09}. The exchanged mesons include the isoscalar-scalar meson $\sigma$ and isoscalar-vector meson $\omega$ ($NL$ force, for isospin symmetric matter), isovector-vector meson $\rho$ and isovector-scalar meson $\delta$, ($NL\rho$ and $NL\rho\delta$ forces, for isospin asymmetric matter). The effective Lagrangian is written as \begin{widetext} \begin{eqnarray} \cal{L} &=&\bar{\psi}[i\gamma_{\mu}\partial^{\mu}- M +g_{\sigma }\sigma+g_{\delta }\boldsymbol\tau_{} \cdot\boldsymbol\delta -g_{\omega }\gamma_{\mu}\omega^{\mu} -g_{\rho }\gamma_{\mu}\boldsymbol\tau_{}\cdot\boldsymbol \rho^{\mu}]\psi \nonumber\\ & &{}+\frac{1}{2}\left(\partial_{\mu}\sigma\partial^ {\mu}\sigma-m_{\sigma}^{2}\sigma^{2}\right) - \frac{1}{3} b\,(g_{\sigma} \sigma)^3-\frac{1}{4} c\, (g_{\sigma} \sigma)^4 +\frac{1}{2}\left(\partial_{\mu}\delta\partial^{\mu}\delta -m_{\delta}^{2}\delta^{2}\right) \nonumber\\ & &{}+\frac{1}{2}m^{2}_{\omega} \omega_{\mu}\omega^{\mu} -\frac{1}{4}\omega_{\mu\nu}\omega^{\mu\nu} +\frac{1}{2}m^{2}_{\rho}\boldsymbol\rho_{\mu}\cdot\boldsymbol \rho^{\mu} -\frac{1}{4}\boldsymbol\rho_{\mu\nu}\cdot\boldsymbol\rho^{\mu\nu}, \end{eqnarray} \end{widetext} where the antisymmetric tensors of vector mesons are given by \begin{equation} \omega_{\mu\nu}= \partial_\mu \omega_\nu - \partial_\nu \omega_\mu,\qquad \nonumber \rho_{\mu\nu} \equiv\partial_\mu \boldsymbol\rho_\nu -\partial_\nu \boldsymbol\rho_\mu. 
\end{equation} The nucleon chemical potential and effective mass in nuclear medium can be expressed as \begin{equation} \mu_{i}^{} =\mu_{i}^{*}+g_{\omega }\omega+g_{\rho}\tau_{3i}^{}\rho \, , \end{equation} and \begin{equation} M_{i}^{*} = M -g_{\sigma }^{}\sigma-g_{\delta }^{} \tau_{3i}^{} \delta, \end{equation} where $M$ is the free nucleon mass, $ \tau_{3p}^{}=1$ for proton and $\tau_{3n}^{}=-1$ for neutron, and $\mu_{i}^{*}$ is the effective chemical potential which reduces to Fermi energy $E_{Fi}^{*}=\sqrt{k_{F}^{i^{2}}+M_{i}^{*^{2}}}$ at zero temperature. The baryon and isospin chemical potentials in the hadron phase are defined as \begin{equation} \mu_{B}^{H} =\frac{\mu_{p}+\mu_{n}}{2},\ \ \ \ \ \mu_{3}^{H} =\frac {\mu_{p}-\mu_{n}}{2}. \end{equation} The energy density and pressure of nuclear matter at finite temperature are derived as \begin{widetext} \begin{equation} \varepsilon^{H} = \sum_{i=p,n}\frac{2}{(2\pi)^3} \int \! d^3 \boldsymbol k \sqrt{k^2 + {M^*_i}^2}(f_{i}(k)+\bar{f}_{i}(k))+ \frac{1}{2}m_\sigma^2 \sigma^2 + \frac{b}{3}\,(g_{\sigma }^{} \sigma)^3+ \frac{c}{4}\,(g_{\sigma }^{} \sigma)^4 +\frac{1}{2}m_\delta^2 \delta^2+\frac{1}{2}m_\omega^2 \omega^2 + \frac{1}{2}m_\rho^2 \rho^2 \, , \end{equation} \begin{equation} P^{H} = \sum_{i=p,n} \frac{1}{3}\frac{2}{(2\pi)^3} \int \! d^3 \boldsymbol k \frac{k^2}{\sqrt{k^2 + {M^*_i}^2}}(f_{i}(k)+ \bar{f}_{i}(k))- \frac{1}{2}m_\sigma^2 \sigma^2 - \frac{b}{3}\,(g_{\sigma }^{} \sigma)^3 - \frac{c}{4}\,(g_{\sigma }^{} \sigma)^4 -\frac{1}{2}m_\delta^2 \delta^2+\frac{1}{2}m_\omega^2 \omega^2 + \frac{1}{2}m_\rho^2 \rho^2 \, . \end{equation} \end{widetext} where $f_{i}(k)$ and $\bar{f}_{i}(k)$ are the fermion and antifermion distribution functions for proton and neutron ($i=p,\,n$): \begin{equation} f_{i}(k)=\frac{1}{1+\texttt{exp}\{(E_{i}^{*}(k)-\mu_{i}^{*})/T\}} , \end{equation} \begin{equation} \bar{f}_{i}(k)=\frac{1}{1+\texttt{exp}\{(E_{i}^{*}(k)+\mu_{i}^{*})/T\}}. 
\end{equation} The effective chemical potentials $\mu_{i}^{*}$ are determined by the nucleon densities \begin{equation}\label{hadrondensity} \rho_i= 2 \int \! \frac{d^3 \boldsymbol k}{(2\pi)^{3}}( f_{i}(k)- \bar{f}_{i}(k)). \end{equation} Here the baryon number density is $\rho=\rho_{B}^{H}=\rho_p+\rho_n$ and the isospin density is $\rho_{3}^{H}=\rho_p-\rho_n$. The asymmetry parameter can be defined as \begin{equation} \alpha^{H}\equiv-\frac{\rho_{3}^{H}}{\rho_{B}^{H}}=\frac{\rho_n- \rho_p}{\rho_p+\rho_n}. \end{equation} In this study the parameter set $NL\rho\delta$~\cite{Toro06} will be used to describe the properties of hadron matter. The model parameters are determined by calibrating the properties of symmetric nuclear matter at zero temperature and normal nuclear density. Our parameterizations are also tuned to reproduce collective flows and particle production at higher energies, where some hot and dense matter is probed, see~\cite{Toro09} and refs. therein. We take the PNJL model to describe the quark matter. In the pure gauge theory, the Polyakov-Loop serves as an order parameter for the $\mathbb{Z}_3$ symmetry breaking transition from low to high temperature, i.e. for the transition from a confined to a deconfined phase. In the real world quarks are coupled to the Polyakov-Loop, which explicitly breaks the $\mathbb{Z}_3$ symmetry. No rigorous order parameter is established for the deconfinement phase transition. However, the Polyakov loop can still be used in practice to distinguish a confined phase from a deconfined one.
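As a numerical sanity check of the thermal integrals of the hadronic sector, Eq.~(\ref{hadrondensity}) can be evaluated by simple quadrature; at low temperature it must reduce to the $T=0$ Fermi-gas result $\rho_i=k_F^3/3\pi^2$ with $k_F=\sqrt{\mu_i^{*2}-M_i^{*2}}$. A minimal sketch in natural units (all quantities in MeV; the sample values $M_i^*=700$~MeV and $\mu_i^*=800$~MeV are illustrative only, not fitted model output):

```python
import math

def fermi(E, mu, T):
    # thermal occupation 1/(1 + exp((E - mu)/T)); antiparticles use -mu.
    x = (E - mu)/T
    if x > 60.0:           # guard against overflow in exp()
        return 0.0
    return 1.0/(1.0 + math.exp(x))

def net_density(M_eff, mu_eff, T, kmax=1500.0, n=6000):
    # rho_i = (1/pi^2) \int_0^inf k^2 [f_i(k) - fbar_i(k)] dk   (MeV^3)
    h = kmax/n
    s = 0.0
    for j in range(1, n):
        k = j*h
        E = math.sqrt(k*k + M_eff*M_eff)
        s += k*k*(fermi(E, mu_eff, T) - fermi(E, -mu_eff, T))
    return s*h/math.pi**2

# low-T limit: compare with the degenerate Fermi-gas density
kF = math.sqrt(800.0**2 - 700.0**2)
rho0 = kF**3/(3.0*math.pi**2)
assert abs(net_density(700.0, 800.0, 5.0) - rho0) < 0.01*rho0
```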
The Lagrangian density in the three-flavor PNJL model is taken as \begin{eqnarray}\label{polylagr} \mathcal{L}_{q}&=&\bar{q}(i\gamma^{\mu}D_{\mu}-\hat{m}_{0})q+ G\sum_{k=0}^{8}\bigg[(\bar{q}\lambda_{k}q)^{2}+ (\bar{q}i\gamma_{5}\lambda_{k}q)^{2}\bigg]\nonumber \\ &&-K\bigg[\texttt{det}_{f}(\bar{q}(1+\gamma_{5})q)+\texttt{det}_{f} (\bar{q}(1-\gamma_{5})q)\bigg]\nonumber \\ &&-\mathcal{U}(\Phi[A],\bar{\Phi}[A],T), \end{eqnarray} where $q$ denotes the quark fields with three flavors, $u,\ d$, and $s$, and three colors; $\hat{m}_{0}=\texttt{diag}(m_{u},\ m_{d},\ m_{s})$ in flavor space; $G$ and $K$ are the four-point and six-point coupling constants, respectively. The four-point interaction term in the Lagrangian keeps the $SU_{V}(3)\times SU_{A}(3)\times U_{V}(1)\times U_{A}(1)$ symmetry, while the 't Hooft six-point interaction term breaks the $U_{A}(1)$ symmetry. The covariant derivative in the Lagrangian density is defined as $D_\mu=\partial_\mu-iA_\mu$. The gluon background field $A_\mu=\delta_\mu^0A_0$ is supposed to be homogeneous and static, with $A_0=g\mathcal{A}_0^\alpha \frac{\lambda^\alpha}{2}$, where $\frac{\lambda^\alpha}{2}$ are $SU(3)$ color generators. The effective potential $\mathcal{U}(\Phi[A],\bar{\Phi}[A],T)$ is expressed in terms of the traced Polyakov loop $\Phi=(\mathrm{Tr}_c L)/N_C$ and its conjugate $\bar{\Phi}=(\mathrm{Tr}_c L^\dag)/N_C$. The Polyakov loop $L$ is a matrix in color space \begin{equation} L(\vec{x})=\mathcal{P} \exp\bigg[i\int_0^\beta d\tau A_4 (\vec{x},\tau) \bigg], \end{equation} where $\beta=1/T$ is the inverse of the temperature and $A_4=iA_0$. The Polyakov loop can be expressed in a more intuitive physical form as \begin{equation}\label{poly} \Phi = \exp{[-\beta F_q(\vec{x})]} \end{equation} where $F_q$ is the free energy required to add an isolated quark to the system. So it will go from zero in the confined phase up to a finite value when deconfinement is reached \cite{McL81}.
Different effective potentials are adopted in the literature~\cite{Ratti06,Robner07,Fukushima08}. The logarithmic one given in~\cite{Robner07} will be used in our calculation, which reproduces well the data obtained in lattice calculations. The corresponding effective potential reads \begin{eqnarray} \frac{\mathcal{U}(\Phi,\bar{\Phi},T)}{T^4}&=&-\frac{a(T)}{2}\bar{\Phi}\Phi \\ &&+b(T)\mathrm{ln}\bigg[1-6\bar{\Phi}\Phi+4(\bar{\Phi}^3+\Phi^3)-3(\bar{\Phi}\Phi)^2\bigg], \nonumber \end{eqnarray} where \begin{equation} a(T)=a_0+a_1\bigg(\frac{T_0}{T}\bigg)+a_2\bigg(\frac{T_0}{T}\bigg)^2, \end{equation} and \begin{equation} b(T)=b_3\bigg(\frac{T_0}{T}\bigg)^3. \end{equation} We note that in this version of the PNJL model the direct coupling between the quark condensates and the Polyakov loop is only via the covariant derivative in the Lagrangian density Eq.(\ref{polylagr}). The parameters $a_i$, $b_i$ are precisely fitted according to the lattice results of QCD thermodynamics in the pure gauge sector. $T_0$ is found to be 270 MeV as the critical temperature for the deconfinement phase transition of the gluon part at zero chemical potential~\cite{Fukugita90}. When fermion fields are included, a rescaling of $T_0$ is usually implemented to obtain consistent results between the model calculation and the full lattice simulation, which gives a critical phase-transition temperature $T_c=173\pm8$ MeV~\cite{Karsch01,Karsch02,Kaczmarek05}. In this study we rescale $T_0=210$\,MeV so as to produce $T_c=171$ MeV for the phase transition temperature at zero chemical potential. In the mean field approximation, quarks can be seen as free quasiparticles with constituent masses $M_i$, and the dynamical quark masses~(gap equations) are obtained as \begin{equation} M_{i}=m_{i}-4G\phi_i+2K\phi_j\phi_k\ \ \ \ \ \ (i\neq j\neq k), \label{mass} \end{equation} with $i~=~u, d, s$, where $\phi_i$ stands for the quark condensate.
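The behavior of this effective potential is easy to explore numerically. The sketch below (using the parameter values $a_0=3.51$, $a_1=-2.47$, $a_2=15.2$, $b_3=-1.75$ of \cite{Robner07} and the rescaled $T_0=210$~MeV adopted here; a coarse scan with $\Phi=\bar{\Phi}$ is enough for illustration) confirms that in the pure gauge sector the minimum sits at $\Phi=0$ well below $T_0$ and moves to a finite $\Phi$ above it:

```python
import math

A0, A1, A2, B3, T0 = 3.51, -2.47, 15.2, -1.75, 210.0  # T0 in MeV

def U_over_T4(Phi, Phibar, T):
    # logarithmic Polyakov-loop potential U(Phi, Phibar, T)/T^4
    x = T0/T
    a = A0 + A1*x + A2*x*x
    b = B3*x**3
    arg = (1 - 6*Phibar*Phi + 4*(Phibar**3 + Phi**3)
           - 3*(Phibar*Phi)**2)
    return -0.5*a*Phibar*Phi + b*math.log(arg)

def Phi_min(T):
    # coarse scan over Phi = Phibar in [0, 1); the log argument equals
    # (1 - p)^3 (1 + 3p) there, so it stays positive
    grid = [i/1000.0 for i in range(999)]
    return min(grid, key=lambda p: U_over_T4(p, p, T))

assert Phi_min(150.0) < 0.05   # confined: minimum at Phi ~ 0
assert Phi_min(300.0) > 0.3    # deconfined: finite Phi
```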
The thermodynamic potential of the PNJL model at the mean field level is expressed as \begin{eqnarray} \Omega&=&\mathcal{U}(\bar{\Phi}, \Phi, T)+2G\left({\phi_{u}}^{2} +{\phi_{d}}^{2}+{\phi_{s}}^{2}\right)-4K\phi_{u}\,\phi_{d}\,\phi_{s}\nonumber \\ &&-T \sum_n\int \frac{\mathrm{d}^{3}p}{(2\pi)^{3}}\mathrm{Trln}\frac{S_i^{-1}(i\omega_n,\vec{p})}{T}. \end{eqnarray} Here $S_i^{-1}(p)=-(p\!\!\slash-M_i+\gamma_0(\mu_i-iA_4))$, with $\mu_i$ the quark chemical potential, is the inverse fermion propagator in the background field $A_4$, and the trace has to be taken in color, flavor, and Dirac space. After summing over the fermion Matsubara frequencies, $p^0=i\omega_n=(2n+1)\pi T$, the thermodynamic potential can be written as \begin{widetext} \begin{eqnarray} \Omega&=&\mathcal{U}(\bar{\Phi}, \Phi, T)+2G\left({\phi_{u}}^{2} +{\phi_{d}}^{2}+{\phi_{s}}^{2}\right)-4K\phi_{u}\,\phi_{d}\,\phi_{s}-2\int_\Lambda \frac{\mathrm{d}^{3}p}{(2\pi)^{3}}3(E_u+E_d+E_s) \nonumber \\ &&-2T \sum_{u,d,s}\int \frac{\mathrm{d}^{3}p}{(2\pi)^{3}} \bigg[\mathrm{ln}(1+3\Phi e^{-(E_i-\mu_i)/T}+3\bar{\Phi} e^{-2(E_i-\mu_i)/T}+e^{-3(E_i-\mu_i)/T}) \bigg]\nonumber \\ &&-2T \sum_{u,d,s}\int \frac{\mathrm{d}^{3}p}{(2\pi)^{3}} \bigg[\mathrm{ln}(1+3\bar{\Phi} e^{-(E_i+\mu_i)/T}+3\Phi e^{-2(E_i+\mu_i)/T}+e^{-3(E_i+\mu_i)/T}) \bigg], \end{eqnarray} \end{widetext} where $E_i=\sqrt{\vec{p}^{\,2}+M_i^2}$ is the energy of quark flavor $i$. We remark on some interesting differences with respect to the thermodynamical potential derived within the pure NJL model, see \cite{Buballa05,Shao11}. Apart from the presence of the effective potential $\mathcal{U}(\bar{\Phi}, \Phi, T)$, the Polyakov loop acts mostly on the quark-antiquark distribution functions, in the direction of a reduction, on the way to confinement. This largely modifies the quark pressure, as seen in the calculations.
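The statement that the Polyakov loop acts on the distribution functions can be made concrete: differentiating the logarithmic terms of $\Omega$ with respect to $\mu_i$ yields an effective single-quark occupation number. A minimal sketch (with $x=(E_i-\mu_i)/T$; this is an illustration of the limiting cases, not an additional ingredient of the model):

```python
import math

def f_q(x, Phi, Phibar):
    # effective quark occupancy from the log terms of Omega; x = (E - mu)/T.
    # Phi = Phibar = 1 recovers the Fermi-Dirac function, while
    # Phi = Phibar = 0 leaves only the three-quark (colorless) term.
    e1, e2, e3 = math.exp(-x), math.exp(-2*x), math.exp(-3*x)
    num = Phi*e1 + 2.0*Phibar*e2 + e3
    return num/(1.0 + 3.0*Phi*e1 + 3.0*Phibar*e2 + e3)

x = 1.3
fd = 1.0/(math.exp(x) + 1.0)
assert abs(f_q(x, 1.0, 1.0) - fd) < 1e-12                       # deconfined
assert abs(f_q(x, 0.0, 0.0) - 1.0/(math.exp(3*x) + 1.0)) < 1e-12  # confined
assert f_q(x, 0.2, 0.2) < fd    # small Phi suppresses the occupancy
```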
Moreover, in spite of the minimal coupling introduced in the Lagrangian Eq.(\ref{polylagr}) only through the covariant derivative, the quark condensates will be strongly affected by the Polyakov loop via the modified $q,~\bar{q}$ distribution functions. This will also be clearly observed in the comparison of NJL and PNJL phase diagrams. The values of $\phi_u, \phi_d, \phi_s, \Phi$ and $\bar{\Phi}$ are determined by minimizing the thermodynamical potential \begin{equation} \frac{\partial\Omega}{\partial\phi_u}=\frac{\partial\Omega}{\partial\phi_d}=\frac{\partial\Omega}{\partial\phi_s}=\frac{\partial\Omega}{\partial\Phi}=\frac{\partial\Omega}{\partial\bar\Phi}=0. \end{equation} All the thermodynamic quantities relevant to the bulk properties of quark matter can be obtained from $\Omega$. In particular, the pressure and energy density should vanish in the vacuum. The baryon (isospin) density and baryon (isospin) chemical potential in the quark phase are defined as follows \begin{equation} \rho_{B}^Q=\frac{1}{3}(\rho_u+\rho_d),\ \ \ \ \rho_{3}^Q=\rho_u-\rho_d, \end{equation} and \begin{equation} \mu_{B}^Q=\frac{3}{2}(\mu_u+\mu_d),\ \ \ \ \mu_{3}^Q=\frac{1}{2}(\mu_u-\mu_d). \end{equation} The corresponding asymmetry parameter of the quark phase is defined as \begin{equation} \alpha^{Q}\equiv-\frac{\rho_{3}^{Q}}{\rho_{B}^{Q}}=3\frac{\rho_d-\rho_u} {\rho_u+\rho_d}. \end{equation} As an effective model, the (P)NJL model is not renormalizable, so a cut-off $\Lambda$ is implemented in 3-momentum space for the divergent integrals. We take the model parameters: $\Lambda=603.2$ MeV, $G\Lambda^{2}=1.835$, $K\Lambda^{5}=12.36$, $m_{u,d}=5.5$ and $m_{s}=140.7$ MeV, determined by fitting $f_{\pi},\ M_{\pi},\ m_{K}$ and $\ m_{\eta}$ to their experimental values~\cite{Rehberg95}. The coefficients in the Polyakov effective potential are listed in Table \ref{tab:1}.
\begin{table}[ht] \tabcolsep 0pt \caption{\label{tab:1}Parameters of the Polyakov-loop effective potential given in~\cite{Robner07}} \setlength{\tabcolsep}{2.5pt} \begin{center} \def\temptablewidth{0.58\textwidth} \begin{tabular}{c c c c} \hline \hline {$a_0$} & $a_1$ & $a_2$ & $b_3$ \\ \hline $ 3.51$ & -2.47 & 15.2 & -1.75 \\ \hline \hline \end{tabular} \end{center} \end{table} So far we have introduced the description of the hadronic and quark phases by the RMF hadron model and the PNJL quark model, respectively. The key point in the Two-EoS model is to construct the phase transition from hadronic to quark matter. As mentioned above, the two phases are connected by the Gibbs criteria, i.e., thermal, chemical and mechanical equilibrium are required. For the hadron--quark-gluon phase transition relevant to heavy-ion collisions of duration about $10^{-22}$~sec ($10-20$~fm/$c$), thermal equilibration is only possible for strongly interacting processes, where baryon number and isospin are conserved. Thus strange-antistrange quark pairs may be abundant, but the net strange quark number should be zero before the beginning of hadronization in the expansion stage~\cite{Greiner87}, which can be approximately realized by requiring $\mu_s=0$~(hadronization is beyond the scope of this study). Based on the conservation of baryon number and isospin during the strong interaction, the Gibbs conditions describing the phase transition can be expressed by \begin{eqnarray} & &\mu_B^H(\rho_B^{},\rho_3^{},T)=\mu_B^Q(\rho_B^{},\rho_3^{},T)\nonumber\\ & &\mu_3^H(\rho_B^{},\rho_3^{},T)=\mu_3^Q(\rho_B^{},\rho_3^{},T)\nonumber\\ & &P^H(\rho_B^{},\rho_3^{},T)=P^Q(\rho_B^{},\rho_3^{},T), \end{eqnarray} where $\rho_B^{}=(1-\chi)\rho_B^{H}+\chi \rho_B^{Q}$ and $\rho_3^{}=(1-\chi)\rho_3^{H}+\chi \rho_3^{Q}$ are the total baryon density and isospin density of the mixed phase, respectively, and $\chi$ is the fraction of quark matter.
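For a given quark fraction $\chi$, the mixed-phase densities are simple linear combinations of the hadron and quark values; a minimal helper (the numbers in the check are purely illustrative, not model output) reads:

```python
def mixed_densities(chi, rhoB_H, rho3_H, rhoB_Q, rho3_Q):
    # Gibbs mixture: rho = (1 - chi) rho^H + chi rho^Q for baryon
    # and isospin densities; alpha = -rho3/rhoB as in the text
    rhoB = (1.0 - chi)*rhoB_H + chi*rhoB_Q
    rho3 = (1.0 - chi)*rho3_H + chi*rho3_Q
    alpha = -rho3/rhoB
    return rhoB, rho3, alpha

# endpoints: chi = 0 is pure hadron matter, chi = 1 pure quark matter
rB, r3, a = mixed_densities(0.25, 0.30, -0.06, 0.90, -0.12)
assert abs(rB - (0.75*0.30 + 0.25*0.90)) < 1e-12
assert abs(r3 - (0.75*(-0.06) + 0.25*(-0.12))) < 1e-12
```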
The global asymmetry parameter $\alpha$ for the mixed phase is \begin{equation} \alpha\equiv-\frac{\rho_{3}^{}}{\rho_{B}^{}}= -\frac{(1-\chi)\rho_3^{H}+ \chi \rho_3^{Q}}{(1-\chi)\rho_B^{H}+\chi \rho_B^{Q}}=\alpha^H\mid_{\chi=0}^{} = \alpha^Q\mid_{\chi=1}^{}, \end{equation} which is determined by the heavy-ion source formed in experiments. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.29]{Pressure-NJL.eps} \caption{\label{fig:Pressure-NJL}Pressure of quark matter as a function of baryon density at different temperatures in the NJL model. Isospin symmetric matter. In the shaded area we show also the Hadron (NL) curves in the temperature region between 75 and 100 MeV.} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.29]{Pressure-PNJL.eps} \caption{\label{fig:Pressure-PNJL}Pressure of quark matter as a function of baryon density at different temperatures in the PNJL model. Isospin symmetric matter. In the shaded area we show also the Hadron (NL) curves in the temperature region between 150 and 170 MeV.} \end{center} \end{figure} \section{Expected effect of the confinement dynamics} Before showing detailed phase diagram results within the Two-EoS approach, it is very instructive to analyze the effects of the chiral and (de)confinement dynamics in the pure quark sector. In order to understand the underlying physics, we will show separately the results in the NJL, with the same parameters given before, and the PNJL model, for isospin symmetric matter. In Figs.~\ref{fig:Pressure-NJL},~\ref{fig:Pressure-PNJL} we plot the pressure of isospin symmetric quark matter as a function of baryon density at different temperatures for the NJL and PNJL models, respectively. In this calculation isospin symmetric matter with $\mu=\mu_u=\mu_d$ and $\mu_s=0$ is considered and $\phi_l$ stands for the chiral condensate of $u, d$ quarks. From the two figures, we can see that the pressure has a local maximum and a local minimum at low temperature.
The local extrema disappear with increasing temperature. The temperature at which the two local extrema disappear corresponds to the Critical-End-Point ($CEP$) of the first order chiral transition; for a more detailed discussion please refer to \cite{Buballa05,Costa10}. It is interesting to note that the critical temperature of the chiral transition is rather different in the two cases, around 70 MeV in the NJL and around 130 MeV in the PNJL, while the density region is not much affected. This is due to the fact that for a fixed baryon density (or chemical potential) the NJL presents a much larger pressure at a given temperature, as clearly seen from the two figures \ref{fig:Pressure-NJL},~\ref{fig:Pressure-PNJL} \cite{Hansen07}. This is a nice indication that when we have a coupling to deconfinement, even in the minimal way included here, the quark pressure at finite temperature is reduced, since the quark degrees of freedom start to decrease. All that will imply important differences at higher temperatures, since above the chiral restoration the quark pressure will rapidly increase, reaching an end-point in the Two-EoS approach where the matching to the hadron pressure is no longer possible. This will happen at different points of the ($T,\mu$), ($T,\rho$) planes for the Hadron-NJL \cite{Shao11} and the Hadron-PNJL, and higher temperatures will be required in the PNJL case. In fact this can also be clearly seen from Figs.~\ref{fig:Pressure-NJL} and ~\ref{fig:Pressure-PNJL}, of the NJL- and PNJL-pressures, where we plot also the corresponding curves of the hadronic EoS in the end-point regions (shaded area). The Gibbs (Maxwell) conditions have no solution with decreasing density (chemical potential) and increasing temperature if we encounter a crossing of the hadron and quark curves in the $T-\rho_B$ ($T-\mu_B$) plane, with the quark pressure becoming larger than the hadron one.
We see that this is happening for $T \simeq 75$~MeV and $\rho/\rho_0 \simeq 1.8$ in the NJL case and for $T \simeq 170$~MeV and $\rho/\rho_0 \simeq 1.6$ in the PNJL quark picture. In conclusion, due to the noticeable quark pressure difference at finite temperatures, besides the Critical-End-Points, we expect in general rather different phase diagrams given by the Hadron-NJL and Hadron-PNJL models. This will be seen in Section V, Figs.~\ref{fig:T-Mu-with-delta-alpha=0} and \ref{fig:T-Mu-with-delta-alpha=02}. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.3]{phil-Phi-T.eps} \caption{\label{fig:phil-Phi-T} Chiral condensate $\phi_l$ (normalized to the vacuum value) and Polyakov-Loop $\Phi$ as functions of temperature for various values of the quark chemical potential. Isospin symmetric matter.} \end{center} \end{figure} \section{PNJL Phase diagram in the quark sector} In order to better understand the effects of the coupling between quark condensates and Polyakov loop and also to compare with the Two-EoS results, we discuss here also the phase diagram in the pure quark sector obtained from the PNJL model. We plot in Fig.~\ref{fig:phil-Phi-T} the temperature evolution of the chiral condensate $\phi_l$ and the Polyakov-Loop $\Phi$ for various values of the quark chemical potential. $\Phi$ and $\bar{\Phi}$ have the same values at zero chemical potential and their difference is very small at finite chemical potential, hence we only present the results of $\Phi$ in Fig.~\ref{fig:phil-Phi-T} and later in the discussion. Firstly, we can see that the chiral condensate and Polyakov loop $\Phi$ vary continuously at $\mu$~=~0 and 200 MeV, and there exist sharp decreases (increases) at high temperature indicating the onset of the chiral (deconfinement) phase transitions. These characteristics show that the corresponding chiral and deconfinement phase transitions are crossovers for small chemical potential at high temperature.
At variance, for large chemical potentials, e.g., $\mu=350$~MeV, the chiral condensate varies discontinuously with temperature, which indicates the presence of a first order chiral phase transition, as already seen in the pure NJL approach, although at much lower temperature \cite{Buballa05}, as discussed in the previous Section. The Polyakov loop always shows a continuous behavior, indicating that we have only crossover transitions. The jump observed for the dash-dotted curve, corresponding to a $\mu=350$~MeV chemical potential, is just an effect of the coupling to the sharp variation of the quark condensates at the first order chiral transition. Moreover, this happens in a region of very small values of the $\Phi$ field at lower temperatures. As a matter of fact, such a discontinuity disappears in the results at $\mu= 400$~MeV, i.e. above the chiral transition; see the dashed curves for both the $\phi$ and $\Phi$ fields. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.3]{Phase-Transition-PNJL.eps} \caption{\label{fig:Phase-Transition-PNJ} Phase diagram of the PNJL model. The corresponding chiral phase transition for the pure NJL model is also shown. Isospin symmetric matter.} \end{center} \end{figure} Finally, in Fig.~\ref{fig:Phase-Transition-PNJ} we plot the phase diagram of the PNJL model in the $T-\mu_q$ plane (always for isospin symmetric matter). The phase transition curves are obtained by requiring ${\partial \phi_l}/{\partial T}$ and ${\partial \Phi}/{\partial T}$ to take their maximum values. For the deconfinement phase transition, the use of the maximum of ${\partial \Phi}/{\partial T}$ as the phase-transition tracer is a good choice when $\mu$ is not too large. In fact, we see from Fig.~\ref{fig:phil-Phi-T} a sharp increase of $\Phi$ for $\mu=0$ and 200 MeV. However, with increasing chemical potential, although we are still able to observe the maximum of ${\partial \Phi}/{\partial T}$, the width of the maximum increases. 
The peaks of ${\partial \Phi}/{\partial T}$ become smoother, so that this is no longer a well defined phase-transition indicator when $\mu$ is large. Therefore, some authors take $\Phi=1/2$ as the phase transition parameter \cite{Fukushima08,Sakai10,Sakai11}. In conclusion, from Fig.~\ref{fig:phil-Phi-T} we can see that the chiral phase transition is continuous at high temperature and relatively small chemical potential, while a first order phase transition takes place at low temperature and larger chemical potential. The {\it Critical-End-Point} ($CEP$) of the chiral transition appears at $(132.2,~296.6)$~MeV in the $T-\mu_q$ plane, in agreement with similar calculations \cite{Sakai10}. At variance, the deconfinement phase transition is always a continuous crossover in the PNJL model, but the peak of ${\partial \Phi}/{\partial T}$ becomes smoother and smoother with increasing baryon chemical potential. In addition, at large chemical potential, a chirally restored but still confined matter, the \emph{quarkyonic matter}, can be realized in the PNJL model. All this is reported in Fig.~\ref{fig:Phase-Transition-PNJ}, where we plot the full phase diagram of the PNJL approach. Here we give a short discussion of the coincidence of the chiral and deconfinement phase transitions as well as of the presence of quarkyonic matter. The temperature dependence of the chiral condensate and of the Polyakov loop of Fig.~\ref{fig:phil-Phi-T}, as well as the PNJL phase diagram of Fig.~\ref{fig:Phase-Transition-PNJ}, are obtained with the rescaled parameter $T_0=210$~MeV. The coincidence of chiral restoration and deconfinement takes place at about $\mu_q=290$ MeV. If we take $T_0=270$~MeV, the approximate coincidence, with a difference of the phase-transition temperatures of less than 10 MeV, moves down to $\mu\simeq0$. In any case, there is only one crossing point of the two phase transitions. 
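As an aside, the tracer just described (the peak of $\partial \Phi/\partial T$) and its loss of resolution can be illustrated with a minimal numerical sketch. The crossover profile below is a generic $\tanh$ shape with illustrative parameters, not the actual PNJL solution:

```python
import math

def crossover_temperature(T, Phi):
    """Locate the crossover as the peak of dPhi/dT, the tracer used in the text."""
    dPhi = [(Phi[i + 1] - Phi[i - 1]) / (T[i + 1] - T[i - 1])
            for i in range(1, len(T) - 1)]
    i_max = max(range(len(dPhi)), key=dPhi.__getitem__)
    return T[i_max + 1], dPhi[i_max]

# Toy Polyakov-loop profile: a smooth crossover centered at Tc with width w
# (illustrative numbers in MeV, not PNJL output).
Tc, w = 200.0, 30.0
T = [50.0 + 0.1 * i for i in range(3001)]
Phi = [0.5 * (1.0 + math.tanh((t - Tc) / w)) for t in T]
T_peak, height = crossover_temperature(T, Phi)

# A broader crossover (larger w, mimicking larger chemical potential) gives a
# lower, wider peak: this is why the tracer becomes ill defined at large mu.
Phi_broad = [0.5 * (1.0 + math.tanh((t - Tc) / (3 * w))) for t in T]
_, height_broad = crossover_temperature(T, Phi_broad)
```

The peak position reproduces the crossover temperature, while the broadened profile yields a lower, flatter maximum, mimicking the ambiguity of the tracer at large $\mu$.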
Up to now, the relation between chiral-symmetry restoration and the deconfinement phase transition is still an open question. It is possible that the coincidence of the chiral and deconfinement phase transitions takes place over a wider range of chemical potentials. Such a coincidence has indeed been recently realized by considering a larger coupling ($entanglement$) between the chiral condensate and the Polyakov loop, with an explicit $\Phi$-dependence of the condensate coupling $G(\Phi)$ and a chemical potential dependent $T_0$~\cite{Sakai10,Sakai11}. In the same Fig.~\ref{fig:Phase-Transition-PNJ} we also report the chiral transition curve for the pure NJL model (same parameters). We note that the coupling between the chiral condensates and the Polyakov-Loop fields ($\Phi,~\bar{\Phi}$) mostly affects the temperature of the chiral $CEP$, as expected from the pressure discussion of the previous Section. From Fig.~\ref{fig:Phase-Transition-PNJ} we can see that the deconfinement phase-transition temperature is still high at large chemical potential, and so the region of quarkyonic matter appears very wide. On the other hand, the signature of a deconfinement transition disappears at large chemical potentials and lower temperatures. Because of the lack of lattice QCD data at large real chemical potentials, more investigations are needed to study the physics in this range. The results in \cite{Sakai11} also show that the range of quarkyonic matter shrinks when a $\mu$-dependent $T_0$ and/or a larger entanglement between the quark condensate and the Polyakov loop is considered. We remark that this ($T-\mu$) zone represents just the region of the nuclear matter phase diagram possibly reached in the collision of heavy ions at intermediate energies, and so it is of large interest to perform Two-EoS predictions, which should have a good connection to the more fundamental results of effective quark models. This is the subject of the next Section. 
\section{Hadron-Quark Phase Transitions} In the following we discuss the phase diagrams obtained in the Two-EoS model, i.e., explicitly considering a hadronic EoS with the parameter set $NL$ for symmetric matter and $NL\rho \delta$ for asymmetric matter at low density and chemical potential~\cite{BaranPR,Toro06}. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.3]{T-RHO-with-delta-alpha=0.eps} \caption{\label{fig:T-RHO-with-delta-alpha=0}Phase diagram in $T-\rho_B^{}$ plane in the Two-EoS model for symmetric matter. } \end{center} \end{figure} We first present the phase transition from the hadronic to the deconfined quark phase in the $T-\rho_B^{}$ plane, in Fig.~\ref{fig:T-RHO-with-delta-alpha=0} for symmetric matter and in Fig.~\ref{fig:T-RHO-with-delta-alpha=02} for asymmetric matter with the global asymmetry parameter $\alpha=0.2$. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.3]{T-RHO-with-delta-alpha=02.eps} \caption{\label{fig:T-RHO-with-delta-alpha=02}Phase diagram in $T-\rho_B^{}$ plane in the Two-EoS model for asymmetric matter with the global asymmetry parameter $\alpha=0.2$. $\chi$ represents the fraction of quark matter.} \end{center} \end{figure} For symmetric matter at fixed $T$, the first order phase transition takes place with the same pressure and $\mu_B^{}$ in both phases, but with a jump from $\rho_B^H$ to $\rho_B^Q$, as shown in Fig.~\ref{fig:T-RHO-with-delta-alpha=0}. In the mixed phase, the pressures of both phases remain unchanged and $\alpha=\alpha^H_{}=\alpha^Q_{}=0$ for any quark fraction $\chi$. These features are quite different for the mixed phase in isospin asymmetric matter. 
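The Gibbs (Maxwell) matching described above can be sketched numerically. The two pressure curves below are toy, dimensionless forms (not the actual Walecka/PNJL equations of state), chosen only to show how equal pressures at equal chemical potential fix the transition point, with a jump in the density $\rho = \partial P/\partial \mu$:

```python
# Toy Maxwell construction in the Two-EoS spirit: at fixed T the transition
# chemical potential is fixed by equal pressures, P_H(mu) = P_Q(mu); the
# densities rho = dP/dmu then jump across the transition.

def P_H(mu):          # hadronic pressure (toy, dimensionless units)
    return 0.5 * mu**2

def P_Q(mu):          # quark pressure with a bag-like constant (toy)
    return 0.25 * mu**4 - 1.0

def density(P, mu, h=1e-6):
    """rho = dP/dmu by central difference."""
    return (P(mu + h) - P(mu - h)) / (2 * h)

# Bisection for the crossing P_Q - P_H = 0 (the quark curve is steeper there).
lo, hi = 1.0, 3.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if P_Q(mid) - P_H(mid) > 0:
        hi = mid
    else:
        lo = mid
mu_star = 0.5 * (lo + hi)

rho_H = density(P_H, mu_star)   # density on the hadron side of the jump
rho_Q = density(P_Q, mu_star)   # density on the quark side, rho_Q > rho_H
```

At $\mu_\ast$ the two pressures coincide while the density jumps from $\rho_H$ to $\rho_Q$, which is exactly the structure of the first order transition discussed in the text.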
As already noted in \cite{Shao11}, where the NJL quark EoS has been used, also in the PNJL case we see a clear Isospin Distillation effect, i.e., a strong enhancement of the isospin asymmetry of the quark component inside the mixed phase, as reported in Fig.~\ref{fig:kai-alphaQ-NLrho-with-delta-alpha=02}, where the asymmetry parameters of the two components are plotted vs. the quark fraction $\chi$. As a consequence the pressure in the mixed phase keeps rising with $\chi$, more rapidly for quark concentrations below $50~\%$ \cite{Shao11}. From Fig.~\ref{fig:kai-alphaQ-NLrho-with-delta-alpha=02} we remark that this isospin enrichment of the quark phase is rather robust against increasing temperature. This is important since color pairing correlations at low temperatures will decrease symmetry energy effects \cite{Pagliara10}. We have to note that such a large isospin distillation effect is due to the large difference of the symmetry terms in the two phases, mainly because none of the quark effective models used here has explicit isovector fields in the interaction \cite{Torohq09}. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.3]{kai-alphaQ-NLrho-with-delta-alpha=02.eps} \caption{\label{fig:kai-alphaQ-NLrho-with-delta-alpha=02} The behavior of the local asymmetry parameters $\alpha^H$ and $\alpha^Q$ in the mixed phase for several values of the temperature. The parameter set $NL\rho\delta$ is used in the calculation.} \end{center} \end{figure} Such behavior of the local asymmetry parameters will possibly produce some observable signals in the subsequent hadronization during the expansion. We can expect an inverse trend in the emission of neutron rich clusters, as well as an enhancement of the $\pi^-/\pi^+$, $K^0/K^+$ yield ratios from the high density n-rich regions which undergo the transition. Besides, an enhancement of the production of isospin-rich resonances, and of their subsequent decays, may be found. For more details one can refer to \cite{Torohq09, Shao11}. 
Moreover, an evident feature of Fig.~\ref{fig:T-RHO-with-delta-alpha=02} is that the onset density of the hadron-quark phase transition for asymmetric matter is much lower than that of the symmetric one, and therefore it will be easier to probe in heavy-ion collision experiments. We plot the $T-\mu_B^{}$ phase diagrams in Fig.~\ref{fig:T-Mu-with-delta-alpha=0} for symmetric matter and in Fig.~\ref{fig:T-Mu-with-delta-alpha=02} for asymmetric matter. Fig.~\ref{fig:T-Mu-with-delta-alpha=0} clearly shows that there is only one phase-transition curve in the $T-\mu_B^{}$ plane: the phase transition curve is independent of the quark fraction $\chi$. However, for asymmetric matter the phase transition curve varies with the quark fraction $\chi$. The phase transition curves in Fig.~\ref{fig:T-Mu-with-delta-alpha=02} are obtained with $\chi=0$ and $1$, representing the beginning and the end of the hadron-quark phase transition, respectively. In Fig.~\ref{fig:T-Mu-with-delta-alpha=0} and Fig.~\ref{fig:T-Mu-with-delta-alpha=02} we also plot the phase transition curves of the Hadron-NJL model. For the NJL model, with only chiral dynamics, no physical solution exists when the temperature is higher than $\sim80$ MeV. The corresponding temperature is enhanced to about $\sim166$ MeV in the Hadron-PNJL model, which is closer to the phase transition (crossover) temperature given by full lattice calculations at zero or small chemical potential~\cite{Karsch01,Karsch02,Kaczmarek05}. In this sense the Hadron-PNJL model gives significantly different results and certainly represents an improvement with respect to the Hadron-NJL scheme of ref. \cite{Shao11}. From Fig.~\ref{fig:T-Mu-with-delta-alpha=02} we remark that in both cases the region around the $Critical-End-Points$ is not affected by isospin asymmetry contributions, which are relevant at lower temperatures and larger chemical potentials. 
\begin{figure}[htbp] \begin{center} \includegraphics[scale=0.3]{T-Mu-with-delta-alpha=0.eps} \caption{\label{fig:T-Mu-with-delta-alpha=0}Phase diagram in $T-\mu_B^{}$ plane for symmetric matter.} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.3]{T-Mu-with-delta-alpha=02.eps} \caption{\label{fig:T-Mu-with-delta-alpha=02}Phase diagram in $T-\mu_B^{}$ plane for asymmetric matter with the global asymmetry parameter $\alpha=0.2$.} \end{center} \end{figure} From the detailed discussions of the previous two Sections we can now clearly understand the large difference between the Hadron-NJL and Hadron-PNJL phase transitions and the important role of the confinement dynamics. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.3]{T-Mu.eps} \caption{\label{fig:T-Mu}Phase diagrams of the PNJL model and the Two-EoS model (dashed curve). The shaded area is just a guide for the eye.} \end{center} \end{figure} Finally, in Fig.~\ref{fig:T-Mu} we present together the phase diagrams obtained with the PNJL model and with the Hadron-PNJL model. We find that the deconfinement phase transition curve of the PNJL model is close to that obtained in the Hadron-PNJL model at high temperature and intermediate chemical potential. At larger chemical potential, the deconfinement phase transition curve of the PNJL model still lies at high temperature. On the other hand, from the previous Section we have seen that the deconfinement order parameter $\Phi$ cannot describe well the phase transition at larger chemical potential and lower temperatures. There we must rely on the predictions of the Two-EoS approach, which in fact shows a good connection to the more reliable results of the PNJL quark model at high temperature and small or vanishing chemical potential. 
The Two-EoS Hadron-(P)NJL model also shows that the phase transition at low temperature takes place at much larger chemical potential, consistent with the expectation of a more relevant contribution from the hadron sector \cite{Liubo11}. We notice that at $T=0$ there is no difference between the Hadron-NJL and Hadron-PNJL models. This is due to the fact that $\Phi$ has no dependence on $\mu_B$, therefore it vanishes and the PNJL reduces to the NJL. This may cast some doubts on the reliability of the present calculations at $T=0$ and large $\mu_B$. However, our main interest is the region at finite $T$ ($T \simeq 50-100$~MeV) and $\mu_B$ ($\mu_B \simeq 1000-2000$~MeV) that can be reached in Heavy Ion Collisions at relativistic energies. Moreover, the results obtained with the Hadron-PNJL model at high $T$, small $\mu_B^{}$ and at low $T$, large $\mu_B^{}$ may be improved by considering a stronger entanglement between the chiral condensate and the Polyakov loop, and a chemical potential dependent $T_0$ \cite{Sakai10,Sakai11}. The relevant investigation will be performed in a further study. In any case, since we lack reliable lattice data at large chemical potential, more theoretical work is in general encouraged. \section{Summary} In this study, the hadron-quark phase transition is investigated in the Two-EoS model. The nonlinear Walecka model and the PNJL (NJL) model are used to describe hadron matter and quark matter, respectively. We follow the Gibbs criteria to construct the mixed phase, with baryon number and isospin conservation, likely reached during the hot and dense phase formed in heavy-ion collisions at intermediate energies. The parameters of both models are well fitted to give a good description of the properties of nuclear matter (even isospin asymmetric), at saturation as well as at higher baryon densities, and of lattice data at high temperature with zero/small chemical potential. 
The phase diagrams for both symmetric and asymmetric matter are explored in both the $T-\rho_B^{}$ and $T-\mu_B^{}$ planes. In both Hadron-(P)NJL calculations we get a first order phase transition with a Critical-End-Point at finite temperature and chemical potential. In the PNJL case the $CEP$ is shifted to larger temperature and smaller chemical potential, to the point ($166,600$)~MeV in the ($T, \mu_B$) plane. This appears to be a clear indication of a decrease of the quark pressure when confinement is accounted for. Such a result is particularly interesting since the $CEP$ is now in the region of $\mu_q/T_c~\simeq~1$ (where $\mu_q$ is the quark chemical potential), and so it could be reached with some confidence by complete lattice-QCD calculations. Another interesting result is that isospin effects are almost negligible when we approach the $CEP$. At variance, the calculation shows that the onset density for asymmetric matter is lower than that for symmetric matter. Moreover, in the mixed phase of asymmetric matter, the decrease of the local asymmetry parameters $\alpha^H$ and $\alpha^Q$ with increasing quark fraction $\chi$ may produce some observable signals. In particular we remark on the noticeable isospin distillation mechanism (isospin enrichment of the quark phase) at the beginning of the mixed phase, i.e. for low quark fractions, which should show up in the hadronization stage during the expansion. We also see from Fig.~\ref{fig:kai-alphaQ-NLrho-with-delta-alpha=02} that this effect survives even at relatively large temperatures, certainly present in the high density stage of heavy ion collisions at relativistic energies \cite{Toro06,Toro09}. All this supports the possibility of an experimental observation at the newly planned facilities, for example FAIR at GSI-Darmstadt and NICA at JINR-Dubna, with realistic asymmetries for stable/unstable beams. Some possible expected signals are suggested. 
Because of the lack of lattice data at larger real chemical potential, we are left with the puzzle of the relation between chiral symmetry restoration and deconfinement. More investigations of the chiral dynamics and of (de)confinement, as well as of their entanglement, are needed. An improved understanding of the quark-matter interaction will be beneficial for obtaining more accurate results in the Two-EoS model. \begin{acknowledgments} This project has been supported in part by the National Natural Science Foundation of China under Grants Nos. 10875160, 11075037, 10935001 and the Major State Basic Research Development Program under Contract No. G2007CB815000. This work has been partially performed under the FIRB Research Grant RBFR0814TT provided by the MIUR. \end{acknowledgments}
\section{Introduction} The atmospheric mixing angle has experimentally turned out to be quite large \cite{exper} and, according to recent analyses \cite{Fogli}, its $1\sigma$ range is $\theta_{23}=38^\circ -47^\circ$. From the theoretical point of view this invites speculation on the possibility of maximal flavour violation for the second and third lepton families. In flavour model building, achieving a maximal $\theta_{23}$ is far from trivial, taking also into account that $|U_{e3}|$ is small and the solar mixing angle large but not maximal: $\theta_{13} \le 7^\circ$ and $\theta_{12}= 33^\circ -36^\circ$ at 1$\sigma$ \cite{Fogli}. Our aim is to investigate which are the most natural mechanisms to generate a maximal atmospheric mixing\footnote{To be quantitative (and rather subjective), we are going to ask $|1-\tan\theta_{23}|\lesssim 5\%$, corresponding to an uncertainty of about $2^\circ$ in $\theta_{23}$.}. If an underlying flavour symmetry exists, it selects a {\it privileged flavour basis} for the fermion mass matrices. The lepton mixing matrix results from combining the unitary matrices which - in that basis - diagonalise the left-handed charged lepton and neutrino mass matrices respectively, $U_{MNS}=U_e^\dagger U_\nu$. Obtaining a maximal atmospheric angle through a conspiracy among many large mixings present in $U_e$ and $U_\nu$ appears to be quite a fortuitous explanation, especially in the case of effective neutrino masses as in the seesaw mechanism\footnote{Exceptions are the $A_4$ models \cite{A4}, where tunings are eventually displaced in the neutrino spectrum.}. A better starting point for a flavour model is to predict one between $\theta^e_{23}$ and $\theta^\nu_{23}$ to be maximal, so that the goal is reached when the other parameters present in $U_e^\dagger U_\nu$ only marginally affect this maximal 23-angle. 
Models so far proposed \cite{others} along these lines adopted the strategy of having, in the flavour symmetry basis, a maximal $\theta^{\nu}_{23}$ together with a negligibly small $\theta^{e}_{23}$, or vice versa. Since the upper bound on $\theta_{13}$ naturally suggests $\theta^e_{12}$ and $\theta^e_{13}$ to be small, and since the latter affect $\theta_{23}$ at second order, this is in principle a simple and effective framework to end up with a maximal atmospheric angle. However, the drawback in model building is the difficulty of managing such a huge hierarchy between $\theta^{e}_{23}$ and $\theta^{\nu}_{23}$. In this letter we point out an alternative mechanism to achieve a maximal atmospheric angle, which is based on the presence of a maximal CP violating phase difference between the second and third lepton families and which does not require any particular hierarchy between $\theta^{e}_{23}$ and $\theta^{\nu}_{23}$, provided that one of them is maximal. If $\theta^e_{12}$ and $\theta^e_{13}$ are small, as suggested by the bound on $\theta_{13}$, then $\theta_{23}=\pi/4$ is robustly predicted. This mechanism is based on maximal CP violation in the sense that, denoting by $g/\sqrt{2}~ (e^{-i w_2} {\bar\mu}_L ~\gamma^\lambda \nu_\mu + {\bar \tau}_L ~\gamma^\lambda \nu_\tau)~ W^-_\lambda +$ h.c. the weak charged currents of the second and third lepton families in the flavour symmetry basis (before performing the rotations in the $\mu-\tau$ and $\nu_\mu-\nu_\tau$ planes to go to the mass eigenstate basis), it requires the phase difference $w_2$ to be $\pm \pi/2$. For three families and Majorana neutrinos, $U_{MNS}$ contains three CP violating phases, which turn out to be complicated functions of $w_2$ and of the other phases potentially present. We will focus in particular on the connection between $\delta$ and $w_2$ and on the expectations for $|U_{e3}|$ and $\theta_{12}$. This mechanism also has remarkable analogies with the quark sector. 
\section{On the Origin of the Atmospheric Mixing} In the basis of the unknown flavour symmetry the leptonic sector is described by \begin{eqnarray} & &{\cal L}~ = -\frac{1}{2}~\nu^T m_{\nu}^{eff} \nu ~-~ \bar e_R^{T} m_e e_L ~ +~\frac{g}{\sqrt{2}} ~\bar e_L^T \gamma^\lambda \nu ~W^-_\lambda ~+~ \mathrm{h.c.}~~~~~\label{lagr}~\\ & & ~~~~~~~~~~~m_{\nu}^{eff}= U_\nu^* \hat m_\nu U_\nu^\dagger ~~,~~~~~~ m_e = U^R_e \hat m_e {U^L_e}^\dagger \nonumber \end{eqnarray} where a hat is placed over a diagonal matrix with real positive eigenvalues whose order is established conventionally by requiring $|m^2_2 -m^2_3| \ge m^2_2 - m^2_1 \ge 0$, and the $U$'s are unitary matrices. The MNS mixing matrix is $U_{MNS}={U^L_e}^\dagger U_\nu$. We find it convenient to write unitary matrices in terms of a matrix in the standard CKM parameterization \cite{RPP} multiplied on the right and on the left by diagonal matrices with five independent phases, defined for definiteness according to \beq U_{\ell} = e^{i \alpha_{\ell}} ~ {\cal W}_{\ell}~ U^{(s)}_{\ell}~ {\cal V}_{\ell} ~~~~~~~~\ell=e,\nu, \mathrm{MNS} ~~~~~ \label{par} \eeq where, omitting the $\ell$ index, ${\cal W}=$ diag$(e^{i (w_1+w_2)}, e^{i w_2},1)$, ~${\cal V}=$ diag$(e^{i (v_1+v_2)}, e^{i v_2},1)$,~ $U^{(s)}=R(\theta_{23}) \Gamma_\delta R(\theta_{13}) \Gamma_\delta^\dagger R(\theta_{12})$, $\Gamma_\delta =$diag$(1,1,e^{i \delta})$, angles belong to the first quadrant and phases to $[0, 2 \pi [$. Upon phase redefinitions of the $\nu$ and $e_L$ fields and a unitary transformation of the $e_R$ fields, one can go to the basis where the Lagrangian (\ref{lagr}) reads \beq {\cal L} = -\frac{1}{2}~\nu^T ( {U^{(s)}_\nu}^* \hat m_\nu {\cal V_\nu}^{*2} {U^{(s)}_\nu}^\dagger ) \nu - \bar e_R^{T} (\hat m_e {U^{L(s)}_e}^\dagger) e_L +~\frac{g}{\sqrt{2}}~ \bar e_L^T \gamma^\lambda {\cal W}^* \nu ~W^-_\lambda +~ \mathrm{h.c.}~~~ , \label{tre} \eeq where ${\cal W}^*={\cal W}_e^*{\cal W}_\nu =$ diag$(e^{-i (w_1+w_2)}, e^{-i w_2},1)$. 
The phases $w_2$ and $w_1$ can be chosen in $]-\pi,\pi] $ and represent the phase difference between the second and third generations of leptons, and between the first and second respectively, in the basis (\ref{tre}), namely before shuffling the flavours by means of $U^{(s)}_\nu$ and $U^{L(s)}_e$ to go to the mass basis. They are a source of CP violation and, in spite of the particular convention adopted here to define angles and phases\footnote{Needless to say, the standard parameterization for unitary matrices is as noble as other possible ones.}, it has to be recognized that they are uniquely determined by the flavour symmetry and cannot be removed. Clearly $w_2$ and $w_1$ are not directly measurable, but they nevertheless play a quite profound role for the MNS mixing matrix: \begin{eqnarray} U_{MNS} & = & {U^{L(s)}_e}^\dagger {\cal W}^* U^{(s)}_\nu {\cal V}_\nu \nonumber \\ & = & \underbrace{R^T(\theta^e_{12}) \Gamma_{\delta^e} R^T(\theta^e_{13}) \Gamma_{\delta^e}^\dagger}_{S_e} ~ \underbrace{R^T(\theta^e_{23}) {\cal W}^* R(\theta^\nu_{23}) }_{L} ~ \underbrace{ \Gamma_{\delta^\nu} R(\theta^\nu_{13}) \Gamma_{\delta^\nu}^\dagger R(\theta^\nu_{12}){\cal V}_\nu }_{S_\nu}~~. \label{twelve} \end{eqnarray} None of the 12 parameters dictated by the flavour symmetry and appearing on the r.h.s. of eq. (\ref{twelve}) is actually measurable, because only 9 combinations of them are independent - see eq. (\ref{par}) -, among which 3 can be absorbed, $\alpha_{MNS}$ and ${\cal W}_{MNS}$. Note that CP violation through $\delta$ can be generated in the limit where only $w_1$ and/or $w_2$ are present, as well as in the limit where there are just $\delta^e$ and/or $\delta^\nu$ (${\cal V}_\nu$ solely contributes to the Majorana CP violating phases). Different flavour symmetries can thus predict the same leptonic (and even hadronic) physics, and there is no way to discriminate between them unless one adopts theoretical criteria like, e.g., absence of tunings and stability of the results. 
Adhering to such criteria, we now ask which sets of flavour symmetry parameters most robustly predict the atmospheric angle to be maximal. It is natural to expect the leading role to be played by the core of the MNS mixing matrix, denoted by $L$ in eq. (\ref{twelve}): \beq L =\left( \matrix{ e^{-i( w_1+ w_2)} & 0 & 0 \cr 0 & e^{- i w_2} c_e c_\nu + s_e s_\nu & e^{- i w_2}c_e s_\nu - s_e c_\nu \cr 0 & e^{- i w_2}s_e c_\nu -c_e s_\nu & e^{- i w_2} s_e s_\nu +c_e c_\nu } \right) \eeq \\ where from now on $s_{e,\nu} = \sin \theta^{e,\nu}_{23}$, $c_{e,\nu} = \cos \theta^{e,\nu}_{23}$ for short. If also $S_e$ in eq. (\ref{twelve}) had large mixings, bringing it to the right of $L$ would in general induce large contributions to all three MNS mixings. Both the experimentally small $\theta_{13}$ and a potentially maximal $\theta_{23}$ would then result from a subtle conspiracy among the many angles and phases involved. At the price of some tuning in the neutrino spectrum, this may happen in the case of tribimixing models \cite{A4}, which are often based on an $A_4$ flavour symmetry. On the contrary, the bound on $\theta_{13}$ is naturally fulfilled if each of the mixings in $S_e$ and $R(\theta^\nu_{13})$ satisfies it. Denoting them by $\varphi = \sin \theta^e_{12}$, $\psi = \sin \theta^e_{13}$ and $\xi = \sin \theta^\nu_{13}$, it turns out that they induce second order corrections in the 23 block of $L$, hence smaller than $O(5\%)$ given the bound on $\theta_{13}$. Adopting the latter point of view, the atmospheric mixing angle can be identified with that of $L$ up to corrections smaller than about $2^\circ$. It turns out to depend on $w_2$ and, symmetrically, on $\theta^e_{23}$ and $\theta^\nu_{23}$: \beq \tan^2\theta_{23} = \frac{c_e^2 s_ \nu^2 + s_e^2 c_\nu^2 - 2 \cos w_2~ c_e s_e c_\nu s_\nu} {s_e^2 s_ \nu^2 + c_e^2 c_\nu^2 + 2 \cos w_2 ~c_e s_e c_\nu s_\nu}~~~~~. 
\label{tan2atm} \eeq The crucial role played by the phase $w_2$ is manifest: only for $w_2=0$ or $w_2=\pi$ does one have the simple relation $\theta_{23}= |\theta^\nu_{23} \mp \theta^e_{23} |$. A maximal atmospheric angle requires the following relation among the three parameters to be fulfilled \beq \cos w_2 = - \frac{1}{\tan(2 \theta^\nu_{23}) \tan(2 \theta^e_{23})}~~~. \eeq Notice that maximality is generically lost by slightly varying one of the parameters involved. However, one realises from the above expressions that for some exceptional values of two of the parameters the atmospheric angle turns out to be maximal independently of the value assumed by the third: \vskip 0 cm {\bf i)} for $\theta^{e(\nu)}_{23} = \pi/4$ and $w_2 = \pm \pi/2$, independently of $\theta^{\nu(e)}_{23}$; \vskip 0 cm {\bf h)} for $\theta^{e(\nu)}_{23} = \pi/4$ and $\theta^{\nu(e)}_{23} = 0$ or $\pi/2$, independently of $w_2$. \vskip 0 cm \noindent All this is graphically seen in fig. \ref{fig1} which, for different values of $\theta^{e(\nu)}_{23}$, shows the region of the plane $\{ w_2,\theta^{\nu(e)}_{23} \}$ allowed at $1,2,3 \sigma$ by the experimental data on the atmospheric angle. \begin{figure}[!h] \vskip .5 cm \centerline{ \psfig{file=fig1.eps,width=1.1 \textwidth} \special{color cmyk 1. 1. 0.3 0} \put(-470, 100){ $\theta^{e(\nu)}=0$}\put(-375, 100){ $\theta^{e(\nu)}=0.1$} \put(-280, 100){ $\theta^{e(\nu)}=\frac{\pi}{8}$} \put(-195, 100){ $\theta^{e(\nu)}=\frac{\pi}{4} -0.1$}\put(-87, 100){ $\theta^{e(\nu)}=\frac{\pi}{4}$} \special{color cmyk 0 0 0 1.} \put(-518, 50){\large \bf $\theta^{\nu(e)}$} \put(-10, 50){\large \bf $\theta^{\nu(e)}$} \put(-445, -10){\large \bf $w_2$}\put(-256, -10){\large \bf $w_2$}\put(-65, -10){\large \bf $w_2$}} \caption{Region of the plane $\{ w_2,\theta^{\nu(e)}_{23} \}$ allowed at $1,2,3 \sigma$ by the experimental data on the atmospheric angle \cite{Fogli} for $\theta^{e(\nu)}_{23} =\{0, 0.1, \pi/8, \pi/4 -0.1, \pi/4\}$. 
The dotted curve is the surface where the atmospheric angle is maximal. The plots also correspond to $\theta^{e(\nu)}_{23} = \pi / 2 - \{0, 0.1, \pi/8, \pi/4 -0.1, \pi/4\}$ upon the substitution $\theta^{e(\nu)}\rightarrow \pi/2 - \theta^{e(\nu)}$. } \label{fig1} \vskip .5 cm \end{figure} The above limits can be pictorially represented in terms of triangles. In the case that $\theta^{e(\nu)}=\pi/4$, eq. (\ref{tan2atm}) can be rewritten in the form \beq \sqrt{2} \sin\theta_{23}=r=| c_{\nu(e)} - e^{-i w_2} s_{\nu(e)} |~~~~~ \label{tri} \eeq which is clearly reminiscent of a triangle, $w_2$ being the angle opposite to $r$. A maximal atmospheric angle requires $r=1$, as happens in the two cases below. \begin{figure}[!h] \vskip 0 cm \centerline{ \psfig{file=triangles3.ps,width=.8 \textwidth} } \label{trian} \end{figure} The possibility h) has been widely exploited in flavour model building. The difficulty of this approach is not to predict a maximal $\theta^{e(\nu)}_{23}$, but rather to arrange for a sufficiently small $\theta^{\nu(e)}_{23}$, say less than $2^\circ$. For instance, a negligible $\theta^\nu_{23}$ is somewhat unnatural in the case of hierarchical neutrinos, because the ratio of the corresponding eigenvalues is not so small: $m_2/m_3 \sim 1/6$. Notice also that in seesaw models $\theta^{\nu}_{23}$ is an effective angle which depends on both the Dirac and Majorana Yukawa couplings. The possibility i) has not (to our knowledge) been singled out so far\footnote{We thank however the authors of ref. \cite{quater} for pointing out some interesting analogies with one of their quaternion family symmetry models.}. It has the advantage that it does not require a huge hierarchy between $\theta^e_{23}$ and $\theta^\nu_{23}$. Indeed, in the case that $g/\sqrt{2}~ (\bar \tau_L \gamma^\lambda \nu_\tau ~\pm ~i~ \bar \mu_L \gamma^\lambda \nu_\mu)~ W^-_\lambda +$ h.c. 
are the charged current interactions of the second and third lepton families in the basis (\ref{tre}) - namely before applying $R(\theta^e_{23})$ and $R(\theta^\nu_{23})$ to go to the mass eigenstate basis - the maximal phase difference $i$ shields a maximal $\theta^{e(\nu)}_{23}$ from any interference due to $\theta^{\nu(e)}_{23}$. This remarkable fact can be visually seen in fig. \ref{fig2}, where we plot $\tan\theta_{23}$ as a function of $w_2$ for different values of $\theta^e_{23}$ and $\theta^{\nu}_{23}$. \begin{figure}[!ht] \vskip .3 cm \centerline{ \psfig{file=fig2.eps,width=1 \textwidth} \special{color cmyk 1. 1. 0.3 0} \put(-400, 160){ \large $\theta^e_{23}=\frac{\pi}{4}$} \put(-270, 160){ \large $\theta^e_{23}=\frac{\pi}{4}-0.1$} \put(-110, 160){ \large $\theta^e_{23}=\frac{\pi}{8}$} \special{color cmyk 0 0 0 1.} \put(-500, 100){ \large $\tan\theta_{23}$} \put(-13, 100){ \large $\tan\theta_{23}$} \put(-380, -5){ \large $w_2$}\put(-236, -5){ \large $w_2$}\put(-90, -5){ \large $w_2$}} \caption{ Values of $\tan\theta_{23}$ as a function of $w_2$ for $\theta^e_{23}=\pi/4,\pi/4-0.1,\pi/8$. Colored curves correspond, as marked, to $\theta^\nu_{23} = \pi/32, \pi/16 $, $\pi /8, \pi/4 $. The experimental range at $1,2,3\sigma$ \cite{Fogli} is also shown. The same plot holds for $e \leftrightarrow \nu$. } \label{fig2} \vskip .5 cm \end{figure} It is worth stressing that a maximal CP violating phase difference between fermion families is likely to be at work in the quark sector too, in particular for the Cabibbo angle \cite{Fri} - by the way, again the largest angle of the mixing matrix. Neglecting the presumably very small $13$ and $23$ mixings, the charged current interaction of the lighter families in the basis analogous to (\ref{tre}) is $g/\sqrt{2}~ (\bar c_L ~\gamma^\lambda s_L~+$ $ e^{-i w^q_1} ~\bar u_L ~\gamma^\lambda d_L)$ $~ W^+_\lambda +$ h.c.. Expanding at first order in $s_u=\sin \theta^u_{12}$ and $s_d=\sin \theta^d_{12}$, the analog of eq. 
(\ref{tan2atm}) reads \beq \tan^2\theta_{C} = s_u^2 + s_d^2 - 2 \cos w^q_1~ s_u s_d + O(s^4)~~~~. \eeq In the plausible case that $s_d= \sqrt{m_d/m_s}$, $s_u= \sqrt{m_u/m_c}$, one has \beq |V_{us}|= | \sqrt{\frac{m_d}{m_s}}-e^{-i w^q_1} \sqrt{\frac{m_u}{m_c}} | \eeq and it is well known that experimental data strongly indicate $w^q_1= \pm \pi/2$. Models of this sort have been studied in which also $\alpha \approx w^q_1$ \cite{alfa}, so that the unitarity triangle turns out to be rectangular. \section{An Explicit Model with Maximal $w_2$} We now sketch a supersymmetric $SO(3)$ flavour symmetry model which predicts a maximal atmospheric angle due to the presence of a maximal phase $w_2$. The ``flavon'' chiral superfields and the lepton doublet $\ell$ are assigned to a triplet of $SO(3)$, while the lepton singlets $e^c,\mu^c,\tau^c$ and the Higgs doublets $h$ are $SO(3)$ singlets. Along the lines of \cite{BHKR}\footnote{Actually, in ref. \cite{BHKR} models are discussed where $\theta_{23}=\pi/4$ because $\theta^e_{23}=\pi/4$ and $\theta^\nu_{23}=0$.} and in its spirit, interesting alignments for the flavon fields can be obtained. Consider the superpotential: \begin{eqnarray} W &=& X_\chi ~(\chi^2 - M^2_\chi) + X_\varphi ~\varphi^2 + Y_{\chi \varphi}~ (\chi \varphi - M^2_{\chi \varphi}) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~(m^2_{\chi,\varphi,\phi}>0) \nonumber \\ &+& X_\phi ~(\phi^2 - M^2_\phi) + Y_{\chi \phi}~ (\chi \phi -M^2_{\chi \phi})+ X_\xi ~\xi^2 + Y_{\chi \xi}~ \chi \xi ~~~~~~~~~~~~~~~~~~( m^2_\xi<0) \nonumber \\ &+&(\ell \chi)^2 h h + (\ell \phi)^2 h h + (\ell \xi)^2 h h + \tau^c (\ell \varphi) h + \mu^c (\ell \phi) h+ e^c (\ell \xi) h \label{super} \end{eqnarray} where $X$'s and $Y$'s are $SO(3)$ singlet ``driver'' chiral superfields, $\chi$, $\varphi, \phi, \xi$ are ``flavon'' chiral superfields with positive soft mass squared except for $\xi$, and dimensionful - possibly hierarchical - couplings are understood in the last line of eq. (\ref{super}).
A thorough discussion of the discrete symmetries which could guarantee the previous couplings and forbid undesirable ones is beyond the scope of the present discussion. Minimization of the potential induces $SO(3)$ breaking by $\langle\chi\rangle=(0,0,1)M_\chi$, $\langle\varphi\rangle=(0,i,1)M^2_{\chi\varphi}/M_\chi$, $\langle\xi\rangle=(i,1,0) M_\xi$ and $\langle\phi\rangle=(0,\sin\alpha,\cos\alpha) M_\phi$, where $\cos \alpha= M_\chi M_\phi /M^2_{\chi\phi}$. The following textures are then obtained \beq m_\nu \propto \left( \matrix{ -\lambda_\xi & i \lambda_\xi & 0 \cr i \lambda_\xi & \lambda_\xi + \sin^2\alpha ~\lambda_\phi & \sin\alpha~\cos\alpha ~\lambda_\phi \cr 0 & \sin\alpha ~\cos\alpha ~\lambda_\phi & \cos^2\alpha~ \lambda_\phi + \lambda_\chi } \right) ~~~~ m_e \propto \left( \matrix{ i \epsilon_\xi & \epsilon_\xi & 0 \cr 0 & \sin\alpha~ \epsilon_\phi & \cos\alpha~ \epsilon_\phi \cr 0 & i \epsilon_\varphi & \epsilon_\varphi } \right) \eeq which, as we now turn to discuss, can easily reproduce the experimental data. The spectrum of $m_e$ depends negligibly on $\alpha$ and is accommodated for $\epsilon_\xi:\epsilon_\phi:\epsilon_\varphi= \sqrt{2} m_e : 2 m_\mu : m_\tau$. As for $m_\nu$, a hierarchical spectrum follows from taking $\cos\alpha=0.8$ and $\lambda_\xi:\lambda_\phi:\lambda_\chi=0.08:0.2:1$. The latter values imply $w_2=-\pi/2$, $w_1=\pi$ and, for the charged lepton sector, $\theta^e_{23}=\pi/4+O( m^2_\mu/m^2_\tau)$, $\theta^e_{12}=\theta^e_{13}=\delta^e=0$, while for the neutrino sector $\theta^\nu_{13}=0.4^\circ$, $\theta^\nu_{12}=34^\circ$, $\theta^\nu_{23}=6^\circ$, $\delta_\nu=v_2^\nu=0$, $v_1^\nu=\pi$. Combining the latter 12 parameters to obtain the MNS mixing matrix - see eq. (\ref{twelve}) -, it turns out that, due to the maximal $w_2$, the atmospheric angle is also maximal, $\theta_{23}=\pi/4+O( m^2_\mu/m^2_\tau)$. In addition, $\theta_{12}=\theta^\nu_{12}$, $\theta_{13}=\theta^\nu_{13}$, Majorana phases vanish, but $\delta=\pi/2$.
Note that such maximal CP violation in weak charged currents through $\delta$ has to be completely ascribed to the maximality of $w_2$. \section{Some General Relations and Limiting Cases} Here we discuss the general relations between the parameters in the basis (\ref{tre}) and the measurable quantities $|U_{e3}|$, $\theta_{12}$ and the MNS phase $\delta$, under the assumption that the bound on $\theta_{13}$ is naturally fulfilled because $\varphi = \sin \theta^e_{12}$, $\psi = \sin \theta^e_{13}$ and $\xi = \sin \theta^\nu_{13}$ are themselves small. This allows $S_e$ to commute with $L$, and the dependence on the mechanism responsible for a maximal atmospheric angle is encoded in $w_2,\theta^e_{23},\theta^\nu_{23}$. Since $L_{22}=e^{- i w_2} L_{33}^*$, $L_{32}= - e^{- i w_2} L_{23}^*$ and a maximal atmospheric angle implies $|L_{ij}|= 1/\sqrt{2}$ for $i,j=2,3$, this dependence is equivalently expressed in terms of $w_2$, $\lambda_{23}=$Arg$(L_{23})$, $\lambda_{33}=$Arg$(L_{33})$. We collect in table 1 the values of $\lambda_{23}, \lambda_{33}$ for the cases h) and i) discussed previously.
\begin{table}[!h] \begin{center} \begin{tabular}{c||c|c|c|c} & $\theta^{e}_{23} = \pi/4$ & $\theta^{\nu}_{23} = \pi/4$ & $\theta^{e}_{23} = \pi/4$ & $\theta^{\nu}_{23} = \pi/4$ \\ & $\theta^{\nu}_{23} = 0 ~(\pi/2)$ & $\theta^{e}_{23} = 0~(\pi/2)$ & $w_2 = \pm \pi/2$ & $w_2 = \pm \pi/2$ \\ \hline $\lambda_{23}$ & $\pi~ (-w_2)$ & $- w_2~(\pi)$ & $\pm \theta^\nu_{23} + \pi$ & $\mp (\theta^e_{23}+\pi/2)$\\ $\lambda_{33}$ & $0 ~(-w_2)$ & $0~(-w_2)$ & $\mp \theta^\nu_{23}$ & $\mp \theta^e_{23}$ \\ \end{tabular} \\ \vspace{.3 cm} Table 1 \end{center} \end{table} Introducing the quantities \beq v_\varphi= \frac{\varphi}{\sqrt{2}}~ e^{i (w_1-\lambda_{33})} ~,~~~ v_\psi=\frac{\psi}{\sqrt{2}}~ e^{i (w_1 -\lambda_{23}-\delta^e)} ~,~~~ v_\xi = \xi ~e^{-i (w_2+\lambda_{23}+\lambda_{33}+\delta^\nu)} \eeq one has~ $\theta_{23}= \pi/4 + O(v^2)$ with $v^2\sim 5\%$, together with the general formulas \begin{eqnarray} \theta_{12} &=& \theta^\nu_{12} - \mathrm{Re}(v_\varphi - v_\psi) +O(v^2) \nonumber\\ U_{e3} &=& v_\varphi + v_\psi - v_\xi + O(v^3) \\ \delta & = & \pi - \mathrm{Arg}U_{e3} + O(v\sin(\mathrm{Arg}U_{e3})) ~~.\nonumber \end{eqnarray} The above expressions allow us to complete the phenomenological study of our framework and generalise previous studies that assumed bimixing \cite{devdabimax} or tri-bimixing \cite{devdabitrimax} for $U_{\nu}$. As already stressed, a measurement of these independent observable quantities cannot reveal which mechanism they come from, nor whether the $v$'s interfere in originating them. In order to find some potential correlations, additional hypotheses have to be introduced. The model of the previous section corresponds to the limit $\xi \gg \varphi,\psi$, in which case the above expressions simplify to \beq |U_{e3}|\approx \xi ~~,~~~~\delta \approx w_2+\lambda_{23}+\lambda_{33}+\delta^\nu ~~, ~~~~ \theta_{12} \approx \theta^\nu_{12} ~. \eeq Notice that there are no correlations between $|U_{e3}|$, $\theta_{12}$ and $\delta$.
The latter does not depend on $w_1$ and is rather related to the mechanism at work for the atmospheric angle, even though it cannot reveal which one is actually at work. As can be seen from table 1, in the case i) with $\theta^{e}_{23} = \pi/4$, $\delta=w_2+\pi+\delta^\nu$. The model of the previous section had $\delta^\nu=0$ and $w_2=-\pi/2$, which explicitly shows that a maximal $w_2$ was the source of the maximal CP violation in $\delta$. Interesting correlations emerge for $\psi\gg \varphi,\xi$ in which case \beq |U_{e3}| \approx \frac{\psi}{\sqrt{2}} ~~,~~~~~\delta \approx \pi +\lambda_{23} +\delta^e - w_1 ~~,~~~~ \theta_{12} \approx \theta^\nu_{12} - |U_{e3}| \cos \delta~, \eeq and for $\varphi\gg \psi,\xi$ in which case \beq |U_{e3}| \approx \frac{\varphi}{\sqrt{2}} ~~,~~~~~\delta \approx \pi +\lambda_{33} - w_1 ~~,~~~~ \theta_{12} \approx \theta^\nu_{12} + |U_{e3}| \cos \delta ~. \eeq Notice that these situations are phenomenologically equivalent provided $\delta \leftrightarrow \delta +\pi$. Both $\delta$ and $\theta_{12}$ depend on the mechanism at work for the atmospheric angle. For instance, for $\varphi\gg \psi,\xi$ and with $\theta^{e(\nu)}_{23} = \pi/4$, $w_2 = \pm \pi/2$, it turns out that $\delta$ depends on $w_1$ and the 23-angle whose magnitude is irrelevant for the atmospheric angle: $\delta \approx \pi - w_1 \mp \theta^{\nu(e)}_{23}$. In the following we focus on the possibility that $\sin \theta^e_{12}= \varphi$ dominates. This is a particularly interesting scenario because it is naturally compatible with a grandunified picture. The correlations are shown in fig. \ref{fig3} by plotting, for different values of $\theta^\nu_{12}$, the region of the $\{ \delta, |U_{e3}|\}$ plane allowed by the present range of $\theta_{12}$ at $1,2,3 \sigma$. The case of a maximal $\theta^\nu_{12}$ is particularly interesting from the theory point of view.
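The $\varphi$-dominated correlation above is easy to check numerically. The short sketch below is an illustration added here (the numerical value of the Cabibbo angle $\theta_C$ is an assumed input, not a quantity fixed by the text): with $\delta=\pi$, $\theta^\nu_{12}=\pi/4$ and $\varphi=\sqrt{2}\,\theta_C$, one recovers $|U_{e3}|=\theta_C$ and $\theta_{12}=\pi/4-\theta_C$.

```python
import math

# phi-dominant limit (phi >> psi, xi) of the general formulas:
#   |U_e3| ~ phi / sqrt(2),   theta_12 ~ theta12_nu + |U_e3| * cos(delta)
theta_C = 0.227                  # Cabibbo angle (assumed numerical input)
phi = math.sqrt(2) * theta_C     # charged-lepton 12 mixing, sin(theta^e_12)
theta12_nu = math.pi / 4         # maximal solar angle in the neutrino sector
delta = math.pi                  # CP-conserving point

Ue3 = phi / math.sqrt(2)
theta12 = theta12_nu + Ue3 * math.cos(delta)

print(Ue3)                                # ~ theta_C
print(theta12 - (math.pi / 4 - theta_C))  # ~ 0
```

With $\delta=\pi$ the solar angle is reduced by exactly $|U_{e3}|$, which is the quark-lepton complementarity relation.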
As shown by the plot, present data \cite{Fogli} suggest $\delta \approx \pi$ and $|U_{e3}|\approx 0.2$, dangerously close to its $3\sigma$ bound and interestingly close to the Cabibbo angle $\theta_C$. Notice that the so-called ``quark-lepton complementarity'' proposal $\theta_{12}=\pi/4- \theta_C$ \cite{qlc} corresponds precisely to $\delta = \pi$ and $|U_{e3}|= \theta_C$, i.e. exact CP and $\varphi= \sqrt{2} \theta_C$. Note that $\varphi=\theta_C$, i.e. $U_{e3}$ close to its $1\sigma$ bound, also falls inside the $2\sigma$ window for $\theta_{12}$ provided $\delta = \pi$. Remarkably enough, it turns out that a maximal $\delta$ strongly favours $\tan\theta^{\nu}_{12} \approx 1/\sqrt{2}$, with a mild dependence on $|U_{e3}|$. The possibility $\varphi=\theta_C/3$, particularly relevant for grandunified models, is thus well compatible with tribimixing and maximal CP violation. \begin{figure}[!h] \vskip 0. cm \centerline{ \psfig{file=fig3.eps,width=1.1 \textwidth} \special{color cmyk 0 1. 1. 0.2} \put(-458, 130){ $\tan\theta^{\nu}_{12}=1$} \put(-350, 130){ $\tan\theta^{\nu}_{12}=1/\sqrt{2}$} \put(-233, 130){ $\tan\theta^{\nu}_{12}=1/\sqrt{3}$} \put(-108, 130){ $\tan\theta^{\nu}_{12}=1/2$} \Green \put(-400, 91){\footnotesize $3\sigma$}\put(-400, 81){\footnotesize $2\sigma$}\put(-400, 69){\footnotesize$1\sigma$} \special{color cmyk 0 0 0 1.} \put(-530, 70){\large \bf $|U_{e3}|$} \put(-10, 70){\large \bf $|U_{e3}|$} \put(-430, -10){\large \bf $\delta$}\put(-312, -10){\large \bf $\delta$} \put(-192, -10){\large \bf $\delta$}\put(-72, -10){\large \bf $\delta$} } \caption{Region of the $\{ \delta, |U_{e3}|\}$ plane allowed by the present range of $\theta_{12}$ at $1,2,3 \sigma$ \cite{Fogli} for different values of $\tan\theta^\nu_{12}$. The dotted line corresponds to the best fit of $\theta_{12}$. Also shown are the $1,2,3 \sigma$ bounds on $|U_{e3}|$.} \label{fig3} \vskip 0.
cm \end{figure} \section{Conclusions and Outlook} We pointed out that if CP and flavour are maximally violated by the second and third lepton families in the flavour symmetry basis, a maximal atmospheric angle is automatically generated when the bound on $|U_{e3}|$ is fulfilled in a natural way. This mechanism has two advantages with respect to the one usually exploited: it is very suggestive of the quark sector and it does not require one of $\theta^\nu_{23}$ and $\theta^e_{23}$ to vanish, which could be difficult to achieve especially for seesaw models. We think that such a mechanism deserves further study, both from the point of view of grandunified theories and flavour symmetries. Under the assumption that the bound on $|U_{e3}|$ is naturally fulfilled, we discussed the general relations between the parameters in the basis (\ref{tre}) and the measurable quantities $|U_{e3}|$, $\theta_{12}$ and the CP violating phase $\delta$, clarifying in particular its relation with the phases among lepton families. These general results have also been confronted with the predictions of a specific realisation of the above mechanism, a supersymmetric model based on an $SO(3)$ flavour symmetry where a maximal CP violating phase $\delta$ arose as a direct consequence of the maximal phase difference between second and third lepton families. \section*{Acknowledgements} I thank G. Altarelli and C.A. Savoy for enlightening discussions. Thanks also go to the Dept. of Physics of Rome1 and to the SPhT CEA-Saclay for hospitality during the completion of this work. This project is partially supported by the RTN European Program MRTN-CT-2004-503369.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:introduction} Protoplanetary discs evolve over million year time scales during which the accretion rate $\dot{M}$ onto the central star drops, from typically $5\times 10^{-6}M_\odot$/yr around the youngest stars to $10^{-9}M_\odot$/yr after $3-10$ Myr \citep{1998ApJ...495..385H}. This reduction in $\dot{M}$ can be interpreted as a reduction of the gas surface density $\Sigma_G$. Ensembles of protoplanetary discs can be observed in star forming regions, like the Ophiuchus cluster \citep{2010ApJ...723.1241A}. These observations give constraints on the structure of protoplanetary discs (e.g. gradients in surface densities), although with large uncertainties. In the late stages of the discs, $\dot{M}$ becomes so small that the disc is blown away by photo evaporation on a very short time scale \citep{2013arXiv1311.1819A}. The central star also evolves on these time scales \citep{1998A&A...337..403B}. As the stellar luminosity changes, the amount of stellar heating received by the disc changes as well, which in turn affects the temperature and density structure. During this evolution, planets form. Gas giant planets have to form, while the gas-rich protoplanetary disc is still present, and large parts of the growth of terrestrial planets and super Earths occur during the gaseous disc phase as well. Several important growth and formation stages rely on detailed knowledge of the protoplanetary disc structure: \begin{itemize} \item Growth of dust particles to pebble sized objects \citep{2010A&A...513A..57Z, 2012A&A...539A.148B, 2013A&A...552A.137R} \item Movement of pebbles inside the disc due to gas drag \citep{1977Ap&SS..51..153W,2008A&A...487L...1B} \item Formation of planetesimals, e.g. 
via the streaming instability \citep{2005ApJ...620..459Y, 2007ApJ...662..627J} \item Turbulent stirring of planetesimals \citep{2009Icar..204..558M, 2009ApJ...707.1233Y, 2012ApJ...748...79Y} \item Formation of planetary cores from embryos and planetesimals \citep{2010AJ....139.1297L} or via pebble accretion \citep{2012A&A...544A..32L} and subsequent gas accretion onto them \citep{1996Icar..124...62P} \item Migration of planets and cores in the gas disc \citep{1997Icar..126..261W, 2006A&A...459L..17P, 2008A&A...487L...9K, 2009A&A...506..971K}. \item The late stages of the disc evolution, where migration of newly formed gas giants shapes the inner solar system \citep{2009Icar..203..644R, 2011Natur.475..206W} \end{itemize} Even though these mechanisms happen on different length scales and time scales, they are all strongly dependent on the underlying disc structure (temperature $T$, gas surface density $\Sigma_G$, aspect ratio $H/r$), which makes the protoplanetary disc structure a key parameter for understanding the formation of planets and planetary cores. We now highlight some key processes, which will be discussed further in this paper. {\it Planetesimal formation} and the formation of planetary embryos can be aided by reducing the radial pressure gradient in the protoplanetary disc. Bumps and dips in the surface density of the disc, which is where the pressure gradient changes, can locally reduce the pressure support of the gas. A reduction in the pressure support of the gas reduces the headwind acting on the pebbles and hence their inward motion \citep{1977MNRAS.180...57W, 2008A&A...487L...1B, 2012A&A...539A.148B}. Reduced pressure support also decreases the metallicity threshold for particle concentration through the streaming instability, significantly helping the formation of planetesimals \citep{2010ApJ...722L.220B}.
{\it Core accretion} is the process where planetary embryos grow to the size of the cores of the giant planets ($\sim$10 M$_{\rm Earth}$). The accretion of planetesimals by embryos proceeds on time scales longer than disc lifetimes \citep{1996Icar..124...62P, 2004astro-ph0406469, 2010AJ....139.1297L}, but growth time scales can be drastically reduced by considering the accretion of pebbles \citep{2010MNRAS.404..475J, 2010A&A...520A..43O, 2012A&A...544A..32L, 2012A&A...546A..18M}. In the latter scenario the cores can grow on time scales of $10^6$ years at a radial distance of $5$~AU from the host star \citep{2014arXiv1408.6094L}. Such fast growth depends on the point where a core grows large enough to enter a phase of rapid accretion (so-called Hill accretion), which occurs earlier in regions with lower scale heights. {\it Planetary migration} describes the gravitational interactions of planets and planetary cores with the gas disc \citep{1997Icar..126..261W}. In a locally isothermal disc, cores are expected to migrate inwards on time scales that are much shorter than the discs' lifetime \citep{2002ApJ...565.1257T}, posing a problem for the formation of the cores of giant planets. However, the migration of cores depends on the thermodynamics in the disc, and even outward migration of planetary cores is possible \citep{2006A&A...459L..17P, 2008A&A...487L...9K, 2008ApJ...672.1054B, 2009A&A...506..971K}. The migration depends on the local radial gradient of entropy, which can drive outward migration (see \citet{2013arXiv1312.4293B} for a review). The structure of the protoplanetary disc that surrounded our own Sun can be approximated by the Minimum Mass Solar Nebula (MMSN), which comes from fitting the solid mass (dust and ice) of the existing planets in our solar system with a power law \citep{1977Ap&SS..51..153W, 1981PThPS..70...35H}.
It is then often assumed that other protoplanetary discs have similar power law structures \citep{2013MNRAS.431.3444C}; however, applying it to all extrasolar systems is troublesome \citep{2014MNRAS.440L..11R}. Along with the MMSN model, \citet{2010AREPS..38..493C} propose a model that features slightly different gradients in the disc (hereafter named CY2010). Both models, the MMSN and the CY2010 model, approximate the disc structure by uniform power laws in temperature $T$ and gas surface density $\Sigma_G$. The difference in the two models originates in updated condensate mass fractions of solar abundances \citep{2003ApJ...591.1220L} that lead to a higher estimate of the surface density. The different temperature profile is explained by different assumptions on the grazing angle of the disc that determines the absorption of stellar irradiation. In the CY2010 model that follows the calculations of \citet{1997ApJ...490..368C}, $T(r) \propto r^{-3/7}$, in contrast to the MMSN where $T(r) \propto r^{-1/2}$, which stems from the assumption of an optically thin disc. However, simulations including radiative cooling and viscous and stellar heating have shown that the disc structure features bumps and dips in the temperature profile (\citet{2013A&A...549A.124B} - hereafter Paper I). \citet{2014A&A...564A.135B} (hereafter Paper II) highlighted the direct link in accretion discs between changes in opacity $\kappa$ and the disc profile. The mass flux $\dot{M}$ is defined for a steady state disc with constant $\dot{M}$ at each radius $r$ as \begin{equation} \label{eq:Mdotvr} \dot{M} = - 2\pi r \Sigma_G v_r \ . \end{equation} Here $\Sigma_G$ is the gas surface density and $v_r$ the radial velocity. Following the $\alpha$-viscosity approach of \citet{1973A&A....24..337S} we can write this as \begin{equation} \label{eq:mdot} \dot{M} = 3 \pi \nu \Sigma_G = 3 \pi \alpha H^2 \Omega_K \Sigma_G \ .
\end{equation} Here $H$ is the height of the disc and $\Omega_K$ the Keplerian rotation frequency. At the ice line, the opacity changes because of the melting and the sublimation of ice grains or water vapour. This change in opacity changes the cooling rate of the disc [$D \propto 1 / (\rho \kappa)$], hence changing the temperature in this region. A change in temperature is directly linked to a change in $H$, so that the viscosity changes. However, since the disc has a constant $\dot{M}$ rate at all radii, a change in viscosity has to be compensated by a change in surface density, creating bumps and dips in the disc profile, which do not exist in the MMSN and CY2010 model. We present detailed 2D ($r,z$) simulations of discs with constant $\dot{M}$, including stellar and viscous heating, as well as radiative cooling. We use the $\alpha$ approach to parametrize viscous heating, but the protoplanetary disc is not temporally evolved with the $\alpha$ approach. Instead $\alpha$ is used to break the degeneracy between column density $\Sigma_G$ and viscous accretion speed $v_r$ (eq.~\ref{eq:Mdotvr}). Each value of $\dot{M}$ is linked to an evolutionary time through observations \citep{1998ApJ...495..385H}. This then allows us to take the correct stellar luminosity from stellar evolution \citep{1998A&A...337..403B}. In contrast to previous work \citep{2014A&A...570A..75B}, we do not include transitions in the $\alpha$ parameter to mimic a dead zone, because we are primarily interested in investigating how the stellar luminosity and the disc's metallicity affect the evolution of the disc structure in time. For the time evolution of the disc over several Myr (and therefore several orders of magnitudes in $\dot{M}$), we present a semi-analytical formula that expresses all disc quantities as a function of $\dot{M}$ and metallicity $Z$. This paper is structured as follows. 
In section~\ref{sec:methods} we present the numerical methods used and the link between $\dot{M}$ and the time evolution. We point out important differences in the disc structure between our model and the MMSN and CY2010 model in section~\ref{sec:discstructure}. We show the influence of the temporal evolution of the star on the structure of discs with different $\dot{M}$ in section~\ref{sec:timeevolve}. We then discuss the influence of metallicity on the disc structure and evolution in section~\ref{sec:metallicity}. We then discuss the implications of the evolution of protoplanetary discs on planet formation in section~\ref{sec:formplanets}. We finally summarize in section~\ref{sec:summary}. In appendix~\ref{ap:model} we present the fits for the full time evolution model of protoplanetary discs. \section{Methods} \label{sec:methods} \subsection{Numerical set-up} We treat the protoplanetary disc as a three-dimensional (3D), non-self-gravitating gas, whose motion is described by the Navier-Stokes equations. We assume an axisymmetric disc structure because we do not include perturbers (e.g. planets) in our simulations. Therefore we use only one grid cell in the azimuthal direction, making the computational problem de facto 2D in the radial and vertical direction. We utilize spherical coordinates ($r$-$\theta$) with $386 \times 66$ grid cells. The viscosity is treated in the $\alpha$-approach \citep{1973A&A....24..337S}, where our nominal value is $\alpha=0.0054$. Here the viscosity is used as a heating parameter and not to evolve the disc viscously, because the viscous evolution of the disc is very slow. Instead we use an initial radial gas density profile from a 1D analytic model for each accretion rate $\dot{M}$. The vertical profile is computed from an analytic model of passive discs irradiated by the central star. 
The simulations are then run until they reach an equilibrium state between heating and cooling, which happens much faster than the viscous evolution of the disc. This final equilibrium state is different from the initial one, because the disc is not passive (viscous heating is included) and opacities depend on the temperature. This changes $H/r$ in the disc, which in turn changes the local viscosity because of the $\alpha$-prescription. Therefore we continue the simulations until a new radial density profile in the disc is achieved. This happens on a viscous time scale, but because the changes are only local variations relative to the initial profile, the new equilibrium state is achieved relatively fast, much faster than the global evolution of the disc and decay of the stellar accretion rate (see section~\ref{subsec:time}). The viscosity in protoplanetary discs can be driven by the magnetorotational instability (MRI), where ionized atoms and molecules cause turbulence through interactions with the magnetic field \citep{1998RvMP...70....1B}. As ionization is more efficient in the upper layers of the disc (thanks to cosmic rays and X-rays), the midplane regions of the disc are not MRI active and therefore feature a much smaller viscosity ($\alpha$ parameter). In discs with a constant $\dot{M}$, a change in viscosity has to be compensated for by an equal change in surface density (eq.~\ref{eq:mdot}); however, in 3D simulations, much of the accretion flow can be transported through the active layer, so that the change in surface density is smaller than for 2D discs \citep{2014A&A...570A..75B}. Additionally, hydrodynamical instabilities, such as the baroclinic instability \citep{2003ApJ...582..869K} or the vertical shear instability \citep{2013MNRAS.435.2610N, 2014arXiv1409.8429S}, can act as a source of turbulence in the weakly ionized regions of the disc. A realistic picture of the source of turbulence inside accretion discs is still being debated (see e.g.
\citet{Turner2014}). We therefore feel that it is legitimate to neglect the effects of a dead zone and assume a constant $\alpha$ throughout the disc. The dissipative effects can then be described via the standard viscous stress-tensor approach \citep[e.g.][]{1984frh..book.....M}. We also include irradiation from the central star, as described in Papers I and II. For that purpose we use the multi-dimensional hydrodynamical code FARGOCA, as originally presented in \citet{2014MNRAS.440..683L} and in Paper II. The radial extent of our simulations spans from $1$ AU to $50$ AU, covering the range of the MMSN, which is defined from $0.4$ to $36$ AU. We apply the radial boundary conditions described in the appendix of Paper II. The radiative energy associated with viscous heating and stellar irradiation is transported through the disc and emitted from its surfaces. To describe this process we utilize the flux-limited diffusion approximation \citep[FLD,][]{1981ApJ...248..321L}, an approximation that allows the transition from the optically thick mid-plane to the optically thin regions near the disc's surface. The hydrodynamical equations solved in the code have already been described in detail \citep{2009A&A...506..971K}, and the two-temperature approach for the stellar irradiation was described in detail in Paper I, so we refrain from repeating them here. The flux $F_\star$ from the central star is given by \begin{equation} \label{eq:luminosity} F_\star = \frac{R_\star^2 \sigma T_\star^4}{r^2} \ , \end{equation} where $R_\star$ and $T_\star$ give the stellar radius and temperature and $\sigma$ is the Stefan–Boltzmann constant. Stellar heating is responsible for keeping the disc flared in the outer parts (Paper I). The size and temperature of the star change in time (see section~\ref{subsec:time}) and the corresponding values are displayed in table~\ref{tab:Starsize}, where $t_\star$ gives the age of the star. The stellar mass is fixed to $1 M_\odot$.
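As a quick sanity check of eq.~\ref{eq:luminosity} (a sketch added here for illustration; the solar values below are standard constants, not parameters of the paper), evaluating $F_\star$ for the present-day Sun at $r=1$ AU should return the solar constant of roughly $1.36$ kW/m$^2$:

```python
import math

sigma = 5.670374419e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]
R_sun = 6.957e8          # solar radius [m]
T_sun = 5772.0           # solar effective temperature [K]
AU = 1.495978707e11      # 1 astronomical unit [m]

def stellar_flux(R_star, T_star, r):
    """F_star = R_star^2 * sigma * T_star^4 / r^2 (eq. for F_star)."""
    return R_star**2 * sigma * T_star**4 / r**2

F = stellar_flux(R_sun, T_sun, AU)
print(F)  # ~1361 W/m^2, the solar constant
```

The same $1/r^2$ dilution is what makes stellar heating progressively weaker, and flaring important, in the outer disc.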
We describe the effects of time on the size and temperature of the star in section~\ref{subsec:time} and take these effects into account in section~\ref{sec:timeevolve}. {% \begin{table} \centering \begin{tabular}{ccccc} \hline \hline $\dot{M}$ in $M_\odot/\text{yr}$ & $T_\star$ in K & $R_\star$ in $R_\odot$ & $L$ in $L_\odot$ & $t_\star$ in Myr \\ \hline {$1 \times 10^{-7}$} & 4470 & 2.02 & 1.47 & 0.20 \\ {$7 \times 10^{-8}$} & 4470 & 1.99 & 1.42 & 0.25 \\ {$3.5 \times 10^{-8}$} & 4450 & 1.92 & 1.31 & 0.41 \\ {$1.75 \times 10^{-8}$} & 4430 & 1.83 & 1.16 & 0.67 \\ {$8.75 \times 10^{-9}$} & 4405 & 1.70 & 0.98 & 1.10 \\ {$4.375 \times 10^{-9}$} & 4360 & 1.55 & 0.80 & 1.80 \\ \hline \end{tabular} \caption{Stellar parameters for different $\dot{M}$ as time evolves. \label{tab:Starsize} } \end{table} }% The initial surface density profile is $\Sigma_G \propto r^{-15/14}$, which follows from eq.~\ref{eq:mdot} for a flared disc with $H/r \propto r^{2/7}$. We model different $\dot{M}$ values by changing the underlying value of the surface density, while we keep $\alpha$ constant. This does not imply that the same viscosity ($\nu = \alpha H^2 \Omega$) is present at all the different $\dot{M}$ stages, because $H$ changes as the disc evolves (Paper II). In our simulations we set the adiabatic index $\gamma = 1.4$ and use a mean molecular weight of $\mu =2.3$. We use the opacity profile of \citet{1994ApJ...427..987B}, which is derived for micrometer-sized grains. In fact dust growth can be quite fast \citep{2010A&A...513A..57Z}, depleting the micrometer-sized dust grains to some level. However, larger grains (starting from mm size) only make a very small contribution to the opacity at high temperatures. At lower temperatures ($T < 15$ K), the larger mm grains will dominate the opacity, but those temperatures are not relevant within $50$ AU. Additionally, if these grains start to grow and form larger pebbles, they make no contribution to the opacity.
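The slope of the initial profile quoted above can be recovered by bookkeeping power-law exponents in eq.~\ref{eq:mdot}; a minimal sketch in exact rational arithmetic (no physics input beyond the exponents themselves):

```python
from fractions import Fraction

# Mdot = 3*pi*alpha * H^2 * Omega_K * Sigma_G must be constant in r.
# Exponents: H/r ~ r^(2/7)  =>  H ~ r^(9/7);  Omega_K ~ r^(-3/2).
# Sigma_G must cancel the radial dependence of H^2 * Omega_K.
flaring = Fraction(2, 7)           # exponent of H/r for the flared disc
p_H = flaring + 1                  # H = (H/r) * r
p_Omega = Fraction(-3, 2)          # Keplerian rotation frequency
p_Sigma = -(2 * p_H + p_Omega)     # so that Mdot ~ r^0

print(p_Sigma)  # -15/14
```

The same bookkeeping explains why any local change of $H$ (and hence of $\nu$) must be mirrored by an opposite change of $\Sigma_G$ in a constant-$\dot{M}$ disc.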
We define $\Sigma_Z$ as the surface density of heavy elements in condensed form as $\mu$m-sized grains and $\Sigma_G$ as the gas surface density. Thus, the metallicity $Z$ is the ratio $Z=\Sigma_Z / \Sigma_G$, assumed to be independent of $r$ in the disc. We assume that the grains are perfectly coupled to the gas, meaning that the dust-to-gas ratio is the same at every location in the disc. In our simulations we assume metallicities from $0.1\%$ to $3.0\%$. This means that if grain growth occurs and the total amount of heavy elements (independent of size) stays the same, then the metallicity in our sense is reduced (as in Paper II). \subsection{Time evolution of discs} \label{subsec:time} Protoplanetary accretion discs evolve with time and reduce their $\dot{M}$. Observations can give a link between mass accretion $\dot{M}$ and time $t$. \citet{1998ApJ...495..385H} find a correlation between the mass accretion rate and the stellar age \begin{equation} \label{eq:harttime} \log \left( \frac{\dot{M}}{M_\odot /\text{yr}} \right) = -8.00 \pm 0.10 - (1.40 \pm 0.29) \log \left( \frac{t}{10^6 \text{yr}} \right) \ . \end{equation} This correlation includes stars in the Taurus star cluster that span an $\dot{M}$ range from $\dot{M}=5 \times 10^{-6} M_\odot$/yr to $\dot{M}=5 \times 10^{-10} M_\odot$/yr over an age range of $10$ Myr. If the disc is accreting viscously, the evolution of the disc is directly proportional to the viscosity and hence to $\alpha$. However, eq.~\ref{eq:harttime} was derived without any parametrization in $\alpha$. We therefore consider that $\alpha$ in our simulations is not the time evolution parameter of the surface density, but simply a parameter for viscous heating ($Q^+ \propto \alpha$) and for determining $v_r$. The time evolution of the disc is parametrized by eq.~\ref{eq:harttime}, where we use the central values (without the error bars) for our time evolution.
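Eq.~\ref{eq:harttime} with the central values is straightforward to evaluate and to invert; the sketch below (an illustration added here, not code from the paper) recovers $\dot{M}=10^{-8}\,M_\odot$/yr at $1$ Myr and an age of roughly $5$ Myr by the time $\dot{M}$ has dropped to $10^{-9}\,M_\odot$/yr:

```python
import math

def mdot(t_yr):
    """Hartmann et al. (1998) fit, central values, in M_sun/yr."""
    return 10.0 ** (-8.0 - 1.4 * math.log10(t_yr / 1.0e6))

def age_for(mdot_target):
    """Inverted fit: t = 1 Myr * (Mdot / 1e-8)^(-1/1.4)."""
    return 1.0e6 * (mdot_target / 1.0e-8) ** (-1.0 / 1.4)

print(mdot(1.0e6))      # 1e-8 M_sun/yr at 1 Myr, by construction
print(age_for(1.0e-9))  # ~5.2e6 yr
```

This is the sense in which each value of $\dot{M}$ can be mapped onto an evolutionary time without invoking $\alpha$.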
This approach also implies that the evolution time of the disc has to be longer than the time to relax to a radially constant $\dot{M}$ state. This is more critical in the early evolution of the disc (high $\dot{M}$), because the disc evolves more rapidly at that point. But the time the simulations need to settle in a steady state (constant $\dot{M}$ at all $r$) is about a factor of a few ($\approx 5$) shorter than the disc evolution time in eq.~\ref{eq:harttime} for high $\dot{M}$ values and is much shorter for the small $\dot{M}$ ranges, validating our approach. A disc with $\dot{M}=1 \times 10^{-9} M_\odot$/yr can be cleared by photo evaporation quite easily, so that the remaining lifetime is very short, only a few thousand years \citep{2013arXiv1311.1819A}. Therefore we do not simulate discs with very low $\dot{M}$. Recent observations have reported that even objects as old as $10$ Myr can have accretion rates up to $\dot{M}=1 \times 10^{-8} M_\odot$/yr \citep{arXiv:1406.0722}, which is in contrast to eq.~\ref{eq:harttime}. Such high accretion rates at that age cannot be explained by viscous evolution models, unless the disc was very massive in the beginning of the evolution. Even if more of these objects are observed, this will not change the validity of the approach we are taking here. It would just change the time evolution of the disc presented in eq.~\ref{eq:harttime}, but the disc structure as a function of $\dot{M}$ (presented in section~\ref{sec:timeevolve}) would stay the same. During the millions of years of the disc's evolution, the star evolves as well \citep{1998A&A...337..403B}. As the star changes temperature and size, its luminosity changes (eq.~\ref{eq:luminosity}), influencing the amount of stellar heating received by the disc. 
The stellar evolution sequence used was calculated by \citet{1998A&A...337..403B} (see table~\ref{tab:Starsize}), where we display the stellar age, temperature, density, luminosity, and the corresponding accretion rate $\dot{M}$ from \citet{1998ApJ...495..385H}. In Fig.~\ref{fig:LMdot} we plot the stellar luminosity and the $\dot{M}$ rate of the disc as a function of time. While the stellar luminosity drops by a factor of $3$, the $\dot{M}$ rate decreases by two orders of magnitude. Summarizing our methods, we have a disc model that features the full 2D structure with realistic heating and cooling that is linked to the temporal evolution of the star and disc.

\begin{figure}
 \centering
 \includegraphics[scale=0.71]{LMdot.eps}
 \caption{Time evolution of the stellar luminosity after \citet{1998A&A...337..403B} and the evolution of $\dot{M}$ after \citet{1998ApJ...495..385H}. The luminosity of the star reduces by a factor of $3$ in $10$ Myr and the accretion rate reduces by over $2$ orders of magnitude during the same time period. However, an accretion rate of $\dot{M}=1 \times 10^{-9} M_\odot$/yr is already reached after $5$ Myr, when the disc can be cleared by photo-evaporation.
 \label{fig:LMdot}
 }
\end{figure}

The total mass flowing through the disc can be evaluated by integrating the accretion rate specified in eq.~\ref{eq:harttime} over time. This gives a minimum estimate of the total mass of the disc, as leftover material is blown away by photo-evaporation as soon as $\dot{M}<1 \times 10^{-9} M_\odot$/yr. During the disc's lifetime of $5$ Myr, a total of $0.05 M_\odot$ of gas flows through the disc, which thus marks a lower limit on the initial disc mass. To make this time evolution easy to use, we present fits to our simulations in Appendix~\ref{ap:model} that can be applied by other studies that need a simple, but accurate time evolution model of the accretion disc structure.
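The total-mass estimate above can be reproduced numerically. The sketch below integrates eq.~\ref{eq:harttime} (central values) from an assumed starting age of $0.1$ Myr (the lower limit is our assumption; it is not stated explicitly) until $\dot{M}$ drops to the photo-evaporation threshold of $1\times10^{-9}\,M_\odot$/yr:

```python
import math

def mdot(t_yr):
    # Hartmann et al. (1998) correlation, central values.
    return 10.0 ** (-8.00 - 1.40 * math.log10(t_yr / 1.0e6))

def accreted_mass(t_start, t_end, n=200000):
    """Trapezoidal integration of mdot(t) -> total accreted gas mass (M_sun)."""
    dt = (t_end - t_start) / n
    total = 0.5 * (mdot(t_start) + mdot(t_end))
    for i in range(1, n):
        total += mdot(t_start + i * dt)
    return total * dt

# Time at which mdot reaches the photo-evaporation threshold 1e-9 M_sun/yr:
t_end = 1.0e6 * 10.0 ** ((-9.0 - (-8.00)) / -1.40)   # ~5.2 Myr
mass = accreted_mass(1.0e5, t_end)                    # ~0.05 M_sun
```

With these assumptions the integral indeed gives roughly $0.05\,M_\odot$, consistent with the quoted lower limit on the disc mass.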
\section{Disc structure} \label{sec:discstructure} In this section we compare the structure of a simulated $\dot{M}$ disc with the MMSN and the CY2010 nebula to point out crucial differences in the disc structure and their effect on the formation of planetesimals and planetary embryos. The simulations in this section feature a metallicity of $0.5\%$ in $\mu m$-sized dust grains. This value allows the disc to contain more heavy elements that could represent pebbles, planetesimals, or planetary cores (that do not contribute to the opacity) without increasing the total amount of heavy elements (grains and larger particles) to very high values. \subsection{$\dot{M}$ disc} \label{subsec:dotMdisc} In Fig.~\ref{fig:HrTSigall} the temperature (top), the aspect ratio $H/r$ (middle), and the surface density $\Sigma_G$ (bottom) are displayed. The MMSN and the CY2010 model follow power laws in all disc quantities. These are quoted in table~\ref{tab:power}. The simulated $\dot{M}=3.5 \times 10^{-8} M_\odot$/yr disc model features bumps and dips in all disc quantities. More specifically, the simulation features a bump in $T$ at the ice line (at $T_{\text{ice}} \approx 170$ K, illustrated in Fig.~\ref{fig:HrTSigall}). As ice grains sublimate at higher temperature, the opacity reduces for larger $T$ and therefore the cooling rate of the disc [$D \propto 1/(\kappa \rho)$] increases. This reduces the gradient of the temperature for high $T$ compared to $T<T_{\text{ice}}$, creating an inflection in $T$. This also changes the scale height $H/r$ of the disc, which in turn influences the viscosity. Since the disc features a radially constant $\dot{M}$, the disc adapts to this change in viscosity by changing $\Sigma_G$, creating the flattening of $\Sigma_G$ at the ice line (Paper II). 
{%
\begin{table}
 \centering
 \begin{tabular}{ccc}
 \hline \hline
 & \textbf{MMSN} & \textbf{CY2010} \\\hline
 {\textbf{$H/r$}} & 1/4 & 2/7 \\
 {\textbf{$T$}} & -1/2 & -3/7 \\
 {\textbf{$\Sigma_G$}} & -3/2 & -3/2 \\
 {\textbf{$\Delta v / c_s$}} & 1/4 & 2/7 \\
 \hline
 \end{tabular}
 \caption{Exponents of the power laws in $r$ used in the MMSN and the CY2010 models.
 \label{tab:power}
 }
\end{table}
}%

\begin{figure}
 \centering
 \includegraphics[scale=0.71]{MMSN.eps}
 \caption{Mid-plane temperature $T$ (top), aspect ratio $H/r$ (middle), and integrated surface density $\Sigma_G$ (bottom) for a disc with $\dot{M}= 3.5 \times 10^{-8} M_\odot$/yr and for the MMSN and the CY2010 model. The grey area in the temperature plot marks the region of the opacity transition at the ice line. The green line in the surface density plot indicates a fit for the surface density in the outer part of the disc. The main difference between the $\dot{M}$ disc and the MMSN and CY2010 models is the higher temperature in the inner parts of the disc, which is caused by viscous heating. This increases $H/r$ in the inner parts and in turn reduces $\Sigma_G$ there, because the associated change in viscosity is compensated by a change in $\Sigma_G$.
 \label{fig:HrTSigall}
 }
\end{figure}

In the inner parts of the disc, our models feature a much higher temperature than the MMSN and the CY2010 models due to the inclusion of viscous heating. In the outer parts of the disc, where viscous heating becomes negligible and the temperature is solely determined by stellar irradiation, the temperatures of our simulations are comparable to the CY2010 model. The aspect ratio $H/r$ is related to the temperature through the relation of hydrostatic equilibrium:
\begin{equation}
T = \left( \frac{H}{r} \right)^2 \frac{G M_\star}{r} \frac{\mu}{\cal R} \ ,
\end{equation}
where $\cal R$ is the gas constant, $\mu$ the mean molecular weight, $G$ the gravitational constant, and $M_\star$ the mass of the star.
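Inverting the hydrostatic relation above gives the aspect ratio directly from the mid-plane temperature. A minimal sketch in SI units, where the mean molecular weight $\mu = 2.3$ g/mol is our assumed value (it is not specified in this section):

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # solar mass, kg
AU = 1.496e11     # astronomical unit, m
R_GAS = 8.314     # gas constant, J mol^-1 K^-1
MU = 2.3e-3       # mean molecular weight, kg/mol (assumed value)

def aspect_ratio(T, r_au, m_star=M_SUN):
    """H/r from the hydrostatic relation T = (H/r)^2 (G M_star / r) (mu / R)."""
    r = r_au * AU
    return math.sqrt(T * r * R_GAS / (G * m_star * MU))

# e.g. T = 200 K at 1 AU gives H/r of roughly 0.03 for these parameters.
```

This is why the hot, viscously heated inner disc is also the geometrically thicker one: $H/r$ scales as $\sqrt{T r}$ at fixed stellar mass.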
Therefore, the $H/r$ plot (middle panel in Fig.~\ref{fig:HrTSigall}) features the same properties as the temperature plot: i) a higher $H/r$ in the inner disc for our simulations and ii) about the same $H/r$ in the outer disc compared to the CY2010 model. Strikingly, the aspect ratio of the MMSN is off by about $50\%$ at $20$ AU compared to our simulations. As the MMSN model does not feature viscous heating in the inner parts, $H/r$ is smaller there, and because the radial change of $H/r$ is roughly the same in both models in the outer parts of the disc, the MMSN has a much smaller $H/r$ in the outer parts of the disc. The $H/r$ diagram also features bumps and dips correlated with the wiggles in the temperature diagram. The dip in $H/r$ starting beyond $3$ AU represents a shadowed region inside the disc. The stellar irradiation does not penetrate well into this region because it is absorbed by the bump in $H/r$ at $\approx 3$ AU. We emphasise that the drop in $H/r$ is caused by the change of the cooling rate in the disc as the opacity changes, and not by the heat transition from regions dominated by viscous heating to regions dominated by stellar heating (Paper II). If a constant opacity had been used, shadowed regions would not have appeared in the disc (Paper I). A drop in $H/r$ normally implies outward migration for low-mass planets (Papers I and II), a phenomenon that is not seen in the MMSN and CY2010 models. The surface density profile (bottom panel in Fig.~\ref{fig:HrTSigall}) of our simulations shows an inflection at the same location as where $H/r$ shows a bump. Generally our simulations feature a lower surface density in the inner parts of the disc than in the MMSN and the CY2010 nebula. In the outer parts, on the other hand, the surface density of our simulations is much higher.
This difference in surface density between our simulations and the other two models is caused by our underlying $\dot{M}$ approximation (see eq.~\ref{eq:mdot}), which gives a shallower surface density slope for the outer disc. \citet{2010ApJ...723.1241A} find that the gradients of the surface density profile of accretion discs in the Ophiuchus star forming region are between $0.4$ and $1.1$. Our simulations feature a surface density gradient of $\approx 1$ in the outer parts of the disc, but it is much shallower in the inner parts, matching the observations in contrast to the MMSN and CY2010 model. \subsection{Influence on planet formation} \label{subsec:planetform} The streaming instability can lead to formation of planetesimals as a first step in forming planetary embryos and planets \citep{2007Natur.448.1022J,2007ApJ...662..627J,2010ApJ...722.1437B,2010ApJ...722L.220B}. The important quantity for triggering particle concentration by the streaming instability is $\Delta$, which is the difference between the azimuthal mean gas flow and the Keplerian orbit divided by the sound speed. This difference is caused by the reduction of the effective gravitational force by the radially outwards pointing force of the radial pressure gradient. The parameter $\Delta$ is given by \begin{equation} \label{eq:stream} \Delta = \frac{\Delta v}{c_s} = \eta \frac{v_K}{c_s} = - \frac{1}{2} \frac{H}{r} \frac{\partial \ln (P)}{\partial \ln (r)} \ , \end{equation} where $v_K=\sqrt{GM/r}$ is the Keplerian velocity and $c_s / v_K = H/r$. Here $\eta$ represents a measure of the gas pressure support \citep{1986Icar...67..375N}. The sub-Keplerian rotation of the gas makes small solid particles drift towards the star. In Fig.~\ref{fig:Petaall} the radial pressure gradient $\partial \ln(P) / \partial \ln(r)$ (top) and $\Delta$ (bottom) are displayed. The radial pressure gradient is constant for the MMSN and the CY2010 model, since these models are built upon strict power laws. 
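For strict power laws, eq.~\ref{eq:stream} can be evaluated in closed form. A minimal sketch, using the MMSN exponents from table~\ref{tab:power} ($\Sigma_G \propto r^{-3/2}$, $T \propto r^{-1/2}$) and an assumed MMSN aspect ratio of $H/r \approx 0.033$ at $1$ AU (our illustrative value):

```python
def dlnP_dlnr(s, beta):
    """Mid-plane pressure gradient d ln P / d ln r for Sigma ~ r^-s and
    T ~ r^-beta, using P ~ Sigma * T / H and H ~ r^((3 - beta)/2)
    from hydrostatic equilibrium."""
    return -(s + beta + (3.0 - beta) / 2.0)

def delta(aspect_ratio, dlnp):
    """Pressure-support parameter Delta = -(1/2) (H/r) d ln P / d ln r."""
    return -0.5 * aspect_ratio * dlnp

# MMSN at 1 AU: s = 3/2, beta = 1/2, H/r ~ 0.033 (assumed)
mmsn_delta = delta(0.033, dlnP_dlnr(1.5, 0.5))
```

Because $\Delta$ is proportional to $H/r$ for a fixed pressure exponent, any local dip in the aspect ratio translates directly into a dip in $\Delta$.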
The simulations, on the other hand, feature bumps and dips that result from the opacity transition. In the inner parts of the disc, the pressure gradient is much shallower in the $\dot{M}$ disc. We recall here that a shallower negative pressure gradient is preferred in order to form planetesimals.

\begin{figure}
 \centering
 \includegraphics[scale=0.71]{MMSNstream.eps}
 \caption{Radial pressure gradient (top) and pressure support parameter $\Delta = \Delta v/c_s$ (bottom) for the disc with $\dot{M}= 3.5 \times 10^{-8} M_\odot$/yr and for the MMSN and CY2010 models. The grey area indicates the radial range of the opacity transition at the ice line, as already indicated in Fig.~\ref{fig:HrTSigall}. In contrast to the MMSN and CY2010 models, which have a constant pressure gradient and therefore a steadily increasing $\Delta$ parameter, the $\dot{M}$ model features a dip in the profile at $\approx 5$ AU, which makes the formation of planetesimals in this region more likely. The reduced $\Delta$ parameter in the outer parts of the disc also makes it more likely to form planetesimals there compared to the MMSN and CY2010 models. The horizontal green lines at $\Delta=0.025$ and $\Delta=0.05$ mark the values for which fractions of heavy elements of $\approx 1.5\%$ and $\approx 2\%$, respectively, are needed for the streaming instability to operate \citep[see][]{2010ApJ...722L.220B}.
 \label{fig:Petaall}
 }
\end{figure}

The $\Delta$ parameter (bottom panel in Fig.~\ref{fig:Petaall}) is nearly constant for our simulation in the inner parts of the disc, while it steadily decreases towards the star in the MMSN and CY2010 models. In the outer parts, $\Delta$ is up to $50\%$ smaller in the $\dot{M}$ model than in the MMSN, indicating significantly different conditions for forming planetesimals. A reduced $\Delta$ parameter significantly helps the formation of large clumps via the streaming instability \citep{2010ApJ...722L.220B}.
The formation of clumps also depends on the amount of heavy elements and the particle sizes in the disc, where a larger amount of heavy elements strongly increases the clumping. The amount of heavy elements is not restricted to the metallicity defined with $\mu$m-sized dust grains, but also includes larger grains and pebbles that do not contribute to the opacity profile of the disc. For $\Delta=0.025$ (lower horizontal line in Fig.~\ref{fig:Petaall}), a fraction of $\approx 1.5\%$ in heavy elements is needed to form large clumps, while for $\Delta=0.05$ (top horizontal green line in Fig.~\ref{fig:Petaall}), a fraction of $\approx 2\%$ in heavy elements is already needed. For $\Delta = 0.1$ a very high, probably not achievable, amount of heavy elements is needed for the streaming instability to work. A reduction in $\Delta$ in the outer parts of the disc, as proposed by our model, therefore makes the formation of planetesimals much easier in the Kuiper belt. Planetesimals can grow further by mutual collisions \citep{2010AJ....139.1297L} or by the accretion of pebbles. In the latter case, core growth enters the fast Hill regime when it reaches the `transition mass',
\begin{equation}
\label{eq:Mtrans}
M_{\rm t} \approx \sqrt{\frac{1}{3}} \Delta^3 \left( \frac{H}{r} \right)^3 M_\star \ ,
\end{equation}
where $M_\star$ is the stellar mass \citep{2012A&A...544A..32L}. This corresponds to $0.03 M_{\rm Earth}$ at $5.2$ AU in our simulated $\dot{M}$ disc. A reduced disc scale height and $\Delta$ parameter thus help smaller embryos to reach this growth regime, where cores are formed on time scales of $10^5$\,yr even at wide orbits (50\,AU), provided the pebble surface density is similar to MMSN estimates. Furthermore, a lower $\Delta$ makes accretion more efficient by increasing the proportion of pebbles that are accreted by a core relative to those that drift past \citep{2012ApJ...747..115O}.
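Eq.~\ref{eq:Mtrans} is straightforward to evaluate. In the sketch below, the values $\Delta = 0.05$ and $H/r = 0.05$ are illustrative assumptions of our own, not the simulated disc values at $5.2$ AU:

```python
import math

M_SUN_IN_EARTH = 332946.0   # Earth masses per solar mass

def transition_mass(delta, aspect_ratio, m_star=1.0):
    """Transition mass to Hill-regime pebble accretion:
    M_t ~ sqrt(1/3) * Delta^3 * (H/r)^3 * M_star, in units of m_star."""
    return math.sqrt(1.0 / 3.0) * delta**3 * aspect_ratio**3 * m_star

# Illustrative: Delta = H/r = 0.05 gives a few 1e-3 Earth masses.
# The cubic dependence on both Delta and H/r makes the transition mass
# extremely sensitive to the local disc structure.
mt_earth = transition_mass(0.05, 0.05) * M_SUN_IN_EARTH
```

The steep $\Delta^3 (H/r)^3$ scaling is the reason why the shadowed, low-$H/r$ regions are such favourable sites for reaching the fast Hill growth regime.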
Once planetary embryos have formed, these embryos are subject to gas-driven migration (for a review see \citet{2013arXiv1312.4293B}). The migration rate of planets can be determined by 3D simulations of protoplanetary discs \citep{2009A&A...506..971K, 2011A&A...536A..77B}. However, these simulations are quite computationally expensive. Instead, one can calculate the torque acting on an embedded planet from the disc structure \citep{2010MNRAS.401.1950P, 2011MNRAS.410..293P}. The torque formula of \citet{2011MNRAS.410..293P} takes torque saturation due to thermal diffusion into account and matches the 3D simulations of planets above $15M_{\text{Earth}}$ well \citep{2011A&A...536A..77B}. However, for low-mass planets ($M_P \leq 5M_{\text{Earth}}$), there is still a discrepancy between the torque formula and 3D simulations, which actually show a more negative torque \citep{2014MNRAS.440..683L}. Nevertheless, the predictions from the torque formula can already give first clues about the planetary migration history in evolving accretion discs. In \citet{2011MNRAS.410..293P} the total torque acting on an embedded planet is a composition of its Lindblad torque and its corotation torque
\begin{equation}
\Gamma_{\text{tot}} = \Gamma_{\text L} + \Gamma_{\text C} \ .
\end{equation}
The Lindblad and corotation torques depend on the local radial gradients of surface density $\Sigma_G \propto r^{-s}$, temperature $T \propto r^{-\beta}$, and entropy $S \propto r^{-\xi}$, with $\xi = \beta - (\gamma - 1) s$. Roughly speaking, for $\Sigma_G$ gradients that are not too negative, a radially strong negative gradient in entropy, caused by a large negative gradient in temperature (large $\beta$), will lead to outward migration, while a shallow gradient in entropy will not.

\begin{figure}
 \centering
 \includegraphics[width=1.0\linewidth]{Mdot3508star192.eps}
 \caption{Torque acting on planets in a disc with $\dot{M}=3.5 \times 10^{-8} M_\odot$/yr.
The black lines encircle the regions of outward migration. The vertical red lines indicate the ice line at $170$ K. The region of outward migration is exactly correlated with the region of the disc where the temperature gradient is steepest, and it can trap planets between $5$ and $30$ Earth masses.
 \label{fig:Migsim}
 }
\end{figure}

In Fig.~\ref{fig:Migsim} the migration map for the $\dot{M}=3.5 \times 10^{-8} M_\odot$/yr disc is displayed. The torque parameter $\Gamma_0$ is defined as
\begin{equation}
\Gamma_0 = \left(\frac{q}{h}\right)^2 \Sigma r^4 \Omega^2 \ ,
\end{equation}
where $q$ is the planet-to-star mass ratio, $h=H/r$, $r$ the semi-major axis, and $\Omega$ the Keplerian angular frequency. The actual speed of inward migration changes with planetary mass not only because the torque is proportional to the planetary mass squared, but also because the mass changes the saturation effects of the corotation torque \citep{2011MNRAS.410..293P}. The regions of outward migration correspond to the regions in the disc where $H/r$ decreases with $r$. The MMSN and the CY2010 model feature a flared disc for all $r$ (Fig.~\ref{fig:HrTSigall}). This imposes a very shallow temperature gradient leading to a shallow entropy gradient, which is not enough to produce a corotation torque that can overcompensate the Lindblad torque. Planets, regardless of mass, therefore migrate inwards towards the star in both the MMSN and CY2010 models. We do not display these migration maps here. This lack of zones of outward migration makes the formation of giant planet cores much harder.

\section{Influence of the time evolution of disc and star}
\label{sec:timeevolve}

The protoplanetary disc gets accreted onto the star over millions of years, but the star also evolves on these time scales \citep{1998A&A...337..403B}, contracting and changing its size and luminosity. The luminosity in turn determines the stellar heating that is absorbed by the disc, and thus influences the disc structure.
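Returning briefly to the torque normalisation $\Gamma_0$ introduced above, it can be evaluated with the minimal sketch below; the disc values in the example ($H/r = 0.05$, $\Sigma_G = 150$ kg m$^{-2}$ at $5.2$ AU) are illustrative assumptions of our own:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
M_EARTH = 5.972e24 # kg
AU = 1.496e11      # m

def gamma0(m_planet, h, sigma, r, m_star=M_SUN):
    """Torque normalisation Gamma_0 = (q/h)^2 * Sigma * r^4 * Omega^2
    in SI units (N m), with q the planet-to-star mass ratio."""
    q = m_planet / m_star
    omega = math.sqrt(G * m_star / r**3)   # Keplerian frequency
    return (q / h) ** 2 * sigma * r**4 * omega**2

# Gamma_0 scales with the square of the planet mass, so the torque per
# unit planet mass (and hence the migration rate) grows linearly with mass.
g10 = gamma0(10.0 * M_EARTH, 0.05, 150.0, 5.2 * AU)
```

The quadratic mass dependence is why type-I migration is most dangerous for cores of several Earth masses, which is exactly the mass range the trapping regions can protect.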
By using eq.~\ref{eq:harttime}, we can link different $\dot{M}$ stages to different times. These times then give us the stellar evolution time and thus the stellar radius and temperature (table~\ref{tab:Starsize}). The stellar temperature stays roughly constant with time, while the stellar radius becomes significantly smaller as time evolves. This means that for discs with lower $\dot{M}$, the stellar heating will decrease as well. This will influence the outer regions of the disc, which are dominated by stellar heating. An increase of a factor of $2$ in stellar luminosity results in a change of up to $\approx 20\%$ in $H/r$ and of up to $\approx 45\%$ in temperature in the parts dominated by stellar irradiation. In this section all shown simulations feature a metallicity of $0.5\%$, as in section~\ref{sec:discstructure}. In Fig.~\ref{fig:HrTSigallfit} the mid-plane temperature (top), $H/r$ (middle), and the surface density $\Sigma_G$ (bottom) are displayed for different values of $\dot{M}$ ranging from $\dot{M}=1 \times 10^{-7} M_\odot$/yr to $\dot{M}=4.375 \times 10^{-9} M_\odot$/yr. The temperature in the inner parts of the disc drops as $\dot{M}$ decreases, because the viscous heating decreases along with $\Sigma_G$. In the outer parts of the disc, the temperature also drops, because the stellar irradiation decreases in time. However, this drop in temperature is not as large as in the inner parts of the disc. In the late stages of the disc evolution, the disc becomes so cold that ice grains will exist throughout the disc, so that no opacity transition is visible any more. For low $\dot{M}$ there is a temperature inversion at approximately $5$ AU. This inversion is caused by the difference between the vertical height at which stellar photons are absorbed and the vertical height from which the disc cools through diffusion.
\begin{figure} \centering \includegraphics[scale=0.71]{THrSfit.eps} \caption{Mid-plane temperature (top), $H/r$ (middle), and surface density $\Sigma_G$ (bottom) for discs with different values of $\dot{M}$ around an evolving star. The grey area in the temperature plot marks the transition range in temperature for the opacity law at the ice line. The black lines mark the fits discussed in Appendix~\ref{ap:model}. For lower $\dot{M}$ rates, the inner parts of the disc are colder, as viscous heating is reduced. For very low $\dot{M}$, the temperature is below the ice condensation temperature throughout the disc's mid plane, and therefore the bump in the inner part of the protoplanetary disc vanishes. \label{fig:HrTSigallfit} } \end{figure} As $\dot{M}$ evolves, the shadowed regions of the disc (seen by a reduction in $H/r$ as a function of $r$) shrink. For very low $\dot{M}$, no bumps in $H/r$ exist any more, because the temperature of the disc is so low that the opacity transition is no longer inside the computed domain, but further inside at $r<1$ AU. This will have important consequences for planet migration, since outward migration is only possible when radially strong negative temperature gradients exist. The wiggles in the temperature profile directly translate to dips in the surface density profile, because a change in the viscosity of the disc must be directly compensated for by a change in the disc's surface density (eq.~\ref{eq:mdot}) to maintain a constant $\dot{M}$. In the very late stages of the disc evolution, when the accretion rate and the surface density are very low ($\dot{M} \leq 1 \times 10^{-9} M_\odot$/yr), the disc will experience rapid photo-evaporation \citep{2013arXiv1311.1819A}. \begin{figure} \centering \includegraphics[scale=0.71]{etaall.eps} \caption{ $\Delta$ parameter for the disc simulations with evolving $\dot{M}$. The black lines mark the fits discussed in Appendix~\ref{ap:model}. 
The $\Delta$ parameter describes the triggering of particle concentrations in the streaming instability, which can lead to planetesimal formation. In the regions of lower $\Delta$ at $\approx 5$ AU, the formation of planetesimals is more likely, compared to the inner regions of the disc ($\approx 2$ AU) where $\Delta$ is higher. \label{fig:etaall} } \end{figure} In Fig.~\ref{fig:etaall} the $\Delta$ parameter (eq.~\ref{eq:stream}) for the discs with different $\dot{M}$ is displayed. In the inner parts of the disc, $\Delta$ only slightly reduces as $\dot{M}$ shrinks, as long as $\dot{M}$ is still high enough that the inner parts of the disc are dominated by viscous heating. As soon as the disc starts to become dominated by stellar heating in the inner parts, $\Delta$ drops by a significant factor. This is because $\Delta$ is proportional to $H/r$, which shows exactly that behaviour as well. This reduction of $\Delta$ for low $\dot{M}$ helps the formation of planetesimals significantly, since lower metallicity is needed to achieve clumping \citep{2010ApJ...722L.220B}, indicating that in the very late stages of the disc evolution, planetesimal formation becomes easier. In the outer parts of the disc, $\Delta$ is very high throughout the different $\dot{M}$ stages and changes only slightly, following the reduction in $H/r$ as the stellar luminosity decreases. This indicates that for the formation of planetesimals, very high metallicity is needed in the outer disc. The growth of planetary embryos via pebble accretion is significantly accelerated, when the embryo reaches the Hill accretion regime, defined in eq.~\ref{eq:Mtrans}. Through all $\dot{M}$ stages, the minimum of $H/r$ and $\Delta$ coincides, reducing the transition mass towards the Hill accretion regime and making planet formation in these locations easier. 
Additionally, as $H/r$ and $\Delta$ drop in the late stages of evolution, pebble accretion is much more efficient because the transition mass is reduced, making core growth more efficient in the late stages of the disc, provided that enough pebbles are still available in the disc.

\begin{figure}
 \centering
 \includegraphics[width=1.0\linewidth]{Mdot708star199.eps}
 \includegraphics[width=1.0\linewidth]{Mdot87509star17.eps}
 \caption{Gravitational torque acting on planets in discs with $\dot{M}=7 \times 10^{-8} M_\odot$/yr (top) and $\dot{M}= 8.75 \times 10^{-9} M_\odot$/yr (bottom). The black lines encircle the regions of outward migration. The vertical red lines indicate the ice line at $170$ K. The white line represents the zero-torque line of the fits presented in Appendix~\ref{ap:model}. As $\dot{M}$ drops, the regions of outward migration shrink, so that only planets with lower mass can be saved from inward type-I migration. Additionally, the orbital distance at which outward migration acts becomes smaller with decreasing $\dot{M}$. This is caused by the shallower gradient in temperature for lower $\dot{M}$ discs.
 \label{fig:Migfit}
 }
\end{figure}

In Fig.~\ref{fig:Migfit} the migration maps for the $\dot{M}=7 \times 10^{-8} M_\odot$/yr (top) and $\dot{M}= 8.75 \times 10^{-9} M_\odot$/yr (bottom) discs are displayed. The migration map for $\dot{M}=3.5 \times 10^{-8} M_\odot$/yr can be found in Fig.~\ref{fig:Migsim}. As the disc evolves and $\dot{M}$ decreases, the area of outward migration shrinks and moves inwards. This is caused by the disc becoming colder, whereby the region of the opacity transition at the ice line moves inwards as well. This implies that the strong radial negative gradients in temperature also move inwards, shifting the regions of outward migration to smaller radii (Paper II).
In the late stages of the disc evolution, not only is the region of outward migration shifted towards the inner regions of the disc, but outward migration also seems to be supported only for lower-mass planets than in earlier disc evolution stages (high $\dot{M}$). This was already observed in Paper II.

\section{Influence of metallicity}
\label{sec:metallicity}

As the disc evolves in time, the micrometre-sized dust grains can grow into larger particles, which (above mm size) no longer contribute to the opacity of the disc; the opacity of the disc therefore decreases as the number of micrometre-sized dust grains is reduced. In the later stages of the disc, the number of small dust grains can be replenished by destructive collisions between planetesimals and larger objects, which would increase the opacity again. We therefore extend our model to range over a variety of metallicities, namely from $0.1\%$ to $3.0\%$.

\begin{figure}[!ht]
 \centering
 \includegraphics[width=0.85\linewidth]{Tmetalall.eps}
 \caption{Mid-plane temperature for discs with various $\dot{M}$ and metallicity. The metallicity is $0.1\%$ in the top panel, $1.0\%$ in the second panel from the top, $2.0\%$ in the third panel from the top, and $3.0\%$ in the bottom panel. An increasing metallicity reduces the cooling rate of the disc, making the disc hotter.
 \label{fig:Tmetalall}
 }
\end{figure}

The mid-plane temperatures for discs with metallicities of $0.1\%$, $1.0\%$, $2.0\%$, and $3.0\%$ (from top to bottom) are displayed in Fig.~\ref{fig:Tmetalall}. The $H/r$ and surface density profiles of those discs are shown in Appendix~\ref{ap:model}. Clearly, discs with lower metallicity are colder than their high-metallicity counterparts. This is caused by the more efficient cooling of low-metallicity discs, because $D \propto 1/(\rho \kappa_R)$. The general trends of the disc structure, however, hold for all metallicities, namely that all discs feature a bump in temperature at the opacity transition at the ice line.
However, discs with $\dot{M}<1 \times 10^{-8} M_\odot$/yr and a metallicity of $0.1\%$ no longer show a bump in temperature, and they follow a perfect flaring-disc profile, because they are so cold that the transition in opacity at the ice line is no longer present in the outer parts of the disc. Additionally, the temperature inversion observed in sect.~\ref{sec:timeevolve} occurs at higher $\dot{M}$ rates for low-metallicity discs than for discs with higher metallicity. Discs that have the same $\dot{M}$ value but different metallicity do not have the same surface density profile (see Fig.~\ref{fig:Sigmetalall} in Appendix~\ref{ap:model}). As the opacity increases, the disc becomes hotter because it cools less efficiently, and therefore $H$ increases, which then results in a higher viscosity, reducing $\Sigma_G$ as $\dot{M}$ is constant in $r$ (eq.~\ref{eq:Mdothydro}). An increase in metallicity therefore does not translate into a temperature increase by the same factor. The opposite applies for a lower metallicity, which results in a cooler disc (smaller $H$), decreasing the viscosity and therefore increasing $\Sigma_G$. The increase in temperature is much greater in the inner parts of the disc than in the outer parts. The reason is that the inner disc is dominated by viscous heating, which does not depend on the opacity; there, only the cooling rate depends on the opacity, so the opacity change affects the temperature solely through the cooling. In the outer disc, however, the cooling and the heating both depend on the opacity. A reduced opacity increases the cooling, making the disc thinner ($H/r$ decreases), but at the same time, the absorption of stellar photons is also reduced in the upper layers because fewer dust grains are available.
This means that stellar irradiation can be efficient at much smaller heights from the mid plane, heating the disc at a smaller vertical height $z$, shifting the heated region closer to the mid plane, and making the disc hotter. This effect counterbalances the increased cooling, so that the disc stays roughly at the same temperature in the parts that are dominated by stellar irradiation for all metallicities, as long as the changes in metallicity are not too big. The higher temperature in the inner disc, especially for lower $\dot{M}$, caused by higher metallicity has important implications for planetesimal formation, planetary migration, and the evolution of the ice line in time. These are discussed in sect.~\ref{sec:formplanets}. \section{Discussion} \label{sec:formplanets} Planets will form during the evolution of the protoplanetary accretion disc. We discuss here the general implications of the disc structure on the formation of planetesimals and embryos (sect.~\ref{subsec:SI}) and giant planets (sect.~\ref{subsec:giants}). Additionally, we focus on the inward motion of the ice line as the disc evolves in time (sect.~\ref{subsec:iceline}). \subsection{Planetesimal and embryo formation} \label{subsec:SI} The first building blocks of planetary embryos are planetesimals, which can be formed by the streaming instability \citep{2007ApJ...662..627J}. The streaming instability requires not only a small $\Delta$ parameter (Fig.~\ref{fig:etaall}), but also an increased amount of heavy elements \citep{2010ApJ...722L.220B}. Both of these requirements are satisfied at the ice line, where an increased amount of heavy elements is likely to be present owing to the condensation of ice grains and pebbles \citep{2013A&A...552A.137R}. Additionally this region features a low $\Delta$ value, because it is in the shadowed region of the disc, the minimum of $H/r$. 
This feature is independent of the overall metallicity of the disc (sect.~\ref{sec:metallicity}), but because the disc is hotter for higher metallicity discs, the $\Delta$ parameter is also larger, as $H/r$ increases. This implies that for higher metallicity discs the streaming instability might not be able to operate, because $H/r$ is larger. However, the formation of the first planetesimals and planetary embryos is still most likely at the location of the ice line, since the disc features a minimum of $\Delta$ in that region. In the inner parts of the disc, the $\Delta$ parameter is also smaller. But since the disc is very hot there, the icy particles have evaporated, and fewer metal grains are thus available for triggering the streaming instability. If indeed the first planetesimals form at the ice line, then embryo formation is most likely to occur here as well. These embryos can then grow via collisions with each other and through pebble accretion \citep{2012A&A...544A..32L}. The growth rate versus the migration rate is now key to determining what kind of planet emerges. If the planet grows rapidly to several Earth masses, then it can overcome the inward type-I migration, be trapped in a region of outward migration \citep{2014arXiv1408.6094L}, and eventually become the core of a giant planet at a large orbital radius. If the planet does not grow fast enough to be trapped in a region of outward migration, the planet migrates inwards and can end up as either a giant planet in the inner system (if the core can grow fast enough in the inner parts of the disc) or `just' a hot super-Earth or mini-Neptune \citep{2014arXiv1407.6011C}. As the disc evolves in time, the $\Delta$ parameter becomes smaller in the inner disc, meaning that the formation of planetesimals might be more likely at later times. However, the farther the disc has evolved in time, the less time is left for giant planets to form, because those have to accrete gas from the surrounding disc.
Planetesimal formation might also be triggered indirectly by grain growth. As the metallicity of $\mu m$ dust grains drops, e.g. due to grain growth, $H/r$ decreases because of the drop in opacity, which increases the cooling of the disc. This then leads to a lower $\Delta$ value, making it easier for the streaming instability to operate. These two effects would imply that the formation of planetesimals is easier in the late stages of the disc evolution, thus potentially explaining the abundance of small planets compared to gas giants \citep{2013ApJ...766...81F}, which require an earlier core formation in order to accrete a gaseous envelope. In the outer parts of the disc ($r>10$ AU), the $\Delta$ parameter of the $\dot{M}$ discs, regardless of the disc's metallicity, is up to $50\%$ smaller than in the MMSN and CY2010 models. A reduction in $\Delta$ by this factor significantly helps the clumping of particles in the streaming instability, making the formation of planetesimals much more likely in our presented model. This makes it much easier to form Kuiper belt objects and the cores of Neptune and Uranus in our model. Over time, as $\Sigma_G$ decreases, the $\Delta$ parameter even decreases slightly in the outer disc, making the formation of planetesimals more likely in the later stages. Such late formation of planetesimals in the outer disc could explain why Neptune and Uranus did not grow to become gas giants \citep{2014arXiv1408.6087L, 2014arXiv1408.6094L}. A full analysis of the effect of the disc structure on planetary formation and migration will require introducing detailed prescriptions for planetary growth. This will be the topic of a future paper. \subsection{Giant planet formation} \label{subsec:giants} For solar-like stars, the occurrence of a giant planet ($M_{\rm P} > 0.1M_{\rm Jup}$) is related to the metallicity of the host star. 
In particular, higher metallicity implies a higher occurrence rate of giant planets \citep{2004A&A...415.1153S, 2005ApJ...622.1102F}. This implies that the formation of planets in systems with higher metallicity might be systematically different from systems with lower metallicity. In the core accretion scenario, a core forms first, which then accretes gas to become a giant planet. The core itself migrates through the disc as it grows. \citet{2014arXiv1407.6011C} show that planets that are not trapped inside regions of outward migration are more likely to become super-Earths rather than giant planets, making zones of outward migration important for the final structure of planetary systems. The mid-plane temperature profiles for discs with different $\dot{M}$ and metallicity are shown in Fig.~\ref{fig:Tmetalall}. They imply that higher metallicity results in a hotter disc that can keep the shadowed regions longer (see also Fig.~\ref{fig:Hrmetalall}), which are able to support outward migration at a few AU for a longer time compared to lower metallicity discs. This is illustrated in Fig.~\ref{fig:Migmetal}, which shows the migration regions where outward migration is possible for discs with different metallicity and for two $\dot{M}$ values: $\dot{M}= 1.0 \times 10^{-7} M_\odot$/yr and $\dot{M}= 8.75 \times 10^{-9} M_\odot$/yr. \begin{figure} \centering \includegraphics[scale=0.71]{Mig107contour.eps} \includegraphics[scale=0.71]{Mig87509contour.eps} \caption{Migration contours for discs with different metallicities. The top panel shows $\dot{M}= 1.0 \times 10^{-7} M_\odot$/yr; the bottom panel shows $\dot{M}= 8.75 \times 10^{-9} M_\odot$/yr. Planets migrate outwards in the regions of the disc that are enclosed by the solid lines, where the colours mark different metallicities. 
\label{fig:Migmetal} } \end{figure} For the $\dot{M}= 1.0 \times 10^{-7} M_\odot$/yr discs, there are three clear trends visible for higher metallicities: \begin{itemize} \item the regions of outward migration are shifted farther away from the central star, because the disc is hotter and therefore the opacity transition at the ice line is farther away; \item the regions of outward migration are larger in radial extent, because the shadowed regions in the disc are larger, which enlarges the region where a steep radial temperature gradient exists, in turn enlarging the entropy-driven corotation torque; \item outward migration is only possible for higher minimal masses, which is caused by the changes in the disc structure that overcompensate for the reduced cooling time caused by the higher metallicity, which on its own would actually reduce the minimal mass required for outward migration. \end{itemize} In principle, one could also infer from Fig.~\ref{fig:Migmetal} that outward migration is possible for higher planetary masses for increasing metallicity. However, as the torques acting on the planets are calculated by a torque formula that was derived in the linear regime for low mass planets ($\approx 5M_{\rm Earth}$), one cannot extend it towards masses of a few $10$ Earth masses. Additionally, planets of such masses start to open up gaps in the disc, which means that the planet transitions into type-II-migration. The regions of outward migration then shrink and move closer towards the star as the disc evolves in time and $\dot{M}$ reduces. For the $\dot{M}= 8.75 \times 10^{-9} M_\odot$/yr disc, the discs with high metallicity ($>2.0\%$) maintain two large regions of outward migration at a few AU, which are able to trap planets of up to $\approx 30 M_{\rm Earth}$. The lower metallicity discs, on the other hand, only feature one region of outward migration in the inner region of the disc, which is only able to trap cores up to $\approx 15M_{\rm Earth}$. 
Additionally, as the regions of outward migration last longer in the outer disc for high metallicity, the growing core can stay farther out, being released into type-II-migration at a larger orbital distance when it becomes large enough. This core would then migrate inwards in the type-II regime and finally be stopped when the disc dissipates \citep{2012MNRAS.422L..82A}, which might explain the pile-up of Jupiter-sized planets around $1$AU. In a low metallicity disc, on the other hand, the planetary core would be trapped closer to the star, and when it is released into type-II-migration as a gas giant it is already closer to the star and might therefore become a hot Jupiter. \subsection{Ice line} \label{subsec:iceline} As the disc evolves in time and $\dot{M}$ drops, the inner regions of the disc become colder, moving the ice line closer to the star. In the evolution of all our disc models, the ice line (at $170$ K) moves to $1$AU at $\approx 2$ Myr. However, this result is troublesome considering evidence from meteorites and asteroids in our own solar system. Ordinary and enstatite chondrites contain very little water and must have formed on the warm side of the ice line. Their parent bodies formed $2-3$ Myr after CAIs (which mark the zero age of the solar system). However, at this time the ice line in our nominal models had already moved to approximately $1$ AU. This is potentially in conflict with the dominance of S-type asteroids, believed to be the source of ordinary chondrites, in the inner regions of the asteroid belt. We propose two different ideas that could contribute to the solution of this problem. {\it The metallicity} is the key parameter for the temperature profile of the disc. Our simulations show that a higher metallicity in $\mu m$ dust increases the temperature of the inner disc. If the metallicity were even higher (e.g. $5\%$), the ice line would lie farther out. Therefore an intrinsically higher metallicity of the disc could help with this problem. 
Alternatively, a higher metallicity in $\mu m$-sized dust grains can be created by dust-polluted ice balls that drift across the ice line and then release their dust grains as the ice melts, increasing the metallicity in $\mu m$-sized dust in the inner parts of the disc \citep{2011ApJ...733L..41S}. {\it The time evolution} of the accretion disc is subject to large uncertainties \citep{1998ApJ...495..385H}, which could simply imply that the solar system was one of the slower evolving discs, keeping a higher $\dot{M}$ rate at later times, which would keep the ice line farther out and thus the inner system dry at later stages of the disc evolution. For example, in \citet{1998ApJ...495..385H} one observed disc still has $\dot{M}\approx 2.0 \times 10^{-8} M_\odot$/yr at $3$ Myr, which corresponds to a high enough temperature to keep the ice line at $\approx 2$ AU for $Z \geq 0.005$. \citet{arXiv:1406.0722} report discs with high accretion rates ($\dot{M}=1 \times 10^{-8} M_\odot$/yr) that are $10$ Myr old, which would then have an ice line farther out and thus avoid the mentioned problem. However, it is unlikely that these simple scenarios alone are responsible for keeping the snow line far out during the lifetime of the disc, and more complicated scenarios can play a role. These include local heating from shocks in discs with dead zones \citep{2013MNRAS.434..633M} or the interplay among the radial motion of the gas, drifting icy particles, and growing planets, which will be the subject of a future paper. \section{Summary} \label{sec:summary} In this work we have compared the power law assumptions of the minimum mass solar nebula and the \citet{2010AREPS..38..493C} models with simulations of protoplanetary discs that feature realistic radiative cooling, viscous heating, and stellar heating. The modelled disc structures show bumps and dips that are caused by transitions in the opacity, because a change in opacity changes the cooling rate of the disc \citep{2014A&A...564A.135B}. 
These features can act as sweet spots for forming planetesimals via the streaming instability and for stopping the inward migration of planetary cores of a few Earth masses. Regions with low pressure support also enhance the growth of planetary cores via pebble accretion. These attributes are lacking in the MMSN and CY2010 models, making the formation of planets in these models much harder. Additionally, as the disc evolves in time and the accretion rate $\dot{M}$ decreases, the radial gradients of $\Sigma_G$, $H/r$, and $T$ in the disc change. This temporal evolution is not taken into account in the MMSN and CY2010 models either. Over a time span of a few Myr, the star evolves along with the disc. The star changes its size and temperature and therefore its luminosity. This changes the amount of stellar heating received by the disc and strongly changes the parts of the disc where stellar heating is the dominant heat source. For higher $\dot{M}$ this mostly affects the outer parts of the disc, while for lower $\dot{M}$, as viscous heating becomes less and less important, the whole disc structure is affected. We present a simple fit of our simulated discs over all accretion rates and for different metallicities. The fit consists of three parts that correspond to three different regions in the disc, which are dominated by different heat sources. The inner disc is dominated by viscous heating, while the outer disc is dominated by stellar irradiation. In between lies a transition region where viscous heating starts to become less important, but which at the same time is in the shadow of direct stellar illumination. The different $\dot{M}$ values can be linked to a time evolution of the disc obtained from observations \citep{1998ApJ...495..385H}. With this simple relation between time and accretion rate, our presented fit can easily be used to calculate the disc structure at any given evolution time of the star-disc system. 
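As an illustration of such a time--accretion-rate mapping, the sketch below evaluates a log-linear fit of the form commonly used for the \citet{1998ApJ...495..385H} data. The function names and the coefficients $a=-8.00$, $b=-1.40$ are illustrative placeholders, not values quoted in this paper:

```python
import math

def mdot_of_time(t_myr):
    """Illustrative disc accretion rate [Msun/yr] as a function of disc age [Myr],
    assuming a log-linear fit log10(Mdot) = a + b*log10(t).
    The coefficients a = -8.00, b = -1.40 are placeholder values."""
    a, b = -8.00, -1.40
    return 10.0 ** (a + b * math.log10(t_myr))

def time_of_mdot(mdot):
    """Inverse mapping: disc age [Myr] at which a given Mdot [Msun/yr] is reached."""
    a, b = -8.00, -1.40
    return 10.0 ** ((math.log10(mdot) - a) / b)
```

With such a mapping, a disc-structure fit parameterised by $(Z, \dot{M})$ becomes a fit in $(Z, t)$; the inverse mapping is convenient when the disc age, rather than $\dot{M}$, is the natural input variable.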
This can then be used as input for planet formation models. A Fortran script producing $T$, $\Sigma$, and $H/r$ as functions of $Z$, $\dot{M}$, and time is available upon request. \begin{acknowledgements} B.B.,\,A.J.,\,and M.L.\,thank the Knut and Alice Wallenberg Foundation for their financial support. A.J.\,was also supported by the Swedish Research Council (grant 2010-3710) and the European Research Council (ERC Starting Grant 278675-PEBBLE2PLANET). A.M.\,is thankful to the Agence Nationale pour la Recherche under grant ANR-13-BS05-0003-01 (MOJO). The computations were done on the ``Mesocentre SIGAMM'' machine, hosted by the Observatoire de la C\^{o}te d'Azur. We thank M. Havel for discussions on the early evolution model of the Sun. \end{acknowledgements}
\section{Introduction} The colored degree matrix problem \cite{hmcd2016}, also known as finding an edge packing \cite{bushetal2012}, edge disjoint realizations \cite{guinezetal2011}, or degree constrained edge partitioning \cite{bentzetal2009}, asks if a series of degree sequences have edge disjoint realizations. The general problem is known to be NP-complete \cite{guinezetal2011}, but certain special cases are easy. These special cases include the case when there are only two degree sequences and one of them is almost regular \cite{kundureg}, or, equivalently, when the sum of two degree sequences is almost regular \cite{chen1988}, or when the degrees are sparse \cite{bentzetal2009,hmcd2016}. Kundu proved that two degree sequences of trees have edge disjoint tree realizations if and only if their sum is graphical \cite{kundutree}. On the other hand, this is not true of three such sequences: there exist $3$ degree sequences of trees such that any pair of them has a graphical sum, and the sum of all $3$ is still graphical, yet they do not have edge disjoint tree realizations \cite{kundu3tree}. However, $3$ degree sequences of trees do have edge disjoint tree realizations when the smallest sum of the degrees is $5$ \cite{kundu3tree}. This minimal degree condition includes the case when the degree sequences have no common leaves. It is easy to see that the sum of two degree sequences of trees is always graphical if they do not have common leaves. Therefore $k$ degree sequences of trees always have edge disjoint tree realizations if they do not have common leaves for $k=2, 3$. A natural question is to ask if this statement is true for arbitrary $k$. In this paper we conjecture that it is true for arbitrary $k$, and prove it for $4$ degree sequences of trees. For $5$ degree sequences of trees, we prove that the conjecture is true if it is true up to $18$ vertices. 
Computer aided search then confirmed that it is indeed true up to $18$ vertices. We also prove the conjecture for arbitrary $k$ in a special case, when there is a prescribed number of vertices which are not leaves in any of the degree sequences. All the presented proofs are based on induction, and the key point in the inductive steps is to find rainbow matchings in certain configurations. \section{Preliminaries} In this section, we give the necessary definitions and notation, as well as state the conjecture that we prove for some special cases. \begin{definition} \begin{sloppypar} A \emph{degree sequence} is a list of non-negative integers, ${D = d_1, d_2, \ldots d_n}$. A degree sequence $D$ is \emph{graphical} if there exists a graph $G$ whose degrees are exactly $D$. Such a graph is a \emph{realization} of $D$. A degree sequence $D = d_1, d_2, \ldots d_n$ is a \emph{tree degree sequence} if all degrees are positive and $\sum_{i=1}^{n} d_i = 2n-2$. A degree sequence is a \emph{path degree sequence} if two of its degrees are $1$ and all other degrees are $2$. \end{sloppypar} \end{definition} It is easy to see that a tree degree sequence is always graphical and there is a tree realization of it. \begin{definition} A \emph{degree matrix} is a matrix of non-negative integers. A degree matrix $M$ of dimension $k \times n$ is \emph{graphical} if there exists a series of edge disjoint graphs, $G_1, G_2, \ldots, G_k$ such that for each $i$, $G_i$ is a realization of the degree sequence in the $i{\th}$ row. Such a series of graphs is a \emph{realization} of $M$. Alternatively, an edge colored simple graph is also called a realization of $M$ if it is colored with $k$ colors and for each color $c_i$, the subgraph containing the edges with color $c_i$ is $G_i$. The degree matrix might also be defined by its rows, which are degree sequences $D_1, D_2, \ldots D_k$. The degree of vertex $v$ in degree sequence $D_i$ is denoted by $d_v^{(i)}$. 
\end{definition} In this paper, we consider the following conjecture. \begin{conjecture}\label{conj:main} Let $D_1, D_2, \ldots, D_k$ be tree degree sequences without common leaves, that is, for any $v$ and $i$, $d_v^{(i)} = 1$ implies that for all $j\ne i$, $d_v^{(j)} >1$. Then they have edge disjoint realizations. \end{conjecture} A trivially necessary condition for a series of degree sequences to be graphical is that the sum of the degree sequences is graphical. Therefore, Conjecture~\ref{conj:main} implies that the sum of tree degree sequences without common leaves is always graphical. This implication is in fact true, and is proven below. Before proving it, we also prove a lemma which is interesting in its own right. \begin{lemma}\label{lem:2k-eg} Let $F = f_1 \ge f_2 \ge \ldots \ge f_n$ be the sum of $k$ arbitrary tree degree sequences. Then the Erd{\H o}s-Gallai inequality \begin{equation} \sum_{i = 1}^s f_i \le s(s-1) + \sum_{j=s+1}^n \min\{s, f_j\} \end{equation} holds for any $s \ge 2k$. \end{lemma} \begin{proof} For any sum of $k$ tree degree sequences, \begin{equation} \sum_{i = 1}^s f_i \le k(2n-2) - (n-s)k, \end{equation} since each tree degree sequence has a sum of $2n-2$ and the degree sum on any vertex is at least $k$. Furthermore, if $s \ge 2k$, then \begin{equation} s(s-1) + (n-s)k \le s(s-1) + \sum_{j=s+1}^n \min\{s,f_j\}. \end{equation} Therefore, it is sufficient to prove that \begin{equation} k(2n-2) - (n-s)k \le s(s-1) + (n-s)k. \end{equation} Rearranging this, we get that \begin{equation} 2k(s-1) \le s(s-1), \end{equation} which is true when $s\ge 1$ and $2k \le s$. \qed \end{proof} We use this lemma to prove the following theorem -- now on tree degree sequences without common leaves. \begin{theorem} Let $D_1, D_2, \ldots, D_k$ be tree degree sequences without common leaves. Then their sum is graphical. \end{theorem} \begin{proof} Let $F = f_1, f_2, \ldots, f_n$ denote the sum of the degrees in decreasing order. 
We use the Erd{\H o}s-Gallai theorem \cite{eg1960}, which says that a degree sequence $F$ in decreasing order with even sum is graphical if and only if for all $s$ with $1 \leq s \leq n$, \begin{equation} \sum_{i = 1}^s f_i \le s(s-1) + \sum_{j=s+1}^n \min\{s, f_j\}.\label{eq:eg} \end{equation} The total degree sum here is $2k(n-1)$, which is even. According to Lemma~\ref{lem:2k-eg}, it is sufficient to prove the inequality for $s \le 2k-1$, since for larger $s$ the inequality holds. Since there are no common leaves, every $f_j$ is at least $2k-1$, therefore $\min\{s,f_j\} = s$ for any $s \le 2k-1$. Substituting this into Equation~\ref{eq:eg}, we get \begin{equation} \sum_{i = 1}^s f_i \le s(s-1) + (n-s)s = s(n-1). \end{equation} This indeed holds, since the sum of the degrees on any vertex cannot be more than $n-1$. Indeed, $d_v^{(i)} = l$ implies that tree $T_i$ contains at least $l$ leaf vertices other than $v$. Since there are no common leaves, and there are $n-1$ vertices when $v$ is excluded, $f_v = \sum_{i=1}^k d_v^{(i)} \le n-1$. So the inequality holds for $s \leq 2k-1$. \qed \end{proof} We now present partial results on Conjecture~\ref{conj:main}. The results are obtained by inductive proofs in which larger realizations are constructed from smaller realizations. The constructions use the existence of rainbow matchings, defined below. \begin{definition} A \emph{matching} is a set of disjoint edges. In an edge-colored graph, a \emph{rainbow matching} is a matching in which no two edges have the same color. \end{definition} \section{The theorem for $4$ tree degree sequences} In this section, we are going to prove that $4$ tree degree sequences always have edge disjoint realizations if they do not have common leaves. The proof is based on induction. In the inductive step, we need the following lemma to reduce the case to one with fewer vertices. 
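Before turning to that lemma, note that the Erd{\H o}s--Gallai criterion used above is easy to check computationally. The following Python sketch (our own illustration, not part of the paper's tooling; function name is ours) tests whether a degree sequence is graphical and can be used to verify small instances of the theorem:

```python
def is_graphical(degrees):
    """Erdos-Gallai test: a degree sequence is graphical iff its sum is even
    and every prefix of the non-increasing sequence satisfies the inequality
    sum_{i<=s} f_i <= s(s-1) + sum_{j>s} min(s, f_j)."""
    f = sorted(degrees, reverse=True)
    n = len(f)
    if sum(f) % 2 != 0:
        return False
    for s in range(1, n + 1):
        lhs = sum(f[:s])
        rhs = s * (s - 1) + sum(min(s, fj) for fj in f[s:])
        if lhs > rhs:
            return False
    return True

# Two tree degree sequences on 4 vertices without common leaves:
# D1 = [1, 2, 2, 1] (leaves at vertices 1 and 4), D2 = [2, 1, 1, 2].
# Their sum [3, 3, 3, 3] is graphical (realized by K_4), as the theorem predicts.
print(is_graphical([3, 3, 3, 3]))  # True
print(is_graphical([3, 3, 1, 1]))  # False: fails the inequality at s = 2
```

The loop is a direct transcription of Equation~\ref{eq:eg}; for the sums arising in the theorem the parity check is vacuous, since the total is always $2k(n-1)$.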
\begin{lemma}\label{lem:reduction} Let $D^{(1)},D^{(2)},\ldots,D^{(k)}$ be tree degree sequences without common leaves such that not all of them are degree sequences of paths. Then there exist vertices $v$, $w$ and an index $i$ such that $d_v^{(i)} = 1$, $\forall\ j \ne i$ $d_v^{(j)} =2$, and $d_w^{(i)} >2$. \end{lemma} \begin{proof} We structure the proof in terms of the degree matrix and certain submatrices. To start with, let the degree matrix be structured such that the $v\th$ column corresponds to vertex $v$ and the $i\th$ row to degree sequence $i$, so that entry $(i,v)$ of the matrix is $d_v^{(i)}$. We will refer to vertices and columns interchangeably, and a set of vertices $S$ determines a set of columns that can be seen as a submatrix (although not every submatrix determines a set of vertices). We will use $|S|$ to denote the cardinality of the set, i.e.\ the number of columns of the submatrix $S$, and $\overline{M}$ to denote the average of the elements of a submatrix $M$. Let $A$ be the set of ``low-degree'' vertices, $A = \{ v \mid d_v = 2k-1 \}$, and $B$ the set of ``high-degree'' vertices, $B = \{ v \mid d_v \geq 2k \}$. Note that this forms a partition of the vertex set $V$; there are no vertices $v$ with $d_v < 2k-1$, since such a vertex would be a leaf in at least two trees. Note also that $A$ is nonempty; if it were not, then we would have $B=V$, which is impossible since the degrees of vertices in $B$ are too high: the total degree sum would be at least $2kn$ when it should of course be exactly $2k(n-1)$. Similarly, $B$ is nonempty as well, since not all the degree sequences are degree sequences of paths. Observe that the vertices in $A$ must consist of exactly one leaf and the rest degree-2 vertices. That is, for all $v \in A$ there exists exactly one $i$ such that $d_v^{(i)} = 1$, and for all other $j$ we have $d_v^{(j)} = 2$; otherwise they would have common leaves. The proof is by contradiction. 
Assume that for every pair $v \in A, w \in B$, in every sequence $i$ it is the case that $d_w^{(i)} \leq 2$ whenever $d_v^{(i)} = 1$. Permute the rows and columns such that the columns of $A$ are to the left and those of $B$ to the right, where $A$ (considered as a submatrix) is ordered such that the rows with 1s are all on top. These rows determine submatrices $A'$ of $A$ and $B'$ of $B$, and the other rows determine submatrices $A''$ and $B''$ similarly, sitting at the bottom. We make some important observations: \begin{enumerate} \item Each row in $A'$ contains at least one 1 by construction (this was precisely why we picked those rows). \item $A''$ consists entirely of 2s. This is again by construction, since we placed all rows with 1s on top in $A'$. In particular, $\overline{A''} = 2$. \item Each element of $B'$ is at most 2. This follows from our assumption: for each row $i$ of $B'$ we know that $d_v^{(i)} = 1$ for some $v \in A$, and so by assumption $d_w^{(i)} \leq 2$ for all $w \in B$. In particular, $\overline{B'} \leq 2$. \item $A''$ (and hence $B''$) is nonempty. If not, then take any vertex $w$ having a degree greater than $2$; such a vertex exists since not all degree sequences are path degree sequences. If $i$ is an index for which $d_w^{(i)} > 2$, take a $v \in A'$ such that $d_v^{(i)} = 1$; again, such a vertex exists if $A''$ is empty. The pair $v, w$ then contradicts our assumption. \end{enumerate} Now consider the submatrix $\left[\begin{array}{cc} A'' & B'' \end{array}\right]$, i.e.\ all the bottom rows. The row sum for each row must be $2n-2$, and since $\overline{A''} = 2$ this forces $\overline{B''} < 2$. But now consider the submatrix $B$, i.e.\ $\left[\begin{array}{c} B' \\ B'' \end{array}\right]$. The column sum in each column must be at least $2k$. Since $\overline{B'} \leq 2$, this forces $\overline{B''} \geq 2$. This is a contradiction. 
\qed \end{proof} We also need the following lemma to be able to build up an edge disjoint realization of a tree degree sequence quartet from the realization of a smaller tree degree sequence quartet. \begin{lemma}\label{lem:rainbow} Let $H = (V,E)$ be an edge colored graph in which $|V| \ge 10$ and the number of colors is $4$. Furthermore, the subgraph for each color is a tree on $n$ vertices, and the trees do not have a common leaf. That is, if $d_v^{(i)} = 1$, then $d_v^{(j)} >1$ for all $j\ne i$. Let $v_j \in V$ be an arbitrary vertex, and let $G$ be the subgraph of $H$ containing the first $3$ colors. Then $G \setminus \{v_j\}$ contains a rainbow matching of size $3$. \end{lemma} \begin{proof} We will call the $3$ colors red, blue and green. Let $v$ be a vertex with at least one edge of each color going to a vertex that is not $v_j$; such a vertex can easily be seen to exist. Indeed, just pick any vertex adjacent to $v_j$ in the fourth tree. Thus let vertices $v, u_1, u_2, u_3$ be such that $(v, u_1)$ is blue, $(v, u_2)$ green and $(v, u_3)$ red. We will refer to these vertices as ``the complex''. Let $W = \{w_1, \dots, w_5\}$ be five other vertices; these exist because $|V| \geq 10$. We make some important observations. Let $w$ be an arbitrary vertex in $W$. Because we have no common leaves, vertex $w$ has degree at least 2 in all colors with the possible exception of one, in which it may have degree 1; in any case it has total degree at least 5. It may be adjacent to $v_j$ in some color, so excluding $v_j$ it has total degree at least 4. Moreover, for each color, $w$ is incident with at least two edges not of this color and avoiding $v_j$. We claim that we can always find a rainbow matching in this setup. Our proof has three cases based on how many disjoint edges exist within $W$: at least two, exactly one, and none (i.e.\ no edges within $W$ at all). 
\paragraph{Case 1: At least two disjoint edges within $W$.} Let two disjoint edges be $(w_1, w_2)$ and $(w_3, w_4)$. Clearly we may assume they are the same color, because otherwise we easily have a rainbow matching; say they are red. Now observe that $w_5$ has at least 2 non-red edges, and they must both go to the complex: if there were an edge, $e$, which did not, it would cover only one of the two disjoint red edges in $W$, and so we would have a rainbow matching consisting of $e$, the red edge in $W$ which is vertex independent from $e$, and the edge of the third color in the complex. So $w_5$ has 2 non-red edges going to the complex. Clearly at most one of these goes to $v$; w.l.o.g. say one which does not go to $v$ is blue --- then it must be $(w_5, u_2)$ to avoid a rainbow matching. We have now identified two vertex disjoint blue edges, $(v,u_1)$ and $(w_5,u_2)$, two disjoint red edges, $(w_1, w_2)$ and $(w_3, w_4)$, and one more red edge, $(v,u_3)$. We claim that any green edge not blocking both blue edges can be extended to a rainbow matching. Indeed, if it blocks both red edges in $W$, then that green edge, $(v,u_3)$ and $(w_5,u_2)$ form a rainbow matching. If it does not block both red edges in $W$ and does not block the two blue edges (as we assume), then we can find a rainbow matching. However, only $3$ green edges can block both blue edges, otherwise there would be a green cycle. If there are no more green edges in $G \setminus \{v_j\}$ than these edges, then the non-green degree of $v_j$ would be at most $3$, contradicting that $v_j$ is a leaf in at most one of the trees (in $H$!). Therefore there is at least one green edge not blocking both blue edges, and thus there is a rainbow matching. \paragraph{Case 2: Exactly one disjoint edge within $W$.} There is at least one edge within $W$, but no pair of disjoint edges. Then these edges either form a star or a triangle. First suppose we have a star; w.l.o.g., say $w_1$ is the center of the star. 
We claim that among $w_2, w_3, w_4, w_5$ we can find $w_i, w_j$ so that $(w_1, w_i)$ is an edge of color red (say) and $w_j$ sends two non-red edges to the complex. Indeed, either the star with center $w_1$ contains $4$ leaves or there exists $w_j, j \in \{2, 3, 4, 5\}$, such that $(w_1, w_j)$ is not an edge. First, suppose each of $w_2, w_3, w_4, w_5$ has an edge going to $w_1$. Then by the Pigeonhole Principle, we can find $w_i, w_j$ among them such that $(w_1, w_i)$ and $(w_1, w_j)$ are the same color (say red). Notice then that $w_j$ has at least two non-red edges by the no-common-leaves condition, and both of these must go to the complex since $w_1$ is the center of our star. Suppose on the other hand that there exists $w_j, j \in \{2, 3, 4, 5\}$, such that $(w_1, w_j)$ is not an edge. Then fix this $w_j$ and choose $w_i$ to be such that $(w_1, w_i)$ is an edge (which is possible since there is at least one edge within $W$ and $w_1$ is the center of our star). Let the color of $(w_1, w_i)$ be red w.l.o.g. and notice that $w_j$ must send at least two non-red edges to the complex. Thus the claim is true. We can say w.l.o.g. that $i=2$, $j=3$. That is, $(w_1, w_2)$ is a red edge and $w_3$ sends two non-red edges to the complex. In particular, $w_3$ sends a non-red edge to the complex that does not go to $v$; w.l.o.g. say it is blue. Then it must be $(w_3, u_2)$ to prevent a rainbow matching. Now consider $w_4$ and $w_5$: notice that if either one has a green edge, we are done. Therefore we may assume they both have green-degree 0. Fix the green edge $(v, u_2)$, meaning we will build a rainbow matching containing this edge. Since $w_1$ is the center of our star, there are no edges between $w_4$ and $w_5$, so $w_4$ and $w_5$ together have at least 4 distinct red and 4 distinct blue edges. 
Since there are no double edges, at most 4 of these 8 edges -- at most 2 from each of $w_4$ and $w_5$ -- cover the green edge $(v, u_2)$, and since we have no monochromatic cycles, at most 3 of either color do. Therefore we can, w.l.o.g., choose a red edge from $w_4$ and a blue edge from $w_5$ which are both disjoint from the green $(v, u_2)$. Then they must have the same endpoint, or else we have a rainbow matching. Moreover, this endpoint must be $w_1$, because otherwise we could choose the red $(w_1, w_2)$ along with the blue edge from $w_5$ which is disjoint from the green $(v, u_2)$, and we would have a rainbow matching. Thus we can assume we have the red $(w_4, w_1)$ and the blue $(w_5, w_1)$. Now $w_4$ and $w_5$ each have at least three more edges. These edges must avoid vertices in $W$, because the edges within $W$ were assumed to form a star (centred at $w_1$). In particular, they must each have one edge which avoids both $v$ and $u_2$. If either of these is blue, we are done along with the red $(w_1, w_2)$; on the other hand, if the edge from say $w_4$ is red, then we take it along with the blue $(w_5, w_1)$. So in any case, we can find a rainbow matching, and the star case is complete. Now suppose we have a triangle within $W$: we can say w.l.o.g. that it is between the vertices $w_1, w_2, w_3$. Then all edges from $w_4$ and $w_5$ go to the complex. Clearly the edges in the triangle cannot all be the same color, because that would make a cycle. If we have one edge of each color, then we claim we are done easily: w.l.o.g., say we have the red $(w_1, w_2)$, the blue $(w_1, w_3)$, and the green $(w_2, w_3)$. Then choose any edge from $w_4$ which does not go to $v$. We can say w.l.o.g. that it is green. It blocks at most one of the red $(v, u_3)$ and the blue $(v, u_1)$, so we can pick the one it does not block along with our green edge, and then complete our rainbow matching with an edge from the triangle. 
So we may now assume that we have two edges in the triangle of one color, and the third edge of a different color. We can say w.l.o.g. that $(w_1, w_2)$ and $(w_2, w_3)$ are red and $(w_1, w_3)$ is blue. Now notice that $w_4$ and $w_5$ each send at least 2 non-red edges to the complex. If all four of these are blue, then at least one is disjoint from the green $(v, u_2)$ (otherwise we would have a cycle), and we are done. So we may assume at least one of these edges is green. If this green edge does not go to $v$, it is disjoint from either the red $(v, u_3)$ or the blue $(v, u_1)$, and we finish the rainbow matching with an appropriate edge from the triangle. So we may assume the green edge goes to $v$: w.l.o.g., say it is $(w_4, v)$. Now look at any non-red edge from $w_5$ which does not go to $v$. If it is green, we are done, as argued above. If it is blue, then we take it along with the green $(w_4, v)$ and a red edge from the triangle. So in either case we have a rainbow matching, and the triangle case is complete. \paragraph{Case 3: No edges within $W$.} Observe that if any edge from a vertex in $W$, say $w_1$, were to go to some entirely new vertex $w$, then we could replace say $w_2$ with $w$ to get back to Case 2 (or possibly Case 1). So we may assume that all edges from $W$ go to the complex. That is, ignoring the edges within the complex, we have a bipartite graph with $W$ on one side and the complex on the other. We claim that this is in fact a complete bipartite graph. Indeed, observe that each vertex in $W$ sends four edges to the complex. Since it cannot send two edges to the same vertex, it must send one edge to each vertex in the complex. This describes a complete bipartite graph. Together with the edges in the complex, there are $23$ edges. By the Pigeonhole Principle, there is a color, w.l.o.g. say red, with at least $8$ edges on these $9$ vertices. Notice that it cannot be more, because then there would be a cycle. 
Thus, there are exactly $8$ red edges, including the red one in the complex. For similar reasons, the remaining $15$ edges are split $8$ and $7$ between the remaining two colors. Therefore, in the complete bipartite graph, we can say w.l.o.g. that there are $7$ red and $7$ blue edges and $6$ green edges. Take any of the green edges. After removing its two endpoints, the remaining complete bipartite graph $K_{4,3}$ contains $12$ edges, at most $5$ of which are green. So there are at least $7$ non-green edges. Not all of them can be red, otherwise there would be a cycle. We can assume w.l.o.g. that there are more red edges than blue ones, so there are at least $4$ red edges. There are the following cases. \begin{enumerate} \item There is only one blue edge, but then there are at least $6$ red edges. At most $2$ red edges can cover one of its endpoints, and at most $3$ the other, so there is a red edge vertex-disjoint from the blue one, and we are done. \item There are two or three blue edges sharing a common vertex. The shared vertex can be covered by at most two red edges. There are at least two further red edges; each of them avoids the shared vertex and, since the graph is bipartite, meets at most one of the blue edges, so some red edge is disjoint from some blue edge, and we are done. \item There is a pair of vertex-disjoint blue edges. If no red edge were disjoint from either of them, then every red edge would meet both; but only $2$ red edges can meet both of them, while there are at least $4$ red edges, so we are done. \end{enumerate}\qed \end{proof} The following two lemmas establish the base cases of the induction. The first lemma is stated and proved for an arbitrary number of path degree sequences; this more general version is used in a later proof. \begin{lemma}\label{lem:hamilton} Let $D_1, D_2,\ldots, D_k$ be path degree sequences without common leaves. They have edge disjoint realizations. \end{lemma} \begin{proof} The proof is by construction. Note that $n \ge 2k$, since every tree contains at least two leaves and no two sequences share a leaf. We can say w.l.o.g. that the leaves of the $i\th$ path have indices $i$ and $\left\lceil\frac{n}{2}\right\rceil+i$.
Then the $i\th$ path contains the edges $(i,n-1+i)$, $(n-1+i,1+i)$, $(1+i,n-2+i)$, $(n-2+i,2+i)$, $\ldots$, where the indices are taken modulo $n$ and shifted by $1$, so that they lie between $1$ and $n$. The last edge is $\left(\left\lceil\frac{n}{2}\right\rceil+i-1,\left\lceil\frac{n}{2}\right\rceil+i\right)$ if $n$ is even, and $\left(\left\lceil\frac{n}{2}\right\rceil+i+1,\left\lceil\frac{n}{2}\right\rceil+i\right)$ if $n$ is odd. Figure~\ref{fig:octagon} shows an example for $8$ vertices. It is easy to see that no parallel edges arise when such a path is rotated by at most $\left\lfloor\frac{n}{2}\right\rfloor$ vertices. \qed \end{proof} \begin{figure} \setlength{\unitlength}{1cm} \begin{center} \begin{tikzpicture} \draw (4,0) -- (2,0) -- (5.4,1.4) -- (0.6,1.4) -- (5.4,3.4) -- (0.6,3.4) -- (4,4.8) -- (2,4.8) ; \draw[black, fill= black] (2,0) circle(0.05); \draw[black, fill= black] (4,0) circle(0.05); \draw[black, fill= black] (0.6,1.4) circle(0.05); \draw[black, fill= black] (5.4,1.4) circle(0.05); \draw[black, fill= black] (0.6,3.4) circle(0.05); \draw[black, fill= black] (5.4,3.4) circle(0.05); \draw[black, fill= black] (2,4.8) circle(0.05); \draw[black, fill= black] (4,4.8) circle(0.05); \end{tikzpicture} \end{center} \caption{An example Hamiltonian path on 8 vertices. See the text for details.}\label{fig:octagon} \end{figure} \begin{lemma}\label{lem:smallcases} Let $D_1, D_2, D_3, D_4$ be tree degree sequences on at most $10$ vertices, without common leaves. They have edge disjoint realizations. \end{lemma} \begin{proof} Up to isomorphism, there are only $14$ possible such degree sequence quartets. The appendix contains a realization for each of them. \qed \end{proof} Now we are ready to prove the main theorem. \begin{theorem} Let $D_1, D_2, D_3, D_4$ be tree degree sequences without common leaves. They have edge disjoint tree realizations.
\end{theorem} \begin{proof} The proof is by induction; the base cases are the degree sequences on at most $10$ vertices and the path degree sequence quartets. These all have edge disjoint realizations, based on Lemmas~\ref{lem:hamilton}~and~\ref{lem:smallcases}. So assume that $D_1, D_2, D_3, D_4$ are tree degree sequences on more than $10$ vertices and at least one of them is not a path degree sequence. Then, according to Lemma~\ref{lem:reduction}, there exist vertices $v$ and $w$ and an index $i$ such that $d_v^{(i)} = 1$, $d_v^{(j)} = 2$ for all $j \ne i$, and $d_w^{(i)} > 2$. Consider the degree sequences $D_1', D_2', D_3', D_4'$ which are obtained by deleting vertex $v$ and subtracting $1$ from $d_w^{(i)}$. These are tree degree sequences without common leaves, and based on the inductive assumption, they have edge disjoint realizations. Let $H$ be the colored graph representing these edge disjoint realizations, and permute the degree sequences (and the colors accordingly) so that $D_i$ is moved to the fourth position. Let $G$ be the subgraph of $H$ containing the first $3$ colors after this permutation. Since $G$ contains at least $10$ vertices, $G \setminus \{w\}$ contains a rainbow matching according to Lemma~\ref{lem:rainbow}. Let $(v_1,v_2)$, $(v_3,v_4)$ and $(v_5,v_6)$ denote the edges of the rainbow matching. A realization of $D_1, D_2, D_3$ and $D_4$ is obtained in the following way. Take the realization represented by $H$. Add vertex $v$. Connect $v$ with $w$ in the fourth tree, delete the edges of the rainbow matching, and connect $v$ to all the vertices incident to the edges of the rainbow matching, $2$ edges for each of the first three trees, according to the color of the deleted edge. \qed \end{proof} \section{Some results in the general case} We now present some results in the general case, i.e.\ for an arbitrary number $k$ of tree degree sequences. First we show that $n \geq 4k - 2$ suffices to guarantee a rainbow matching.
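The rotated-path construction in the proof of Lemma~\ref{lem:hamilton} can also be checked computationally. Below is a minimal Python sketch of the same idea (the helper name \texttt{zigzag\_path} and the $0$-based indexing are ours, not the paper's): rotating one zigzag Hamiltonian path by $0, 1, \ldots, k-1$ positions yields pairwise edge disjoint paths whenever $n \ge 2k$.

```python
from itertools import combinations

def zigzag_path(n, shift=0):
    """Edges of the Hamiltonian path visiting 0, n-1, 1, n-2, 2, ...,
    with every vertex label rotated by `shift` modulo n."""
    order, lo, hi = [], 0, n - 1
    while lo <= hi:
        order.append(lo)
        if lo != hi:
            order.append(hi)
        lo, hi = lo + 1, hi - 1
    order = [(v + shift) % n for v in order]
    return {frozenset(e) for e in zip(order, order[1:])}

# Rotations by 0, 1, ..., k-1 are pairwise edge disjoint when n >= 2k.
for n, k in [(8, 4), (9, 4), (10, 5)]:
    paths = [zigzag_path(n, s) for s in range(k)]
    assert all(len(p) == n - 1 for p in paths)
    assert all(not p & q for p, q in combinations(paths, 2))
```

The disjointness can be read off vertex sums: every edge of the unrotated path has endpoint sum congruent to $n-1$ or $n$ modulo $n$, and a rotation by $s$ shifts these two residues by $2s$, so different small rotations occupy different residue pairs.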
However, for our original purpose of finding edge-disjoint realizations via the inductive proof, this is not sufficient to show that the induction step goes through every time, since our base case is $n = 2k$. We need something else to bridge the gap between $n = 2k$ and $n = 4k - 2$. This is accomplished by our second characterization, which adds an extra condition: if there are at least $2k-4$ vertices that are not leaves in any tree, then edge-disjoint realizations are indeed guaranteed. \subsection{Rainbow matchings from matchings: $n = O(k)$ guarantees a rainbow matching} We now show that $n = O(k)$ suffices to guarantee a rainbow matching. The broad line of attack is to stitch a rainbow matching together from ordinary (singly-colored, large but not necessarily perfect) matchings. A crucial ingredient in guaranteeing large matchings is the fact that a tree with $m$ non-leaves must contain a matching of size roughly $m/2$. The idea is that when $n$ is large enough, the no-common-leaves condition guarantees a large number of non-leaves in each color, which in turn guarantees large matchings in each color, and these can be stitched together into a rainbow matching. We formalize the main ingredients in the following lemmas. \begin{lemma} \label{lem:greedy-rainbow} Let $G$ be an edge colored graph such that for each color $c_i$, $i = 1, 2, \ldots, k$, there is a matching of size $2i$ in the subgraph of color $c_i$. Let $v$ be an arbitrary vertex. Then $G \setminus \{v\}$ has a rainbow matching of size $k$. \end{lemma} \begin{proof} The proof is by induction using the Pigeonhole Principle. Since there are $2$ disjoint edges of the first color in $G$, at least one of them is not incident to $v$. Take that edge; it will be in the rainbow matching. Assume that we have already found a rainbow matching of size $i$. There is a matching of size $2i+2$ in the subgraph of color $c_{i+1}$.
At most $2i$ of them are blocked by the rainbow matching of size $i$, and at most one of them is incident to $v$. Thus, there is an edge of color $c_{i+1}$ which is disjoint from the rainbow matching of size $i$ and not incident to $v$. Extend the rainbow matching with this edge. \qed \end{proof} \begin{lemma} \label{lem:matching-internal} A tree with at least one edge and $m$ internal nodes contains a matching of size at least $\left\lceil\frac{m+1}{2}\right\rceil$. \end{lemma} \begin{proof} The proof is by induction. The base cases are the trees with $2$ and $3$ vertices. They have $0$ and $1$ internal nodes (i.e.\ non-leaves), respectively, and each of them has an edge, which is a matching of size $1$. Now assume that the tree $T$ has more than $3$ vertices and $m$ internal nodes. Take any leaf and its incident edge $e$. There are two cases. \begin{enumerate} \item The non-leaf vertex of $e$ has degree more than $2$. Then $T' = T \setminus \{e\}$ has the same number of internal nodes as $T$. By the inductive hypothesis, $T'$ has a matching of size $\left\lceil\frac{m+1}{2}\right\rceil$, and so does $T$. \item The non-leaf vertex of $e$ has degree $2$. Let its other edge be denoted by $f$. Then the number of internal nodes in $T' = T \setminus \{e,f\}$ is smaller than in $T$ by at most $2$. Thus $T'$ has a matching $M$ of size $\left\lceil\frac{m-1}{2}\right\rceil$, and $M \cup \{e\}$ is a matching in $T$ of size $\left\lceil\frac{m+1}{2}\right\rceil$. \end{enumerate} \qed \end{proof} We now show that $n \geq 4k - 2$ suffices to guarantee a rainbow matching. \begin{theorem}\label{theo:lower-bound-vertices} Let $k$ trees be given on $n$ vertices, $k \ge 5$, having no common leaves. Let $w$ be an arbitrary vertex. If the number of vertices is at least $4k - 2$, then we can find a rainbow matching in the first $k-1$ trees avoiding $w$.
\end{theorem} \begin{proof} Arrange our $k-1$ trees in increasing order of their number of internal nodes. We would like to show that the $i\th$ tree has a matching of size $2i$; this is sufficient for a rainbow matching, according to Lemma~\ref{lem:greedy-rainbow}. Since internal nodes are exactly the vertices of a tree which are not leaves, we have also arranged the trees in decreasing order of their number of leaves. Each tree has at least $2$ leaves, therefore the $k-1-i$ trees above the $i\th$ tree and the $k\th$ tree together contain at least $2(k-i)$ leaves. Since no vertex is a leaf in more than one tree, at most $n - 2(k-i)$ vertices might still be leaves in the $i\th$ tree and the $i-1$ trees below it. And since the number of leaves in each of the trees below is no less than in the $i\th$ tree, the $i\th$ tree contains at most $$ \left\lfloor\frac{n - 2(k-i)}{i}\right\rfloor $$ leaves, and thus at least $$ n-\left\lfloor\frac{n - 2(k-i)}{i}\right\rfloor = \left\lceil\frac{(i-1)n + 2(k-i)}{i}\right\rceil $$ internal nodes. If $n \ge 4k - 2$, this means at least $$ \left\lceil\frac{(i-1)(4k -2) + 2(k-i)}{i}\right\rceil = \left\lceil\frac{4ki - 2k - 4i +2}{i}\right\rceil = 4k - 4 - \left\lfloor\frac{2k-2}{i}\right\rfloor $$ internal nodes. By Lemma~\ref{lem:matching-internal}, this lower bound on the number of internal nodes forces a matching of a certain size in the $i\th$ tree, and we are going to show that \begin{equation} \left\lceil\frac{4k - 4 - \left\lfloor\frac{2k-2}{i}\right\rfloor+1}{2}\right\rceil \ge 2i.\label{eq:inequality} \end{equation} When $i = k-1$, the left hand side is $$ \left\lceil\frac{4k-4-2+1}{2}\right\rceil =2(k-1) =2i. $$ For $i<k-1$, it is sufficient to show that $$ \frac{4k - 3 - \frac{2k-2}{i}}{2} \ge 2i. $$ After rearranging, we get $$ 0 \ge 4i^2 - (4k-3)i + 2k -2. $$ Solving this quadratic inequality, we get $$ \frac{4k-3-\sqrt{(4k-7)^2-8}}{8} \le i \le \frac{4k-3+\sqrt{(4k-7)^2-8}}{8}.
$$ Bounding the square root from below by $4k-8$ (valid since $k \ge 5$), it is sufficient that $$ \frac{4k-3-(4k-8)}{8} \le i \le \frac{4k-3+4k-8}{8}, $$ namely, $$ \frac{5}{8} \le i \le k - \frac{11}{8}, $$ which holds since $1\le i \le k-2$. Therefore, the $i\th$ tree contains a matching of size at least $2i$, which is sufficient to obtain the prescribed rainbow matching. \qed \end{proof} \subsection{Edge-disjoint realizations under a condition on the degree distribution} Theorem~\ref{theo:lower-bound-vertices} is not strong enough to prove the full theorem on edge-disjoint realizations, since in our inductive proof we need to find rainbow matchings at each inductive step, starting from $n = 2k$. But by adding an extra condition on the degree distribution, and showing that this condition is maintained throughout the induction, we are able to guarantee edge-disjoint realizations. Define a \emph{never-leaf} to be a vertex that is not a leaf in any tree. \begin{theorem} $k$ tree degree sequences without common leaves and with at least $2k-4$ never-leaves always have edge-disjoint realizations. \end{theorem} \begin{proof} We use the same inductive proof as presented originally. The crucial observation about that proof is that nowhere during the inductive step do we create any new leaves in any tree. This means the number of never-leaves does not change during the inductive step, and so at each step we have at least $2k-4$ never-leaves. It only remains to be shown, then, that whenever we have $2k-4$ never-leaves we can find a rainbow matching. We claim that each tree has at least $4k-6$ internal nodes. Indeed, the $2k-4$ never-leaves are certainly internal nodes of the tree. Moreover, each of the other $k-1$ trees has at least two leaves, and these leaves are internal nodes in all other trees because there are no common leaves, giving an additional $2k-2$ internal nodes, altogether $4k-6$ internal nodes.
By Lemma~\ref{lem:matching-internal} this means we have matchings of size at least $$ \left\lceil\frac{4k-5}{2}\right\rceil = 2k-2 $$ in each tree, and by Lemma~\ref{lem:greedy-rainbow} these guarantee a rainbow matching, and we are done. \qed \end{proof} \subsection{A conditional theorem and the $k = 5$ case} The following is a consequence of Theorem~\ref{theo:lower-bound-vertices}. \begin{theorem}\label{theo:conditional} Fix $k$. If all tree degree sequence $k$-tuples without common leaves on at most $4k-2$ vertices have edge disjoint tree realizations, then all tree degree sequence $k$-tuples without common leaves have edge disjoint tree realizations. \end{theorem} \begin{proof} The proof is by induction. The base cases are the path degree sequences, which have edge disjoint realizations according to Lemma~\ref{lem:hamilton}, and the degree sequences on at most $4k-2$ vertices, which have edge disjoint realizations by the hypothesis of the theorem. Let $D_1, D_2, \ldots, D_k$ be tree degree sequences without common leaves on more than $4k-2$ vertices. By Lemma~\ref{lem:reduction}, there are vertices $v$, $w$ and an index $i$ such that $d_v^{(i)} = 1$, $d_v^{(j)} = 2$ for all $j \ne i$, and $d_w^{(i)} >2$. Construct the degree sequences $D_1', D_2', \ldots, D_k'$ by removing $v$ and subtracting $1$ from $d_w^{(i)}$. These are tree degree sequences on at least $4k-2$ vertices, and they have edge disjoint realizations $T_1', T_2',\ldots, T_k'$ by the inductive hypothesis. Furthermore, there is a rainbow matching in all the trees except the $i\th$ one avoiding vertex $w$, according to Theorem~\ref{theo:lower-bound-vertices}. Construct a realization of $D_1, D_2, \ldots, D_k$ in the following way. Start with $T_1', T_2', \ldots T_k'$. Add vertex $v$ and connect it to $w$ in $T_i'$. Delete the edges of the rainbow matching, and connect $v$ to their $2k-2$ endpoints, two edges in each tree, according to the color of the deleted edge.
\qed \end{proof} When $k = 5$, Theorem~\ref{theo:conditional} says the following: if all tree degree sequence quintets without common leaves on at most $18$ vertices have edge disjoint tree realizations, then all tree degree sequence quintets without common leaves have edge disjoint tree realizations. A computer-aided search showed that, up to permutation of sequences and vertices, there are at most $592000$ tree degree sequence quintets without common leaves on at most $18$ vertices, and they all have edge disjoint tree realizations. \section*{Appendix} Up to permutations of degree sequences and vertices, there are $14$ tree degree sequence quartets on at most $10$ vertices without common leaves. This appendix gives an example realization for each of them. If the number of vertices is $8$, there is only one possible degree sequence quartet: each degree sequence is a path degree sequence (case 1). If the number of vertices is $9$, there are $2$ possible cases: either all degree sequences are path degree sequences (case 2) or there is a degree $3$ (case 3). If the number of vertices is $10$, there are $11$ possible cases. All degree sequences are path degree sequences (case 4); there is a degree $3$, which might be on a vertex with a leaf (case 5) or without a leaf (case 6); there is a degree $4$ (case 7); or there are $2$ degree $3$s in the degree sequences (cases 8--14). The two $3$s might be in the same degree sequence, and then the leaves on these two vertices might be in the same degree sequence (case 8) or in different degree sequences (case 9). If the two degree $3$s are in different degree sequences, they might be on the same vertex (case 10) or on different vertices. If the two degree $3$s are in different sequences, $D_i$ and $D_j$, and on different vertices $u$ and $v$, consider the degrees of $u$ and $v$ in $D_i$ and $D_j$ which are not $3$. They might both be $1$ (case 11), one of them might be $1$ and the other $2$ (case 12), or both might be $2$.
In this latter case, the degree $1$s on $u$ and $v$ might be in the same degree sequence (case 13) or in different degree sequences (case 14). The realizations are represented with an adjacency matrix, in which $0$ denotes the absence of edges, and for each degree sequence $D_i$, $i$ denotes the edges in the realization of $D_i$. \begin{enumerate} \item \begin{eqnarray} D_1& = &1, 2, 2, 2, 1, 2, 2, 2 \nonumber\\ D_2& = &2, 1, 2, 2, 2, 1, 2, 2 \nonumber\\ D_3& = &2, 2, 1, 2, 2, 2, 1, 2 \nonumber\\ D_4& = &2, 2, 2, 1, 2, 2, 2, 1 \nonumber \end{eqnarray} $$ \left( \begin{array}{cccccccc} 0& 1& 2& 2& 3& 3& 4& 4 \\ 1& 0& 2& 3& 3& 4& 4& 1 \\ 2& 2& 0& 3& 4& 4& 1& 1 \\ 2& 3& 3& 0& 4& 1& 1& 2 \\ 3& 3& 4& 4& 0& 1& 2& 2 \\ 3& 4& 4& 1& 1& 0& 2& 3 \\ 4& 4& 1& 1& 2& 2& 0& 3 \\ 4& 1& 1& 2& 2& 3& 3& 0 \end{array} \right) $$ \item \begin{eqnarray} D_1& = &1, 2, 2, 2, 2, 1, 2, 2, 2 \nonumber\\ D_2& = &2, 1, 2, 2, 2, 2, 1, 2, 2 \nonumber\\ D_3& = &2, 2, 1, 2, 2, 2, 2, 1, 2 \nonumber\\ D_4& = &2, 2, 2, 1, 2, 2, 2, 2, 1 \nonumber \end{eqnarray} $$ \left( \begin{array}{ccccccccc} 0& 1& 2& 2& 3& 3& 4& 4& 0 \\ 1& 0& 2& 3& 3& 4& 4& 0& 1 \\ 2& 2& 0& 3& 4& 4& 0& 1& 1 \\ 2& 3& 3& 0& 4& 0& 1& 1& 2 \\ 3& 3& 4& 4& 0& 1& 1& 2& 2 \\ 3& 4& 4& 0& 1& 0& 2& 2& 3 \\ 4& 4& 0& 1& 1& 2& 0& 3& 3 \\ 4& 0& 1& 1& 2& 2& 3& 0& 4 \\ 0& 1& 1& 2& 2& 3& 3& 4& 0 \end{array} \right) $$ \item \begin{eqnarray} D_1& = &1, 3, 2, 2, 1, 2, 2, 2, 1 \nonumber\\ D_2& = &2, 1, 2, 2, 2, 1, 2, 2, 2 \nonumber\\ D_3& = &2, 2, 1, 2, 2, 2, 1, 2, 2 \nonumber\\ D_4& = &2, 2, 2, 1, 2, 2, 2, 1, 2 \nonumber \end{eqnarray} $$ \left( \begin{array}{ccccccccc} 0& 1& 0& 2& 3& 3& 4& 4& 2 \\ 1& 0& 2& 3& 3& 4& 4& 1& 1 \\ 0& 2& 0& 3& 4& 4& 1& 1& 2 \\ 2& 3& 3& 0& 0& 1& 1& 2& 4 \\ 3& 3& 4& 0& 0& 1& 2& 2& 4 \\ 3& 4& 4& 1& 1& 0& 2& 0& 3 \\ 4& 4& 1& 1& 2& 2& 0& 3& 0 \\ 4& 1& 1& 2& 2& 0& 3& 0& 3 \\ 2& 1& 2& 4& 4& 3& 0& 3& 0 \end{array} \right) $$ \item \begin{eqnarray} D_1& = &1, 2, 2, 2, 2, 1, 2, 2, 2, 2 \nonumber\\ D_2& = &2, 1, 2, 2, 2, 2, 1, 2, 2, 
2 \nonumber\\ D_3& = &2, 2, 1, 2, 2, 2, 2, 1, 2, 2 \nonumber\\ D_4& = &2, 2, 2, 1, 2, 2, 2, 2, 1, 2 \nonumber \end{eqnarray} $$ \left( \begin{array}{cccccccccc} 0& 1& 2& 2& 3& 3& 4& 4& 0& 0 \\ 1& 0& 2& 3& 3& 4& 4& 0& 0& 1 \\ 2& 2& 0& 3& 4& 4& 0& 0& 1& 1 \\ 2& 3& 3& 0& 4& 0& 0& 1& 1& 2 \\ 3& 3& 4& 4& 0& 0& 1& 1& 2& 2 \\ 3& 4& 4& 0& 0& 0& 1& 2& 2& 3 \\ 4& 4& 0& 0& 1& 1& 0& 2& 3& 3 \\ 4& 0& 0& 1& 1& 2& 2& 0& 3& 4 \\ 0& 0& 1& 1& 2& 2& 3& 3& 0& 4 \\ 0& 1& 1& 2& 2& 3& 3& 4& 4& 0 \end{array} \right) $$ \item \begin{eqnarray} D_1& = &1, 3, 2, 2, 2, 1, 2, 2, 2, 1 \nonumber\\ D_2& = &2, 1, 2, 2, 2, 2, 1, 2, 2, 2 \nonumber\\ D_3& = &2, 2, 1, 2, 2, 2, 2, 1, 2, 2 \nonumber\\ D_4& = &2, 2, 2, 1, 2, 2, 2, 2, 1, 2 \nonumber \end{eqnarray} $$ \left( \begin{array}{cccccccccc} 0& 1& 0& 2& 3& 3& 4& 4& 0& 2 \\ 1& 0& 2& 3& 3& 4& 4& 0& 1& 1 \\ 0& 2& 0& 3& 4& 4& 0& 1& 1& 2 \\ 2& 3& 3& 0& 0& 0& 1& 1& 2& 4 \\ 3& 3& 4& 0& 0& 1& 1& 2& 2& 4 \\ 3& 4& 4& 0& 1& 0& 2& 2& 3& 0 \\ 4& 4& 0& 1& 1& 2& 0& 0& 3& 3 \\ 4& 0& 1& 1& 2& 2& 0& 0& 4& 3 \\ 0& 1& 1& 2& 2& 3& 3& 4& 0& 0 \\ 2& 1& 2& 4& 4& 0& 3& 3& 0& 0 \end{array} \right) $$ \item \begin{eqnarray} D_1& = &1, 2, 2, 2, 3, 1, 2, 2, 2, 1 \nonumber\\ D_2& = &2, 1, 2, 2, 2, 2, 1, 2, 2, 2 \nonumber\\ D_3& = &2, 2, 1, 2, 2, 2, 2, 1, 2, 2 \nonumber\\ D_4& = &2, 2, 2, 1, 2, 2, 2, 2, 1, 2 \nonumber \end{eqnarray} $$ \left( \begin{array}{cccccccccc} 0& 1& 0& 2& 3& 3& 4& 4& 0& 2 \\ 1& 0& 2& 0& 3& 4& 4& 0& 1& 3 \\ 0& 2& 0& 3& 4& 4& 0& 1& 1& 2 \\ 2& 0& 3& 0& 4& 0& 1& 1& 2& 3 \\ 3& 3& 4& 4& 0& 1& 1& 2& 2& 1 \\ 3& 4& 4& 0& 1& 0& 2& 2& 3& 0 \\ 4& 4& 0& 1& 1& 2& 0& 3& 3& 0 \\ 4& 0& 1& 1& 2& 2& 3& 0& 0& 4 \\ 0& 1& 1& 2& 2& 3& 3& 0& 0& 4 \\ 2& 3& 2& 3& 1& 0& 0& 4& 4& 0 \end{array} \right) $$ \item \begin{eqnarray} D_1& = &1, 4, 2, 2, 1, 2, 2, 2, 1, 1 \nonumber\\ D_2& = &2, 1, 2, 2, 2, 1, 2, 2, 2, 2 \nonumber\\ D_3& = &2, 2, 1, 2, 2, 2, 1, 2, 2, 2 \nonumber\\ D_4& = &2, 2, 2, 1, 2, 2, 2, 1, 2, 2 \nonumber \end{eqnarray} $$ \left( \begin{array}{cccccccccc} 0& 1& 0& 0& 3& 
3& 4& 4& 2& 2 \\ 1& 0& 2& 3& 3& 4& 4& 1& 1& 1 \\ 0& 2& 0& 3& 0& 4& 1& 1& 2& 4 \\ 0& 3& 3& 0& 0& 1& 1& 2& 4& 2 \\ 3& 3& 0& 0& 0& 1& 2& 2& 4& 4 \\ 3& 4& 4& 1& 1& 0& 2& 0& 3& 0 \\ 4& 4& 1& 1& 2& 2& 0& 0& 0& 3 \\ 4& 1& 1& 2& 2& 0& 0& 0& 3& 3 \\ 2& 1& 2& 4& 4& 3& 0& 3& 0& 0 \\ 2& 1& 4& 2& 4& 0& 3& 3& 0& 0 \end{array} \right) $$ \item \begin{eqnarray} D_1& = &1, 3, 2, 2, 1, 3, 2, 2, 1, 1 \nonumber\\ D_2& = &2, 1, 2, 2, 2, 1, 2, 2, 2, 2 \nonumber\\ D_3& = &2, 2, 1, 2, 2, 2, 1, 2, 2, 2 \nonumber\\ D_4& = &2, 2, 2, 1, 2, 2, 2, 1, 2, 2 \nonumber \end{eqnarray} $$ \left( \begin{array}{cccccccccc} 0& 1& 0& 2& 0& 3& 4& 4& 2& 3 \\ 1& 0& 0& 3& 3& 4& 4& 1& 1& 2 \\ 0& 0& 0& 3& 4& 4& 1& 1& 2& 2 \\ 2& 3& 3& 0& 0& 1& 1& 2& 0& 4 \\ 0& 3& 4& 0& 0& 1& 2& 2& 4& 3 \\ 3& 4& 4& 1& 1& 0& 2& 0& 3& 1 \\ 4& 4& 1& 1& 2& 2& 0& 3& 0& 0 \\ 4& 1& 1& 2& 2& 0& 3& 0& 3& 0 \\ 2& 1& 2& 0& 4& 3& 0& 3& 0& 4 \\ 3& 2& 2& 4& 3& 1& 0& 0& 4& 0 \end{array} \right) $$ \item \begin{eqnarray} D_1& = &1, 3, 3, 2, 1, 2, 2, 2, 1, 1 \nonumber\\ D_2& = &2, 1, 2, 2, 2, 1, 2, 2, 2, 2 \nonumber\\ D_3& = &2, 2, 1, 2, 2, 2, 1, 2, 2, 2 \nonumber\\ D_4& = &2, 2, 2, 1, 2, 2, 2, 1, 2, 2 \nonumber \end{eqnarray} $$ \left( \begin{array}{cccccccccc} 0& 1& 0& 0& 3& 3& 4& 4& 2& 2 \\ 1& 0& 2& 3& 3& 0& 4& 1& 1& 4 \\ 0& 2& 0& 3& 4& 4& 1& 1& 2& 1 \\ 0& 3& 3& 0& 0& 1& 1& 2& 4& 2 \\ 3& 3& 4& 0& 0& 1& 2& 2& 4& 0 \\ 3& 0& 4& 1& 1& 0& 2& 0& 3& 4 \\ 4& 4& 1& 1& 2& 2& 0& 0& 0& 3 \\ 4& 1& 1& 2& 2& 0& 0& 0& 3& 3 \\ 2& 1& 2& 4& 4& 3& 0& 3& 0& 0 \\ 2& 4& 1& 2& 0& 4& 3& 3& 0& 0 \end{array} \right) $$ \item \begin{eqnarray} D_1& = &1, 3, 2, 2, 1, 2, 2, 2, 1, 2 \nonumber\\ D_2& = &2, 1, 2, 2, 2, 1, 2, 2, 2, 2 \nonumber\\ D_3& = &2, 3, 1, 2, 2, 2, 1, 2, 2, 1 \nonumber\\ D_4& = &2, 2, 2, 1, 2, 2, 2, 1, 2, 2 \nonumber \end{eqnarray} $$ \left( \begin{array}{cccccccccc} 0& 1& 0& 2& 3& 3& 4& 0& 2& 4 \\ 1& 0& 2& 3& 3& 4& 4& 1& 1& 3 \\ 0& 2& 0& 3& 4& 4& 1& 1& 2& 0 \\ 2& 3& 3& 0& 0& 0& 1& 2& 4& 1 \\ 3& 3& 4& 0& 0& 1& 0& 2& 4& 2 \\ 3& 4& 4& 0& 1& 0& 2& 0& 3& 1 
\\ 4& 4& 1& 1& 0& 2& 0& 3& 0& 2 \\ 0& 1& 1& 2& 2& 0& 3& 0& 3& 4 \\ 2& 1& 2& 4& 4& 3& 0& 3& 0& 0 \\ 4& 3& 0& 1& 2& 1& 2& 4& 0& 0 \end{array} \right) $$ \item \begin{eqnarray} D_1& = &1, 3, 2, 2, 1, 2, 2, 2, 1, 2 \nonumber\\ D_2& = &3, 1, 2, 2, 2, 1, 2, 2, 2, 1 \nonumber\\ D_3& = &2, 2, 1, 2, 2, 2, 1, 2, 2, 2 \nonumber\\ D_4& = &2, 2, 2, 1, 2, 2, 2, 1, 2, 2 \nonumber \end{eqnarray} $$ \left( \begin{array}{cccccccccc} 0& 1& 0& 2& 3& 3& 4& 4& 2& 2 \\ 1& 0& 2& 3& 3& 4& 4& 1& 1& 0 \\ 0& 2& 0& 3& 0& 4& 1& 1& 2& 4 \\ 2& 3& 3& 0& 0& 0& 1& 2& 4& 1 \\ 3& 3& 0& 0& 0& 1& 2& 2& 4& 4 \\ 3& 4& 4& 0& 1& 0& 2& 0& 3& 1 \\ 4& 4& 1& 1& 2& 2& 0& 0& 0& 3 \\ 4& 1& 1& 2& 2& 0& 0& 0& 3& 3 \\ 2& 1& 2& 4& 4& 3& 0& 3& 0& 0 \\ 2& 0& 4& 1& 4& 1& 3& 3& 0& 0 \end{array} \right) $$ \item \begin{eqnarray} D_1& = &1, 3, 2, 2, 1, 2, 2, 2, 1, 2 \nonumber\\ D_2& = &2, 1, 3, 2, 2, 1, 2, 2, 2, 1 \nonumber\\ D_3& = &2, 2, 1, 2, 2, 2, 1, 2, 2, 2 \nonumber\\ D_4& = &2, 2, 2, 1, 2, 2, 2, 1, 2, 2 \nonumber \end{eqnarray} $$ \left( \begin{array}{cccccccccc} 0& 0& 0& 2& 3& 3& 4& 4& 2& 1 \\ 0& 0& 2& 3& 3& 4& 4& 1& 1& 1 \\ 0& 2& 0& 3& 4& 4& 1& 1& 2& 2 \\ 2& 3& 3& 0& 0& 1& 1& 2& 0& 4 \\ 3& 3& 4& 0& 0& 1& 2& 2& 4& 0 \\ 3& 4& 4& 1& 1& 0& 2& 0& 3& 0 \\ 4& 4& 1& 1& 2& 2& 0& 0& 0& 3 \\ 4& 1& 1& 2& 2& 0& 0& 0& 3& 3 \\ 2& 1& 2& 0& 4& 3& 0& 3& 0& 4 \\ 1& 1& 2& 4& 0& 0& 3& 3& 4& 0 \end{array} \right) $$ \item \begin{eqnarray} D_1& = &1, 3, 2, 2, 1, 2, 2, 2, 1, 2 \nonumber\\ D_2& = &2, 1, 2, 2, 2, 1, 2, 2, 2, 2 \nonumber\\ D_3& = &2, 2, 1, 2, 2, 3, 1, 2, 2, 1 \nonumber\\ D_4& = &2, 2, 2, 1, 2, 2, 2, 1, 2, 2 \nonumber \end{eqnarray} $$ \left( \begin{array}{cccccccccc} 0& 0& 0& 2& 3& 3& 4& 4& 2& 1 \\ 0& 0& 2& 3& 3& 4& 4& 1& 1& 1 \\ 0& 2& 0& 3& 4& 4& 1& 1& 2& 0 \\ 2& 3& 3& 0& 0& 1& 1& 2& 0& 4 \\ 3& 3& 4& 0& 0& 1& 0& 2& 4& 2 \\ 3& 4& 4& 1& 1& 0& 2& 0& 3& 3 \\ 4& 4& 1& 1& 0& 2& 0& 3& 0& 2 \\ 4& 1& 1& 2& 2& 0& 3& 0& 3& 0 \\ 2& 1& 2& 0& 4& 3& 0& 3& 0& 4 \\ 1& 1& 0& 4& 2& 3& 2& 0& 4& 0 \end{array} \right) $$ \item \begin{eqnarray} 
D_1& = &1, 3, 2, 2, 1, 2, 2, 2, 1, 2 \nonumber\\ D_2& = &2, 1, 2, 2, 2, 1, 2, 2, 2, 2 \nonumber\\ D_3& = &2, 2, 1, 3, 2, 2, 1, 2, 2, 1 \nonumber\\ D_4& = &2, 2, 2, 1, 2, 2, 2, 1, 2, 2 \nonumber \end{eqnarray} $$ \left( \begin{array}{cccccccccc} 0& 0& 0& 2& 3& 3& 4& 4& 2& 1 \\ 0& 0& 2& 3& 3& 4& 4& 1& 1& 1 \\ 0& 2& 0& 3& 4& 0& 1& 1& 2& 4 \\ 2& 3& 3& 0& 0& 1& 1& 2& 4& 3 \\ 3& 3& 4& 0& 0& 1& 0& 2& 4& 2 \\ 3& 4& 0& 1& 1& 0& 2& 0& 3& 4 \\ 4& 4& 1& 1& 0& 2& 0& 3& 0& 2 \\ 4& 1& 1& 2& 2& 0& 3& 0& 3& 0 \\ 2& 1& 2& 4& 4& 3& 0& 3& 0& 0 \\ 1& 1& 4& 3& 2& 4& 2& 0& 0& 0 \end{array} \right) $$ \end{enumerate} \begin{acknowledgements} IM is supported by NKFIH Funds No. K116769 and No. SNN-117879. \end{acknowledgements}
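The appendix realizations can be verified mechanically. The following Python sketch (the helper name \texttt{check\_realization} is ours) checks that a color-labelled adjacency matrix encodes edge disjoint spanning trees with the prescribed degree sequences, and runs the check on case 1 above.

```python
from itertools import combinations

def check_realization(A, degree_seqs):
    """Check that the color-labelled adjacency matrix A encodes edge disjoint
    spanning trees: entry A[i][j] = c > 0 means {i, j} is an edge of tree c."""
    n = len(A)
    assert all(A[i][j] == A[j][i] for i in range(n) for j in range(n))
    for c, degs in enumerate(degree_seqs, start=1):
        edges = [(i, j) for i, j in combinations(range(n), 2) if A[i][j] == c]
        assert len(edges) == n - 1                      # right edge count for a tree
        assert [sum(v in e for e in edges) for v in range(n)] == degs
        # connectivity: grow the component of vertex 0 along the color-c edges
        comp, frontier = {0}, [0]
        while frontier:
            v = frontier.pop()
            for i, j in edges:
                for a, b in ((i, j), (j, i)):
                    if a == v and b not in comp:
                        comp.add(b)
                        frontier.append(b)
        assert comp == set(range(n))                    # n-1 edges + connected = tree

# Case 1 of the appendix (8 vertices, four path degree sequences):
A = [[0,1,2,2,3,3,4,4],
     [1,0,2,3,3,4,4,1],
     [2,2,0,3,4,4,1,1],
     [2,3,3,0,4,1,1,2],
     [3,3,4,4,0,1,2,2],
     [3,4,4,1,1,0,2,3],
     [4,4,1,1,2,2,0,3],
     [4,1,1,2,2,3,3,0]]
check_realization(A, [[1,2,2,2,1,2,2,2],
                      [2,1,2,2,2,1,2,2],
                      [2,2,1,2,2,2,1,2],
                      [2,2,2,1,2,2,2,1]])
```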
\section{Introduction and Results} Let $X$ be a smooth projective curve of genus 2 over an algebraically closed field $k$ of characteristic $p > 0$. Let $\Omega$ be its canonical bundle. Define the (absolute) Frobenius morphism \cite{Ka1, Ka2} \begin{equation*} F : X \longrightarrow X \end{equation*} which maps local sections $f \in \O_X$ to $f^p$. As $X$ is smooth, $F$ is a (finite) flat map. Let $J^0, J^1$ be the moduli schemes of isomorphism classes of line bundles of degree $0$ and $1$, respectively. Choose a theta characteristic $L_{\theta} \in J^1$. Denote by $S_O$ (resp. $S_{\theta}$) the moduli scheme of S-equivalence classes of semi-stable vector bundles of rank $2$ and determinant $O_X$ (resp. $L_{\theta}$) on $X$ \cite{Se}. We study the Frobenius pull-backs of the bundles in $S_O$ and $S_{\theta}$. The geometry of $S_{\theta}$ has been studied extensively by Bhosle \cite{Bh}. The operation of Frobenius pull-back has a tendency to destabilize bundles \cite{Ra}. In particular, the map $V \longmapsto F^*(V)$ is only a rational map on the moduli scheme. The Frobenius destabilizes only finitely many bundles in $S_O$ (see Theorem~\ref{prop:3.2b}). For any $V \in S_O$, Proposition~\ref{prop:3.5} gives a necessary and sufficient criterion for $F^*(V)$ to be non-semi-stable in terms of the theta characteristic. For a given vector bundle $V$ on $X$, let $$ J_2(V) = \{V \otimes L : L \in J^0, L^2 = O_X\}. $$ \begin{thm} \label{thm:2} Suppose $p=2$. Then there exists a bundle $V_1 \in Ext^1(L_{\theta}, \O_X)$ such that if $$ V \in S_{\theta} \setminus J_2(V_1), $$ then $F^*(V)$ is semi-stable. Hence, the Frobenius map induces a map $$ \Omega^{-1} \otimes F^* : S_{\theta} \setminus J_2(V_1) \longrightarrow S_O. $$ \end{thm} We show that there is a natural way of resolving the indeterminacy of the Frobenius map at the points in $J_2(V_1)$, by replacing $S_O, S_{\theta}$ with moduli schemes of suitable Higgs bundles. Denote by $S_O(\Omega)$ (resp.
$S_{\theta}(L_{\theta})$) the moduli scheme of semi-stable Higgs bundles with associated line bundle $\Omega$ (resp. $L_{\theta}$) \cite{Ni}. For any Higgs bundle on $X$, one may also consider its Frobenius pull-back. \begin{thm} \label{thm:3} Suppose $p=2$. \begin{enumerate} \item If $(V, \phi) \in S_{\theta}(L_{\theta})$, then either $V \in S_{\theta}$ or $V \in J_2(O_X \oplus L_{\theta})$. \item There exist Higgs fields $\phi_0$ and $\phi_1$ such that $(F^*(W), F^*(\phi_0))$ and $(F^*(V), F^*(\phi_1))$ are semi-stable for all $W \in J_2(O_X \oplus L_{\theta})$ and $V \in J_2(V_1)$. \end{enumerate} Hence, the Frobenius defines a map on a Zariski open set $U \subset S_{\theta}(L_{\theta})$ $$ \Omega^{-1} \otimes F^* : U \longrightarrow S_O(\O_X), $$ where $U$ contains the scheme $S_{\theta} \setminus J_2(V_1)$ and the points $(W, \phi_0), (V, \phi_1)$ for any $W \in J_2(O_X \oplus L_{\theta})$ and $V \in J_2(V_1)$. \end{thm} Cartier's theorem gives a criterion for descent under Frobenius \cite{Ka1}. Higgs bundles appear naturally in the characteristic $p > 0$ context. To see this, let $(V, \nabla)$ be a vector bundle with a (flat) connection, $$ \nabla : V \longrightarrow \Omega \otimes_{O_X} V. $$ One associates to the pair $(V, \nabla)$ its $p$-curvature, which is a homomorphism of $O_X$-modules \cite{Ka1, Ka2}: $$ \psi : V \longrightarrow F^*(\Omega) \otimes_{O_X} V. $$ Thus the pair $(V, \nabla)$ gives a Higgs bundle with associated line bundle $F^*(\Omega)$. \centerline{\sc Acknowledgments} We thank Professor Usha Bhosle for reading a previous version and for her comments and suggestions for improvement. We thank Professors Minhyong Kim, N. Mohan Kumar, V. B. Mehta and S. Ramanan for insightful discussions and comments. Finally, we thank the referee for his or her comments. \section{Bundle Extensions and the Frobenius Morphism} Suppose $L$ is a line bundle on $X$. Then $F^*(L) = L^p$.
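The identity $F^*(L) = L^p$ can be seen directly on transition functions; the following short check is standard and included only to fix conventions, assuming $L$ is trivialized on a cover $\{U_i\}$ with transition functions $g_{ij}$:

```latex
% The absolute Frobenius acts on functions by f -> f^p, so the pull-back
% bundle F^*(L) is glued by the p-th powers of the transition functions:
\[
  F^{*}(L) \text{ is glued by } F^{\#}(g_{ij}) = g_{ij}^{\,p},
  \qquad\text{hence}\qquad
  F^{*}(L) \cong L^{\otimes p}
  \quad\text{and}\quad
  \deg F^{*}(L) = p \deg L .
\]
```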
The push-forward, $F_*(O_X)$, is a vector bundle of rank $p$ and one has the exact sequence of vector bundles \cite{Ra} \begin{equation*} 0 \longrightarrow O_X \longrightarrow F_*(O_X) \longrightarrow B_1 \longrightarrow 0. \end{equation*} Tensoring the sequence with a line bundle $L$ and using the projection formula, we obtain \begin{equation*} 0 \longrightarrow L \longrightarrow F_*(L^p) \longrightarrow B_1 \otimes L \longrightarrow 0. \end{equation*} The associated long cohomology sequence is \begin{equation*} \cdots \longrightarrow H^0(B_1 \otimes L) \longrightarrow H^1(L) \stackrel{f_L}{\longrightarrow} H^1(F_*(L^p)) \longrightarrow \cdots \end{equation*} Since $F$ is an affine morphism, the Leray spectral sequence for $F$ degenerates at $E_2$. Hence \begin{equation*} H^i(F_*(L^p)) \cong H^i(L^p). \end{equation*} Substituting this into the long exact sequence, one obtains \begin{equation} \cdots \longrightarrow H^0(B_1 \otimes L) \longrightarrow H^1(L) \stackrel{f_L}{\longrightarrow} H^1(L^p) \longrightarrow \cdots \end{equation} Suppose $V \in Ext^1(L_2, L_1) \cong H^1(L_2^{-1} \otimes L_1)$, i.e. \begin{equation*} \label{eqn:1} 0 \longrightarrow L_1 \longrightarrow V \longrightarrow L_2 \longrightarrow 0, \end{equation*} where $L_1, L_2$ are line bundles. Since $F$ is a flat morphism, we have \begin{equation*} 0 \longrightarrow F^*(L_1) \longrightarrow F^*(V) \longrightarrow F^*(L_2) \longrightarrow 0. \end{equation*} This gives a map \begin{equation*} F^* : Ext^1(L_2, L_1) \longrightarrow Ext^1(F^*(L_2), F^*(L_1)) \cong Ext^1(L_2^p, L_1^p). \end{equation*} Take $L = L_2^{-1} \otimes L_1$ in (1). \begin{prop} \label{prop:2.1} $F^*(V) = L_1^p \oplus L_2^p$ if and only if $V$ is in the image of the connecting homomorphism $$ H^0(B_1 \otimes L_2^{-1} \otimes L_1) \longrightarrow H^1(L_2^{-1} \otimes L_1). 
$$ \end{prop} \begin{pf} Since the functors $\Gamma(X,\cdot)$ and $Hom(O_X,\cdot)$ are equivalent, the diagram \begin{equation*} \begin{CD} H^1(L_2^{-1} \otimes L_1) @>f_{L_2^{-1} \otimes L_1}>> H^1(L_2^{-p} \otimes L_1^p)\\ @VV{id}V @VV{id}V\\ Ext^1(L_2, L_1) @>F^*>> Ext^1(L_2^p, L_1^p) \end{CD} \end{equation*} commutes. Now the proposition follows directly from the long exact sequence $$ \cdots \longrightarrow H^0(B_1 \otimes L_2^{-1} \otimes L_1) \longrightarrow H^1(L_2^{-1} \otimes L_1) \longrightarrow H^1(L_2^{-p} \otimes L_1^p) \longrightarrow \cdots $$ \end{pf} \section{The moduli of Semi-Stable Vector and Higgs Bundles} Suppose $V$ is a vector bundle on $X$. The slope of $V$ is defined as $$ \mu(V) = \deg(V) / \mbox{rank}(V). $$ A vector bundle $V$ is semi-stable (resp. stable) if for every proper subbundle $W$ of $V$, $\mu(W) \le \mu(V)$ (resp. $\mu(W) < \mu(V)$). The schemes $S_O$ and $S_{\theta}$ are defined to be the moduli schemes of all $S$-equivalence classes \cite{Se} of rank 2 semi-stable vector bundles with determinant equal to $O_X$ and $L_{\theta}$, respectively. A Higgs bundle $(V,\phi)$ with an associated line bundle $L$ on $X$ consists of a vector bundle $V$ and a Higgs field, which is a morphism of bundles: $$ \phi : V \longrightarrow V \otimes L. $$ The Frobenius pulls back Higgs fields, $$ F^*(\phi) : F^*(V) \longrightarrow F^*(V) \otimes F^*(L), $$ hence pulls back Higgs bundles. A Higgs bundle $(V,\phi)$ is said to be semi-stable (resp. stable) if for every proper subbundle $W$ of $V$ satisfying $\phi(W) \subset W \otimes L$, one has $\mu(W) \le \mu(V)$ (resp. $\mu(W) < \mu(V)$). The scheme $S_O(\Omega)$ (resp. $S_{\theta}(L_{\theta})$) is defined to be the moduli scheme of all $S$-equivalence classes of rank 2 semi-stable Higgs bundles on $X$ with determinant $O_X$ (resp. $L_{\theta}$) and with associated line bundle $\Omega$ (resp. $L_{\theta}$) \cite{Ni}. Let $$ K = \{V \in S_O : V \mbox{ is semi-stable but not stable}\}.
$$ Suppose $V \in K$. Then there exists $L \in J^0$ such that $$ 0 \longrightarrow L^{-1} \longrightarrow V \longrightarrow L \longrightarrow 0. $$ The pull-back of $V$ by Frobenius then fits into the sequence $$ 0 \longrightarrow L^{-p} \stackrel{f_1}{\longrightarrow} F^*(V) \stackrel{f_2}{\longrightarrow} L^p \longrightarrow 0. $$ \begin{prop} \label{prop:3.1} The map $$ F^* : K \longrightarrow K $$ is a well-defined morphism. \end{prop} \begin{pf} Let $H \subset F^*(V)$ be a line subbundle of maximal degree. If $f_2 |_H = 0$, then $H = L^{-p}$ and $\deg(H) = \deg(L^{-p}) = 0$. If $f_2 |_H \neq 0$, then $\deg(H) \le \deg(L^p) = 0$. \end{pf} In general, $F^*(V)$ may not be semi-stable. For example, a theorem of Raynaud states that the bundle $B_1$ is always semi-stable while $F^*(B_1)$ is never semi-stable for $p > 2$ \cite{Ra}. The following theorem was communicated to Joshi by V.B. Mehta: \begin{thm}\label{prop:3.2b} Let $X$ be a curve of genus $2$ over an algebraically closed field of characteristic $p > 2$. Then there exists a finite set $S$ such that $F^*(V)$ is semi-stable for all $V \in S_O \setminus S$. In other words, $F^*$ induces a morphism: $$ F^* : S_O \setminus S \longrightarrow S_O. $$ \end{thm} \begin{pf} By a theorem of Narasimhan-Ramanan in characteristic $0$ \cite{Na}, $S_O \cong {\mathbb P}^3$. Moreover, as was remarked to one of us by Ramanan, the proof given there works in every characteristic $p \neq 2$. The Frobenius morphism is defined on a non-empty Zariski open set $U$ in $S_O \cong {\mathbb P}^3$. By Proposition~\ref{prop:3.1}, $U$ contains $K$, which is an ample divisor in ${\mathbb P}^3$. Therefore $S_O \setminus U$ is of co-dimension 3, hence is a finite set (any positive-dimensional closed subvariety of ${\mathbb P}^3$ would meet the ample divisor $K$). Note that $K$ can also be identified with the Kummer surface of $J^0$ in ${\mathbb P}^3$ \cite{Na}. \end{pf} When $X$ is ordinary, $F^*$ is \'{e}tale on a non-empty Zariski open set of $S_O$ \cite{Me}.
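For the reader's convenience, the degree bookkeeping behind Proposition~\ref{prop:3.1} can be made explicit (our summary of the argument above, not a new statement):

```latex
% Our summary; it uses only det F^*(V) = F^*(det V) = O_X.
Since $\det F^*(V) = F^*(\det V) = O_X$, the slope is
\begin{equation*}
\mu\bigl(F^*(V)\bigr) = \tfrac{1}{2}\deg F^*(V) = 0 ,
\end{equation*}
and both cases of the proof give $\deg(H) \le 0 = \mu\bigl(F^*(V)\bigr)$
for a maximal-degree line subbundle $H \subset F^*(V)$, so $F^*(V)$ is
semi-stable. Since $L^{-p} \subset F^*(V)$ has degree $0 = \mu(F^*(V))$,
the bundle $F^*(V)$ is semi-stable but not stable, i.e.\ $F^*(V) \in K$.
```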
Although we are unable to identify this finite set explicitly, we provide the following criterion. \begin{prop} \label{prop:3.5} Let $X$ be a curve of genus $2$ over an algebraically closed field of characteristic $p > 0$. Suppose $V \in S_O$. Then $F^*(V)$ is not semi-stable if and only if $F^*(V)$ is an extension $$ 0 \longrightarrow M \longrightarrow F^*(V) \longrightarrow M^{-1} \longrightarrow 0, $$ where $M \in J_2(L_{\theta})$. \end{prop} \begin{pf} One direction is clear. We use inseparable descent to prove the other direction. Suppose $F^*(V)$ is not semi-stable. Then we have an exact sequence $$ 0 \longrightarrow M \longrightarrow F^*(V) \longrightarrow M^{-1} \longrightarrow 0, $$ where $\deg(M) > 0$. Following \cite{Ka1}, consider the natural connection on $F^*(V)$ with zero $p$-curvature. Then the second fundamental form of this connection is a morphism \begin{equation*} T_X \longrightarrow Hom(M,M^{-1})=M^{-2}. \end{equation*} Since $V$ is semi-stable, this morphism cannot be the zero morphism. In other words, $M^{-2} \otimes \Omega$ has a non-zero section, so $0 \le \deg(M^{-2} \otimes \Omega) = 2 - 2\deg(M)$. Since $\deg(M) > 0$, this forces $\deg(M) = 1$, and a non-zero section of the degree-zero line bundle $M^{-2} \otimes \Omega$ trivializes it, so $\Omega = M^2$. Hence $M \in J_2(L_{\theta})$. \end{pf} \section{The Moduli Spaces in Characteristic 2} In this section, we assume $p=2$. Then $B_1$ is a line bundle, equal to a theta characteristic \cite{Ra}. Choose $L_{\theta}$ to be $B_1$. \subsection{The moduli of semi-stable bundles} Suppose $V \in S_{\theta}$. By a theorem in \cite{Na}, there exist $L_1 \in J^0, L_2 \in J^1$ with $L_1 \otimes L_2 = L_{\theta}$ such that $V \in Ext^1(L_2, L_1)$. Since $L_{\theta} = B_1$, $h^0(B_1 \otimes L_2^{-1} \otimes L_1)$ is 1 if $L_{\theta} = L_2 \otimes L_1^{-1}$ and $0$ otherwise.
Hence, by Proposition~\ref{prop:2.1}, there is a unique (up to scalar) $V_1$, not isomorphic to $O_X \oplus L_{\theta}$, fitting into $$ 0 \longrightarrow O_X \longrightarrow V_1 \longrightarrow L_{\theta} \longrightarrow 0 $$ such that $F^*(V_1) = O_X \oplus \Omega$. It is immediate that $V_1$ is stable \cite{Na}. Suppose $V \not\in J_2(V_1)$. Then by Proposition~\ref{prop:2.1}, $$ F^*(V) \neq F^*(L_1) \oplus F^*(L_2) = L_1^2 \oplus L_2^2. $$ If $M \subset F^*(V)$ is a destabilizing subbundle, i.e.\ $\deg(M) \ge 2$, then the composite $M \to L_2^2$ is non-zero (since $\deg(L_1^2) = 0$), so $M^{-1} \otimes L_2^2$ has a global section, implying $$ \deg(M) \le \deg(L_2^2) = 2. $$ Moreover, if $\deg(M) = 2$, then $M = L_2^2$, implying that $F^*(V)$ contains $L_2^2$ as a subbundle. Then the sequence $$ 0 \longrightarrow L_1^2 \longrightarrow F^*(V) \longrightarrow L_2^2 \longrightarrow 0, $$ splits. This is a contradiction. This proves Theorem~\ref{thm:2}. \subsection{Restoring Frobenius Stability: Higgs Bundles} The scheme $S_{\theta}$ embeds in $S_{\theta}(L_{\theta})$ by the map $V \longmapsto (V,0)$. If $(V, \phi) \in S_{\theta}(L_{\theta})$ and $V$ is not semi-stable, then $V$ is an extension \begin{equation} 0 \longrightarrow L_1 \longrightarrow V \stackrel{f}{\longrightarrow} L_2 \longrightarrow 0, \end{equation} where $\deg(L_1) \ge 1 > \deg(L_2)$. Moreover, $\phi(L_1)$ is not contained in $L_1 \otimes L_{\theta}$ (otherwise $L_1$ would be $\phi$-invariant and destabilize $(V,\phi)$). This implies that there exists a line bundle $H \subset V$ such that $H \neq L_1$ and $\phi(L_1) \subset H \otimes L_{\theta}$. Then $$ \deg(L_1) \le \deg(H) + \deg(L_{\theta}). $$ Since $L_1 \neq H$, $0 \neq f(H) \subset L_2$ implies that $\deg(H) \le \deg(L_2)$. To summarize, we have the following inequalities: $$ \deg(L_2) + \deg(L_{\theta}) \ge \deg(H) + \deg(L_{\theta}) \ge \deg(L_1) > \deg(L_2). $$ Since $\deg(L_{\theta}) = 1$, $\deg(L_1) = \deg(H) + 1 = \deg(L_2) + 1 = 1$.
The degree of $H$ is thus zero, implying that $f(H) = L_2$, so the exact sequence (2) splits. In addition, since $0 \neq \phi(L_1) \subset H \otimes L_{\theta}$, $\phi |_{L_1}$ must be a non-zero constant morphism and $$ L_1 = L_2 \otimes L_{\theta}. $$ Since $L_1 \otimes L_2 = L_{\theta}$, $V \in J_2(O_X \oplus L_{\theta})$. This proves the first part of Theorem~\ref{thm:3}. Suppose $(V,\phi) \in S_{\theta}(L_{\theta})$. If $V \in S_{\theta} \setminus J_2(V_1)$, then $F^*(V)$ is semi-stable by Theorem~\ref{thm:2}; hence $(F^*(V), F^*(\phi))$ is semi-stable. \noindent{\sl The split case:} Suppose $W = L \oplus (L \otimes L_{\theta})$, where $L \in J_2(O_X)$. We take the Higgs field $\phi_0$ to be the identity map: $$ 1 = \phi_0 : L \otimes L_{\theta} \longrightarrow L \otimes L_{\theta}. $$ If $M \subset W$, then either $M = L \otimes L_{\theta}$ or $\mu(M) < \mu(W)$. Since $L \otimes L_{\theta}$ is not $\phi_0$-invariant, $(W,\phi_0)$ is stable. The Frobenius pull-back $F^*(\phi_0)$ is again a constant map $$ F^*(\phi_0) : \Omega \longrightarrow O_X \otimes \Omega. $$ Now if $N \subset O_X \oplus \Omega$, then either $N = \Omega$ or $\mu(N) < \mu(O_X \oplus \Omega)$. Since $\Omega$ is not $F^*(\phi_0)$-invariant, $(F^*(W), F^*(\phi_0))$ is stable. \noindent{\sl The non-split case:} Suppose $V = L \otimes V_1$, where $L \in J_2(O_X)$. The bundle $V$ is a non-trivial extension: \begin{equation} 0 \longrightarrow L \stackrel{f_1}{\longrightarrow} V \stackrel{f_2}{\longrightarrow} L \otimes L_{\theta} \longrightarrow 0. \end{equation} Tensoring the sequence with $L_{\theta}$ gives \begin{equation} 0 \longrightarrow L \otimes L_{\theta} \stackrel{g_1}{\longrightarrow} V \otimes L_{\theta} \stackrel{g_2}{\longrightarrow} L \otimes \Omega \longrightarrow 0. \end{equation} Set $$ \phi_1 = g_1 \circ \phi_0 \circ f_2 : V \longrightarrow L_{\theta} \otimes V. $$ The Frobenius pull-back decomposes $V$: $$ F^*(V) = O_X \oplus \Omega.
$$ Pulling back the exact sequences (3) and (4) by Frobenius gives $$ 0 \longrightarrow O_X \stackrel{F^*(f_1)}{\longrightarrow} O_X \oplus \Omega \stackrel{F^*(f_2)}{\longrightarrow} \Omega \longrightarrow 0 $$ $$ 0 \longrightarrow O_X \otimes \Omega \stackrel{F^*(g_1)}{\longrightarrow} (O_X \oplus \Omega) \otimes \Omega \stackrel{F^*(g_2)}{\longrightarrow} \Omega \otimes \Omega \longrightarrow 0 $$ Suppose $N \subset O_X \oplus \Omega$. Then either $N = \Omega$ or $\mu(N) < \mu(O_X \oplus \Omega)$. The Frobenius pull-back of $\phi_1$ is a composition: $$ F^*(\phi_1) = F^*(g_1) \circ F^*(\phi_0) \circ F^*(f_2). $$ Since the map $F^*(f_2)$ is surjective, the restriction map $F^*(f_2)|_{\Omega}$ is an isomorphism. The map $\phi_0$ is an isomorphism and $g_1$ is injective; hence, $g_1 \circ \phi_0$ is injective. This implies $F^*(g_1) \circ F^*(\phi_0)$ is injective. Therefore $F^*(\phi_1)|_{\Omega}$ is injective. Since $\deg(\Omega) < \deg(\Omega \otimes \Omega)$, $F^*(\phi_1)|_{\Omega}$ being injective implies $$ F^*(\phi_1)(\Omega) \not\subset \Omega \otimes \Omega \subset (O_X \oplus \Omega) \otimes \Omega. $$ In other words, $\Omega \subset O_X \oplus \Omega$ is not $F^*(\phi_1)$-invariant. Hence $(F^*(V), F^*(\phi_1))$ is stable. This proves Theorem~\ref{thm:3}.
\section{Introduction}\label{intro} Evidence for the acceleration of cosmic expansion now prevails \cite{SNIA,newSN,data}; however, we do not yet possess a compelling explanation for what causes the acceleration. Two major approaches to the acceleration mechanism have been adding a new substance (termed dark energy) and reformulating gravity (modifying general relativity) \cite{Carroll:2004de,Caldwell:2009ix,Nojiri:2010wj,Trodden:2011xa}. By now, a well-established procedure to build and test models in both approaches is first to reproduce the observed redshift-distance relation and second to examine the evolution of perturbations for a fixed background expansion. The first step should be a common goal for any dark energy or modified gravity model, and the second can distinguish them from each other \cite{Baker:2012zs,Huterer:2013xky}. Previously we performed the second step \cite{us-2012, us-2013} on a modified gravity model proposed by Deser and Woodard \cite{DW-2007, DW-2013} once its background was fitted to the expansion history of $\Lambda$CDM \cite{NO-2007, NO-2008, Koivisto, DW-2009}. The conclusion was that the growth of perturbations predicted by the model is enhanced compared to that of the $\Lambda$CDM model, which is statistically disfavored by observations \cite{us-2013}. In fact, this problem of enhanced growth is shared by many modified gravity models, including the nonlocal model we considered \cite{reference-for-this}. In the current work we address this problem by re-adjusting the model to a background expansion other than that of $\Lambda$CDM. For example, one can find a set of parameters, $\Omega_m$ and a nontrivial equation of state $w$ for dark energy, which leads to the same luminosity distance, as pointed out in \cite{Shafieloo-Linder2011}. Also, it seems more natural not to fix the background expansion to $\Lambda$CDM if we want to go beyond $\Lambda$CDM.
(We have already checked that this nonlocal model cannot do better than $\Lambda$CDM in suppressing growth when its background expansion is fixed by $\Lambda$CDM.) The upshot is that the growth rate for this nonlocal model changes dramatically with the choice of background expansion: we can find a set of $\{\Omega_m, w \}$ which significantly lowers its growth rate. In contrast, for GR the growth rate is not significantly affected by changes of the background. \section{Reconstruction of the Expansion} The model adds to the Einstein-Hilbert Lagrangian the {\it nonlocal distortion function} $f$, which multiplies the Ricci scalar \cite{DW-2007}, \begin{equation} \mathcal{L} = \mathcal{L}_{\rm EH} + \Delta\mathcal{L} = \frac{1}{16\pi G}\sqrt{-g}\bigg[R + f(X) R \bigg] \;, \eql{action} \end{equation} where the argument $X$ of the function $f$ is the inverse scalar d'Alembertian acting on the Ricci scalar, i.e., $X = \square^{-1}R$. One may interpret the function $f$ as a coefficient in front of $R$ which nontrivially modulates the curvature and thereby changes the geometry. The causal and conserved field equations are derived by varying the action and imposing the retarded boundary conditions on the propagator $\square^{-1}$, \begin{equation} G_{\mu\nu} + \Delta G_{\mu\nu} = 8\pi G T_{\mu\nu} \;, \eql{nonlocal_field_eq} \end{equation} where the nonlocal correction to the Einstein tensor takes the form \cite{DW-2007}, \begin{eqnarray} \lefteqn{ \Delta G_{\mu\nu} = \Bigl[ G_{\mu\nu} \!+\! g_{\mu\nu}\square \!-\! D_{\mu}D_{\nu} \Bigr] \biggl\{\! f(X) \!+\! \frac{1}{\square}\Bigl[R f'(X)\Bigr] \!\biggr\} } \nonumber \\ && \hspace{1cm} + \Bigl[ \delta_{\mu}^{(\rho}\delta_{\nu}^{\sigma)} \!-\! \frac{1}{2}g_{\mu\nu}g^{\rho\sigma}\Bigr] \partial_{\rho} X \partial_{\sigma} \frac{1}{\square}\Bigl[R f'(X)\Bigr] \;.
\quad \quad \eql{DeltaGmn} \end{eqnarray} The functional form of $f$ can be determined so as to fit a given background geometry. The problem of adjusting $f$, or any parameters of a model in general, to a given geometry is termed the \textit{reconstruction problem}, and the generic procedure for the reconstruction was given in \cite{DW-2009}. In summary, $f$ can be reconstructed by applying the field equations \Ec{nonlocal_field_eq} to the FLRW geometry, \begin{equation} ds^2 = -dt^2 +a^2(t) d\vec{x}\cdot d\vec{x} \end{equation} and supposing the scale factor $a(t)$ is known as a function of time. In Ref.\ \cite{DW-2009}, $f$ was solved for the case of $\Lambda$CDM expansion. In the present work, we reconstruct $f$ for various non-$\Lambda$CDM cases. Following the notation of \cite{DW-2009}, we use the dimensionless Hubble parameter $h(\zeta)$ to represent the expansion history and express $f$ in terms of it, \begin{eqnarray} \lefteqn{f \Bigl(X(\zeta)\Bigr) = -2\int_{\zeta}^{\infty} \!\! d\zeta_{1} \,\zeta_{1} \varPhi(\zeta_{1}) } \nonumber \\ && \hspace{1cm} - 6 \Omega_{\Lambda} \int_{\zeta}^{\infty} \!\! d\zeta_{1} \, \frac{\zeta_{1}^{ 2}}{h(\zeta_{1}) I(\zeta_{1})} \int_{\zeta_{1}}^{\infty} \!\! d\zeta_{2} \, \frac{I(\zeta_{2})}{\zeta_{2}^{ 4} h(\zeta_{2})} \nonumber\\ & & \hspace{0.8cm} + 2 \int_{\zeta}^{\infty} \!\! d\zeta_{1} \, \frac{\zeta_{1}^{ 2}}{h(\zeta_{1}) I(\zeta_{1})} \int_{\zeta_{1}}^{\infty} \!\! d\zeta_{2} \, \frac{r(\zeta_{2})\varPhi(\zeta_{2})}{\zeta_{2}^{5}} \;.
\quad \eql{ffin} \end{eqnarray} Here, the time variable $\zeta$ is defined in terms of the redshift $z$ as \begin{equation} \nonumber \zeta \equiv 1 + z = \frac1{a(t)}, \end{equation} and the dimensionless Hubble parameter $h$ and the dimensionless Ricci scalar $r$ are \begin{equation} h \equiv \frac{H}{H_0}, \quad H \equiv \frac{\dot{a}}{a} \quad \mbox{and} \quad r \equiv \frac{R}{H_0^2} = 6(\dot{h} + 2h^2) \end{equation} where an overdot denotes a derivative with respect to the cosmic time $t$ and $H_0$ is the current value of the Hubble parameter. The functions $\varPhi(\zeta)$ and $I(\zeta)$ are given by \cite{DW-2009}, \begin{eqnarray} \varPhi(\zeta) &=& -6\Omega_{\Lambda} \int_{\zeta}^{\infty} \!\! d\zeta_{1} \, \frac1{h(\zeta_{1})} \int_{\zeta_{1}}^{\infty} \!\! d\zeta_{2} \, \frac1{\zeta_{2}^{4} h(\zeta_{2})} \;, \\ I(\zeta) &=& \int^\infty_\zeta d\zeta_{1} \frac{r(\zeta_{1})}{\zeta_{1}^{4}h(\zeta_{1})}\; . \eql{Phi_Idef} \end{eqnarray} We take the expression of $h(z)$ employed in \cite{Shafieloo-Linder2011} as a non-$\Lambda$CDM expansion\footnote{We ignore the spatial curvature included in $h(z)$, thus in our expression $\Omega_{\Lambda} \approx \Omega_{de} \approx 1-\Omega_m$.}, \begin{equation} h^2(\zeta) = \Omega_{m}\zeta^3 + \Omega_{de}\exp\biggl[3 \! \int_{1}^{\zeta} \!d \zeta' \frac{1+w(\zeta')}{\zeta'} \biggr] \;, \eql{h_non-lcdm} \end{equation} and numerically integrate \Ec{ffin} to get the distortion function $f$. Note that for the case of $\Lambda$CDM expansion, $h^2(\zeta) = \Omega_{\Lambda} + \Omega_m \zeta^3 + \Omega_r \zeta^4$, all the expressions in \Ec{ffin} and \Ec{Phi_Idef} recover their forms in \cite{DW-2009}. The point is that once the reconstruction of $f$ is done, the model automatically fulfills the first goal of reproducing a given expansion history.
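As a quick numerical sanity check (an illustrative sketch of ours, not part of the original analysis; the function names are our own), the dark-energy exponent in \Ec{h_non-lcdm} can be evaluated by quadrature and compared against the closed form it reduces to for constant $w$:

```python
import math

def h2(zeta, Om, w_of_zeta, n=2000):
    """h^2(zeta) = Om*zeta^3 + (1-Om)*exp[3 * int_1^zeta (1+w(z'))/z' dz'],
    cf. Eq. (h_non-lcdm); the integral is done with the trapezoid rule."""
    zs = [1.0 + (zeta - 1.0) * i / n for i in range(n + 1)]
    integral = sum(
        0.5 * ((1 + w_of_zeta(a)) / a + (1 + w_of_zeta(b)) / b) * (b - a)
        for a, b in zip(zs, zs[1:])
    )
    return Om * zeta**3 + (1.0 - Om) * math.exp(3.0 * integral)

def h2_const_w(zeta, Om, w):
    """Closed form for constant w: h^2 = Om*zeta^3 + (1-Om)*zeta^(3(1+w))."""
    return Om * zeta**3 + (1.0 - Om) * zeta**(3.0 * (1.0 + w))
```

For $w = -1$ the exponent integral vanishes identically and $h^2$ reduces to the $\Lambda$CDM form $\Omega_m \zeta^3 + \Omega_\Lambda$ (radiation neglected).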
The next step is to examine the growth of perturbations with different $f$'s corresponding to the various (non-$\Lambda$CDM) expansions determined by the free parameters $\Omega_m$ and $w$. The good news is that we can find parameter sets $\{\Omega_m, w\}$ with relatively suppressed growth rates. The result is presented in the next section. \section{Growth of Perturbations} We perturb the metric around the FLRW background as \begin{equation} ds^2 = -\left(1\!+\!2\Psi(t,\vec x)\right) dt^2 + a^2(t) dx^2 \left(1\!+\!2\Phi(t,\vec x)\right) \;. \eql{metric} \end{equation} By substituting the perturbed metric back in the nonlocal field equation \Ec{nonlocal_field_eq} and expanding it to first order, we obtain the evolution equations for the perturbations \cite{us-2012}, \begin{eqnarray} \lefteqn{ (\Phi + \Psi) = -(\Phi + \Psi)\biggl\{f(\overline{X}) + \frac{1}{\overline{\square}}\bigg[\overline{R}f'\Bigl(\overline{X}\Bigr)\bigg]\biggl\} } \nonumber \\ && \hspace{1.3cm} -\biggl\{ f'(\overline{X})\frac{1}{\overline{\square}}\delta R + \frac{1}{\overline{\square}} \bigg[f'\Bigl(\overline{X}\Bigr)\delta R \bigg]\biggr\}\;, \quad \eql{aniso} \\ \lefteqn{ \frac{k^2}{a^2}\Phi +\frac{k^2}{a^2}\Biggl[ \Phi\biggl\{f(\overline{X}) + \frac{1}{\overline{\square}}\bigg[\overline{R}f'\Bigl(\overline{X}\Bigr)\bigg] \biggr\} } \nonumber \\ && \hspace{-0.2cm} + \frac{1}{2} \Biggl\{ f'(\overline{X}) \frac{1}{\overline{\square}}\delta R + \frac{1}{\overline{\square}} \left[ f'\Bigl(\overline{X}\Bigr)\delta R \right] \biggr\} \Biggr] = 4\pi G\bar{\rho} \delta \;. \quad \eql{modpos} \end{eqnarray} Here $\bar{\rho}$ is the mean matter density and $\delta$ is the fractional over-density in matter. These are to be compared with the corresponding perturbation equations in general relativity (GR), \begin{eqnarray} (\Phi + \Psi)&=& 0\;, \eql{anisoGR} \\ \frac{k^2}{a^2}\Phi &=& 4\pi G\bar{\rho} \delta \;.
\eql{posGR} \end{eqnarray} Since the modified field equations \Ec{nonlocal_field_eq} are conserved, i.e., $\nabla^\mu \Delta G_{\mu\nu} = 0$, the two conservation equations in GR still hold in the nonlocal model, \begin{eqnarray} \dot{\delta}+H\theta &=& 0 \;, \eql{delta} \\ H\dot{\theta} + \Bigl(\dot{H} + 2 H^2 \Bigr)\theta -\frac{k^2}{a^2}\Psi &=& 0 \;, \eql{theta} \end{eqnarray} and complete the system of evolution equations for the four perturbation variables, $\Phi, \Psi, \delta$ and $\theta \equiv \nabla \cdot \vec{v}/H$. Here $\vec{v}$ is the comoving peculiar velocity. Combining the four equations leads to the equation governing the growth of perturbations, \begin{equation} \frac{d^2\delta}{d\zeta^2} + \biggl[ \frac{1}{h(\zeta)}\frac{d h(\zeta)}{d\zeta} - \frac{1}{\zeta} \biggr]\frac{d\delta}{d\zeta} - \frac{3}{2}(1+\mu)\Omega_m \frac{\zeta}{h^2(\zeta)}\delta = 0 \eql{delta_eqn_zeta} \end{equation} Note that the deviations from GR in the nonlocal model are encoded in the parameter $\mu$ devised in \cite{us-2013:9and19}. Hence, when the background is fixed to that of GR, i.e., the $\Lambda$CDM expansion, the only factor that differs from GR in equation \Ec{delta_eqn_zeta} is $\mu$. The growth of $\delta$ is then simply determined by the sign of $\mu$: positive (negative) $\mu$ gives enhanced (suppressed) growth. In our previous analysis of this model, $\mu$ turned out to be positive and hence we concluded that growth is enhanced in the nonlocal model \cite{us-2013}. Now we consider the effect of non-$\Lambda$CDM backgrounds, for which the Hubble expansion rate $h$ differs from that of $\Lambda$CDM, which we denote $h_{\Lambda}$. The non-$\Lambda$CDM expansion rate changes both the source term (not only $\mu$ but also $h^2$ in the denominator) and the friction term in the growth equation \Ec{delta_eqn_zeta}, and hence leads to more complicated dynamics.
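As a cross-check (our sketch; the intermediate steps are standard and not displayed in the original), combining \Ec{delta} and \Ec{theta} with \Ec{anisoGR} and \Ec{posGR} in GR reproduces \Ec{delta_eqn_zeta} with $\mu = 0$:

```latex
% GR limit: eliminate theta = -\dot{\delta}/H from Eq. (theta), then use
% \Psi = -\Phi from Eq. (anisoGR) and the Poisson equation (posGR):
\begin{equation*}
\ddot{\delta} + 2H\dot{\delta}
  = -\frac{k^2}{a^2}\,\Psi
  = \frac{k^2}{a^2}\,\Phi
  = 4\pi G \bar{\rho}\,\delta \;.
\end{equation*}
% Changing variable to zeta = 1/a, with d/dt = -H(\zeta)\,\zeta\,d/d\zeta
% and 4\pi G\bar{\rho} = (3/2)\,H_0^2\,\Omega_m\,\zeta^3, then gives
\begin{equation*}
\frac{d^2\delta}{d\zeta^2}
 + \biggl[\frac{1}{h}\frac{dh}{d\zeta} - \frac{1}{\zeta}\biggr]
 \frac{d\delta}{d\zeta}
 - \frac{3}{2}\,\Omega_m\,\frac{\zeta}{h^2}\,\delta = 0 \;,
\end{equation*}
% i.e. Eq. (delta_eqn_zeta) with mu = 0.
```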
Before analyzing it, let us emphasize that the conclusion we made in \cite{us-2013} was based on fixing the background to $\Lambda$CDM, and the growth equation we used was actually \begin{equation} \frac{d^2\delta}{d\zeta^2} + \biggl[ \frac{1}{h_{\Lambda}(\zeta)}\frac{d h_{\Lambda}(\zeta)}{d\zeta} - \frac{1}{\zeta} \biggr]\frac{d\delta}{d\zeta} - \frac{3}{2}\Bigl[1+\mu(h_\Lambda)\Bigr]\Omega_m \frac{\zeta}{h_{\Lambda}^2(\zeta)}\delta = 0 \eql{delta_eqn_zeta_hLambda} \end{equation} Here $\mu$ is a function of $f$, which is determined by $h$; hence it is ultimately a function of $h$. \subsection{$\Lambda$CDM vs. Non-$\Lambda$CDM Backgrounds} First, we investigate the features of non-$\Lambda$CDM backgrounds. For simplicity we focus on cases where the equation of state $w$ is constant, so that the dimensionless Hubble expansion rate \Ec{h_non-lcdm} becomes \begin{equation} h^2(\zeta) = \Omega_{m}\zeta^3 + (1-\Omega_m)\zeta^{3(1+w)} \;, \eql{h_non-lcdm_const-w} \end{equation} where we set $\Omega_{de} = 1- \Omega_m$. We survey three quantities, the dimensionless Hubble parameter, the deceleration parameter and the \textit{Om} diagnostic, for different values of the parameters $\Omega_m$ and $w$. For example, Fig. \ref{fig:h-q-Om} depicts these three for the fixed value of $\Omega_m = 0.255$ and different $w$'s.
\begin{figure*}[!t] \begin{center} \begin{tabular}{ccc} \includegraphics[width=0.31\textwidth]{1h-Om0255-wolegend.pdf} & \includegraphics[width=0.31\textwidth]{1q-Om0255-wolegend.pdf} & \includegraphics[width=0.31\textwidth]{1Om-Om0255-wolegend.pdf} \end{tabular} \end{center} \caption{The dimensionless Hubble parameter, $h(z)$, the deceleration parameter, $q(z)$, and the \textit{Om} diagnostic, $Om(z)$, as functions of redshift for $\Omega_m = 0.255$ and $w=-0.8$ (green), $-0.9$ (blue), $-1$ (red) and $-1.1$ (brown), respectively.} \label{fig:h-q-Om} \end{figure*} The dimensionless Hubble parameter increases as $w$ becomes less negative, as easily expected from \Ec{h_non-lcdm_const-w}; this makes the source term in \Ec{delta_eqn_zeta} smaller ($h^2$ being in the denominator) and leads to suppressed growth. The deceleration parameter $q(\zeta) = -1 +\frac{\zeta h'(\zeta)}{h(\zeta)}$, where a prime denotes a derivative with respect to $\zeta$, affects the friction term in \Ec{delta_eqn_zeta}. It changes more dramatically for more negative $w$, so that the absolute value of $q$ is larger at high redshift (the deceleration phase, $q > 0$) and at low redshift (the acceleration phase, $q<0$), which would lead to more friction and suppressed growth. But we will see later that the reduction in the source term is more important, so a less negative $w$ actually suppresses the growth better. The \textit{Om} diagnostic, defined by~\cite{om,om_chris} \begin{equation} Om(z) \equiv \frac{h^2(z) - 1}{(1+z)^3 -1}, \end{equation} measures how much $h$ deviates from its $\Lambda$CDM value. The farther it lies from the horizontal $\Lambda$CDM line, the larger the deviation from $\Lambda$CDM. Of course these background effects enter both GR and modified gravity models, including this model. However, the changes in the background have more influence on this nonlocal model than on GR, as we will see shortly.
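To make the two diagnostics concrete (an illustrative sketch of ours, assuming the constant-$w$ expansion \Ec{h_non-lcdm_const-w}; the function names are our own):

```python
import math

def h(zeta, Om, w):
    """Dimensionless Hubble rate for constant w, Eq. (h_non-lcdm_const-w)."""
    return math.sqrt(Om * zeta**3 + (1.0 - Om) * zeta**(3.0 * (1.0 + w)))

def q(zeta, Om, w, eps=1e-5):
    """Deceleration parameter q = -1 + zeta h'(zeta)/h(zeta),
    with h' evaluated by a central difference."""
    hp = (h(zeta + eps, Om, w) - h(zeta - eps, Om, w)) / (2.0 * eps)
    return -1.0 + zeta * hp / h(zeta, Om, w)

def om_diag(z, Om, w):
    """Om diagnostic: (h^2 - 1) / ((1+z)^3 - 1). For LambdaCDM (w = -1)
    it is constant and equal to Omega_m at every redshift."""
    zeta = 1.0 + z
    return (h(zeta, Om, w) ** 2 - 1.0) / (zeta**3 - 1.0)
```

For $\Omega_m = 0.255$ and $w=-1$, `om_diag` returns $0.255$ at every $z$, while `q` runs from $\approx 1/2$ deep in matter domination to negative values today.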
\subsection{The Growth Rate} The growth function, $D(\zeta)$, is the solution to \Ec{delta_eqn_zeta} with the initial condition $D(\zeta) = 1/\zeta$ at early times when matter still dominates ($z \simeq 10$). Fig. \ref{fig:D-Om0255} depicts the growth function in GR and in the nonlocal model with the different background expansions corresponding to Fig. \ref{fig:h-q-Om}. The initial condition at $z_{\rm init} = 9$ was set the same for each solution. As noted above, a less negative $w$ makes the source term in \Ec{delta_eqn_zeta} smaller and hence lowers the growth for both GR and the nonlocal model. The product of the growth rate $\beta\equiv d\ln D/d\ln a$ and the fluctuation amplitude $\sigma_8$ is a quantity directly measured in spectroscopic surveys. We examine two slightly different normalization conditions for the growth rate: One way is to set the initial amplitude $\sigma_8(z_{\rm init})$ the same for GR and the nonlocal model using the growth function of GR (with $\Lambda$) and $\sigma_8(z=0)$, which is the method also employed in \cite{us-2013}, \begin{equation} \sigma_8(z_{\rm init}) = \sigma_8(z=0) \,\frac{D^{\rm GR}(z_{\rm init})}{D^{\rm GR}(0)} \;. \eql{sigma8-set-at-9} \end{equation} In this case, the theoretically computed $\sigma_8(z)$ using each solution of the growth equation does not evolve to the measured $\sigma_8(z=0)$. The other way is to set the amplitude today, $\sigma_8(z=0)$, the same using their own growth functions $D(z)$ (see the 8 different growth functions depicted in Fig. \ref{fig:D-Om0255}), \begin{equation} \sigma_8(z) = \sigma_8(z=0) \,\frac{D(z)}{D(0)} \;. \eql{sigma8-set-at-0} \end{equation} In this case, $\sigma_8(z=0)$ is guaranteed to be the same but $\sigma_8(z_{\rm init})$ computed by \Ec{sigma8-set-at-0} is different for each solution. Fig. \ref{fig:fsigma8} shows both cases: in the left panel $\sigma_8(z_{\rm init})$ is set the same and in the right panel $\sigma_8(z = 0)$ the same for each solution.
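The integration just described is straightforward to reproduce (a minimal sketch with our own names; GR case $\mu = 0$, constant $w$, radiation neglected):

```python
import math

def growth(Om=0.255, w=-1.0, z_init=9.0, n=20000):
    """RK4-integrate the GR growth equation, Eq. (delta_eqn_zeta) with mu = 0,
    from zeta = 1 + z_init down to zeta = 1, starting on the matter-dominated
    solution D = 1/zeta. Returns (D(z=0), beta(z=0)), beta = -zeta D'/D."""
    h2 = lambda z: Om * z**3 + (1.0 - Om) * z**(3.0 * (1.0 + w))
    # h'/h via a central difference of (1/2) ln h^2
    dlnh = lambda z: (math.log(h2(z + 1e-6)) - math.log(h2(z - 1e-6))) / 4e-6

    def rhs(z, D, Dp):  # returns (D', D'') from the growth equation
        return Dp, -(dlnh(z) - 1.0 / z) * Dp + 1.5 * Om * z / h2(z) * D

    z, dz = 1.0 + z_init, -z_init / n   # negative step: integrate toward today
    D, Dp = 1.0 / z, -1.0 / z**2
    for _ in range(n):
        k1 = rhs(z, D, Dp)
        k2 = rhs(z + dz / 2, D + dz / 2 * k1[0], Dp + dz / 2 * k1[1])
        k3 = rhs(z + dz / 2, D + dz / 2 * k2[0], Dp + dz / 2 * k2[1])
        k4 = rhs(z + dz, D + dz * k3[0], Dp + dz * k3[1])
        D += dz / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        Dp += dz / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        z += dz
    return D, -z * Dp / D
```

In the Einstein-de Sitter limit ($\Omega_m = 1$) the growing mode is $D \propto a$, so the routine returns $D \approx 1$ and $\beta \approx 1$ today; for $\Omega_m = 0.255$, $w=-1$ it gives roughly the familiar GR suppression $\beta(0) \approx \Omega_m^{0.55}$.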
In the sense of fitting the growth data, the nonlocal model with the background of a slightly less negative equation of state does a remarkable job with this second normalization condition: $\chi^2 = 7.88$ for GR with $w=-1$ \textit{vs.} $8.44$ for the nonlocal model with $w=-0.8$. The $\chi^2$ values for these eight solutions with the two normalization conditions are summarized in Table \ref{tab:chi^2}. \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{2D-Om0255-wolegend.pdf} \caption{The growth function, $D(z)$, as a function of redshift in GR (solid lines) and the nonlocal model (dotted lines) for $\Omega_m = 0.255$ and $w=-0.8$ (green), $-0.9$ (blue), $-1$ (red) and $-1.1$ (brown), respectively.} \label{fig:D-Om0255} \end{figure} \begin{figure*}[!t] \begin{center} \begin{tabular}{ccc} \includegraphics[width=0.42\textwidth]{3fsigma8-Om0255-samezinit-wolegend.pdf} & \includegraphics[width=0.42\textwidth]{3fsigma8-Om0255-samez0-wolegend.pdf} & \includegraphics[width=0.1\textwidth, height=4.8cm]{3fsigma8-data-legend.pdf} \end{tabular} \end{center} \caption{The growth rate $\beta(z)\sigma_8(z)$ as a function of redshift in GR (solid lines) and the nonlocal model (dotted lines) for $\Omega_m = 0.255$ and $w=-0.8$ (green), $-0.9$ (blue), $-1$ (red) and $-1.1$ (brown), respectively. In the left panel $\sigma_8(z=9)$ is set the same and in the right panel $\sigma_8(z = 0)$ the same for each solution of the growth equation \Ec{delta_eqn_zeta}. The fluctuation amplitude today was chosen as $\sigma_8 = 0.8$ following \cite{BOSS-2016}. Data points come from 6dFGRS, 2dFGRS, SDSS main galaxies, SDSS LRG, BOSS LOWZ, WiggleZ, BOSS CMASS, VVDS and VIPERS \cite{Teppei}. (The numbers of the data points are taken from Figure 17 of \cite{Teppei} with the aid of Teppei Okumura.)
The most recent BOSS data \cite{BOSS-2016} are not used here; however, including them would not change the results much.} \label{fig:fsigma8} \end{figure*} \begin{table*}[!t] \begin{center} \caption{The $\chi^2$ values between the data points and the $\beta(z)\sigma_8(z)$ predicted by GR and the nonlocal model with four different background expansion histories. The same $\sigma_8(z=9)$ means it is normalized by \Ec{sigma8-set-at-9} and the same $\sigma_8(z=0)$ by \Ec{sigma8-set-at-0}. } \label{tab:chi^2} \begin{tabular}{c|cccc|cccc} \hline \hline & \multicolumn{4}{c|}{GR} & \multicolumn{4}{c}{Nonlocal} \\ Same & $~~w=-1.1~~$ & $~~w=-1~~$ & $~~w=-0.9~~$ &$~~w=-0.8~~$ & $~~w=-1.1~~$ & $~~w=-1~~$ & $~~w=-0.9~~$ &$~~w=-0.8~~$ \\ \cline{2-9} $\sigma_8(z=9)$ & $8.69$ & $7.88$ & $8.85$ & $11.91$ & $42.75$ &$28.46$ & $17.45$ & $10.34$\\ $\sigma_8(z=0)$ & $8.35$ & $7.88$ & $8.60$ & $10.75$ & $24.29$ &$17.13$ & $11.70$ & $8.44$\\ \hline \end{tabular} \end{center} \end{table*} \section{Discussion}\label{discuss} We have analyzed the growth of perturbations predicted by a nonlocal gravity model of type \Ec{action}. The nonlocal distortion function $f$ can be constructed to reproduce any desired expansion history. (As remarked in \cite{DW-2009}, ``absent a derivation from fundamental theory, $f$ has the same status as the potential $V(\varphi)$ in a scalar quintessence model and the function $F(R)$ in $F(R)$ gravity''.) Once the function $f$ is fixed, no free parameter remains, and hence the evolution of perturbations is fully governed by the model's rule for gravity and the expansion history it chose to mimic. That is, the growth of perturbations depends on the background expansion on which they ride. Previously we found that when the background is chosen to be exactly $\Lambda$CDM, the model enhances growth compared to that predicted in general relativity with $\Lambda$, which is disfavored by measurements \cite{us-2013}.
In the present paper, we have examined the background effects and found that the model can suppress growth when its background is chosen to be a particular non-$\Lambda$CDM expansion. A non-$\Lambda$CDM background with an equation of state for dark energy $w$ less negative than $-1$ tends to lower the growth. Notably, this tendency is more dramatic in the nonlocal model than in GR. That is, the statistical significance substantially improves for this nonlocal model with a slight change of $w$, whereas it does not vary much for GR for the same change of $w$ (see Table \ref{tab:chi^2}). While the growth rate data still appear to be best fit by GR with $w=-1$ ($\chi^2 = 7.88$), the statistical difference from the nonlocal model with $w=-0.8$ ($\chi^2 = 8.44$ or $10.34$ depending on the normalization condition) is insignificant. In summary, relaxing the condition of the background being exactly $\Lambda$CDM and setting $w$ less negative than $-1$ tends to lower the growth of perturbations; however, further examination of observationally allowed forms of $w(z)$ is needed. The interpretation of the tight parametric constraints on the equation of state of dark energy from CMB data should be performed carefully. In general, CMB data alone are not very suitable for directly studying dark energy; hence they are usually used in combination with other data assuming a parametric form for the expansion history. Constraints obtained from such model-fitting analyses can sometimes conflict (although they may hint towards some new physics in the data). Therefore, conclusions cannot be easily drawn without extensive analysis and support from different observations. For instance, Planck CMB data have already shown some conflicts with other cosmological surveys in the estimation of the value of the Hubble constant $H_0$ (assuming the concordance $\Lambda$CDM model) \cite{Planck2015-13}.
Another recent major survey analysis pointed out some discrepancy between $H_0$ and Lyman-$\alpha$ forest BAO data when assuming the $\Lambda$CDM model \cite{Zhao-2017}. Hence, we have considered the direct distance-scale data, such as standard candles and rulers, for the expansion history rather than using constraints obtained by model-fitting analyses of CMB data. This is particularly important in the present work, since we are discussing a very different cosmological model which in turn generates different perturbations. It should also be noted that the most recent compilation of supernovae data, known as the JLA compilation \cite{SDSS-2014}, does not rule out $w=[-1.1, -0.8]$ for the constant equation of state of dark energy. Our next step is to search extensively for a choice of the background expansion history (not just constant $w$ but more nontrivial evolutions of it), within the flexibility the data allow, that leads to a reasonable fit to the growth data with this nonlocal model. Another purpose of this work is to recall the usage of nonlocal modifications of gravity and to aid model builders in extending them. The nonlocal invariant $X=\square^{-1}R$ is the simplest and easiest to handle, which is why we chose to analyze it first. However, as pointed out by Woodard (one of the inventors of this model) \cite{W-review-nonlocal-2014}, it achieves acceleration by strengthening gravity, which leads to enhanced growth. (That is why a less negative $w$, meaning less acceleration, works better for it.) A better way would be to make a nonlocal model emulate a time-varying cosmological constant \cite{W-review-nonlocal-2014}. There have been projects building nonlocal gravity models in this direction, but they have not yet reached the level of fully describing the phenomenology of the late-time cosmic acceleration.
Those extended models inherit the main virtues of this simplest, $f(\square^{-1}R)$ model: \begin{itemize} \item{$\square^{-1}R$ is a dimensionless quantity, so no new mass parameter is introduced.} \item{$\square^{-1}R$ grows slowly, so it does not require huge fine-tuning.} \item{It evades any deviation from general relativity in the solar system by exploiting the sign of $\square^{-1}R$.} \item{It does not require an elaborate screening mechanism to avoid kinetic instability (or ghosts).} \end{itemize} It is also worth noting the distinction between this class of models and the nonlocal gravity models proposed and developed by Maggiore and his collaborators \cite{Maggiore}. In those models, the nonlocal invariants are not dimensionless and are multiplied by a free parameter of mass-squared dimension. The mass, whose origin does not yet seem to be known, plays the role of a cosmological constant, and no arbitrary function (like the nonlocal distortion function $f$) exists in those models. The phenomenology of those nonlocal models has been reported to be very successful, and investigations into deriving those nonlocalities from a fundamental theory through quantum processes are in progress \cite{Maggiore, Akrami}. It would be interesting to compare the approaches to nonlocal gravity by these two groups and to relate their origins from fundamental theories. \textit{Additional note:} A very recent paper by Nersisyan, Cid and Amendola \cite{Amendola2017} claims that a localized version of this nonlocal model, with its background expansion fixed to $\Lambda$CDM, leads to suppressed growth of the perturbations, which is opposite to our previous result \cite{us-2013}. If their analysis turns out to be correct, localizing could be a better way to obtain lower growth than changing the background (as studied in the present paper). However, we find that their implementation of the sub-horizon limit differs from ours \cite{us-2012,us-2013}, and this is the main source of the discrepancy. 
Currently we are collaborating with them to further investigate this issue, and we will jointly report a detailed explanation. \vskip 1cm \centerline{\bf Acknowledgements} We thank Scott Dodelson, Emre Kahya, Seokcheon Lee, Eric Linder, Chris Sabiu, Richard Woodard and Yi Zheng for useful suggestions and conversations. We acknowledge and thank Teppei Okumura for providing us with the growth rate data points and Scott Dodelson and Richard Woodard for reading our manuscript.
\section{Introduction} Observer design for linear systems is generally acknowledged to be well understood. For the discrete-time linear system $x^{+}=Ax$ with output $y=Cx$, the Luenberger observer \cite{luenberger64} dynamics read \begin{eqnarray}\label{eqn:luenberger} \xhat^{+}=A\xhat+L(y-C\xhat) \end{eqnarray} and designing the observer is nothing but choosing an observer gain $L$ that places the eigenvalues of the matrix $A-LC$ within the unit circle. The construction is so simple and elegant that one would hardly doubt that it is {\em the} construction for linear systems. However, perhaps owing to the very elegance of the notation, it is nontrivial to unearth the true mechanism (if it exists) running behind the Luenberger observer in order to generalize it in some {\em natural} way to nonlinear systems. In this paper we aim to provide a geometric interpretation of the right-hand side of \eqref{eqn:luenberger} for the particular case where the matrix $A-LC$ is nilpotent, i.e., when the observer is deadbeat. Our interpretation allows one to construct deadbeat observers for nonlinear systems provided that certain conditions (Assumption~\ref{assume:singleton} and Assumption~\ref{assume:invariance}) hold. We now note and later demonstrate that when the system is linear those assumptions are minimal for a deadbeat observer to exist. The literature on observers accommodates significant results; see, for instance, \cite{karafyllis07,glad83,moraal95,valcher99,shamma99,fuhrmann06,besancon00,wong04}. The toy example that we keep in the back of our mind while we attempt to reach a generalization is the simple case where $A$ is a rotation matrix in $\Real^{2}$ \begin{eqnarray*} A =\left[\!\!\begin{array}{rr} \cos\theta &-\sin\theta\\ \sin\theta &\cos\theta \end{array}\!\!\right] \end{eqnarray*} with angle of rotation $\theta$ different from $0$ and $\pi$. 
Letting $y=x_{2}$, i.e., $C=[0\ \ 1]$, the deadbeat observer turns out to be \begin{eqnarray*} \xhat^{+}=A\xhat+\left[\!\!\begin{array}{c} \cos2\theta/\sin\theta\\ \sin2\theta/\sin\theta \end{array}\!\!\right](y-C\xhat) \end{eqnarray*} which can be rewritten as \begin{eqnarray*} \xhat^{+}=A\left(\xhat+\left[\!\!\begin{array}{c} \cot\theta\\ 1 \end{array}\!\!\right](y-C\xhat)\right) \end{eqnarray*} Now we state the key observation in this paper: the term in brackets is the intersection of two equivalence classes (sometimes called congruence classes \cite{lax96}). Namely, \begin{eqnarray*} \xhat+\left[\!\!\begin{array}{c} \cot\theta\\ 1 \end{array}\!\!\right](y-C\xhat)=(\xhat+A\, \nal(C))\cap(x+\nal(C)) \end{eqnarray*} as shown in Fig.~\ref{fig:intersect}. \begin{figure}[h] \begin{center} \includegraphics[scale=0.6]{intro.eps} \caption{Intersection of two equivalence classes.}\label{fig:intersect} \end{center} \end{figure} Based on this observation, one intended contribution of this paper is to show that such equivalence classes can be defined even for nonlinear systems of arbitrary order, which in turn allows one to construct deadbeat observers. There is another possible contribution that is of a more practical nature: we present a simple algorithm that computes, for linear systems with scalar output, the deadbeat gain $L$ by iteratively intersecting linear subspaces. (Devising reliable numerical techniques to compute the deadbeat gain for discrete-time linear systems had once been an active field of research; see, for instance, \cite{franklin82,lewis82,sugimoto93}.) The remainder of the paper is organized as follows. The next section contains some preliminary material. In Section~\ref{sec:def} we give the formal problem definition. Section~\ref{sec:sets} is where we describe the sets that we use in the construction of the deadbeat observer. We state and prove the main result in Section~\ref{sec:main}. 
An extension of the main result, where we consider the case with input ($x^{+}=f(x,\,u)$), is in Section~\ref{sec:input}. We provide examples in Section~\ref{sec:ex}, where we construct deadbeat observers for two different third order systems. In Section~\ref{sec:alg} we present an algorithm to compute the deadbeat observer gain for a linear system with scalar output. \section{Preliminaries} The identity matrix is denoted by $I$. The null space and range space of a matrix $M\in\Real^{m\times n}$ are denoted by $\N(M)$ and $\R(M)$, respectively. Given a map $\mu:\X\to\Y$, $\mu^{-1}(\cdot)$ denotes the {\em inverse} map in the general sense that for $y\in\Y$, $\mu^{-1}(y)$ is the set of all $x\in\X$ satisfying $\mu(x)=y$. That is, we will not require $\mu$ to be bijective when talking about its inverse. Note that $y\notin\mu(\X)$ will imply $\mu^{-1}(y)=\emptyset$. Linear maps $x\mapsto Mx$ will not be exempt from this notation. The reader should not think that $M$ is a nonsingular matrix when we write $M^{-1}y$. (In our case $M$ need not even be square.) For instance, for $M=[0\ \ 0]$ we have $M^{-1}y=\emptyset$ for $y\neq 0$ and $M^{-1}0=\Real^{2}$. The set of nonnegative integers is denoted by $\Natural$ and $\Real_{>0}$ denotes the set of strictly positive real numbers. \section{Problem definition}\label{sec:def} Consider the following discrete-time system \begin{subeqnarray}\label{eqn:system} x^{+}&=&f(x)\\ y&=&h(x) \end{subeqnarray} where $x\in\X\subset\Real^{n}$ is the {\em state}, $x^{+}$ is the state at the next time instant, and $y\in\Y\subset\Real^{m}$ is the {\em output} or the {\em measurement}. The {\em solution} of system~\eqref{eqn:system} at time $k\in\Natural$, starting at initial condition $x\in\X$, is denoted by $\phi(k,\,x)$. Note that $\phi(0,\,x)=x$ and $\phi(k+1,\,x)=f(\phi(k,\,x))$ for all $x$ and $k$. 
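As a concrete reading of this solution notation, the following minimal Python sketch iterates $f$ to build $\phi$ and checks the semigroup property $\phi(j+k,\,x)=\phi(j,\,\phi(k,\,x))$. The particular map $f$ below (the planar rotation from the Introduction) is our own stand-in choice; any map would serve equally well.

```python
import numpy as np

# Stand-in choice for f: the planar rotation from the Introduction.
theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def f(x):
    return A @ x

def phi(k, x):
    """Solution map: phi(0, x) = x and phi(k+1, x) = f(phi(k, x))."""
    for _ in range(k):
        x = f(x)
    return x

x0 = np.array([1.0, 2.0])
assert np.allclose(phi(0, x0), x0)                  # phi(0, x) = x
assert np.allclose(phi(5, x0), phi(2, phi(3, x0)))  # semigroup property
```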
Now consider the following cascade system \begin{subeqnarray} x^{+}&=&f(x)\\ \xhat^{+}&\in&g(\xhat,\,h(x))\label{eqn:cascade} \end{subeqnarray} We denote {\em a} solution of subsystem~(\ref{eqn:cascade}b) by $\psi(k,\,\xhat,\,x)$. We then have $\psi(0,\,\xhat,\,x)=\xhat$ and $\psi(k+1,\,\xhat,\,x)\in g(\psi(k,\,\xhat,\,x),\,h(\phi(k,\,x)))$ for all $x$, $\xhat$, and $k$. We now use \eqref{eqn:cascade} to define a deadbeat observer. \begin{definition} Given $g:\X\times\Y\rightrightarrows\X$, system \begin{eqnarray*} \xhat^{+}\in g(\xhat,\,y) \end{eqnarray*} is said to be a {\em deadbeat observer for system~\eqref{eqn:system}} if there exists $p\geq 1$ such that {\em all} solutions of system~\eqref{eqn:cascade} satisfy \begin{eqnarray*} \psi(k,\,\xhat,\,x)=\phi(k,\,x) \end{eqnarray*} for all $x,\,\xhat\in\X$ and $k\geq p$. \end{definition} \begin{definition} System~\eqref{eqn:system} is said to be {\em deadbeat observable} if there exists a deadbeat observer for it. \end{definition} In this paper we present a procedure to construct a deadbeat observer for system~\eqref{eqn:system} provided that certain conditions (Assumption~\ref{assume:singleton} and Assumption~\ref{assume:invariance}) hold. Our construction will make use of some sets, which we define in the next section. Before moving on to the next section, however, we choose to remind the reader of a standard fact regarding the observability of linear systems. Then we provide Lemma~\ref{lem:subspace} as a geometric equivalent of that well-known result. Lemma~\ref{lem:subspace} will find use later when we attempt to interpret and display the generality of the assumptions we will have made. The following criterion, known as the Popov-Belevitch-Hautus (PBH) test, is an elegant tool for checking (deadbeat) observability. 
\begin{proposition}[PBH test] The linear system \begin{subeqnarray}\label{eqn:linsystem} x^{+}&=&Ax\\ y&=&Cx \end{subeqnarray} with $A\in\Real^{n\times n}$ and $C\in\Real^{m\times n}$ is deadbeat observable if and only if \begin{eqnarray}\label{eqn:PBH} {\rm rank}\left[\!\!\begin{array}{c}A-\lambda I\\ C\end{array}\!\!\right]=n\quad \mbox{for all}\quad \lambda\neq 0 \end{eqnarray} where $\lambda$ is a complex scalar. \end{proposition} The result below is a geometric equivalent of the PBH test. \begin{lemma}\label{lem:subspace} Given $A\in\Real^{n\times n}$ and $C\in\Real^{m\times n}$, let subspace $\setS_{k}$ of $\Real^{n}$ be defined as $\setS_{k}:=A\setS_{k-1}\cap\setS_{0}$ for $k=1,\,2,\,\ldots$ with $\setS_{0}:=\N(C)$. Then system~\eqref{eqn:linsystem} is deadbeat observable if and only if \begin{eqnarray}\label{eqn:SET} \setS_{n}=\{0\}\,. \end{eqnarray} \end{lemma} \begin{proof} For simplicity we provide the demonstration for the case where each $\setS_{k}$ is a subspace of $\Complex^{n}$ (over the field $\Complex$). The case $\setS_{k}\subset\Real^{n}$ is a little longer to prove, yet it is true. We first show \eqref{eqn:SET}$\implies$\eqref{eqn:PBH}. Suppose \eqref{eqn:PBH} fails. That is, there exists an eigenvector $v\in\Complex^{n}$ and a nonzero eigenvalue $\lambda\in\Complex$ such that $Av=\lambda v$ and $Cv=0$. Now suppose for some $k$ we have $v\in\setS_{k}$. Then, since $v$ is an eigenvector with a nonzero eigenvalue, we can write $v\in A\setS_{k}$. Observe that $v\in\setS_{0}$ since $Cv=0$. As a result $v\in A\setS_{k}\cap\setS_{0}=\setS_{k+1}$. By induction therefore we have $v\in\setS_{k}$ for all $k$, which means that \eqref{eqn:SET} fails. Now we demonstrate the other direction \eqref{eqn:PBH}$\implies$\eqref{eqn:SET}. We first claim that $\setS_{k+1}\subset\setS_{k}$ for all $k$. We use induction to justify our claim. Suppose $\setS_{k+1}\subset\setS_{k}$ for some $k$. 
Then we can write \begin{eqnarray*} \setS_{k+2} &=&A\setS_{k+1}\cap\setS_{0}\\ &\subset&A\setS_{k}\cap\setS_{0}\\ &=&\setS_{k+1}\,. \end{eqnarray*} Since $\setS_{1}\subset\setS_{0}$ our claim is valid. A trivial implication of our claim then follows: $\dim\setS_{k+1}\leq\dim\setS_{k}$ for all $k$. Let us now suppose \eqref{eqn:SET} fails. That is, $\dim\setS_{n}\geq 1$. Note that $\dim\setS_{0}\leq n$. Therefore $\dim\setS_{n}\geq 1$ and $\dim\setS_{k+1}\leq\dim\setS_{k}$ imply the existence of some $\ell\in\{0,\,1,\,\ldots,\,n-1\}$ such that $\dim\setS_{\ell+1}=\dim\setS_{\ell}\geq 1$. Since $\setS_{\ell+1}\subset\setS_{\ell}$, both $\setS_{\ell+1}$ and $\setS_{\ell}$ having the same dimension implies $\setS_{\ell+1}=\setS_{\ell}$. Hence we obtained $\setS_{\ell}=A\setS_{\ell}\cap\setS_{0}$ which allows us to write $\setS_{\ell}\subset A\setS_{\ell}$. Since $\dim\setS_{\ell}\geq\dim A\setS_{\ell}$ we deduce that $\setS_{\ell}=A\setS_{\ell}$. Since $\dim\setS_{\ell}\geq 1$, equality $A\setS_{\ell}=\setS_{\ell}$ implies that there exists an eigenvector $v\in\setS_{\ell}$ and a nonzero eigenvalue $\lambda\in\Complex$ such that $Av=\lambda v$. Note also that $Cv=0$ because $\setS_{\ell}\subset\setS_{0}$. Hence \eqref{eqn:PBH} fails. \end{proof} \begin{remark}\label{rem:dimension} It is clear from the proof that if \eqref{eqn:SET} fails then $\dim\setS_{k}\geq 1$ for all $k$. \end{remark} \section{Sets}\label{sec:sets} In this section we define certain sets (more formally, {\em equivalence classes}) associated with system~\eqref{eqn:system}. For $x\in\X$ we define \begin{eqnarray*} [x]_{0}:=h^{-1}(h(x))\,. \end{eqnarray*} Note that when $h(x)=Cx$, where $C\in\Real^{m\times n}$, we have $[x]_{0}=x+\N(C)$. We then let for $k=0,\,1,\,\ldots$ \begin{eqnarray*} [x]_{k+1}:=[x]_{k}^{+}\cap[x]_{0} \end{eqnarray*} where \begin{eqnarray*} [x]_{k}^{+}:=f([f^{-1}(x)]_{k})\,. \end{eqnarray*} Note that $[x]_{k}^{+}=\emptyset$ when $x\notin f(\X)$ since then $f^{-1}(x)=\emptyset$. 
\begin{remark}\label{rem:subset} Note that $[x]_{k+1}\subset[x]_{k}$ and $[x]^{+}_{k+1}\subset[x]^{+}_{k}$ for all $x$ and $k$. \end{remark} The following two assumptions will be invoked in our main theorem. In the hope of making them appear somewhat meaningful and of revealing their generality, we provide the conditions that they boil down to for linear systems. \begin{assumption}\label{assume:singleton} There exists $p\geq 1$ such that, for each $x\in\X$, the set $[x]_{p-1}$ is either a singleton or the empty set. \end{assumption} Assumption~\ref{assume:singleton} is equivalent to deadbeat observability for linear systems. The result below formalizes this. \begin{theorem} Linear system~\eqref{eqn:linsystem} is deadbeat observable if and only if Assumption~\ref{assume:singleton} holds. \end{theorem} \begin{proof} Let $\setS_{k}$ for $k=0,\,1,\,\ldots$ be defined as in Lemma~\ref{lem:subspace}. Note then that $[x]_{0}=x+\setS_{0}$. We claim that the following holds \begin{eqnarray}\label{eqn:equivalence} [x]_{k}=\left\{\begin{array}{cl} x+\setS_{k}&\quad\mbox{for}\quad x\in\R(A^{k})\\ \emptyset&\quad\mbox{for}\quad x\notin\R(A^{k}) \end{array}\right. \end{eqnarray} for all $k$. We employ induction to establish our claim. Suppose \eqref{eqn:equivalence} holds for some $k$. Then we can write \begin{eqnarray*} [x]_{k}^{+} &=&A[A^{-1}x]_{k}\\ &=&A[A^{-1}x\cap\R(A^{k})]_{k}\,. \end{eqnarray*} Note that $A^{-1}x\cap\R(A^{k})\neq\emptyset$ if and only if $x\in\R(A^{k+1})$. Since $[x]_{k+1}=[x]_{k}^{+}\cap[x]_{0}$, we deduce that $[x]_{k+1}=\emptyset$ for $x\notin\R(A^{k+1})$. Otherwise if $x\in\R(A^{k+1})$ then there exists some $\eta\in\R(A^{k})$ such that $A\eta=x$. 
Using this $\eta$ we can construct the equality $A^{-1}x=\eta+\N(A)$ and we can write \begin{eqnarray*} [x]_{k+1} &=&[x]_{k}^{+}\cap[x]_{0}\\ &=&A[A^{-1}x]_{k}\cap[x]_{0}\\ &=&A[\eta+\N(A)]_{k}\cap[x]_{0}\\ &=&A(\eta+(\N(A)\cap\R(A^{k}))+\setS_{k})\cap[x]_{0}\\ &=&(A\eta+A\setS_{k})\cap(x+\setS_{0})\\ &=&(x+A\setS_{k})\cap(x+\setS_{0})\\ &=&x+(A\setS_{k}\cap\setS_{0})\\ &=&x+\setS_{k+1}\,. \end{eqnarray*} Since \eqref{eqn:equivalence} holds for $k=0$, our claim is valid. Now suppose that the system is deadbeat observable. Then by \eqref{eqn:equivalence} we see that Assumption~\ref{assume:singleton} holds with $p=n+1$ thanks to Lemma~\ref{lem:subspace}. If however the system is not deadbeat observable, then by Remark~\ref{rem:dimension} $\dim \setS_{k}\geq 1$ for all $k$. We therefore deduce by \eqref{eqn:equivalence} that $[0]_{k}$ can never be a singleton, nor is it empty. Hence Assumption~\ref{assume:singleton} must fail. \end{proof} \begin{assumption}\label{assume:invariance} Given $x,\,\xhat\in\X$ and $k$; $\xhat\in[x]_{k}^{+}$ implies $[\xhat]_{k}^{+}=[x]_{k}^{+}$. \end{assumption} \begin{theorem} Assumption~\ref{assume:invariance} comes for free for the linear system~\eqref{eqn:linsystem}. \end{theorem} \begin{proof} Evident. \end{proof} Lastly, we let $[x]_{-1}^{+}:=\X$ and define map $\pi:\X\times\Y\to\{-1,\,0,\,1,\,\ldots,\,p-2\}$ as \begin{eqnarray*} \pi(\xhat,\,y) := \max\,\{-1,\,0,\,1,\,\ldots,\,p-2\}\quad \mbox{subject to}\quad [\xhat]^{+}_{\pi(\xhat,\,y)}\cap h^{-1}(y) \neq\emptyset \end{eqnarray*} where $p$ is as in Assumption~\ref{assume:singleton}. \section{The result}\label{sec:main} Below is our main theorem. \begin{theorem}\label{thm:main} Suppose Assumptions~\ref{assume:singleton}-\ref{assume:invariance} hold. Then system \begin{eqnarray}\label{eqn:deadbeat} \hat{x}^{+}\in f([\xhat]^{+}_{\pi(\xhat,\,y)}\cap h^{-1}(y)) \end{eqnarray} is a deadbeat observer for system~\eqref{eqn:system}. 
\end{theorem} \begin{proof} We claim the following \begin{eqnarray}\label{eqn:claim} \xhat\in [x]_{\ell-1}^{+}\implies \xhat^{+}\in[f(x)]_{\ell}^{+} \end{eqnarray} for all $\ell\in\{0,\,1,\,\ldots,\,p-1\}$. Let us prove our claim. Note that $\xhat\in[x]_{\ell-1}^{+}$ yields $[\xhat]_{\ell-1}^{+}=[x]_{\ell-1}^{+}$ by Assumption~\ref{assume:invariance}. Since $[x]_{\ell-1}^{+}\neq\emptyset$ we have $[x]_{\ell-1}^{+}\cap[x]_{0}\neq\emptyset$ and, consequently, $[\xhat]_{\ell-1}^{+}\cap[x]_{0}\neq\emptyset$. Remark~\ref{rem:subset} then yields $[\xhat]_{\pi(\xhat,\,h(x))}^{+}\subset[\xhat]_{\ell-1}^{+}$. Starting from \eqref{eqn:deadbeat} we can proceed as \begin{eqnarray}\label{eqn:yesnumber} \hat{x}^{+} &\in& f([\xhat]^{+}_{\pi(\xhat,\,y)}\cap h^{-1}(y))\nonumber\\ &=& f([\xhat]^{+}_{\pi(\xhat,\,h(x))}\cap h^{-1}(h(x)))\nonumber\\ &\subset& f([\xhat]^{+}_{\ell-1}\cap [x]_{0})\nonumber\\ &=& f([x]^{+}_{\ell-1}\cap [x]_{0})\nonumber\\ &=& f([x]_{\ell})\\ &\subset& f([f^{-1}(f(x))]_{\ell})\nonumber\\ &=& [f(x)]^{+}_{\ell}\,.\nonumber \end{eqnarray} Hence \eqref{eqn:claim} holds. In particular, \eqref{eqn:yesnumber} gives us \begin{eqnarray}\label{eqn:genco} \xhat\in [x]_{\ell-1}^{+}\implies \xhat^{+}\in f([x]_{\ell}) \end{eqnarray} for all $\ell\in\{0,\,1,\,\ldots,\,p-1\}$. Note that $\xhat\in [x]^{+}_{-1}$ holds for all $x,\,\xhat$. Therefore \eqref{eqn:claim} and Remark~\ref{rem:subset} imply the existence of $\ell^{\ast}\in\{0,\,1,\,\ldots,\,p-1\}$ such that \begin{eqnarray}\label{eqn:dede} \psi(k,\,\xhat,\,x)\in[\phi(k,\,x)]^{+}_{p-2} \end{eqnarray} for all $k\geq\ell^{\ast}$. Also, Assumption~\ref{assume:singleton} yields us \begin{eqnarray}\label{eqn:ibo} [\phi(k,\,x)]_{p-1}=\phi(k,\,x) \end{eqnarray} for all $k\geq p-1$. Combining \eqref{eqn:genco}, \eqref{eqn:dede}, and \eqref{eqn:ibo} we can write \begin{eqnarray*} \psi(k,\,\xhat,\,x)=\phi(k,\,x) \end{eqnarray*} for all $k\geq p$. Hence the result. 
\end{proof} \begin{corollary}\label{cor:foralg} Consider linear system~\eqref{eqn:linsystem} with $A\in\Real^{n\times n}$ and $C\in\Real^{1\times n}$. Suppose the pair $(C,\,A)$ is observable\footnote{That is, $\mbox{rank}\ [C^{T}\ A^{T}C^{T}\ \ldots\ A^{(n-1)T}C^{T}]=n$.}. Let $\setS_{k}$ for $k=0,\,1,\,\ldots$ be defined as in Lemma~\ref{lem:subspace}. Then system \begin{eqnarray*} \xhat^{+}=A((\xhat+A\setS_{n-2})\cap(x+\setS_{0})) \end{eqnarray*} is a deadbeat observer for system~\eqref{eqn:linsystem}. \end{corollary} \section{System with input}\label{sec:input} In this section we look at the case where the evolution of the system to be observed depends not only on the initial condition but also on some exogenous signal, which we call the input. To construct a deadbeat observer for such a system we again make use of sets. Consider the system \begin{subeqnarray}\label{eqn:systemwu} x^{+}&=&f(x,\,u)\\ y&=&h(x) \end{subeqnarray} where $u\in\U\subset\Real^{q}$ is the {\em input} or some known {\em disturbance} (e.g. time). Let ${\bf u}=(u_{0},\,u_{1},\,\ldots)$, $u_{k}\in\U$, denote an input sequence. The {\em solution} of system~\eqref{eqn:systemwu} at time $k$, starting at initial condition $x\in\X$, and having evolved under the influence of input sequence $\yu$ is denoted by $\phi(k,\,x,\,\yu)$. Note that $\phi(0,\,x,\,\yu)=x$ and $\phi(k+1,\,x,\,\yu)=f(\phi(k,\,x,\,\yu),\,u_{k})$ for all $x$, $\yu$, and $k$. Now consider the following cascade system \begin{subeqnarray} x^{+}&=&f(x,\,u)\\ \xhat^{+}&\in&g(\xhat,\,h(x),\,u)\label{eqn:cascadewu} \end{subeqnarray} We denote a solution of subsystem~(\ref{eqn:cascadewu}b) by $\psi(k,\,\xhat,\,x,\,\yu)$. We then have $\psi(0,\,\xhat,\,x,\,\yu)=\xhat$ and $\psi(k+1,\,\xhat,\,x,\,\yu)\in g(\psi(k,\,\xhat,\,x,\,\yu),\,h(\phi(k,\,x,\,\yu)),\,u_{k})$ for all $x$, $\xhat$, $\yu$, and $k$. 
\begin{definition} Given $g:\X\times\Y\times\U\rightrightarrows\X$, system \begin{eqnarray*} \xhat^{+}\in g(\xhat,\,y,\,u) \end{eqnarray*} is said to be a {\em deadbeat observer for system~\eqref{eqn:systemwu}} if there exists $p\geq 1$ such that solutions of system~\eqref{eqn:cascadewu} satisfy \begin{eqnarray*} \psi(k,\,\xhat,\,x,\,\yu)=\phi(k,\,x,\,\yu) \end{eqnarray*} for all $x$, $\xhat$, $\yu$, and $k\geq p$. \end{definition} How to define sets $[x]_{k}$ and $[x]_{k}^{+}$ for system~\eqref{eqn:systemwu} is obvious. We again let \begin{eqnarray*} [x]_{0}:=h^{-1}(h(x)) \end{eqnarray*} and (for $k=0,\,1,\,\ldots$) \begin{eqnarray*} [x]_{k+1}:=[x]_{k}^{+}\cap[x]_{0} \end{eqnarray*} this time with \begin{eqnarray*} [x]_{k}^{+}:=\bigcup_{f(\eta,\,u)=x} f([\eta]_{k},\,u)\,. \end{eqnarray*} The following result is a generalization of Theorem~\ref{thm:main}. (The demonstration is parallel to that of Theorem~\ref{thm:main} and hence omitted.) \begin{theorem}\label{thm:mainu} Suppose Assumptions~\ref{assume:singleton}-\ref{assume:invariance} hold. Then system \begin{eqnarray*} \hat{x}^{+}\in f([\xhat]^{+}_{\pi(\xhat,\,y)}\cap h^{-1}(y),\,u) \end{eqnarray*} is a deadbeat observer for system~\eqref{eqn:systemwu}. \end{theorem} \section{Examples}\label{sec:ex} Here, for two third order nonlinear systems, we construct deadbeat observers. In the first example we study a simple autonomous homogeneous system and show that the construction yields a homogeneous observer. Hence our method may be thought to be somewhat {\em natural} in the vague sense that the observer it generates inherits certain intrinsic properties of the system. In the second example we aim to provide a demonstration of observer construction for a system with input. \subsection{Homogeneous system} Consider system~\eqref{eqn:system} with \begin{eqnarray*} f(x):=\left[\!\! 
\begin{array}{c} x_{2}\\ x_{3}^{1/3}\\ x_{1}^{3}+x_{2}^{3} \end{array}\!\!\right]\quad\mbox{and}\quad h(x):=x_{1} \end{eqnarray*} where $x=[x_{1}\ x_{2}\ x_{3}]^{T}$. Let $\X=\Real^{3}$ and $\Y=\Real$. If we let dilation $\Delta_{\lambda}$ be \begin{eqnarray*} \Delta_{\lambda}:=\left[\!\! \begin{array}{ccc} \lambda&0&0\\ 0&\lambda&0\\ 0&0&\lambda^{3} \end{array}\!\!\right] \end{eqnarray*} with $\lambda\in\Real$, then we realize that \begin{eqnarray*} f(\Delta_{\lambda}x)=\Delta_{\lambda}f(x)\quad\mbox{and}\quad h(\Delta_{\lambda}x)=\lambda h(x)\,. \end{eqnarray*} That is, the system is homogeneous \cite{rinehart09} with respect to dilation $\Delta$. Before describing the relevant sets $[x]_{k}$ and $[x]_{k}^{+}$ we want to mention that $f$ is bijective and its inverse is \begin{eqnarray}\label{eqn:finv} f^{-1}(x)=\left[\!\! \begin{array}{c} (x_{3}-x_{1}^{3})^{1/3}\\ x_{1}\\ x_{2}^{3} \end{array}\!\!\right] \end{eqnarray} Since $h(x)=x_{1}$ we can write \begin{eqnarray}\label{eqn:x0} [x]_{0}=\left\{ \left[\!\!\begin{array}{c} x_{1}\\ \alpha\\ \beta \end{array}\!\!\right]:\alpha,\,\beta\in\Real \right\} \end{eqnarray} By \eqref{eqn:finv} we can then proceed as \begin{eqnarray}\label{eqn:x0plus} [x]_{0}^{+}&=&f([f^{-1}(x)]_{0})\nonumber\\ &=&f\left(\left\{ \left[\!\!\begin{array}{c} (x_{3}-x_{1}^{3})^{1/3}\\ \gamma\\ \delta \end{array}\!\!\right]:\gamma,\,\delta\in\Real \right\}\right)\nonumber\\ &=&\left\{ f\left(\left[\!\!\begin{array}{c} (x_{3}-x_{1}^{3})^{1/3}\\ \gamma\\ \delta \end{array}\!\!\right]\right):\gamma,\,\delta\in\Real \right\}\nonumber\\ &=&\left\{ \left[\!\!\begin{array}{c} \gamma\\ \delta^{1/3}\\ x_{3}-x_{1}^{3}+\gamma^{3} \end{array}\!\!\right]:\gamma,\,\delta\in\Real \right\} \end{eqnarray} Recall that $[x]_{1}=[x]_{0}^{+}\cap[x]_{0}$. 
Therefore intersecting sets \eqref{eqn:x0} and \eqref{eqn:x0plus} we obtain \begin{eqnarray*} [x]_{1}=\left\{ \left[\!\!\begin{array}{c} x_{1}\\ \alpha\\ x_{3} \end{array}\!\!\right]:\alpha\in\Real \right\} \end{eqnarray*} We can now construct $[x]_{1}^{+}$ as \begin{eqnarray}\label{eqn:x1plus} [x]_{1}^{+}&=&f([f^{-1}(x)]_{1})\nonumber\\ &=&f\left(\left\{ \left[\!\!\begin{array}{c} (x_{3}-x_{1}^{3})^{1/3}\\ \gamma\\ x_{2}^{3} \end{array}\!\!\right]:\gamma\in\Real \right\}\right)\nonumber\\ &=&\left\{ f\left(\left[\!\!\begin{array}{c} (x_{3}-x_{1}^{3})^{1/3}\\ \gamma\\ x_{2}^{3} \end{array}\!\!\right]\right):\gamma\in\Real \right\}\nonumber\\ &=&\left\{ \left[\!\!\begin{array}{c} \gamma\\ x_{2}\\ x_{3}-x_{1}^{3}+\gamma^{3} \end{array}\!\!\right]:\gamma\in\Real \right\} \end{eqnarray} Now note that sets \eqref{eqn:x0} and \eqref{eqn:x1plus} intersect at a single point. In particular, $[x]_{2}=[x]_{1}^{+}\cap[x]_{0}=x$. Therefore Assumption~\ref{assume:singleton} is satisfied with $p=3$. Observe also that \begin{eqnarray*} [\xhat]_{1}^{+}\cap h^{-1}(y) &=& \left\{ \left[\!\!\begin{array}{c} \gamma\\ \xhat_{2}\\ \xhat_{3}-\xhat_{1}^{3}+\gamma^{3} \end{array}\!\!\right]:\gamma\in\Real \right\}\cap\left\{ \left[\!\!\begin{array}{c} y\\ \alpha\\ \beta \end{array}\!\!\right]:\alpha,\,\beta\in\Real \right\}\\ &=&\left[\!\!\begin{array}{c} y\\ \xhat_{2}\\ \xhat_{3}-\xhat_{1}^{3}+y^{3} \end{array}\!\!\right] \end{eqnarray*} which means that $\pi(\xhat,\,y)=p-2=1$ for all $\xhat$ and $y$. The dynamics of the deadbeat observer then read \begin{eqnarray*} \xhat^{+}&=&f([\xhat]_{1}^{+}\cap h^{-1}(y))\\ &=&\left[\!\!\begin{array}{c} \xhat_{2}\\ (\xhat_{3}-\xhat_{1}^{3}+y^{3})^{1/3}\\ \xhat_{2}^{3}+y^{3} \end{array}\!\!\right] \end{eqnarray*} We finally notice that \begin{eqnarray*} f([\Delta_{\lambda}\xhat]_{1}^{+}\cap h^{-1}(\lambda y))=\Delta_{\lambda}f([\xhat]_{1}^{+}\cap h^{-1}(y))\,. 
\end{eqnarray*} That is, the deadbeat observer also is homogeneous with respect to dilation $\Delta$. \subsection{System with input} Our second example is again a third order system, this time however with an input. Consider system~\eqref{eqn:systemwu} with \begin{eqnarray*} f(x,\,u):=\left[\!\! \begin{array}{c} x_{1}x_{2}x_{3}\\ x_{3}/x_{1}\\ \sqrt{x_{1}x_{2}u} \end{array}\!\!\right] \quad\mbox{and}\quad h(x):=x_{1}\,. \end{eqnarray*} Let $\X=\Real_{>0}^{3}$, $\Y=\Real_{>0}$, and $\U=\Real_{>0}$. Let us construct the relevant sets $[x]_{k}$ and $[x]_{k}^{+}$. We begin with $[x]_{0}$. \begin{eqnarray}\label{eqn:x0nd} [x]_{0}=\left\{ \left[\!\!\begin{array}{c} x_{1}\\ \alpha\\ \beta \end{array}\!\!\right]:\alpha,\,\beta>0 \right\} \end{eqnarray} Note that $f$ satisfies the following \begin{eqnarray*} f\left(\left[\!\! \begin{array}{c} x_{1}u/(x_{2}x_{3}^{2})\\ x_{2}x_{3}^{4}/(x_{1}u^{2})\\ x_{1}u/x_{3}^{2} \end{array}\!\!\right],\,u\right)=x \end{eqnarray*} for all $x$ and $u$. Hence we can write \begin{eqnarray}\label{eqn:x0plusnd} [x]_{0}^{+}&=&\bigcup_{u\in\U}f\left(\left[\!\! 
\begin{array}{c} x_{1}u/(x_{2}x_{3}^{2})\\ x_{2}x_{3}^{4}/(x_{1}u^{2})\\ x_{1}u/x_{3}^{2} \end{array}\!\!\right]_{0},\,u\right)\nonumber\\ &=&\bigcup_{u\in\U}f\left(\left\{ \left[\!\!\begin{array}{c} x_{1}u/(x_{2}x_{3}^{2})\\ \gamma\\ \delta \end{array}\!\!\right]:\gamma,\,\delta>0 \right\},\,u\right)\nonumber\\ &=&\bigcup_{u\in\U}f\left(\left\{ \left[\!\!\begin{array}{c} x_{1}u/(x_{2}x_{3}^{2})\\ x_{2}x_{3}^{4}\gamma/(x_{1}u^{2})\\ x_{1}u\delta/x_{3}^{2} \end{array}\!\!\right]:\gamma,\,\delta>0 \right\},\,u\right)\nonumber\\ &=&\bigcup_{u\in\U}\left\{ f\left(\left[\!\!\begin{array}{c} x_{1}u/(x_{2}x_{3}^{2})\\ x_{2}x_{3}^{4}\gamma/(x_{1}u^{2})\\ x_{1}u\delta/x_{3}^{2} \end{array}\!\!\right],\,u\right):\gamma,\,\delta>0 \right\}\nonumber\\ &=&\left\{\left[\!\!\begin{array}{c} x_{1}\gamma\delta\\ x_{2}\delta\\ x_{3}\sqrt{\gamma} \end{array}\!\!\right] :\gamma,\,\delta>0 \right\} \end{eqnarray} Since $[x]_{1}=[x]_{0}^{+}\cap[x]_{0}$, intersecting sets \eqref{eqn:x0nd} and \eqref{eqn:x0plusnd} we obtain \begin{eqnarray*} [x]_{1}=\left\{ \left[\!\!\begin{array}{c} x_{1}\\ x_{2}/\alpha^{2}\\ x_{3}\alpha \end{array}\!\!\right]:\alpha>0 \right\} \end{eqnarray*} We can now construct $[x]_{1}^{+}$ as \begin{eqnarray}\label{eqn:x1plusnd} [x]_{1}^{+}&=&\bigcup_{u\in\U}f\left(\left[\!\! \begin{array}{c} x_{1}u/(x_{2}x_{3}^{2})\\ x_{2}x_{3}^{4}/(x_{1}u^{2})\\ x_{1}u/x_{3}^{2} \end{array}\!\!\right]_{1},\,u\right)\nonumber\\ &=&\bigcup_{u\in\U}f\left(\left\{ \left[\!\!\begin{array}{c} x_{1}u/(x_{2}x_{3}^{2})\\ x_{2}x_{3}^{4}/(x_{1}u^{2}\gamma^{2})\\ x_{1}u\gamma/x_{3}^{2} \end{array}\!\!\right]:\gamma>0 \right\},\,u\right)\nonumber\\ &=&\left\{\left[\!\!\begin{array}{c} x_{1}/\gamma\\ x_{2}\gamma\\ x_{3}/\gamma \end{array}\!\!\right] :\gamma>0 \right\} \end{eqnarray} Now note that sets \eqref{eqn:x0nd} and \eqref{eqn:x1plusnd} intersect at a single point. In particular, $[x]_{2}=[x]_{1}^{+}\cap[x]_{0}=x$. Therefore Assumption~\ref{assume:singleton} is satisfied with $p=3$. 
Observe also that \begin{eqnarray*} [\xhat]_{1}^{+}\cap h^{-1}(y) &=& \left\{\left[\!\!\begin{array}{c} \xhat_{1}/\gamma\\ \xhat_{2}\gamma\\ \xhat_{3}/\gamma \end{array}\!\!\right] :\gamma>0 \right\}\cap \left\{ \left[\!\!\begin{array}{c} y\\ \alpha\\ \beta \end{array}\!\!\right]:\alpha,\,\beta>0 \right\}\\ &=& \left[\!\!\begin{array}{c} y\\ \xhat_{1}\xhat_{2}/y\\ \xhat_{3}y/\xhat_{1} \end{array}\!\!\right] \end{eqnarray*} which means that $\pi(\xhat,\,y)=p-2=1$ for all $\xhat$ and $y$. The dynamics of the deadbeat observer then read \begin{eqnarray*} \xhat^{+}&=&f([\xhat]_{1}^{+}\cap h^{-1}(y),\,u)\\ &=&\left[\!\!\begin{array}{c} \xhat_{2}\xhat_{3}y\\ \xhat_{3}/\xhat_{1}\\ \sqrt{\xhat_{1}\xhat_{2}u} \end{array}\!\!\right] \end{eqnarray*} \section{An algorithm for deadbeat gain}\label{sec:alg} In this section we provide an algorithm to compute the deadbeat observer gain for a linear system with scalar output. (The algorithm directly follows from Corollary~\ref{cor:foralg}.) Namely, given an observable pair $(C,\,A)$ with $C\in\Real^{1\times n}$ and $A\in\Real^{n\times n}$, we provide a procedure to compute the gain $L\in\Real^{n\times 1}$ that renders matrix $A-LC$ nilpotent. Below we let $\nal(\cdot)$ be some function such that, given matrix $M\in\Real^{m\times n}$ whose dimension of null space is $k$, $\nal(M)$ is some $n\times k$ matrix whose columns span the null space of $M$. \begin{algorithm}\label{alg:db} Given $C\in\Real^{1\times n}$ and $A\in\Real^{n\times n}$, the following algorithm generates deadbeat gain $L\in\Real^{n\times 1}$. \begin{eqnarray*} && X = \nal(C)\\ && \mbox{{\bf for}}\quad i = 1:n-2\\ && \qquad X = \nal\left(\left[ \begin{array}{c} C\\ \nal((AX)^{T})^{T} \end{array} \right]\right)\\ &&\mbox{{\bf end}}\\ &&L_{\rm pre}=AX\\ &&L = \frac{AL_{\rm pre}}{CL_{\rm pre}} \end{eqnarray*} \end{algorithm} For the interested reader we below give a M{\small ATLAB} code. 
Exploiting Algorithm~\ref{alg:db}, this code generates a function (which we named {\tt dbLfun}) whose inputs are matrices $C$ and $A$. The output of the function, as its name indicates, is the deadbeat gain $L$. \begin{verbatim} function L = dbLfun(C,A) X = null(C); for i = 1:length(A)-2 X = null([C;null((A*X)')']); end Lpre = A*X; L = A*Lpre/(C*Lpre); \end{verbatim} One can also use the built-in M{\small ATLAB} function {\tt acker} to compute the deadbeat gain. We can therefore compare {\tt dbLfun} with {\tt acker} via a numerical experiment. Table~\ref{tab:one} gives the experimental results. The number $n$ is the dimension of the system (that is, the number of columns of the $A$ matrix) and the numbers in the bottom row are the percentages of the cases (among $10^4$ random trials for each $n$) in which {\tt dbLfun} performed better than {\tt acker}. How we determine which one is better in a given case is as follows. Given pair $(C,\,A)$, we let $L_{1}$ be the gain resulting from {\tt dbLfun(C,A)} and $L_{2}$ be the gain given by {\tt acker(A',C',zeros(n,1))'}. Then we compare the norms $|(A-L_{1}C)^n|$ and $|(A-L_{2}C)^n|$, neither of which is zero due to round-off errors. The function yielding the smaller norm is considered to be better. \begin{table}\caption{Percentages of cases where {\tt dbLfun} performed better than {\tt acker}.} \begin{center}\label{tab:one} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline n=3 & n=4 & n=5 & n=6 & n=7 & n=8 & n=9 & n=10\\ \hline 51\% & 60\% & 67\% & 74\% & 80\% & 85\% & 87\% & 91\%\\ \hline \end{tabular} \end{center} \end{table} \section{Conclusion} For nonlinear systems a method to construct a deadbeat observer is proposed. The resultant observer can be considered as a generalization of the linear deadbeat observer. The construction makes use of sets that are generated iteratively. Through such iterations, observers are derived for two academic examples. 
Also, for computing the deadbeat gain of a linear system with scalar output, an algorithm has been given that performs no worse than an already existing one. \bibliographystyle{plain}
\section{Introduction} Let $u(t) \in C^0([0,T]; X)$ satisfy a first order evolution equation $\partial_t u(t) = A_1(t) u(t)$ in a Banach space $X$. In this situation, the associated evolution operator $U(t,s)$ satisfying \[ u(t) = U(t,s) u_s, \quad u(s) = u_s \in X \] is assumed to exist as a two-parameter $C^0$-semigroup. Let $k$ be a positive integer. For a given solution $u(t)$ in a fixed interval $t \in [0, T]$, much attention is paid to finding higher order evolution equations \begin{equation} \label{kthmastereq} \partial_t^k u(t) = A_k(t) u(t) ~ \Leftrightarrow ~ \partial_t^k U(t,s) u_s = A_k(t) U(t,s) u_s \qquad \end{equation} which are satisfied by exactly the same $u(t) = U(t,s) u_s$ as the first order evolution equation, and therefore by the evolution operator $U(t,s)$. According to the preceding results \cite{17iwata-1,17iwata-3,20iwata-3}, a set of operators $\{ A_k(t) \}_{0 \le t \le T}$ is represented by the logarithm of operators (for historical milestones of the logarithm of operators under the sectorial assumption, see \cite{94boyadzhiev,03hasse,06hasse,69nollau,00okazawa-1,00okazawa-2}). Meanwhile, abstract representations of the Cole-Hopf transform \cite{51cole,50hopf} and the Miura transform \cite{68miura} have been obtained by the logarithmic representation of operators \cite{19iwata-2,20iwata-1}; these are associated with $A_1$ and $A_2$ in a Banach space, respectively. The purpose of this paper is to present a method of profiling the $k$-th order operator $A_k(t)$ by means of a mathematical recurrence relation. In this procedure the logarithmic representation \cite{17iwata-1} plays an indispensable role. In this paper, by generalizing the logarithmic representation (for its physical applications, see \cite{19iwata-1,18iwata-2, 22iwata-1}), which provides nonlinear transforms, recursive relations between the solutions of first order evolution equations and those of $k$-th order evolution equations are shown.
This is equivalent to finding the unknown $k$-th order operator $A_k(t)$ from the first order operator $A_1(t)$, where both operators are associated with the same solution $u(t)$. In the recurrence formula, the Cole-Hopf transform corresponds to the first order relation, and the Miura transform to the second order relation. \section{Mathematical settings} Let $X$ be a Banach space and $B(X)$ be the set of bounded operators on $X$. The norms of both $X$ and $B(X)$ are denoted by $\| \cdot \|$ if there is no ambiguity. Let $t$ and $s$ be real numbers contained in a finite interval $[0, T]$, and $U(t,s)$ be the evolution operator in $X$. The two-parameter semigroup $U(t,s)$, which is continuous with respect to both parameters $t$ and $s$, is assumed to be a bounded operator on $X$. That is, the boundedness condition \begin{equation} \label{bound} \begin{array}{ll} \| U(t,s) \| \le M e^{\omega (t-s)} \end{array} \end{equation} is assumed. Following the standard theory of abstract evolution equations~\cite{61kato,70kato,73kato,60tanabe,61tanabe,79tanabe}, the semigroup property \[ \begin{array}{ll} U(t,r) U(r,s) = U (t,s) \end{array} \] is assumed to be satisfied for arbitrary $s \le r \le t$ contained in the finite interval $[0, T]$. Let the evolution operator $U(t,s)$ be generated by $A_1(t)$. Then, for certain functions $u(t) \in C^0([0,T]; X)$, \begin{equation} \label{1stmastereq} \partial_t u(t) = A_1(t) u(t) \end{equation} is satisfied in $X$. That is, the operator $A_1(t)$ is an infinitesimal generator of the $C^0$-semigroup $U(t,s)$. \section{Background and basic concepts} \subsection{Logarithmic representation of operators} According to the preceding work \cite{17iwata-1,17iwata-3,20iwata-3} dealing with the logarithmic representation of operators, the evolution operator $U(t,s)$ is assumed to be generated by $A_1(t)$. Let $\psi$ satisfy Eq.~\eqref{1stmastereq}.
The solution $\psi$ is generally represented by $\psi(t) = U(t,s) u_s$ for a certain $u_s \in X$, so that the operator equality is obtained by assuming $\psi(t) = U(t,s)$. For instance, it is practical to imagine that $\psi(t)$ is represented by \begin{equation} \label{expop} \psi(t) = \exp \left( \int_s^t A_1(\tau) d \tau \right). \end{equation} Here the integral representation \eqref{expop} is valid at least if $A_1(t)$ is $t$-independent, whose validity in $t$-independent cases is shown in Appendix I. Using the Riesz-Dunford integral \cite{43dunford}, the infinitesimal generator $A_1 (t)$ of the first order evolution equation is written by \begin{equation} \label{mastart} A_1(t) = \psi^{-1} \partial_t \psi = (I + \kappa U(s,t)) \partial_t {\rm Log} ( U(t,s) + \kappa I) \end{equation} under the commutation, where ${\rm Log}$ means the principal branch of logarithm, and $U(t,s)$ is temporarily assumed to be a group (i.e., existence of $U(s,t) = U(t,s)^{-1}$ is temporarily assumed to be valid for any $0 \le s \le t \le T$), and not only a semigroup. The validity of \eqref{mastart} is confirmed formally by \[ \begin{array}{ll} (I + \kappa U(s, t) ) \partial_t {\rm Log}(U(t, s) + \kappa I) \vspace{1.5mm}\\ = U(s, t)(U(t, s) + \kappa I)\partial_t U(t, s)(U(t, s) + \kappa I)^{-1} \vspace{1.5mm}\\ = U(s, t) \partial_t U(t, s) \vspace{1.5mm}\\ = U(s, t)A_1(t)U(t, s) = A_1(t) \end{array} \] under the commutation. This relation is associated with the abstract form of the Cole-Hopf transform \cite{19iwata-2}. The correspondence between $\partial_t \log \psi$ and $A_1(t)$ can be understood by $U(s, t) \partial_t U(t, s) = A_1(t)$ shown above. Indeed, \[ \psi^{-1} \partial_t \psi = \partial_t \log \psi \quad \Rightarrow \quad \partial_t \psi = ( \partial_t \log \psi) \psi \] is valid under the commutation. 
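For a finite-dimensional illustration of Eq.~\eqref{mastart}, the identity can be checked numerically when $A_1$ is a $t$-independent matrix, so that $U(t,s) = e^{(t-s)A_1}$ and all operators commute. The test matrix and $\kappa = 1$ below are arbitrary choices of ours, and the $t$-derivative is taken by central differences.

```python
import numpy as np

def funm(M, f):
    """Matrix function f(M) via eigendecomposition (assumes M diagonalizable)."""
    w, V = np.linalg.eig(M)
    return ((V * f(w)) @ np.linalg.inv(V)).real

A1 = np.array([[0.3, 0.1],
               [0.0, -0.2]])            # t-independent toy generator
kappa, t, h = 1.0, 0.7, 1e-5
I = np.eye(2)

U = lambda tau: funm(tau * A1, np.exp)               # U(tau, 0) = exp(tau A1)
LogUk = lambda tau: funm(U(tau) + kappa * I, np.log)

# A1 = (I + kappa U(s,t)) d/dt Log(U(t,s) + kappa I), with s = 0
dLog = (LogUk(t + h) - LogUk(t - h)) / (2 * h)
recovered = (I + kappa * U(-t)) @ dLog
print(np.allclose(recovered, A1, atol=1e-6))         # True
```

In this commuting case the recovered matrix agrees with $A_1$ up to finite-difference error, which illustrates the formal computation above.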
By introducing alternative infinitesimal generator $a(t,s)$ \cite{17iwata-3} satisfying \[ e^{a(t,s)} := U(t,s)+ \kappa I, \] a generalized version of the logarithmic representation \begin{equation} \label{altrep} A_1(t) = \psi^{-1} \partial_t \psi = ( I - \kappa e^{-a(t,s)} )^{-1} \partial_t a(t,s) \end{equation} is obtained, where $U(t,s)$ is assumed to be only a semigroup. The right hand side of Eq.~(\ref{altrep}) is actually a generalization of \eqref{mastart}; indeed, by only assuming $U(t,s)$ as a semigroup defined on $X$, $e^{-a(t,s)}$ is always well defined by a convergent power series, and there is no need to have a temporal assumption for the existence of $U(s,t)= U(t,s)^{-1}$. It is remarkable here that $e^{-a(t,s)} = e^{a(s,t)}$ is not necessarily satisfied~\cite{17iwata-3}. The validity of Eq.~\eqref{altrep} is briefly seen in the following. Using the generalized representation, \[ \begin{array}{ll} e^{a(t,s)} = U(t, s) + \kappa I \quad \Rightarrow \quad a(t, s) = {\rm Log}(U(t, s) + \kappa I) \vspace{1.5mm} \\ \qquad \Rightarrow \quad \partial_t a(t, s) = \left[ \partial_t U(t, s) \right] (U(t, s) + \kappa I)^{-1} = A_1(t)U(t, s)(U(t, s) + \kappa I)^{-1} \end{array} \] is obtained. It leads to \[ \begin{array}{ll} (I - \kappa e^{-a(t,s)})^{-1} \partial_t a(t,s) = (I - \kappa e^{-a(t,s)})^{-1} A_1(t)U(t, s)(U(t, s) + \kappa I)^{-1}, \end{array} \] and therefore \[ \begin{array}{ll} A_1 (t) = (U(t,s) + \kappa I ) U(t,s) ^{-1} A_1(t)U(t, s)(U(t, s) + \kappa I)^{-1} \vspace{1.5mm} \\ \quad \Leftrightarrow \quad A_1 (t) = (I - \kappa e^{-a(t,s)})^{-1} A_1(t) (I - \kappa e^{-a(t,s)} ) \end{array} \] is valid under the commutation assumption. It simply shows the consistency of representations using the alternative infinitesimal generator $a(t,s)$. In the following, the logarithmic representation \eqref{altrep} is definitely used, and the original representation \eqref{mastart} appears if it is necessary. 
It is notable here that, based on the ordinary and generalized logarithmic representations, the algebraic property of the set of generally unbounded infinitesimal generators is known \cite{18iwata,20iwata-2}. \subsection{Miura transform} The Miura transform, which has the same form as Riccati's differential equation, is represented by \[ u = \partial_x v + v^2, \] where $u$ is a solution of the Korteweg-de Vries equation (KdV equation), and $v$ is a solution of the modified Korteweg-de Vries equation (mKdV equation). For Riccati's differential equation, $u$ is a function standing for an inhomogeneous term, and $v$ is the unknown function. The Miura transform provides a representation of the infinitesimal generator of a second order differential equation. Indeed, if the Miura transform is combined with the Cole-Hopf transform $v = \psi^{-1} \partial_x \psi$, it is written as \begin{equation} \label{basic} \begin{array}{ll} u = \partial_x ( \psi^{-1} \partial_x \psi ) + ( \psi^{-1} \partial_x \psi)^2 \vspace{2.5mm} \\ = - \psi^{-2} (\partial_x \psi)^2+ \psi^{-1} \partial_x ( \partial_x \psi ) + \psi^{-2} (\partial_x \psi)^2 \vspace{2.5mm} \\ = \psi^{-1} \partial_x^2 \psi, \end{array} \end{equation} where the commutation between $\psi$, $\partial_x \psi$ and $\partial_x^2 \psi$ is assumed. This issue should be treated carefully in the operator situation; i.e., in the standard theory of abstract evolution equations of hyperbolic type \cite{70kato,73kato}, $x$-dependent infinitesimal generators (corresponding to $\partial_x \psi$ or $\partial_x^2 \psi$, respectively) do not generally commute with the evolution operator (corresponding to $\psi$ or $\partial_x \psi$, respectively), although such commutations always hold for $x$-independent infinitesimal generators.
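In the scalar, everywhere-commuting case, the chain \eqref{basic} can be verified directly. The sample function $\psi(x) = \cosh x$ below is our own choice, for which $v = \tanh x$ and $u = \psi^{-1}\partial_x^2\psi \equiv 1$.

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 9)
psi = np.cosh(x)                    # smooth positive sample psi (toy choice)
dpsi, d2psi = np.sinh(x), np.cosh(x)

v = dpsi / psi                      # Cole-Hopf: v = psi^{-1} d_x psi = tanh x
dv = 1.0 / np.cosh(x) ** 2          # d_x v = sech^2 x
u = dv + v ** 2                     # Miura / Riccati form: u = d_x v + v^2

print(np.allclose(u, d2psi / psi))  # True: equals psi^{-1} d_x^2 psi (here u = 1)
```

Here $\mathrm{sech}^2 x + \tanh^2 x = 1 = \cosh x / \cosh x$, so both sides of \eqref{basic} coincide identically.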
For sufficiently smooth $\psi$, one of the implication here is that the function $\psi$ satisfies both the second order equation \[ \begin{array}{ll} u = \psi^{-1} \partial_x^2 \psi \quad \Leftrightarrow \quad \partial_x^2 \psi = u \psi \end{array} \] and the first order equation \[ \begin{array}{ll} v = \psi^{-1} \partial_x \psi \quad \Leftrightarrow \quad \partial_x \psi = v \psi , \end{array} \] at the same time under the commutation. The combined use of Miura transform and Cole-Hopf transform is called the combined Miura transform in \cite{20iwata-1} (for the combined use in the inverse scattering theory, e.g., see \cite{81ablowitz}). Provided the solvable first order autonomous differential equation (i.e., the Cole-Hopf transform) with its solution $\psi$, the Miura transform provides a way to find the second order autonomous differential equation to be satisfied by exactly the same $\psi$. The combined Miura transform $ \partial_x^2 \psi = u \psi$ can be generalized as the second order abstract equation $ \partial_t^2 \psi = A_2(t) \psi $ in finite or infinite dimensional Banach spaces by taking $u$ as a closed operator $A_2(t):D(A_2) \to X$ in a Banach space $X$, where the index $2$ denotes the order of differential equations, and the notation of variable is chosen as $t$. The solution $\psi$ is generally represented by $\psi(t) = U(t,s) u_s$ for a certain $u_s \in X$, so that the operator version of the combined Miura transform is formally obtained by assuming $\psi(t) = U(t,s)$. \begin{equation} \label{intermed} \begin{array}{ll} A_1(t) = \psi^{-1} \partial_t \psi \vspace{1.5mm} \\ \qquad = \partial_t \log \psi = \partial_t \log U(t,s) \vspace{1.5mm} \\ \qquad = ( I - \kappa e^{-a(t,s)} )^{-1} \partial_t a(t,s) \end{array} \end{equation} is valid under the commutation between $\psi = U(t,s)$ and $\partial_t \psi = \partial_t U(t,s)$. 
According to the spectral structure of $U(t,s)$, its logarithm $\log U(t,s)$ cannot necessarily be defined by the Riesz-Dunford integral. However, the logarithm $\log e^{a(t,s)}$ is necessarily well defined by the Riesz-Dunford integral, because $a(t,s)$ with a certain $\kappa$ is always bounded on $X$ regardless of the spectral structure of $U(t,s)$. That is, it is necessary to introduce a translation (i.e., a certain nonzero complex number $\kappa$) for defining logarithm functions of operator. Here it is not necessary to calculate $\log (U(t,s))$ at the intermediate stage, and only the most left hand side and the most right hand side of Eq.~\eqref{intermed} make sense. In terms of applying nonlinear transforms such as the Miura transform and the Cole-Hopf transform, it is necessary to identify the functions with the operators, which is mathematically equivalent to identify elements in $X$ with elements in $B(X)$. In other words, it is also equivalent to regard a set of evolution operators as a set of infinitesimal generators. The operator representation of the infinitesimal generator $A_2(t)$ of second order evolution equations has been obtained in \cite{20iwata-1}. 
Under the commutation assumption between $\psi$ and $\partial_t \psi$ and that between $\partial_t \psi$ and $\partial_t^2 \psi$, the representation of the infinitesimal generator $A_2(t)$ in Eq.~(\ref{cm2nd}) is formally obtained by \begin{equation} \label{cm3rd} \begin{array}{ll} A_2(t) = \psi^{-1} \partial_t^2 \psi = \left[ \psi^{-1} \partial_t \psi \right] ~ \left[ (\partial_t \psi) ^{-1} \partial_t^2 \psi \right] = \left[ \partial_t \log \psi \right] ~ \left[ \partial_t \log (\partial_t \psi) \right] \vspace{1.5mm} \\ \qquad = \left[ \partial_t \log U(t,s) \right] ~ \left[ \partial_t \log (\partial_t U(t,s)) \right] \vspace{1.5mm} \\ \qquad = ( I - \kappa e^{-a_1(t,s)} )^{-1} \partial_t a_1(t,s) ~ ( I - \kappa e^{-a_2(t,s)} )^{-1} \partial_t a_2 (t,s) \end{array} \end{equation} in $X$, where the alternative infinitesimal generators $a_1 (t,s)$ and $a_2 (t,s)$ are defined by \[ e^{a_1 (t,s)} = U(t,s) + \kappa I , \quad e^{a_2 (t,s)} = \partial_t U(t,s) + \kappa I, \] and therefore by \[ a_1 (t,s) = {\rm Log} ( U(t,s) + \kappa I ), \quad a_2 (t,s) = {\rm Log} ( \partial_t U(t,s) + \kappa I ). \] For $\psi(t) = U(t,s) u_s \in C^0([0,T];X)$, let the generally unbounded operator $\partial_t U(t,s)$ be further assumed to be continuous with respect to $t$; for the definition of the logarithm of unbounded evolution operators by means of the doubly-implemented resolvent approximation, see \cite{20iwata-3}. Note that the continuous and unbounded setting for $\partial_t U(t,s)$ is reasonable with respect to $C^0$-semigroup theory, since both $\psi (t) = U(t,s) u_s$ and $\partial_t U(t,s) u_s$ are the two main components of the solution orbit defined in infinite-dimensional dynamical systems. Similar to Eq.~\eqref{intermed}, it is necessary to introduce a translation to define $\log U(t,s)$, so that the right-most side of Eq.~\eqref{cm3rd} is a mathematically valid representation for $A_2(t)$.
Here it is not necessary to calculate $\log U(t,s)$ and $\log \partial_t U(t,s)$ at the intermediate stage. In this manner, the infinitesimal generators of second order evolution equations are factorized as the product of two logarithmic representations of operators: $\partial_t \log U(t,s)$ and $\partial_t \log \partial_t U(t,s)$ (for the details arising from the operator treatment, see \cite{20iwata-1}). The order of $\partial_t \log U(t,s)$ and $\partial_t \log \partial_t U(t,s)$ can be changed independent of the boundedness and unboundedness of operator, as it is confirmed by the commutation assumption \[ \psi^{-1} \partial_t^2 \psi = ( \partial_t^2 \psi) \psi^{-1} = ( \partial_t^2 \psi) (\partial_t \psi)^{-1} (\partial_t \psi) \psi^{-1} = [ (\partial_t \psi)^{-1} \partial_t^2 \psi ] [ \psi^{-1} \partial_t \psi ]. \] This provides the operator representation of the combined Miura transform. In unbounded operator situations, the domain space of infinitesimal generators should be carefully discussed. Indeed, the domain space of $A_2(t)$, which must be a dense subspace of $X$, is expected to satisfy \begin{equation} \begin{array}{ll} \label{cm1} D(A_2(t)) = \left\{ u \in X; ~ \{ \partial_t {\hat a}(t,s) u \} \subset D( \partial_t a(t,s) ) \right\} \subset X, \vspace{1.5mm} \\ D( \partial_t a(t,s) ) = \left\{ u \in X; ~ \{ \partial_t a(t,s) u \} \subset X \right\} \subset X \end{array} \end{equation} or \begin{equation} \begin{array}{ll} \label{cm2} D(A_2(t)) = \left\{ u \in X; ~ \{ \partial_t a (t,s) u \} \subset D( \partial_t {\hat a}(t,s) ) \right\} \subset X, \vspace{1.5mm} \\ D( \partial_t {\hat a}(t,s) ) = \left\{ u \in X; ~ \{ \partial_t {\hat a}(t,s) u \} \subset X \right\} \subset X \end{array} \end{equation} depending on the order of product, where note that both $a(t,s)$ and $\partial_t a(t,s)$ depend on $t,s \in [0,T]$. 
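The factorization into two first-order logarithmic factors can be illustrated with a $t$-independent matrix generator, for which the commutation assumptions hold automatically: each bracket $\psi^{-1}\partial_t\psi$ and $(\partial_t\psi)^{-1}\partial_t^2\psi$ reduces to $A$, and their product to $A^2 = A_2$. The matrix below is an invertible toy choice of ours; derivatives are taken by finite differences.

```python
import numpy as np

def expm(M):
    """Matrix exponential via eigendecomposition (assumes M diagonalizable)."""
    w, V = np.linalg.eig(M)
    return ((V * np.exp(w)) @ np.linalg.inv(V)).real

A = np.array([[0.1, 0.4],
              [0.0, -0.2]])                   # invertible, diagonalizable toy generator
t, h = 0.6, 1e-4
psi = lambda s: expm(s * A)                   # psi(t) = U(t, 0)

dpsi = (psi(t + h) - psi(t - h)) / (2 * h)              # d_t psi
d2psi = (psi(t + h) - 2 * psi(t) + psi(t - h)) / h**2   # d_t^2 psi

f1 = np.linalg.inv(psi(t)) @ dpsi             # psi^{-1} d_t psi         ~ A
f2 = np.linalg.inv(dpsi) @ d2psi              # (d_t psi)^{-1} d_t^2 psi ~ A
print(np.allclose(f1, A, atol=1e-6), np.allclose(f2, A, atol=1e-4))
print(np.allclose(f1 @ f2, A @ A, atol=1e-4))  # A_2 = A^2
```

The domain-space subtleties above are invisible in finite dimensions; the sketch only shows the algebraic structure of the product.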
In the following, beginning with the second order formalism (i.e., the combined Miura transform), the relation is generalized as a recurrence formula for defining the higher order operator $A_k$ ($k \ge 3$). Consequently, for solutions $\psi(t) = U(t,s) u_s$ satisfying $ \partial_t \psi = A_1(t) \psi $, the second order evolution equation \begin{equation} \label{cm2nd} \begin{array}{ll} \partial_t^2 \psi = A_2(t) \psi \end{array} \end{equation} is also satisfied by setting the infinitesimal generator $A_2(t)$ as defined by the combined Miura transform \eqref{basic}. This means that $A_2(t)$ is automatically determined by a given operator $A_1(t)$. \section{Main result} \subsection{Recurrence formula generalizing the combined Miura transform} For exactly the same $\psi(t) = U(t,s)$ satisfying $ \partial_t \psi = A_1(t) \psi$, let the generally unbounded operator $\partial_t^k U(t,s)$ be continuous with respect to $t$ ($k \ge 1$: integer). Let the infinitesimal generator of $k$-th order evolution equations be defined by \[ A_k = \psi^{-1} \partial_t^{k} \psi \] under the commutation assumption. For defining the higher-order infinitesimal generator in an abstract manner, a recurrence formula is introduced. In this section, the infinitesimal generators $A_k(t)$ are assumed to be general ones admitting time dependence. \begin{theorem} \label{thm01} For $t \in [0,T]$, let generally unbounded closed operators $A_1(t), A_2(t), \cdots, A_n(t)$, defined in a Banach space $X$, be continuous with respect to $t$. Here $A_1(t)$ is further assumed to be an infinitesimal generator of the first-order evolution equation \begin{equation} \label{cp1} \begin{array}{ll} \partial_t \psi(t) = A_{1}(t) \psi(t), \end{array} \end{equation} in $X$, where $\psi(t)$ satisfying the initial condition $\psi(0) = \psi_0 \in X$ is the solution of the Cauchy problem of \eqref{cp1}.
The commutation between $\psi$ and $\partial_t \psi$, and that between $\psi$ and $\partial_t^n \psi$ are assumed to be valid ($n \ge 2$). Then the infinitesimal generator of the $n$-th order evolution equation \begin{equation} \label{cp2} \begin{array}{ll} \partial_t^n \psi(t) = A_{n} (t) \psi(t) \end{array} \end{equation} is given by the recurrence formula \begin{equation} \label{recc} \begin{array}{ll} A_{n}(t) = (\partial_t + A_{1}(t) ) A_{n-1}(t) \vspace{1.5mm} \\ \end{array} \end{equation} which is valid for $n \ge 2$, where $\psi(t)$ is common to Eqs.~\eqref{cp1} and \eqref{cp2}. \end{theorem} \begin{proof} The statement is proved by mathematical induction. Let $\psi$ satisfy $\partial_t \psi(t) = A_{1}(t) \psi(t) $. In the case $n=2$, let $A_2$ satisfy $\partial_t^2 \psi(t) = A_{2}(t) \psi(t)$. The infinitesimal generator $A_2(t)$ is formally represented by \[ A_{2} (t) = \partial_t A_{1}(t) + A_{1}^2(t), \] as is readily understood from the combined Miura transform (see also \eqref{basic}). Let the relation for $n=k-1$: \[ A_{k-1}(t) = \psi(t)^{-1} \partial_{t}^{k-1} \psi(t) \] be satisfied. By substituting $A_{k-1} (t)= \psi^{-1} \partial_{t}^{k-1} \psi$ and $ A_{1}(t) = \psi^{-1} \partial_{t} \psi$ into \[ A_k(t) = \partial_t A_{k-1}(t) + A_1(t) A_{k-1}(t), \] it results in \[ \begin{array}{ll} A_k(t) = \partial_t (\psi^{-1} \partial_{t}^{k-1} \psi) + (\psi^{-1} \partial_{t} \psi) ( \psi^{-1} \partial_{t}^{k-1} \psi) \vspace{1.5mm} \\ \quad =- \psi^{-2} (\partial_t \psi) (\partial_{t}^{k-1} \psi) + \psi^{-1} \partial_{t}^{k} \psi + (\psi^{-1} \partial_{t} \psi) ( \psi^{-1} \partial_{t}^{k-1} \psi) \vspace{1.5mm} \\ \quad = \psi^{-1} \partial_{t}^{k} \psi \end{array} \] under the commutation. Consequently, \[ \partial_{t}^{k} \psi(t) = A_k(t) \psi(t) \] is obtained. \end{proof} Using Eq.~(\ref{recc}), the operator version of Riccati's differential equation is obtained if $n=2$ is applied.
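In the scalar case the $n=2$ reduction can be checked numerically: for $a_1(t) = t$ (a toy, $t$-dependent choice of ours), $\psi(t) = e^{t^2/2}$ solves $\partial_t\psi = a_1\psi$, and the recurrence yields $A_2(t) = \partial_t a_1 + a_1^2 = 1 + t^2$.

```python
import numpy as np

a1 = lambda t: t                     # scalar time-dependent generator (toy choice)
psi = lambda t: np.exp(t ** 2 / 2)   # solves psi' = a1(t) * psi
A2 = lambda t: 1.0 + t ** 2          # recurrence at n = 2: A2 = d/dt a1 + a1^2

t, h = 0.8, 1e-4
d2 = (psi(t + h) - 2 * psi(t) + psi(t - h)) / h ** 2   # d_t^2 psi by central difference
print(np.allclose(d2, A2(t) * psi(t), rtol=1e-6))      # True: the same psi solves both equations
```

This is the scalar shadow of the operator statement: one function $\psi$ simultaneously satisfies the first and second order equations.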
The resulting equation \[ \begin{array}{ll} A_{2}(t) = \partial_t A_{1}(t) + [A_{1}(t)]^2 \vspace{1.5mm} \\ \end{array} \] is a Riccati-type differential equation valid in the operator sense. The domain space of $A_2(t)$ is determined by $A_1(t)$, and is not necessarily equal to the domain space of $A_1(t)$. Consequently, the Riccati-type nonlinear differential equation is generalized to an operator equation in finite- or infinite-dimensional abstract Banach spaces. The obtained equation \[ A_{n}(t) = (\partial_t + A_{1}(t) ) A_{n-1}(t) \] itself is nonlinear if $n=2$, while linear if $n\ne 2$. Its dense domain, which is also recursively determined, is assumed to satisfy \[ \begin{array}{ll} D(A_n(t)) = \left\{ u \in X; ~ \{ A_{n-1} u \} \subset D( A_1(t) ) \right\} \subset X, \end{array} \] for $n \ge 2$, with \[ \begin{array}{ll} D( A_1(t) ) = \left\{ u \in X; ~ \{ A_1 u \} \subset X \right\} \subset X. \end{array} \] In the following examples, concrete higher order evolution equations are shown. In each case, the recurrence formula plays the role of a transform between the first order equation and the $n$-th order equation. \vspace{2.5mm} \\ {\bf Example 1. [2nd order evolution equation].} ~The second order evolution equation, which is satisfied by a solution $\psi$ of the first order evolution equation $\partial_t \psi = \partial_x^k \psi$, is obtained. The operator $A_1$ is given by the $t$-independent operator $\partial_x^k$ ($k$ is a positive integer). Then \[ \begin{array}{ll} A_{2} = (\partial_t + \partial_x^k ) \partial_x^k = \partial_t \partial_x^k + \partial_x^{2k}, \vspace{1.5mm} \\ \end{array} \] and, by applying Eq.~(\ref{recc}), the second order evolution equation \begin{equation} \label{evo02} \begin{array}{ll} \partial_t^2 u = \partial_t \partial_x^k u + \partial_x^{2k} u \end{array} \end{equation} is obtained. Indeed, let $u$ be a general or special solution of $\partial_t u = \partial_x^k u$.
The validity of statement \[ \begin{array}{ll} \partial_t u = \partial_x^k u \qquad \Rightarrow \qquad \partial_t^2 u = \partial_t \partial_x^k u + \partial_x^{2k} u \end{array} \] is confirmed by substituting the operator equality $\partial_x^k = u^{-1} \partial_t u$ and the associated equalities \[ \begin{array}{ll} \partial_t \partial_x^k = \partial_t (u^{-1} \partial_t u) = -(u^{-1} \partial_t u)^2 + (\partial_t^{2} u) u^{-1}, \vspace{2.5mm} \\ \partial_x^{2k} = (u^{-1} \partial_t u)^2 \\ \end{array} \] to the right hand side of Eq.~\eqref{evo02}, where the commutation between $u$, $\partial_t u$ and $\partial_t^2 u$ is utilized. Equation~\eqref{evo02} is a kind of wave equation in case of $k=1$. Note that the obtained equation is a linear equation. \vspace{1.5mm} \\ {\bf Example 2. [3rd order evolution equation]} ~The third order evolution equations, which is satisfied by a solution $\psi$ of first order evolution equation $\partial_t \psi = \partial_x^k \psi$, is obtained in the same manner. The operator $A_1$ is given by time-independent operator $\partial_x^k$ ($k$ is a positive integer). \[ \begin{array}{ll} A_{3} = (\partial_t + \partial_x^k ) ( \partial_t \partial_x^{k} + \partial_x^{2k}) = \partial_t (\partial_t \partial_x^k + \partial_x^{2k}) + \partial_x^k ( \partial_t \partial_x^k + \partial_x^{2k}) , \vspace{1.5mm} \\ = \partial_t^2 \partial_x^{k} + 2 \partial_t \partial_x^{2k} + \partial_x^{3k} , \vspace{1.5mm} \\ \end{array} \] and the third order evolution equation \[ \begin{array}{ll} \partial_t^3 u = \partial_t^2 \partial_x^k u + 2 \partial_t \partial_x^{2k} u + \partial_x^{3k} u \end{array} \] is obtained. The validity is confirmed by substituting the operator equality $\partial_x^k = u^{-1} \partial_t u $. 
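The defining relation $A_n = \psi^{-1}\partial_t^n\psi$ behind Theorem~\ref{thm01} can also be checked with a matrix generator: for a $t$-independent toy matrix $A_1$ of our choosing, $\psi(t) = e^{tA_1}u_s$ gives $\psi^{-1}\partial_t^3\psi = A_1^3$, so the same $\psi$ solves the third order equation.

```python
import numpy as np

def expm(M):
    """Matrix exponential via eigendecomposition (assumes M diagonalizable)."""
    w, V = np.linalg.eig(M)
    return ((V * np.exp(w)) @ np.linalg.inv(V)).real

A1 = np.array([[0.2, 1.0],
               [0.0, -0.3]])                  # t-independent toy generator
u0 = np.array([1.0, 2.0])
psi = lambda t: expm(t * A1) @ u0             # solves d_t psi = A1 psi

A3 = np.linalg.matrix_power(A1, 3)            # candidate third order generator
t, h = 0.5, 1e-3
# central difference for the third derivative of psi
d3 = (psi(t + 2*h) - 2*psi(t + h) + 2*psi(t - h) - psi(t - 2*h)) / (2 * h**3)
print(np.allclose(d3, A3 @ psi(t), atol=1e-5))   # True: the SAME psi solves d_t^3 psi = A3 psi
```

This only probes the commuting, finite-dimensional shadow of the statement; the operator-theoretic content (domains, unboundedness) is of course not visible here.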
\subsection{Logarithmic representation of $n$-th order infinitesimal generator} Although the relation between $A_n$ and a given $A_1$ can be understood by Theorem 1, those representations and the resulting representations of evolution operator ($C_0$-semigroup) are not understood at this point. In this section, utilizing the logarithmic representation of the infinitesimal generator, the representation of infinitesimal generator for the high order evolution equation is obtained. Since the logarithmic representation has been known to be associated essentially with the first- and second-order evolution equations, the discussion in the present section clarifies a universal role of logarithm of operators, independent of the order of evolution equations. \begin{theorem} \label{thm02} For $t \in [0,T]$, let generally-unbounded closed operators $A_1(t), A_2(t), \cdots A_n(t)$ be continuous with respect to $t$ defined in a Banach space $X$. Here $A_1$ is further assumed to be an infinitesimal generator of the first-order evolution equation \eqref{cp1}. For the $n$-th order evolution equations \begin{equation} \label{nthmastereq} \partial_t^n u(t) = A_n(t) u(t) \end{equation} in $X$, the commutation between $\psi$, $\partial_t \psi$, $\cdots \partial_t^n \psi$ are assumed to be valid. Then $n$-th order operator $A_n(t)$ is represented by the product of logarithmic representations \begin{equation} \label{repu} \begin{array}{ll} A_n (t) = \Pi_{k=1}^n \left[ (\kappa {\mathcal U}_k(s,t) + I) \partial_t {\rm Log} ( {\mathcal U_k}(t,s) + \kappa I) \right], \end{array} \end{equation} where $\kappa$ is a certain complex number, and ${\mathcal U}_k(t,s)$ is the evolution operator of $ \partial_t^{k} \psi = {\mathcal A}_k(t) \partial_t^{k-1} \psi $. Note that the commutation between operators is assumed. 
In the operator situation, the commutation assumption is equivalent to assume a suitable domain space setting for each $ (\kappa {\mathcal U}_k(s,t) + I) \partial_t {\rm Log} ( {\mathcal U_k}(t,s) + \kappa I) $. \end{theorem} \begin{proof} According to Theorem~\ref{thm01} and therefore to the operator version of the combined Miura transform, the $n$-th order infinitesimal generator is regarded as $A_{n}(t) = \psi^{-1} \partial_t^{n} \psi$ under the commutation assumption. In the first step, the $n$-th order infinitesimal generator is factorized as \begin{equation} \label{recc2} \begin{array}{ll} \psi^{-1} \partial_t^{n} \psi \vspace{1.5mm} \\ = \psi^{-1} ( \partial_t^{n-1} \psi ) ( \partial_t^{n-1} \psi )^{-1} \partial_t^{n} \psi \vspace{1.5mm} \\ = \psi^{-1} ( \partial_t \psi ) ( \partial_t \psi )^{-1} \cdots ( \partial_t^{n-2} \psi ) ( \partial_t^{n-2} \psi )^{-1} ( \partial_t^{n-1} \psi ) ( \partial_t^{n-1} \psi )^{-1} \partial_t^{n} \psi \vspace{1.5mm} \\ = \left[ \psi^{-1} ( \partial_t \psi ) \right] \left[ ( \partial_t \psi )^{-1} ( \partial_t^{2} \psi ) \right] \cdots \left[ ( \partial_t^{n-2} \psi )^{-1} ( \partial_t^{n-1} \psi ) \right] \left[ ( \partial_t^{n-1} \psi )^{-1} \partial_t^{n} \psi \right] \end{array} \end{equation} where the commutation assumption is necessary for obtaining each $ \psi^{-1} \partial_t^{k} \psi$ with $1 \le k \le n$. Under the commutation assumption, the representation \begin{equation} \label{recx3} \begin{array}{ll} A_n(t) = \psi^{-1} \partial_t^{n} \psi = \Pi_{k=1}^n \left[ ( \partial_t^{k-1} \psi )^{-1} \partial_t^{k} \psi \right] . \end{array} \end{equation} is obtained. In the second step, the logarithmic representation is applied to $ ( \partial_t^{k-1} \psi )^{-1} \partial_t^{k} \psi$. 
The logarithmic representation for the first order abstract equation $ \partial_t {\tilde \psi} = {\mathcal A}_k (t) {\tilde \psi} \Leftrightarrow \partial_t^{k} \psi = {\mathcal A}_k(t) \partial_t^{k-1} \psi $ with ${\tilde \psi} = \partial_t^{k-1} \psi$ is \begin{eqnarray*} {\mathcal A}_k (t) &=& ( \partial_t^{k-1} \psi )^{-1} \partial_t^{k} \psi \vspace{1.5mm} \\ &=& {\mathcal U}_k(t,s)^{-1} \, \partial_t {\mathcal U}_k(t,s) \vspace{1.5mm} \\ &=& \left[ (\kappa {\mathcal U}_k(s,t) + I) \partial_t {\rm Log} ( {\mathcal U_k}(t,s) + \kappa I) \right] , \end{eqnarray*} where $ {\mathcal U}_k(t,s)$ is the evolution operator generated by ${\mathcal A}_k(t)$, and $ {\tilde \psi} = {\mathcal U}_k(t,s) $ is applied for obtaining the operator equality. Again, the commutation assumption is necessary to obtain the logarithmic representation. Consequently, the higher order infinitesimal generator becomes \begin{eqnarray*} A_n (t) &=& \Pi_{k=1}^n \left[ {\mathcal U}_k(t,s)^{-1} \, \partial_t {\mathcal U}_k(t,s) \right] \vspace{1.5mm} \\ &=& \Pi_{k=1}^n \left[ (\kappa {\mathcal U}_k(s,t) + I) \partial_t {\rm Log} ( {\mathcal U_k}(t,s) + \kappa I) \right]. \end{eqnarray*} In this formalism, several possible orderings arise from the commutation assumption. Since each component $\left[ (\kappa {\mathcal U}_k(s,t) + I) \partial_t {\rm Log} ( {\mathcal U_k}(t,s) + \kappa I) \right]$ is possibly unbounded in $X$, the domain space should be chosen carefully, as discussed around Eqs.~\eqref{cm1} and \eqref{cm2}. \end{proof} Although the commutation assumption in Theorems \ref{thm01} and \ref{thm02} restricts the possible applications, all $t$-independent infinitesimal generators satisfy this property. That is, the present results are applicable to linear/nonlinear heat equations, linear/nonlinear wave equations, and linear/nonlinear Schr\"odinger equations.
Consequently, a new path to higher order evolution equations is introduced, in which the concept of ``higher order'' is reduced to the concept of ``operator product'' in the theory of abstract evolution equations. Using the alternative infinitesimal generator defined by $e^{\alpha_k(t,s)} = {\mathcal U}_k(t,s) + \kappa I$, ${\mathcal U}_k(t,s)$ can be replaced with $e^{\alpha_k(t,s)}$, and the following corollary is valid. \begin{corollary} \label{cor03} For $t \in [0,T]$, let generally unbounded closed operators $A_1(t), A_2(t), \cdots, A_n(t)$, defined in a Banach space $X$, be continuous with respect to $t$. Here $A_1$ is further assumed to be an infinitesimal generator of the first-order evolution equation \eqref{cp1}. For the $n$-th order evolution equation \[ \partial_t^n u(t) = A_n(t) u(t) \] in $X$, the commutation between $\psi$, $\partial_t \psi$, $\cdots \partial_t^n \psi$ is assumed to be valid. Then the $n$-th order operator is represented by the product of logarithmic representations \begin{equation} \label{repalt} \begin{array}{ll} A_n (t) = \Pi_{k=1}^n \left[ ( I - \kappa e^{-\alpha_k(t,s)} )^{-1} \partial_t \alpha_k(t,s) \right] , \end{array} \end{equation} where $\kappa$ is a certain complex number, ${\mathcal U}_k(t,s)$ is the evolution operator of $ \partial_t^{k} \psi = {\mathcal A}_k \partial_t^{k-1} \psi $, and $\alpha_k(t,s)$ is an alternative infinitesimal generator to $ {\mathcal A}_k$ satisfying the relation $e^{\alpha_k(t,s)} = {\mathcal U}_k(t,s) + \kappa I$. Note that the commutation between operators is assumed, so that the order of the logarithmic representations can be changed by assuming a suitable domain space setting. \end{corollary} \begin{proof} The statement follows from applying \[ e^{\alpha_k(t,s)} = {\mathcal U}_k(t,s) + \kappa I \] to the representation shown in Theorem \ref{thm02}. \end{proof} Equation~\eqref{repalt} is actually a generalization of Eq.~\eqref{repu}, as discussed around Eq.~\eqref{altrep}.
\subsection{$n$-th order generalization of the Hille-Yosida type exponential function of operator} Let us take a $t$-independent $n$-th order infinitesimal generator $A_n$. The operators $A_k$ with $k=1,2, \cdots, n$ are assumed to be the infinitesimal generators of $k$-th order evolution equations. The characteristic equation for Eq.~\eqref{nthmastereq} is written as \[ \omega^n - A_n = 0 \] by substituting a formal solution $e^{t \omega}$. If the fractional power $A_n^{1/n} $ of the operator exists (for the definition of fractional powers of operators, see \cite{01carracedo}), $A_n^{1/n} $ is a root of the characteristic equation. Furthermore, $A_n^{1/n} $ is assumed to be an infinitesimal generator of the first order evolution equation $\partial_t u(t) = A_n^{1/n} u(t)$. In this case, the characteristic equation is also written as \[ \omega^n - A_n = 0 \quad \Leftrightarrow \quad \left( \omega/A_n^{1/n} \right)^n - I = 0. \] Note that the latter equation, which is called the cyclotomic equation in algebra \cite{1886scott,1836wantzel}, is known to admit an algebraic representation of its roots. Consequently, based on the discussion made in Appendix I, the integral representation of the evolution operator is valid and one specific solution (more precisely, one of the fundamental solutions) is represented by \begin{equation} \label{hy-gen} {\mathcal U}_n(t,s) = \exp \left(t A_n^{1/n} \right) = \exp \left( t ~ \left\{ \Pi_{k=1}^n \left[ ( I - \kappa e^{-\alpha_k(t,s)} )^{-1} \partial_t \alpha_k(t,s) \right] \right\}^{1/n} \right), \end{equation} where $\kappa$ is a certain complex number, ${\mathcal U}_k(t,s)$ is the evolution operator of $ \partial_t^{k} \psi = {\mathcal A}_k \partial_t^{k-1} \psi $, and $\alpha_k(t,s)$ is an alternative infinitesimal generator to $ {\mathcal A}_k$ satisfying the relation $e^{\alpha_k(t,s)} = {\mathcal U}_k(t,s) + \kappa I$. The $n$-th order logarithmic representation is actually a generalization of the Hille-Yosida type generation theorem ($n=1$).
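Equation~\eqref{hy-gen} can be illustrated in finite dimensions: for a matrix $A_2$ with positive spectrum (a toy choice of ours), the fractional power $A_2^{1/2}$ exists, and $u(t) = \exp(tA_2^{1/2})u_0$ is one fundamental solution of $\partial_t^2 u = A_2 u$.

```python
import numpy as np

def funm(M, f):
    """Matrix function f(M) via eigendecomposition (assumes M diagonalizable)."""
    w, V = np.linalg.eig(M)
    return ((V * f(w)) @ np.linalg.inv(V)).real

A2 = np.array([[2.0, 1.0],
               [0.0, 3.0]])                  # positive spectrum: A2^(1/2) exists
B = funm(A2, np.sqrt)                        # B = A2^(1/2), a root of w^2 - A2 = 0
u0 = np.array([1.0, 1.0])
u = lambda t: funm(t * B, np.exp) @ u0       # u(t) = exp(t A2^(1/2)) u0

t, h = 0.4, 1e-4
d2 = (u(t + h) - 2 * u(t) + u(t - h)) / h ** 2
print(np.allclose(d2, A2 @ u(t), rtol=1e-6))   # True: d_t^2 u = A2 u
```

Only one of the $n$ fundamental solutions is produced this way, mirroring the remark that \eqref{hy-gen} gives one specific solution.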
The representation shown on the rightmost side of \eqref{hy-gen} is always valid for a certain $\kappa$, even if the fractional power $A_n^{1/n} $ of the operator is not well defined. \section{Summary} A recurrence formula for evolution equations of any order has been presented. It connects the first order evolution equation with the higher order evolution equations. By means of the logarithmic representation of operators, rigorous representations for the infinitesimal generators of evolution equations of any order are obtained. That is, \begin{itemize} \item introduction of a recurrence formula for obtaining a class of higher order equations: e.g., \[ \begin{array}{ll} \partial_t^2 u = \partial_t \partial_x u + \partial_x^{2} u \end{array} \] associated with $\partial_t u = \partial_x u$ (Eq.~\eqref{evo02} with $k=1$), where the introduced transform is represented by the recurrence formula generalizing the Miura transform (Theorem 1); \vspace{1.5mm} \item higher order generalization of the logarithmic representation of operators, in which the concept of the ``order of the differential operator with respect to $t$'' is reduced to the concept of the ``multiplicity of the operator product of infinitesimal generators'' (Theorem 2); \vspace{1.5mm} \item generalization of the Hille-Yosida type exponential function of an operator (Eq.~\eqref{hy-gen}) \end{itemize} have been achieved in this paper. The present discussion shows another aspect of the Miura transform, which originally transforms the solution of the ``first-order'' KdV equation to the solution of the ``first-order'' modified KdV equation.
\section{Introduction} \setcounter{equation}{0} Coulomb gauge Yang-Mills theory (and by extension Quantum Chromodynamics) is a fascinating, yet frustrating endeavor. On the one hand, Coulomb gauge offers great potential for understanding such issues as confinement \cite{Zwanziger:1998ez,Gribov:1977wm}; on the other, the intrinsic noncovariance of the formalism makes any perturbative calculation formidably complicated. Many approaches to solving (or providing reliable approximations to solving) the problems in Coulomb gauge have been put forward. Recent among these is the Hamiltonian approach of Ref.~\cite{Feuchter:2004mk}, based on the original work of Christ and Lee \cite{Christ:1980ku}. A lattice version of the Coulomb gauge action also exists \cite{Zwanziger:1995cv}, which has led to numerical studies, for example Refs.~\cite{Cucchieri:2000gu}. Functional methods based on the Lagrangian formalism have also been considered, especially within the first order (phase space) formalism \cite{Zwanziger:1998ez,Watson:2006yq} and most recently, one-loop perturbative results for both the ultraviolet divergent and finite parts of the various two-point functions have been obtained \cite{Watson:2007mz}. Similar results were previously obtained for the gluon propagator functions under a different formalism (using the chromoelectric field directly as a degree of freedom and without ghosts) and using different methods to evaluate the integrals \cite{Andrasi:2003zn}. In this paper, we consider the (standard, second order) functional approach to Coulomb gauge Yang-Mills theory. We derive the Dyson--Schwinger equations and Slavnov--Taylor identities for the two-point functions that arise in the construction and, using the techniques of \cite{Watson:2007mz}, we present results for the one-loop perturbative dressing functions. The paper is organized as follows. In the next section, the functional formalism used is described. Section~3 concerns the decomposition of the functions used. 
The (nonperturbative) Dyson--Schwinger equations and Slavnov--Taylor identities relating the various Green's functions are derived in Section~4. In Section~5, the one-loop perturbative results are obtained. Finally, there is a summary and outlook. \section{Functional Formalism} \setcounter{equation}{0} Let us begin by considering Coulomb gauge Yang-Mills theory. We use the framework of functional methods to derive the basic equations that will later give rise to the Dyson--Schwinger equations, Slavnov--Taylor identities, Feynman rules etc. Throughout this work, we will use the notation and conventions established in \cite{Watson:2006yq,Watson:2007mz}. We work in Minkowski space (until the perturbative integrals are to be explicitly evaluated) with metric $g_{\mu\nu}=\mathrm{diag}(1,-\vec{1})$. Greek letters ($\mu$, $\nu$, $\ldots$) denote Lorentz indices, roman subscripts ($i$, $j$, $\ldots$) denote spatial indices and superscripts ($a$, $b$, $\ldots$) denote color indices. We will sometimes also write configuration space coordinates ($x$, $y$, $\ldots$) as subscripts where no confusion arises. The Yang-Mills action is defined as \begin{equation} {\cal S}_{YM}=\int\dx{x}\left[-\frac{1}{4}F_{\mu\nu}^aF^{a\mu\nu}\right] \end{equation} where the (antisymmetric) field strength tensor $F$ is given in terms of the gauge field $A_{\mu}^a$: \begin{equation} F_{\mu\nu}^a =\partial_{\mu}A_{\nu}^a-\partial_{\nu}A_{\mu}^a+gf^{abc}A_{\mu}^bA_{\nu}^c. \end{equation} In the above, the $f^{abc}$ are the structure constants of the $SU(N_c)$ group whose generators obey $\left[T^a,T^b\right]=\imath f^{abc}T^c$. The Yang-Mills action is invariant under a local $SU(N_c)$ gauge transform characterized by the parameter $\th_x^a$: \begin{equation} U_x=\exp{\left\{-\imath\th_x^aT^a\right\}}. 
\end{equation} The field strength tensor can be expressed in terms of the chromoelectric and chromomagnetic fields ($\sigma=A^0$) \begin{equation} \vec{E}^a=-\partial^0\vec{A}^a-\vec{\nabla}\sigma^a+gf^{abc}\vec{A}^b\sigma^c,\;\;\;\; B_i^a=\epsilon_{ijk}\left[\nabla_jA_k^a-\frac{1}{2} gf^{abc}A_j^bA_k^c\right] \end{equation} such that ${\cal S}_{YM}=\int(E^2-B^2)/2$. The electric and magnetic terms in the action do not mix under the gauge transform which for the gauge fields is written \begin{equation} A_\mu\rightarrow A'_\mu =U_xA_\mu U_x^\dag-\frac{\imath}{g}(\partial_\mu U_x)U_x^\dag. \end{equation} Given an infinitesimal transform $U_x=1-\imath\th_x^aT^a$, the variation of the gauge field is \begin{equation} \delta A_{\mu}^a=-\frac{1}{g}\hat{D}_{\mu}^{ac}\th^c \end{equation} where the covariant derivative in the adjoint representation is given by \begin{equation} \hat{D}_{\mu}^{ac}=\delta^{ac}\partial_{\mu}+gf^{abc}A_{\mu}^b. \end{equation} Consider the functional integral \begin{equation} Z=\int{\cal D}\Phi\exp{\left\{\imath{\cal S}_{YM}\right\}} \end{equation} where $\Phi$ denotes the collection of all fields. Since the action is invariant under gauge transformations, $Z$ is divergent by virtue of the integration over the gauge group. To overcome this problem we use the Faddeev-Popov technique and introduce a gauge-fixing term along with an associated ghost term \cite{IZ}. Using a Lagrange multiplier field to implement the gauge-fixing, in Coulomb gauge ($\s{\div}{\vec{A}}=0$) we can then write \begin{equation} Z=\int{\cal D}\Phi\exp{\left\{\imath{\cal S}_{YM}+\imath{\cal S}_{fp}\right\}},\;\;\;\; {\cal S}_{fp}=\int d^4x\left[-\lambda^a\s{\vec{\nabla}}{\vec{A}^a} -\ov{c}^a\s{\vec{\nabla}}{\vec{D}^{ab}}c^b\right]. 
\end{equation} The new term in the action is invariant under the standard BRS transform whereby the infinitesimal gauge parameter $\th^a$ is factorized into two Grassmann-valued components $\th^a=c^a\delta\lambda$ where $\delta\lambda$ is the infinitesimal variation (not to be confused with the colored Lagrange multiplier field $\lambda^a$). The BRS transform of the new fields reads \begin{eqnarray} \delta\ov{c}^a&=&\frac{1}{g}\lambda^a\delta\lambda\nonumber\\ \delta c^a&=&-\frac{1}{2} f^{abc}c^bc^c\delta\lambda\nonumber\\ \delta\lambda^a&=&0. \end{eqnarray} It is at this point that this work diverges from Ref.~\cite{Watson:2006yq} in that we remain here within the standard (second order) formalism. By adding source terms to $Z$, we construct the generating functional, $Z[J]$: \begin{equation} Z[J]= \int{\cal D}\Phi\exp{\left\{\imath{\cal S}_{YM}+\imath{\cal S}_{fp}+\imath{\cal S}_s\right\}} \end{equation} where \begin{equation} {\cal S}_s=\int d^4x\left[\rho^a\sigma^a+\s{\vec{J}^a}{\vec{A}^a}+\ov{c}^a\eta^a +\ov{\eta}^ac^a+\xi^a\lambda^a\right]. \end{equation} It is convenient to introduce a compact notation for the sources and fields: we denote a generic field $\Phi_\alpha$ with source $J_\alpha$, where the index $\alpha$ stands for all attributes of the field in question (including its type), such that we can write \begin{equation} {\cal S}_s=J_\alpha\Phi_\alpha \end{equation} where summation over all discrete indices and integration over all continuous arguments is implicitly understood. Expanding the various terms we have explicitly \begin{eqnarray} {\cal S}_{YM}&=& \int d^4x\left\{-\frac{1}{2} A_i^f\left[\delta_{ij}\partial_0^2-\delta_{ij}\nabla^2 +\nabla_i\nabla_j\right]A_j^f-A_i^f\partial_0\nabla_i\sigma^f -\frac{1}{2}\sigma^f\nabla^2\sigma^f \right.\nonumber\\&&\left. 
+gf^{fbc}\left[-(\partial_0A_i^f)A_i^b\sigma^c-(\nabla_i\sigma^f)A_i^b\sigma^c +(\nabla_jA_k^f)A_j^bA_k^c\right] +g^2f^{fbc}f^{fde}\left[\frac{1}{2} A_i^b\sigma^cA_i^d\sigma^e -\frac{1}{4}A_i^bA_j^cA_i^dA_j^e\right]\right\}. \nonumber\\ \end{eqnarray} The field equations of motion are derived from the observation that the integral of a total derivative vanishes, up to boundary terms. The boundary terms vanish, although this is not trivial in the light of the Gribov problem \cite{Gribov:1977wm} (the reader is directed to Ref.~\cite{Watson:2006yq} and references therein for a discussion of this topic). Writing ${\cal S}={\cal S}_{YM}+{\cal S}_{fp}$, we have that \begin{equation} 0=\int{\cal D}\Phi\frac{\delta}{\delta\imath\Phi_\alpha} \exp{\left\{\imath{\cal S}+\imath{\cal S}_s\right\}}. \label{eq:eom0} \end{equation} The explicit form of the field equations of motion is given in Appendix~\ref{app:eom}. In addition to the field equations of motion, there exist identities derived by considering the BRS invariance of the action (these eventually form the Slavnov--Taylor identities). The BRS transform is continuous and we can regard it as a change of variables in the functional integral. Given that the Jacobian of such a change of variables is trivial and that the action is invariant, we have that \begin{eqnarray} 0&=&\int{\cal D}\Phi\frac{\delta}{\delta\imath\delta\lambda} \exp{\left\{\imath{\cal S}+\imath{\cal S}_s+\imath\delta{\cal S}_s\right\}}_{\delta\lambda=0} \nonumber\\ &=&\int{\cal D}\Phi\exp{\left\{\imath{\cal S}+\imath{\cal S}_s\right\}} \int d^4x\left[\frac{1}{g}\rho^a\partial_0c^a+f^{abc}\rho^a\sigma^bc^c -\frac{1}{g}J_i^a\nabla_ic^a+f^{abc}J_i^aA_i^bc^c+\frac{1}{g}\lambda^a\eta^a +\frac{1}{2} f^{abc}\ov{\eta}^ac^bc^c\right]. \nonumber\\ \end{eqnarray} So far, the generating functional, $Z[J]$, generates all Green's functions, connected and disconnected. The generating functional of connected Green's functions is $W[J]$ where \begin{equation} Z[J]=e^{W[J]}. 
\end{equation} We define the classical fields to be \begin{equation} \Phi_\alpha=\frac{1}{Z}\int{\cal D}\Phi\,\Phi_\alpha\exp{\imath{\cal S}} =\frac{1}{Z}\frac{\delta Z}{\delta\imath J_\alpha}. \end{equation} The generating functional of proper Green's functions is the effective action, $\Gamma$, which is a function of the classical fields and is defined through a Legendre transform of $W$: \begin{equation} \Gamma[\Phi]=W[J]-\imath J_\alpha\Phi_\alpha. \end{equation} We introduce a bracket notation for derivatives of $W$ with respect to sources and of $\Gamma$ with respect to classical fields (no confusion arises since the two sets of derivatives are never mixed): \begin{equation} \ev{\imath J_\alpha}=\frac{\delta W}{\delta\imath J_\alpha},\;\;\;\; \ev{\imath\Phi_\alpha}=\frac{\delta\Gamma}{\delta\imath\Phi_\alpha}. \end{equation} It is now possible to present the field equations of motion in terms of proper functions (the Dyson--Schwinger equations are functional derivatives of these equations). 
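In this notation it may help to record the standard consequences of the Legendre transform: differentiating $\Gamma[\Phi]=W[J]-\imath J_\alpha\Phi_\alpha$ with respect to the classical fields, the chain-rule terms involving $\delta J_\beta/\delta\Phi_\alpha$ cancel, leaving
\[
\ev{\imath J_\alpha}=\Phi_\alpha,\qquad\ev{\imath\Phi_\alpha}=-J_\alpha,
\]
so that derivatives of $\Gamma$ return the sources expressed as functionals of the classical fields.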
Using the results listed in Appendix~\ref{app:eom} we have: \begin{eqnarray} \ev{\imath A_{ix}^a}&=& -\left[\delta_{ij}\partial_{0x}^2-\delta_{ij}\nabla_x^2+\nabla_{ix}\nabla_{jx}\right] A_{jx}^a-\partial_{0x}\nabla_{ix}\sigma_x^a+\nabla_{ix}\lambda_x^a \nonumber\\ &&+gf^{abc}\int\dx{y}\dx{z}\partial_{0x}\delta(y-x)\delta(z-x) \left[\ev{\imath J_{iy}^b\imath\rho_z^c}+A_{iy}^b\sigma_z^c\right] \nonumber\\ &&-gf^{fac}\int\dx{y}\dx{z}\delta(z-x)\nabla_{ix}\delta(y-x) \left[\ev{\imath\rho_y^f\imath\rho_z^c} +\ev{\imath\ov{\eta}_z^c\imath\eta_y^f}+\sigma_y^f\sigma_z^c+\ov{c}_y^fc_z^c\right] \nonumber\\ &&+gf^{abc}\int\dx{y}\dx{z} \left[\delta_{ij}\delta(z-x)\nabla_{kx}\delta(y-x) +\delta_{jk}\delta(y-x)\nabla_{ix}\delta(z-x) -\delta_{ki}\nabla_{jx}\delta(y-x)\delta(z-x)\right]\times \nonumber\\ &&\left[\ev{\imath J_{jy}^b\imath J_{kz}^c}+A_{jy}^bA_{kz}^c\right] \nonumber\\ &&+g^2f^{fac}f^{fde}\left[\ev{\imath\rho_x^c\imath J_{ix}^d\imath\rho_x^e} +\sigma_x^c\ev{\imath J_{ix}^d\imath\rho_x^e} +\sigma_x^e\ev{\imath\rho_x^c\imath J_{ix}^d} +A_{ix}^d\ev{\imath\rho_x^c\imath\rho_x^e}+\sigma_x^cA_{ix}^d\sigma_x^e\right] \nonumber\\ &&-\frac{1}{4}g^2f^{fbc}f^{fde}\delta_{jk}\delta_{il} \left[\delta^{gc}\delta^{eh}(\delta^{ab}\delta^{di}+\delta^{ad}\delta^{bi}) +\delta^{bg}\delta^{dh}(\delta^{ac}\delta^{ei}+\delta^{ae}\delta^{ci})\right]\times \nonumber\\ &&\left[\ev{\imath J_{jx}^g\imath J_{kx}^h\imath J_{lx}^i} +A_{jx}^g\ev{\imath J_{kx}^h\imath J_{lx}^i} +A_{lx}^i\ev{\imath J_{jx}^g\imath J_{kx}^h} +A_{kx}^h\ev{\imath J_{jx}^g\imath J_{lx}^i} +A_{jx}^gA_{kx}^hA_{lx}^i\right], \label{eq:adse0}\\ \ev{\imath\sigma_x^a}&=& -\partial_{0x}\nabla_{ix}A_{ix}^a-\nabla_x^2\sigma_x^a -gf^{fba}\int\dx{y}\dx{z}\delta(z-x)\partial_{0x}\delta(y-x) \left[\ev{\imath J_{iy}^f\imath J_{iz}^b}+A_{iy}^fA_{iz}^b\right] \nonumber\\ &&+gf^{abc}\int\dx{y}\dx{z}\left[\nabla_{ix}\delta(y-x)\delta(z-x) +\delta(y-x)\nabla_{ix}\delta(z-x)\right]\left[\ev{\imath J_{iy}^b\imath\rho_z^c} 
+A_{iy}^b\sigma_z^c\right] \nonumber\\ &&+g^2f^{fba}f^{fde}\left[\ev{\imath J_{ix}^b\imath J_{ix}^d\imath\rho_x^e} +A_{ix}^b\ev{\imath J_{ix}^d\imath\rho_x^e} +\sigma_x^e\ev{\imath J_{ix}^b\imath J_{ix}^d} +A_{ix}^d\ev{\imath J_{ix}^b\imath\rho_x^e}+A_{ix}^bA_{ix}^d\sigma_x^e\right], \label{eq:sidse0}\\ \ev{\imath\lambda_x^a}&=&-\nabla_{ix}A_{ix}^a, \label{eq:ladse0}\\ \ev{\imath\ov{c}_x^a}&=&-\nabla_x^2c_x^a +gf^{abc}\int\dx{y}\dx{z}\nabla_{ix}\delta(y-x)\delta(z-x) \left[\ev{\imath J_{iy}^b\imath\ov{\eta}_z^c}+A_{iy}^bc_z^c\right]. \label{eq:ghdse0} \end{eqnarray} It is also useful to express the $\lambda$ equation of motion in terms of connected functions: \begin{equation} \xi_x^a=\nabla_{ix}\ev{\imath J_{ix}^a}. \label{eq:xidse0} \end{equation} The identity stemming from the BRS invariance is also best expressed in terms of both connected and proper functions and reads: \begin{eqnarray} 0&=&\int\dx{x}\left\{\frac{1}{g}\eta_x^a\ev{\imath\xi_x^a} +\frac{1}{g}\rho_x^a\partial_{0x}\ev{\imath\ov{\eta}_x^a} +f^{abc}\rho_x^a\left[\ev{\imath\rho_x^b\imath\ov{\eta}_x^c} +\ev{\imath\rho_x^b}\ev{\imath\ov{\eta}_x^c}\right] -\frac{1}{g}\left[\frac{\nabla_{ix}}{(-\nabla_x^2)}J_{ix}^a\right]\eta_x^a \right.\nonumber\\&&\left. +f^{abc}J_{ix}^at_{ij}(x)\left[\ev{\imath J_{jx}^b\imath\ov{\eta}_x^c} +\ev{\imath J_{jx}^b}\ev{\imath\ov{\eta}_x^c}\right] +\frac{1}{2} f^{abc}\ov{\eta}_x^a\left[\ev{\imath\ov{\eta}_x^b\imath\ov{\eta}_x^c} +\ev{\imath\ov{\eta}_x^b}\ev{\imath\ov{\eta}_x^c}\right]\right\}, \label{eq:jstid0} \\ 0&=&\int\dx{x}\left\{-\frac{1}{g}\ev{\imath\ov{c}_x^a}\lambda_x^a -\frac{1}{g}\ev{\imath\sigma_x^a}\partial_{0x}c_x^a -f^{abc}\ev{\imath\sigma_x^a}\left[\ev{\imath\rho_x^b\imath\ov{\eta}_x^c} +\sigma_x^bc_x^c\right] -\frac{1}{g}\left[\frac{\nabla_{ix}}{(-\nabla_x^2)} \ev{\imath A_{ix}^a}\right]\ev{\imath\ov{c}_x^a} \right.\nonumber\\&&\left. 
-f^{abc}\ev{\imath A_{ix}^a}t_{ij}(x) \left[\ev{\imath J_{jx}^b\imath\ov{\eta}_x^c}+A_{jx}^bc_x^c\right] +\frac{1}{2} f^{abc}\ev{\imath c_x^a}\left[\ev{\imath\ov{\eta}_x^b\imath\ov{\eta}_x^c} +c_x^bc_x^c\right]\right\}, \label{eq:stid0} \end{eqnarray} where we have used the common trick of using the ghost equation of motion in order to reexpress one of the interaction terms transversely, with the transverse projector in configuration space being $t_{ij}(x)=\delta_{ij}+\nabla_{ix}\nabla_{jx}/(-\nabla_x^2)$. This manipulation will be useful when we consider the Slavnov--Taylor identities for the two-point functions later on. At this stage it is useful to explore some consequences of the above equations that lead to exact statements about the Green's functions. Introducing our conventions and notation for the Fourier transform, we have for a general two-point function (connected or proper) which obeys translational invariance: \begin{eqnarray} \ev{\imath J_{\alpha}(y)\imath J_\beta(x)}&=& \ev{\imath J_\alpha(y-x)\imath J_\beta(0)} =\int\dk{k}W_{\alpha\beta}(k)e^{-\imath k\cdot(y-x)}, \nonumber\\ \ev{\imath\Phi_{\alpha}(y)\imath\Phi_\beta(x)}&=& \ev{\imath\Phi_\alpha(y-x)\imath\Phi_\beta(0)} =\int\dk{k}\Gamma_{\alpha\beta}(k)e^{-\imath k\cdot(y-x)}, \end{eqnarray} where $\dk{k}=d^4k/(2\pi)^4$. Starting with \eq{eq:ladse0}, we have that the only non-zero functional derivative is \begin{equation} \ev{\imath A_{jy}^b\imath\lambda_x^a}=\imath\delta^{ba}\nabla_{jx}\delta(y-x) =\delta^{ba}\int\dk{k}k_je^{-\imath k\cdot(y-x)} \end{equation} and all other proper Green's functions involving derivatives with respect to the $\lambda$-field vanish (even in the presence of sources). In terms of connected Green's functions, \eq{eq:ladse0} becomes \eq{eq:xidse0} and the only non-zero functional derivative is \begin{equation} \nabla_{ix}\ev{\imath\xi_y^b\imath J_{ix}^a}=-\imath\delta^{ba}\delta(y-x). 
\end{equation} Because \eq{eq:xidse0} involves the contraction of a vector quantity, the information is less restricted than previously. However, we can write down the following (true once sources have been set to zero such that the tensor structure is determined): \begin{eqnarray} \ev{\imath J_{jy}^b\imath J_{ix}^a}&=& \int\dk{k}W_{AA}^{ba}(k)t_{ij}(\vec{k})e^{-\imath k\cdot(y-x)}, \nonumber\\ \ev{\imath\xi_y^b\imath J_{ix}^a}&=& \delta^{ba}\int\dk{k}\frac{k_i}{\vec{k}^2}e^{-\imath k\cdot(y-x)}, \nonumber\\ \ev{\imath\rho_y^b\imath J_{ix}^a}&=&0, \end{eqnarray} where $t_{ji}(\vec{k})=\delta_{ji}-k_jk_i/\vec{k}^2$ is the transverse projector in momentum space. These relations encode the transverse nature of the vector gluon field. Turning to \eq{eq:jstid0}, we recognize that if we functionally differentiate with respect to $\imath\eta_y^d$, again with respect to $\imath\xi_z^e$ and set sources to zero, we get that \begin{equation} \ev{\imath\xi_z^e\imath\xi_y^d}=0. \end{equation} In effect, the auxiliary Lagrange multiplier field $\lambda$ drops out of the formalism to be replaced by the transversality conditions, as it is supposed to. \section{Feynman Rules and Decompositions} \setcounter{equation}{0} Let us now discuss the Feynman rules and general decompositions of Green's functions that will be relevant to this work. The Feynman rules for the propagators can be derived from the field equations of motion (written in Appendix~\ref{app:eom}) by neglecting the interaction terms and functionally differentiating. 
Denoting the tree-level quantities with a superscript $(0)$, the corresponding equations read: \begin{eqnarray} J_{ix}^a&=& \left[\delta_{ij}\partial_{0x}^2-\delta_{ij}\nabla_x^2+\nabla_{ix}\nabla_{jx}\right] \ev{\imath J_{jx}^a}^{(0)}+\partial_{0x}\nabla_{ix}\ev{\imath\rho_x^a}^{(0)} -\nabla_{ix}\ev{\imath\xi_x^a}^{(0)}, \nonumber\\ \rho_x^a&=&\partial_{0x}\nabla_{ix}\ev{\imath J_{ix}^a}^{(0)} +\nabla_x^2\ev{\imath\rho_x^a}^{(0)}, \nonumber\\ \eta_x^a&=&\nabla_x^2\ev{\imath\ov{\eta}_x^a}^{(0)}. \end{eqnarray} The tree-level ghost propagator is then \begin{equation} \ev{\imath\ov{\eta}_x^a\imath\eta_y^b}^{(0)} =-\imath\delta^{ab}\int\dk{k}\frac{1}{\vec{k}^2}e^{-\imath k\cdot(y-x)} \end{equation} and we identify the momentum space propagator as \begin{equation} W_c^{(0)ab}(k)=-\delta^{ab}\frac{\imath}{\vec{k}^2}. \end{equation} The rest of the propagators follow a similar pattern and their momentum space forms (without the common color factor $\delta^{ab}$) are given in Table~\ref{tab:w0}. Note that it is understood that the denominator factors involving both temporal and spatial components implicitly carry the relevant Feynman prescription, i.e., \begin{equation} \frac{1}{\left(k_0^2-\vec{k}^2\right)} \rightarrow\frac{1}{\left(k_0^2-\vec{k}^2+\imath0_+\right)}, \end{equation} such that the integration over the temporal component can be analytically continued to Euclidean space. It is also useful to repeat this analysis for the proper two-point functions and using the tree-level components of Eqs.~(\ref{eq:adse0}), (\ref{eq:sidse0}) and (\ref{eq:ghdse0}) we have \begin{eqnarray} \ev{\imath A_{ix}^a}^{(0)}&=&-\left[\delta_{ij}\partial_{0x}^2-\delta_{ij}\nabla_x^2 +\nabla_{ix}\nabla_{jx}\right]A_{jx}^a-\partial_{0x}\nabla_{ix}\sigma_x^a +\nabla_{ix}\lambda_x^a, \nonumber\\ \ev{\imath\sigma_x^a}^{(0)}&=&-\partial_{0x}\nabla_{ix}A_{ix}^a-\nabla_x^2\sigma_x^a, \nonumber\\ \ev{\imath\ov{c}_x^a}^{(0)}&=&-\nabla_x^2c_x^a. 
\end{eqnarray} The ghost proper two-point function in momentum space is \begin{equation} \Gamma_c^{(0)ab}(k)=\delta^{ab}\imath\vec{k}^2 \end{equation} and the rest are presented (without color factors) in Table~\ref{tab:w0}. It is immediately apparent that the gluon polarization is \emph{not} transverse in contrast to Landau gauge. \begin{table} \begin{tabular}{|c|c|c|c|}\hline $W^{(0)}$&$A_j$&$\sigma$&$\lambda$ \\\hline\rule[-2.4ex]{0ex}{5.5ex} $A_i$&$t_{ij}(\vec{k})\frac{\imath}{(k_0^2-\vec{k}^2)}$&$\underline{0}$&$ \underline{\frac{(-k_i)}{\vec{k}^2}}$ \\\hline\rule[-2.4ex]{0ex}{5.5ex} $\sigma$&$\underline{0}$&$\frac{\imath}{\vec{k}^2}$&$\frac{(-k^0)}{\vec{k}^2}$ \\\hline\rule[-2.4ex]{0ex}{5.5ex} $\lambda$&$\underline{\frac{k_j}{\vec{k}^2}}$&$\frac{k^0}{\vec{k}^2}$&$ \underline{0}$ \\\hline \end{tabular} \hspace{1cm} \begin{tabular}{|c|c|c|c|}\hline $\Gamma^{(0)}$&$A_j$&$\sigma$&$\lambda$ \\\hline\rule[-2.4ex]{0ex}{5.5ex} $A_i$&$-\imath k_0^2\delta_{ij}+\imath\vec{k}^2t_{ij}(\vec{k})$& $\imath k^0k_i$&$\underline{k_i}$ \\\hline\rule[-2.4ex]{0ex}{5.5ex} $\sigma$&$\imath k^0k_j$&$-\imath\vec{k}^2$&$\underline{0}$ \\\hline\rule[-2.4ex]{0ex}{5.5ex} $\lambda$&$\underline{-k_j}$&$\underline{0}$&$\underline{0}$ \\\hline \end{tabular} \caption{\label{tab:w0}Tree-level propagators [left] and two-point proper functions [right] (without color factors) in momentum space. Underlined entries denote exact results.} \end{table} The tree-level vertices are determined by taking the various interaction terms of Eqs.~(\ref{eq:adse0}-\ref{eq:ghdse0}) and functionally differentiating. Since, in this study, we are interested only in the eventual one-loop perturbative results we omit the tree-level four-point functions ($\Gamma_{4A}$ and $\Gamma_{AA\sigma\si}$). 
Defining all momenta as incoming, we have: \begin{eqnarray} \Gamma_{\sigma AAjk}^{(0)abc}(p_a,p_b,p_c)&=&\imath gf^{abc}\delta_{jk}(p_b^0-p_c^0), \nonumber\\ \Gamma_{\sigma A\sigma j}^{(0)abc}(p_a,p_b,p_c)&=&-\imath gf^{abc}(p_a-p_c)_j, \nonumber\\ \Gamma_{3A ijk}^{(0)abc}(p_a,p_b,p_c)&=& -\imath gf^{abc} \left[\delta_{ij}(p_a-p_b)_k+\delta_{jk}(p_b-p_c)_i+\delta_{ki}(p_c-p_a)_j\right], \nonumber\\ \Gamma_{\ov{c}cA i}^{(0)abc}(p_{\ov{c}},p_c,p_A)&=&-\imath gf^{abc}p_{\ov{c}i}. \end{eqnarray} In addition to the tree-level expressions for the various two-point functions (connected and proper) it is necessary to consider their general nonperturbative structures. These structures are determined by considering the properties of the fields under the discrete transforms of time-reversal and parity (the noncovariant analogue of Lorentz invariance arguments for covariant gauges). Using the same techniques as in Ref.~\cite{Watson:2006yq} we can easily write down the results in momentum space. For the ghost, we have \begin{equation} W_c^{ab}(k)=-\delta^{ab}\frac{\imath}{\vec{k}^2}D_c(\vec{k}^2),\;\;\;\; \Gamma_c^{ab}(k)=\delta^{ab}\imath\vec{k}^2\Gamma_c(\vec{k}^2) \end{equation} and the rest are presented in Table~\ref{tab:decomp}. With the exception of the ghost, all dressing functions are scalar functions of \emph{two} independent variables, $k_0^2$ and $\vec{k}^2$. The ghost dressing functions are functions of $\vec{k}^2$ only for exactly the same reasons as in the first order formalism \cite{Watson:2006yq}. At tree-level, all dressing functions are unity. 
\begin{table} \begin{tabular}{|c|c|c|c|}\hline $W$&$A_j$&$\sigma$&$\lambda$ \\\hline\rule[-2.4ex]{0ex}{5.5ex} $A_i$&$t_{ij}(\vec{k})\frac{\imath}{(k_0^2-\vec{k}^2)}D_{AA}$&$0$& $\frac{(-k_i)}{\vec{k}^2}$ \\\hline\rule[-2.4ex]{0ex}{5.5ex} $\sigma$&$0$&$\frac{\imath}{\vec{k}^2}D_{\sigma\si}$& $\frac{(-k^0)}{\vec{k}^2}D_{\sigma\lambda}$ \\\hline\rule[-2.4ex]{0ex}{5.5ex} $\lambda$&$\frac{k_j}{\vec{k}^2}$&$\frac{k^0}{\vec{k}^2}D_{\sigma\lambda}$&$0$ \\\hline \end{tabular} \hspace{1cm} \begin{tabular}{|c|c|c|c|}\hline $\Gamma$&$A_j$&$\sigma$&$\lambda$ \\\hline\rule[-2.4ex]{0ex}{5.5ex} $A_i$&$-\imath(k_0^2-\vec{k}^2)t_{ij}(\vec{k})\Gamma_{AA} -\imath k_0^2\frac{k_ik_j}{\vec{k}^2}\ov{\Gamma}_{AA}$& $\imath k^0k_i\Gamma_{A\sigma}$&$k_i$ \\\hline\rule[-2.4ex]{0ex}{5.5ex} $\sigma$&$\imath k^0k_j\Gamma_{A\sigma}$&$-\imath\vec{k}^2\Gamma_{\sigma\si}$&$0$ \\\hline\rule[-2.4ex]{0ex}{5.5ex} $\lambda$&$-k_j$&$0$&$0$ \\\hline \end{tabular} \caption{\label{tab:decomp}General form of propagators [left] and two-point proper functions [right] (without color factors) in momentum space. All dressing functions are functions of $k_0^2$ and $\vec{k}^2$.} \end{table} The dressing functions for the propagators and two-point proper functions are related via the Legendre transform. The connection follows from \begin{equation} \frac{\delta\imath J_\beta}{\delta\imath J_\alpha}=\delta_{\alpha\beta} =-\imath\frac{\delta}{\delta\imath J_\alpha}\ev{\imath\Phi_\beta} =\frac{\delta\Phi_\gamma}{\delta\imath J_\alpha}\ev{\imath\Phi_\gamma\imath\Phi_\beta} =\ev{\imath J_\alpha\imath J_\gamma}\ev{\imath\Phi_\gamma\imath\Phi_\beta}. \label{eq:leg} \end{equation} (Recall here that there is an implicit summation over all discrete indices and integration over continuous variables labeled by $\gamma$.) 
Considering all the possibilities in turn, we find that \begin{eqnarray} D_{AA}&=&\Gamma_{AA}^{-1},\nonumber\\ D_{\sigma\si}&=&\Gamma_{\sigma\si}^{-1},\nonumber\\ D_c&=&\Gamma_c^{-1},\nonumber\\ D_{\sigma\lambda}&=&\Gamma_{A\sigma}\Gamma_{\sigma\si}^{-1}=\ov{\Gamma}_{AA}\Gamma_{A\sigma}^{-1}. \label{eq:rnd0} \end{eqnarray} Actually, whilst we have included $D_{\sigma\lambda}$ up to this point, since there is no vertex involving the $\lambda$-field this propagator will not directly play any role in the formalism. However, indirectly it does turn out to have a meaning as will be shown in the next section. \section{Dyson--Schwinger Equations and Slavnov--Taylor Identities} \setcounter{equation}{0} With the observation that \begin{equation} \frac{\delta}{\delta\imath\Phi_\beta}\ev{\imath J_\gamma\imath J_\alpha}= -\ev{\imath J_\gamma\imath J_\varepsilon}\ev{\imath\Phi_\varepsilon\imath\Phi_\beta\imath\Phi_\delta} \ev{\imath J_\delta\imath J_\alpha} \label{eq:leg1} \end{equation} [stemming from the Legendre transform and following from \eq{eq:leg}], the derivation of the Dyson--Schwinger equations becomes relatively straightforward. 
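For completeness, Eq.~(\ref{eq:leg1}) follows from Eq.~(\ref{eq:leg}) by applying $\delta/\delta\imath\Phi_\beta$ to the identity $\ev{\imath J_\gamma\imath J_\varepsilon}\ev{\imath\Phi_\varepsilon\imath\Phi_\delta}=\delta_{\gamma\delta}$, which gives
\[
0=\frac{\delta\ev{\imath J_\gamma\imath J_\varepsilon}}{\delta\imath\Phi_\beta}
\ev{\imath\Phi_\varepsilon\imath\Phi_\delta}
+\ev{\imath J_\gamma\imath J_\varepsilon}
\ev{\imath\Phi_\varepsilon\imath\Phi_\beta\imath\Phi_\delta};
\]
contracting with $\ev{\imath J_\delta\imath J_\alpha}$ and using the identity once more isolates the derivative of the propagator.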
Starting with \eq{eq:adse0}, omitting the terms that will not contribute at one-loop perturbatively and recognizing the tree-level vertices in configuration space, we have that \begin{eqnarray} \ev{\imath A_{ix}^a}&=& \imath\left[\delta_{ij}\partial_{0x}^2-\delta_{ij}\nabla_x^2 +\nabla_{ix}\nabla_{jx}\right]\imath A_{jx}^a +\imath\partial_{0x}\nabla_{ix}\imath\sigma_x^a-\imath\nabla_{ix}\imath\lambda_x^a \nonumber\\ &&-\int\dx{y}\dx{z}\Gamma_{\sigma AAij}^{(0)cab}(z,x,y) \left[\ev{\imath J_{jy}^b\imath\rho_z^c}-\imath A_{jy}^b\imath\sigma_z^c\right] -\int\dx{y}\dx{z}\frac{1}{2!}\Gamma_{\sigma A\sigma i}^{(0)cab}(z,x,y) \left[\ev{\imath\rho_y^b\imath\rho_z^c}-\imath\sigma_y^b\imath\sigma_z^c\right] \nonumber\\ &&-\int\dx{y}\dx{z}\frac{1}{2!}\Gamma_{3Aijk}^{(0)abc}(x,y,z) \left[\ev{\imath J_{jy}^b\imath J_{kz}^c} -\imath A_{jy}^b\imath A_{kz}^c\right] +\int\dx{y}\dx{z}\Gamma_{\ov{c}cAi}^{(0)bca}(y,z,x) \left[\ev{\imath\ov{\eta}_z^c\imath\eta_y^b}+\imath c_z^c\imath c_y^b\right] \nonumber\\ &&+\ldots \end{eqnarray} Taking the functional derivative with respect to $\imath A_{lw}^f$, using \eq{eq:leg1}, setting sources to zero and Fourier transforming to momentum space (each step is straightforward so we omit the details for clarity) we get the Dyson--Schwinger equation for the gluon polarization: \begin{eqnarray} \Gamma_{AAil}^{af}(k)&=&\delta^{af}\left[-\imath(k_0^2-\vec{k}^2)\delta_{il} -\imath k_ik_l\right]\nonumber\\ &&+\int\dk{\omega}\Gamma_{\sigma AAij}^{(0)cab}(\omega-k,k,-\omega)W_{AAjm}^{bd}(\omega) \Gamma_{\sigma AAml}^{edf}(k-\omega,\omega,-k)W_{\sigma\si}^{ec}(\omega-k) \nonumber\\ &&+\frac{1}{2!}\int\dk{\omega}\Gamma_{\sigma A\sigma i}^{(0)cab}(\omega-k,k,-\omega) W_{\sigma\si}^{bd}(\omega)\Gamma_{\sigma A\sigma l}^{dfe}(\omega,-k,k-\omega)W_{\sigma\si}^{ec}(\omega-k) \nonumber\\ &&+\frac{1}{2!}\int\dk{\omega}\Gamma_{3Aijk}^{(0)abc}(k,-\omega,\omega-k)W_{AAjm}^{bd}(\omega) \Gamma_{3Amln}^{dfe}(\omega,-k,k-\omega)W_{AAnk}^{ec}(\omega-k) \nonumber\\ 
&&-\int\dk{\omega}\Gamma_{\ov{c}cAi}^{(0)bca}(\omega-k,-\omega,k)W_c^{cd}(\omega) \Gamma_{\ov{c}cAl}^{def}(\omega,k-\omega,-k)W_c^{eb}(\omega-k)+\ldots \label{eq:gldse1} \end{eqnarray} Turning now to \eq{eq:sidse0}, we have \begin{eqnarray} \ev{\imath\sigma_x^a}&=&\imath\partial_{0x}\nabla_{ix}\imath A_{ix}^a +\imath\nabla_x^2\imath\sigma_x^a -\int\dx{y}\dx{z}\frac{1}{2!}\Gamma_{\sigma AAjk}^{(0)abc}(x,y,z) \left[\ev{\imath J_{jy}^b\imath J_{kz}^c} -\imath A_{jy}^b\imath A_{kz}^c\right] \nonumber\\ &&-\int\dx{y}\dx{z}\Gamma_{\sigma A\sigma j}^{(0)abc}(x,y,z) \left[\ev{\imath J_{jy}^b\imath\rho_z^c}-\imath A_{jy}^b\imath\sigma_z^c\right] +\ldots \end{eqnarray} where again, terms that do not contribute at the one-loop perturbative level are omitted. There are two functional derivatives of interest, those with respect to $\imath\sigma_w^f$ and $\imath A_{lw}^f$, which give rise to the following two Dyson--Schwinger equations: \begin{eqnarray} \Gamma_{\sigma\si}^{af}(k)&=&\delta^{af}(-\imath\vec{k}^2)+\frac{1}{2!}\int\dk{\omega} \Gamma_{\sigma AAjk}^{(0)abc}(k,-\omega,\omega-k)W_{AAjm}^{bd}(\omega) \Gamma_{\sigma AAmn}^{fde}(-k,\omega,k-\omega)W_{AAnk}^{ec}(\omega-k) \nonumber\\ &&+\int\dk{\omega}\Gamma_{\sigma A\sigma j}^{(0)abc}(k,-\omega,\omega-k)W_{AAjm}^{bd}(\omega) \Gamma_{\sigma A\sigma m}^{fde}(-k,\omega,k-\omega)W_{\sigma\si}^{ec}(\omega-k)+\ldots \label{eq:sidse1} \\ \Gamma_{\sigma Al}^{af}(k)&=&\delta^{af}\imath k_0k_l+\frac{1}{2!}\int\dk{\omega} \Gamma_{\sigma AAjk}^{(0)abc}(k,-\omega,\omega-k)W_{AAjm}^{bd}(\omega) \Gamma_{3Amln}^{dfe}(\omega,-k,k-\omega)W_{AAnk}^{ec}(\omega-k) \nonumber\\ &&+\int\dk{\omega}\Gamma_{\sigma A\sigma j}^{(0)abc}(k,-\omega,\omega-k)W_{AAjm}^{bd}(\omega) \Gamma_{\sigma AAml}^{edf}(k-\omega,\omega,-k)W_{\sigma\si}^{ec}(\omega-k)+\ldots \label{eq:siadse1} \end{eqnarray} Next we consider the ghost equation, \eq{eq:ghdse0}, which can be written \begin{equation} \ev{\imath\ov{c}_x^a}=\imath\nabla_x^2\imath c_x^a+\int\dx{y}\dx{z} 
\Gamma_{\ov{c}cAi}^{(0)abc}(x,y,z)\left[\ev{\imath J_{iz}^c\imath\ov{\eta}_y^b} -\imath A_{iz}^c\imath c_y^b\right]. \end{equation} The ghost Dyson--Schwinger equation is subsequently \begin{equation} \Gamma_c^{af}(k)=\delta^{af}\imath\vec{k}^2+\int\dk{\omega} \Gamma_{\ov{c}cAi}^{(0)abc}(k,-\omega,\omega-k)W_c^{bd}(\omega) \Gamma_{\ov{c}cAj}^{dfe}(\omega,-k,k-\omega)W_{AAji}^{ec}(\omega-k). \label{eq:ghdse1} \end{equation} In addition to the Dyson--Schwinger equations, the Green's functions are constrained by Slavnov--Taylor identities. These are the functional derivatives of \eq{eq:stid0}. Since \eq{eq:stid0} is Grassmann-valued, we must first functionally differentiate with respect to $\imath c_y^d$. We are not interested (here) in further ghost correlations, so we can then set ghost sources to zero. Also, there is no further information to be gained by considering the Lagrange multiplier field $\lambda^a$, and we set its source to zero also. Equation~(\ref{eq:stid0}) then becomes \begin{eqnarray} \lefteqn{\frac{\imath}{g}\partial_{0y}\ev{\imath\sigma_y^d} -f^{abd}\ev{\imath\sigma_y^a}\imath\sigma_y^b -f^{abd}\imath A_{jy}^bt_{ji}(y)\ev{\imath A_{iy}^a}} \nonumber\\ &=&\int\dx{x}\left\{-f^{abc}\ev{\imath\sigma_x^a}\frac{\delta}{\delta\imath c_y^d} \ev{\imath\rho_x^b\imath\ov{\eta}_x^c}+\frac{1}{g} \left[\frac{\nabla_{ix}}{(-\nabla_x^2)}\ev{\imath A_{ix}^a}\right] \ev{\imath\ov{c}_x^a\imath c_y^d} -f^{abc}\ev{\imath A_{ix}^a}t_{ij}(x)\frac{\delta}{\delta\imath c_y^d} \ev{\imath J_{jx}^b\imath\ov{\eta}_x^c}\right\}.\nonumber\\ \label{eq:stid1} \end{eqnarray} Taking the functional derivatives of this with respect to $\imath\sigma_z^e$ or $\imath A_{kz}^e$ and setting all remaining sources to zero gives rise to the following two equations: \begin{eqnarray} \frac{\imath}{g}\partial_{0y}\ev{\imath\sigma_z^e\imath\sigma_y^d} &=&\int\dx{x}\left\{\frac{1}{g}\left[\frac{\nabla_{ix}}{(-\nabla_x^2)} \ev{\imath\sigma_z^e\imath A_{ix}^a}\right]\ev{\imath\ov{c}_x^a\imath c_y^d} 
\right.\nonumber\\&&\left. -f^{abc}\ev{\imath\sigma_z^e\imath\sigma_x^a}\frac{\delta}{\delta\imath c_y^d} \ev{\imath\rho_x^b\imath\ov{\eta}_x^c} -f^{abc}\ev{\imath\sigma_z^e\imath A_{ix}^a}t_{ij}(x) \frac{\delta}{\delta\imath c_y^d}\ev{\imath J_{jx}^b\imath\ov{\eta}_x^c}\right\}, \label{eq:stids1}\\ \frac{\imath}{g}\partial_{0y}\ev{\imath A_{kz}^e\imath\sigma_y^d} &=&\int\dx{x}\left\{\frac{1}{g}\left[\frac{\nabla_{ix}}{(-\nabla_x^2)} \ev{\imath A_{kz}^e\imath A_{ix}^a}\right]\ev{\imath\ov{c}_x^a\imath c_y^d} \right.\nonumber\\&&\left. -f^{abc}\ev{\imath A_{kz}^e\imath\sigma_x^a}\frac{\delta}{\delta\imath c_y^d} \ev{\imath\rho_x^b\imath\ov{\eta}_x^c} -f^{abc}\ev{\imath A_{kz}^e\imath A_{ix}^a}t_{ij}(x) \frac{\delta}{\delta\imath c_y^d}\ev{\imath J_{jx}^b\imath\ov{\eta}_x^c}\right\}. \label{eq:stida1} \end{eqnarray} Now, using \eq{eq:leg1}, we have that \begin{equation} f^{abc}\frac{\delta}{\delta\imath c_y^d}\ev{\imath\rho_x^b\imath\ov{\eta}_x^c}= -f^{abc}\ev{\imath\ov{\eta}_x^c\imath\eta_\alpha} \ev{\imath\ov{c}_\alpha\imath c_y^d\imath\Phi_\gamma} \ev{\imath J_\gamma\imath\rho_x^b}=\delta^{ad}\tilde{\Sigma}_{\sigma;\ov{c}c}(x,y). \end{equation} Taking the Fourier transform \begin{equation} \tilde{\Sigma}_{\sigma;\ov{c}c}(x,y)=\int\dk{k}\tilde{\Sigma}_{\sigma;\ov{c}c}(k) e^{-\imath k\cdot(x-y)} \end{equation} we get that \begin{equation} \tilde{\Sigma}_{\sigma;\ov{c}c}(k)=N_c\int\dk{\omega}W_c(k-\omega) \Gamma_{\ov{c}c\gamma}(k-\omega,-k,\omega)W_{\gamma\sigma}(\omega). \end{equation} Since the ghost Green's functions are independent of the ghost line's energy scale \cite{Watson:2006yq}, after $\omega_0$ has been integrated out, there is no external energy scale and \begin{equation} \tilde{\Sigma}_{\sigma;\ov{c}c}(k)=\tilde{\Sigma}_{\sigma;\ov{c}c}(\vec{k}). 
\label{eq:sicc0} \end{equation} However, under time-reversal the $\sigma$-field changes sign (such that the action remains invariant) which in momentum space means that under the transform $k_0\rightarrow-k_0$, $\tilde{\Sigma}_{\sigma;\ov{c}c}(k)$ must change sign and so, given \eq{eq:sicc0} we have the result that \begin{equation} \tilde{\Sigma}_{\sigma;\ov{c}c}(k)=0. \end{equation} In the case of the term \begin{equation} \delta^{af}\tilde{\Sigma}_{Aj;\ov{c}c}(x,y)= f^{abc}\frac{\delta}{\delta\imath c_y^d}\ev{\imath J_{jx}^b\imath\ov{\eta}_x^c} \end{equation} we can see automatically that in momentum space, $\tilde{\Sigma}_{Aj;\ov{c}c}(k)\sim k_j$ and that the transverse projector that acts on it in Eqs.~(\ref{eq:stids1}) and (\ref{eq:stida1}) will kill the term. We thus have \begin{eqnarray} \frac{\imath}{g}\partial_{0y}\ev{\imath\sigma_z^e\imath\sigma_y^d} &=&\int\dx{x}\left\{\frac{1}{g}\left[\frac{\nabla_{ix}}{(-\nabla_x^2)} \ev{\imath\sigma_z^e\imath A_{ix}^a}\right] \ev{\imath\ov{c}_x^a\imath c_y^d}\right\}, \label{eq:stids2}\\ \frac{\imath}{g}\partial_{0y}\ev{\imath A_{kz}^e\imath\sigma_y^d} &=&\int\dx{x}\left\{\frac{1}{g}\left[\frac{\nabla_{ix}}{(-\nabla_x^2)} \ev{\imath A_{kz}^e\imath A_{ix}^a}\right] \ev{\imath\ov{c}_x^a\imath c_y^d}\right\}, \label{eq:stida2} \end{eqnarray} which in terms of the momentum space dressing functions gives \begin{eqnarray} \Gamma_{\sigma\si}(k_0^2,\vec{k}^2)&=&\Gamma_{A\sigma}(k_0^2,\vec{k}^2)\Gamma_c(\vec{k}^2), \label{eq:stids3}\\ \Gamma_{A\sigma}(k_0^2,\vec{k}^2)&=&\ov{\Gamma}_{AA}(k_0^2,\vec{k}^2)\Gamma_c(\vec{k}^2). \label{eq:stida3} \end{eqnarray} The Slavnov--Taylor identities for the two-point functions above are rather revealing. They are the Coulomb gauge equivalent of the standard covariant gauge result that the longitudinal part of the gluon polarization remains bare \cite{Slavnov:1972fg}. 
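Combining the two identities, i.e. substituting \eq{eq:stida3} into \eq{eq:stids3}, eliminates $\Gamma_{A\sigma}$ and expresses the temporal dressing function through $\ov{\Gamma}_{AA}$ and the ghost dressing function alone: \begin{equation} \Gamma_{\sigma\si}(k_0^2,\vec{k}^2)=\ov{\Gamma}_{AA}(k_0^2,\vec{k}^2)\Gamma_c^2(\vec{k}^2). \end{equation}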
We notice that they relate the temporal, longitudinal and ghost degrees of freedom in a manner reminiscent of the quartet mechanism in the Kugo-Ojima confinement criterion \cite{Kugo:1979gm}. Also, they represent Gau\ss' law as applied to the Green's functions. Equation~(\ref{eq:stid1}) suggests that proper functions involving the temporal $\sigma$-field can be systematically eliminated and replaced by functions involving the vector $\vec{A}$ and ghost fields although whether this is desirable remains to be seen. We can now return to the general decompositions of the two-point functions. We see that as a consequence of either of the two Slavnov--Taylor identities above, Eqs.~(\ref{eq:stids3}) or (\ref{eq:stida3}), \eq{eq:rnd0} reduces to $D_{\sigma\lambda}=D_c$, reassuring us that at least the formalism is consistent. We also see that there are only three independent two-point dressing functions, whereas (accounting for the tensor structure of the gluon polarization) we have five Dyson--Schwinger equations. We will investigate this perturbatively in the next section. \section{One-Loop Perturbation Theory} \setcounter{equation}{0} Let us now consider the one-loop perturbative form of the two-point dressing functions that are derived from the Dyson--Schwinger equations. So far, all quantities are expressed in Minkowski space. The perturbative integrals must however be evaluated in Euclidean space. The analytic continuation to Euclidean space ($k_0\rightarrow\imath k_4$) is straightforward given the Feynman prescription for denominator factors. Henceforth, all dressing functions will be written in Euclidean space and are functions of $k_4^2$ and $\vec{k}^2$. The Euclidean four momentum squared is $k^2=k_4^2+\vec{k}^2$. We write the perturbative expansion of the two-point dressing functions as follows: \begin{equation} \Gamma_{\alpha\beta}=1+g^2\Gamma_{\alpha\beta}^{(1)}. 
\end{equation} The loop integrals will be dimensionally regularized with the (Euclidean space) integration measure \begin{equation} \dk{\omega}=\frac{d\omega_4\,d^d\vec{\omega}}{(2\pi)^{d+1}} \end{equation} (spatial dimension $d=3-2\varepsilon$). The coupling acquires a dimension: \begin{equation} g^2\rightarrow g^2\mu^\varepsilon, \end{equation} where $\mu$ is the square of some non-vanishing mass scale. This factor is included in $\Gamma_{\alpha\beta}^{(1)}$ such that the new coupling and $\Gamma^{(1)}$ are dimensionless. By inserting the appropriate tree-level factors into the Dyson--Schwinger equations and extracting the color and tensor algebra, we obtain the following integral expressions for the various two-point proper dressing functions: \begin{eqnarray} (d-1)\Gamma_{AA}^{(1)}(k_4^2,\vec{k}^2)&=& -N_c\int\frac{\mu^\varepsilon\dk{\omega}(k_4+\omega_4)^2}{k^2\omega^2(\vec{k}-\vec{\omega})^2} t_{ij}(\vec{\omega})t_{ji}(\vec{k}) -N_c\int\frac{\mu^\varepsilon\dk{\omega}}{k^2\vec{\omega}^2(\vec{k}-\vec{\omega})^2} \omega_i\omega_jt_{ji}(\vec{k}) \nonumber\\&& -2N_c\int\frac{\mu^\varepsilon\dk{\omega}}{k^2\omega^2(k-\omega)^2} t_{li}(\vec{k})t_{jm}(\vec{\omega})t_{nk}(\vec{k}-\vec{\omega}) \left[\delta_{ij}k_k-\delta_{jk}\omega_i-\delta_{ki}k_j\right] \left[\delta_{ml}k_n-\delta_{ln}k_m-\delta_{nm}\omega_l\right], \nonumber\\ \label{eq:dseaa0}\\ \ov{\Gamma}_{AA}^{(1)}(k_4^2,\vec{k}^2)&=& -N_c\int \frac{\mu^\varepsilon\dk{\omega}(k_4+\omega_4)^2}{k_4^2\vec{k}^2\omega^2(\vec{k}-\vec{\omega})^2} k_ik_jt_{ij}(\vec{\omega}) -N_c\int\frac{\mu^\varepsilon\dk{\omega}}{k_4^2\vec{k}^2\vec{\omega}^2(\vec{k}-\vec{\omega})^2} \left[\frac{1}{2}\s{\vec{k}}{(2\vec{\omega}-\vec{k})}^2 -\s{\vec{k}}{\vec{\omega}}\s{\vec{k}}{(\vec{\omega}-\vec{k})}\right] \nonumber\\&& -\frac{1}{2} N_c\int\frac{\mu^\varepsilon\dk{\omega}\s{\vec{k}}{(\vec{k}-2\vec{\omega})}^2}{k_4^2 \vec{k}^2\omega^2(\vec{k}-\vec{\omega})^2}t_{ij}(\vec{\omega})t_{ji}(\vec{k}-\vec{\omega}), \label{eq:dseovaa0}\\
\Gamma_{\sigma\si}^{(1)}(k_4^2,\vec{k}^2)&=& -\frac{1}{2} N_c\int\frac{\mu^\varepsilon\dk{\omega}(k_4-2\omega_4)^2}{\vec{k}^2\omega^2(k-\omega)^2} t_{ij}(\vec{\omega})t_{ji}(\vec{k}-\vec{\omega}) -4N_c\int\frac{\mu^\varepsilon\dk{\omega}}{\vec{k}^2\omega^2(\vec{k}-\vec{\omega})^2} k_ik_jt_{ij}(\vec{\omega}), \\ \Gamma_{A\sigma}^{(1)}(k_4^2,\vec{k}^2)&=& \frac{1}{2} N_c\int\frac{\mu^\varepsilon\dk{\omega}(k_4-2\omega_4)}{k_4\vec{k}^2\omega^2(k-\omega)^2} \s{\vec{k}}{(\vec{k}-2\vec{\omega})}t_{ij}(\vec{\omega})t_{ji}(\vec{k}-\vec{\omega}) -2N_c\int\frac{\mu^\varepsilon\dk{\omega}}{\vec{k}^2\omega^2(\vec{k}-\vec{\omega})^2} k_ik_jt_{ij}(\vec{\omega}), \\ \Gamma_c^{(1)}(\vec{k}^2)&=&-N_c\int\frac{\mu^\varepsilon\dk{\omega}}{\vec{k}^2\omega^2(\vec{k} -\vec{\omega})^2}k_ik_jt_{ij}(\vec{\omega}). \end{eqnarray} At this stage, we are in a position to check the two Slavnov--Taylor identities for the two-point functions. The first of these, \eq{eq:stids3}, reads at one-loop: \begin{equation} \Gamma_{\sigma\si}^{(1)}-\Gamma_{A\sigma}^{(1)}-\Gamma_c^{(1)}=0. \end{equation} Inserting the integral expressions above and eliminating overall constants, the left-hand side reads \begin{equation} \Gamma_{\sigma\si}^{(1)}-\Gamma_{A\sigma}^{(1)}-\Gamma_c^{(1)}\sim -\frac{1}{2}\int\frac{\dk{\omega}(k_4-2\omega_4)}{k_4\vec{k}^2\omega^2(k-\omega)^2}\s{k}{(k-2\omega)} t_{ij}(\vec{\omega})t_{ji}(\vec{k}-\vec{\omega}) -\int\frac{\dk{\omega}}{\vec{k}^2\omega^2(\vec{k}-\vec{\omega})^2} k_ik_jt_{ij}(\vec{\omega}). \end{equation} By expanding the transverse projectors and scalar products, it is relatively trivial to show that this does indeed vanish. 
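In verifying such cancellations, only two elementary contractions of the transverse projector (taken in its standard form, $t_{ij}(\vec{a})=\delta_{ij}-a_ia_j/\vec{a}^2$) are needed: \begin{eqnarray} k_ik_jt_{ij}(\vec{\omega})&=&\vec{k}^2-\frac{\s{\vec{k}}{\vec{\omega}}^2}{\vec{\omega}^2}, \nonumber\\ t_{ij}(\vec{\omega})t_{ji}(\vec{k}-\vec{\omega})&=&d-2 +\frac{\s{\vec{\omega}}{(\vec{k}-\vec{\omega})}^2}{\vec{\omega}^2(\vec{k}-\vec{\omega})^2}. \end{eqnarray}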
The second identity, \eq{eq:stida3}, reads \begin{equation} \Gamma_{A\sigma}^{(1)}-\ov{\Gamma}_{AA}^{(1)}-\Gamma_c^{(1)}=0 \end{equation} and the left-hand side is: \begin{eqnarray} \Gamma_{A\sigma}^{(1)}-\ov{\Gamma}_{AA}^{(1)}-\Gamma_c^{(1)}&\sim& \frac{1}{2}\int\frac{\dk{\omega}\,\s{\vec{k}}{(\vec{k}-2\vec{\omega})}}{\omega^2(k-\omega)^2} \s{k}{(k-2\omega)}t_{ij}(\vec{\omega})t_{ji}(\vec{k}-\vec{\omega}) +\int\frac{\dk{\omega}\,(\omega_4^2+2k_4\omega_4)}{\omega^2(\vec{k}-\vec{\omega})^2} k_ik_jt_{ij}(\vec{\omega}) \nonumber\\&& +\int\frac{\dk{\omega}}{\vec{\omega}^2(\vec{k}-\vec{\omega})^2} \left[\frac{1}{2}\s{\vec{k}}{(2\vec{\omega}-\vec{k})}^2-\s{\vec{k}}{\vec{\omega}} \s{\vec{k}}{(\vec{\omega}-\vec{k})}\right]. \end{eqnarray} Again, it is straightforward to show that this vanishes. Thus, we have reproduced the Slavnov--Taylor identity results that tell us that there are only three independent two-point dressing functions. The evaluation of the integrals that give $\Gamma_{AA}$, $\Gamma_{\sigma\si}$ and $\Gamma_c$ is far from trivial. However, using the techniques developed in \cite{Watson:2007mz} it is possible. For brevity, we do not go into the details here and simply quote the results. 
They are, as $\varepsilon\rightarrow0$: \begin{eqnarray} \Gamma_{AA}^{(1)}(x,y)&=& \frac{N_c}{(4\pi)^{2-\varepsilon}} \left\{-\left[\frac{1}{\varepsilon}-\gamma-\ln{\left(\frac{x+y}{\mu}\right)}\right] +\frac{64}{9}-3z+g(z)\left[\frac{1}{2z}-\frac{14}{3}+\frac{3}{2}z\right] -\frac{f(z)}{4}\left[\frac{1}{z}-1+11z-3z^2\right]\right\}, \nonumber\\ \Gamma_{\sigma\si}^{(1)}(x,y)&=& \frac{N_c}{(4\pi)^{2-\varepsilon}}\left\{ -\frac{11}{3}\left[\frac{1}{\varepsilon}-\gamma-\ln{\left(\frac{x+y}{\mu}\right)}\right] -\frac{31}{9}+6z+g(z)(1-3z)-f(z)\left[\frac{1}{2}+2z+\frac{3}{2}z^2\right]\right\}, \nonumber\\ \Gamma_{c}^{(1)}(y)&=&\frac{N_c}{(4\pi)^{2-\varepsilon}}\left\{ -\frac{4}{3}\left[\frac{1}{\varepsilon}-\gamma-\ln{\left(\frac{y}{\mu}\right)}\right] -\frac{28}{9}+\frac{8}{3}\ln{2}\right\}, \end{eqnarray} where $x=k_4^2$, $y=\vec{k}^2$, $z=x/y$ and we define two functions: \begin{eqnarray} f(z)&=&4\ln{2}\frac{1}{\sqrt{z}}\arctan{\sqrt{z}} -\int_0^1\frac{dt}{\sqrt{t}(1+zt)}\ln{(1+zt)},\nonumber\\ g(z)&=&2\ln{2}-\ln{(1+z)}. \end{eqnarray} (The integral occurring in $f(z)$ can be explicitly evaluated in terms of dilogarithms \cite{Watson:2007mz}.) Defining a similar notation for the perturbative expansion of the propagator functions: \begin{equation} D_{\alpha\beta}=1+g^2D_{\alpha\beta}^{(1)} \end{equation} we then have, via \eq{eq:rnd0}, the final results: \begin{equation} D_{AA}^{(1)}(x,y)=-\Gamma_{AA}^{(1)}(x,y),\;\;\;\; D_{\sigma\si}^{(1)}(x,y)=-\Gamma_{\sigma\si}^{(1)}(x,y),\;\;\;\; D_c^{(1)}(y)=-\Gamma_c^{(1)}(y). \end{equation} Several comments are in order here. Firstly, the expressions for $\Gamma_{AA}$ and $\ov{\Gamma}_{AA}$, Eqs.~(\ref{eq:dseaa0}) and (\ref{eq:dseovaa0}), respectively, contain energy divergent integrals of the form \begin{equation} \int\frac{\dk{\omega}\, \left\{1,\omega_i,\omega_i\omega_j\right\}}{\vec{\omega}^2(\vec{k}-\vec{\omega})^2}. 
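The functions $f(z)$ and $g(z)$ can be evaluated numerically straight from the definitions above. A minimal stdlib-only sketch follows; the substitution $t=u^2$, which removes the integrable $1/\sqrt{t}$ endpoint singularity, and the midpoint quadrature are our own choices, not taken from the text.

```python
import math

def g(z):
    # g(z) = 2 ln 2 - ln(1 + z)
    return 2.0 * math.log(2.0) - math.log1p(z)

def f(z, n=20000):
    # f(z) = 4 ln2 arctan(sqrt z)/sqrt z - int_0^1 dt ln(1+zt)/(sqrt(t)(1+zt));
    # substituting t = u^2 turns the integral into
    # int_0^1 du 2 ln(1+z u^2)/(1+z u^2), evaluated by the midpoint rule
    if z == 0.0:
        return 4.0 * math.log(2.0)  # arctan(sqrt z)/sqrt z -> 1 as z -> 0
    s = math.sqrt(z)
    lead = 4.0 * math.log(2.0) * math.atan(s) / s
    h = 1.0 / n
    quad = sum(2.0 * math.log1p(z * ((i + 0.5) * h) ** 2)
               / (1.0 + z * ((i + 0.5) * h) ** 2) for i in range(n))
    return lead - quad * h
```

Both functions are smooth for Euclidean ratios $z=x/y\ge 0$, with $f(0)=4\ln 2$ and $g(0)=2\ln 2$.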
\end{equation} These integrals cancel explicitly, though it should be remarked that this cancellation is more obvious in the first order formalism \cite{Watson:2007mz}. Secondly, with respect to the temporal variable $x$, all the results above are strictly finite for Euclidean and spacelike Minkowski momenta -- any singularities occur for $z=x/y=-1$ (the light-cone) with branch cuts extending in the timelike direction. This means that the analytic continuation between Euclidean and Minkowski space can be justified. Thirdly, the coefficient of the $\varepsilon$-pole for $D_{\sigma\si}$ and the combination $D_{AA}D_c^2$ is $11N_c/[3(4\pi)^2]$, which is minus the first coefficient of the $\beta$-function. This confirms that $g^2D_{\sigma\si}$ \cite{Niegawa:2006ey} and $g^2D_{AA}D_c^2$ (the Coulomb gauge analogue of the Landau gauge nonperturbative running coupling) are renormalization group invariants at this order in perturbation theory. Fourthly, the results above for $D_{AA}$, $D_{\sigma\si}$ and $D_c$ are identical to those calculated within the first order formalism \cite{Watson:2007mz}. \section{Summary and Outlook} \setcounter{equation}{0} The two-point functions (connected and proper) of Coulomb gauge Yang-Mills theory have been considered within the standard, second order formalism. Functional methods have been used to derive the relevant Dyson--Schwinger equations and Slavnov--Taylor identities. One-loop perturbative results have been presented and the Slavnov--Taylor identities that concern them verified. It is no surprise that the situation in Coulomb gauge is somewhat different from that in covariant gauges such as Landau gauge. The proper $\vec{A}$-$\vec{A}$ two-point function is explicitly not transverse, nor does its longitudinal component remain bare beyond tree-level. This longitudinal component can however be written in terms of the temporal gluon and ghost two-point functions via the Slavnov--Taylor identities.
Indeed, the Slavnov--Taylor identities show that there are only three independent two-point dressing functions: the (transverse) spatial gluon propagator dressing function ($D_{AA}$), the temporal gluon propagator dressing function ($D_{\sigma\si}$) and the ghost propagator dressing function ($D_c$). With the exception of the ghost dressing function, all are noncovariantly expressed in terms of two variables: $k_4^2$ (or $k_0^2$ in Minkowski space) and $\vec{k}^2$. Perturbatively it is seen that the analytic continuation between Euclidean and Minkowski space (and vice versa) is valid and that the Slavnov--Taylor identities hold. There are many further questions to be addressed. The perturbative structure of the vertex functions, the addition of the quark sector and the construction of physical scattering matrix elements from noncovariant components are all important next steps. The issue of noncovariant renormalization prescriptions must also be understood. The connection of the functional formalism with other approaches such as the Hamiltonian formalism \cite{Feuchter:2004mk} and lattice calculations must also be established. Clearly, there is a lot of work yet to be done. \begin{acknowledgments} This work has been supported by the Deutsche Forschungsgemeinschaft (DFG) under contracts no. DFG-Re856/6-1 and DFG-Re856/6-2. \end{acknowledgments}
\section{Introduction} \label{introduction} Fraud is encountered in a variety of domains. It comes in all different shapes and sizes, from traditional fraud, e.g. (simple) tax cheating, to more sophisticated forms, where entire \textit{groups} of individuals collaborate in order to commit fraud. Such groups can be found in the automobile insurance domain. Here fraudsters stage traffic accidents and issue fake insurance claims to gain (unjustified) funds from their general or vehicle insurance. There are also cases where an accident has never occurred, and the vehicles have only been placed onto the road. Still, the majority of such fraud is not planned (\textit{opportunistic fraud}) \textendash\space an individual only seizes the opportunity arising from the accident and issues exaggerated insurance claims or claims for past damages. Staged accidents have several common characteristics. They occur in late hours and non-urban areas in order to reduce the probability of witnesses. Drivers are usually younger males, there are many passengers in the vehicles, but never children or elders. The police are always called to the scene to make the subsequent acquisition of means easier. It is also not uncommon that all of the participants have multiple (serious) injuries, even when there is almost no damage on the vehicles. Many other suspicious characteristics exist that are not mentioned here. The insurance companies place the most interest in organized groups of fraudsters consisting of drivers, chiropractors, garage mechanics, lawyers, police officers, insurance workers and others. Such groups represent the majority of revenue leakage. Most of the analyses agree that approximately $20\%$ of all insurance claims are in some way fraudulent (various resources). But most of these claims go unnoticed, as fraud investigation is usually done by hand by the domain expert or investigator and is only rarely computer supported.
Inappropriate representation of data is also common, making the detection of groups of fraudsters extremely difficult. An expert system approach is thus needed. \citet{Jen97} has observed several technical difficulties in detecting fraud (across various domains). Most hold for (automobile) insurance fraud as well. Firstly, only a small portion of accidents or participants is fraudulent (\textit{skewed class distribution}), making them extremely difficult to detect. Next, there is a severe lack of \textit{labeled} data sets, as labeling is expensive and time-consuming. Besides, due to the sensitivity of the domain, there is even a lack of unlabeled data sets. Any approach for detecting such fraud should thus be founded on moderate resources (data sets) in order to be applicable in practice. Fraudsters are very innovative and new types of fraud emerge constantly. Hence, the approach must also be highly adaptable, detecting new types of fraud as soon as they are noticed. Lastly, it holds that fully autonomous detection of automobile insurance fraud is not possible in practice. Final assessment of potential fraud can only be made by the domain expert or investigator, who also determines further actions in resolving it. The approach should also support this investigation process. Due to everything mentioned above, the set of approaches for detecting such fraud is extremely limited. We propose a novel expert system approach for detection and subsequent investigation of automobile insurance fraud. The system is focused on detection of groups of collaborating fraudsters, and their connecting accidents (non-opportunistic fraud), and not on isolated fraudulent entities. The latter should be done independently for each particular entity, while in our system, the entities are assessed in a way that also considers the relations between them. This is done with appropriate representation of the domain \textendash\space networks.
Networks are the most natural representation of any relational domain, allowing formulation of complex relations between entities. They also represent the main advantage of our system over other approaches that use a standard \textit{flat data} form. As collaborating fraudsters are usually related to each other in various ways, detection of groups of fraudsters is only possible with appropriate representation of data. Networks also provide clear visualization of the assessment, crucial for the subsequent investigation process. The system assesses the entities using a novel \textit{Iterative Assessment Algorithm} (\textit{IAA} algorithm), presented in this article. No learning from an initial labeled data set is done; rather, the system allows simple incorporation of the domain knowledge. This makes it applicable in practice and allows detection of new types of fraud as soon as they are encountered. The system can be used with poor data sets, which is often the case in practice. To simulate realistic conditions, the discussion in the article and evaluation with the prototype system rely only on the data and entities found in the police record of the accident (main entities are participant, vehicle, collision\footnote{Throughout the article the term collision is used instead of (traffic) accident. The word accident implies there is no one to blame, which contradicts the premise of the article.}, police officer). The article gives an in-depth description, evaluation and analysis of the proposed system. We pursue the hypothesis that automobile insurance fraud can be detected with such a system and that proper data representation is vital.
Main contributions of our work are: (1) a novel expert system approach for the detection of automobile insurance fraud with networks; (2) a benchmarking study, as no expert system approach for detection of groups of automobile insurance fraudsters has yet been reported (to our knowledge); (3) an algorithm for assessment of entities in a relational domain, demanding no labeled data set (\textit{IAA} algorithm); and (4) a framework for detection of groups of fraudsters with networks (applicable in other relational domains). The rest of the article is organized as follows. In section~\ref{related_work} we discuss related work and emphasize weaknesses of other proposed approaches. Section~\ref{background} presents formal grounds of (social) networks. Next, in section~\ref{system}, we introduce the proposed expert system for detecting automobile insurance fraud. The prototype system was evaluated and rigorously analyzed on real world data, description of the data set and obtained results are given in section~\ref{evaluation}. Discussion of the results is conducted in section~\ref{discussion}, followed by the conclusion in section~\ref{conclusion}. \section{Related work} \label{related_work} Our work places in the wide field of fraud detection. Fraud appears in many domains including telecommunications, banking, medicine, e-commerce, general and automobile insurance. Thus a number of expert system approaches for preventing, detecting and investigating fraud have been developed in the past. 
Researchers have proposed using some standard methods of data mining and machine learning, \textit{neural networks}, \textit{fuzzy logic}, \textit{genetic algorithms}, \textit{support vector machines}, \textit{(logistic) regression}, \textit{consolidated (classification) trees}, approaches over \textit{red-flags} or \textit{profiles}, various statistical methods and other methods and approaches \citep{AAG02,BDGLA02,BH02,EHP06,FB08,GS99,HLV07,KSM07,PMAGM05,RKK07,QS08,SVCS09,VDBD02,VDD05,WD98,YH06}. Analyses show that in practice none is significantly better than the others \citep{BH02,VDD05}. Furthermore, they mainly have three weaknesses. They (1) use inappropriate or inexpressive representation of data; (2) demand a labeled (initial) data set; and (3) are only suitable for larger, richer data sets. It turns out that these are generally a problem when dealing with fraud detection \citep{Jen97,PLSG05}. In the narrower sense, our work comes close to approaches from the field of network analysis that combine intrinsic attributes of entities with their relational attributes. \citet{NC03} proposed detecting anomalies in networks with various types of vertices, but they focus on detecting suspicious structures in the network, not vertices (i.e. entities). Besides that, the approach is more appropriate for larger networks. Researchers also proposed detecting anomalies using measures of centrality \citep{Fre77,Fre79}, random walks \citep{SQCF05} and other techniques \citep{HC03,MT00}, but these approaches mainly rely only on the relational attributes of entities. Many researchers have investigated the problem of classification in the relational context, following the hypothesis that classification of an entity can be improved by also considering its related entities (inference). Thus many approaches formulating \textit{inference}, \textit{spread} or \textit{propagation} on networks have been developed in various fields of research \citep{BP98,DR01,Kle99,KF98,LG03a,Min01,NJ00}.
Most of them are based on one of the three most popular (approximate) inference algorithms: \textit{Relaxation Labeling (RL)}~\citep{HZ83} from the computer vision community, \textit{Loopy Belief Propagation (LBP)} on loopy (Bayesian) \textit{graphical models} \citep{KF98} and the \textit{Iterative Classification Algorithm (ICA)} from the data mining community \citep{NJ00}. For the analyses and comparison see~\citep{KKT03,SG07}. Researchers have reported good results with these algorithms \citep{BP98,KF98,LG03a,NJ00}; however, they mainly address the problem of learning from an (initial) labeled data set (\textit{supervised learning}), or a partially labeled one (\textit{semi-supervised learning}) \citep{LG03b}, and the approaches are therefore generally inappropriate for fraud detection. The algorithm we introduce here, the \textit{IAA} algorithm, is almost identical to the \textit{ICA} algorithm; however, it was developed with different intentions in mind \textendash\space to assess the entities when no labeled data set is at hand (and not for improving classification with inference). Furthermore, \textit{IAA} does not address the problem of \textit{classification}, but \textit{ranking}. Thus, in this way, it is actually a simplification of the \textit{RL} algorithm, or even Google's \textit{PageRank}~\citep{BP98}, though it is not founded on probability theory like the latter. We conclude that due to the weaknesses mentioned, most of the proposed approaches are inappropriate for detection of (automobile) insurance fraud. Our approach differs, as it does not demand a labeled data set and is also appropriate for smaller data sets. It represents data with networks, which are among the most natural representations and allow complex analysis without simplification of the data. It should be pointed out that networks, despite their strong foundations and expressive power, have not yet been used for detecting (automobile) insurance fraud (at least to our knowledge).
\section{(Social) networks} \label{background} Networks are based upon mathematical objects called \textit{graphs}. Informally speaking, a graph consists of a collection of points, called \textit{vertices}, and links between these points, called \textit{edges} (\figref{fig:graphs}). Let $V_G$ and $E_G$ be the sets of vertices and edges of some graph $G$, respectively. We define $G$ as $G=(V_G,E_G)$ where \begin{eqnarray} V_G & = & \{v_1,v_2\dots v_n\}, \\ E_G & \subseteq & \{\{v_i,v_j\}|\mbox{ }v_i,v_j\in V_G\wedge i\neq j\}. \label{eq_E_undirected} \end{eqnarray} Note that edges are sets of vertices, hence they are not directed (\textit{undirected graph}). In the case of \textit{directed graphs} equation~(\ref{eq_E_undirected}) rewrites to \begin{eqnarray} E_G & \subseteq & \{(v_i,v_j)|\mbox{ }v_i,v_j\in V_G\wedge i\neq j\}, \label{eq_E_directed} \end{eqnarray} where edges are ordered pairs of vertices \textendash\space $(v_i,v_j)$ is an edge from $v_i$ to $v_j$. The definition can be further generalized by allowing multiple edges between two vertices and loops (edges that connect vertices with themselves). Such graphs are called \textit{multigraphs}. Examples of some simple (multi)graphs can be seen in \figref{fig:graphs}. \begin{figure}[htp] \begin{center} \includegraphics[width=1.\columnwidth]{graphs.eps} \caption{(a) simple graph with directed edges; (b) undirected multigraph with labeled vertices and edges (labels are represented graphically); (c) network representing collisions where round vertices correspond to participants and cornered vertices correspond to vehicles. Collisions are represented with directed edges between vehicles.} \label{fig:graphs} \end{center} \end{figure} In practical applications we usually strive to store some extra information along with the vertices and edges.
Formally, we can define two labeling functions \begin{eqnarray} l_{V_G}: & V_G\rightarrow\Sigma_{V_G}, \\ l_{E_G}: & E_G\rightarrow\Sigma_{E_G}, \end{eqnarray} where $\Sigma_{V_G}$, $\Sigma_{E_G}$ are the (finite) alphabets of all possible vertex and edge labels, respectively. A \textit{labeled graph} can be seen in \figref{fig:graphs}~(b). We proceed by introducing some terms used later on. Let $G$ be some undirected multigraph or the \textit{underlying graph} of some directed multigraph \textendash\space the underlying graph consists of the same vertices and edges as the original directed (multi)graph, only with all of its edges set to be undirected. $G$ naturally partitions into a set of \textit{(connected) components} denoted $C(G)$. E.g. all three graphs in \figref{fig:graphs} have one connected component, while the graphs in \figref{fig:system} consist of several connected components. From here on, we assume that $G$ consists of a single connected component. Let $v_i$ be some vertex in graph $G$, $v_i\in V_G$. The \textit{degree} of the vertex $v_i$, denoted $d(v_i)$, is the number of edges incident to it. Formally, \begin{eqnarray} d(v_i) & = & |\{e|\mbox{ }e\in E_G\wedge v_i\in e\}|. \end{eqnarray} Let $v_j$ be some other vertex in graph $G$, $v_j\in V_G$, and let $p(v_i,v_j)$ be a \textit{path} between $v_i$ and $v_j$. A path is a sequence of vertices visited on the way from one vertex to the other (including $v_i$ and $v_j$). There can be many paths between two vertices. A \textit{geodesic} $g(v_i,v_j)$ is a path of minimum size \textendash\space it consists of the least number of vertices. Again, there can also be many geodesics between two vertices. We can now define the \textit{distance} between two vertices, i.e. $v_i$ and $v_j$, as \begin{eqnarray} d(v_i,v_j) & = & |g(v_i,v_j)|-1. \end{eqnarray} The distance between $v_i$ and $v_j$ is the number of edges visited when going from $v_i$ to $v_j$ (or vice versa).
The \textit{diameter} of some graph $G$, denoted $d(G)$, is a measure of the ``width'' of the graph. Formally, it is defined as the maximum distance between any two vertices in the graph, \begin{eqnarray} d(G) & = & \max\{d(v_i,v_j)|\mbox{ }v_i,v_j\in V_G\}. \end{eqnarray} All graphs can be divided into two classes. The first class consists of \textit{cyclic} graphs, which have a path $p(v_i,v_i)$ that contains at least two other vertices (besides $v_i$) and has no repeated vertices. Such a path is called a \textit{cycle}. Graphs in \figref{fig:graphs}~(a)~and~(b) are both cyclic. The second class consists of \textit{acyclic} graphs, more commonly known as \textit{trees}. These are graphs that contain no cycle (see \figref{fig:graphs}~(c)). Note that a simple undirected graph is a tree if and only if $|E_G|=|V_G|-1$. Finally, we introduce the \textit{vertex cover} of a graph $G$. Let $S$ be a subset of vertices, $S\subseteq V_G$, with the property that each edge in $E_G$ has at least one of its incident vertices in $S$ (is covered by $S$). Such an $S$ is called a vertex cover. It can be shown that finding a minimum vertex cover is \textit{NP-hard} in general. Graphs have been studied and investigated for almost $300$ years, and thus a strong theory has been developed to date. There are also numerous practical problems and applications where graphs have shown their usefulness \citep[e.g.][]{BP98} \textendash\space they are the most natural representation of many domains and are indispensable whenever we are interested in relations between entities or in patterns in these relations. We emphasize this only to show that networks have a strong mathematical, and also practical, foundation \textendash\space \textit{networks}\footnote{Throughout the article the terms graph and network are used as synonyms.} are usually seen as labeled, or \textit{weighted}, multigraphs with both directed and undirected edges (see \figref{fig:graphs}~(c)).
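The terms just defined map directly onto code. A minimal stdlib-only sketch on a toy undirected simple graph follows (the adjacency-set representation and function names are our own):

```python
from collections import deque

def degree(adj, v):
    # d(v): number of edges incident to v (simple graph, adjacency sets)
    return len(adj[v])

def distance(adj, src, dst):
    # d(src, dst): edges along a geodesic, found by breadth-first search
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            return dist[u]
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return None  # src and dst lie in different connected components

def diameter(adj):
    # d(G): maximum distance over all vertex pairs (single component assumed)
    return max(distance(adj, u, v) for u in adj for v in adj)

def is_tree(adj):
    # a connected simple undirected graph is a tree iff |E| = |V| - 1
    n_edges = sum(len(nb) for nb in adj.values()) // 2
    return n_edges == len(adj) - 1

# toy graph: path a-b-c-d plus the chord a-c, which creates a cycle
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
```

Here `degree(adj, "c")` is $3$, `distance(adj, "a", "d")` is $2$ (along the geodesic $a$-$c$-$d$), and `is_tree(adj)` is false since $|E_G|=4\neq|V_G|-1$.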
Furthermore, the vertices of a network usually represent some entities, and the edges represent relations between them. When vertices correspond to people, or groups of people, such networks are called \textit{social networks}. Networks often consist of densely connected subsets of vertices called \textit{communities}. Formally, communities are subsets of vertices with many edges between the vertices within the same community and only a few edges between the vertices of different communities. \citet{GN02} suggested identifying communities by recursively removing the edges between them \textendash\space \textit{between edges}. As many geodesics run along such edges, whereas only few geodesics run along edges within communities, between edges can be identified and removed by using \textit{edge betweenness} \citep{GN02}. It is defined as \begin{eqnarray} Bet(e_i) & = & |\{g(v_i,v_j)|\mbox{ }v_i,v_j\in V_G\wedge \\ & & \wedge\mbox{ }g(v_i,v_j)\mbox{ goes along }e_i\}|, \nonumber \end{eqnarray} where $e_i\in E_G$. The edge betweenness $Bet(e_i)$ is thus the number of all geodesics that run along edge $e_i$. For more details on (social) networks see e.g. \citep{New03,New08}. \section{Expert system for detecting automobile insurance fraud} \label{system} As mentioned above, the proposed expert system uses (primarily constructed) networks of collisions to assign a suspicion score to each entity. These scores are used for the detection of groups of fraudsters and their corresponding collisions. The \textit{framework} of the system is structured into four \textit{modules} (\figref{fig:system}). \begin{figure}[htp] \begin{center} \includegraphics[width=0.30\columnwidth]{system.eps} \caption{Framework of the proposed expert system for detecting (automobile insurance) fraud.} \label{fig:system} \end{center} \end{figure} In the first module, different types of networks are constructed from the given data set.
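The edge betweenness $Bet(e_i)$ defined above can be computed by enumerating geodesics explicitly. The following is a naive sketch intended only for small components like those discussed here (it enumerates every shortest path, which does not scale to large graphs); the example graph is a made-up 4-cycle:

```python
from collections import deque
from itertools import combinations

def edge_betweenness(vertices, edges):
    """Bet(e): the number of geodesics, over all vertex pairs, that run
    along edge e.  Enumerates shortest paths explicitly via BFS
    predecessor lists -- fine for small components only."""
    adjacency = {v: set() for v in vertices}
    for a, b in edges:
        adjacency[a].add(b)
        adjacency[b].add(a)
    bet = {frozenset(e): 0 for e in edges}
    for s, t in combinations(vertices, 2):
        # BFS from s: distances and shortest-path predecessors
        dist, pred = {s: 0}, {v: [] for v in vertices}
        queue = deque([s])
        while queue:
            v = queue.popleft()
            for w in adjacency[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist.get(w) == dist[v] + 1:
                    pred[w].append(v)
        if t not in dist:
            continue
        # walk every geodesic back from t, counting its edges
        stack = [[t]]
        while stack:
            path = stack.pop()
            v = path[-1]
            if v == s:
                for a, b in zip(path, path[1:]):
                    bet[frozenset((a, b))] += 1
            else:
                for p in pred[v]:
                    stack.append(path + [p])
    return bet

V = {1, 2, 3, 4}
E = [(1, 2), (1, 3), (2, 4), (3, 4)]
bet = edge_betweenness(V, E)
print(bet[frozenset((1, 2))])  # 3: geodesics 1-2, 1-2-4 and 2-1-3
```

By symmetry, every edge of this 4-cycle carries three geodesics.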
When necessary, the networks are also simplified \textendash\space divided into the natural communities that appear inside them. The latter is done without any loss of generality. The networks from the first module naturally partition into several connected components. In the second module we investigate these components and output the suspicious ones, focusing mainly on their structural properties such as diameter, cycles, etc. All other components are discarded at the end of this module. Not all entities in a suspicious component are necessarily suspicious. In the third module the components are thus further analyzed in order to detect the key entities inside them. They are found by employing the \textit{Iterative Assessment Algorithm (IAA)} presented in this article. The algorithm assigns a suspicion score to each entity, which can be used for subsequent assessment and analysis \textendash\space to identify suspicious groups of entities and their connecting collisions. In general, suspicious groups are subsets of suspicious components. Note that the detection of suspicious entities is done in two \textit{stages} (second and third module). In the first stage, or second module, we focus only on detecting suspicious components, and in the second stage, or third module, we also locate the suspicious entities within them. Hence detection in the first stage is done at the level of components, and in the second stage at the level of entities. The reason for this \textit{hierarchical investigation} is that the early stages simplify the assessment in the later stages, possibly without any loss for detection (for further implications see section~\ref{discussion}). Fully autonomous detection of automobile insurance fraud is not possible in practice. The obtained results should always be investigated by a domain expert or investigator, who determines further actions for resolving potential fraud.
The purpose of the last, fourth, module of the system is thus to appropriately assess and visualize the obtained results, allowing the domain expert or investigator to conduct subsequent analysis. The first three modules of the system are presented in sections~\ref{system_representation}, \ref{system_components} and~\ref{system_entities} respectively, while the last module is only briefly discussed in section~\ref{system_remarks}. \subsection{Representation with networks} \label{system_representation} Every entity's attribute is either \textit{intrinsic} or \textit{relational}. Intrinsic attributes are those that are independent of the entity's surroundings (e.g. a person's age), while relational attributes represent, or depend on, relations between entities (e.g. the relation between two colliding drivers). Relational attributes can be naturally represented with the edges of a network. Thus we get networks where vertices correspond to entities and edges correspond to relations between them. Numerous different networks can be constructed, depending on which entities we use and how we connect them to each other. The purpose of this first module of the system is to construct the different types of networks used later on. It is not immediately clear how to construct networks that describe the domain in the best possible way and are most appropriate for our intentions. This problem arises because networks, despite their high expressive power, can directly represent only relations between two entities (i.e. \textit{binary relations}). As collisions are actually relations between multiple entities, some sort of projection of the data set must be made (for other suggestions see section~\ref{conclusion}). Collisions can thus be represented with various types of networks, not all equally suitable for fraud detection.
In our opinion, there are some guidelines that should be considered when constructing networks from any relational domain data (the guidelines are given approximately in the order of their importance): \begin{enumerate}[1.] \item \textit{Intention:} networks should be constructed so that they are most appropriate for our intentions (e.g. fraud detection) \label{guideline_intentions} \item \textit{Domain:} networks should be constructed in a way that describes the domain as it is (e.g. connected vertices should represent entities that are also directly connected in the data set) \label{guideline_domain} \item \textit{Expressiveness:} the expressive power of the constructed networks should be as high as possible \label{guideline_expressiveness} \item \textit{Structure:} the structure of the networks should not be used for describing some specific domain characteristics (e.g. there should be no cycles in the networks when there are no actual cycles in the data set). Structural properties of networks are a strong tool that can be used in the subsequent (investigation) process, but only if these properties were not artificially incorporated into the network during the construction process \label{guideline_structure} \item \textit{Simplicity:} networks should be kept as simple and sparse as possible (e.g. not all entities need to be represented by their own vertices). The hypothesis here is that simple networks allow simpler subsequent analysis and clearer final visualization (the principle of \textit{Occam's razor}\footnote{The principle states that the explanation of any phenomenon should make as few assumptions as possible, eliminating those making no difference in the assessment \textendash\space entities should not be multiplied beyond necessity.}) \label{guideline_simplicity} \item \textit{Uniqueness:} every network should uniquely describe the data set being represented (i.e.
there should be a \textit{bijection} between different data sets and the corresponding networks) \label{guideline_uniqueness} \end{enumerate} Frequently, not all guidelines can be met and some trade-offs have to be made. In general there are ${3 \choose 1}+{3 \choose 2}+({3 \choose 2}+{3 \choose 3})=10$ possible ways to connect the three entities (i.e. collision, participant and vehicle), depending on which entities we represent with their own vertices. $7$ of these represent participants with vertices, and in $4$ cases all entities are represented by their own vertices. For the sake of simplicity, we focus on the remaining $3$ cases. In the following we introduce four different types of such networks, as an example and for later use. All can be seen in \figref{fig:collisions_networks}. \begin{figure}[htp] \begin{center} \includegraphics[width=1.0\columnwidth]{collisions_networks.eps} \caption{Four types of networks representing the same two collisions \textendash\space (a) \textit{drivers network}, (b) \textit{participants network}, (c) \textit{COPTA network} and (d) \textit{vehicles network}. Rounded vertices correspond to participants, hexagons correspond to collisions and irregular cornered vertices correspond to vehicles. Solid directed edges represent involvement in some collision, solid undirected edges represent drivers (only for the vehicles network) and dashed edges represent passengers. Guilt in the collision is indicated by the edge's direction.} \label{fig:collisions_networks} \end{center} \end{figure} The simplest way is to connect only the drivers who were involved in the same collision \textendash\space \textit{drivers networks}. Guilt in the collision is indicated by the edge's direction. Note that drivers networks severely lack expressive power (guideline~\ref{guideline_expressiveness}). We can therefore add the passengers and get \textit{participants networks}, where passengers are connected with the corresponding drivers.
Such networks are already much richer, but they have one major weakness \textendash\space passengers ``group'' on the driver, i.e. it is generally not clear which passengers were involved in the same collision, or even how many passengers were involved in some particular collision (guidelines~\ref{guideline_expressiveness},~\ref{guideline_uniqueness}). This weakness is partially eliminated by \textit{COnnect Passengers Through Accidents networks} (\textit{COPTA networks}). We add special vertices representing collisions, and all participants in some collision are now connected through these vertices. Passengers no longer group on the drivers but on the collisions, so the problem is partially eliminated. We also add special edges between the drivers and the collisions to indicate the number of passengers in the vehicle. This type of network could be adequate for many practical applications, but it should be mentioned that the distance between two colliding drivers is now twice as large as before \textendash\space yet the drivers are the ones that were directly related in the collision (guidelines~\ref{guideline_domain},~\ref{guideline_simplicity}). The last type of networks are \textit{vehicles networks}, where special vertices are added to represent vehicles. Collisions are now represented by edges between vehicles, and drivers and passengers are connected through them. Such networks provide a good visualization of the collisions and also incorporate another entity, but they have many weaknesses as well. Two colliding drivers are very far apart, and the (included) vehicles are not actually of our interest (guideline~\ref{guideline_simplicity}). Such networks also seem to suggest that the vehicles are the ones responsible for the collision (guideline~\ref{guideline_domain}). Vehicles networks are also much larger than the previous types. A better way to incorporate vehicles into the networks is simply to connect collisions in which the same vehicle was involved.
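As an illustration of the simplest of these representations, a drivers network can be sketched in Python. The collision record format used here, `(at_fault_driver, [other_drivers])`, is hypothetical and chosen only for the example:

```python
def drivers_network(collisions):
    """Build a directed drivers network: for each collision, add an edge
    from the at-fault driver to every other driver involved (guilt is
    expressed by the edge's direction).  The record format is made up
    for this sketch: (at_fault_driver, [other_drivers])."""
    vertices, edges = set(), []
    for at_fault, others in collisions:
        vertices.add(at_fault)
        for other in others:
            vertices.add(other)
            edges.append((at_fault, other))
    return vertices, edges

records = [("ann", ["bob"]), ("bob", ["cyd", "dan"])]
V, E = drivers_network(records)
print(sorted(V))  # ['ann', 'bob', 'cyd', 'dan']
print(E)          # [('ann', 'bob'), ('bob', 'cyd'), ('bob', 'dan')]
```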
The same holds for other entities like police officers, chiropractors, lawyers, etc. Using special vertices for these entities would only unnecessarily enlarge the networks and consequently make the subsequent detection harder (guidelines~\ref{guideline_intentions},~\ref{guideline_simplicity}). It is also true that these entities are usually not available in practice (due to the sensitivity of the domain). A summary of the analysis of the different types of networks is given in table~\ref{tbl:networks_guidelines}. \begin{table}[htp] \begin{center} \begin{tabular}{cccccc} \multicolumn{6}{l}{\textit{Guidelines and networks}} \\\hline\hline & \textit{drivers} & \textit{particip.} & \textit{COPTA} & \textit{vehicles} & \\\hline \multirow{2}{*}{\textit{Intention}} & $+$ & $++$ & & & \multirow{2}{*}{$5$} \\ & & $+$ & $++$ & & \\ \textit{Domain} & & & $-$ & $-$ & $4$ \\ \textit{Expressive.} & $--$ & $-$ & & $+$ & $4$ \\ \textit{Structure} & & & & & $4$ \\ \textit{Simplicity} & $+$ & & $-$ & $--$ & $3$ \\ \textit{Uniqueness} & & $-$ & $-$ & $-$ & $2$ \\\hline \multirow{2}{*}{Total} & $0$ & $\mathbf{4}$ & $-9$ & $-8$ & \\ & $-5$ & $-1$ & $\mathbf{1}$ & $-8$ & \\ \end{tabular} \end{center} \caption{Comparison of different types of networks with respect to the proposed guidelines. The scores assigned to the guidelines are a choice made by the authors. The analysis for \textit{Intention} (guideline~\ref{guideline_intentions}), and the total score, are given separately for the second and third module respectively.} \label{tbl:networks_guidelines} \end{table} There is of course no need to use the same type of networks in every stage of the detection process (guideline~\ref{guideline_intentions}). In the prototype system we thus use participants networks in the second module (section~\ref{system_components}), as they provide enough information for the initial detection of suspicious components, and \textit{COPTA} networks in the third module (section~\ref{system_entities}), whose adequacy will become clearer later.
Other types of networks are used only for visualization purposes. The network scores given in table~\ref{tbl:networks_guidelines} confirm this choice. After the construction of the networks is done, the resulting connected components can be quite large (depending on the type of networks used). As it is expected that groups of fraudsters are relatively small, the components should in this case be simplified. We suggest using edge betweenness~\citep{GN02} to detect communities in the network (i.e. supersets of groups of fraudsters) by recursively removing edges until the resulting components are small enough. As using edge betweenness ensures that we remove only the edges between communities, and not the edges within communities, the simplification is done without any loss of generality. \subsection{Suspicious components detection} \label{system_components} The networks from the first module consist of several connected components. Each component describes a group of related entities (i.e. participants, due to the type of networks used), and some of these groups contain fraudulent entities. Within this module of the system we want to detect such groups (i.e. \textit{fraudulent components}) and discard all others, in order to simplify the subsequent detection process in the third module. Not all entities in a fraudulent component are necessarily fraudulent. The purpose of the third module is to identify only those that are. Analyses, conducted with the help of a domain expert, showed that fraudulent components share several \textit{structural characteristics}. Such components are usually much larger than other, non-fraudulent, components, and are also denser. The underlying collisions often happened in suspicious circumstances, and the ratio between the number of different drivers and the number of collisions is usually close to $1$ (for reference, the ratio for completely independent collisions is $2$).
There are vertices with an extremely high degree and \textit{centrality}. The components have a small diameter, (short) cycles appear, and the size of the minimum vertex cover is also very small (all considering the size of the component). There are also other characteristics, all implying that the entities represented by such components are unusually closely related to each other. An example of a fraudulent component with many of the mentioned characteristics is shown in \figref{fig:suspicious_component}. \begin{figure}[htp] \begin{center} \includegraphics[width=0.75\columnwidth]{suspicious_component.eps} \caption{Example of a component of a participants network with many of the suspicious characteristics shared by fraudulent components.} \label{fig:suspicious_component} \end{center} \end{figure} We have thus identified several \textit{indicators} of the likelihood that some component is fraudulent (i.e. a \textit{suspicious component}). The detection of suspicious components is done by assessing these indicators. Only simple indicators are used (no combinations of indicators). Formally, we define an ensemble of $n$ indicators as $I = [I_1, I_2 \dots I_n]^T$. Let $c$ be some connected component in network $G$, $c \in C(G)$, and let $H_i(c)$ be the value for $c$ of the characteristic measured by indicator $I_i$. Then \begin{eqnarray} I_i(c) & = & \left\{\begin{array}{cl} 1 & c \mbox{ has suspicious value of }H_i \\ 0 & \mbox{otherwise} \end{array}\right.. \label{eq_I_i} \end{eqnarray} For the sake of simplicity, all indicators are defined as \textit{binary attributes}. For indicators that measure a characteristic that is independent of the structure of the component (e.g. the number of vertices, collisions, etc.), simple \textit{thresholds} are defined in order to distinguish suspicious components from the others (with respect to this characteristic). These thresholds are set by the domain expert.
Other characteristics are usually greatly dependent on the number of vertices and edges in the component. A simple \textit{threshold strategy} thus does not work. The values of such $H_i$ could of course be ``normalized'' before the assessment (based on the number of vertices and edges), but it is often not clear how. The values could also be assessed using some (supervised) learning algorithm over a labeled data set, but a huge set would be needed, as the assessment should be done for each number of vertices and edges separately (owing to the dependence mentioned). What remains is to construct random networks of (presumably) honest behavior and assess the values of such characteristics using them. No in-depth analysis of collisions networks has so far been reported, and it is thus not clear how to construct such random networks. General random network \textit{generators}, or \textit{models}, e.g. \citep{BA99,EW02}, mainly give results far from collisions networks (visually and when assessing different characteristics). Therefore a sort of \textit{rewiring} algorithm is employed, initially proposed by \citet{BMST97} and \citet{WS98}. The algorithm iteratively rewires the edges of some component $c$: we randomly choose two edges in $E_c$, $\{v_i,v_j\}$ and $\{v_k,v_l\}$, and switch one of their incident vertices. The resulting edges are e.g. $\{v_i,v_l\}$ and $\{v_k,v_j\}$ (see \figref{fig:rewiring}). The number of vertices and edges does not change during the rewiring process, and the values for some $H_i$ can thus be assessed by generating a sufficient number of such random networks (for each component). \begin{figure}[htp] \begin{center} \includegraphics[width=0.20\columnwidth]{rewiring.eps} \caption{Example of a rewired network. Dashed edges are rewired, i.e. replaced by solid edges.} \label{fig:rewiring} \end{center} \end{figure} The details of the rewiring algorithm are omitted due to space limitations; we only discuss two aspects.
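Since the details of the rewiring algorithm are omitted in the text, the following is one possible minimal sketch of the basic swap described above (our own reading, not the authors' exact implementation; it skips the extra-vertex variant for degrees, and the edge list is made up):

```python
import random

def rewire(edges, n_rewirings, seed=0):
    """One possible degree-preserving rewiring: repeatedly pick two edges
    {vi,vj} and {vk,vl} and swap one endpoint of each, giving {vi,vl} and
    {vk,vj}.  Swaps that would create a self-loop or a duplicate edge are
    skipped, so vertex degrees never change."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    present = {frozenset(e) for e in edges}
    done, attempts = 0, 0
    while done < n_rewirings and attempts < 100 * n_rewirings:
        attempts += 1
        i, k = rng.sample(range(len(edges)), 2)
        (vi, vj), (vk, vl) = edges[i], edges[k]
        new1, new2 = frozenset((vi, vl)), frozenset((vk, vj))
        if len(new1) < 2 or len(new2) < 2 or new1 in present or new2 in present:
            continue  # self-loop or parallel edge: try another pair
        present -= {frozenset(edges[i]), frozenset(edges[k])}
        present |= {new1, new2}
        edges[i], edges[k] = (vi, vl), (vk, vj)
        done += 1
    return edges

E = [(1, 2), (3, 4), (5, 6), (1, 3)]
rewired = rewire(E, 2)
# the multiset of endpoints, and hence every degree, is unchanged
print(sorted(v for e in rewired for v in e) == sorted(v for e in E for v in e))  # True
```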
First, the number of rewirings should be kept relatively small (e.g. $<|E_c|$), otherwise the constructed networks are completely random, with no trace of the one we started with \textendash\space (probably) not networks representing a set of collisions. We also want to compare components to other random ones that are similar to them, at least with respect to these rewirings. If a component significantly differs even from these similar ones, there is probably some severe anomaly in it. Second, one can notice that the algorithm never changes the degrees of the vertices. As we wish to assess the degrees as well, the algorithm can simply be adapted to the task in an \textit{ad~hoc} fashion. We add an extra vertex $v_e$ and connect all other vertices to it. As this vertex is removed at the end, rewiring one of the newly added edges with some other (old) edge changes the degrees of the vertices. Let $\{v_i,v_e\}$, $\{v_k,v_l\}$ be the edges being rewired and let $\{v_i,v_l\}$, $\{v_k,v_e\}$ be the edges after the rewiring. The (true) degree of vertex $v_i$ is then increased by one, and the degree of $v_k$ decreased by one. To assess the values of the indicators, we separately construct random components for each component $c\in C(G)$ and indicator $I_i\in I$, and approximate the distributions of the characteristics $H_i$ (the $H_i$ are seen as random variables). A statistical test is employed to test the \textit{null hypothesis} that the observed value $H_i(c)$ comes from the distribution of $H_i$. The test can be \textit{one-} or \textit{two-tailed}, depending on the nature of the characteristic $H_i$.
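Such an empirical one-tailed test, together with the majority voter discussed next in the text, can be sketched as follows; the threshold and the simulated values of $H_i$ are illustrative:

```python
def one_tailed_indicator(h_observed, h_random, t=0.05):
    """Empirical one-tailed indicator: fires when the probability
    P(H_i >= H_i(c)), estimated from values of H_i measured on rewired
    random components, falls below the critical threshold t."""
    p_hat = sum(1 for h in h_random if h >= h_observed) / len(h_random)
    return 1 if p_hat < t else 0

def majority_suspicious(indicator_values):
    """Majority voter: a component is suspicious when at least half of
    its n indicators are set."""
    return sum(indicator_values) >= len(indicator_values) / 2

h_random = [4, 5, 5, 6, 5, 4, 6, 5, 4, 5] * 10   # 100 simulated values
print(one_tailed_indicator(9, h_random))  # 1: no random value reaches 9
print(one_tailed_indicator(5, h_random))  # 0: 70% of random values are >= 5
print(majority_suspicious([1, 1, 0]))     # True
```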
In the case of a one-tailed test, where large values of $H_i$ are suspicious, we get \begin{eqnarray} I_i(c) & = & \left\{\begin{array}{cl} 1 & \hat{P}_c(H_i\geq H_i(c))<t_i \\ 0 & \mbox{otherwise} \end{array}\right., \label{eq_I_one_sided} \end{eqnarray} where the \textit{probability density function} $P(H_i)$ is approximated by the generated distribution $\hat{P}_c(H_i)$ and $t_i$ is a \textit{critical threshold}, or acceptable \textit{Type I error} (e.g. set to $0.05$). In the case of a two-tailed test, equation~(\ref{eq_I_one_sided}) rewrites to \begin{eqnarray} I_i(c) & = & \left\{\begin{array}{cl} 1 & \begin{array}{l} \hat{P}_c(H_i\geq H_i(c))<t_i/2\mbox{ }\vee \\ \mbox{ }\hat{P}_c(H_i\leq H_i(c))<t_i/2 \end{array} \\ 0 & \begin{array}{l} \mbox{otherwise} \end{array} \end{array}\right.. \label{eq_I_two_sided} \end{eqnarray} Knowing the values of all indicators $I_i$, we can now indicate the suspicious components in $C(G)$. The simplest way to accomplish this is to use a \textit{majority classifier}, or \textit{voter}, indicating as suspicious all components for which at least half of the indicators are set to $1$. Let $S(G)$ be the set of suspicious components in a network $G$, $S(G)\subseteq C(G)$; then \begin{eqnarray} S(G) & = & \{c|\mbox{ }c\in C(G)\wedge\sum_{i=1}^nI_i(c)\geq n/2\}. \label{eq_S_majority} \end{eqnarray} When fraudulent components share most of the characteristics measured by the indicators, we would clearly indicate them (they would have most, i.e. at least half, of the indicators set). Still, the approach is rather naive, with three major weaknesses (among others): (1) there is no guarantee that the threshold $n/2$ is the best choice; (2) we do not consider how many components have some particular indicator set; and (3) all indicators are treated as equally important. Normally, we would use some sort of supervised learning technique that eliminates these weaknesses (e.g.
regression, neural networks, classification trees, etc.), but again, due to the lack of labeled data and the skewed class distribution in the collisions domain, this would only rarely be feasible (the size of $C(G)$ is even much smaller than the size of the actual data set). To cope with the last two weaknesses mentioned, we suggest using \textit{principal component analysis of RIDITs} (\textit{PRIDIT}), proposed by \citet{BL77} (see \citep{Bro81}), which has already been used for detecting fraudulent insurance claim files \citep{BDGLA02}, but not for detecting groups of fraudsters (i.e. fraudulent components). \textit{RIDIT} analysis was first introduced by \citet{Bro58}. \textit{RIDIT} is basically a scoring method that transforms a set of \textit{categorical} attribute values into a set of values from the interval $[-1,1]$, so that they reflect the probability of an occurrence of some particular categorical value. Hence, an \textit{ordinal scale} attribute is transformed into an \textit{interval scale} attribute. In our case, all $I_i$ are simple binary attributes, and the \textit{RIDIT} scores, denoted $R_i$, are then just \begin{eqnarray} R_i(c) & = & \left\{\begin{array}{cl} \hat{p}_i^0 & I_i(c)=1 \\ -\hat{p}_i^1 & I_i(c)=0 \end{array}\right., \label{eq_R_i} \end{eqnarray} where $c\in C(G)$, $\hat{p}_i^1$ is the \textit{relative frequency} of $I_i$ being equal to $1$, computed over the entire data set, and $\hat{p}_i^0=1-\hat{p}_i^1$. We demonstrate the technique with an example. Let $\hat{p}_i^1$ be equal to $0.95$ \textendash\space almost all of the components have the indicator $I_i$ set. The \textit{RIDIT} score for some component $c$ with $I_i(c)=1$ is then just $0.05$, as the indicator clearly gives a poor indication of fraudulent components. On the other hand, for some component $c$ with $I_i(c)=0$, the \textit{RIDIT} score is $-0.95$, since the indicator very likely gives a good indication of non-fraudulent components.
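The RIDIT scoring of a binary indicator, including the worked example above with $\hat{p}_i^1=0.95$, can be sketched as follows (the indicator column is made up so that exactly 19 of 20 components have the indicator set):

```python
def ridit_scores(indicator_column):
    """RIDIT scoring of a binary indicator: a component with the
    indicator set scores p0 = 1 - p1, one without it scores -p1, where
    p1 is the relative frequency of the indicator over the data set."""
    p1 = sum(indicator_column) / len(indicator_column)
    p0 = 1.0 - p1
    return [p0 if v == 1 else -p1 for v in indicator_column]

# 19 of 20 components have the indicator set: p1 = 0.95
column = [1] * 19 + [0]
scores = ridit_scores(column)
print(round(scores[0], 2))   # 0.05  -> weak evidence of fraud
print(round(scores[-1], 2))  # -0.95 -> strong evidence of non-fraud
```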
A similar intuitive explanation can be made by setting $\hat{p}_i^1$ to $0.05$. A full discussion of \textit{RIDIT} scoring is omitted; for more details see \citep{Bro81,BL77}. The introduction of \textit{RIDIT} scoring diminishes the second weakness mentioned above. To also cope with the third, we make use of the \textit{PRIDIT} technique. The intuition behind this technique is that we can weight the indicators in some ensemble by assessing the agreement of each particular indicator with the entire ensemble. We make a (probably incorrect) assumption that the indicators are independent. Formally, let $W$ be a vector of \textit{weights} for the ensemble of \textit{RIDIT scorers} $R_i$ for indicators $I_i$, denoted $W=[w_1,w_2\dots w_n]^T$, and let $R$ be a matrix whose $i,j^{th}$ component is equal to $R_{j}(c)$, where $c$ is the $i^{th}$ component in $C(G)$. The matrix product $RW$ gives the ensemble's score for all components, i.e. the $i^{th}$ component of the vector $RW$ is equal to the weighted linear combination of \textit{RIDIT} scores for the $i^{th}$ component in $C(G)$. Denoting $S=RW$, we can then assess the indicators' agreement with the entire ensemble as (written in matrix form) \begin{eqnarray} & & R^TS/\parallel R^TS\parallel. \label{eq_W_0} \end{eqnarray} Equation~(\ref{eq_W_0}) computes normalized scalar products of the columns of $R$, which correspond to the returned values of the \textit{RIDIT} scorers, and $S$, which is the overall score of the entire ensemble (for each component in $C(G)$). When the returned values of some scorer are completely orthogonal to the ensemble's scores, the resulting normalized scalar product equals $0$; it reaches its maximum, or minimum, when they are perfectly aligned. Equation~(\ref{eq_W_0}) thus gives the scorers' (indicators') agreement with the ensemble and can be used to assign new weights, i.e. $W^1=R^TS/ ||R^TS||$. Greater weights are assigned to the scorers that agree with the general belief of the ensemble.
Denoting $S^1=RW^1$, $S^1$ is a vector of overall scores using these newly determined weights. There is of course no reason to stop the process here, as we can iteratively obtain even better weights. We can write \begin{eqnarray} W^i & = & \frac{R^TS^{i-1}}{||R^TS^{i-1}||} = \frac{R^TRW^{i-1}}{||R^TRW^{i-1}||} \label{eq_W_i} \end{eqnarray} for $i\geq 1$, which can be used to iteratively compute better and better weights for an ensemble of \textit{RIDIT} scorers $R_i$, starting with some weights, e.g. $W^0=[1,1\dots 1]^T$ \textendash\space under mild assumptions the process converges to a fixed point regardless of the starting weights. It can be shown that the fixed point is actually the \textit{first principal component} of the matrix $R^TR$, denoted $W^\infty$. For more details on the \textit{PRIDIT} technique see \citep{BDGLA02}. We can now score each component in $C(G)$ using the \textit{PRIDIT} technique for indicators $I_i$ and output as suspicious all components with a nonnegative score. Thus \begin{eqnarray} S(G) & = & \{c|\mbox{ }c\in C(G)\wedge R(c)W^\infty\geq 0\}, \label{eq_S_pridit} \end{eqnarray} where $R(c)$ is the row of matrix $R$ that corresponds to component $c$. Again, there is no guarantee that the threshold $0$ is the best choice. Still, if we know the expected proportion of fraudulent components in the data set (or e.g. the expected number of fraudulent collisions), we can first rank the components using the \textit{PRIDIT} technique and then output only the appropriate proportion of the most highly ranked components. \subsection{Suspicious entities detection} \label{system_entities} In the third module of the system, key entities are detected inside each previously identified suspicious component. We focus on identifying key participants, which can later be used for the identification of other key entities (collisions, vehicles, etc.).
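The iteration towards $W^\infty$ is a power iteration on $R^TR$; the following dependency-free sketch uses a small made-up RIDIT matrix (rows are components, columns are indicators):

```python
import math

def pridit_weights(R, iterations=100):
    """Iterate W^i = R^T R W^{i-1} / ||R^T R W^{i-1}||, starting from
    W^0 = [1,...,1]; the fixed point is the first principal component
    of R^T R."""
    n = len(R[0])
    W = [1.0] * n
    for _ in range(iterations):
        S = [sum(R[c][j] * W[j] for j in range(n)) for c in range(len(R))]
        RtS = [sum(R[c][j] * S[c] for c in range(len(R))) for j in range(n)]
        norm = math.sqrt(sum(x * x for x in RtS))
        W = [x / norm for x in RtS]
    return W

def pridit_scores(R, W):
    """Overall ensemble score S = R W for each component."""
    return [sum(r_j * w_j for r_j, w_j in zip(row, W)) for row in R]

# made-up RIDIT matrix for four components and two indicators
R = [[0.3, 0.4], [0.3, 0.4], [-0.7, 0.4], [-0.7, -0.6]]
W = pridit_weights(R)
suspicious = [c for c, s in enumerate(pridit_scores(R, W)) if s >= 0]
print(suspicious)  # [0, 1]
```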
Key participants are identified by employing the \textit{Iterative Assessment Algorithm (IAA)}, which uses intrinsic and relational attributes of the entities. The algorithm assigns a \textit{suspicion score} to each participant, which corresponds to the likelihood of it being fraudulent. In classical approaches over flat data, entities are assessed using only their intrinsic attributes, and are thus assessed in complete \textit{isolation} from other entities. It has been empirically shown that the \textit{assessment} can be improved by also considering the related entities \textendash\space more precisely, the assessments of the related entities \citep{CDI98,DR01,LG03b,LG03a,NJ00}. The assessment of an entity is \textit{inferred} from the assessments of the related entities and \textit{propagated} onward. Still, incorporating only the intrinsic attributes of the related entities generally does not improve, or even deteriorates, the assessment \citep{CDI98,OML00}. The proposed \textit{IAA} algorithm thus assesses the entities by also considering the assessments of their related entities. As these related entities were themselves assessed using the assessments of their related entities, and so on, the entire network is used in the assessment of each particular entity. This could not be achieved otherwise, as the formulation would surely be too complex. We proceed by introducing \textit{IAA} in a general form. Let $c$ be some suspicious component in network $G$, $c\in S(G)$, and let $v_i$ be one of its vertices, $v_i\in V_c$. Furthermore, let $N(v_i)$ be the set of neighbor vertices of $v_i$ (i.e. vertices at distance $1$), let $V(v_i)=N(v_i)\cup\{v_i\}$, and let $E(v_i)$ be the set of edges incident to $v_i$ (i.e. $E(v_i)=\{e|\mbox{ }e\in E_c\wedge v_i\in e\}$). Let also $en_i$ be the entity corresponding to vertex $v_i$, and let $N(en_i)$, $V(en_i)$ be the sets of entities that correspond to $N(v_i)$, $V(v_i)$ respectively.
We define the suspicion score $s$, $s(\cdot)\geq 0$, of the entity $en_i$ as \begin{eqnarray} \label{eq_assess} s(en_i) & = & AM(s(N(en_i)),V(en_i),V(v_i),E(v_i)) \\ & = & AM(i,c), \nonumber \end{eqnarray} where $AM$ is some \textit{assessment model} and $s(N(en_i))$ is the set of suspicion scores of the entities in $N(en_i)$. The suspicion of some entity thus depends on the assessments of the related entities (the first argument in equation~(\ref{eq_assess})), on the intrinsic attributes of the related entities and of the entity itself (the second argument), and on the relational attributes of the entity (the last two arguments). We assume that $AM$ is \textit{linear} in the assessments of the related entities (i.e. $s(N(en_i))$) and that it returns higher values for fraudulent entities. For some entity $en_i$, when the suspicion scores of the related entities are known, $en_i$ can be assessed using equation~(\ref{eq_assess}). Commonly, none of the suspicion scores is known beforehand (as the data set is unlabeled), and the equation thus cannot be used in the usual manner. Still, one can incrementally assess the entities in an \textit{iterative} fashion, similar to e.g. \citep{BP98,Kle99}. Let $s^{0}(\cdot)$ be some initial set of suspicion scores, e.g. $s^{0}(\cdot)=1$. We can then assess the entities using the scores $s^{0}(\cdot)$ and equation~(\ref{eq_assess}), obtaining better scores $s^{1}(\cdot)$. We proceed with this process, iteratively refining the scores until some stopping criterion is reached. In general, on the $k^{th}$ iteration, the entities are assessed using \begin{eqnarray} \label{eq_assess_k} s^k(en_i) & = & AM(s^{k-1}(N(en_i)),V(en_i),V(v_i),E(v_i)) \\ & = & AM(i,k,c). \nonumber \end{eqnarray} Note that the choice of $s^0(\cdot)$ is arbitrary \textendash\space under mild assumptions the process converges to a \textit{fixed point} regardless of the starting scores. Hence, the entities are assessed without knowing any suspicion scores in advance to bootstrap the procedure.
We present the \textit{IAA} algorithm below. \begin{table}[ht] \begin{center} \begin{tabular}{|lllll} \multicolumn{5}{l}{\textit{IAA algorithm}} \\\hline\hline & \multicolumn{4}{l|}{$s^{0}(\cdot)=1$} \\ & \multicolumn{4}{l|}{$k=1$} \\ & \multicolumn{4}{l|}{\texttt{WHILE NOT} \textit{stopping criteria} \texttt{DO}} \\ & & \multicolumn{3}{l|}{\texttt{FOR} $\forall v_i,en_i$ \texttt{DO}} \\ & & & \multicolumn{2}{l|}{$s^k(en_i) = \alpha s^{k-1}(en_i) + (1-\alpha) AM(i,k,c)$} \\ & & \multicolumn{3}{l|}{\texttt{FOR} $\forall v_i,en_i$: $v_i$ \textit{non-bucket} \texttt{DO}} \\ & & & \multicolumn{2}{l|}{\textit{normalize} $s^k(en_i)$} \\ & & \multicolumn{3}{l|}{$k=k+1$} \\ & \multicolumn{4}{l|}{\texttt{RETURN }$s^k(\cdot)$} \\\hline \end{tabular} \end{center} \end{table} Entities are iteratively assessed using model $AM$ ($\alpha$ is a \textit{smoothing parameter}, set to e.g. $0.75$). In order for the process to converge, the scores corresponding to \textit{non-bucket} vertices are normalized at the end of each iteration. Because the relations represented by the networks are often not binary, there are usually some vertices serving only as \textit{buckets}, which store the suspicion assessed in one iteration so that it can be propagated in the next. \textit{Non-bucket} vertices correspond to the entities that are actually being assessed, and only their scores should be normalized (for binary relations all the vertices are of this kind). The structure of such \textit{bucket} networks typically corresponds to \textit{bipartite graphs}\footnote{In the social science literature bipartite graphs are known as \textit{collaboration networks}.} \textendash\space bucket vertices are only connected to non-bucket vertices (and vice versa). In the case of \textit{COPTA} networks, used in this module of the (prototype) system, bucket vertices are those representing collisions. One would intuitively run the algorithm until some fixed point is reached, i.e. until the scores no longer change.
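The loop above can be sketched in a few lines of Python. This is an illustrative reimplementation under our own assumptions (adjacency-list encoding of the network, a fixed iteration count as the stopping criterion, the attribute-free model $AM_{raw}$ introduced below), not the prototype's code.

```python
# Sketch of the IAA loop with the attribute-free model AM_raw (the sum of the
# neighbours' suspicion scores). Graph encoding and stopping rule are our own
# illustrative assumptions.

def am_raw(graph, scores, v):
    """AM_raw: sum of the suspicion scores of the entities adjacent to v."""
    return sum(scores[u] for u in graph[v])

def iaa(graph, non_bucket, am=am_raw, alpha=0.75, iterations=10):
    """Iteratively refine suspicion scores; normalize non-bucket vertices."""
    scores = {v: 1.0 for v in graph}        # s^0(.) = 1, the choice is arbitrary
    for _ in range(iterations):
        new = {v: alpha * scores[v] + (1 - alpha) * am(graph, scores, v)
               for v in graph}
        # normalize only the non-bucket scores, so that the process converges
        total = sum(new[v] for v in non_bucket) or 1.0
        for v in non_bucket:
            new[v] /= total
        scores = new
    return scores

# Tiny COPTA-like bipartite example: collisions c1, c2 are bucket vertices,
# participants p1..p3 are the entities actually being assessed.
graph = {"c1": ["p1", "p2"], "c2": ["p2", "p3"],
         "p1": ["c1"], "p2": ["c1", "c2"], "p3": ["c2"]}
scores = iaa(graph, non_bucket=["p1", "p2", "p3"], iterations=5)
# p2, involved in both collisions, ends up with the highest suspicion
```

Suspicion flows from participants into the collision buckets and back, so the participant shared by both collisions accumulates the largest score.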
We empirically show that, although iterative assessment does indeed increase the performance, running the algorithm all the way to a fixed point actually decreases it. The reason is that the scores \textit{over-fit} the model. We also show that superior performance can be achieved with a dynamic approach \textendash\space by setting the number of iterations from $d(c)$, the diameter of component $c$. For more see sections~\ref{evaluation},~\ref{discussion}. Note that if each subsequent iteration of the algorithm actually increased the performance, one could assess the entities directly: when $AM$ is linear in the assessments of the related entities, the model can be written as a set of \textit{linear equations} and solved exactly (analytically). An arbitrary model can be used with the algorithm. We propose several linear models based on the observation that in many of these bucket networks the following holds: \textit{every entity is well defined by (only) the entities directly connected to it, considering the context observed}. E.g. in the case of \textit{COPTA} networks, every collision is connected to its participants, who are clearly the ones who ``define'' the collision, and every participant is connected to its collisions, which are the precise aspect of the participant we wish to investigate when dealing with fraud detection. A similar discussion could be made for movie-actor, corporate board-director and other well known collaboration networks. A model using no attributes of the entities is thus simply the sum of the suspicion scores of the related entities (we omit the arguments of the model) \begin{eqnarray} AM_{raw} & = & \sum_{\{v_i,v_j\}\in E(v_i)} s(en_j). \label{eq_model_raw} \end{eqnarray} Our empirical evaluation shows that even such a simple model can achieve satisfactory performance. To incorporate entities' attributes into the model, we introduce \textit{factors}, based on intrinsic or relational attributes of the entities.
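The linearity remark can be made concrete: with $AM_{raw}$ one assessment pass is a matrix-vector product $s \leftarrow \alpha s + (1-\alpha) A s$, where $A$ is the adjacency matrix, so the normalized scores converge to the dominant eigenvector of $A$, which could equivalently be obtained by solving the eigenproblem directly. A minimal pure-Python sketch on a 3-vertex path graph; the example is our own, not from the evaluation.

```python
# One smoothed AM_raw pass is linear: s <- alpha*s + (1-alpha)*A*s. Iterating
# with normalization converges to the dominant eigenvector of A (the smoothing
# also prevents oscillation on bipartite bucket networks).

alpha = 0.75                         # smoothing parameter of the IAA algorithm
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]                      # path graph v0 - v1 - v2

s = [1.0, 1.0, 1.0]                  # s^0(.) = 1
for _ in range(100):
    s = [alpha * s[i] + (1 - alpha) * sum(A[i][j] * s[j] for j in range(3))
         for i in range(3)]
    total = sum(s)
    s = [x / total for x in s]       # normalization keeps the scores bounded

# Fixed point: the middle vertex is sqrt(2) times as suspicious as the
# endpoints, matching the dominant eigenvector (1, sqrt(2), 1) of A.
```

This also shows why the fixed point alone can over-fit: it retains only the dominant eigenvector of the relational structure, discarding the initial and intermediate information.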
The intuition behind the first is that some intrinsic attributes' values are highly correlated with fraudulent activity. The suspicion scores of the corresponding entities should in this case be increased and also propagated to the related entities. Moreover, many of the relational attributes (i.e. labels of the edges) increase the likelihood of fraudulent activity \textendash\space the propagation of suspicion over such edges should also be increased. Let $l_{E_G}$ be the edge labeling function and $\Sigma_{E_G}$ the alphabet of all possible edge labels, i.e. $\Sigma_{E_G}=\{Driver,Passenger\dots\}$ (for \textit{COPTA} networks). Furthermore, let $En$ be the set of all entities $en_i$. We define $F_{int}$, $F_{rel}$ to be the factors corresponding to intrinsic, relational attributes respectively, as \begin{eqnarray} F_{int}: & En \rightarrow [0,\infty), \\ F_{rel}: & \Sigma_{E_G}\rightarrow [0,\infty). \end{eqnarray} An improved model incorporating these factors is then \begin{eqnarray} AM_{bas} & = & F_{int}(en_i)\sum_{e=\{v_i,v_j\}\in E(v_i)} F_{rel}(l_{E_G}(e))\mbox{ }s(en_j). \label{eq_model_basic} \end{eqnarray} Factors $F_{int}$ are computed as (similarly for $F_{rel}$) \begin{eqnarray} F_{int}(en_i) & = & \prod_k F_{int}^k(en_i) \label{eq_F_int} \end{eqnarray} where \begin{eqnarray} F_{int}^k(en_i) & = & \left\{\begin{array}{cl} 1/(1-f_{int}^k(en_i)) & f_{int}^k(en_i)\geq 0 \\ 1+f_{int}^k(en_i) & \mbox{otherwise} \end{array}\right. \label{eq_f_int} \end{eqnarray} and \begin{eqnarray} f_{int}^k: & En\rightarrow (-1,1). \end{eqnarray} $f_{int}^k$ are \textit{virtual factors} defined by the domain expert. The transformation in equation~(\ref{eq_f_int}) only serves to define the factors on the interval $(-1,1)$, rather than on $[0,\infty)$. The former is more intuitive, as e.g. two ``opposite'' factors are now $f$ and $-f$, $f\in[0,1)$, as opposed to $f$ and $1/f$, $f>0$, before.
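The transformation of equation~(\ref{eq_f_int}) and the product of equation~(\ref{eq_F_int}) can be sketched as follows; the function names and the example factor values are invented for illustration.

```python
# Map a domain expert's virtual factors f in (-1, 1) to the multiplicative
# factors F in [0, inf) of Eq. (eq_f_int), and combine them by the product of
# Eq. (eq_F_int).

def factor(f):
    """Transform a virtual factor f in (-1, 1) into F in [0, inf)."""
    if not -1.0 < f < 1.0:
        raise ValueError("virtual factors must lie in (-1, 1)")
    return 1.0 / (1.0 - f) if f >= 0 else 1.0 + f

def f_int(virtual_factors):
    """F_int as the product of the transformed virtual factors."""
    result = 1.0
    for f in virtual_factors:
        result *= factor(f)
    return result

# "Opposite" virtual factors f and -f yield reciprocal multiplicative factors:
assert factor(0.5) == 2.0 and factor(-0.5) == 0.5
# hence a suspicious and an equally strong benign indication cancel out:
print(f_int([0.5, -0.5]))  # 1.0
```

A virtual factor near $1$ (a suspicious pattern) inflates the score, one near $-1$ (a benign pattern) deflates it, and $0$ leaves it unchanged.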
Some virtual factor $f_{int}^k$ can be an arbitrary function defined over a single attribute of some entity, or over several attributes, formulating \textit{correlations} between the attributes. When attributes' values correspond to some suspicious activity (e.g. a collision matching some classical \textit{scheme}), the factors are set close to $1$; when the values correspond to non-suspicious activity (e.g. children in the vehicle), they are set close to $-1$. Otherwise, they are set to $0$. Note that the assessment of some participant with models $AM_{raw}$ and $AM_{bas}$ is highly dependent on the number of collisions this participant was involved in \textendash\space more precisely, on the number of terms in the sums in equations~(\ref{eq_model_raw}),~(\ref{eq_model_basic}) (which is exactly the degree of the corresponding vertex). Although this property is not without merit, it implicitly assumes we possess \textit{all} of the collisions a certain participant was involved in. This assumption is often not true in practice. We propose a third model that relaxes this assumption. Let $\overline{d_G}$ be the average degree of the vertices in network $G$, $\overline{d_G}=ave\{d(v_k)|\mbox{ }v_k\in V_G\}$. The model is \begin{eqnarray} AM_{\cdot}^{mean} & = & \frac{\overline{d_G}+d(v_i)}{2}\frac{AM_\cdot}{d(v_i)} = \left(1+\frac{\overline{d_G}}{d(v_i)}\right)\frac{AM_\cdot}{2}, \label{eq_model_laplace} \end{eqnarray} where $AM_{\cdot}$ can be any of the models $AM_{raw}$, $AM_{bas}$. $AM_{\cdot}^{mean}$ averages the terms in the sum of the model $AM_{\cdot}$ and multiplies this average by the mean of the vertex's degree and the average degree over all the vertices in $V_G$. A form of \textit{Laplace smoothing} is thus employed that pulls the vertex degree toward the average, in order to diminish the importance of this parameter in the final assessment. Empirical analysis in section~\ref{evaluation} shows that such a model outperforms the other two.
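Equation~(\ref{eq_model_laplace}) amounts to a one-line correction of the raw score; a small sketch with invented numbers (the function name is our own):

```python
# Degree-smoothed model AM^mean of Eq. (eq_model_laplace): the neighbour sum
# is averaged over the vertex degree and rescaled toward the average degree.

def am_mean(am_value, degree, avg_degree):
    """(1 + d_bar / d(v)) * AM / 2, a Laplace-style smoothing of the degree."""
    return (1.0 + avg_degree / degree) * am_value / 2.0

# A participant with 8 recorded collisions in a network of average degree 2:
# the raw sum is scaled down, so sheer collision count matters less.
print(am_mean(am_value=8.0, degree=8, avg_degree=2.0))   # 5.0
# A participant with a single recorded collision is scaled up instead:
print(am_mean(am_value=1.0, degree=1, avg_degree=2.0))   # 1.5
```

The smoothing thus compensates for collisions missing from the data: a low observed degree no longer automatically implies a low score.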
Knowing the scores $s(\cdot)$ for all the entities in some connected component $c\in G$, one can rank them according to the suspicion of their being fraudulent. In order to also compare the entities from different components, the scores must be normalized appropriately (e.g. multiplied by the number of collisions represented by component $c$). \subsection{Final remarks} \label{system_remarks} In the previous section (third module of the system) we focused only on the detection of fraudulent participants. Their suspicion scores can now be used for the assessment of other entities (e.g. collisions, vehicles), using one of the assessment models proposed in section~\ref{system_entities}. When all of the most highly ranked participants in some suspicious component are directly connected to each other (or through buckets), they are proclaimed to belong to the same group of fraudsters. Otherwise, they belong to several groups. During the investigation process, the domain expert or investigator analyzes these groups and determines further actions for resolving potential fraud. Entities are investigated in the order induced by the scores $s(\cdot)$. Networks also allow a neat visualization of the assessment (see \figref{fig:visualization}). \begin{figure}[htp] \begin{center} \includegraphics[width=1.00\columnwidth]{visualization.eps} \caption{Four \textit{COPTA} networks showing the same group of collisions. Sizes of the participants' vertices correspond to their suspicion scores; only participants with a score above some threshold, and the connecting collisions, are shown in each network. The contour was drawn based on the \textit{harmonic mean} distance to every vertex, weighted by the suspicion scores. 
(Blue) filled collisions' vertices in the first network correspond to collisions that happened at night.} \label{fig:visualization} \end{center} \end{figure} \section{Evaluation with the prototype system} \label{evaluation} We implemented a prototype system to empirically evaluate the performance of the proposition. Furthermore, various components of the system are analyzed and compared to other approaches. To simulate realistic conditions, the data set used for the evaluation consisted only of data that can be easily (automatically) retrieved from police records (\textit{semistructured data}). We report the results of the assessment of participants (not e.g. collisions). \subsection{Data} \label{data} The data set consists of $3451$ participants involved in $1561$ collisions in Slovenia between the years $1999$ and $2008$. The set was made by merging two data sets, one labeled and one unlabeled. The first, labeled, consists of collisions corresponding to previously identified fraudsters and some other participants who were investigated in the past. In a few cases, when the \textit{class} of a participant could not be determined, it was set according to the domain expert's and investigator's belief. As the purpose of our system is to identify groups of fraudsters, and not isolated fraudulent collisions, (almost) all isolated collisions were removed from this set. It is thus somewhat smaller (i.e. $211$ participants, $91$ collisions), but still large enough for the assessment. To achieve a more realistic class distribution and better statistics for \textit{PRIDIT} analysis, the second, larger data set was merged with the first. This set consists of various collisions chosen (almost) at random, although some of them are still related to others. Since random data sampling is not advised for relational data \citep{Jen99}, this set is used only for \textit{PRIDIT} analysis. Both data sets consist of only standard collisions (e.g. 
there are no chain collisions involving numerous vehicles or coaches with many passengers). The class distribution of the data set can be seen in table~\ref{tbl:class_distribution}. \begin{table}[htp] \begin{center} \begin{tabular}{cccc} \multicolumn{4}{l}{\textit{Class distribution}} \\\hline\hline & \textit{Count} & \multicolumn{2}{c}{\textit{Proportion}} \\\hline Fraudster & $46$ & $1.3 \%$ & $21.8 \%$ \\ Non-fraudster & $165$ & $4.8 \%$ & $78.2 \%$ \\ Unlabeled & $3240$ & $93.9 \%$ & \\ \end{tabular} \caption{Class distribution for the data set used in the analysis of the proposed expert system.} \label{tbl:class_distribution} \end{center} \end{table} The entire assessment was made using the merged data set, while the reported results naturally correspond only to the first (labeled) set. Note that the assessment of entities in some connected component is completely independent of the entities in other components (except for \textit{PRIDIT} analysis). \subsection{Results} \label{results} The performance of the system depends on the random generation of networks used for the detection of suspicious components (second module). We construct $200$ random networks for each indicator and each component (equations~(\ref{eq_I_one_sided}),~(\ref{eq_I_two_sided})); however, the results still vary a little. The entire assessment was thus repeated $20$ times and the scores were averaged. To assess the ranking produced by the system, average $AUC$ (\textit{Area Under Curve}) scores were computed, denoted $\overline{AUC}$. Results given in tables~\ref{tbl:assessment_models},~\ref{tbl:factors},~\ref{tbl:iaa_algorithm},~\ref{tbl:fraudulent_components} are all $\overline{AUC}$. 
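The rank-based $\overline{AUC}$ evaluation can be sketched as follows: $AUC$ is computed as the probability that a randomly chosen fraudster is ranked above a randomly chosen non-fraudster (ties count one half), and the scores are averaged over repeated runs. The scores, labels and runs below are invented for illustration; this is not the evaluation code of the prototype.

```python
# Average AUC over repeated assessment runs. AUC is estimated by comparing
# every (fraudster, non-fraudster) score pair; ties contribute one half.

def auc(scores, labels):
    """Rank-based AUC: P(score of a fraudster > score of a non-fraudster)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

runs = [
    ([0.9, 0.8, 0.4, 0.3], [1, 1, 0, 0]),   # perfect ranking -> AUC 1.0
    ([0.9, 0.2, 0.4, 0.3], [1, 1, 0, 0]),   # one fraudster ranked low -> 0.5
]
avg_auc = sum(auc(s, l) for s, l in runs) / len(runs)
print(avg_auc)   # 0.75
```

Unlike the threshold-based metrics reported next, $AUC$ evaluates the whole ranking, which matches how the investigator consumes the scores.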
\begin{table}[ht] \begin{center} \begin{tabular}{clc} \multicolumn{3}{l}{\textit{Golden standard}} \\\hline\hline $CA$ & $0.8720$ & \\ \textit{Recall} & $0.8913$ & \\ \textit{Precision} & $0.6508$ & \\ \textit{Specificity} & $0.8667$ & \\ $F1$ \textit{score} & $0.7523$ & \\ $\overline{AUC}$ & $\mathbf{0.9228}$ & \\ \end{tabular} \caption{Performance of the system that uses \textit{PRIDIT} analysis with the $IAA^{mean}_{bas}$ algorithm. Various metrics are reported; all except $\overline{AUC}$ are computed so that the total cost (on the first run) is minimal.} \label{tbl:golden_standard} \end{center} \end{table} In order to obtain a standard for the other analyses, we first report the performance of the system that uses \textit{PRIDIT} analysis for fraudulent components detection and the \textit{IAA} algorithm with model $AM_{bas}^{mean}$ for fraudulent entities detection, denoted $IAA^{mean}_{bas}$ (see table~\ref{tbl:golden_standard}). Various metrics are computed, i.e. \textit{classification accuracy} ($CA$), \textit{recall} (\textit{true positive rate}), \textit{precision} (\textit{positive predictive value}), \textit{specificity} ($1-$ \textit{false positive rate}), \textit{$F1$ score} (\textit{harmonic mean of recall and precision}) and $\overline{AUC}$. All but the last assess a classification, so a threshold for the suspicion scores must be defined. We report the results from the first run that minimize the total cost, assuming the costs of misclassified fraudsters and non-fraudsters are the same. The same holds for the confusion matrix in table~\ref{tbl:confusion_matrix}. 
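The threshold-dependent metrics above can be recomputed directly from the confusion matrix in table~\ref{tbl:confusion_matrix} ($41$ detected fraudsters, $5$ missed, $22$ false alarms, $143$ correctly cleared non-fraudsters); a quick sketch:

```python
# Classification metrics of the "golden standard" table, recomputed from the
# confusion-matrix counts reported in the evaluation.

tp, fn, fp, tn = 41, 5, 22, 143

ca          = (tp + tn) / (tp + fn + fp + tn)   # classification accuracy
recall      = tp / (tp + fn)                    # true positive rate
precision   = tp / (tp + fp)                    # positive predictive value
specificity = tn / (tn + fp)                    # 1 - false positive rate
f1          = 2 * precision * recall / (precision + recall)

print(round(ca, 4), round(recall, 4), round(precision, 4),
      round(specificity, 4), round(f1, 4))
# 0.872 0.8913 0.6508 0.8667 0.7523
```

The printed values reproduce the table entries (with $CA=0.8720$ printed without its trailing zero).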
\begin{table}[ht] \begin{center} \begin{tabular}{ccc} \multicolumn{3}{l}{\textit{Confusion matrix}} \\\hline\hline & \textit{Suspicious} & \textit{Unsuspicious} \\\hline Fraudster & $41$ & $5$ \\ Non-fraudster & $22$ & $143$ \\ \end{tabular} \caption{Confusion matrix for the system that uses \textit{PRIDIT} analysis with the $IAA^{mean}_{bas}$ algorithm (determined so that the total cost on the first run is minimal).} \label{tbl:confusion_matrix} \end{center} \end{table} We proceed with an in-depth analysis of the proposed \textit{IAA} algorithm. Table~\ref{tbl:assessment_models} shows the results of the comparison of the different assessment models, i.e. $IAA_{raw}$, $IAA_{bas}$, $IAA^{mean}_{raw}$ and $IAA^{mean}_{bas}$. The factors for models $IAA_{bas}$ and $IAA^{mean}_{bas}$ (equation~(\ref{eq_f_int})) were set by the domain expert, with the help of a statistical analysis of the collision data. To further analyze the impact of the factors on the final assessment, an additional set of factors was defined by the authors. The values were set according to the authors' intuition; the corresponding models are $IAA_{int}$ and $IAA^{mean}_{int}$. The results of the analysis can be seen in table~\ref{tbl:factors}. 
\begin{table}[ht] \begin{center} \begin{tabular}{cccc} \multicolumn{4}{l}{\textit{Assessment models}} \\\hline\hline \multicolumn{4}{c}{\textit{PRIDIT}} \\\hline $IAA_{raw}$ & $IAA_{bas}$ & $IAA^{mean}_{raw}$ & $IAA^{mean}_{bas}$ \\\hline $0.8872$ & $0.9145$ & $0.8942$ & $0.9228$ \\ \end{tabular} \caption{Comparison of different assessment models for the \textit{IAA} algorithm (after \textit{PRIDIT} analysis).} \label{tbl:assessment_models} \end{center} \end{table} \begin{table}[ht] \begin{center} \begin{tabular}{ccc} \multicolumn{3}{l}{\textit{Factors}} \\\hline\hline \multicolumn{3}{c}{\textit{ALL}} \\\hline $IAA^{mean}_{raw}$ & $IAA^{mean}_{int}$ & $IAA^{mean}_{bas}$ \\\hline $0.8188$ & $0.8435$ & $0.8787$ \\ \\\hline\hline \multicolumn{3}{c}{\textit{PRIDIT}} \\\hline $IAA^{mean}_{raw}$ & $IAA^{mean}_{int}$ & $IAA^{mean}_{bas}$ \\\hline $0.8942$ & $0.9086$ & $0.9228$ \\ \end{tabular} \caption{Analysis of the impact of the factors on the final assessment (on all the components and after \textit{PRIDIT} analysis).} \label{tbl:factors} \end{center} \end{table} As already mentioned, the performance of the \textit{IAA} algorithm depends on the number of iterations made in the assessment (see section~\ref{system_entities}). We have thus plotted the $AUC$ scores against the number of iterations made (for the first run), in order to clearly see the dependence; plots for $IAA^{mean}_{raw}$, $IAA^{mean}_{bas}$ can be seen in \figref{fig:iterations_raw}, \figref{fig:iterations_bas} respectively. We also show that superior performance can be achieved if the number of iterations is set dynamically. More precisely, the number of iterations made for some component $c\in C(G)$ is \begin{eqnarray} & max\{\overline{d_G},d(c)\}, \label{eq_dyn_iters} \end{eqnarray} where $d(c)$ is the diameter of $c$ and $\overline{d_G}$ (with a slight abuse of notation) the average diameter over all the components. All other results reported in this analysis used this dynamic setting. 
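The dynamic setting of equation~(\ref{eq_dyn_iters}) can be sketched as follows, with component diameters obtained by BFS from every vertex; the graph encoding and function names are our own illustrative assumptions.

```python
# Dynamic number of IAA iterations: max(average component diameter, d(c)).
# The diameter is the longest shortest path, found by BFS from each vertex.

from collections import deque

def diameter(component):
    """Diameter of a connected component given as an adjacency-list dict."""
    best = 0
    for source in component:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            v = queue.popleft()
            for u in component[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    queue.append(u)
        best = max(best, max(dist.values()))
    return best

def dynamic_iterations(component, avg_diameter):
    """Eq. (eq_dyn_iters): iterate at least the average diameter."""
    return max(avg_diameter, diameter(component))

# A 4-vertex path has diameter 3; with average diameter 2 we run 3 iterations.
path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(dynamic_iterations(path, avg_diameter=2))   # 3
```

Running for roughly the diameter lets suspicion propagate across the whole component once, without the over-fitting that sets in near the fixed point.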
\begin{figure}[htp] \begin{center} \includegraphics[width=1.00\columnwidth]{iterations_raw.eps} \caption{$AUC$ scores with respect to the number of iterations made in the \textit{IAA} algorithm. Solid curves correspond to the $IAA^{mean}_{raw}$ algorithm after \textit{PRIDIT} analysis and dashed curves to the $IAA^{mean}_{raw}$ algorithm run on all the components. Straight line segments show the scores achieved with dynamic setting of the number of iterations (see text).} \label{fig:iterations_raw} \end{center} \end{figure} \begin{figure}[htp] \begin{center} \includegraphics[width=1.00\columnwidth]{iterations_bas.eps} \caption{$AUC$ scores with respect to the number of iterations made in the \textit{IAA} algorithm. Solid curves correspond to the $IAA^{mean}_{bas}$ algorithm after \textit{PRIDIT} analysis and dashed curves to the $IAA^{mean}_{bas}$ algorithm run on all the components. Straight line segments show the scores achieved with dynamic setting of the number of iterations (see text).} \label{fig:iterations_bas} \end{center} \end{figure} Due to the lack of other expert system approaches for detecting groups of fraudsters, or even individual fraudsters (to the best of our knowledge), no such comparative analysis could be made. The proposed \textit{IAA} algorithm is thus compared against several well known measures for anomaly detection in networks \textendash\space \textit{betweenness centrality} (\textit{BetCen}), \textit{closeness centrality} (\textit{CloCen}), \textit{degree centrality} (\textit{DegCen}) and \textit{eigenvector centrality} (\textit{EigCen}) \citep{Fre77,Fre79}. 
They are defined as \begin{eqnarray} BetCen(v_i) & = & \sum_{v_j,v_k\in V_c} \frac{g_{v_j,v_k}(v_i)}{g_{v_j,v_k}}, \label{eq_bet_cen}\\ CloCen(v_i) & = & \frac{1}{n_c-1}\sum_{v_j\in V_c\backslash\{v_i\}}d(v_i,v_j) ,\\ DegCen(v_i) & = & \frac{d(v_i)}{n_c-1} ,\\ EigCen(v_i) & = & \frac{1}{\lambda} \sum_{\{v_i,v_j\}\in E_c} EigCen(v_j) , \end{eqnarray} where $n_c$ is the number of vertices in component $c$, $n_c=|V_c|$, $\lambda$ is a constant, $g_{v_j,v_k}$ is the number of geodesics between vertices $v_j$ and $v_k$, and $g_{v_j,v_k}(v_i)$ is the number of such geodesics that pass through vertex $v_i$, $i\neq j\neq k$. For further discussion see \citep{Fre77,Fre79,New03}. These measures of centrality were used to assign a suspicion score to each participant; the scores were also appropriately normalized, as in the case of the \textit{IAA} algorithm. For a fair comparison, the measures were compared against the model that uses no intrinsic attributes of the entities, i.e. $IAA^{mean}_{raw}$. The results of the analysis are shown in table~\ref{tbl:iaa_algorithm}. \begin{table}[ht] \begin{center} \begin{tabular}{ccccc} \multicolumn{5}{l}{\textit{IAA algorithm}} \\\hline\hline \multicolumn{5}{c}{\textit{ALL}} \\\hline \textit{BetCen} & \textit{CloCen} & \textit{DegCen} & \textit{EigCen} & $IAA^{mean}_{raw}$\\\hline $0.6401$ & $0.8138$ & $0.7428$ & $0.7300$ & $0.8188$ \\ \\\hline\hline \multicolumn{5}{c}{\textit{PRIDIT}} \\\hline \textit{BetCen} & \textit{CloCen} & \textit{DegCen} & \textit{EigCen} & $IAA^{mean}_{raw}$\\\hline $0.6541$ & $0.8158$ & $0.8597$ & $0.8581$ & $0.8942$ \\ \end{tabular} \caption{Comparison of the \textit{IAA} algorithm against several well known measures for anomaly detection in networks (on all the components and after \textit{PRIDIT} analysis). For a fair comparison, no intrinsic attributes are used in the \textit{IAA} algorithm (i.e. 
model $AM^{mean}_{raw}$).} \label{tbl:iaa_algorithm} \end{center} \end{table} Next, we analyzed different approaches for the detection of fraudulent components (see table~\ref{tbl:fraudulent_components}). The same set of $9$ indicators was used for the majority voter (equation~(\ref{eq_S_majority})) and for \textit{(P)RIDIT} analysis (equation~(\ref{eq_S_pridit})). For the latter, we use a variant of \textit{random undersampling} (\textit{RUS}) to cope with the skewed class distribution. We output the most highly ranked components, so that the set of selected components contains $4\%$ of all the collisions (in the merged data set). Analyses of automobile insurance fraud mainly agree that up to $20\%$ of all the collisions are fraudulent, and up to $20\%$ of the latter correspond to non-opportunistic fraud (various resources). However, for the majority voter such an approach actually decreases performance \textendash\space we therefore report results where all components with at least half of the indicators set are selected. Several individual indicators achieving superior performance are also reported. Indicator $I_{BetCen}$ is based on betweenness centrality (equation~(\ref{eq_bet_cen})), $I_{MinCov}$ on minimum vertex cover, and $I_{l^{-1}}$ on the $l^{-1}$\textit{ measure}, defined as the harmonic mean distance between every pair of vertices in some component $c$, \begin{eqnarray} l^{-1} & = & \frac{1}{\frac{1}{2}n_c(n_c+1)}\sum_{v_i,v_j\in V_c, i\geq j}d(v_i,v_j)^{-1}. 
\end{eqnarray} \begin{table}[ht] \begin{center} \begin{tabular}{cccccc} \multicolumn{6}{l}{\textit{Fraudulent components}} \\\hline\hline $I_{MinCov}$ & $I_{l^{-1}}$ & $I_{BetCen}$ & \textit{MAJOR} & \textit{RIDIT} & \textit{PRIDIT} \\\hline \multicolumn{6}{c}{\textit{ALL}} \\\hline $0.6019$ & $0.6386$ & $0.6774$ & $0.7946$ & $0.6843$ & $0.7114$ \\ \\\hline\hline $I_{MinCov}$ & $I_{l^{-1}}$ & $I_{BetCen}$ & \textit{MAJOR} & \textit{RIDIT} & \textit{PRIDIT} \\\hline \multicolumn{6}{c}{$IAA^{mean}_{bas}$} \\\hline $0.6119$ & $0.8494$ & $0.8549$ & $0.8507$ & $0.9221$ & $0.9228$ \\ \end{tabular} \caption{Comparison of different approaches for the detection of fraudulent components (followed by no fraudulent entities detection and by $IAA^{mean}_{bas}$, respectively).} \label{tbl:fraudulent_components} \end{center} \end{table} We last analyze the importance of proper data representation for the detection of groups of fraudsters \textendash\space the use of networks. The networks were transformed into flat data and some standard unsupervised learning techniques were examined (e.g. \textit{k-means}, \textit{hierarchical clustering}). We obtained no results comparable to those given in table~\ref{tbl:golden_standard}. Furthermore, we tested nine standard supervised data-mining techniques, to analyze whether data labels can compensate for the inappropriate representation of the data. We used (default) implementations of the classifiers in the \textit{Orange} data-mining software \citep{DZLC04}, and $20$-\textit{fold} \textit{cross validation} was employed as the validation technique. The best performance, up to $AUC\approx 0.86$, was achieved with \textit{Naive Bayes}, \textit{support vector machines}, \textit{random forest} and, interestingly, also the \textit{k-nearest neighbors} classifier. Scores for the other approaches were below $AUC=0.80$ (e.g. \textit{logistic regression}, \textit{classification trees}, etc.). 
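The $l^{-1}$ measure defined above can be sketched as follows, with shortest-path distances obtained by BFS. We adopt the convention that the $v_i=v_j$ terms (zero distance) contribute nothing to the sum while keeping the $\frac{1}{2}n_c(n_c+1)$ normalization from the text; the convention and graph encoding are our own assumptions.

```python
# Harmonic mean distance l^{-1} of a component: sum of inverse shortest-path
# distances over vertex pairs, normalized by n(n+1)/2 as in the text.

from collections import deque

def l_inverse(component):
    """l^{-1} for a connected component given as an adjacency-list dict."""
    n = len(component)
    total = 0.0
    for source in component:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            v = queue.popleft()
            for u in component[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    queue.append(u)
        total += sum(1.0 / d for d in dist.values() if d > 0)
    # every unordered pair was counted twice above; normalize as in the text
    return (total / 2.0) / (0.5 * n * (n + 1))

# Dense components (short distances everywhere) score higher than chains:
triangle = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
chain = {1: [2], 2: [1, 3], 3: [2]}
print(l_inverse(triangle), l_inverse(chain))
```

High $l^{-1}$ thus flags tightly knit components, which is why it serves as an indicator of suspicious structure.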
\section{Discussion} \label{discussion} The empirical evaluation in the previous section shows that automobile insurance fraud can be detected using the proposition. Moreover, the results suggest that appropriate data representation is vital \textendash\space even a simple approach over networks can detect a great deal of fraud. The following section discusses the results in greater detail (in the order given). Almost all of the metrics obtained with \textit{PRIDIT} analysis and the $IAA^{mean}_{bas}$ algorithm, the \textit{golden standard}, are very high (table~\ref{tbl:golden_standard}). Only precision appears low, but this results (only) from the skewed class distribution in the domain. The $F1$ measure is consequently also a bit lower; otherwise the performance of the system is more than satisfactory. The latter was confirmed by the experts and investigators from a Slovenian insurance company, who were also pleased with the visual representation of the results. The confusion matrix given in table~\ref{tbl:confusion_matrix} shows that we correctly classified almost $90\%$ of all fraudsters and over $85\%$ of non-fraudsters. Only $5$ fraudsters were not detected by the prototype system. We thus obtained a particularly high recall, which is essential for any fraud detection system. The majority of unlabeled participants were classified as unsuspicious (not shown in table~\ref{tbl:confusion_matrix}), but the corresponding collisions are mainly isolated and those participants could have been trivially eliminated anyway (for our purposes). We proceed with a discussion of the different assessment models (table~\ref{tbl:assessment_models}). The performance of the simplest of the models, $IAA_{raw}$, which uses no domain expert's knowledge, could already prove sufficient in many circumstances. It can still be significantly improved by also considering the factors set by the domain expert (model $IAA_{bas}$). 
Model $IAA^{mean}_{\cdot}$ further improves the assessment of both (simple) models, confirming the hypothesis behind it (see section~\ref{system_entities}). Although the models (probably incorrectly) assume that the fraudulence of an entity is linear (in the fraudulences of the related entities), they give a good approximation of the fraudulent behavior. The analysis of the factors used in the models confirms their importance for the final assessment. As expected, model $IAA^{mean}_{bas}$ outperforms $IAA^{mean}_{int}$, and the latter outperforms $IAA^{mean}_{raw}$ (table~\ref{tbl:factors}). First, this confirms the hypothesis that domain knowledge can be incorporated into the model using factors (as defined in section~\ref{system_entities}). Second, it shows that a better understanding of the domain can improve the assignment of the factors. The combination of both makes the system extremely flexible, allowing for the detection of new types of fraud immediately after they have been noticed by the domain expert or investigator. As already mentioned, running the \textit{IAA} algorithm for too long over-fits the model and decreases the algorithm's final performance (see \figref{fig:iterations_raw}, \figref{fig:iterations_bas}; note the different scales used). Early iterations of the algorithm still increase the performance in all the cases analyzed, which demonstrates the importance of iterative assessment as opposed to a \textit{single-pass} approach. However, after some particular number of iterations has been reached, the performance decreases (at least slightly). Also note that the decrease is much larger in the case of the $AM^{mean}_{raw}$ model than of $AM^{mean}_{bas}$, indicating that the latter is superior to the former. We propose to use this \textit{decrease in performance} as an additional evaluation of any model used with the \textit{IAA}, or a similar, algorithm. It is preferable to run the algorithm for only a few iterations for one more reason. 
Networks are often extremely large, especially when they describe many characteristics of the entities. In this case, running the algorithm until some fixed point is reached is simply not feasible. Since the prototype system uses only the basic attributes of the entities, this does not present a problem here. The number of iterations that achieves the best performance clearly depends on various factors (data set, model, etc.). Our evaluation shows that superior, or at least very good, performance (\figref{fig:iterations_raw}, \figref{fig:iterations_bas}) can be achieved with a dynamic setting of the number of iterations (equation~(\ref{eq_dyn_iters})). When no detection of fraudulent components is done, the comparison between the \textit{IAA} algorithm and the measures of centrality shows no significant difference (table~\ref{tbl:iaa_algorithm}). On the other hand, when \textit{PRIDIT} analysis is used for fraudulent components detection, the \textit{IAA} algorithm dominates the others. Still, the results obtained with \textit{DegCen} and \textit{EigCen} are comparable to those obtained with supervised approaches over flat data. This shows that even a simple approach can detect a reasonably large portion of fraud, if an appropriate representation of the data is used (networks). The analysis of different approaches for the detection of fraudulent components produces no major surprises (table~\ref{tbl:fraudulent_components}) \textendash\space the best results are obtained using \textit{(P)RIDIT} analysis. Note that a single indicator can match the performance of the majority classifier \textit{MAJOR}, confirming its naiveness (see section~\ref{system_components}); the exceptionally high $\overline{AUC}$ score obtained by \textit{MAJOR} when no fraudulent entities detection follows results only from the fact that the returned set of suspicious components is almost $10$-times smaller than for the other approaches. 
The precision of the approach is thus much higher, but at the price of lower recall (useless for fraud detection). We have already discussed the purpose of the hierarchical detection of groups of fraudsters \textendash\space to simplify the detection of fraudulent entities through appropriate detection of fraudulent components. Another implication of such an approach is a simpler, or in some cases even feasible, \textit{data collection} process. As the detection of components is done using only the relations between entities (relational attributes), a large portion of the data can be discarded without knowing the values of any of the intrinsic attributes. This characteristic of the system is vital when deploying it in practice \textendash\space (complete) data often cannot be obtained for all the participants, due to the sensitivity of the domain. Last, we briefly discuss the applicability of the proposition in other domains. The presented \textit{IAA} algorithm can be used for arbitrary assessment of entities over some relational domain, exploiting the relations between entities with no demand for an (initial) labeled data set. When every entity is well defined by (only) the entities directly related to it, considering the context observed, one of the proposed assessment models can also be used. Furthermore, the presented framework (four modules of the system) could be employed for fraud detection in other domains, and more generally in domains where we are interested in groups of related entities sharing some particular characteristics. The framework exploits the relations between entities in order to improve the assessment, and is structured hierarchically to make it applicable in practice. \section{Conclusion} \label{conclusion} The article proposes a novel expert system approach for the detection of groups of automobile insurance fraudsters with networks. 
Empirical evaluation shows that such fraud can be efficiently detected using the proposition and, in particular, that a proper representation of the data is vital. For the system to be applicable in practice, no labeled data set is used. The system instead allows the imputation of the domain expert's knowledge, and it can thus be adapted to new types of fraud as soon as they are noticed. The approach can help the domain investigator to detect and investigate fraud much faster and more efficiently. Moreover, the employed framework is easy to implement and is also applicable for detection (of fraud) in other relational domains. Future research will focus on further analyses of different assessment models for the \textit{IAA} algorithm, considering also nonlinear models. Moreover, \textit{IAA} will be altered into an \textit{unsupervised algorithm}, learning the factors of the model in an unsupervised manner during the actual assessment. The factors would thus not have to be specified by the domain expert. Applications of the system in other domains will also be investigated. \section*{Acknowledgements} \label{acknowledgement} The authors thank Matja\v z Kukar, from the University of Ljubljana, and Jure Leskovec, (currently) from Cornell University, for all their effort and useful suggestions; Tja\v sa Krisper Kutin for proofreading the article; and Optilab d.o.o. for the data used in the study. This work has been supported by the Slovene Research Agency \textit{ARRS} within the research program P2-0359.
\section{Introduction} \label{intro} A drop placed on a partially wetting substrate will make a finite angle with the surface, given by Young's equation~\cite{Young}, \begin{equation} \cos\ensuremath{\theta_{\mathrm{Y}}}=\frac{\gamma_{\mathrm{SV}}-\gamma_{\mathrm{SL}}}{\gamma}\;, \label{eqn:young} \end{equation} where $\gamma_{\mathrm{SV}}$, $\gamma_{\mathrm{SL}}$ and ${\gamma}$ are the solid--vapour, solid--liquid and liquid--vapour surface tensions. Young's equation assumes that the surface is smooth, and that the contact line is able to move freely to allow the drop to globally minimise its free energy. Due to recent advances in microlithography it is now possible to pattern surfaces with regular arrays of micron-scale posts, leading to deviations from Young's equation on a macroscopic level. On a superhydrophilic, or superwetting, surface, the fluid can be drawn into the spaces between the posts, such that the drop forms a film with thickness equal to the height of the posts \cite{Bico,Ishino,Ishino2}, a phenomenon that is termed imbibition. Imbibition is thermodynamically feasible if the thick film has a lower free energy than the dry surface. The free energy change per unit width, $\delta \mathcal{F}$, when the film advances a distance $\delta x$ can be estimated by averaging over the surface features \begin{equation} \delta\mathcal{F}=\left[\left(\gamma_{\mathrm{SL}}-\gamma_{\mathrm{SV}}\right)\left(r-\phi\right)+\gamma\left(1-\phi\right)\right]\delta x\;, \label{eqn:freeEnergyChange} \end{equation} where $r$ is the ratio of the surface area to its vertical projection, $\phi$ is the fraction of the surface covered by posts, and we assume posts of constant cross-section. Eliminating the surface tensions in Eqn.~(\ref{eqn:freeEnergyChange}) using Eqn.~(\ref{eqn:young}), the condition $\delta\mathcal{F}<0$ becomes~\cite{Bico} \begin{equation} \cos\ensuremath{\theta_{\mathrm{Y}}}>\cos\ensuremath{\theta_{\mathrm{I}}}=\frac{1-\phi}{r-\phi}\;. 
\label{eqn:bico} \end{equation} This inequality relies on the same assumption as Young's equation (\ref{eqn:young}): namely that the contact line can move freely over the substrate, sampling the average properties of the roughness. This assumption holds well on some surfaces, for example, in the longitudinal direction on a grooved surface, but not in other cases, for example perpendicular to such grooves, where free energy barriers due to contact line pinning can halt the motion of the interface~\cite{ChenHe}. \begin{figure} \centering \subfigure[]{\label{fig:gibbs}\includegraphics[width=50mm]{gibbs.png}} \subfigure[]{\label{fig:schematic}\includegraphics[width=120mm]{schematic.png}} \caption{(a) Illustration of the Gibbs' Criterion. At a sharp corner a wetting contact line remains pinned for all angles between $\ensuremath{\theta_{\mathrm{Y}}}$ and $\ensuremath{\theta_{\mathrm{Y}}}+\psi$ (between the full lines). (b) Diagram showing the geometry and post dimensions for the simulations described in Sec.~\ref{subsection:geometry}.} \end{figure} Contact line pinning occurs when an interface moving across a surface meets a convex corner. The criterion for pinning, proposed by Gibbs~\cite{Gibbs} and demonstrated experimentally by Oliver {\it et al.}~\cite{Oliver}, is that the contact angle can take a range of values spanning the dihedral angle of the corner, as shown in Fig.~\ref{fig:gibbs}. Over this range, the contact angle with respect to the dry plane is too low for the contact line to advance, and that with the wet plane is too high for the contact line to recede. Pinning on surface features can lead to the threshold angle for imbibition being substantially lower than prediction~(\ref{eqn:bico}), as was demonstrated by Courbin {\it et al.}~\cite{Courbin,Courbin2}. Furthermore it has been shown that anisotropic surface features lead to anisotropic spreading~\cite{Kim,Chu}. 
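The imbibition criterion can be checked numerically. The following minimal sketch (ours, for illustration; the sample roughness values are hypothetical) evaluates the threshold angle $\theta_{\mathrm{I}}$ of Eqn.~(\ref{eqn:bico}) from the roughness ratio $r$ and post fraction $\phi$ defined above:

```python
import math

def imbibition_threshold(r, phi):
    """Critical Young angle (radians) below which imbibition is
    thermodynamically favourable: cos(theta_I) = (1 - phi) / (r - phi)."""
    if not (r > 1 and 0 < phi < 1):
        raise ValueError("need roughness r > 1 and post fraction 0 < phi < 1")
    return math.acos((1 - phi) / (r - phi))

def imbibes(theta_Y, r, phi):
    # delta F < 0  <=>  cos(theta_Y) > cos(theta_I)  <=>  theta_Y < theta_I
    return theta_Y < imbibition_threshold(r, phi)
```

For example, with $r=2$ and $\phi=0.25$ the threshold is $\arccos(0.75/1.75)\approx 64.6^{\circ}$, so a surface with $\theta_{\mathrm{Y}}=30^{\circ}$ imbibes while one with $\theta_{\mathrm{Y}}=80^{\circ}$ does not.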
In these proceedings, we use the lattice Boltzmann method to study imbibition through an array of posts with uniform polygonal cross-section, building on previous work~\cite{BlowKusumaatmaja}. We show that the mechanism of contact line pinning differs with direction, and explain how this leads to anisotropic spreading. \section{Simulation approach} \label{section:LBM} We model the system as a diffuse-interface, two-phase fluid in contact with a solid substrate. The thermodynamic state of the fluid is described by an order parameter $\rho(\mathbf{r})$, corresponding to the density of the fluid at each point $\mathbf{r}$. The equilibrium properties are modelled by a Landau free energy functional over the spatial domain of the fluid $\mathcal{D}$, and its boundary with solid surfaces $\partial\mathcal{D}$, \begin{equation} \Psi=\iiint_{\mathcal{D}}\left(p_{\mathrm{c}}\left\{\nu^{4}-2\beta\tau_{\mathrm{w}}(1+\nu^{2})-1\right\}-\mu_{\mathrm{b}} \rho+\tfrac{1}{2}\kappa\vert\nabla \rho\vert^{2}\right)dV-\iint_{\partial \mathcal{D}}\mu_{\mathrm{s}}\rho\, dS\;. \label{eqn:fluidFreeEnergy} \end{equation} The first term in the integrand of (\ref{eqn:fluidFreeEnergy}) is the bulk free energy density, where $\nu=(\rho-\rho_{\mathrm{c}})/\rho_{\mathrm{c}}$ and $\rho_{\mathrm{c}}$, $p_{\mathrm{c}}$, and $\beta\tau_{\mathrm{w}}$ are constants. It allows two equilibrium bulk phases, liquid and gas, with $\nu=\pm\sqrt{\beta\tau_{\mathrm{w}}}$. The second term is a Lagrange multiplier constraining the total mass of the fluid. The third term is a free energy cost associated with density gradients. This allows for a finite-width, or diffuse, interface to arise between the bulk phases, with surface tension $\gamma=\tfrac{4}{3}\rho_{\mathrm{c}}\sqrt{2\kappa p_{\mathrm{c}}(\beta\tau_{\mathrm{w}})^{3}}$ and width $\chi=\tfrac{1}{2}\rho_{\mathrm{c}}\sqrt{\kappa/(\beta\tau_{\mathrm{w}}p_{\mathrm{c}})}$. The boundary integral takes the form proposed by Cahn~\cite{Cahn}. 
Minimising the free energy leads to a Neumann condition on the density \begin{equation} \partial_{\perp}\rho = -\mu_{\mathrm{s}}/\kappa\;. \label{eqn:cahn} \end{equation} The wetting potential $\mu_{\mathrm{s}}$ is related to the Young angle $\ensuremath{\theta_{\mathrm{Y}}}$ of the substrate by~\cite{Briant} \begin{equation} \mu_{\mathrm{s}} = 2\beta\tau_{\mathrm{w}}\sqrt{2p_{\mathrm{c}}\kappa}\, \mathrm{sign}\left(\tfrac{\pi}{2}-\ensuremath{\theta_{\mathrm{Y}}}\right)\sqrt{\cos{\tfrac{\alpha}{3}} \left(1-\cos{\tfrac{\alpha}{3}}\right)}\;,\;\;\;\; \alpha=\arccos{(\sin^2{\theta_Y})}\;.\label{eqn:youngAngleToPotential} \end{equation} The hydrodynamics of the fluid is described by the continuity and the Navier-Stokes equations \begin{align} \partial_{t}\rho+\partial_{\alpha}(\rho u_{\alpha})&=0\;, \label{eqn:continuity} \\ \partial_{t}(\rho u_{\alpha})+\partial_{\beta}(\rho u_{\alpha}u_{\beta})&=- \partial_{\beta}P_{\alpha\beta}+ \partial_{\beta}\left(\rho\eta\left\{\partial_{\beta}u_{\alpha} + \partial_{\alpha}u_{\beta}\right\}+\rho\lambda\delta_{\alpha\beta}\partial_{\gamma}u_{\gamma}\right)\;, \label{eqn:navierStokes} \end{align} where $\mathbf{u}$ is the local velocity, $\mathbf{P}$ is the pressure tensor derived from the free energy functional (\ref{eqn:fluidFreeEnergy}), and $\eta$ and $\lambda$ are the shear and bulk kinematic viscosities, respectively. A free energy lattice Boltzmann algorithm is used to numerically solve Eqns.~(\ref{eqn:continuity},\ref{eqn:navierStokes})~\cite{Yeomans,Succi,Swift}. At the substrate we impose the boundary condition (\ref{eqn:cahn})~\cite{Briant,Dupuis}, and a condition of no-slip~\cite{Ladd,Pooley,Bouzidi}. We choose $\kappa=0.01$, $p_{\mathrm{c}}=0.125$, $\rho_{\mathrm{c}}=3.5$, $\tau_{\mathrm{W}}=0.3$ and $\beta=1.0$, giving an interfacial thickness $\chi=0.9$, surface tension $\gamma=0.029$ and a density ratio of $3.42$. The viscosity ratio is $\eta_{\mathrm{L}}/\eta_{\mathrm{G}}=7.5$. 
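As a consistency check on the quoted parameter values, the interfacial thickness and the density ratio of the equilibrium phases can be recomputed directly from the expressions above (all quantities in lattice units; this is our own sketch, not part of the simulation code):

```python
import math

# Parameters quoted in the text (lattice units)
kappa, p_c, rho_c = 0.01, 0.125, 3.5
beta, tau_w = 1.0, 0.3
bt = beta * tau_w

# Interface width: chi = (1/2) * rho_c * sqrt(kappa / (beta*tau_w * p_c))
chi = 0.5 * rho_c * math.sqrt(kappa / (bt * p_c))

# Equilibrium bulk phases sit at nu = +/- sqrt(beta*tau_w),
# i.e. rho = rho_c * (1 +/- sqrt(beta*tau_w))
nu_eq = math.sqrt(bt)
density_ratio = (1 + nu_eq) / (1 - nu_eq)

print(round(chi, 2), round(density_ratio, 2))  # prints: 0.9 3.42
```

Both values reproduce the figures quoted in the text ($\chi=0.9$ and a density ratio of $3.42$).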
\section{Identifying the pinning mechanisms} \label{section:pinningMechanism} \label{subsection:geometry} We consider a rectangular array of posts on a flat substrate. The cross-section of each post is uniform, and is an equilateral triangle, oriented to point along a primary axis of the array taken to be the $x$-direction (see Fig.~\ref{fig:schematic}). In our simulations we hold the array spacing $d$ and post side-length $b$ at $40$ and $20$ lattice units respectively and vary the post height $h$. We find that rescaling the system such that $d=60$ does not change the threshold angles of spreading significantly{\footnote{Reducing the system size to $d=20$ leads to slightly lower values for the depinning thresholds, which explains the small quantitative differences to the results we present in \cite{BlowKusumaatmaja}}}. The posts and substrate are taken to have the same Young angle $\ensuremath{\theta_{\mathrm{Y}}}$. We consider the advance of a straight contact line, which is parallel to the $y$-axis. We exploit periodic boundary conditions and use a simulation box of length $d_{y}$ along $y$. We further halve the computational burden by taking $x=0$ as a plane of reflectional symmetry, and we compare the dynamics for triangles pointing away from, or towards, the origin. To simulate imbibition fed by a mother drop resting on the surface would require a very large simulation box, and be prohibitively costly in terms of computer time. Since we are only interested in the details of flow amongst the posts, we instead feed imbibition from a `virtual reservoir', a small region $\sim 6$ lattice points wide spanning the centre of the box where $\nu$ is fixed to $\sqrt{\beta\tau_{\mathrm{w}}}$ at each time step of the simulation. In this way, liquid is introduced while there is outwards flow, but once the interface is fully pinned, no new liquid enters the system. 
$\ensuremath{\theta_{\mathrm{Y}}}$ is then decreased quasistatically, and we record the value at which depinning and spreading to the next post occurs. For the geometry we describe, Eqn.~(\ref{eqn:bico}), which describes imbibition with no pinning, gives an upper bound on the threshold angle~\cite{Bico} \begin{equation} \sec\ensuremath{\theta_{\mathrm{I}}}=1+\frac{12bh}{4d^{2}-\sqrt{3}b^{2}}=1+\frac{12h/b}{16-\sqrt{3}} \label{eqn:bicoTriangles}\;. \end{equation} \subsection{Pinning of a connected interface} \label{subsection:CCL} \begin{figure} \centering \subfigure[]{\label{fig:CCLsnapshots}\includegraphics[width=75mm]{CCLsnapshots.png}} \subfigure[]{\label{fig:DCLsnapshots}\includegraphics[width=105mm]{DCLsnapshots.png}} \caption{Snapshots of (a) the {\it connected contact line} and (b) the {\it disconnected contact line} mechanisms of pinning. In the middle image of (b) a gap appears between the interface and the face of the post. This is because the liquid (blue) surface represents the density $\rho_{\mathrm{c}}$, and close to the concave corner the wetting potential increases the density above $\rho_{\mathrm{c}}$.} \label{fig:snapshots} \end{figure} \begin{figure} \centering \includegraphics[width=150mm]{resultsGraph.png} \caption{The threshold angle for depinning along $+x$ (indigo circles) and $-x$ (mauve squares) as a function of $h/b$. Depinning in these directions occurs according to the connected and disconnected contact line mechanisms respectively. The predictions from Eqns.~(\ref{eqn:bicoTriangles}) in red (dotted), (\ref{eqn:courbinTriangles}) in blue (dashed), and (\ref{eqn:courbinMod}) in green (full) are plotted for comparison.} \label{fig:resultsGraph} \end{figure} Snapshots showing one pinning mechanism, for a film advancing in the direction of the points of the triangles, are shown in Fig.~\ref{fig:CCLsnapshots}. 
The film is of height $h$ up to the leading triangle, and then descends with increasing $x$ to meet the substrate at the Young angle. There are two ways in which the contact line can move forward. Firstly, it could make a shallower angle at the substrate, but this would increase the free energy away from the minimum characterised by Eqn.~(\ref{eqn:young}). Secondly, the top of the film could move forwards, but this would create liquid--gas interface area and hence also have a free energy cost. Depinning will occur when $\ensuremath{\theta_{\mathrm{Y}}}$ is sufficiently small that the contact line on the base reaches the next post. This depinning pathway, which we shall term the {\it connected contact line} mechanism, was elucidated by Courbin {\it et al.}~\cite{Courbin,Courbin2}, who showed that \begin{equation} \tan\ensuremath{\theta_{\mathrm{I}}}=\frac{\text{post height}}{\text{post spacing}} = \frac{h}{d-\tfrac{\sqrt{3}}{2}b}=\frac{h/b}{2-\tfrac{\sqrt{3}}{2}}\;.\label{eqn:courbinTriangles} \end{equation} Numerical results for the variation of the depinning angle with $h/b$ are shown in Fig.~\ref{fig:resultsGraph} as indigo circles, and Eqns.~(\ref{eqn:bicoTriangles},~\ref{eqn:courbinTriangles}) are plotted as red and blue curves respectively. Comparing the simulation data to the blue curve, we see that the simulation values are significantly higher than those predicted, especially for lower values of $h/b$. To resolve the discrepancy we note that Eqn.~(\ref{eqn:courbinTriangles}) assumes a flat interface. A positive Laplace pressure $\Delta p$ will instead produce a convex curvature, enabling the interface to extend further across the substrate. Neglecting curvature in the $y$ direction, we model the interface in the $xz$ plane as a circular arc with radius of curvature $R=\gamma/\Delta p$ given by Laplace's law. 
The contact angle with the substrate will then be modified to \begin{equation} \tan\left(\ensuremath{\theta_{\mathrm{I}}}-\beta\right)=\tfrac{h}{s}\;, \end{equation} where $\beta$ is the angle of bulge, given by $\sqrt{h^{2}+s^{2}}=2R\sin\beta$. The depinning threshold is thus given by \begin{equation} \ensuremath{\theta_{\mathrm{I}}}=\arctan\left[\tfrac{h}{s}\right]+ \arcsin\left[\tfrac{\sqrt{h^{2}+s^{2}}}{2R}\right]\;. \label{eqn:courbinMod} \end{equation} We expect the dominant contribution to the Laplace pressure to result from confinement in the $z$ direction. Therefore, we shall assume $\Delta p \propto h^{-1}$. Writing $R=Ah$ and $s=d-Bb$, a least squares fit of the data to Eqn.~(\ref{eqn:courbinMod}), with respect to $A$ and $B$, was performed. The optimisation found $B=0.822$, barely different from the value $\tfrac{\sqrt{3}}{2}\approx0.866$ used in Eqn.~(\ref{eqn:bicoTriangles}), and $A=7.09$. Eqn.~(\ref{eqn:courbinMod}) is plotted with these coefficients as the green curve in Fig.~\ref{fig:resultsGraph}, and the fit is very reasonable. \subsection{Pinning of a disconnected interface} \label{subsection:DCL} We now present simulation results with the posts pointing towards the origin, and identify a second mechanism for (de)pinning, shown in Fig.~\ref{fig:DCLsnapshots}. Now the advancing front is disconnected, and is pinned at the vertical edges of the posts. The base of the film is pulled forward by the hydrophilic substrate, but there is a free energy cost associated with the growth of the interface as it spreads out from the gap. As the Young angle is quasistatically decreased, the contact line creeps onto the blunt faces of the posts, near to the base substrate, but remains pinned to the post edges at higher $z$, where the angle made between the interface and the blunt faces remains less than $\ensuremath{\theta_{\mathrm{Y}}}$. 
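Before turning to the disconnected case, the three connected-contact-line threshold predictions above can be evaluated directly. The sketch below (ours, for illustration) uses the simulation dimensions $d=40$, $b=20$ and the fitted coefficients $A=7.09$, $B=0.822$ reported in the text:

```python
import math

d, b = 40.0, 20.0  # array spacing and post side length (lattice units)

def theta_no_pinning(h):
    # Eqn. (bicoTriangles): sec(theta_I) = 1 + 12*b*h / (4*d^2 - sqrt(3)*b^2)
    sec = 1.0 + 12 * b * h / (4 * d**2 - math.sqrt(3) * b**2)
    return math.degrees(math.acos(1.0 / sec))

def theta_flat_interface(h):
    # Eqn. (courbinTriangles): tan(theta_I) = h / (d - (sqrt(3)/2) * b)
    return math.degrees(math.atan(h / (d - math.sqrt(3) / 2 * b)))

def theta_curved_interface(h, A=7.09, B=0.822):
    # Eqn. (courbinMod) with the fitted coefficients: R = A*h, s = d - B*b
    R, s = A * h, d - B * b
    return math.degrees(math.atan(h / s) + math.asin(math.hypot(h, s) / (2 * R)))
```

For $h=30$ these give roughly $63.7^{\circ}$ (no pinning), $52.9^{\circ}$ (flat interface) and $57.0^{\circ}$ (curved interface), consistent with the ordering of the curves in Fig.~\ref{fig:resultsGraph} and with the choice $\theta_{\mathrm{Y}}=55^{\circ}$ used later to permit spreading along $+x$.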
When $\ensuremath{\theta_{\mathrm{Y}}}$ becomes sufficiently small, the depinned parts of the contact lines from neighbouring gaps meet each other midway. Once connected, the interface readily wets up the posts and out across the substrate. We shall refer to this as the {\it disconnected contact line} pinning mechanism. The threshold for depinning is plotted in Fig.~\ref{fig:resultsGraph} as mauve squares. The dependence on $h/b$ is different to that for motion along $+x$. When $h/b$ is low, the depinning angle closely follows the upper bound given by Eqn.~(\ref{eqn:bicoTriangles}), indicating that pinning by the posts is weak in this regime. For larger values of the ratio $h/b$, $\ensuremath{\theta_{\mathrm{I}}}$ levels off to $\sim 51^{\circ}$. \section{Imbibition through polygonal posts} \label{section:3Dimbibition} We now present simulation results for films spreading through arrays with various lattice symmetries and post geometries. We use arrays which are several posts wide in both the $x$ and $y$ directions, such that the film is not connected over periodic boundaries. We again use a virtual reservoir, this time located in a small region at the centre of the array, but we hold $\ensuremath{\theta_{\mathrm{Y}}}$ constant over time. We discern how both the arrangement, and the geometries, of the posts affect the dynamics of the interfaces, and the final film shapes. These can be interpreted in terms of the pinning mechanisms identified in Sec.~\ref{section:pinningMechanism}. For our simulations we use $d=40$, $b=20$, $h=30$ and $\ensuremath{\theta_{\mathrm{Y}}}=55^{\circ}$. According to Fig.~\ref{fig:resultsGraph}, these parameters should allow and inhibit spreading in the $+x$ and $-x$ directions respectively. In Figs.~\ref{fig:3DTriSqu},~\ref{fig:3DHexSqu} and \ref{fig:3DTriHex}, we show plan views of the substrate at various times in the evolution of the film. 
The posts are shown in brown, the wetted substrate in blue, and the unwetted substrate in white. \subsection{A square array of triangular posts} \begin{figure} \centering \includegraphics[width=150mm]{3DTriSqu.png} \caption{Spreading on a square array of triangles} \label{fig:3DTriSqu} \end{figure} We first consider the system studied in Sec.~\ref{section:pinningMechanism}, extended in the $y$ direction. The shape of the film at intermittent times is shown in Fig.~\ref{fig:3DTriSqu}. Advance of the film is possible in the $+x$ and $\pm y$ directions, via the connected contact line mechanism, but the film is barred from advancing in the $-x$ direction, where the disconnected contact line mechanism, which has a lower threshold angle, is relevant. Thus the surface acts as a microfluidic diode. Such unidirectional behaviour is made possible by the triangular shape of the posts. \subsection{A square array of hexagonal posts} \begin{figure} \centering \includegraphics[width=160mm]{3DHexSqu.png} \caption{Hexagonal posts show the connected contact line mechanism along $\pm x$ and the disconnected contact line mechanism along $\pm y$.} \label{fig:3DHexSqu} \end{figure} Having considered exclusively triangular posts thus far, we now turn our attention to posts whose cross-sections are regular hexagons. We find that the two depinning mechanisms discerned for triangles, in Sec.~\ref{section:pinningMechanism}, may also be applied to hexagons, but that their directional distribution of occurrence is different. We consider hexagonal posts in a square array, oriented so that the corners point along $\pm x$. Along these two directions, as might be expected, the (de)pinning behaviour follows the connected contact line mechanism. Conversely, the faces point along $\pm y$, and it is the disconnected contact line mechanism which determines the (de)pinning in these directions. Fig.~\ref{fig:3DHexSqu} shows the spreading of a film on a square array of hexagons. 
Since advance of the liquid is permitted along $\pm x$ but barred along $\pm y$, a stripe of fluid is formed. \subsection{A hexagonal array of triangular posts} \begin{figure} \centering \includegraphics[width=150mm]{3DTriHex.png} \caption{Spreading on a hexagonal array of triangles} \label{fig:3DTriHex} \end{figure} We now simulate a hexagonal lattice of posts with spacing $d=40$, and the triangles aligned with lattice directions, as shown in Fig.~\ref{fig:3DTriHex}. We start with a circular film with diameter spanning several posts (Fig.~\ref{fig:3DTriHex}(a)). As spreading begins, the film quickly facets into a hexagon, by aligning its sides with posts in the immediate vicinity (Fig.~\ref{fig:3DTriHex}(b)). Spreading continues, via the connected contact line mechanism, along the directions of the three corners of the posts, but the interface is pinned, by the disconnected contact line mechanism, along the faces. As a result, the facets along the corner directions shrink as they advance (Fig.~\ref{fig:3DTriHex}(c)). \section{Discussion} We have performed Lattice Boltzmann simulations of imbibition on hydrophilic substrates patterned with posts, whose cross-sections are regular polygons. Our motivation was to identify pinning mechanisms on the posts and show how these lead to anisotropic spreading behaviour on the surface. We began by considering the advance of a long planar front along a row of triangular posts. This enabled us to take advantage of periodic boundaries in the simulations, reducing computational expense, and to isolate particular pinning behaviours. The simulations showed that the critical value of $\ensuremath{\theta_{\mathrm{Y}}}$ at which the interface advances differs between directions relative to the triangles. Hence there is a range of $\ensuremath{\theta_{\mathrm{Y}}}$ in which spreading is unidirectional, with the exact range and direction of the anisotropy depending on the relative dimensions of the substrate. 
The cause is differing depinning routes: one where the contact line along the base substrate is connected, and one where it is disconnected, punctuated by the blunt edges of the posts. We showed that a square lattice of triangular posts inhibits spreading in one direction, while if hexagonal posts are used, the spreading is bidirectional, with films elongating. Finally we investigated spreading amongst a hexagonal lattice of triangular posts. The three-fold rotational symmetry of this geometry leads to the formation of a triangular film. In future work it would be of interest to consider how the spreading is affected if the post cross section changes with height, and how electrowetting might be used to locally control the contact angle, and hence the spreading characteristics~\cite{Heikenfield}. \begin{acknowledgements} We thank H. Kusumaatmaja, B. M. Mognetti and R. Vrancken for helpful discussions. \end{acknowledgements}
\section{Introduction} In this paper we consider the problem of maintaining a dynamic set of $N$ two-dimensional points from $\mathbb{R}^2$ in external memory, where the set of points can be updated by the insertion and deletion of points, and where two types of queries are supported: 3-sided range reporting queries and top-$k$ queries. More precisely, we consider how to support the following four operations in external memory (see Figure~\ref{fig:queries}): \begin{figure}[t] \centerline{\input{queries.tex}} \caption{3-sided range reporting queries~(left) and top-$k$ queries~(right). The reported points are the white points and $k=3$.} \label{fig:queries} \end{figure} \begin{description} \itemsep0pt \parskip0,5ex \item[$\mathrm{Insert}(p)$] Inserts a new point~$p\in\mathbb{R}^2$ into the set~$S$ of points. If $p$ was already in $S$, the old copy of $p$ is replaced by the new copy of $p$ (this case is relevant if points are allowed to carry additional information). \item[$\mathrm{Delete}(p)$] Deletes a point~$p\in\mathbb{R}^2$ from the current set~$S$ of points. The set remains unchanged if $p$ is not in the set. \item[$\mathrm{Report}(x_1,x_2,y)$] Reports all points contained in $S\cap [x_1,x_2]\times[y,\infty]$. \item[$\mathrm{Top}(x_1,x_2,k)$] Reports the $k$ points contained in $S\cap [x_1,x_2]\times[-\infty,\infty]$ with the highest $y$-values. \end{description} \subsection{Previous work} McCreight introduced the priority search tree~\cite{McCreight85} (for internal memory). The classic result is that priority search trees support updates in $O(\log N)$ time and 3-sided range reporting queries in $O(\log N+K)$ time, where $K$ is the number of points reported. Priority search trees are essentially just balanced heap-ordered binary trees where the root stores the point with maximum $y$-value and the remaining points are distributed among the left and right children such that all points in the left subtree have smaller $x$-value than points in the right subtree. 
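As an illustration of the structure just described (the internal-memory version, with the heap ordered on maximum $y$-value to match the $[y,\infty]$ queries), a minimal static priority search tree might look as follows. The names and the simple quadratic-time construction are ours, for exposition only; they do not reflect the IO-efficient data structure developed in this paper:

```python
class PSTNode:
    __slots__ = ("point", "split", "left", "right")
    def __init__(self, point, split, left, right):
        self.point, self.split = point, split
        self.left, self.right = left, right

def build_pst(points):
    """Heap-ordered on y (maximum at the root); remaining points are
    split by x around a median key stored in `split`."""
    def rec(pts):  # pts is sorted by x
        if not pts:
            return None
        top = max(range(len(pts)), key=lambda i: pts[i][1])
        rest = pts[:top] + pts[top + 1:]        # keeps x-order
        mid = len(rest) // 2
        split = rest[mid][0] if rest else None  # left: x < split, right: x >= split
        return PSTNode(pts[top], split, rec(rest[:mid]), rec(rest[mid:]))
    return rec(sorted(points))

def report(node, x1, x2, y, out):
    """3-sided query: append all points in [x1, x2] x [y, inf) to `out`."""
    if node is None or node.point[1] < y:
        return                                  # heap order prunes the subtree
    px, py = node.point
    if x1 <= px <= x2:
        out.append(node.point)
    if node.split is None:
        return
    if x1 < node.split:
        report(node.left, x1, x2, y, out)
    if node.split <= x2:
        report(node.right, x1, x2, y, out)
```

Each point is stored in exactly one node; a query descends only into subtrees whose $x$-range can intersect $[x_1,x_2]$ and stops in any subtree whose maximum $y$-value falls below the threshold.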
Frederickson~\cite{f93} presented an algorithm selecting the $k$ smallest elements from a binary heap in time $O(k)$, which can be applied quite directly to a priority search tree to support top-$k$ queries in $O(\log N+K)$ time. Icking et~al.~\cite{wg87iko} initiated the study of adapting priority search trees to external memory. Their structure uses space $O(N/B)$ and supports 3-sided range reporting queries using $O(\log_2 N+K/B)$ IOs, where $B$ is the external memory block size. Other early linear space solutions were given in~\cite{bg90} and \cite{krvv96}, supporting queries with $O(\log_B N+K)$ and $O(\log_B N+K/B+\log_2 B)$ IOs, respectively. Ramaswamy and Subramanian in \cite{pods94rs} and \cite{soda95sr} developed data structures that achieve, respectively, optimal query bounds but suboptimal space, and optimal space but suboptimal query bounds (see Table~\ref{tab:results}). The best previous dynamic bounds are obtained by the external memory priority search tree by Arge et al.~\cite{pods99asv}, which supports queries using $O(\log_B N+K/B)$ IOs and updates using $O(\log_B N)$ IOs, using linear space. The space and query bounds of~\cite{pods99asv} are optimal. External memory top-$k$ queries were studied in \cite{soda11abz, pods12st,pods14t}, where Tao in \cite{pods14t} presented a data structure achieving bounds matching those of the external memory priority search tree of Arge et al.~\cite{pods99asv}, updates being amortized. See Table~\ref{tab:results} for an overview of previous results. We improve the update bounds of both \cite{pods99asv} and \cite{pods14t} by a factor $\varepsilon B^{1-\varepsilon}$ by adapting ideas from the buffer trees of Arge~\cite{a03} to the external memory priority search tree~\cite{pods99asv}. \paragraph*{1D dictionaries} The classic B-tree of Bayer and McCreight~\cite{bm72} is the external memory counterpart of binary search trees for storing a set of one-dimensional points. 
A B-tree supports updates and membership/predecessor searches in $O(\log_B N)$ IOs and 1D range reporting queries in $O(\log_B N+K/B)$ IOs, where $K$ is the output size. The query bounds for B-trees are optimal for comparison based external memory data structures, but the update bounds are not. Arge~\cite{a03} introduced the buffer tree as a variant of B-trees supporting \emph{batched} sequences of interleaved updates and queries. A sequence of $N$ operations can be performed using $O(\frac{N}{B}\log_{M/B} \frac{N}{B})$ IOs. The buffer tree has many applications, and can e.g.\ be used as an external memory priority queue and segment tree, and has applications to external memory graph problems and computational geometry problems. By adapting Arge's technique of buffering updates (insertions and deletions) to a B-tree of degree~$B^{\varepsilon}$, where $1>\varepsilon>0$ is a constant, and where each node stores a buffer of $O(B)$ buffered updates, one can achieve updates using amortized $O(\frac{1}{\varepsilon B^{1-\varepsilon}}\log_B N)$ IOs and membership queries in $O(\frac{1}{\varepsilon}\log_B N)$ IOs. Brodal and Fagerberg~\cite{soda03bf} studied the trade-offs between the IO bounds for comparison based updates and membership queries in external memory. They proved the optimality of B-trees with buffers when the amortized update cost is in the range $1/\log^3 N$ to $\log_{B+1} \frac{N}{M}$. Verbin and Zhang~\cite{vz13} and Iacono and P\v{a}tra\c{s}cu~\cite{soda12ip} consider trade-offs between updates and membership queries when hashing is allowed, i.e.\ elements are not indivisible. In~\cite{soda12ip} it is proved that updates can be supported in $O(\frac{\lambda}{B})$ IOs and queries in $O(\log_{\lambda} N)$ IOs, for $\lambda\geq\max\{\log\log N,\log_{M/B} (N/B)\}$. Compared to the comparison based bounds, this essentially removes a factor $\log_B N$ from the update bounds. 
\paragraph*{Related top-$k$ queries} In the RAM model Brodal et al.~\cite{isaac09bfgl} presented a linear space static data structure for the case where the $x$-values are $1,2,\ldots,N$, i.e.\ the input is an array of $y$-values. The data structure supports sorted top-$k$ queries in $O(k)$ time, i.e.\ it reports the top~$k$ points in decreasing $y$-order, one point at a time. Afshani~\cite{soda11abz} studied the problem in external memory and proved a trade-off between space and query time for sorted top-$k$ queries: data structures with query time $\log^{O(1)}N+O(cK/B)$ require space $\Omega\left(\frac{N}{B}\frac{\frac{1}{c}\log_M \frac{N}{B}}{\log (\frac{1}{c}\log_M\frac{N}{B})}\right)$ blocks. It follows that for linear space top-$k$ data structures it is crucial that we focus on unsorted range queries. Rahul et al.~\cite{walcom11rgjr} and Rahul and Tao~\cite{pods15rt} consider the static top-$k$ problem for 2D points with associated real weights, where queries report the top-$k$ points with respect to weight contained in an axis-parallel rectangle. Rahul and Tao~\cite{pods15rt} achieve query time $O(\log_B N+K/B)$ using space $O(\frac{N}{B}\frac{\log N\cdot(\log\log B)^2}{\log\log_B N})$, $O(\frac{N}{B}\frac{\log N}{\log\log_B N})$, and $O(N/B)$ for supporting 4-sided, 3-sided and 2-sided top-$k$ queries, respectively. 
\begin{table}[t] \newcommand{\AM}{^\dag} \begin{center} \tabcolsep5pt \small \begin{tabular}{ccccc} Query & Reference & Update & Query & Construction \\ \hline & \cite{pods94rs} & $O(\log N\cdot\log B)\AM$ & $O(\log_B N+K/B)$ & \\ & \cite{soda95sr} & $O(\log_B N+(\log_B N)^2/B)\AM$ & $O(\log_B N+K/B+\mathcal{IL}^*(B))$ & \\ \raisebox{1.5ex}[0pt]{3-sided} & \cite{pods99asv} & $O(\log_B N)$ & $O(\log_B N+K/B)$ & \\ & \textbf{New} & $O(\frac{1}{\varepsilon B^{1-\varepsilon}}\log_B N)\AM$ & $O(\frac{1}{\varepsilon}\log_B N+ K/B)\AM$ & $O(\Sort(N))$ \\ \hline & \cite{soda11abz} & (static) & $O(\log_B N+ K/B)$ & \\ & \cite{pods12st} & $O(\log_B^2 N)\AM$ & $O(\log_B N + K/B)$ & $O(\Sort(N))$ \\ \raisebox{1.5ex}[0pt]{Top-$k$} & \cite{pods14t} & $O(\log_B N)\AM$ & $O(\log_B N + K/B)$ & \\ & \textbf{New} & $O(\frac{1}{\varepsilon B^{1-\varepsilon}}\log_B N)\AM$ & $O(\frac{1}{\varepsilon}\log_B N+ K/B)\AM$ & $O(\Sort(N))$ \\ \hline \end{tabular} \end{center} \caption{Previous and new external-memory 3-sided range reporting and top-$k$ data structures. All query bounds are optimal except~\cite{soda95sr}. Amortized bounds are marked ``$\AM$'', and $\varepsilon$ is a constant satisfying $1>\varepsilon>0$. All data structures require space $O(N/B)$, except \cite{pods94rs} requiring space $O(\frac{N}{B}\log B\log\log B)$. $\mathcal{IL}^*(x)$ denotes the number of times $\log^*$ must be applied before the result becomes $\leq 2$.} \label{tab:results} \end{table} \subsection{Model of computation} The results of this paper are in the external memory model of Aggarwal and Vitter~\cite{av88} consisting of a two-level memory hierarchy with an unbounded external memory and an internal memory of size~$M$. An IO transfers $B\leq M/2$ consecutive records between internal and external memory. Computation can only be performed on records in internal memory. 
The basic results in the model are that scanning and sorting an array require $\Theta(\Scan(N))$ and $\Theta(\Sort(N))$ IOs, where $\Scan(N)=\frac{N}{B}$ and $\Sort(N) =\frac{N}{B}\log_{M/B} \frac{N}{B}$, respectively~\cite{av88}. In this paper we assume that the only operation on points is the comparison of coordinates. For the sake of simplicity, in the following we assume that all points have distinct $x$- and $y$-values. If this is not the case, we can extend the $x$-ordering to the lexicographical order $\prec_x$ where $(x_1,y_1)\prec_x(x_2,y_2)$ if and only if $x_1<x_2$, or $x_1=x_2$ and $y_1<y_2$, and similarly for the comparison of $y$-values. \subsection{Our results} This paper provides the first external memory data structure for 3-sided range reporting queries and top-$k$ queries with amortized sublogarithmic updates. \begin{theorem} \label{thm:main} For any constant $\varepsilon$, $0<\varepsilon\leq\frac{1}{2}$, there exists an external memory data structure supporting the insertion and deletion of points in amortized $O(\frac{1}{\varepsilon B^{1-\varepsilon}}\log_B N)$ IOs and 3-sided range reporting queries and top-$k$ queries in amortized $O(\frac{1}{\varepsilon}\log_B N+K/B)$ IOs, where $N$ is the current number of points and $K$ is the size of the query output. Given an $x$-sorted set of $N$ points, the structure can be constructed with amortized $O(N/B)$ IOs. The space usage of the data structure is $O(N/B)$ blocks. \end{theorem} To achieve the results in Theorem~\ref{thm:main} we combine the external memory priority search tree of Arge et al.~\cite{pods99asv} with the idea of buffered updates from the buffer tree of Arge~\cite{a03}. Buffered insertions and deletions move downwards in the priority search tree in batches, whereas points with large $y$-values move upwards in the tree in batches.
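The lexicographic tie-breaking of $\prec_x$ (and its $y$-counterpart) can be sketched as follows; this is a minimal Python illustration with points as pairs, our own notation rather than part of the paper's machinery.

```python
# Sketch of the tie-breaking orders that make all x- and y-values
# distinct. Points are (x, y) pairs; Python tuples already compare
# lexicographically, which is exactly the order prec_x.

def prec_x(p, q):
    """(x1,y1) <_x (x2,y2) iff x1 < x2, or x1 == x2 and y1 < y2."""
    return p < q

def prec_y(p, q):
    """Symmetric order on y-values, breaking ties by x-value."""
    return (p[1], p[0]) < (q[1], q[0])
```

With these orders every comparison of coordinates has a strict outcome, so the distinctness assumption is without loss of generality.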
We reuse the dynamic substructure of \cite{pods99asv} for storing $O(B^2)$ points at each node of the priority search tree, except that we reduce its capacity to $B^{1+\varepsilon}$ to achieve amortized $o(1)$ IOs per update. The major technical novelty in this paper lies in the top-$k$ query (Section~\ref{sec:top-k}) that makes essential use of Frederickson's binary heap selection algorithm~\cite{f93} to select an approximate $y$-value, which allows us to reduce top-$k$ queries to 3-sided range reporting queries combined with standard selection~\cite{bfprt73}. One might wonder if the bounds of Theorem~\ref{thm:main} are the best possible. Both 3-sided range reporting queries and top-$k$ queries can be used to implement a dynamic 1D dictionary with membership queries by storing a value $x\in\mathbb{R}$ as the 2D point $(x,x)\in\mathbb{R}^2$. A dictionary membership query for~$x$ can then be answered by the 3-sided query $[x,x]\times[-\infty,\infty]$ or a top-1 query for $[x,x]$. If our queries had been worst-case instead of amortized, it would follow from~\cite{soda03bf} that our data structure achieves an optimal trade-off between the worst-case query time and amortized update time for the range where the update cost is between $1/\log^3 N$ and $\log_{B+1} \frac{N}{M}$. Unfortunately, our query bounds are inherently amortized, so the argument does not apply, and it remains an open problem if the bounds in Theorem~\ref{thm:main} can be obtained in the worst case. Throughout the paper we apply the amortized analysis framework of Tarjan~\cite{tarjan85}. \paragraph*{Outline of paper} In Section~\ref{sec:child-structure} we describe our data structure for point sets of size $O(B^{1+\varepsilon})$. In Section~\ref{sec:structure} we define our general data structure.
In Section~\ref{sec:updates} we describe how to support updates, in Section~\ref{sec:global-rebuilding} the application of global rebuilding, and in Sections~\ref{sec:3-sided} and~\ref{sec:top-k} how to support 3-sided range reporting and top-$k$ queries, respectively. In Section~\ref{sec:construction} we describe how to construct the data structure for a given point set. \section{$O(B^{1+\varepsilon})$ structure} \label{sec:child-structure} In this section we describe a data structure for storing a set of $O(B^{1+\varepsilon})$ points, for a constant $0\leq\varepsilon\leq \frac{1}{2}$, that supports 3-sided range reporting queries using $O(1+K/B)$ IOs and the batched insertion and deletion of $s\leq B$~points using amortized $O(1+s/B^{1-\varepsilon})$ IOs. The structure is essentially identical to the external memory priority search structure of Arge et al.~\cite[Section~3.1]{pods99asv} for handling $O(B^2)$ points. The main difference is that we reduce the capacity of the data structure to obtain amortized $o(1)$ IOs per update, and that we augment the data structure with a sampling operation required by our top-$k$ queries. Intuitively, a sampling selects the $y$-value of approximately every $B$th point, in decreasing $y$-order, within a query range $[x_1,x_2]\times[-\infty,\infty]$, and takes $O(1)$ IOs. In the following we describe how to support the below operations within the bounds stated in Theorem~\ref{thm:child-structure}. \begin{description} \itemsep0pt \parskip0.5ex \item[$\mathrm{Insert}(p_1,\ldots,p_s)$] Inserts the points $p_1,\ldots,p_s$ into the structure, where $1\leq s\leq B$. \item[$\mathrm{Delete}(p_1,\ldots,p_s)$] Deletes the points $p_1,\ldots,p_s$ from the structure, where $1\leq s\leq B$. \item[$\mathrm{Report}(x_1,x_2,y)$] Reports all points within the query range $[x_1,x_2]\times[y,\infty]$.
\item[$\mathrm{Sample}(x_1,x_2)$] Returns a decreasing sequence of $O(B^{\varepsilon})$ $y$-values $y_1\geq y_2\geq \cdots$ such that for each $y_i$ there are between $iB$ and $iB+\alpha B$ points in the range~$[x_1,x_2]\times[y_i,\infty]$, for some constant $\alpha\geq 1$. Note that this implies that in the range $[x_1,x_2]\times[y_{i+1},y_i[$ there are between 0 and $(1+\alpha)B$ points. \end{description} \begin{theorem} \label{thm:child-structure} There exists a data structure for storing $O(B^{1+\varepsilon})$ points, $0\leq \varepsilon\leq \frac{1}{2}$, where the insertion and deletion of $s$ points requires amortized $O(1+s/B^{1-\varepsilon})$ IOs. Report queries use $O(1+K/B)$ IOs, where $K$ is the number of points returned, and Sample queries use $O(1)$ IOs. Given an $x$-sorted set of $N$ points, the structure can be constructed with $O(N/B)$ IOs. The space usage is linear. \end{theorem} \paragraph*{Data structure} Our data structure $\mathcal{C}$ consists of four parts: a static data structure $\mathcal{L}$ storing $O(B^{1+\varepsilon})$ points; two buffers $\mathcal{I}$ and $\mathcal{D}$ of delayed insertions and deletions, respectively, each containing at most $B$ points; and a set $\mathcal{S}$ of $O(B)$ sampled $y$-values. A point can appear at most once in $\mathcal{I}$ and~$\mathcal{D}$, and in at most one of them. Initially all points are stored in $\mathcal{L}$, and $\mathcal{I}$ and $\mathcal{D}$ are empty. Let $L$ be the points in the $\mathcal{L}$ structure and let $\ell=\lceil |L|/B\rceil$. The data structure $\mathcal{L}$ consists of $2\ell-1$ blocks. The points in $L$ are first partitioned left-to-right with respect to $x$-value into blocks $b_1,\ldots,b_\ell$, each of size $B$ except possibly the rightmost block~$b_\ell$ of size $\leq B$. Next we make a vertical sweep over the points in increasing $y$-order.
Whenever the sweepline reaches a point in a block where the block together with an adjacent block contains exactly $B$ points on or above the sweepline, we replace the two blocks by one block only containing these $B$ points. Since each such block contains exactly the points on or above the sweepline for a subrange $b_i,\ldots,b_j$ of the initial blocks, we denote such a block $b_{i,j}$. The two replaced blocks remain stored in $\mathcal{L}$ but are no longer part of the vertical sweep. Since each fusion of adjacent blocks causes the sweepline to intersect one block fewer, it follows that at most $\ell-1$ such blocks can be created. Figure~\ref{fig:child-structure} illustrates the constructed blocks, where each constructed block is illustrated by a horizontal line segment, and the points contained in the block are exactly all the points on or above the corresponding line segment. Finally, we have a ``catalog'' storing a reference to each of the $2\ell-1$ blocks of $\mathcal{L}$. For a block $b_i$ we store the minimum and maximum $x$-values of the points within the block. For blocks~$b_{i,j}$ we store the interval $[i,j]$ and the minimum $y$-value of a point in the block, i.e.\ the $y$-value where the sweep caused block~$b_{i,j}$ to be created. \begin{figure}[t] \centerline{\input{child-structure.tex}} \caption{$O(B^{1+\varepsilon})$ structure for $B=4$. White nodes are the points. Horizontal line segments with black endpoints illustrate the blocks stored. Each block stores the $B$ points on and above the line segment.} \label{fig:child-structure} \end{figure} The set $\mathcal{S}$ consists of the $\lceil i\cdot B^{\varepsilon}\rceil$-th highest $y$-values in each of the blocks $b_1,\ldots,b_\ell$ for $1\leq i\leq B^{1-\varepsilon}$. Since $\ell=O(B^\varepsilon)$, the total number of points in $\mathcal{S}$ is $O(B^\varepsilon\cdot B^{1-\varepsilon})=O(B)$. The sets $\mathcal{S}$, $\mathcal{I}$, $\mathcal{D}$ and the catalog are stored in $O(1)$ blocks.
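The vertical sweep producing the fused blocks $b_{i,j}$ can be sketched as follows. This is a simplified in-memory Python illustration (it counts the points strictly above the sweepline and ignores the paging into disk blocks), not the paper's IO-efficient construction; all names are ours.

```python
def build_blocks(points, B):
    """Partition points into x-blocks of size B, then sweep upward in
    y-order, fusing two adjacent active blocks whenever the points of
    both blocks strictly above the sweepline number exactly B."""
    pts = sorted(points)                                  # by x-value
    base = [pts[i:i + B] for i in range(0, len(pts), B)]
    blocks = [((i, i), blk) for i, blk in enumerate(base)]
    # active blocks: [x-index range, points still above the sweepline]
    active = [[(i, i), list(blk)] for i, blk in enumerate(base)]
    for p in sorted((q for blk in base for q in blk), key=lambda q: q[1]):
        for a in active:                  # drop p from its active block
            if p in a[1]:
                a[1].remove(p)
                break
        t = 0      # fuse adjacent active blocks holding B points in total
        while t + 1 < len(active):
            (i, _), left = active[t]
            (_, j), right = active[t + 1]
            if len(left) + len(right) == B:
                blocks.append(((i, j), left + right))
                active[t:t + 2] = [[(i, j), left + right]]
            else:
                t += 1
    return blocks
```

For $\ell$ base blocks at most $\ell-1$ fused blocks are created, matching the $2\ell-1$ blocks referenced by the catalog.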
\paragraph*{Updates} Whenever points are inserted or deleted we store the delayed updates in $\mathcal{I}$ or $\mathcal{D}$, respectively. Before adding a point $p$ to $\mathcal{I}$ or $\mathcal{D}$ we remove any existing occurrence of $p$ in $\mathcal{I}$ and~$\mathcal{D}$, since the new update overrides all previous updates of~$p$. Whenever $\mathcal{I}$ or $\mathcal{D}$ overflows, i.e.\ gets size $>B$, we apply the updates to the set of points in $\mathcal{L}$, and rebuild $\mathcal{L}$ for the updated point set. To rebuild~$\mathcal{L}$, we extract the points $L$ in $\mathcal{L}$ in increasing $x$-order from the blocks $b_1,\ldots,b_\ell$ in $O(\ell)$ IOs, and apply the $O(B)$ updates in $\mathcal{I}$ and $\mathcal{D}$ during the scan of the points to obtain the updated point set~$L'$. We split $L'$ into new blocks $b_1,\ldots,b_{\ell'}$ and perform the vertical sweep by holding in internal memory a priority queue storing for each adjacent pair of blocks the $y$-value where the blocks should potentially be fused. This allows the construction of each of the remaining blocks~$b_{i,j}$ of $\mathcal{L}$ in $O(1)$ IOs per block. The reconstruction takes worst-case $O(\ell')$ IOs. Since $|L|=O(B^{1+\varepsilon})$ and the reconstruction of $\mathcal{L}$ whenever a buffer overflow occurs requires $O(|L|/B)=O(B^\varepsilon)$ IOs, the amortized cost of reconstructing $\mathcal{L}$ is $O(1/B^{1-\varepsilon})$ IOs per buffered update. \paragraph*{3-sided reporting queries} For a 3-sided range reporting query $Q=[x_1,x_2]\times[y,\infty]$, the $t$ line segments immediately below the bottom segment of the query range~$Q$ correspond exactly to the blocks intersected by the sweep when it was at $y$, and these blocks contain a superset of the points contained in $Q$. In Figure~\ref{fig:child-structure} the grey area shows a 3-sided range reporting query $Q=[x_1,x_2]\times[y,\infty]$, where the relevant blocks are $b_{3,4}$, $b_5$ and $b_{6,7}$.
By construction, any two consecutive of these blocks contain at least $B$ points on or above the sweepline at~$y$. Since the leftmost and rightmost of these blocks do not necessarily contain any points from $Q$, it follows that the output size of the range query $Q$ satisfies $K\geq B\lfloor(t-2)/2\rfloor$. The relevant blocks can be found directly from the catalog using $O(1)$ IOs, and the query is performed by scanning these $t$ blocks and reporting the points contained in~$Q$. The total number of IOs becomes $O(1+t)=O(1+K/B)$. \paragraph*{Sampling queries} To perform a sampling query for the range $[x_1,x_2]$ we only consider $\mathcal{L}$, i.e.\ we ignore the $O(B)$ buffered updates. We first identify the two blocks $b_i$ and $b_j$ spanning $x_1$ and~$x_2$, respectively, by finding the predecessor of $x_1$ (successor of $x_2$) among the minimum (maximum) $x$-values stored in the catalog. The sampled $y$-values in $\mathcal{S}$ for the blocks $b_{i+1},\ldots,b_{j-1}$ are extracted in decreasing $y$-order, and the $\lceil (s+1)\cdot B^{1-\varepsilon}\rceil$-th $y$-values are returned from this list for $s=1,2,\ldots$. Let $y_1\geq y_2 \geq \cdots$ denote these returned $y$-values. We now bound the number of points in $\mathcal{C}$ contained in the range $Q_s=[x_1,x_2]\times[y_s,\infty]$. By construction there are $\lceil (s+1)\cdot B^{1-\varepsilon}\rceil$ $y$-values $\geq y_s$ in $\mathcal{S}$ from points in $b_{i+1}\cup\cdots\cup b_{j-1}$. In each~$b_t$ there are at most $\lceil B^\varepsilon\rceil$ points vertically between consecutive sampled $y$-values in~$\mathcal{S}$. Assume there are $n_t$ sampled $y$-values $\geq y_s$ in $\mathcal{S}$ from points in $b_t$, i.e.\ $n_{i+1}+\cdots+n_{j-1} = \lceil (s+1)\cdot B^{1-\varepsilon}\rceil$.
The number of points in $b_t$ with $y$-value $\geq y_s$ is at least $\lceil n_t B^\varepsilon\rceil$ and less than $\lceil (n_t+1) B^\varepsilon\rceil$, implying that the total number of points in $Q_s\cap (b_{i+1}\cup\cdots\cup b_{j-1})$ is at least $\sum_{t=i+1}^{j-1} \lceil n_tB^\varepsilon\rceil\geq B^\varepsilon \sum_{t=i+1}^{j-1} n_t = B^\varepsilon\lceil (s+1)\cdot B^{1-\varepsilon}\rceil\geq (s+1)B$ and at most $\sum_{t=i+1}^{j-1} (n_t+1)B^\varepsilon = (j-i-1)B^\varepsilon+B^\varepsilon\sum_{t=i+1}^{j-1} n_t = (j-i-1)B^\varepsilon+B^\varepsilon\lceil (s+1)\cdot B^{1-\varepsilon}\rceil \leq (j-i)B^\varepsilon+(s+1)B$. Since the buffered deletions in $\mathcal{D}$ cancel at most $B$ points from $\mathcal{L}$, it follows that there are at least $(s+1)B-B=sB$ points in the range $Q_s$. Since there are at most $B$ buffered insertions in $\mathcal{I}$ and $B$ points in each of the blocks~$b_i$ and~$b_j$, it follows that $Q_s$ contains at most $(j-i)B^\varepsilon+(s+1)B+3B=sB+O(B)$ points, since $j-i=O(B^\varepsilon)$ and $\varepsilon\leq \frac{1}{2}$. It follows that the generated sample has the desired properties. Since the query is answered by reading only the catalog and $\mathcal{S}$, it requires only $O(1)$ IOs. Note that the returned $y$-values might belong to points already deleted by buffered deletions in~$\mathcal{D}$. \section{The data structure} \label{sec:structure} To achieve our main result, Theorem~\ref{thm:main}, we combine the external memory priority search tree of Arge et al.~\cite{pods99asv} with the idea of buffered updates from the buffer tree of Arge~\cite{a03}. As in~\cite{pods99asv}, we have at each node of the priority search tree an instance of the data structure of Section~\ref{sec:child-structure} to handle queries on the children efficiently.
The major technical novelty lies in the top-$k$ query (Section~\ref{sec:top-k}) that makes essential use of Frederickson's binary heap selection algorithm~\cite{f93} and our samplings from Section~\ref{sec:child-structure}. \paragraph*{Structure} The basic structure is a B-tree~\cite{bm72} $T$ over the $x$-values of points, where the degree of each internal node is in the range $[\Delta/2,\Delta]$, where $\Delta=\lceil B^\varepsilon\rceil$, except for the root~$r$ that is allowed to have degree in the range $[2,\Delta]$. Each node $v$ of $T$ stores three buffers containing $O(B)$ points: a \emph{point buffer} $P_v$, an \emph{insertion buffer}~$I_v$, and a \emph{deletion buffer}~$D_v$. The intuitive idea is that $T$ together with the $P_v$ sets forms an external memory priority search tree, i.e.\ a point in $P_v$ has larger $y$-value than all points in $P_w$ for all descendants $w$ of $v$, and that the $I_v$ and $D_v$ sets are delayed insertions and deletions on the way down through $T$ that we will handle recursively in batches when buffers overflow. A point $p\in I_v$ ($p\in D_v$) should eventually be inserted in (deleted from) one of the $P_w$ buffers at a descendant~$w$ of $v$. Finally, for each internal node~$v$ with children $c_1,\ldots,c_\delta$ we have a data structure $\mathcal{C}_v$ storing $\cup_{i=1}^{\delta} P_{c_i}$, that is an instance of the data structure from Section~\ref{sec:child-structure}. In a separate block at $v$ we store for each child $c_i$ the minimum $y$-value of a point in $P_{c_i}$, or $+\infty$ if $P_{c_i}$ is empty. We assume that all information at the root is kept in internal memory, except for~$\mathcal{C}_r$. \paragraph*{Invariants} For a node $v$, the buffers $P_v$, $I_v$ and $D_v$ are disjoint and all points have $x$-values in the $x$-range spanned by the subtree~$T_v$ rooted at~$v$ in $T$. All points in $I_v\cup D_v$ have $y$-value less than the points in $P_v$. In particular, leaves have empty $I_v$ and $D_v$ buffers.
If a point appears in a buffer at a node~$v$ and at a descendant $w$, the update at $v$ is the most recent. The sets stored at a node~$v$ must satisfy one of the below size invariants, guaranteeing that either $P_v$ contains at least $B/2$ points, or all insertion and deletion buffers in $T_v$ are empty and all points~in $T_v$ are stored in the point buffer~$P_v$. \begin{enumerate} \itemsep0.5ex \parskip0ex \item $B/2 \leq |P_v| \leq B$, $|D_v| \leq B/4$, and $|I_v| \leq B$, or \item $|P_v|<B/2$, $I_v=D_v=\emptyset$, and $P_w=I_w=D_w=\emptyset$ for all descendants $w$ of $v$ in $T$. \end{enumerate} \section{Updates} \label{sec:updates} Consider the insertion or deletion of a point $p=(p_x,p_y)$. First we remove any (outdated) occurrence of $p$ from the root buffers $P_r$, $I_r$ and $D_r$. If $p_y$ is smaller than the smallest $y$-value in $P_r$, then $p$ is inserted into $I_r$ or $D_r$, respectively. Finally, for an insertion where $p_y$ is larger than or equal to the smallest $y$-value in $P_r$, the point $p$ is inserted into $P_r$. If $P_r$ overflows, i.e.\ $|P_r|=B+1$, we move a point with smallest $y$-value from $P_r$ to $I_r$. During the update above, the $I_r$ and $D_r$ buffers might overflow, which we handle by the five steps described below: (\textit{i}) handle overflowing deletion buffers, (\textit{ii}) handle overflowing insertion buffers, (\textit{iii}) split leaves with overflowing point buffers, (\textit{iv}) recursively split nodes of degree $\Delta+1$, and (\textit{v}) recursively fill underflowing point buffers. For deletions only (\textit{i}) and (\textit{v}) are relevant, whereas for insertions (\textit{ii})--(\textit{v}) are relevant. (\textit{i}) If a deletion buffer $D_v$ overflows, i.e.\ $|D_v|>B/4$, then by the pigeonhole principle there must exist a child $c$ to which we can push a subset $U\subseteq D_v$ of at least $\lceil|D_v|/\Delta\rceil$ deletions. We first remove all points in $U$ from $D_v$, $I_c$, $D_c$, $P_c$, and $\mathcal{C}_v$.
Any point~$p$ in $U$ with $y$-value larger than or equal to the minimum $y$-value in $P_c$ is removed from $U$ (since the deletion of $p$ cannot cancel further updates). If $c$ is a leaf, we are done. Otherwise, we add the remaining points in $U$ to $D_c$, which might overflow and cause a recursive push of buffered deletions. In the worst case, deletion buffers overflow all the way along a path from the root to a single leaf, each time causing at most $\lceil B/\Delta\rceil$ points to be pushed one level down. Updating a $\mathcal{C}_v$ buffer with $O(B/\Delta)$ updates takes amortized $O(1+(B/\Delta)/B^{1-\varepsilon})=O(1)$ IOs. (\textit{ii}) If an insertion buffer $I_v$ overflows, i.e.\ $|I_v|>B$, then by the pigeonhole principle there must exist a child $c$ to which we can push a subset $U\subseteq I_v$ of at least $\lceil|I_v|/\Delta\rceil$ insertions. We first remove all points in $U$ from $I_v$, $I_c$, $D_c$, $P_c$, and $\mathcal{C}_v$. Any point in $U$ with $y$-value larger than or equal to the minimum $y$-value in $P_c$ is inserted into $P_c$ and $\mathcal{C}_v$ and removed from $U$ (since the insertion cannot cancel further updates). If $P_c$ overflows, i.e.\ $|P_c|>B$, we repeatedly move the points with smallest $y$-value from $P_c$ to $U$ until $|P_c|=B$. If $c$ is a leaf, all points in $U$ are inserted into $P_c$ (which might overflow), and $U$ is now empty. Otherwise, we add the remaining points in $U$ to $I_c$, which might overflow and cause a recursive push of buffered insertions. As for deletions, in the worst case insertion buffers overflow all the way along a path from the root to a single leaf, each time causing $O(B/\Delta)$ points to be pushed one level down. Updating a $\mathcal{C}_v$ buffer with $O(B/\Delta)$ updates takes amortized $O(1+(B/\Delta)/B^{1-\varepsilon})=O(1)$ IOs. (\textit{iii}) If the point buffer~$P_v$ at a leaf~$v$ overflows, i.e.
$|P_v|>B$, we split the leaf $v$ into two nodes $v'$ and $v''$, and distribute the points of $P_v$ evenly among $P_{v'}$ and $P_{v''}$ using $O(1)$ IOs. Note that the insertion and deletion buffers of all the involved nodes are empty. The splitting might cause the parent to get degree~$\Delta+1$. (\textit{iv}) While some node~$v$ has degree $\Delta+1$, split the node into two nodes $v'$ and $v''$ and distribute $P_v$, $I_v$ and $D_v$ among the buffers at the nodes $v'$ and $v''$ w.r.t.\ $x$-value. Finally construct $\mathcal{C}_{v'}$ and~$\mathcal{C}_{v''}$ from the children point sets~$P_c$. In the worst case all nodes along a single leaf-to-root path will have to split, where the splitting of a single node costs $O(\Delta)$ IOs, due to reconstructing $\mathcal{C}$ structures. (\textit{v}) While some node~$v$ has an underflowing point buffer, i.e.\ $|P_v|<B/2$, we try to move the top $B/2$ points from $v$'s children into $P_v$. If no subtree below $v$ stores any points, we remove all points from $D_v$, and repeatedly move the point with maximum $y$-value from $I_v$ to $P_v$ until either $|P_v|=B$ or $I_v=\emptyset$. Otherwise, we scan the children's point buffers $P_{c_1},\ldots,P_{c_\delta}$ using $O(\Delta)$ IOs to identify the $B/2$ points with largest $y$-value, where we only read the children with nonempty point buffers (information about empty point buffers at the children is stored at $v$, since we store the minimum $y$-value in each of the children's point buffers). These points $X$ are then deleted from the children's $P_{c_i}$ lists using $O(\Delta)$ IOs and from $\mathcal{C}_v$ using $O(B^{\varepsilon})=O(\Delta)$ IOs. All points in $X\cap D_v$ are removed from $X$ and $D_v$ (since they cannot cancel further updates below~$v$). For all points $p\in X\cap I_v$, the occurrence of $p$ in $X$ is removed and the more recent occurrence in $I_v$ is moved to $X$.
While the highest point in $I_v$ has higher $y$-value than the lowest point in $X$, we swap these two points to satisfy the ordering among buffer points. Finally, all remaining points in $X$ are inserted into $P_v$ using $O(1)$ IOs and into $\mathcal{C}_u$ using $O(B^{\varepsilon})=O(\Delta)$ IOs, where $u$ is the parent of~$v$. The total cost for pulling these up to $B/2$ points one level up in~$T$ is $O(\Delta)$ IOs. It is crucial that we do the pulling up of points bottom-up, such that we always fill the lowest node in the tree, which guarantees that children always have non-underflowing point buffers if possible. After having pulled points from the children, we need to check if any of the children's point buffers underflows and should be refilled. \paragraph*{Analysis} The tree $T$ is rebalanced during updates by the splitting of leaves and internal nodes. We do not fuse nodes to handle deletions. Instead we apply global rebuilding whenever a linear number of updates have been performed (see Section~\ref{sec:global-rebuilding}). A leaf $v$ will only be split into two leaves whenever its $P_v$ buffer overflows, i.e.\ when $|P_v|>B$. It follows that the total number of leaves created during a total of $N$ insertions is at most $O(N/B)$, implying that at most $O(\frac{N}{\Delta B})$ internal nodes can be created by the recursive splitting of nodes. It follows that $T$ has height $O(\log_\Delta\frac{N}{B})=O(\frac{1}{\varepsilon}\log_B N)$. For every $\Theta(B/\Delta)$ updates, in (\textit{i}) and (\textit{ii}) amortized $O(1)$ IOs are spent on each of the $O(\log_\Delta \frac{N}{B})$ levels of $T$, i.e.\ amortized $O(\frac{\Delta}{B} \log_\Delta \frac{N}{B})=O(\frac{1}{\varepsilon B^{1-\varepsilon}}\log_B N)$ IOs per update. For a sequence of $N$ updates, in (\textit{iii}) at most $O(N/B)$ leaves are created requiring $O(1)$ IOs each, and in (\textit{iv}) at most $O(\frac{N}{B\Delta})$ non-leaf nodes are created.
The creation of each non-leaf node costs amortized $O(\Delta)$ IOs, i.e.\ in total $O(N/B)$ IOs, and amortized $O(1/B)$ IOs per update. The analysis of (\textit{v}) is more complicated, since the recursive filling can trigger cascaded recursive refillings. Every refilling of a node takes $O(\Delta)$ IOs and moves $\Theta(B)$ points one level up in the tree's point buffers (some of these points can be eliminated from the data structure during this move). Since each point can move at most $O(\log_\Delta \frac{N}{B})$ levels up, the total number of IOs for the refillings during a sequence of $N$ operations is amortized $O(\frac{N}{B}\Delta \log_\Delta \frac{N}{B})$ IOs, i.e.\ amortized $O(\frac{1}{\varepsilon B^{1-\varepsilon}}\log_B N)$ IOs per point. The preceding argument ignores two cases. The first case is that during the pull up of points some points from $P_c$ and $I_v$ swap r\^oles due to their relative $y$-values. This does not change the accounting, since the number of points moved one level up does not change due to this change of r\^ole. The second case is when the children of a node altogether have fewer than $B/2$ points, i.e.\ we do not move as many points up as promised. In this case we move to~$v$ all points we find at the children of~$v$, such that these children become empty and cannot be read again before new points have been pushed down to these nodes. We can now do a simple amortization argument: by double charging the IOs we have previously counted for pushing points to a child, we can ensure that each node with non-empty point buffer always has saved an IO for being emptied. It follows that the above calculations remain valid. \section{Global rebuilding} \label{sec:global-rebuilding} We adopt the technique of global rebuilding \cite[Chapter 5]{o83} to guarantee that $T$ is balanced. We partition the sequence of updates into epochs.
If the data structure stores $\bar{N}$ points at the beginning of an epoch, the next epoch starts after $\bar{N}/2$ updates have been performed. This ensures that during the epoch the current size satisfies $\frac{1}{2}\bar{N} \leq N \leq \frac{3}{2}\bar{N}$, and that $T$ has height $O(\frac{1}{\varepsilon}\log_B \frac{3\bar{N}}{2})=O(\frac{1}{\varepsilon}\log_B N)$. At the beginning of an epoch we rebuild the structure from scratch by constructing a new empty structure and reinserting all the non-deleted points from the previous structure. We identify the points to insert in a top-down traversal of $T$, always flushing the insertion and deletion buffers of a node $v$ to its children and inserting all points of $P_v$ into the new tree. The insertion and deletion buffers might temporarily have size $\omega(B)$. To be able to filter out deleted points etc., we maintain the buffers $P_v$, $I_v$, and $D_v$ in lexicographically sorted order. Since level~$i$ (leaves being level~0) contains at most $\frac{3\bar{N}}{2B(\Delta/2)^i}$ nodes, i.e.\ stores $O(\frac{\bar{N}}{(\Delta/2)^i})$ points to be reported and buffered updates to be moved $i$ levels down, the total cost of flushing all buffers is $O(\sum_{i=0}^{\infty} (i+1)\frac{\bar{N}}{B(\Delta/2)^i})=O(\frac{\bar{N}}{B})$ IOs. The $O(\bar{N})$ reinsertions into the new tree can be done in $O(\frac{\bar{N}}{\varepsilon B^{1-\varepsilon}} \log_B \bar{N})$ IOs. The $\bar{N}/2$ updates during an epoch are each charged a constant factor amortized overhead to cover the $O(\frac{\bar{N}}{\varepsilon B^{1-\varepsilon}} \log_B \bar{N})$ IO cost of rebuilding the structure at the end of the epoch.
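The epoch bookkeeping behind the size bound $\frac{1}{2}\bar{N}\leq N\leq\frac{3}{2}\bar{N}$ can be sketched as follows; this is our own minimal Python model of the counting, not the paper's implementation, and it assumes the structure is nonempty at the start of each epoch.

```python
class EpochCounter:
    """An epoch starts with nbar points and ends after nbar/2 updates,
    so the current size n stays within [nbar/2, 3*nbar/2] throughout."""

    def __init__(self, n_points):
        self.n = n_points           # current number of points N
        self.start_epoch()

    def start_epoch(self):
        self.nbar = self.n          # size at the beginning of the epoch
        self.updates_left = max(1, self.nbar // 2)

    def update(self, delta):        # delta = +1 insert, -1 delete
        self.n += delta
        assert self.nbar / 2 <= self.n <= 3 * self.nbar / 2
        self.updates_left -= 1
        if self.updates_left == 0:  # epoch over: rebuild from scratch
            self.start_epoch()
            return True             # signal that a rebuild happened
        return False
```

Spreading the rebuild cost over the $\bar{N}/2$ updates of the epoch yields the constant-factor amortized overhead claimed above.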
\section{3-sided range reporting queries} \label{sec:3-sided} Our implementation of 3-sided range reporting queries $Q=[x_1,x_2]\times[y,\infty]$ consists of three steps: identify the nodes to \emph{visit} for reporting points, push down buffered insertions and deletions between visited nodes, and finally return the points in the query range~$Q$. We recursively identify the nodes to visit as the $O(\frac{1}{\varepsilon}\log_B N)$ nodes on the two root-to-leaf search paths in $T$ for $x_1$ and $x_2$, and all nodes $v$ between $x_1$ and $x_2$ where all points in $P_v$ are in~$Q$. We can check if we should visit a node $w$ without reading the node, by comparing $y$ with the minimum $y$-value in $P_w$ that is stored at the parent of $w$. It follows that all points to be reported by $Q$ are contained in the $P_v$ and $I_v$ buffers of visited nodes $v$ or point buffers at the children of visited nodes, i.e.\ in $\mathcal{C}_v$. Note that some of the points in the $P_v$, $I_v$ and $\mathcal{C}_v$ sets might have been deleted by buffered updates at visited ancestor nodes. A simple worst-case solution for answering queries would be to extract for all visited nodes~$v$ all points from $P_v$, $I_v$, $D_v$ and $\mathcal{C}_v$ contained in $Q$. By sorting the $O(K+\frac{B}{\varepsilon}\log_B N)$ extracted points (the bound follows from the analysis below) and applying the buffered updates we can answer a query in worst-case $O(\Sort(K+\frac{B}{\varepsilon}\log_B N))$ IOs. In the following we prove the better bound of amortized $O(\frac{1}{\varepsilon}\log_B N+K/B)$ IOs by charging part of the work to the updates. Our approach is to push buffered insertions and deletions down such that for all visited nodes~$v$, no ancestor $u$ of $v$ stores any buffered updates in $D_u$ and $I_u$ that should go into the subtree of $v$. We do this by a top-down traversal of the visited nodes. For a visited node $v$ we identify all the children to visit.
For a child $c$ to visit, let $U\subseteq D_v \cup I_v$ be all buffered updates belonging to the $x$-range of $c$. We delete all points in $U$ from $P_c$, $\mathcal{C}_v$, $I_c$ and $D_c$. All updates in $U$ with $y$-value smaller than the minimum $y$-value in $P_c$ are inserted into $I_c$ or $D_c$, according to whether they are insertions or deletions. All insertions in $U$ with $y$-value larger than or equal to the minimum $y$-value in $P_c$ are merged with $P_c$. If $|P_c|>B$ we move the points with lowest $y$-values to $I_c$ until $|P_c|=B$. We update $\mathcal{C}_v$ to reflect the changes to $P_c$. During this push down of updates, some update buffers at visited nodes might get size~$>B$. We temporarily allow this, and keep update buffers in sorted $x$-order. The reporting step consists of traversing all visited nodes~$v$ and reporting all points in $(P_v \cup I_v)\cap Q$ together with the points in $\mathcal{C}_v$ contained in $Q$ but not canceled by deletions in $D_v$, i.e.\ $(Q\cap\mathcal{C}_v)\setminus D_v$. Overflowing insertion and deletion buffers are finally handled as described in the update section, Section~\ref{sec:updates}~(\textit{i})--(\textit{iv}), possibly causing new nodes to be created by splits, where the amortized cost is already accounted for in the update analysis. The final step is to refill the $P_v$ buffers of visited nodes, which might have underflowed due to the deletions pushed down among the visited nodes. The refilling is done as described in Section~\ref{sec:updates}~(\textit{v}). \paragraph*{Analysis} Assume $V+O(\frac{1}{\varepsilon}\log_B N)$ nodes are visited, where $V$ nodes are not on the search paths for $x_1$ and $x_2$. Let $R$ be the set of points in the point buffers of the $V$ visited nodes before pushing updates down. Then we know $|R|\geq VB/2$.
The number of buffered deletions at the visited nodes is at most $(V+O(\frac{1}{\varepsilon}\log_B N))B/4$, i.e.\ the number of points reported $K$ is then at least $VB/2-(V+O(\frac{1}{\varepsilon}\log_B N))B/4=VB/4-O(\frac{B}{\varepsilon}\log_B N)$. It follows that $V=O(\frac{1}{\varepsilon}\log_B N+K/B)$. The worst-case IO bound becomes $O(V+\frac{1}{\varepsilon}\log_B N+K/B)=O(\frac{1}{\varepsilon}\log_B N+K/B)$, except for the cost of pushing the content of update buffers done at visited nodes and handling overflowing update buffers and underflowing point buffers. Whenever we push $\Omega(B/\Delta)$ points to a child, the cost is covered by the analysis in Section~\ref{sec:updates}. Only when we push $O(B/\Delta)$ updates to a visited child, with an amortized cost of $O(1)$ IOs, we charge this IO cost to the visited child. Overflowing update buffers and refilling $P_v$ buffers is covered by the cost analyzed in Section~\ref{sec:updates}. It follows that the total amortized cost of a 3-sided range reporting query is $O(\frac{1}{\varepsilon}\log_B N+ K/B)$ IOs. \section{Top-$k$ queries} \label{sec:top-k} Our overall approach for answering a top-$k$ query for the range $[x_1,x_2]$ consists of three steps: First we find an approximate threshold $y$-value $\bar{y}$, such that we can reduce the query to a 3-sided range reporting query. Then we perform a 3-sided range reporting query as described in Section~\ref{sec:3-sided} for the range $[x_1,x_2]\times[\bar{y},\infty]$. Let $A$ be the output of the 3-sided query. If $|A|\leq k$ then we return $A$. Otherwise, we select and return $k$ points from $A$ with largest $y$-value using the linear time selection algorithm of Blum et al.~\cite{bfprt73}, which in external memory uses $O(|A|/B)$ IOs. The correctness of this approach follows if $|A|\geq k$ or $A$ contains all points in the query range, and the IO bound follows if $|A|=O(K+B\log_B N)$ and we can find $\bar{y}$ in $O(\log_B N+ K/B)$ IOs.
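The three-step reduction can be summarized as a short driver. Here \texttt{range\_report} and \texttt{find\_threshold} are assumed callables standing in for the external-memory machinery of Sections~\ref{sec:3-sided} and below, and \texttt{heapq.nlargest} stands in for the linear-time selection of Blum et al.\ (it returns the same $k$ points, at $O(|A|\log k)$ cost):

```python
import heapq

def top_k(range_report, find_threshold, x1, x2, k):
    """Top-k query: compute an approximate threshold y-bar, run the
    3-sided query [x1,x2] x [y-bar,inf), and keep the k largest by y."""
    y_bar = find_threshold(x1, x2, k)
    A = range_report(x1, x2, y_bar)
    if len(A) <= k:
        # A contains all points in the x-range above y-bar; by the
        # threshold guarantee this is the whole answer.
        return A
    # Stand-in for linear-time selection of the k largest y-values.
    return heapq.nlargest(k, A, key=lambda p: p[1])
```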
It should be noted that our~$\bar{y}$ resembles the approximate $k$-threshold used by Sheng and Tao~\cite{pods12st}, except that we allow an additional slack of $O(\log_B N)$. To compute $\bar{y}$ we (on demand) construct a heap-ordered binary tree~$\mathcal{T}$ of sampled $y$-values, where each node can be generated using $O(1)$ IOs, and apply Frederickson's binary heap-selection to $\mathcal{T}$ to find the $O(k/B+\log_B N)$-th largest $y$-value in $O(k/B+\log_B N)$ time and $O(k/B+\log_B N)$ IOs. This is the returned value~$\bar{y}$. For each node $v$ we construct a path $\mathcal{P}_v$ of $O(\Delta)$ decreasing $y$-values, consisting of the samples returned by $\mathrm{Sample}(x_1,x_2)$ for $\mathcal{C}_v$, merged with the minimum $y$-values of the point buffers $P_c$, for each child~$c$ within the $x$-range of the query and where $|P_c|\geq B/2$. The root of~$\mathcal{P}_v$ is the largest $y$-value, and the remaining nodes form a leftmost path in decreasing $y$-value order. For each child~$c$ of $v$, the node in $\mathcal{P}_v$ storing the minimum $y$-value in $P_c$ has as right child the root of $\mathcal{P}_c$. Finally let $v_1,v_2,\ldots,v_t$ be all the nodes on the two search paths in $T$ for $x_1$ and $x_2$. We make a left path~$\mathcal{P}$ containing $t$ nodes, each with $y$-value $+\infty$, and let the root of $\mathcal{P}_{v_i}$ be the right child of the $i$th node on $\mathcal{P}$. Let $\mathcal{T}$ be the resulting binary tree. The value $\bar{y}$ we select is the $\bar{k}$-th largest among the $y$-values in the binary tree~$\mathcal{T}$, where $\bar{k}=\lceil 7t+12k/B\rceil$. \paragraph*{Analysis} We can construct the binary tree~$\mathcal{T}$ top-down on demand (as needed by Frederickson's algorithm), using $O(1)$ IOs per node since each $\mathcal{P}_v$ structure can be computed using $O(1)$ IOs.
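Selection in the heap-ordered tree $\mathcal{T}$ only ever expands nodes near the top, which is what makes the on-demand construction pay off. The sketch below uses the simple best-first $O(\bar{k}\log\bar{k})$ selection instead of Frederickson's $O(\bar{k})$ algorithm (a simplification, not the paper's method); it still generates only $O(\bar{k})$ nodes of the tree. Nodes are \texttt{(y, left, right)} triples with \texttt{None} for absent children:

```python
import heapq
import itertools

def kth_largest(root, k):
    """k-th largest y-value in a heap-ordered binary tree, expanding
    children lazily. The counter breaks ties so heap entries never
    fall back to comparing the node triples themselves."""
    counter = itertools.count()
    heap = [(-root[0], next(counter), root)]  # max-heap via negated y
    for _ in range(k - 1):
        _, _, (_, left, right) = heapq.heappop(heap)
        for child in (left, right):
            if child is not None:
                heapq.heappush(heap, (-child[0], next(counter), child))
    return -heap[0][0]
```

Each pop adds at most two children, so after $k-1$ pops the algorithm has touched $O(k)$ nodes, matching the $O(1)$-IOs-per-node budget of the on-demand construction.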
To lower bound the number of points in $T$ contained in $Q_{\bar{y}}=[x_1,x_2]\times[\bar{y},\infty]$, we first observe that among the $\bar{k}$ $y$-values in $\mathcal{T}$ larger than $\bar{y}$ are the $t$ occurrences of $+\infty$, and either $\geq \frac{1}{3}(\bar{k}-t)$ samplings from $\mathcal{C}_v$ sets or $\geq \frac{2}{3}(\bar{k}-t)$ minimum values from $P_v$ sets. Since $s$ samplings from $\mathcal{C}_v$ ensure that $sB$ elements from $\mathcal{C}_v$ have larger values than $\bar{y}$ and the $\mathcal{C}_v$ sets are disjoint, the first case ensures that there are $\geq \frac{1}{3}B(\bar{k}-t)$ points from $\mathcal{C}_v$ sets in $Q_{\bar{y}}$. For the second case each minimum $y$-value of a $P_v$ set represents $\geq B/2$ points in $P_v$ contained in $Q_{\bar{y}}$, i.e.\ in total $\geq \frac{B}{2}\frac{2}{3}(\bar{k}-t)=\frac{1}{3}B(\bar{k}-t)$ points. Some of these elements will not be reported, since they will be canceled by buffered deletions. These buffered deletions can only be stored at the $t$ nodes on the two search paths and in nodes where all $\geq B/2$ points in $P_v$ are in $Q_{\bar{y}}$. It follows that at most $\frac{B}{4}(t+\bar{k})$ buffered deletions can be applied to points in the $P_v$ sets, i.e.\ in total at least $\frac{B}{3}(\bar{k}-t) - \frac{B}{4}(t+\bar{k}) =\frac{B}{12}\bar{k}-\frac{7B}{12}t =\frac{B}{12}\lceil 7t+12k/B\rceil-\frac{7B}{12}t \geq k$ points will be reported by the 3-sided range reporting query $Q_{\bar{y}}$. To upper bound the number of points that can be reported by $Q_{\bar{y}}$, we observe that these points are stored in $P_v$, $\mathcal{C}_v$ and $I_v$ buffers.
There are at most $\bar{k}$ nodes where all $\geq B/2$ points in $P_v$ are reported (remaining points in point buffers are reported using $\mathcal{C}_v$ structures), from at most $t+\bar{k}$ nodes we need to consider points from the insertion buffers $I_v$, and from the at most $t+\bar{k}$ child structures~$\mathcal{C}_v$ we report at most $\bar{k}B+(\alpha+1)(t+\bar{k})B$ points, for some constant $\alpha \geq 1$, which follows from the interface of the Sample operation from Section~\ref{sec:child-structure}. In total the 3-sided query reports at most $\bar{k}B + (t + \bar{k})B+\bar{k}B+(\alpha+1) (t+\bar{k})B=O(B(t+\bar{k}))=O(\frac{1}{\varepsilon}B\log_B N+k)$ points. In the above we ignored the case where we only find $<\bar{k}$ nodes in~$\mathcal{T}$, in which case we set $\bar{y}=-\infty$ and all points within the $x$-range will be reported. Note that the IO bounds for finding $\bar{y}$ and the final selection are worst-case, whereas only the 3-sided range reporting query is amortized. \section{Construction} \label{sec:construction} In this section we describe how to initialize our data structure with an initial set of $N$ points using $O(\Sort(N))$ IOs. If the points are already sorted with respect to $x$-value the initialization requires $O(\Scan(N))$ IOs. If the points are not sorted with respect to $x$-value, we first sort all points by $x$-value using $O(\Sort(N))$ IOs. Next we construct a B-tree $T$ over the $x$-values of the $N$ points using $O(\Scan(N))$ IOs, such that each leaf stores $B/2$ $x$-values (except for the rightmost leaf, storing $\leq B/2$ $x$-values) and each internal node has degree $\Delta/2$ (except for the rightmost node at each level having degree $\leq\Delta/2$). The $P_v$ buffers of $T$ are now filled bottom-up, such that each buffer contains $B$ points (except if the subtrees below all have empty $P_w$ buffers). First we store the $N$ points in the $P_v$ buffers at the leaves of $T$ from left-to-right using $O(\Scan(N))$ IOs.
The remaining levels of $T$ are processed bottom up by recursively pulling up points. The $P_v$ buffer of a node is filled with the $B/2$ points with largest $y$-value from the children, by scanning all children; if a child buffer underflows, i.e.\ gets $< B/2$ points, then we recursively refill the child's buffer with $B/2$ points by scanning all its children. This process guarantees that all children of a node $v$ have $\geq B/2$ points before filling $v$ with $B/2$ points, which enables us to move the points to $v$ before we recursively have to refill the children. Moving $B/2$ points from the children to a node can be done with $O(\Delta)$ IOs. In a second iteration we process the nodes top-down filling the $P_v$ buffers to contain exactly $B$ points by moving between 0 and $B/2$ points from the children's point buffers $P_c$ (possibly causing $P_c$ to underflow and the recursive pulling of $B/2$ points). All insertion and deletion buffers $I_v$ and $D_v$ are initialized to be empty, and each $\mathcal{C}_v$ structure is constructed from the children's $P_c$ point buffers. We now argue that the recursive filling of the $P$ buffers requires $O(\Scan(N))$ IOs. Level~$i$ of $T$ (leaves being level 0) contains at most $\frac{N}{B\Delta^i}$ nodes, i.e.\ the total number of points stored at level $i$ or above is $O(\sum_{j=i}^{\infty} B\frac{N}{B\Delta^j})=O(\frac{N}{\Delta^i})$. The number of times we need to move $B/2$ points to level~$i$ from level~$i-1$ is then bounded by $O(\frac{N}{\Delta^i}/\frac{B}{2})=O(\frac{N}{B\Delta^i})$, where each move requires $O(\Delta)$ IOs. The total number of IOs for the filling of $P_v$ buffers becomes $O(\sum_{i=1}^{\infty} \Delta\frac{N}{B\Delta^i}) = O(\frac{N}{B} \sum_{i=0}^{\infty} \frac{1}{\Delta^i})=O(N/B)$. \paragraph*{Amortized analysis} The above considers the worst-case cost to construct an initial structure for $N$ points.
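The bottom-up pull-up of points can be sketched in memory as follows. Nodes are modelled as dictionaries (an assumption for illustration); the parameter \texttt{half} plays the role of $B/2$, and in the paper each move of $B/2$ points costs $O(\Delta)$ IOs rather than the list surgery used here:

```python
def pull_up(v, need, half):
    """Move the `need` points of largest y-value from v's children into
    v's buffer, first refilling any child whose buffer fell below `half`."""
    for c in v["children"]:
        if len(c["P"]) < half and c["children"]:
            pull_up(c, half, half)  # recursive refill, as in the text
    # Pool the children's points and take the largest y-values.
    pool = sorted(((p, c) for c in v["children"] for p in c["P"]),
                  key=lambda t: t[0][1], reverse=True)
    for p, c in pool[:need]:
        c["P"].remove(p)
        v["P"].append(p)

def build_buffers(v, B):
    """Fill point buffers bottom-up: leaves keep their points, each
    internal node pulls up the B//2 points of largest y-value."""
    for c in v["children"]:
        build_buffers(c, B)
    if v["children"]:
        pull_up(v, B // 2, B // 2)
```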
In the following we argue that the amortized costs of the remaining operations remain unchanged during the epoch started by the construction. We consider a sequence of operations containing $\Ni$ insertions and $\Nd$ deletions, starting with a newly constructed tree containing $N$ points. We first bound the cost of creating new nodes in $T$ during the updates. Since each leaf in the initial tree only spans the $x$-range of at most $B/2$ points, it follows that $\Ni$ insertions can at most cause $2\Ni/B$ leaves to be created. Since each new leaf of $T$ can be created using $O(1)$ IOs, the total cost of creating new leaves is $O(\Ni/B)$. Similarly, since each internal node has initial degree $\leq \Delta/2$, at most $O(\frac{\Ni}{\Delta B})$ internal nodes might be created, each taking $O(\Delta)$ IOs to create, i.e.\ in total $O(\Ni/B)$ IOs (not counting the cost of refilling point buffers). An overflowing insertion buffer is handled by moving $\Theta(B/\Delta)$ buffered insertions one level down in $T$ using $O(1)$ IOs. Since each insertion has to be moved $O(\frac{1}{\varepsilon}\log_B N)$ levels down before it is canceled or turns into an insertion into a point buffer $P_v$, it follows that the total cost of handling overflowing insertion buffers is $O(\frac{\Ni}{B/\Delta}\frac{1}{\varepsilon}\log_B N)$ IOs. Similarly overflowing deletion buffers are handled by moving $\Theta(B/\Delta)$ deletions one level down using $O(1)$ IOs. When the deletion of a point $p$ reaches a node where $p\in P_v$ the deletion terminates after having removed $p$ from $P_v$. This leaves a ``hole'' in the $P_v$ buffer, that needs to be moved down by pulling up points from the children. Each deletion potentially creates a hole and each of the $O(\frac{\Ni}{\Delta B})$ splittings of an internal node creates $B$ holes, i.e.\ in total we need to handle $O(\Nd+\frac{\Ni}{\Delta B}B)$ holes.
Since we can move up $B/2$ points, or equivalently move down $B/2$ holes, using $O(\Delta)$ IOs, and a hole can at most be moved down $O(\frac{1}{\varepsilon}\log_B N)$ levels before it vanishes, the total cost of handling holes is $O((\Nd+\frac{\Ni}{\Delta})\frac{\Delta}{B}\frac{1}{\varepsilon}\log_B N)$ IOs. The total cost of handling the updates, also covering the work done by the queries that we charged to the updates, becomes $O(\frac{\Nd+\Ni}{B/\Delta}\frac{1}{\varepsilon}\log_B N)=O(\frac{\Nd+\Ni} {\varepsilon B^{1-\varepsilon}}\log_B N)$ IOs, i.e.\ matching the previously proved amortized bounds. \bibliographystyle{plain}
\section{Introduction} \label{intro} Among the various approximations existing in the literature to describe a diluted Bose condensed gas at finite temperature, the generalized random phase approximation (GRPA) has been the subject of several studies \cite{Reidl,Zhang,condenson,Levitov,gap}. This approximation has attracted special attention since it is the only one in the literature with two important properties: 1) in agreement with the Hugenholtz-Pines theorem \cite{HM,HP,Ketterle,Stringari}, it predicts the observed gapless and phonon-like excitations; 2) the mass, momentum and energy conservation laws are fulfilled in the gas dynamical description. An approximation that satisfies these properties is said to be {\it gapless} and {\it conserving} \cite{Reidl,HM}. Besides these unique features, the GRPA also predicts other phenomena, namely a second branch of excitations and the dynamical screening of the interaction potential. These phenomena also appear in the case of a gas of charged particles, i.e.\ a plasma. The possibility of a second kind of excitation has been explained quite extensively in \cite{condenson,Levitov,gap}. There is a distinction between the single particle excitations and the collective excitations. In the case of a plasma, the former corresponds to the electrically charged excitations, whose dispersion relation is obtained from the pole of the one particle Green function. The latter corresponds to the plasmon, a chargeless excitation whose dispersion relation is obtained from the pole of the susceptibility function. The plasmon mediates the interaction between two charged excitations. More precisely, during the interaction, one charged excitation emits a virtual plasmon which is subsequently reabsorbed by another charged excitation (see Fig.1).
\begin{figure} \begin{center} \resizebox{0.50\columnwidth}{!}{\includegraphics{condensonh.eps}} \end{center} \caption{Feynman diagram illustrating the mediation process: 1) For a plasma, two charged excitations of momentum $\vc{k}$ and $\vc{k}'$ mediate their interaction via a plasmon of momentum $\vc{q}$; 2) For a Bose gas, two excitations with one atom number unit mediate their interaction via a phonon-like collective excitation.} \end{figure} Remarkably, such a description holds also for a Bose gas with single atom excitations carrying one unit of atom number and with gapless collective excitations with no atom number. The poles of the Green functions have a similar structure above the critical point. But below this critical point, the existence of a macroscopic condensed fraction {\it hybridizes} the collective and single particle excitations so that the poles of the one particle Green function and the susceptibility function mix to form common branches of collective excitations \cite{gap,HM}. Thus, in contrast to a plasma, the presence of a condensed fraction prevents the direct observation of the atom-like excitation through the one particle Green function. The dynamical screening effect predicted in the GRPA is much more spectacular in a Bose gas. The screening effect of the Coulomb interaction is well known to explain the dissociation of salt diluted in water into its ions (see Fig.2a). But it also provides an explanation of the superfluidity phenomenon, i.e.\ the possibility of a metastable motion without any friction. \begin{figure} \begin{center} \resizebox{0.75\columnwidth}{!}{\includegraphics{diel.eps}} \end{center} \caption{ Illustration of the screening effect: (a) In water, the interaction force between the ions $Na^+$ and $Cl^-$ of the salt is screened by the presence of water molecules and the Coulomb potential $V_{Coul}(\vc{r})$ is reduced by the relative permittivity factor ${\cal{K}} \sim 80$.
(b) In a Bose condensed gas, a similar effect occurs. The condensed and thermal atoms represented in blue and red respectively correspond in good approximation to the superfluid and normal fluid. The interaction potential $V(\vc{r})$ exerted by these thermal atoms on the condensed atoms is pictured qualitatively by the green line. The macroscopic wave function associated with the condensed atoms deforms its shape in order to locally modify the superfluid mean field interaction energy represented by the blue line. The net result is a total screening of the interaction potential by this mean field energy, which prevents binary collision processes between condensed and thermal atoms. In this way, one can explain qualitatively the metastability of a relative motion between the superfluid and the normal fluid. } \label{fig2} \end{figure} Most of the literature on superfluidity is usually devoted to the study of metastable motion in a toroidal geometry like, for example, an annular region between two concentric cylinders possibly in rotation \cite{books,Leggett2}. In this multiply connected geometry, the angular momentum of the superfluid about the axis of the cylinder is quantized in units of $\hbar$. The metastability of the motion is explained by the impossibility to go continuously from one quantized state to another due to the difficulty of surmounting an enormous free-energy barrier. This is not the situation we want to address in this paper. We are rather focusing on the explanation of the superfluid's ability to flow without any apparent friction with its surroundings. The Landau criterion is a necessary but not sufficient condition for superfluidity. It tells about the kinematic conditions under which an external object can move relatively to a superfluid without damping its relative velocity by emitting a phonon-like collective excitation.
For a dilute Bose gas at low temperature, it amounts to saying that this relative velocity must be lower than the sound velocity \cite{Stringari}. The external object is assumed to be macroscopic and can be an impurity \cite{Chikkatur}, an obstacle like a lattice \cite{Cataliotti} or even the normal fluid \cite{condenson}. In particular, this criterion does not take into account the fact that the normal fluid is microscopically composed of thermal excitations. In a Bose condensed gas, even though their relative velocity is on average lower than the critical one, many of these excitations are very energetic with a relative velocity high enough to allow the phonon emission. In the GRPA, where these excitations correspond to the thermal atoms, and under the condition of the Landau criterion, such a process is forbidden, as shown qualitatively in Fig.2b. The effect of an external perturbation of the condensed atoms caused for example by the thermal atoms is attenuated by the dynamical screening. This screening is total in the sense that no effective mutual binary interaction allows a collision process which would be essential for a dissipative relaxation of the superfluid motion. The purpose of this paper is to show that these peculiar phenomena could in principle be observed in a Raman scattering process. This process induces a transition for a given frequency $\omega$ and a wavevector $\vc{q}$ determined from the difference of the frequencies and the wavevectors of two laser beams \cite{books}. For each wavevector corresponding to the transferred momentum, one can arbitrarily tune the frequency in order to reach the resonance energy associated with the excitation. Unlike Bragg scattering, which allows the observation of the Bogoliubov phonon-like collective excitation \cite{Ketterle,Stringari}, Raman scattering is more selective.
Not only is the gas probed with a selected energy transition and transferred momentum, but the atoms are also scattered into a selected second internal hyperfine level. Through a Zeeman splitter, they can be subsequently analyzed separately from unscattered atoms. According to the GRPA, the scattered thermal atoms become distinguishable from the unscattered ones and thus release the gap energy due to the exchange interaction. In a previous study \cite{gap}, we showed that this gap appears as a resonance in the frequency spectrum of the atom transition rate as $\vc{q} \rightarrow 0$. The possibility of momentum transfer allows us to analyze the influence of the screening of the external perturbation induced by the Raman light beams. The paper is organized as follows. In section 2, we review the time-dependent Hartree-Fock (TDHF) equations for a spinor condensate and study the linear response function to an external potential, which gives results equivalent to the GRPA. Sections 3 and 4 are devoted to the Bragg and Raman scatterings, respectively. Section 5 ends with conclusions and perspectives. \section{Time-dependent Hartree-Fock approximation} We start from the time-dependent Hartree-Fock equations describing a two-component spinor Bose gas \cite{Zhang,books}, with components labeled by $a=1,2$. The atoms have a mass $m$, feel the external potential $V_{ab}(\vc{r},t)$ and the Hartree and Fock mean field interaction potential characterized by the coupling constants $g_{ab}=4\pi a_{ab}/m$ expressed in terms of the scattering lengths $a_{ab}$ between components $a$ and $b$ ($\hbar=1$). Note that no Fock mean field (or exchange) interaction energy appears between condensed atoms. These equations describe the time evolution of a set of spinor wave functions $\psi_{a,i}(\vc{r},t)$ describing $N_i$ atoms labeled by $i$ and depending on the position $\vc{r}$ and on the time $t$.
For the condensed mode ($i=0$), these are: \begin{eqnarray} \left(\begin{array}{cc} i{\partial_t}+\frac{\nabla^2_\vc{r}}{2m} -V_{11} & V_{12}^*\\ V_{12} & i{\partial_t}+\frac{\nabla^2_\vc{r}}{2m} -V_{22} \end{array} \right) \left(\begin{array}{c} \psi_{1,0} \\ \psi_{2,0} \end{array} \right)= \nonumber \\ \left(\begin{array}{cc} \sum_j (g_{11} (2-\delta_{0,j})|\psi_{1,j}|^2 +g_{12}|\psi_{2,j}|^2)N_j & g_{12}\sum_j (1-\delta_{0,j})N_j \psi_{2,j}^* \psi_{1,j}\\ g_{12}\sum_j (1-\delta_{0,j}) N_j \psi_{1,j}^* \psi_{2,j} & \sum_j (g_{22} (2-\delta_{0,j})|\psi_{2,j}|^2 +g_{12}|\psi_{1,j}|^2)N_j \end{array}\right) \left(\begin{array}{c} \psi_{1,0} \\ \psi_{2,0} \end{array} \right) \end{eqnarray} For a non condensed mode ($i \not= 0$), these are: \begin{eqnarray} \left(\begin{array}{cc} i{\partial_t}+\frac{\nabla^2_\vc{r}}{2m} -V_{11} & V_{12}^*\\ V_{12} & i{\partial_t}+\frac{\nabla^2_\vc{r}}{2m} -V_{22} \end{array} \right)\left(\begin{array}{c} \psi_{1,i} \\ \psi_{2,i} \end{array} \right)= \nonumber \\ \left(\begin{array}{cc} \sum_j (2g_{11} |\psi_{1,j}|^2 +g_{12}|\psi_{2,j}|^2)N_j & g_{12}\sum_j N_j \psi_{2,j}^* \psi_{1,j}\\ g_{12}\sum_j N_j \psi_{1,j}^* \psi_{2,j} & \sum_j (2 g_{22} |\psi_{2,j}|^2 +g_{12}|\psi_{1,j}|^2)N_j \end{array}\right) \left(\begin{array}{c} \psi_{1,i} \\ \psi_{2,i} \end{array} \right) \end{eqnarray} The non condensed spinors remain orthogonal during their time evolution in the thermodynamic limit. In general, the spinor associated with the condensed mode does not remain orthogonal to the others. But according to \cite{Huse}, the non orthogonality is not important in the thermodynamic limit for smooth external potentials. Another way of justifying the neglect of the non orthogonality is to start from an ansatz where the condensed spinor mode is described in terms of a coherent state and the non condensed ones in terms of a complete set of orthogonal Fock states, i.e.\ $|\Psi \rangle \sim \exp(\sum_{j\not=0} b_j c_j^\dagger-c.c.)
\prod_{i\not= 0} (c_i^\dagger)^{N_i} |0\rangle$ where $c_i^\dagger$ is the atom creation operator in the mode $i$ and $b_j=\sqrt{N_0}\sum_a \int d^3 \vc{r} \psi^*_{a,j} \psi_{a,0}$. The theory remains {\it conserving} because the conservation laws are preserved on average, but becomes non {\it number conserving} since the quantum state is not an eigenstate of the total particle number operator. This procedure is justified in the thermodynamic limit since the total particle number fluctuations are relatively small during the time evolution. In contrast, the alternative method based on excitation operators, instead of spinor wave functions, is number conserving \cite{condenson,Levitov}. The atom number $N_i$ for each mode is assumed to be time-independent in the TDHF. Strictly speaking, a collision term must be added in order to allow population transfers between the various modes. These equations are valid in the collisionless regime, i.e.\ on a time scale shorter than the average time between two collisions $\tau \sim 1/(\sigma_{ab} n v_T)$ where $v_T=\sqrt{1/\beta m}$ is the average velocity and $\sigma_{ab}=8\pi a_{ab}^2$ is the scattering cross section. Under these conditions, the resulting frequency spectrum has a resolution limited by $\Delta \omega \sim 1/\tau$. The order of magnitude of the resolution of interest is given by the $g_{ab}n$'s, so we require $\Delta \omega / g_{ab}n \sim \sqrt{a^3_{ab} n/\beta g_{ab}n} \ll 1$, which is generally the case when $a^3_{ab} n \ll 1$. These conditions are fulfilled for the parameter values considered in this work. In the following, we will restrict our analysis to a bulk gas embedded in a volume $V$. At $t<0$, we assume all atoms to be in thermodynamic equilibrium in level $1$ and that $V_{ab}=0$ except for $V_{22}=\omega_0$ which is constant and fixes the energy shift between the two sub-levels.
In that case, the solutions of the TDHF are orthogonal plane waves with $i$ corresponding to the momentum $\vc{k}$: \begin{eqnarray} \left(\begin{array}{c} \psi^{(0)}_{1,\vc{k}} \\ \psi^{(0)}_{2,\vc{k}} \end{array} \right)= \frac{\exp[i(\vc{k}.\vc{r}-\epsilon^{HF}_{1,\vc{k}} t)]}{\sqrt{V}} \left(\begin{array}{c}1 \\ 0 \end{array} \right) \end{eqnarray} where we define the Hartree-Fock energy for atoms with momentum $\vc{k}$: \begin{eqnarray}\label{drs} \epsilon^{HF}_{1,\vc{k}}=\epsilon_\vc{k} +g_{11} (2n- n_\vc{0}\delta_{\vc{k},\vc{0}}) \end{eqnarray} where $\epsilon_\vc{k}=\vc{k}^2/2m$ and where the condensed and total particle densities are $n_\vc{0}=N_\vc{0}/V$ and $n=\sum_\vc{k} N_\vc{k}/V$. Eq.(\ref{drs}) corresponds to the dispersion relation of the single particle excitation. At equilibrium, \begin{eqnarray} N'_\vc{k}=N_\vc{k}(1-\delta_{\vc{k},\vc{0}})= 1/(\exp[\beta(\epsilon^{HF}_{1,\vc{k}}-\mu)]-1) \end{eqnarray} is the Bose-Einstein distribution. Below the condensation point, the chemical potential becomes $\mu=\epsilon^{HF}_{1,\vc{0}}=g_{11}(2n- n_\vc{0} )$ and the macroscopic occupation $N_\vc{0}$ is fixed to satisfy total number conservation. For $t \geq 0$, we apply an external potential.
For the Bragg and Raman scatterings, these are respectively: \begin{eqnarray} V_{11}= V_B \cos(\vc{q}.\vc{r}-\omega t) \\ V_{12}= V_R \exp[i(\vc{q}.\vc{r}-\omega t)] \end{eqnarray} We solve the system through a perturbative expansion: \begin{eqnarray} \left(\begin{array}{c} \psi_{1,\vc{k}} \\ \psi_{2,\vc{k}} \end{array} \right)= \left(\begin{array}{c}e^{i(\vc{k}.\vc{r}-\epsilon^{HF}_{1,\vc{k}}t)}/\sqrt{V} + \psi^{(1)}_{1,\vc{k}}(\vc{r},t) + \psi^{(2)}_{1,\vc{k}}(\vc{r},t) \\ \psi^{(1)}_{2,\vc{k}}(\vc{r},t) \end{array} \right) \end{eqnarray} The equations of motion for the first order corrections are, for the case of Bragg and Raman scatterings respectively: \begin{eqnarray} \!\!\left[i{\partial_t}+\frac{\nabla^2_\vc{r}}{2m} - g_{11} (2n-\delta_{\vc{k},\vc{0}}n_\vc{0}) \right] \psi^{(1)}_{1,\vc{k}} = \nonumber \\ \left[V_{11}+\!\sum_\vc{k'} g_{11} (2-\delta_{\vc{k'},\vc{0}}\delta_{\vc{k},\vc{0}}) ({\psi^{(0)*}_{1,\vc{k'}}} \psi_{1,\vc{k'}}^{(1)} + c.c.) N_\vc{k'} \right]\! \psi^{(0)}_{1,\vc{k}} \\ \label{p21} \left[i{\partial_t} +\frac{\nabla^2_\vc{r}}{2m} - \omega_0 - g_{12}(n-\delta_{\vc{k},\vc{0}}n_{\vc{0}}) \right] \psi^{(1)}_{2,\vc{k}} = \left[V_{12} + g_{12}\sum_\vc{k'} N_\vc{k'} {\psi^{(0)*}_{1,\vc{k'}}} \psi^{(1)}_{2,\vc{k'}}\right] \psi^{(0)}_{1,\vc{k}} \end{eqnarray} These two sets of integral equations can be solved exactly using the methods developed in \cite{condenson}. Defining the Fourier transforms: \begin{eqnarray} V_{ab,\vc{q},\omega}=\int_V \!\!\!d^3 \vc{r} \int_0^\infty \!\!\!
dt\, e^{i[(\omega +i0)t -\vc{q}.\vc{r}]} V_{ab}(\vc{r},t) \end{eqnarray} one obtains in the level 1 for the condensed mode: \begin{eqnarray}\label{psiB0} \psi^{(1)}_{1,\vc{0}}(\vc{r},t)= \sum_\vc{q'} \int_{-\infty}^\infty \frac{d\omega'}{2\pi i} \frac{e^{i(\vc{q'}.\vc{r}-\omega't)}V_{11,\vc{q'},\omega'}\psi^{(0)}_{1,\vc{0}}(\vc{r},t)} {{\tilde{\cal K}}(\vc{q'},\omega')(\omega'+i0-\epsilon_{\vc{q'}})} \end{eqnarray} for the non condensed modes ($\vc{k} \not=0$): \begin{eqnarray}\label{psiB} \psi^{(1)}_{1,\vc{k}}(\vc{r},t)= \sum_\vc{q'} \int_{-\infty}^\infty \frac{d\omega'}{2\pi i} \frac{e^{i(\vc{q'}.\vc{r}-\omega't)}V_{11,\vc{q'},\omega'}\psi^{(0)}_{1,\vc{k}}(\vc{r},t)} {{\cal K}(\vc{q'},\omega') (\omega'+i0-\epsilon_{\vc{k}+\vc{q'}}+\epsilon_{\vc{k}})} \end{eqnarray} and in the level 2 for all modes: \begin{eqnarray}\label{psiR} \lefteqn{\psi^{(1)}_{2,\vc{k}}(\vc{r},t)= \sum_\vc{q'} \int_{-\infty}^\infty \frac{d\omega'}{2\pi i} \times} \nonumber \\ & \displaystyle \frac{e^{i(\vc{q'}.\vc{r}-\omega't)}V_{12,\vc{q'},\omega'}\psi^{(0)}_{1,\vc{k}}(\vc{r},t)} {{\cal K}_{12}(\vc{q'},\omega')(\omega'+i0-\omega_0-\epsilon_{\vc{k}+\vc{q'}}+\epsilon_{\vc{k}}+(2g_{11}-g_{12})n +\delta_{\vc{k},\vc{0}}(g_{12}-g_{11})n_\vc{0})} \end{eqnarray} These formulae resemble those obtained for the non interacting Bose gas except for the mean field term in (\ref{psiR}) and the extra factors representing the screening effect.
For the Bragg scattering, these factors can be written as \cite{condenson}: \begin{eqnarray}\label{Ktilde} {\tilde{\cal K}}(\vc{q},\omega)=\frac{\Delta(\vc{q},\omega)}{(\omega+i0)^2-\epsilon_\vc{q}^2} \\ \label{K} {{\cal K}}(\vc{q},\omega)=\frac{\Delta(\vc{q},\omega)}{(\omega+i0)^2-\epsilon_\vc{q}^2+ 2g_{11}n_\vc{0} \epsilon_\vc{q} } \end{eqnarray} where \begin{eqnarray} \Delta(\vc{q},\omega)= (1-2g_{11}\chi_0(\vc{q},\omega))[(\omega+i0)^2 - {\epsilon^B_{\vc{q}}}^2] -8g_{11}\chi_0(\vc{q},\omega)g_{11} n_{\vc{0}}\epsilon_\vc{q} \end{eqnarray} is the propagator for the collective excitations, $ \epsilon^B_{\vc{q}}= \sqrt{c^2 \vc{q}^2 + \epsilon_\vc{q}^2} $ is the Bogoliubov excitation energy, $c=\sqrt{g_{11}n_\vc{0}/m}$ is the sound velocity and \begin{eqnarray}\label{chi0} \chi_{0}(\vc{q},\omega)= \frac{1}{V}\sum_{\vc{k}} \frac{N'_{\vc{k}}-N'_{\vc{k}+\vc{q}}} {\omega +i0 + \epsilon_{\vc{k}}-\epsilon_{\vc{k+q}}} \end{eqnarray} is the susceptibility function describing the normal atoms. For the Raman scattering, it is \begin{eqnarray}\label{K12} {{\cal K}}_{12}(\vc{q},\omega)=1-g_{12}\chi_{0,12}(\vc{q},\omega) \end{eqnarray} where \begin{eqnarray}\label{chi012} \chi_{0,12}(\vc{q},\omega)= \frac{1}{V}\sum_\vc{k} \frac{N_{\vc{k}}} {\omega +i0 -\omega_0 + \epsilon_{\vc{k}}-\epsilon_{\vc{k+q}}+(2g_{11}-g_{12})n+ \delta_{\vc{k},\vc{0}}(g_{12}-g_{11})n_\vc{0}} \end{eqnarray} Knowing the Fourier transform of the potential $V_{11,\vc{q'},\omega'}=\sum_\pm i V_B \delta_{\vc{q'},\pm \vc{q}}/ 2(\omega'+i0 \mp \omega)$ and $V_{12,\vc{q'},\omega'}= i V_R \delta_{\vc{q},\vc{q'}}/ (\omega'+i0 - \omega)$, Eqs.(\ref{psiB0},\ref{psiB},\ref{psiR}) are calculated using the contour integration method over $\omega'$ by analytic continuation in the lower half plane. As a consequence, the poles of the integrand tell about the excitation frequencies induced by the external perturbation. 
The pole of the propagator containing $\vc{k}$ corresponds to an atom excitation involving one mode only, while the poles coming from the screening factors correspond to the excitations involving all modes $\vc{k}$ collectively. Thus, the TDHF approach predicts both single atom and collective excitations. Note that the single mode excitation is not possible for the condensed atoms since the corresponding pole is compensated by a zero coming from the screening factor. The expressions (\ref{psiB0},\ref{psiB},\ref{psiR}) have an interpretation shown in Fig.3. An atom of momentum $\vc{k}$ is scattered into a state of momentum $\vc{k+q'}$ by means of an external interaction mediated by a virtual collective excitation of momentum $\vc{q'}$. \begin{figure} \label{vc} \begin{center} \resizebox{0.50\columnwidth}{!} {\includegraphics{vcondensonh.eps}} \end{center} \caption{Diagrammatic representation of the scattering of an atom by an external potential} \end{figure} \section{Bragg scattering} Let us first review the Bragg scattering process. Up to the second order in the Bragg potential, the atom number for any mode $\vc{k}$ can be decomposed into an unscattered part: \begin{eqnarray} N_\vc{k}^{unscat}=N_\vc{k} \left[1+\int_V d^3 \vc{r} (\psi^{(0)*}_{1,\vc{k}} \psi^{(2)}_{1,\vc{k}} + c.c.) \right] \end{eqnarray} and a scattered part: \begin{eqnarray} N_\vc{k}^{scat}=N_\vc{k}\int_V d^3 \vc{r} |\psi^{(1)}_{1,\vc{k}}|^2 \end{eqnarray} Instead of evaluating the second order term, $N_\vc{k}^{unscat}$ is determined through the conservation relation $N_\vc{k}=N_\vc{k}^{unscat}+N_\vc{k}^{scat}$. Generally speaking, within sublevel 1, the scattered atoms cannot be distinguished from the unscattered ones. But in order to understand the underlying physics, we assume that this distinction is possible. Within the second order perturbation theory, the quantity of interest is the scattered atom rate per unit of time and is expected to reach a stationary value after a certain transition time.
In the following, we shall analyze these transition rates for times long enough that transient effects have disappeared. Under these conditions, a perturbative approach remains valid even for very large times, provided that the number of scattered atoms remains small compared to the number of unscattered ones. This last requirement is always satisfied for a sufficiently weak external perturbation. At zero temperature, only the condensed wave function is modified and Eq.(\ref{psiB0}) becomes, after contour integration over $\omega'$: \begin{eqnarray}\label{T0} \psi^{(1)}_{1,\vc{0}}(\vc{r},t)=\frac{V_B}{2i}\psi^{(0)}_{1,\vc{0}}(\vc{r},t) \sum_\pm e^{\pm i\vc{q}.\vc{r}}\!\! \left[\frac{(e^{-i\epsilon^B_{\vc{q}}t}-e^{\mp i\omega t}) (\epsilon^B_{\vc{q}}+\epsilon_{\vc{q}})}{2 \epsilon^B_{\vc{q}} (\epsilon^B_{\vc{q}} \mp \omega)}+ \frac{(e^{i\epsilon^B_{\vc{q}}t}-e^{\mp i\omega t}) (\epsilon_{\vc{q}}-\epsilon^B_{\vc{q}})}{2 \epsilon^B_{\vc{q}} (\epsilon^B_{\vc{q}} \pm \omega)} \right] \end{eqnarray} The response function is resonant only at the Bogoliubov energy $\pm \epsilon_\vc{q}^B$. Moreover, no transient response appears at zero temperature. 
Using (\ref{psiB0}) and (\ref{psiB}), the total number of scattered atoms can be obtained by determining the total momentum: \begin{eqnarray}\label{P} \vc{P}= \sum_\vc{k} N_\vc{k} \int_V d^3\vc{r}\, \psi^*_{1,\vc{k}} \frac{\nabla_\vc{r}}{i} \psi_{1,\vc{k}} = \sum_\vc{k} N_\vc{k} \int_V d^3\vc{r}\, |\psi^{(1)}_{1,\vc{k}}|^2 \vc{q} \end{eqnarray} In the large time limit, the total momentum rate is related to the imaginary part of the susceptibility response function $\chi=\chi'-i\chi''$ through \cite{Ketterle,Stringari}: \begin{eqnarray}\label{P2} \frac{d \vc{P}}{dt}\stackrel{t \rightarrow \infty }{=}2\vc{q} \left(\frac{V_B}{2}\right)^2 \chi''(\vc{q},\omega) \end{eqnarray} Using Eq.(\ref{T0}), we recover: \begin{eqnarray}\label{suscB} \chi''(\vc{q},\omega)= \pi S_\vc{q}N_\vc{0}(\delta(\omega - \epsilon^B_\vc{q})-\delta(\omega +\epsilon^B_\vc{q})) \end{eqnarray} where $S_\vc{q}=\epsilon_\vc{q}/\epsilon^B_\vc{q}$ is the static structure factor. The delta function comes from the relation $\delta(x)=\lim_{t \rightarrow \infty} \sin(xt)/(\pi x)$. The result (\ref{suscB}), obtained in the GRPA, is identical to the one obtained from the Bogoliubov approach, in which $S_\vc{q}$ can be calculated equivalently from the four-point correlation function \cite{Stringari,books}. In any case, the generated phonon-like excitation is still part of the macroscopic wave function $\psi_{1,\vc{0}}(\vc{r},t)$. At nonzero temperatures, the poles acquire an imaginary part, which means that any Bogoliubov excitation is absorbed by a thermal atom excitation \cite{Reidl,condenson}. This phenomenon is known as Landau damping. Hence, at long times only the residues of (\ref{psiB0}) with poles touching the real axis contribute, whereas the others give rise to transient terms that become negligible. 
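The suppression of $S_\vc{q}$ in the phonon regime can be illustrated numerically. The sketch below is our own minimal check, assuming the standard Bogoliubov form $\epsilon^B_\vc{q}=\sqrt{\epsilon_\vc{q}(\epsilon_\vc{q}+2g_{11}n)}$ (the text uses $\epsilon^B_\vc{q}$ only symbolically) and units with $\hbar=1$:

```python
import math

def bogoliubov_energy(eps_q, gn):
    """Standard Bogoliubov dispersion eps^B_q = sqrt(eps_q (eps_q + 2 g n)).
    This explicit form is an assumption; the text only uses eps^B_q symbolically."""
    return math.sqrt(eps_q * (eps_q + 2.0 * gn))

def static_structure_factor(eps_q, gn):
    """S_q = eps_q / eps^B_q, as in Eq. (suscB)."""
    return eps_q / bogoliubov_energy(eps_q, gn)

# Phonon regime, eps_q << g n: S_q is strongly suppressed (screening).
print(static_structure_factor(0.01, 1.0))
# Free-particle regime, eps_q >> g n: S_q approaches 1.
print(static_structure_factor(100.0, 1.0))
```

In the phonon regime the printed value is small, while in the free-particle regime it approaches unity, in line with the suppression of the condensate response discussed in the text.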
Thus the perturbative part becomes: \begin{eqnarray} \psi^{(1)}_{1,\vc{0}}(\vc{r},t)&\stackrel{t \rightarrow \infty}{=}& \frac{V_B}{2i}\psi^{(0)}_{1,\vc{0}}(\vc{r},t) \sum_\pm \frac{e^{\pm i(\vc{q}.\vc{r}-\omega t)}}{ {\tilde{\cal K}}(\pm \vc{q},\pm \omega)(\pm \omega-\epsilon_\vc{q})} \\ \psi^{(1)}_{1,\vc{k}}(\vc{r},t)&\stackrel{t \rightarrow \infty}{=}& \frac{V_B}{2i}\psi^{(0)}_{1,\vc{k}}(\vc{r},t) \sum_\pm \left(\frac{e^{\mp i\omega t}}{ {{\cal K}}(\pm \vc{q},\pm \omega)}- \frac{e^{-i(\epsilon_{\vc{k}\pm \vc{q}}-\epsilon_\vc{k})t}}{ {{\cal K}}(\pm \vc{q},\epsilon_{\vc{k}\pm \vc{q}}-\epsilon_\vc{k})}\right) \frac{e^{\pm i\vc{q}.\vc{r}}}{(\pm \omega-\epsilon_{\vc{k}\pm \vc{q}}+\epsilon_\vc{k})} \end{eqnarray} Using the property $\Delta(\vc{q},\omega)=\Delta^*(-\vc{q},-\omega)$, the total number of scattered atoms in the condensed mode reaches a constant value \begin{eqnarray}\label{n0scat} N_\vc{0}^{scat}\stackrel{t \rightarrow \infty}{=} \left(\frac{V_B}{2}\right)^2 \frac{2(\epsilon^2_\vc{q}+\omega^2)N_\vc{0}} {|\Delta(\vc{q},\omega)|^2} \end{eqnarray} and the scattered thermal atom rate is given by: \begin{eqnarray}\label{nscatt} \frac{dN_\vc{k}^{scat}}{dt}\stackrel{t \rightarrow \infty}{=}2\pi \left(\frac{V_B}{2}\right)^2 \sum_\pm \frac{\delta(\pm \omega-\epsilon_{\vc{k}\pm \vc{q}}+\epsilon_\vc{k})N_\vc{k}} {|{\cal K}(\vc{q}, \omega)|^2} \end{eqnarray} From (\ref{P2}), we deduce for the imaginary susceptibility: \begin{eqnarray}\label{chiT} \chi''(\vc{q},\omega)= -\frac{1}{g_{11}} {\rm Im}\left(\frac{1}{{\cal K}(\vc{q},\omega)}\right) \end{eqnarray} The basic interpretation of these formulae is the following. At finite temperature, the collective excitation modes created by the external perturbation are damped over a time given by the inverse of the Landau damping rate. So the number of collectively excited condensed atoms reaches the constant value (\ref{n0scat}) when the production rate of collective excitations compensates their absorption rate by thermal atoms. 
This constant value is higher for a transition frequency and a transferred momentum close to the resonance $\omega=\epsilon_c \sim \pm \epsilon^B_\vc{q}$. The formula (\ref{nscatt}) is a generalization of the Fermi golden rule that takes the screening effect into account. The external potential perturbs the thermal atoms of momentum $\vc{k}$ in two channels, by transferring a momentum $\pm \vc{q}$ and a transition energy $\pm \omega$ such that the resulting single atom excitation has a momentum $\vc{k} \pm \vc{q}$ and a kinetic energy $\epsilon_{\vc{k} \pm \vc{q}}=\epsilon_\vc{k}\pm \omega$. The presence of the screening factor amplifies or reduces the scattering rate. Amplification (or anti-screening) occurs for a frequency close to the resonance energy $\epsilon_c$ of the collective excitations. On the contrary, dynamical screening occurs for a frequency close to the pole of the screening factor and is total for transitions involving a condensed atom at $\omega=\epsilon_\vc{q}$. Thus, in the GRPA, any attempt to generate incoherence through single condensed atom scattering is forbidden at finite temperature. Only collective excitations affect the condensed mode, but they are damped and therefore cannot effectively transfer condensed atoms to a different mode \cite{condenson}. It is taught in standard textbooks \cite{books} that, in the impulse approximation used for large $\vc{q}$, the response of the system is sensitive to the momentum distribution of the gas, since the atoms behave like independent particles. In particular, a delta peak is expected to account for the presence of a condensate fraction. The difficulty of observing this peak could be explained by this impossibility of exciting a single condensed atom at finite temperature. 
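The kinematics behind the delta function in (\ref{nscatt}) can be made explicit with a short numerical check. This is a sketch of our own in units $\hbar=m=1$, with hypothetical function names; it only verifies the energy-conservation condition $\epsilon_{\vc{k}\pm\vc{q}}=\epsilon_\vc{k}\pm\omega$ stated in the text:

```python
import numpy as np

m = 1.0  # atom mass (units with hbar = 1)

def eps(k):
    """Free-particle kinetic energy eps_k = |k|^2 / (2m)."""
    k = np.asarray(k, dtype=float)
    return k.dot(k) / (2.0 * m)

def bragg_resonances(k, q):
    """Transition frequencies allowed by the delta function in Eq. (nscatt):
    delta(+-w - eps_{k+-q} + eps_k) = 0  =>  w = +-(eps_{k+-q} - eps_k)."""
    k, q = np.asarray(k, dtype=float), np.asarray(q, dtype=float)
    return eps(k + q) - eps(k), -(eps(k - q) - eps(k))

# A thermal atom absorbing momentum +q must absorb energy w = eps_{k+q} - eps_k:
k = np.array([0.5, 0.0, 0.0])
q = np.array([1.0, 0.0, 0.0])
w_plus, w_minus = bragg_resonances(k, q)
print(np.isclose(eps(k + q), eps(k) + w_plus))  # energy conservation holds
```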
For completeness, let us mention that the interaction with thermal atoms can also be totally screened; inspection of the formulae (\ref{K}) shows that this happens for $\epsilon_g=\pm \sqrt{\epsilon_\vc{q}^2-c^2\vc{q}^2}$ \cite{Zhang}. Fig.~\ref{fig:0} shows these features in the frequency spectrum of the total momentum rate (\ref{chiT}) at fixed $\vc{q}$. We choose the typical density observed experimentally for $^{87}$Rb at the trap center \cite{Stringari}. \begin{figure} \begin{center} \resizebox{0.75\columnwidth}{!}{ \includegraphics{imoverkp.eps}} \end{center} \caption{Imaginary susceptibility $\chi''$ of a bulk Bose condensed gas for $\epsilon_\vc{q}= 2\pi \times 30 {\rm kHz}$ versus the detuning frequency $\delta \omega$. The superfluid fraction is 94\%, $g_{11}n=2\pi \times 4.3 {\rm kHz}$, $k_B T/g_{11}n=2.11$ and $a_{11}^3n= 5.6 \times 10^{-5}$. The black dashed/solid curve is the rate calculated in the absence/presence of the screening factor. The regimes of screening and anti-screening are displayed close to the zero $\epsilon_g$ and to the resonance $\epsilon_c$, respectively. In particular, the screening prevents the observation of a huge delta peak associated with the condensed mode.} \label{fig:0} \end{figure} These results can be put in direct relation with the analysis of impurity scattering \cite{Nozieres}. Indeed, the dynamic response function is related to the dynamic structure factor through the fluctuation-dissipation theorem: $S(\vc{q},\omega)=\chi''(\vc{q},\omega)/[\pi(1-\exp(-\beta \omega))]$. 
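The fluctuation-dissipation relation above can be exercised directly. The sketch below is our own minimal illustration, not tied to the specific GRPA $\chi''$; it verifies the detailed-balance property $S(\vc{q},-\omega)=e^{-\beta\omega}S(\vc{q},\omega)$ that follows from the theorem when $\chi''$ is odd in $\omega$:

```python
import math

def bose_occupation(eps, beta):
    """Bose occupation n^B = 1/(exp(beta*eps) - 1)."""
    return 1.0 / math.expm1(beta * eps)

def dynamic_structure_factor(chi2, omega, beta):
    """Fluctuation-dissipation theorem:
    S(q, w) = chi''(q, w) / (pi * (1 - exp(-beta*w)))."""
    return chi2 / (math.pi * (1.0 - math.exp(-beta * omega)))

# With chi'' odd in omega (chi''(-w) = -chi''(w)), the theorem implies the
# detailed-balance relation S(q,-w) = exp(-beta*w) * S(q,w):
beta, omega, chi2 = 0.7, 1.3, 0.42
s_plus = dynamic_structure_factor(chi2, omega, beta)
s_minus = dynamic_structure_factor(-chi2, -omega, beta)
print(abs(s_minus - math.exp(-beta * omega) * s_plus) < 1e-12)  # True
```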
The dynamic structure factor is directly connected to the transition probability rate ${\cal P}(\vc{q},\omega)$ that an external particle or impurity changes its initial momentum $\vc{p}$ and energy $E_\vc{p}$ into $\vc{p} +\vc{q}$ and $E_{\vc{p}+\vc{q}}=E_\vc{p}+\omega$, respectively: \begin{eqnarray} {\cal P}(\vc{q},\omega)=2\pi |{\cal{V}}_\vc{q}|^2 S(\vc{q},\omega) \end{eqnarray} where ${\cal{V}}_\vc{q}$ is the Fourier transform of the interaction potential between the impurity and the atom gas. The total rate of scattering $\Gamma_\vc{p}$ results from a virtual process involving emission and absorption of the collective excitations: \begin{eqnarray}\label{Gam} \Gamma_\vc{p}&=&\sum_\vc{q} 2\pi |{\cal{V}}_\vc{q}|^2 S(\vc{q},E_{\vc{p}+\vc{q}}-E_\vc{p}) \\ \label{Gam2} &=&\sum_{\vc{q},\vc{k}} 2\pi \left|\frac{{\cal{V}}_\vc{q}}{{\cal{K}}(\vc{q},E_{\vc{p}+\vc{q}}-E_\vc{p})}\right|^2 \delta(\epsilon_\vc{k}+E_\vc{p}-E_{\vc{p}+\vc{q}}- \epsilon_{\vc{k}-\vc{q}})N'_\vc{k}(1+N'_{\vc{k}-\vc{q}}) \end{eqnarray} As a consequence, impurity scattering is possible provided that energy and momentum are conserved in an effective collision with a thermal atom of momentum $\vc{k}$ mediated by a virtual collective excitation. Note that total screening prevents impurity scattering involving incoming and outgoing condensed atoms. In contrast, for temperatures close to zero, the Landau damping approaches zero since $\chi_0(\vc{q},\omega) \rightarrow 0$, so that the application of Eq.(\ref{suscB}) to (\ref{Gam}) leads to an on-energy-shell process of absorption and emission of a collective excitation. We obtain: \begin{eqnarray}\label{Gam3} \Gamma_\vc{p}&=&\sum_{\pm,\vc{q}} 2\pi |{\cal{V}}_\vc{q}|^2 S_\vc{q} (n^B_{\vc{q}}+\delta_{\pm,+}) \delta(\pm \epsilon^B_\vc{q} +E_{\vc{p}+\vc{q}}-E_\vc{p}) \end{eqnarray} where $n^B_{\vc{q}}=1/(\exp(\beta \epsilon^B_\vc{q})-1)$. This limiting case leads to the intuitive picture of an impurity interacting with a thermal bath of phonon-like quasi-particles. 
This situation has been considered in \cite{Montina} in the study of impurity dynamics. Instead, Eq.(\ref{Gam2}) provides a generalization to higher temperatures, emphasizing that any external particle can excite a single thermal atom alone but not a condensed one. \section{Raman scattering} The conclusions obtained so far for the Bragg process can be extended straightforwardly to the case of Raman scattering, with the difference that only one scattering channel is possible. For simplicity, we choose the case $g=g_{ab}$; this channel is also easier to access experimentally. Defining the detuning $\delta \omega=\omega-\omega_0$, explicit calculation of the spinor component (\ref{psiR}) in the second sublevel gives: \begin{eqnarray} \psi^{(1)}_{2,\vc{k}}(\vc{r},t)&\stackrel{t \rightarrow \infty}{=}& \left(\frac{e^{- i\omega t}}{ {{\cal K}}_{12}(\vc{q},\omega)}- \frac{e^{i(\epsilon_\vc{k}+gn-\epsilon_{\vc{k}+\vc{q}}-\omega_0)t}}{ {{\cal K}}_{12}(\vc{q},\omega_0+ \epsilon_{\vc{k} + \vc{q}}-\epsilon_\vc{k}-gn)}\right) \frac{e^{i\vc{q}.\vc{r}}V_R\psi^{(0)}_{1,\vc{k}}(\vc{r},t)} {i(\delta\omega+\epsilon_\vc{k}+gn-\epsilon_{\vc{k}+\vc{q}})} \end{eqnarray} We thus obtain for the atom number in mode $\vc{k}$: \begin{eqnarray} \frac{dN_{2,\vc{k}}}{dt}\stackrel{t \rightarrow \infty}{=}2\pi V_R^2 \frac{\delta(\delta \omega-\epsilon_{\vc{k}+\vc{q}}+\epsilon_\vc{k}+gn)N_\vc{k}} {|{\cal K}_{12}(\vc{q},\omega)|^2} \end{eqnarray} By summing over all the modes, we obtain the density rate transferred into level 2: \begin{eqnarray} \frac{dn_{2}}{dt}=\frac{d}{dt}(\sum_\vc{k} N_{2,\vc{k}}/V) \stackrel{t \rightarrow \infty}{=}2 V_R^2 \chi''_{12}(\vc{q},\omega) \end{eqnarray} where we define the imaginary part $\chi_{12}=\chi'_{12}-i\chi''_{12}$ of the intercomponent susceptibility function: \begin{eqnarray}\label{chiRPA} \chi_{12}(\vc{q},\omega)= \chi_{0,12}(\vc{q},\omega)/(1-g\chi_{0,12}(\vc{q},\omega)) \end{eqnarray} This last formula is also the one obtained in the 
GRPA \cite{Levitov}. Again we find a structure similar to that of the intracomponent case. In this process, thermal atoms with an initial momentum $\vc{k}$ and energy $\epsilon^{HF}_{1,\vc{k}}= 2gn + \epsilon_\vc{k}$ are transferred into the second level with momentum $\vc{k+q}$ and energy $\epsilon^{HF}_{2,\vc{k+q}}=gn +\epsilon_{\vc{k}+\vc{q}}$, provided $\delta \omega= \epsilon^{HF}_{2,\vc{k+q}}- \epsilon^{HF}_{1,\vc{k}}$. In the absence of screening, a resonance appears at the detuning $\epsilon_g=\epsilon_\vc{q}-gn$. The first term corresponds to the usual recoil energy, while the second is the gap energy $gn$ that results from the exchange interaction. During the Raman transition, the transferred atoms become distinguishable from the others and release this gap energy. The scattering rate is determined through the imaginary part of the susceptibility Eq.(\ref{chiRPA}) versus the transition frequency $\omega$ at fixed $\vc{q}$. Figs.~\ref{fig:1} and \ref{fig:2} show the corresponding resonance around this gap in the absence of screening. \begin{figure} \resizebox{1\columnwidth}{!}{ \includegraphics{screenbogp.eps} \includegraphics{logscreenbogp.eps}} \caption{Imaginary susceptibility $\chi''_{12}$ of a bulk Bose condensed gas for $\epsilon_\vc{q}= 2\pi \times 30 {\rm Hz}$ versus the detuning frequency $\delta \omega$. Parameter values are those of Fig.~\ref{fig:0}. Left and right graphs represent the same curves, but the right graph is on a logarithmic scale. The black dashed/solid curve is calculated in the absence/presence of the screening factor, while the dotted curve represents the Bogoliubov approximation. See the grey curve for a magnification of the black solid curve ($\times 25$).} \label{fig:1} \end{figure} \begin{figure} \resizebox{1\columnwidth}{!}{ \includegraphics{screenbog2p.eps} \includegraphics{logscreenbog2p.eps}} \caption{Same as Fig.~\ref{fig:1} but for $\epsilon_\vc{q}= 2\pi \times 300{\rm Hz}$. 
Here, the broadening of the curves is much larger.} \label{fig:2} \end{figure} The screening effect strongly reduces the Raman scattering and, in particular, forbids it for atoms with momentum $\vc{k}$ such that $\vc{k}.\vc{q}=0$. This case corresponds to $\delta \omega=\epsilon_g$ and also includes the condensed atoms ($\vc{k}=0$). The graphs illustrate well how the macroscopic wave function deforms its shape in order to locally attenuate the external potential created by the Raman light beams and thus prevent incoherent scattering of the condensed atoms. The experimental observation of this result would clarify some of the reasons why a superfluid condensate moves coherently, without any friction with its surroundings. Anti-screening occurs in the region close to the resonance frequency $\epsilon_c$ of the collective mode. At zero temperature, we recover $\epsilon_c=\epsilon_\vc{q}$ \cite{Fetter}, while for nonzero temperature the collective modes become damped for $\vc{q}\not= 0$ \cite{gap}. These results can be compared to those obtained from the non-conserving Bogoliubov approximation developed in \cite{gap}, which is valid only for a weakly depleted condensate. This approach implicitly assumes that the only elementary excitations are the collective ones and that they form a basis of orthogonal quantum states describing the thermal part of the gas. Consequently, this formalism predicts no gap and no screening. Instead, the intercomponent susceptibility describes transitions involving the two collective excitation modes of phonons $\epsilon_\vc{k}^B$ and of rotation in spinor space $\epsilon_\vc{k}$: \begin{eqnarray}\label{BP} \chi^{B}_{12}(\vc{q},\omega)= \frac{n_\vc{0}}{\omega-\epsilon_\vc{q}+i0}+\frac{1}{V}\sum_{\pm,\vc{k}} \frac{u^2_{\pm,\vc{k}} (n^B_{\vc{k}}+\delta_{\pm,-})}{\omega+i0 \pm \epsilon^B_{\vc{k}}-\epsilon^{}_{\vc{k}\pm\vc{q}}} \end{eqnarray} where $u_{\pm,\vc{k}}=\pm [(\epsilon_\vc{k}+gn_\vc{0})/2\epsilon^B_\vc{k}\pm 1/2]^{1/2}$. 
This function does not preserve the f-sum rule associated with the $SU(2)$ symmetry. In contrast to the GRPA, a delta peak describes a spinor rotation transition of the condensed fraction, and two other transitions involve the transfer of an excitation from a phonon mode into a rotation mode and the creation of excitations in the two modes simultaneously. For small $\vc{q}$, these processes remain dispersive, since the transition frequency depends on the momentum $\vc{k}$. As a consequence, the resulting spectrum shown in Figs.~\ref{fig:1} and \ref{fig:2} is broader. In particular, the process of creation in the two modes favors transitions with positive frequency. Note also the maximum of the curve, which separates the region involving atom-atom like transitions (high $\vc{k}$) from the one involving phonon-atom like transitions (low $\vc{k}$). All these features established so far for the bulk case allow a clear comparison between the GRPA and the Bogoliubov approaches. In the real case of a parabolic trap, the inhomogeneity induces an additional broadening of the spectrum that prevents the direct observation of the screening. This effect, as well as the finite time resolution and the difference between the scattering lengths, will be discussed in a subsequent work. \section{Conclusions and perspectives} \label{sec:3} We have analyzed the many body properties that can be extracted from Raman scattering in the framework of the GRPA. The calculated spectrum allows us to show the existence of a second branch of excitation, but also the screening effect, which prevents the excitation of the condensed mode alone. The observation of phenomena like the gap and the dynamical screening could have significant repercussions on our microscopic understanding of a finite temperature Bose condensed gas and its superfluidity mechanism. On the contrary, the non-observation of these phenomena would imply that the {\it gapless} and {\it conserving} GRPA is not valid. 
In that case, a different approximation has to be developed in order to explain what will be observed. As an alternative, the idea of using the Bogoliubov approach has also been discussed. Unfortunately, the violation of the f-sum rule is a serious concern regarding this {\it non conserving} approach \cite{gap}. All these aspects emphasize the importance of an experimental study of Raman scattering at finite temperature. PN thanks the referees for useful comments and acknowledges support from the Belgian FWO project G.0115.06, from the Junior fellowship F/05/011 of the KUL research council, and from the German AvH foundation.
\section{Introduction} Macromolecular crystals represent at the same time a particular case of a solid state system and of a polymer system. The simplest and paradigmatic case in this field is crystalline polyethylene (PE). As is well known, it is very difficult to obtain reliable experimental data for these systems, mainly due to problems associated with preparing sufficiently large monocrystalline samples. This increases the value of theoretical insight for understanding and eventual prediction of their properties. However, compared to the case of liquid polymer phases, a quantitative description of the structure and properties of a solid phase is a more subtle problem. While in the former case it is often possible to replace whole monomer groups by united atoms and constrain the bond lengths and sometimes even the bond angles, in the case of macromolecular crystals it is instead preferable to employ a force field for all atoms, without enforcing any constraints on the degrees of freedom \cite{wunderlich}. Apart from the study of the ground state crystal structure and properties, which can be readily done within the framework of molecular mechanics once a force field is available, the main interest lies in the finite temperature properties. At low temperatures, where the creation of conformational defects is unlikely and the displacements of the atoms are relatively small, the structure of the PE crystal is dominantly governed by the packing energetics of all-trans conformation chains. In this regime, phonon modes are well defined excitations of the system, and a possible theoretical approach is the use of quasi-harmonic or self-consistent quasi-harmonic approximations, which have the advantage that quantum effects can easily be taken into account \cite{rutledge,hagele,stobbe}. 
Such methods are, however, intrinsically incapable of treating correctly the large amplitude, anharmonic motion of whole chain segments, which arises as the temperature approaches the melting point (at normal pressure, about 414 K for a PE crystal). Actually, for crystalline PE they start to fail already at room temperature, 300 K \cite{stobbe}. If melting is prevented by increasing the external pressure, the orthorhombic crystal structure may persist to higher temperatures, and eventually undergo a phase transition into the hexagonal "condis" phase, characterized by a large population of gauche defects \cite{condis}. In order to study the complex behavior in this high temperature regime, computer simulation techniques, like molecular dynamics or Monte Carlo, appear to be a particularly convenient tool. Using a suitable ensemble and an appropriate simulation technique one can evaluate various thermodynamic quantities, like the specific heat, elastic constants or thermal expansion coefficients, and structural quantities, such as bond lengths and angles, or defect concentrations. It is also possible to directly study a variety of physically interesting phenomena associated with the structural phase transitions between different crystal modifications. Computer simulations are, however, still limited to relatively small systems. In the case of macromolecular crystals, the situation is rather peculiar due to the extreme anisotropy of the system, originating from its quasi-one-dimensional character. When the chains are short, the system crosses over from a PE crystal to an $n$-paraffin crystal, which behaves in a substantially different way. Owing to the easy activation of chain rotation and diffusion, $n$-paraffin crystals undergo, as the temperature is increased, a characteristic series of phase transitions depending on the chain length ("rotator phases", \cite{ungar,rk}). 
This fact makes it necessary to study and understand the related finite-size effects if the results of the simulation are to be regarded as representative of the limit of very long chains, corresponding to PE, and interpreted in a consistent way. The choice of boundary conditions also represents a non-trivial issue, as documented by the very extensive explicit atom MD simulation of Ref.~\cite{wunderlich}, in which a transition from an initial orthorhombic structure to a parallel zig-zag chain arrangement was observed already at 111 K, contrary to experiment, probably because of the use of free boundary conditions in all spatial directions and the related large surface effect. Finite temperature simulations of realistic, explicit atom models of crystals with long methylene chains, of which we are aware so far, have exclusively used molecular dynamics as the sampling method. Ryckaert and Klein \cite{rk} used a version of the Parrinello-Rahman MD technique with variable cell shape to simulate an $n$-paraffin crystal with constrained bond lengths and periodic boundary conditions. Sumpter, Noid, Liang and Wunderlich have simulated large crystallites consisting of up to $10^4$ CH$_2$ groups, using free boundary conditions \cite{wunderlich}. Recently, Gusev, Zehnder and Suter \cite{gusev} have simulated a PE crystal at zero pressure using periodic boundary conditions, also employing the Parrinello-Rahman technique. On the other hand, not much MC work seems to exist in this field; to our knowledge, the only study is that of Yamamoto \cite{yamamoto}, which was aimed specifically at the high temperature "rotator" phases of $n$-paraffins. It concentrated exclusively on the degrees of freedom associated with the packing of whole chains, assumed to be rigid, completely neglecting the internal, intramolecular degrees of freedom. The aim of this paper is twofold. 
On the one hand, we would like to explore the applicability of the MC method to a classical simulation of a realistic model of a PE crystal, using an explicit atom force field without any constraints and periodic boundary conditions in all spatial directions. In order to have direct access to quantities like thermal expansion coefficients or elastic constants, we choose to work at constant pressure. If MC turns out to be a well applicable method for this system on the classical level, this would be a promising sign for an eventual Path Integral Monte Carlo (PIMC) study allowing quantum effects, known to play a crucial role at low temperatures \cite{rutledge,hagele,stobbe}, to be taken into account as well. It is known that for path integral techniques the use of MC is generally preferred to MD, because of the problem of non-ergodicity of the pseudo-classical system representing the quantum one in a path integral scheme. As a related problem, we would also like to understand the finite-size effects and determine, for each particular temperature, the minimal size of the system that has to be simulated in order to be reasonably representative of the classical PE crystal. Such information would also be very useful for an eventual PIMC study, where an additional finite-size scaling has to be performed because of the finite Trotter number. On the other hand, we would like to study the physics of the orthorhombic phase of the PE crystal in the classical limit over the whole temperature range of its experimentally known existence at normal pressure. We stress that our goal here is not to tune the force field in order to improve the agreement between simulation and experiment. Our emphasis rather lies on providing results calculated for a given force field with an essentially exact classical technique. 
These might in the future serve as a basis for estimating the true importance of quantum effects, once these can be taken into account by a PIMC technique, as well as for assessing the range of validity of different classical and quantum approximation schemes. Some preliminary simulation results of this study have been presented briefly in recent conference proceedings \cite{rm1}. The paper is organized as follows. In Sect.~II, we describe the force field as well as the constant pressure simulation method used. The MC algorithm itself will be addressed only briefly, since it has already been discussed in detail in its constant volume version in Refs.~\cite{rm,rm1}. In Sect.~III, we present the results for the structural and thermodynamic properties of the orthorhombic PE crystal obtained from zero pressure simulations in the temperature range 10 -- 450 K, for different chain lengths and system sizes. We discuss the temperature dependence of the measured quantities as well as the related finite-size effects, and compare our results to those obtained from other theoretical approaches, as well as to some available experimental data. In the final Sect.~IV we then draw conclusions and suggest some possible further directions. \section{Constant pressure simulation method} In this section we describe some details of the simulation method, as well as of the force field used. Before doing so, we recall the structure of the orthorhombic PE crystal \cite{kavesh}. The unit cell contains two chains, each consisting of 2 CH$_2$ groups, giving a total of 12 atoms per unit cell. The all-trans chains extend along the crystallographic $c$-direction ($z$-axis) and are packed in a "herringbone" arrangement, characterized by the setting angle $\psi$ (the angle between the $xz$ plane and the plane containing the carbon backbone of a chain) alternating from one row of chains to another between the values $\pm |\psi|$. 
The packing is completely determined by specifying the 3 lattice parameters $a,b,c$ as well as the value of $|\psi|$. To specify the internal structure of the chains, three additional parameters are needed, which may be taken to be the bond lengths $r_{CH}$ and $r_{CC}$ and the angle $\theta_{HCH}$. We have simulated a super-cell containing $i \times j \times k$ unit cells of the crystal, $i,j,k$ being integers. Periodic boundary conditions were applied in all three spatial directions, in order to avoid surface effects. Our PE chains with backbones consisting of $2k$ carbon atoms are thus periodically continued beyond the simulation box and do not have any chain ends. As the study was aimed specifically at the orthorhombic phase of the PE crystal and we did not expect phase transitions into different crystal structures, we did not consider general fluctuations of the super-cell shape, otherwise common in the Parrinello - Rahman MD method. We have constrained the crystal structure angles $\alpha,\beta,\gamma$ to the right angle value, not allowing for shear fluctuations. The volume moves employed thus consisted only of an anisotropic rescaling of the linear dimensions of the system by three scaling factors $s_1,s_2,s_3$, which relate the instantaneous size of the super-cell to that of the reference one. The reference super-cell always corresponded to lattice parameters $a = 7.25 \AA, b = 5.00 \AA, c = 2.53 \AA$. The acceptance criterion for the volume moves was based on the Boltzmann factor $(s_1 s_2 s_3)^N e^{-\beta H}$, where $H = U + p V_0 s_1 s_2 s_3$, $U$ is the potential energy of the system, $p$ is the external pressure, $V_0$ the volume of the reference super-cell, and $N$ the total number of atoms in the system. Throughout all simulations described in this paper, the external pressure was set to zero. We come now to the description of the potential. 
We have used the force field developed for the PE crystal by Sorensen, Liau, Kesner and Boyd \cite{sb}, with several slight modifications of the bonded interaction. This consists of diagonal terms corresponding to bond stretching, angle bending, and torsions, as well as of off-diagonal bond-bond, bond-angle and various angle-angle terms. For convenience, we have changed the form of the expansion in bond angles, replacing the expressions $(\theta - \theta_0)$ in all terms by $(\cos\theta - \cos\theta_0)/(-\sin\theta_0)$. The original form of the vicinal bend-bend interaction, having two different force constants, $k_T$ and $k_G$ for torsional angles close to trans and gauche minima, respectively, is useful for ground state studies, but not for a finite-temperature simulation, where the torsional angles may continuously fluctuate from one minimum to another. The form of this interaction has therefore been changed into the one used in Ref.\cite{kdg}, $ k \cos\varphi (\cos\theta_1 - \cos\theta_0) (\cos\theta_2 - \cos\theta_0)$, where $\theta_1,\theta_2$ are bond angles, $\varphi$ is a torsional angle, and $k$ is a new force constant. The value of the latter constant was taken to be $-k_T/\sin^2\theta_0$ for C-C-C-C torsions and $2 k_G/\sin^2\theta_0$ for C-C-C-H torsions, in order to reproduce the curvature of the potential in the vicinity of the ground state equilibrium value of each torsional angle in the PE crystal. For H-C-C-H torsions, which are in the ground state in both trans and gauche minima, we took for $k$ an approximate value of $k = \left(\sqrt{k_G k_T/\cos {{\pi}\over{3}} \cos {\pi}} \, \right)/ \sin^2\theta_0$, which guarantees that both original force constants $k_T$ and $k_G$ are approximated in the vicinity of the respective minimum with the same error of about 8 \%. At this point we comment on the torsional potential used. 
As the force field \cite{sb} was originally designed for ground-state studies, the explicit torsional potential for all torsions contains only the term $\cos 3\varphi$, which yields zero energy difference between the trans and gauche minima in C-C-C-C torsions. The actual difference thus comes just from the non-bonded interaction superposed over 1 -- 4 atoms, and is, according to Ref.~\cite{boydpr}, about 340 K, a somewhat higher value than the generally accepted one of about 250 K. However, the use of periodic boundary conditions in any case strongly inhibits the creation of conformational defects, and therefore this difference is not likely to play an important role, at least in the temperature range studied. Concerning the non-bonded interaction, we used a spherical cutoff of 6 $\AA$ for all pair interactions, which corresponded to the interaction of a given atom with about 110 neighbors. The list of neighbors was determined at the beginning of the simulation with respect to the reference structure, and kept fixed throughout the evolution of the system (topological interaction). The use of the topological interaction would clearly preclude the longitudinal diffusion of the chains by creating an artificial energy barrier, which might be unrealistic for a study of short alkanes. However, since our main interest lies in the limit of very long chains, this approximation, common in the study of crystals, is acceptable here, and brings the advantage of considerably speeding up the execution of the program. The reference structure was obtained by placing the ideal crystal structure, described at the beginning of this section, inside the reference box, taking for the setting angle $|\psi|$ and the internal chain parameters the values $|\psi| = 43.0 ^{\circ}, r_{CC} = 1.536 \AA, r_{CH} = 1.09 \AA, \theta_{HCH} = 107.4 ^{\circ}$, respectively. 
These values turned out to be, to a good approximation, close to their true average values throughout the whole temperature range of the simulation, thus confirming the consistency of this choice. Because of the relatively low cutoff used, we have added long-range corrections to the non-bonded energy and to the diagonal components of the pressure tensor. These corrections were calculated for the reference structure in the static lattice approximation, and tabulated on a suitable mesh of scaling factors $s_1,s_2,s_3$. During the simulation, the values of the corrections corresponding to the instantaneous values of the scaling factors were obtained from the table by means of three-dimensional linear interpolation. We note here that our treatment of the non-bonded interaction differs from that used in Ref.\cite{sb}, where the interaction of a given atom with two neighboring shells of chains was taken into account and no long-range corrections were applied.

The MC sampling algorithm for the PE crystal was described in considerable detail in our previous papers \cite{rm,rm1}, and here we describe it only briefly. In addition to the volume moves, we used local moves to displace the atoms and global moves to displace the chains. In the local moves, the atoms of the crystal lattice were visited in sequential order, and different maximum displacements were used for carbon and hydrogen atoms, reflecting the fact that a carbon atom has four covalent bonds while a hydrogen atom has just one. The typical value of the acceptance ratio for the local moves was kept close to 0.3, corresponding at temperature $T = 100$ K to isotropic maximum displacements of 0.03 $\AA$ and 0.06 $\AA$ for carbons and hydrogens, respectively. In the global moves, displacements of the center of mass of a whole chain along all three axes, accompanied by a rotation of the chain about a line parallel to the crystallographic $c$-direction and passing through its center of mass, were attempted.
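A single trial global move can be sketched as below, under the standard Metropolis rule; `energy_fn` is a placeholder for the full potential, and the default maximum displacements are the values quoted further on for C$_{12}$ chains at $T = 100$ K:

```python
import math
import random
import numpy as np

def attempt_global_move(chain_xyz, energy_fn, beta,
                        dmax=(0.11, 0.11, 0.05), dpsi_max=math.radians(11.0)):
    """One trial rigid move of a whole chain: a random displacement of its
    center of mass plus a rotation by dpsi about a line parallel to the z (c)
    axis through the center of mass, accepted with the Metropolis criterion.
    `energy_fn(xyz)` stands in for the full potential energy."""
    old = chain_xyz.copy()
    com = chain_xyz.mean(axis=0)
    dpsi = random.uniform(-dpsi_max, dpsi_max)
    c, s = math.cos(dpsi), math.sin(dpsi)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    shift = np.array([random.uniform(-d, d) for d in dmax])
    # rotate about the vertical axis through the center of mass, then translate
    new = (chain_xyz - com) @ rot.T + com + shift
    dE = energy_fn(new) - energy_fn(old)
    if dE <= 0.0 or random.random() < math.exp(-beta * dE):
        return new, True
    return old, False
```

Since the move is rigid, the internal geometry of the chain (and in particular the $z$-spacing of its atoms) is preserved exactly; only the non-bonded environment changes.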
The fraction of global moves was chosen to be 30 \% in this study, and the global and local moves were alternated at random in order to satisfy the detailed balance condition. Once it was decided to perform a global move, moves were attempted for all chains of the super-cell in sequential order. For C$_{12}$ chains at $T = 100$ K, the maximum (anisotropic) displacements of the chains were $\Delta x_{max} = \Delta y_{max} = 0.11 \AA, \Delta z_{max} = 0.05 \AA$, the maximum rotation angle being $\Delta \psi_{max} = 11^{\circ}$. This choice of parameters resulted in an acceptance ratio of about 0.18 for the global moves. We note here that the energy change associated with a rigid displacement or a rotation of a whole chain scales linearly with the length of the chain, and therefore for longer chains we reduced the parameters of the global moves appropriately in order to preserve the same acceptance ratio. No attempt to optimize the maximum displacements or the fractions of the different kinds of moves was made in this study.

Concerning the volume moves, we attempted a change of all three scaling factors $s_1,s_2,s_3$ after each sweep over the lattice (MCS) performing local or global moves. For the super-cell of $2 \times 3 \times 6$ unit cells at $T = 100$ K, the maximum changes of the scaling factors used were $\delta s_1 = \delta s_2 = 0.0045, \delta s_3 = 0.0009$, where the different values reflect the anisotropy of the diagonal elastic constants $c_{11},c_{22},c_{33}$. This choice resulted in an acceptance ratio of 0.21 for the volume moves.

We have simulated several different super-cell sizes. The smallest one consisted of $2 \times 3 \times 6$ unit cells, contained 12 C$_{12}$ chains with a total of 432 atoms, and was used for simulation of the system at seven different temperatures ranging from 10 to 300 K.
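The anisotropic volume moves described above can be sketched as follows; this is a simplified $NpT$ move assuming affine rescaling of all coordinates, with the default step sizes taken from the values just quoted:

```python
import math
import random
import numpy as np

def attempt_volume_move(positions, box, s, energy_fn, beta, pressure,
                        ds_max=(0.0045, 0.0045, 0.0009)):
    """One anisotropic NpT volume move: each scaling factor s_i is changed by
    a uniform random amount within +/- ds_max[i], coordinates are rescaled
    affinely, and the move is accepted with the standard NpT Metropolis
    weight exp(-beta*(dE + p*dV) + N*ln(V'/V)).  `energy_fn(pos, box)` is a
    placeholder for the full potential energy."""
    N = len(positions)
    s_new = np.array([si + random.uniform(-d, d) for si, d in zip(s, ds_max)])
    scale = s_new / s
    box_new = box * scale
    pos_new = positions * scale          # affine rescaling of all coordinates
    V, V_new = box.prod(), box_new.prod()
    dE = energy_fn(pos_new, box_new) - energy_fn(positions, box)
    arg = -beta * (dE + pressure * (V_new - V)) + N * math.log(V_new / V)
    if arg >= 0.0 or random.random() < math.exp(arg):
        return pos_new, box_new, s_new, True
    return positions, box, s, False
```

The different maximum steps $\delta s_i$ directly implement the anisotropy of the box fluctuations along the three crystallographic directions.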
To study chain-length-dependent finite-size effects, at $T = 100$ K and $T = 300$ K the simulation was also performed on super-cells consisting of 12 C$_{24}$ and C$_{48}$ chains, containing $2 \times 3 \times 12$ and $2 \times 3 \times 24$ unit cells, respectively, while at $T = 200$ K only the latter system size was studied in addition to the smallest one. At $T = 300$ K, a super-cell of $4 \times 6 \times 12$ unit cells, twice as large as the smallest one in each spatial direction, was also used, to study volume-related finite-size effects. Finally, at the four highest temperatures, $T = 300,350,400,450$ K, a super-cell with the longest, C$_{96}$ chains was used, consisting of $2 \times 3 \times 48$ unit cells and containing 3456 atoms.

As the initial configuration for the lowest-temperature simulation for a given super-cell size we used the corresponding reference structure. In the course of the simulation, we made use of the final configuration of the run at a lower temperature when possible, and always equilibrated the system for at least $2 \times 10^4$ MCS before averaging. Our statistics are based on a run length of $1.4 \times 10^6$ MCS per data point for the smallest, 432-atom system, and a run length decreasing linearly with the number of atoms for the larger systems. We have calculated the specific heat at constant pressure $c_p$ from the enthalpy fluctuations. The pressure tensor was calculated by means of a standard virial expression. In order to check the consistency of the simulation algorithm, we also evaluated the kinetic energy from the corresponding virial expression. During the averaging, the accumulators for the total energy, lattice parameters $a,b,c$ and setting angle $\psi$ were updated after each MCS, while those for the virial and other quantities were updated only every 10 MCS.
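The estimator of $c_p$ from the enthalpy fluctuations can be sketched in a few lines; units follow the choice of $k_B$:

```python
import numpy as np

def heat_capacity_from_enthalpy(H_samples, T, kB=1.0):
    """Constant-pressure heat capacity from enthalpy fluctuations in an NpT
    run: c_p = (<H^2> - <H>^2) / (kB * T^2)."""
    H = np.asarray(H_samples, dtype=float)
    return H.var() / (kB * T * T)   # np.var gives <H^2> - <H>^2
```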
Histograms have been accumulated for the structural quantities, such as the bond lengths and angles, the torsional angles and the setting angle of the chains, as well as the displacement of the center of mass of the chains along the $z$-axis. The whole run was always subdivided into four batches, and the batch subaverages were used to estimate the approximate error bars of the total averages. Concerning the elastic constants $c_{11}, c_{22}, c_{33}, c_{12}, c_{13}, c_{23}$ (in Voigt notation), we have determined them independently in two different ways. Apart from the standard Parrinello-Rahman fluctuation formula \cite{pr}
\begin{equation} c_{ik} = {{k_B T}\over{\langle V \rangle}} \langle e_i e_k \rangle^{-1} \; , i,k = 1,2,3 \; , \label{prff} \end{equation}
we also applied the new fluctuation formula proposed in Ref.~\cite{gzs}, in its approximate version suitable for small strain fluctuations
\begin{equation} c_{ik} = -\sum_n \langle p_i e_n \rangle \langle e_n e_k \rangle^{-1} \; , \label{newff} \end{equation}
where $p_i$ and $e_i$ are the diagonal components of the pressure tensor and the strain tensor, respectively.

\section{Results and discussion}

In this section we describe and discuss the results of the simulation. We start with a comment on the stability of the crystal structure. The initial structure with the characteristic "herringbone" arrangement of the chains was found to be stable at all temperatures and all system sizes except for the smallest system with C$_{12}$ chains, where an occasional rotation of a whole chain was observed at $T = 300 $ K. In the largest system with C$_{96}$ chains, no change of structure was observed up to $T = 450 $ K. Although the latter temperature is higher than the experimentally known melting temperature of the PE crystal (414 K), the use of periodic boundary conditions inhibits the melting and allows the simulation of a superheated crystalline phase.
On the other hand, our arrangement with constrained angles of the super-cell is compatible both with the orthorhombic and with the hexagonal phase, and would not prevent the system from entering the latter, which would occur when the ratio of the lattice parameters ${{a}\over{b}}$ reaches the value $\sqrt{3}$. Our observation thus agrees with the experimentally known stability of the orthorhombic structure up to the melting point. In order to appreciate the amount of disorder present in the system at $T = 400$ K, we show in Fig.1 (a) a projection of the atoms on the $xy$ plane for a typical configuration. The "herringbone" arrangement of the chains is still clearly visible, in spite of a well-pronounced disorder. In Fig.1 (b), a projection of the same configuration on the $xz$ plane is shown.

In Figs.2,3,4 we show the temperature dependence of the lattice parameters $a,b,c$. Extrapolating these curves down to $T=0$, we find the ground-state values $a = 7.06 \AA, b = 4.89 \AA, c = 2.530 \AA$. We note here that these values do not quite agree with those reported in Ref.\cite{sb}, where the values $a = 7.05 \AA, b = 4.94 \AA, c = 2.544 \AA$ have been found. We attribute these discrepancies to our different treatment of the non-bonded interaction as well as to our slight modifications of the bonded interaction. Before discussing the thermal expansion of the lattice parameters, we comment on the finite-size effects observed. A particularly pronounced one is observed in the lattice parameter $b$, where the values for the smallest system with C$_{12}$ chains are slightly larger than those for both systems with longer chains already at $T = 100$ K. The effect becomes stronger with increasing temperature. At $T = 300$ K, where we have data for four different chain lengths, the value of $b$ is clearly seen to increase with decreasing chain length, most dramatically for the system with C$_{12}$ chains.
On the other hand, a considerably weaker finite-size effect is seen in the lattice parameters $a$ and $c$. While for the latter this is perhaps not surprising, because of the large stiffness of the system in the chain direction, the distinct behavior of the $b$ parameter with respect to the $a$ parameter does not appear to be so straightforward to interpret. We believe that its origin lies in the particular character of the chain packing, where the shortest hydrogen-hydrogen contact distance is just that along the crystallographic $b$-direction ($y$-axis) \cite{bookwund}. This results in a stronger coupling of the lateral strain $\epsilon_2$ along the $b$-direction to the longitudinal displacements of whole chains. Since the fluctuations of these displacements are larger for systems with short chains, a particular finite-size effect arises.

Concerning the thermal expansion itself, it is convenient to discuss separately the case of the lateral lattice parameters $a,b$, and that of the axial one, $c$. We start with the lateral ones, and show in Fig.5 also the temperature dependence of the aspect ratio ${{a}\over{b}}$. Two regimes can be clearly distinguished here. For temperatures up to about 250 K, both lattice parameters expand in a roughly linear way with increasing temperature, and the ratio ${{a}\over{b}}$ rises only slightly from its ground-state value of 1.44 (which differs substantially from the value of $\sqrt{3} = 1.73$ corresponding to the hexagonal structure). It is characteristic of this regime that the thermal expansion arising due to lattice anharmonicities can be described within a {\em phonon picture} using a quasi-harmonic or self-consistent quasi-harmonic approximation \cite{rutledge,hagele,stobbe}. For higher temperatures, the picture changes. While the lattice parameter $a$ starts to increase faster, the expansion of the parameter $b$ at the same time slows down until $b$ develops a maximum at $T = 350$ K, beyond which it starts to decrease.
Such behavior of $b$ has already been observed in Ref.\cite{rk}. As a consequence, the aspect ratio ${{a}\over{b}}$ increases strongly. This suggests that the driving force of the change of the lateral lattice parameters in this regime is the approach of a phase transition to a hexagonal phase, in which each chain is surrounded by six chains at equal distance. We have actually continued our simulations to even higher temperatures, and from the limited amount of simulation performed in that region we have found an indication that the hexagonal phase is indeed reached in the range of temperatures 500 -- 550 K (Fig.5). It is, however, clear that in order to obtain reliable results from simulations in this high-temperature range, where the phase transition in a real PE crystal involves large-amplitude displacements and rotations of the whole chains, as well as a considerable population of conformational defects, it would be necessary to introduce several modifications into the simulation algorithm. We shall come back to this point in the final section.

In Fig.6, the temperature dependence of the average setting angle $\langle |\psi| \rangle$ of the chains is shown. It also fits well within the two-regime scenario, being rather flat up to $T = 300$ K and then starting to decrease. This decrease could indicate an approach towards the value $|\psi| = 30^{\circ}$, compatible with the symmetry of the hexagonal phase. It is also interesting to note the pronounced finite-size effect, similar to that observed in the case of $b$. Before discussing the temperature dependence of the axial lattice parameter $c$, it is convenient to plot also the thermal expansion coefficients, defined as $\alpha_i = {a_i}^{-1} da_i/dT, i = 1,2,3$, where $a_1,a_2,a_3$ are the lattice parameters $a,b,c$. These have been obtained by taking finite differences of the lattice parameters and are shown in Figs.7,8,9 as a function of temperature.
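The finite-difference extraction of the expansion coefficients from a tabulated curve $a_i(T)$ can be sketched as:

```python
import numpy as np

def expansion_coefficients(T, a):
    """alpha(T) = (1/a) da/dT from a tabulated lattice-parameter curve a(T),
    using central finite differences (one-sided at the endpoints)."""
    T = np.asarray(T, dtype=float)
    a = np.asarray(a, dtype=float)
    return np.gradient(a, T) / a
```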
As their behavior trivially follows from that of the lattice parameters, discussed for $i = 1,2$ above, we just note here that all three coefficients converge at low temperatures to nonzero finite values, as can be expected in the classical limit. Concerning the behavior of $c$ and $\alpha_3$, the characteristic feature is that $\alpha_3$ is negative in the whole temperature range and an order of magnitude smaller than $\alpha_1$. It is interesting here to compare our result for $\alpha_3$ to that found in Ref. \cite{rutledge} within the quasi-harmonic approximation for a different force field \cite{kdg}. A distinct feature of the latter result is that the classical value of $\alpha_3$ is considerably smaller in magnitude than the quantum mechanical value (and also the experimental one), and approaches zero as $T \to 0$. Since in the classical limit there is no a priori reason for such behavior, it has to be regarded as accidental. According to the argumentation in Ref. \cite{rutledge}, there are contributions of different sign to $\alpha_3$ from different phonon modes, negative from lattice modes (mainly backbone torsions), and positive from the harder ones. In the classical limit, all the modes contribute at all temperatures and happen to just cancel each other as $T \to 0$. In order to estimate the amount of contribution of the torsions in our case, we have made use of the work \cite{hagele}, where a formula is derived for the axial thermal contraction in a simple one-chain model with only torsional degrees of freedom. In the classical limit, the formula predicts $c - c_0 = - {{1}\over{4}} c_0 \sin^2 {{\alpha}\over{2}} \langle \phi_{CCCC}^{2} \rangle$, where $\alpha = \pi - \theta_{CCC}$, and $\phi_{CCCC}$ is the fluctuation of the torsional angle $\varphi_{CCCC}$ from the trans minimum. 
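The classical one-chain estimate quoted above can be written as a small helper function; the name is ours, and angles are in radians:

```python
import math

def axial_contraction(c0, theta_ccc, phi2_mean):
    """One-chain classical estimate of the torsional axial contraction:
    c - c0 = -(1/4) * c0 * sin^2(alpha/2) * <phi^2>, with alpha = pi - theta_CCC
    and <phi^2> the mean-square fluctuation of the C-C-C-C torsional angle
    about the trans minimum (in rad^2)."""
    alpha = math.pi - theta_ccc
    return -0.25 * c0 * math.sin(0.5 * alpha) ** 2 * phi2_mean
```

The predicted contraction is linear in $\langle \phi_{CCCC}^{2} \rangle$, which is the relation tested against the simulation data below.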
In Fig.10 we have plotted $c$ against $\langle \phi_{CCCC}^{2} \rangle$, and found a very good linear dependence over the whole temperature range up to 450 K, a range in which the effective torsional constant, defined as $C_{tors} = {{T}\over{\langle \phi_{CCCC}^2 \rangle}}$, undergoes a considerable softening with increasing temperature (Fig.11). The proportionality constant determined from our plot was, however, about 50 \% larger in magnitude than the above value of ${{1}\over{4}} c_0 \sin^2 {{\alpha}\over{2}}$, valid for the simple model. The linear dependence suggests that with the force field used \cite{sb}, the negative $\alpha_3$ originates almost entirely from the C-C-C-C torsions, which points to a certain intrinsic difference in the anharmonic properties of the force fields \cite{sb} and \cite{kdg}. The pronounced increase of $\alpha_3$ for $T > 300$ K is thus a consequence of the strong softening of the torsional potential due to the large amplitude of the fluctuations.

For a comparison of these results to experimental ones, we have chosen two sets of data. In the work \cite{davis}, the lattice parameters $a,b,c$ have been measured in the temperature range 93 -- 333 K, and the data are smooth enough to allow a direct extraction of the thermal expansion coefficients by means of finite differences. The other chosen set of data \cite{sl} is to our knowledge the only one covering the range from helium temperatures up to $T = 350$ K; however, the scatter of these data is too large for a direct numerical differentiation. We therefore decided to first fit them by a fourth-order polynomial and then evaluate the expansion coefficients analytically. For both sets, the experimental values of the lattice parameters are shown in Figs.2,3,4 and the corresponding thermal expansion coefficients in Figs.7,8,9.
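The fit-then-differentiate procedure applied to the scattered data of Ref.\cite{sl} can be sketched as:

```python
import numpy as np

def alpha_from_polyfit(T, a, deg=4):
    """Fit scattered lattice-parameter data a(T) with a degree-`deg`
    polynomial and evaluate alpha = (1/a) da/dT analytically from the fit."""
    p = np.poly1d(np.polyfit(T, a, deg))
    return p.deriv()(T) / p(T)
```

Because the derivative is taken of the smooth fitted polynomial rather than of the raw data, the scatter no longer enters the differentiation directly, although the result inherits any bias of the chosen polynomial form.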
Concerning the absolute values of the lattice parameters, in the case of $a$ our results agree well with the data of Ref.\cite{davis}, while falling slightly below the data of Ref.\cite{sl}, in particular at the lowest temperatures. In the case of $b$ our results fall slightly above, and in that of $c$ slightly below, both data sets in the whole temperature range. As far as the temperature dependence itself is concerned, it is most conveniently discussed in terms of the thermal expansion coefficients $\alpha_i$. We note first that for temperatures lower than about 150 K, quantum effects become crucial and cannot be neglected, as they are responsible for the vanishing of all expansion coefficients in the limit $T \to 0$. A meaningful comparison of our classical results to experimental data is thus possible only at higher temperatures. For $T > 150$ K, $\alpha_1$ agrees qualitatively well with the data of Ref.\cite{davis}, although the experimental values appear to be slightly smaller. In particular, the pronounced increase of $\alpha_1$ for $T > 250$ K appears to be well reproduced, in contrast to the result obtained in Ref.\cite{rutledge} within the classical quasi-harmonic approximation for the force field \cite{kdg}. The agreement is poorer for the data of Ref.\cite{sl}, which are distinctly smaller and start to increase strongly at somewhat lower temperatures, for $T > 200$ K. In the case of $\alpha_2$, our data agree for $T > 150$ K qualitatively well with the set \cite{sl}, correctly reproducing the gradual decrease with temperature, although our results are somewhat smaller. The data of Ref.\cite{davis} exhibit here a different behavior, being of the same magnitude as those of Ref.\cite{sl}, but markedly flat in the whole range of temperatures. We note here that in the work \cite{swan}, $\alpha_2 < 0$ has been experimentally observed just below the melting point. On the other hand, the classical result for $\alpha_2$ calculated in Ref.\cite{rutledge} exhibits instead an upward curvature.
Concerning $\alpha_3$, our results agree quantitatively well with the data of Ref.\cite{davis} in the whole range of temperatures, although some scatter of the data precludes a more detailed comparison here. In the set \cite{sl}, $\alpha_3$ behaves for $T > 100$ K qualitatively similarly to our results; however, the plateau value is somewhat smaller and the strong increase in magnitude appears at a lower temperature, already between 200 -- 250 K. The origin of some of the observed discrepancies may lie either in the force field itself or in quantum effects, which may be relevant even at temperatures as high as 300 K \cite{rutledge}. Our fit of the data \cite{sl} is, moreover, not unique, and may itself be a source of additional errors in the coefficients $\alpha_i$. Last, but not least, there appears to exist a considerable scatter between the experimental data from different sources. We therefore believe that it would be interesting to re-examine the temperature dependence of the structural parameters (including possibly the setting angle of the chains) of well-crystalline samples of PE, in the whole range from very low temperatures up to the melting point, using up-to-date X-ray or neutron diffraction techniques.

In Fig.12, the average fluctuations $\sqrt{\langle (\delta |\psi|)^2 \rangle}$ of the setting angle of the chains are shown as a function of temperature. Apart from the trivial finite-size effect of the average fluctuation increasing with decreasing chain length, we note that for the system with C$_{12}$ chains the curve exhibits a marked enhancement of the fluctuations at $T = 300$ K. Such an enhancement suggests that the system is approaching a transition into a phase similar to the "rotator" phases of $n$-paraffins, in which the setting angle of the chains jumps among several minima. For the system with C$_{96}$ chains, the same phenomenon occurs only for $T > 400$ K, which in a real PE crystal coincides with the melting point.
In Fig.13, we show a histogram of the setting angle $\psi$ for the system with C$_{96}$ chains at the highest temperature, $T = 450$ K. The distribution is still bimodal, with two peaks centered at $\pm \langle |\psi| \rangle$, as in the ground state, corresponding to the "herringbone" arrangement of the chains. This shows that in our super-cell with periodic boundary conditions in all directions, the superheated orthorhombic structure is stable even at such a high temperature, at least on the MC time scale of our run.

We comment now briefly on some other internal structural parameters. In Figs.14 and 15, we show the average angles $\langle \theta_{CCC} \rangle$ and $\langle \theta_{HCH} \rangle$ as a function of temperature. While the angle $\langle \theta_{CCC} \rangle$ develops a very shallow minimum between 250 and 300 K, the angle $\langle \theta_{HCH} \rangle$ decreases roughly linearly throughout the whole range of temperatures, its overall variation being about a factor of 3 larger than that of the former. Both average bond lengths $\langle r_{CC} \rangle$ and $\langle r_{CH} \rangle$ increase very slightly and almost linearly with temperature, $\langle r_{CC} \rangle$ varying from 1.5357 $\AA$ at 10 K to 1.5398 $\AA$ at 450 K, and $\langle r_{CH} \rangle$ from 1.0898 $\AA$ at 10 K to 1.0924 $\AA$ at 450 K. It is interesting to show also the average torsional fluctuations $\sqrt{\langle \phi_{CCCC}^2 \rangle}$ as a function of temperature, Fig.16, where we see an enhancement for $T > 400$ K, corresponding to the already discussed softening of the effective torsional potential (Fig.11). The typical fluctuation in this region is about $13^{\circ}$. From the histograms of the torsional angles we found that for temperatures up to 400 K, no gauche defects are created in the chains, while at 450 K only an extremely low population starts to arise.
This is clearly a consequence of the periodic boundary conditions used in the chain direction, together with the still relatively short length of the chains used.

We come now to some thermodynamic parameters, and start with the specific heat per unit cell, shown in Fig.17 as a function of temperature. At low temperatures, where the classical crystal is always harmonic, ${{c_p}\over{k_B}}$ reaches the classical equipartition value of 18, corresponding to $3 \times 12 = 36$ degrees of freedom per unit cell. With increasing temperature, as the anharmonicities become important, its value gradually increases, until a finite-size effect starts to appear for $T > 200$ K. While for the system with C$_{12}$ chains $c_p$ starts to grow faster for $T > 200$ K, for the system with C$_{96}$ chains this occurs only at about 300 K. This behavior is likely to be related to the already discussed chain-length-dependent onset of increased fluctuations of the setting angle, or rotations of the chains around their axes.

In Figs.18 -- 22, we show the elastic constants $c_{11},c_{22},c_{12},c_{13}, c_{23}$, determined by means of the Parrinello-Rahman fluctuation formula (\ref{prff}), as a function of temperature. For the case of $c_{33}$, we show in Fig.23 both the values obtained according to the Parrinello-Rahman fluctuation formula (\ref{prff}) and those found from the new formula (\ref{newff}). We see that the values obtained with the latter have a considerably smaller scatter. We have actually tried to use the formula (\ref{newff}) also for the other elastic constants; however, only in the case of $c_{33}$ was a definite improvement of the convergence observed. It would certainly be desirable to improve the accuracy of the elastic constants, in particular in the case of $c_{13},c_{23}$; at present, however, this seems to require a prohibitively large CPU time unless a considerably more efficient algorithm is available.
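Both fluctuation estimates, the Parrinello-Rahman formula (\ref{prff}) and the approximate small-strain formula (\ref{newff}), can be sketched for the diagonal strain and pressure components as follows; the array layout and function name are ours:

```python
import numpy as np

def elastic_constants(strains, pressures, T, V_avg, kB=1.0):
    """Diagonal-block elastic constants from NpT fluctuation data.
    `strains`, `pressures`: arrays of shape (n_samples, 3) holding the
    diagonal strain (e1, e2, e3) and pressure-tensor (p1, p2, p3) components.
    Returns (C_pr, C_new):
      C_pr  = kB*T/<V> * <e e>^{-1}     (Parrinello-Rahman formula)
      C_new = - <p e> <e e>^{-1}        (small-strain alternative formula)."""
    e = strains - strains.mean(axis=0)
    p = pressures - pressures.mean(axis=0)
    ee = e.T @ e / len(e)          # covariance matrix <e_i e_k>
    pe = p.T @ e / len(e)          # cross-covariance <p_i e_n>
    ee_inv = np.linalg.inv(ee)
    return kB * T / V_avg * ee_inv, -pe @ ee_inv
```

The second estimator replaces one power of the (slowly converging) strain covariance by the pressure-strain cross-covariance, which is consistent with the smaller scatter we observe for $c_{33}$.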
At this point we note that some of the error bars shown in the figures for the elastic constants are probably an underestimate of the true ones, as they have been obtained from the variance of the results of just four batches. To discuss the temperature dependence of the elastic constants, it is again convenient to treat separately the lateral ones, $c_{11},c_{22},c_{12}$, and the ones related to the axial strain $\epsilon_{3}$, namely $c_{13},c_{23}, c_{33}$. All three constants $c_{11},c_{22},c_{12}$ are seen to decrease monotonically with temperature, reaching at 400 K, just below the melting point, about 60 \% or even less of their ground-state values. This behavior originates mainly from the thermal expansion of the crystal in the lateral directions and is typical for van der Waals systems, like, e.g., solid argon \cite{loeding}. An interesting finite-size effect is seen in the diagonal elastic constant $c_{22}$, which at $T = 300$ K is distinctly smaller for the system with C$_{12}$ chains than for the other systems. This observation correlates with the effect seen in the lattice parameter $b$, which was in turn found to be larger for the system with the shortest chains.

Concerning the other group of elastic constants, we first note that the diagonal one, $c_{33}$, is two orders of magnitude larger than all the other elastic constants. At low temperatures, $c_{33}$ reaches a limiting value as large as 340 GPa; it decreases monotonically with increasing temperature, dropping at 400 K to about 80 \% of its ground-state value. On the other hand, the two off-diagonal elastic constants, $c_{13}$ and $c_{23}$, are, interestingly, found to increase with temperature, in agreement with the results of Ref.\cite{rutledge}.
Taking into account the negative thermal expansion of the system in the axial direction, the behavior of this group of elastic constants does not appear to be just a trivial consequence of the thermal expansion, as it was in the case of $c_{11},c_{22},c_{12}$. In order to understand this behavior, it is instructive to plot the elastic constant $c_{33}$ vs. the mean-square fluctuation of the torsional angle $\langle \phi_{CCCC}^{2} \rangle$, Fig.24. We find again a roughly linear dependence (compare Fig.10), which suggests that the softening of $c_{33}$ is directly related to the activation of the torsional degrees of freedom. The mechanism underlying the behavior of these elastic constants might then be the following. At $T = 0$, where the chain backbones are perfectly flat in their all-trans states, an axial deformation can be accommodated only by bending the bond angles $\theta_{CCC}$. Angle bending is, however, after bond stretching, the second-stiffest degree of freedom in the system, and it determines the high ground-state value of $c_{33}$. As the torsional fluctuations become activated with increasing temperature, the chains start to "wiggle" and develop transverse fluctuations (see Fig.1 (b)). In such configurations, it becomes possible to accommodate a part of the axial deformation in the torsional modes, which are the softest ones in the bonded interaction, and $c_{33}$ is therefore renormalized to a smaller value. This must, however, be accompanied by an increased response of the transverse fluctuations of the chains, resulting in lateral strain, since, because of the "wiggling", the total length of the chains is very hard to change. An increased lateral strain response to an axial strain means, however, just an increase of the value of the elastic constants $c_{13}$ and $c_{23}$. It would, of course, be desirable to have a quantitative theory supporting this intuitive but, as we believe, quite plausible interpretation.
The reason why the constants $c_{13}$ and $c_{23}$ appear to have the largest error bars among all the elastic constants also seems to be connected with the fact that these two constants express just the coupling between the lateral and axial strains. The computational efficiency of the algorithm in the determination of these two quantities depends crucially on the exchange of energy between the non-bonded and bonded interactions, which is probably still the most difficult point even with the present algorithm, because of the large separation of the relevant energy scales.

Before closing this section, we would like to make a few more remarks on the finite-size effects. Comparing the results at $T = 300$ K for the two systems consisting of C$_{24}$ chains, containing respectively $2 \times 3 \times 12$ and $4 \times 6 \times 12$ unit cells, we see that the values obtained with the two system sizes are practically equal for all quantities. This suggests that the finite-size effects related directly to the volume of the box are relatively small, at least for the two system sizes considered (which are, however, still not very large). On the other hand, a definite chain-length-dependent finite-size effect has been found for several quantities at the chain lengths considered, its magnitude being also distinctly temperature dependent. For practical purposes, it follows that the smallest system used, with C$_{12}$ chains, can be representative of a classical PE crystal only at rather low temperatures, perhaps below 100 K, since at and above this temperature it exhibits pronounced finite-size effects. The systems with C$_{24}$ and C$_{48}$ chains, on the other hand, appear to represent the classical PE crystal reasonably well for temperatures below 300 K, while at this and higher temperatures the use of a system with C$_{96}$ chains or even longer ones is strongly recommended.

\section{Conclusions}

In this paper, we have demonstrated three main points.
First, the MC algorithm using global moves on the chains in addition to the local moves on the atoms is well suited to a classical simulation of crystalline PE. It allows an accurate determination of the structural properties and also yields fairly accurate results for the elastic constants. Second, the force field we have used \cite{sb} is well able to reproduce the experimentally known structure in the whole range of temperatures, and where the classical description is appropriate, the results obtained agree well with the available experimental data. Third, we have studied the finite-size effects, mainly due to the chain length, and determined for different temperatures a minimal chain length necessary for the system to be in the limit of long chains, and thus representative of PE.

All these findings look promising for further studies, and here we suggest some possible directions. Basically, there are two routes to extend the present study, concentrating on the low-temperature and the high-temperature region, respectively. The first one would aim at taking into account the quantum effects, e.g. by means of a path-integral MC technique. Apart from improving the agreement with experiment, mainly at low temperatures, this technique treats the quantum effects at a finite temperature in an essentially exact way and therefore should also allow one to check the range of validity of various approximate treatments, like the quasi-harmonic or self-consistent quasi-harmonic approximations \cite{rutledge,hagele}. This would help to better understand the true importance of quantum effects at different temperatures in this paradigmatic crystalline polymer system. The second route would aim at the high-temperature region of the phase diagram, where the orthorhombic crystal melts under normal pressure, but is known to undergo a phase transition into a hexagonal "condis" phase at elevated pressure \cite{condis}.
In the present simulation arrangement, melting is prevented by the periodic boundary conditions in both lateral directions, but the same boundary conditions applied in the chain direction also inhibit the creation of conformational defects. Nevertheless, there are indications from the present simulation that a similar transition could indeed occur in the temperature region 500 -- 550 K, the main ones being the approach of the aspect ratio $a/b$ towards the hexagonal value of $\sqrt{3}$ and the enhancement of the torsional-angle and setting-angle fluctuations at temperatures above 400 K. In order to study this high-temperature regime properly, several modifications of the present algorithm would be necessary. The main one would be lifting the periodic boundary conditions in the chain direction, thus introducing free chain ends. A larger cutoff for the non-bonded interactions would be necessary in order to treat correctly the large-amplitude displacements and rotations of the chains involved in the transition, and a non-spherical cutoff including a certain number of atoms from one or two neighboring shells of chains around a given atom might be a preferable solution. It would also be necessary to modify the torsional potential so that it yields a correct value for the energy difference between the trans and gauche states. In order to sample configurations with a considerable population of conformational defects, it might also be useful to introduce other kinds of MC moves acting on the torsional degrees of freedom. Finally, a full MC version of the Parrinello-Rahman variable-cell-shape technique should be introduced in order to allow also for shear fluctuations. These might in principle be substantially involved in the transition itself, although the hexagonal phase can be reached from the orthorhombic one without creating a static shear strain. 
Before closing, we also emphasize that techniques similar to those applied here should be useful for a wide variety of other macromolecular crystals. \acknowledgements We would like to acknowledge stimulating discussions with A. A. Gusev, P. C. H\"{a}gele, K. Kremer, R. J. Meier, A. Milchev, F. M\"{u}ller-Plathe, M. M\"{u}ser, P. Nielaba, G. C. Rutledge, G. Smith, U. W. Suter, E. Tosatti, M. M. Zehnder, as well as correspondence with R. H. Boyd and R. A. Stobbe.
\section{Introduction}\label{Introduction} Consider the Laplace-Beltrami operator \begin{eqnarray}\label{renxing_eqn} \Delta_{\alpha}=\frac{\alpha}{2}\sum_{i=1}^my_i^2\frac{\partial^2}{\partial y_i^2} + \sum_{1\leq i\ne j\leq m}\frac{1}{y_i-y_j}\cdot y_i^2\frac{\partial}{\partial y_i} \end{eqnarray} defined on the set of symmetric and homogeneous polynomials $u(y_1, \cdots, y_m)$ of all degrees. There are two important quantities associated with the operator: its eigenfunctions and eigenvalues. The eigenfunctions are the $\alpha$-Jack polynomials and the eigenvalues are given by \begin{eqnarray}\label{Jingle} \lambda_{\kappa}=n(m-1)+a(\kappa')\alpha-a(\kappa) \end{eqnarray} where $\kappa=(k_1, k_2, \cdots, k_m)$ with $k_m>0$ is a partition of the integer $n$, that is, $\sum_{i=1}^m k_i=n$ and $k_1\geq \cdots \geq k_m$; here $\kappa'$ is the transpose of $\kappa$ and \begin{eqnarray}\label{kernel_sea} a(\kappa)=\sum_{i=1}^m (i-1) k_i=\sum_{i \ge 1} \binom{k_i'}{2}; \end{eqnarray} see, for example, Theorem 3.1 from Stanley (1989) or p. 320 and p. 327 from Macdonald (1998). The Jack polynomials are multivariate orthogonal polynomials (Macdonald, 1998). They include three important special cases: the zonal polynomials with $\alpha=2$, which appear frequently in multivariate statistical analysis (e.g., Muirhead, 1982); the Schur polynomials with $\alpha=1$; and the zonal spherical functions with $\alpha=\frac{1}{2}$, which have rich applications in group representation theory, algebraic combinatorics, statistics and random matrix theory [e.g., Macdonald (1998), Fulton and Harris (1999), Forrester (2010)]. In this paper we consider the statistical behavior of the eigenvalues $\lambda_{\kappa}$ given in (\ref{Jingle}). That is, what does $\lambda_{\kappa}$ look like if $\kappa$ is picked at random? For example, what are the sample mean and the sample variance of the $\lambda_{\kappa}$'s, respectively? 
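To make the formula (\ref{Jingle}) concrete, the eigenvalue $\lambda_\kappa$ and the quantity $a(\kappa)$ in (\ref{kernel_sea}) can be evaluated directly from a partition. The following Python sketch is our own illustration (the function names are not from the paper); a partition is represented as a weakly decreasing list of positive parts.

```python
def conjugate(kappa):
    """Transpose kappa' of a partition kappa = [k_1 >= k_2 >= ... > 0]."""
    if not kappa:
        return []
    return [sum(1 for k in kappa if k >= j) for j in range(1, kappa[0] + 1)]

def a_stat(kappa):
    """a(kappa) = sum_i (i-1) k_i  (equivalently sum_i binom(k'_i, 2))."""
    return sum(i * k for i, k in enumerate(kappa))

def eigenvalue(kappa, alpha):
    """lambda_kappa = n(m-1) + a(kappa') * alpha - a(kappa), n = |kappa|, m = len(kappa)."""
    n, m = sum(kappa), len(kappa)
    return n * (m - 1) + alpha * a_stat(conjugate(kappa)) - a_stat(kappa)
```

For instance, in the Schur case $\alpha=1$ the partition $\kappa=(2,1)$ of $n=3$ has $\kappa'=(2,1)$, $a(\kappa)=a(\kappa')=1$ and hence $\lambda_\kappa=3$.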
In fact, even though the expression for $\lambda_{\kappa}$ is explicit, it is non-trivial to answer these questions. In particular, it is hard to analyze them by software because the size of $\{\kappa;\, \kappa\ \mbox{is a partition of}\ n \}$ is of order $\frac{1}{n}e^{C\sqrt{n}}$ for some constant $C$; see (\ref{Raman}). The same question has been asked for the eigenvalues of random matrices and for the eigenvalues of Laplace operators defined on compact Riemannian manifolds. For instance, the typical behavior of the eigenvalues of a large Wigner matrix is described by the Wigner semi-circle law (Wigner, 1958), and that of a Wishart matrix by the Marchenko-Pastur law (Marchenko and Pastur, 1967). The Weyl law describes the eigenvalues of a Laplace-Beltrami operator acting on functions satisfying the Dirichlet condition, that is, vanishing at the boundary of a bounded domain in Euclidean space (Weyl, 1911). See details at (1) of Section \ref{Concluding_Remarks}. To study a typical property of $\lambda_{\kappa}$ in (\ref{Jingle}), how do we pick a partition at random? We will sample $\kappa$ by using four popular probability measures: the restricted uniform measure, the restricted Jack measure, the uniform measure and the Plancherel measure. For the fixed operator $\Delta_{\alpha}$ with $m$ variables, the two restricted measures are adopted to investigate $\lambda_{\kappa}$ as $n$ becomes large. Consider the infinite version of the operator $\Delta_{\alpha}$: \begin{eqnarray}\label{renxing_eqn2} \Delta_{\alpha,\infty}:=\frac{\alpha}{2}\sum_{i=1}^{\infty}y_i^2\frac{\partial^2}{\partial y_i^2} + \sum_{1\leq i\ne j< \infty}\frac{1}{y_i-y_j}\cdot y_i^2\frac{\partial}{\partial y_i}, \end{eqnarray} which acts on the set of symmetric and homogeneous polynomials $u(y_1, \cdots, y_m)$ of all degrees with arbitrary $m\geq 0$; see, for example, page 327 from Macdonald (1998). Recall (\ref{Jingle}). 
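The growth rate quoted above can be checked numerically: the number $p(n)$ of partitions of $n$ satisfies Euler's pentagonal-number recurrence, and the classical Hardy--Ramanujan asymptotic $p(n)\sim \frac{1}{4n\sqrt{3}}e^{\pi\sqrt{2n/3}}$ matches the order $\frac{1}{n}e^{C\sqrt{n}}$ with $C=\pi\sqrt{2/3}$. A small Python sketch (our own illustration, not code from the paper):

```python
import math

def partition_counts(N):
    """p(0), ..., p(N) via Euler's pentagonal-number recurrence."""
    p = [1] + [0] * N
    for n in range(1, N + 1):
        total, k = 0, 1
        while k * (3 * k - 1) // 2 <= n:
            g1 = k * (3 * k - 1) // 2   # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            sign = 1 if k % 2 == 1 else -1
            total += sign * p[n - g1]
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
        p[n] = total
    return p

p = partition_counts(100)
# Hardy-Ramanujan estimate of p(100)
hardy_ramanujan = math.exp(math.pi * math.sqrt(2 * 100 / 3)) / (4 * 100 * math.sqrt(3))
```

Already at $n=100$, where $p(100)=190{,}569{,}292$, the asymptotic formula is within about five percent, while exhaustive enumeration of all partitions is clearly out of reach for large $n$.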
At ``level'' $n$, the set of eigenvalues of $\Delta_{\alpha,\infty}$ is $\{\lambda_{\kappa}; \kappa \in \mathcal{P}_n\}$. In this situation the partition length $m$ depends on $n$; this is the reason that we employ the uniform measure and the Plancherel measure. Under the four measures, we prove in this paper that the limiting distribution of the random variable $\lambda_{\kappa}$ is, respectively, a new distribution $\mu$, the Gamma distribution, the Gumbel distribution and the Tracy-Widom distribution. The distribution $\mu$ is characterized by a function of independent random variables. In the following we will present these results in this order. We will see that, in addition to a tool on random partitions developed in this paper (Theorem \ref{finite_theorem}), a substantial body of work in this direction has been used: the approximation result on random partitions under the uniform measure by Pittel (1997); the largest part of a random partition asymptotically following the Tracy-Widom law by Baik {\it et al}. (1999), Borodin {\it et al}. (2000), Okounkov (2000) and Johansson (2001); Kerov's central limit theorem (Ivanov and Olshanski, 2001); the Stein method on random partitions by Fulman (2004); the limit law of random partitions under the restricted Jack measure by Matsumoto (2008). A consequence of our theory, stated in (\ref{mean_variance}), answers the question about the size of the sample mean and sample variance of the $\lambda_{\kappa}$'s raised above. The organization of the paper is as follows. We present our limit laws under the four measures in Sections \ref{sec:restricted-uniform}, \ref{sec:restricted-Jack}, \ref{sec:uniform} and \ref{sec:Plancherel}, respectively. Four figures corresponding to the four theorems are provided to show that the curves based on data and the limiting curves match very well. In Section \ref{sec:New_Random}, we state a new result on random partitions. 
In Section \ref{Concluding_Remarks}, we make some comments, draw connections to other problems, and discuss future work, potential applications and a conjecture. In Section \ref{main:proofs}, we prove all of the results. In Section \ref{appendix:last} (Appendix), we compute the sample mean and sample variance of $\lambda_{\kappa}$ mentioned in \eqref{mean_variance}, calculate a non-trivial integral used earlier and derive the density function in Theorem \ref{cancel_temple} for two cases. {\bf Notation:} $f(n) \sim g(n)$ if $\lim_{n \to \infty} f(n)/g(n) =1$. We write ``cdf'' for ``cumulative distribution function'' and ``pdf'' for ``probability density function''. We use $\kappa \vdash n$ if $\kappa$ is a partition of $n$. The notation $[x]$ stands for the largest integer less than or equal to $x$. {\bf Graphs:} The convergence in Theorems \ref{cancel_temple}, \ref{Gamma_surprise}, \ref{vase_flower} and \ref{difficult_easy} is illustrated in Figures \ref{fig:ru}-\ref{fig:plancherel}: we compare the empirical pdfs, also called histograms in the statistics literature, with their limiting pdfs in the left columns. The right columns compare the empirical cdfs with their limiting cdfs. These graphs suggest that the empirical curves and their limits match very well. \subsection{Limit under restricted uniform distribution}\label{sec:restricted-uniform} Let $\mathcal{P}_n$ denote the set of all partitions of $n$. Now we consider subsets of $\mathcal{P}_n$. Let $\mathcal{P}_n(m)$ and $\mathcal{P}_n'(m)$ be the sets of partitions of $n$ with lengths at most $m$ and with lengths exactly equal to $m$, respectively. Our limiting laws of $\lambda_\kappa$ under the two measures are derived as follows. A simulation is shown in Figure \ref{fig:ru}. \begin{theorem}\label{cancel_temple} Let $\kappa \vdash n$ and $\lambda_{\kappa}$ be as in (\ref{Jingle}) with $\alpha >0$. Let $m\geq 2$, and let $\{\xi_i;\, 1\leq i \leq m\}$ be i.i.d. 
random variables with density $e^{-x}I(x\geq 0)$, and let $\mu$ be the measure induced by $\frac{\alpha}{2}\cdot \frac{\xi_1^2+\cdots + \xi_m^2}{(\xi_1+\cdots + \xi_m)^2}$. Then, under the uniform measure on $\mathcal{P}_n(m)$ or $\mathcal{P}_n'(m)$, $\frac{\lambda_{\kappa}}{n^2}\to \mu$ weakly as $n\to\infty$. \end{theorem} By the definition of $\mathcal{P}_n'(m)$, the above theorem gives the typical behavior of the eigenvalues of the Laplace-Beltrami operator for fixed $m$. We will prove this theorem in Section \ref{sec:proof:restricted-uniform}. In Section \ref{appendix:integral}, we compute the pdf $f(t)$ of $\frac{\xi_1^2+\cdots + \xi_m^2}{(\xi_1+\cdots + \xi_m)^2}$, which differs from $\mu$ by a scalar, for $m=2, 3$. It shows that $f(t)= \frac{1}{\sqrt{2t-1}}I_{[\frac{1}{2}, 1]}(t)$ for $m=2$; for $m=3$, the support of $f$ is $[\frac{1}{3}, 1]$ and \begin{eqnarray*} f(t)= \begin{cases} \frac{2}{\sqrt{3}} \pi, & \text{if } \frac{1}{3} \le t < \frac{1}{2}; \\ \frac{2}{\sqrt{3}} \big( \pi - 3\arccos\frac{1}{\sqrt{6t-2}} \big), & \text{if } \frac{1}{2} \le t \le 1.\\ \end{cases} \end{eqnarray*} From our computation, it seems not easy to derive an explicit formula for the density function for $m\geq 4$. It would be interesting to explore this. The proof of Theorem \ref{cancel_temple} relies on a new result on random partitions from $\mathcal{P}_n(m)$ and $\mathcal{P}_n'(m)$ with the uniform distributions, which is of independent interest. We postpone it until Section \ref{sec:New_Random}. Given numbers $x_1, \cdots, x_r$, the average and dispersion/fluctuation of the data are usually measured by the sample mean $\bar{x}$ and the sample variance $s^2$, respectively, where \begin{eqnarray}\label{pro_land} \bar{x}= \frac{1}{r}\sum_{i=1}^rx_i\ \ \mbox{and}\ \ s^2=\frac{1}{r-1}\sum_{i=1}^r(x_i-\bar{x})^2. \end{eqnarray} Replacing the $x_i$'s by the $\lambda_{\kappa}$'s in (\ref{Jingle}) for all $\kappa\in \mathcal{P}_n'(m)$, we have $r=|\mathcal{P}_n'(m)|$. 
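The representation of $\mu$ through i.i.d. exponential random variables makes Theorem \ref{cancel_temple} easy to check by direct simulation. The sketch below is our own illustration (not code from the paper): it samples $\frac{\alpha}{2}\cdot\frac{\xi_1^2+\cdots+\xi_m^2}{(\xi_1+\cdots+\xi_m)^2}$ for $m=2$, $\alpha=2$, and compares the empirical cdf with $F(t)=\sqrt{2t-1}$ on $[\frac12,1]$, obtained by integrating the density $f$ above; the mean $\alpha/(m+1)$ serves as a second check.

```python
import math
import random

random.seed(12345)

def sample_mu(m, alpha):
    """One draw from mu: (alpha/2) * (xi_1^2+...+xi_m^2) / (xi_1+...+xi_m)^2."""
    xi = [random.expovariate(1.0) for _ in range(m)]
    s = sum(xi)
    return (alpha / 2.0) * sum(x * x for x in xi) / (s * s)

m, alpha, N = 2, 2.0, 200_000
draws = [sample_mu(m, alpha) for _ in range(N)]

mean = sum(draws) / N                           # close to alpha/(m+1) = 2/3
ecdf_075 = sum(t <= 0.75 for t in draws) / N    # close to sqrt(2*0.75 - 1)
```

With $\alpha=2$ the factor $\alpha/2$ equals one, so every draw lies in $[\frac12,1]$ and the empirical cdf at $t=0.75$ should be near $\sqrt{0.5}\approx 0.707$.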
By Theorem \ref{cancel_temple} and the bounded convergence theorem, we have \begin{eqnarray}\label{mean_variance} \frac{\bar{x}}{n^2}\to\frac{\alpha}{m+1}\ \ \mbox{and} \ \ \frac{s^2}{n^4}\to \frac{(m-1)\alpha^2}{(m+1)^2(m+2)(m+3)} \end{eqnarray} as $n\to\infty$. The proof is given in Section \ref{appendix:mean_variance}. The moments $(1/r)\sum_{i=1}^rx_i^j$ with the $x_i$'s replaced by the $\lambda_{\kappa}$'s can be analyzed similarly for $j\geq 3.$ \medskip \noindent\textbf{Comments}. By a standard characterization of spacings of i.i.d. random variables with the uniform distribution on $[0, 1]$ through exponential random variables [see, e.g., Sec 2.5.3 from Rubinstein and Kroese (2007) and Chapter 5 from Devroye (1986)], the limiting distribution $\mu$ in Theorem \ref{cancel_temple} is identical to the distribution of either of the following: \noindent (i) $\frac{\alpha}{2}\cdot \sum_{i=1}^m y_i^2$, where $y:=(y_1,\ldots,y_m)$ is uniformly distributed on $\{y \in [0,1]^{m}; \sum_{i=1}^{m}y_i = 1 \}$. \noindent (ii) $\frac{\alpha}{2}\cdot \sum_{i=1}^m (U_{(i)}-U_{(i-1)})^2$ where $U_{(1)} \le \ldots \le U_{(m-1)}$ are the order statistics of i.i.d. random variables $\{U_i;\, 1\leq i \leq m-1\}$ with uniform distribution on $[0,1]$ and $U_{(0)}=0$, $U_{(m)}=1$. \begin{figure}[!Ht] \begin{center} \includegraphics[width=10cm]{restricteduniform_a=2_} \caption{The histogram/empirical cdf of $\lambda_\kappa/n^2$ for $\alpha=m=2$ is compared with the pdf/cdf of $\mu$ in Theorem \ref{cancel_temple} at $n=2000$. One thousand points are sampled independently according to $\mu$. } \label{fig:ru} \end{center} \end{figure} \subsection{Limit under restricted Jack distribution}\label{sec:restricted-Jack} \noindent The Jack measure with parameter $\alpha$ chooses a partition $\kappa \in \mathcal{P}_n$ with probability \begin{equation}\label{eq:jack} P(\kappa) = \frac{\alpha^n n! 
}{c_{\kappa}(\alpha) c'_{\kappa} (\alpha)}, \end{equation} where $$c_{\kappa}(\alpha) = \prod_{(i,j) \in \kappa} (\alpha (\kappa_i - j) + (\kappa'_j -i) + 1) \quad \text{and} \quad c'_{\kappa}(\alpha) = \prod_{(i,j) \in \kappa} (\alpha (\kappa_i - j) + (\kappa'_j -i) + \alpha).$$ The Jack measure appears naturally in the Atiyah-Bott formula from algebraic geometry; see an elaboration in the notes by Okounkov (2013). In this section, we consider the restricted Jack measure studied by Matsumoto (2008). Let $m$ be a fixed positive integer. Recall that $\mathcal{P}_n(m)$ is the set of integer partitions of $n$ with at most $m$ parts. The induced restricted Jack distribution with parameter $\alpha$ on $\mathcal{P}_n(m)$ is defined by [we follow the notation of Matsumoto (2008)] \begin{equation}\label{eq:restrictedjack} P_{n,m}^{\alpha}(\kappa) = \frac{1}{C_{n,m}(\alpha)} \frac{1}{c_{\kappa}(\alpha) c'_{\kappa}(\alpha)}, \quad \kappa \in \mathcal{P}_n(m), \end{equation} with the normalizing constant $$C_{n,m}(\alpha) = \sum_{\mu \in \mathcal{P}_n(m)} \frac{1}{c_{\mu}(\alpha) c'_{\mu}(\alpha)}.$$ \begin{figure}[!Ht] \begin{center} \includegraphics[width=10cm]{restrictedjack_a=1,2_} \caption{The top row compares the histogram/empirical cdf of $(\lambda_{\kappa}-a_n)/b_n$ in Theorem \ref{Gamma_surprise} for $m=2,\, \alpha=1$ with the Gamma pdf/cdf at $n=1000$. The quantity ``$(\lambda_{\kappa}-a_n)/b_n$'' is sampled independently $800$ times. A similar interpretation applies to the bottom row for $m=\alpha=2.$} \label{fig:rj} \end{center} \end{figure} Similarly, replacing $\mathcal{P}_n(m)$ above with ``$\mathcal{P}_n'(m)$'', we get the restricted Jack measure on $\mathcal{P}_n'(m)$, which we denote by $Q_{n,m}^{\alpha}$. The following is our result under the two measures. \begin{theorem}\label{Gamma_surprise} Let $\kappa \vdash n$ and $\lambda_{\kappa}$ be as in (\ref{Jingle}) with parameter $\alpha >0$. 
For given $m\geq 2$, if $\kappa$ is chosen according to $P_{n,m}^{\alpha}$ or $Q_{n,m}^{\alpha}$, then $$\frac{\lambda_\kappa-a_n}{b_n} \to \text{Gamma distribution with pdf } h(x)=\frac{1}{\Gamma(v)\, (2/\beta)^{v}}x^{v-1}e^{-\beta x/2} \text{ for } x\geq 0$$ weakly as $n\to \infty$, where $\beta=2/\alpha$ and \begin{eqnarray*} a_n=\frac{m-\alpha-1}{2}n+\frac{\alpha}{2m}n^2,\quad b_n=\frac{n}{2m},\quad v=\frac{1}{4}(m-1)\cdot(m\beta + 2). \end{eqnarray*} \end{theorem} By the definition of $\mathcal{P}_n'(m)$, the above theorem gives the typical behavior of the eigenvalues of the Laplace-Beltrami operator for fixed $m$ under the restricted Jack measure. Write $v=\frac{1}{2}\cdot\frac{1}{2}(m-1)(m\beta+2)$. Then the limiting distribution becomes a $\chi^2$ distribution with (integer) degrees of freedom $\frac{1}{2}(m-1)(m\beta+2)$ for $\beta=1,2$ or $4$. See Figure \ref{fig:rj} for a numerical simulation. We will prove Theorem \ref{Gamma_surprise} in Section \ref{sec:proof:restricted-Jack}. \subsection{Limit under uniform distribution}\label{sec:uniform} Let $\mathcal{P}_n$ denote the set of all partitions of $n$ and $p(n)$ the number of such partitions. Recall the operator $\Delta_{\alpha, \infty}$ in (\ref{renxing_eqn2}) and the eigenvalues in (\ref{Jingle}). At ``level'' $n$, the set of eigenvalues is $\{\lambda_{\kappa}; \kappa \in \mathcal{P}_n\}$. Now we choose $\kappa$ according to the uniform distribution on $\mathcal{P}_n$. The limiting distribution of $\lambda_{\kappa}$ is given below. Let $\zeta(x)$ denote the Riemann zeta function. \begin{theorem}\label{vase_flower} Let $\kappa \vdash n$ and $\lambda_{\kappa}$ be as in (\ref{Jingle}) with parameter $\alpha >0$. If $\kappa$ is chosen uniformly from the set $\mathcal{P}_n$, then $cn^{-3/2}\lambda_{\kappa} -\log \frac{\sqrt{n}}{c}$ converges weakly, as $n \to \infty$, to the Gumbel distribution with cdf $$G(x)=\exp\big(-e^{-(x+K)}\big),$$ where $c=\frac{\pi}{\sqrt{6}}$ and $K=\frac{6\zeta(3)}{\pi^2}(1-\alpha)$. 
\end{theorem} In Figure \ref{fig:uniform}, we simulate the distribution of $\lambda_\kappa$ at $n=4000$ and compare it with the Gumbel distribution $G(x)$ in Theorem \ref{vase_flower}. The proof will be given in Section \ref{proof_vase_flower}. \begin{figure}[!Ht] \begin{center} \includegraphics[width=10cm]{uniform_a=1,2_} \caption{The top row compares the histogram/empirical cdf of ``$cn^{-3/2}\lambda_{\kappa} -\log \frac{\sqrt{n}}{c}$'' for $\alpha=1$ with the pdf $G'(x)$/cdf $G(x)$ in Theorem \ref{vase_flower} at $n=4000$. The quantity ``$cn^{-3/2}\lambda_{\kappa} -\log \frac{\sqrt{n}}{c}$'' is sampled independently $1000$ times. A similar interpretation applies to the bottom row for $\alpha=2.$} \label{fig:uniform} \end{center} \end{figure} \subsection{Limit under Plancherel distribution}\label{sec:Plancherel} A random partition $\kappa$ of $n$ follows the Plancherel measure if it is chosen from $\mathcal{P}_n$ with probability \begin{equation}\label{eq:plan} P(\kappa) = \frac{\dim(\kappa)^2}{n!}, \end{equation} where $\dim(\kappa)$ is the dimension of the irreducible representation of the symmetric group $\mathcal{S}_n$ associated with $\kappa$. It is given by the hook length formula $$\dim(\kappa) = \frac{n!}{\prod_{(i,j) \in \kappa} (k_i - j +k'_j -i + 1)};$$ see, e.g., Frame {\it et al}. (1954). This measure is the special case of the $\alpha$-Jack measure defined in (\ref{eq:jack}) with $\alpha=1$. The Tracy-Widom distribution is defined by \begin{eqnarray}\label{tai} F_2(s)=\exp\left(-\int_{s}^{\infty}(x-s)q(x)^2\,dx\right),\ s\in \mathbb{R}, \end{eqnarray} where $q(x)$ is the solution of the Painlev\'e II differential equation \begin{eqnarray*} & & q''(x)=xq(x)+2q(x)^3\ \ \mbox{with boundary condition}\\ & & q(x) \sim \mbox{Ai}(x)\ \mbox{as}\ x\to +\infty \end{eqnarray*} and $\mbox{Ai}(x)$ denotes the Airy function. Replacing the uniform measure in Theorem \ref{vase_flower} with the Plancherel measure, we get the following result. 
\begin{theorem}\label{difficult_easy} Let $\kappa \vdash n$ and $\lambda_{\kappa}$ be as in (\ref{Jingle}) with parameter $\alpha=1$. If $\kappa$ follows the Plancherel measure, then \begin{eqnarray*} \frac{\lambda_{\kappa} - 2 \cdot n^{3/2}}{n^{7/6} } \to F_2 \end{eqnarray*} weakly as $n\to\infty$, where $F_2$ is as in (\ref{tai}). \end{theorem} \begin{figure}[!Ht] \begin{center} \includegraphics[width=10cm]{plancherel_a=1_} \caption{The histogram/empirical cdf of $T:=(\lambda_{\kappa} - 2 \cdot n^{3/2})n^{-7/6}$ for $\alpha=1$ is compared with the pdf/cdf of $F_2$ in Theorem \ref{difficult_easy} at $n=5000$. The value of $T$ is sampled independently $800$ times.} \label{fig:plancherel} \end{center} \end{figure} The proof of this theorem will be presented in Section \ref{Proof_difficult_easy}. In Figure \ref{fig:plancherel}, we simulate the limiting distribution of $\lambda_\kappa$ with $\alpha=1$ and compare it with $F_2$. For $\alpha\ne 1$, we prove the following weaker result. \begin{theorem}\label{thm:LLLplan} Let $\kappa \vdash n$ and $\lambda_{\kappa}$ be as in (\ref{Jingle}) with parameter $\alpha>0$. If $\kappa$ follows the Plancherel measure, then for any sequence of real numbers $\{a_n > 0\}$ with $\lim_{n\to \infty}a_n = \infty$, $$\frac{\lambda_{\kappa} -\left(2+ \frac{128}{27\pi^2} (\alpha-1) \right) n^{3/2} }{ n^{5/4} \cdot a_n } \to 0$$ in probability as $n\to\infty$. \end{theorem} The proof of Theorem \ref{thm:LLLplan} will be given in Section \ref{non_stop_thousand}. We also provide a conjecture on the limiting distribution of $\lambda_{\kappa}$ for arbitrary $\alpha>0$ under the Plancherel measure. \begin{conjecture} Let $\kappa \vdash n$ and $\lambda_{\kappa}$ be as in (\ref{Jingle}). If $\kappa$ follows the Plancherel measure, then \begin{eqnarray*} \frac{\lambda_{\kappa} - \left(2 + \frac{128}{27\pi^2}(\alpha-1) \right)\cdot n^{3/2}}{n^{7/6} } \to (3-2\alpha)F_2 \end{eqnarray*} weakly as $n\to\infty$, where $F_2$ is as in (\ref{tai}). 
\end{conjecture} The quantities ``$3-2\alpha$'' and ``$n^{7/6}$'' can be seen from the proofs of Theorems \ref{difficult_easy} and \ref{thm:LLLplan}. The conjecture would be confirmed by a stronger version of Kerov's central limit theorem [Theorem 5.5 of Ivanov and Olshanski (2001)]: namely, that the central limit theorem still holds if the Chebyshev polynomials are replaced by smooth functions. \subsection{A new result on random partitions}\label{sec:New_Random} While proving Theorem \ref{cancel_temple}, we found the following result on restricted random partitions, which is also interesting in its own right. \begin{theorem}\label{finite_theorem} Let $m\geq 2$ be given. Let $\mathcal{P}_n(m)$ and $\mathcal{P}_n'(m)$ be as in Theorem \ref{cancel_temple}. Let $(k_1, \cdots, k_m)\vdash n$ follow the uniform distribution on $\mathcal{P}_n(m)$ or $\mathcal{P}_n'(m)$. Then, as $n\to\infty$, $\frac{1}{n}(k_1, \cdots, k_m)$ converges weakly to the uniform distribution on the ordered simplex \begin{eqnarray}\label{unifDelta} \Delta:=\Big\{(x_1, \cdots, x_{m})\in [0,1]^{m};\, x_1>\cdots >x_m\ \mbox{and}\ \sum_{i=1}^{m} x_i=1\Big\}. 
\end{eqnarray} \end{theorem} It is known from (\ref{moon_apple}) that the volume of $\Delta$ is $\frac{\sqrt{m}}{m!(m-1)!}$, so the density function of the uniform distribution on $\Delta$ is equal to $\frac{m!(m-1)!}{\sqrt{m}}.$ If one picks a random partition $\kappa=(k_1, k_2,\cdots)\vdash n$ under the uniform measure on $\mathcal{P}_n$, puts the Young diagram of $\kappa$ in the first quadrant, and shrinks its boundary curve by a factor of $n^{-1/2}$, then the rescaled random curve converges to the curve $e^{-cx} + e^{-cy}=1$ for $x, y>0$, where $c=\pi/\sqrt{6}$; this is proved by Vershik (1996). For the Plancherel measure, Logan and Shepp (1977) and Vershik and Kerov (1977) prove that, for a rotated and shrunk Young diagram of $\kappa$, the boundary curve (see the ``zig-zag'' curve in Figure \ref{fig:kerov}) converges to $\Omega(x)$, where \begin{eqnarray}\label{jilin} \Omega(x)=\begin{cases} \frac{2}{\pi}(x\arcsin\frac{x}{2} + \sqrt{4-x^2}),& \text{$|x|\leq 2$};\\ |x|, & \text{$|x|>2$}. \end{cases} \end{eqnarray} A different law is seen in Theorem \ref{finite_theorem}. We will prove this result in Section \ref{proof_finite_theorem}. \subsection{Concluding remarks}\label{Concluding_Remarks} In this paper we investigate the asymptotic behavior of the eigenvalues $\lambda_{\kappa}$ in (\ref{Jingle}). Under the restricted uniform measure, the restricted Jack measure, the uniform measure and the Plancherel measure, we prove that the distribution of $\lambda_{\kappa}$ converges, respectively, to a new distribution $\mu$, the Gamma distribution, the Gumbel distribution and the Tracy-Widom distribution. The distribution $\mu$ is the push-forward of $\frac{\alpha}{2}\cdot \frac{\xi_1^2+\cdots + \xi_m^2}{(\xi_1+\cdots + \xi_m)^2}$ where the $\xi_i$'s are i.i.d. random variables with density $e^{-x}I(x\geq 0).$ In the following we comment on some connections, further work and potential applications. A conjecture is also stated. (1). 
Properties of the eigenvalues of the Laplace-Beltrami operator on a compact Riemannian manifold $M$ were discovered by Weyl (1911). For example, the Weyl asymptotic formula says that the eigenvalue counting function $N(\lambda):=\#\{k;\, \lambda_k\leq \lambda\}$ satisfies $\frac{N(\lambda)}{\lambda^{d/2}} \to (4\pi)^{-d/2}\frac{vol(M)}{\Gamma(\frac{d}{2}+1)}$ as $\lambda\to\infty$, where $d$ is the dimension of $M$ and $vol(M)$ is the volume of $M$. It is proved by analyzing the trace of a heat kernel; see, e.g., p. 13 from Borthwick (2012). Let $\Delta_S$ be the spherical Laplacian on the unit sphere in $\mathbb{R}^{n+1}.$ It is known that the eigenvalues of $-\Delta_S$ are $k(k+n-1)$ for $k=0,1,2,\cdots$ with multiplicity $\binom{n+k}{n}- \binom{n+k-2}{n}$; see, e.g., Chapter 2 from Shubin (2001). Some other types of Laplace-Beltrami operators appear in the Riemannian symmetric spaces; see, e.g., M\'{e}liot (2014). Their eigenvalues are also expressed in terms of partitions of integers and can be analyzed in the same spirit as in this paper. (2). In Theorem \ref{difficult_easy}, we derive the limiting distribution of the eigenvalues under the Plancherel measure. One can also consider the same quantity under the $\alpha$-Jack measure in (\ref{eq:jack}), a generalization of the Plancherel measure. However, under this measure, the limiting distribution of the largest part of a random partition is not known; there is only a conjecture made by Dolega and F\'{e}ray (2014). In view of this and our proof of Theorem \ref{difficult_easy}, we state a conjecture on the $\lambda_{\kappa}$ studied in this paper. Let $\kappa \vdash n$ and $\lambda_{\kappa}$ be as in (\ref{Jingle}) with parameter $\alpha>0$. If $\kappa$ follows the $\alpha$-Jack measure [the ``$\alpha$'' here is the same as that in (\ref{Jingle})], then \begin{eqnarray*} \frac{\lambda_{\kappa} - 2\alpha^{-1/2}n^{3/2} }{n^{7/6} } \to F_{\alpha} \end{eqnarray*} weakly as $n\to\infty$, where $F_{\alpha}$ is the $\alpha$-analogue of the Tracy-Widom distribution $F_2$ in (\ref{tai}). 
The law $F_{\alpha}$ is equal to the law $\Lambda_0$ stated in Theorem 1.1 of Ram\'{\i}rez {\it et al}. (2011). (3). We do not pursue applications of our results in this paper. They may be useful in Migdal's formula for the partition functions of the 2D Yang-Mills theory [e.g., Witten (1991) and Woodward (2005)]. Further possibilities can be seen, e.g., in the papers by Okounkov (2003) and Borodin and Gorin (2012). (4). We study the eigenvalues of the Laplace-Beltrami operator under four different measures. This study can be continued with other probability measures on random partitions, for example, the $q$-analogue of the Plancherel measure [e.g., Kerov (1992) and F\'{e}ray and M\'{e}liot (2012)], the multiplicative measures [e.g., Vershik (1996)], the $\beta$-Plancherel measure (Baik and Rains, 2001), the Jack measure and the Schur measure [e.g., Okounkov (2003)]. \section{Proofs}\label{main:proofs} In this section we prove the theorems stated earlier. Theorem \ref{finite_theorem} is proved first because it will be used later. \subsection{Proof of Theorem \ref{finite_theorem}}\label{proof_finite_theorem} The following conclusion is not difficult to prove, so we skip its proof. \begin{lemma}\label{Carnige} Recall the notation in Theorem \ref{finite_theorem}. Assume that, under the uniform measure on $\mathcal{P}_n(m)$, $\frac{1}{n}(k_1, \cdots, k_m)$ converges weakly to the uniform distribution on $\Delta$ as $n\to\infty$. Then the same convergence also holds under the uniform measure on $\mathcal{P}_n'(m)$. \end{lemma} We next establish the equivalence of two uniform distributions. \begin{lemma}\label{paint_swim} Let $m\geq 2$ and let $X_1> \cdots > X_m\geq 0$ be random variables. Recall (\ref{unifDelta}). Set \begin{eqnarray}\label{thunder_storm} W=\Big\{(x_1, \cdots, x_{m-1})\in [0, 1]^{m-1};\, x_1>\cdots >x_m\geq 0\ \mbox{and}\ \sum_{i=1}^{m}x_i=1 \Big\},\ \ \mbox{where}\ x_m:=1-\sum_{i=1}^{m-1}x_i. 
\end{eqnarray} Then $(X_1, \cdots, X_m)$ follows the uniform distribution on $\Delta$ if and only if $(X_1, \cdots, X_{m-1})$ follows the uniform distribution on $W$. \end{lemma} \begin{proof}[Proof of Lemma \ref{paint_swim}] First, assume that $(X_1, \cdots, X_m)$ follows the uniform distribution on $\Delta$. Then $(X_1, \cdots, X_{m-1})^T=A(X_1, \cdots, X_m)^T$, where $A=(I_{m-1}, \mathbf{0})$ is the projection matrix and $\mathbf{0}$ is an $(m-1)$-dimensional zero column vector. A linear transform sends a uniform distribution to another uniform distribution [see p. 158 from Fristedt and Gray (1997)]. Since $A\Delta=W$, we get that $(X_1, \cdots, X_{m-1})$ is uniformly distributed on $W$. Now, assume $(X_1, \cdots, X_{m-1})$ is uniformly distributed on $W$. First, it is well known that \begin{eqnarray}\label{silk_peer} \mbox{the volume of}\ \Big\{(x_1, \cdots, x_{m})\in [0, 1]^{m};\, \sum_{i=1}^{m}x_i=1 \Big\}= \frac{\sqrt{m}}{(m-1)!}; \end{eqnarray} see, e.g., Rabinowitz (1989). Thus, by symmetry, \begin{eqnarray}\label{moon_apple} \mbox{the volume of } \Delta=\frac{\sqrt{m}}{m!(m-1)!}. \end{eqnarray} Therefore, to show that $(X_1, \cdots, X_m)$ has the uniform distribution on $\Delta$, it suffices to prove that, for any bounded measurable function $\varphi$ defined on $[0,1]^m$, \begin{eqnarray}\label{integral_Barnes} E\varphi(X_1, \cdots, X_m) = \frac{m!(m-1)!}{\sqrt{m}}\int_{\Delta}\varphi(x_1, \cdots, x_m)\,dS \end{eqnarray} where the right hand side is a surface integral. 
Since $\mathcal{A}:\, (x_1, \cdots, x_{m-1}) \in W\to (x_1, \cdots, x_{m-1},1-\sum_{i=1}^{m-1}x_i) \in \Delta$ is a one-to-one and onto map, the change-of-variables formula [see, e.g., Proposition 6.6.1 from Berger and Gostiaux (1988)] gives \begin{eqnarray*} \int_{\Delta}\varphi(x_1, \cdots, x_m)\,dS = \int_{W} \varphi\Big(x_1, \cdots, x_{m-1}, 1-\sum_{i=1}^{m-1}x_i\Big)\cdot \mbox{det}(B^TB)^{1/2}\, dx_1\cdots dx_{m-1} \end{eqnarray*} where \begin{eqnarray*} B:=\frac{\partial (x_1, \cdots, x_{m-1},1-\sum_{i=1}^{m-1}x_i)}{\partial (x_1, \cdots, x_{m-1})}= \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & 1 \\ -1 & -1 & \cdots & -1 \end{pmatrix} _{m\times {(m-1)}}. \end{eqnarray*} Trivially, $B^TB=I_{m-1} + ee^T$, where $e=(1, \cdots, 1)^T\in \mathbb{R}^{m-1}$; this matrix has eigenvalue $1$ with multiplicity $m-2$ and eigenvalue $m$ with multiplicity one. Hence, $\mbox{det}(B^TB)=m$. Thus, the right hand side of (\ref{integral_Barnes}) is identical to \begin{eqnarray}\label{light_coffee} m!(m-1)!\int_{W} \varphi\Big(x_1, \cdots, x_{m-1}, 1-\sum_{i=1}^{m-1}x_i\Big)\, dx_1\cdots dx_{m-1}. \end{eqnarray} It is well known that \begin{eqnarray*} \mbox{the volume of}\ \Big\{(x_1, \cdots, x_{m-1})\in [0,1]^{m-1};\, \sum_{i=1}^{m-1}x_i \le 1\Big\}=\frac{1}{(m-1)!}; \end{eqnarray*} see, e.g., Stein (1966). Thus, by symmetry, \begin{eqnarray}\label{pen_sky} \mbox{the volume of}\ W= \frac{1}{m!(m-1)!}. \end{eqnarray} This says that the density of the uniform distribution on $W$ is identical to $m!(m-1)!.$ Consequently, the left hand side of (\ref{integral_Barnes}) is equal to \begin{eqnarray*} m!(m-1)!\int_{W}\varphi\Big(x_1, \cdots, x_{m-1}, 1-\sum_{i=1}^{m-1}x_i\Big)\, dx_1\cdots dx_{m-1}, \end{eqnarray*} which together with (\ref{light_coffee}) leads to (\ref{integral_Barnes}). \end{proof} \medskip Fix $m\geq 2$. 
Let $\mathcal{P}_n(m)$ be the set of partitions of $n$ with length at most $m.$ It is known from Erd\"{o}s and Lehner (1941) that \begin{eqnarray}\label{ask_cup} |\mathcal{P}_n(m)| \sim \frac{\binom{n-1}{m-1}}{m!} \end{eqnarray} as $n\to\infty$. The main proof in this section is given below. \begin{proof}[Proof of Theorem \ref{finite_theorem}] By Lemma \ref{Carnige}, it is enough to prove that, under $\mathcal{P}_n(m)$, $\frac{1}{n}(k_1, \cdots, k_m)$ converges weakly to the uniform distribution on $\Delta$ as $n\to\infty$. We first prove the case $m=2.$ In fact, since $k_1+k_2=n$ and $k_1\geq k_2$, we have $\frac{1}{2}n\leq k_1\leq n$. Recall $W$ in (\ref{thunder_storm}). We know $W$ is the interval $(\frac{1}{2},1)$. So it is enough to check that $\frac{k_1}{n}$ converges weakly to the uniform distribution on $(\frac{1}{2},1)$. Indeed, for any $x\in (\frac{1}{2}, 1),$ the distribution function of $\frac{k_1}{n}$ is given by \begin{eqnarray*} P\Big((k_1, n-k_1);\, \frac{k_1}{n}\leq x\Big) & = & P\Big((k, n-k);\, \frac{n}{2}\leq k \leq [nx]\Big)\\ &=&\frac{nx-\frac{1}{2}n +O(1)}{\frac{1}{2}n+O(1)}\to 2x-1 \end{eqnarray*} as $n\to \infty,$ which is exactly the cdf of the uniform distribution on $(1/2, 1).$ Now we treat the general case $m\geq 3$. Recall (\ref{pen_sky}). The volume of $W$ in (\ref{thunder_storm}) equals $\frac{1}{m! (m-1)!}$. Thus the density of the uniform distribution on $W$ has the constant value of $m! (m-1)!$ on $W$. To prove the conclusion, it suffices to show the convergence of the moment generating functions, that is, \begin{eqnarray}\label{tiger_cat} Ee^{(t_1k_1 +\cdots + t_mk_m)/n} \to Ee^{t_1\xi_{1} +\cdots + t_m\xi_{m}} \end{eqnarray} as $n\to\infty$ for all $(t_1, \cdots, t_m) \in \mathbb{R}^m$, where $(\xi_1, \cdots, \xi_m)$ has the uniform distribution on $\Delta$, so that $(\xi_1, \cdots, \xi_{m-1})$ has the uniform distribution on $W$ by Lemma \ref{paint_swim}. We prove this by several steps.
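As a numerical illustration of (\ref{ask_cup}) (a sanity check, not part of the proof), $|\mathcal{P}_n(m)|$ can be computed by a standard dynamic program (partitions with at most $m$ parts equal, by conjugation, partitions into parts of size at most $m$) and compared with $n^{m-1}/(m!(m-1)!)$:

```python
import math

def p(n, m):
    """Number of partitions of n into at most m parts.

    By conjugation this equals the number of partitions of n into parts
    of size at most m, which the DP below counts.
    """
    dp = [1] + [0] * n
    for part in range(1, m + 1):
        for j in range(part, n + 1):
            dp[j] += dp[j - part]
    return dp[n]

assert p(4, 2) == 3      # 4, 3+1, 2+2
assert p(5, 5) == 7      # all 7 partitions of 5
m = 3
ratio = p(2000, m) / (2000**(m - 1) / (math.factorial(m) * math.factorial(m - 1)))
assert abs(ratio - 1) < 0.01   # |P_n(m)| ~ n^{m-1}/(m!(m-1)!)
```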
{\it Step 1: Estimate of LHS of (\ref{tiger_cat}).} By definition, the left hand side of (\ref{tiger_cat}) is identical to \begin{eqnarray} & & \frac{1}{|\mathcal{P}_n(m)|} \sum_{(k_1, \cdots, k_m)} e^{(t_1k_1 +\cdots + t_mk_m)/n} \nonumber\\ & = & \frac{1}{|\mathcal{P}_n(m)|} \sum_{k_1> \cdots > k_m} e^{(t_1k_1 +\cdots + t_mk_m)/n} + \frac{1}{|\mathcal{P}_n(m)|} \sum_{k\in Q_n } e^{(t_1k_1 +\cdots + t_mk_m)/n} \label{drink_cloud} \end{eqnarray} where all of the sums above are taken over $\mathcal{P}_n(m)$ with the corresponding restrictions, and \begin{eqnarray*} Q_n:=\{k=(k_1, \cdots, k_m)\vdash n;\, k_i=k_j\ \mbox{for some}\ 1\leq i<j\leq m\}. \end{eqnarray*} Let us first estimate the size of $Q_n$. Observe \begin{eqnarray*} Q_n=\cup_{i=1}^{m-1}\{k=(k_1, \cdots, k_m)\vdash n;\, k_i=k_{i+1}\}. \end{eqnarray*} For any $\kappa=(k_1, \cdots, k_m)\vdash n$ with $k_i=k_{i+1}$, we know $k_1 + \cdots + 2k_i + k_{i+2} +\cdots +k_m=n$, which gives a non-negative integer solution of $j_1+\cdots + j_{m-1}=n$. It is easily seen that the number of non-negative integer solutions of the equation $j_1+\cdots + j_{m-1}=n$ is equal to $\binom{n+m-2}{m-2}.$ Therefore, \begin{eqnarray}\label{cow_sheep} |Q_n| \leq (m-1)\binom{n+m-2}{m-2} \sim (m-1)\frac{n^{m-2}}{(m-2)!} \end{eqnarray} as $n\to\infty.$ Also, by (\ref{ask_cup}), $| \mathcal{P}_n(m) | \sim \frac{n^{m-1}}{m!(m-1)!}$. Since $e^{(t_1k_1 +\cdots + t_mk_m)/n}\leq e^{|t_1| +\cdots + |t_m|}$ for all $k_i$'s, we see that the last term in (\ref{drink_cloud}) is of order $O(n^{-1})$. Furthermore, we can assume all the $k_i$'s are positive since $|\mathcal{P}_n(m-1)| = o(|\mathcal{P}_n(m)|)$. Consequently, \begin{eqnarray} Ee^{(t_1k_1 +\cdots + t_mk_m)/n} & \sim & \frac{m!(m-1)!}{n^{m-1}} \sum e^{(t_1k_1 +\cdots + t_mk_m)/n} \label{cat_sun} \end{eqnarray} where $(k_1, \cdots, k_m)\vdash n$ in the last sum runs over all positive integers such that $k_1>\cdots > k_m>0$.
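For small cases, the bound (\ref{cow_sheep}) on $|Q_n|$ can be confirmed by brute-force enumeration (again a sanity check outside the proof; the range of $n$ below is illustrative):

```python
import math

def parts(n, m, cap):
    """Yield partitions of n as weakly decreasing m-tuples (zeros allowed)."""
    if m == 0:
        if n == 0:
            yield ()
        return
    for k in range(min(n, cap), -1, -1):
        if k * m < n:            # even m copies of k cannot sum to n
            break
        for rest in parts(n - k, m - 1, k):
            yield (k,) + rest

m = 3
for n in range(3, 20):
    # Q_n: partitions with at most m parts having a repeated entry k_i = k_j
    Qn = [t for t in parts(n, m, n) if len(set(t)) < m]
    assert len(Qn) <= (m - 1) * math.comb(n + m - 2, m - 2)
```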
{\it Step 2: Estimate of RHS of (\ref{tiger_cat}).} For a set $\mathcal{A}$, let $I_\mathcal{A}$ or $I(\mathcal{A})$ denote the indicator function of $\mathcal{A}$, which takes value $1$ on the set $\mathcal{A}$ and $0$ otherwise. Recall that the density of the uniform distribution on $W$ is equal to the constant $m!(m-1)!$. Since $\xi_1+\cdots + \xi_m=1$, we have \begin{eqnarray} & & Ee^{t_1\xi_{1} +\cdots + t_m\xi_{m}}\nonumber\\ &= & m! (m-1)! e^{t_m} \int_{[0,1]^{m-1}} e^{(t_1-t_m) x_1 + \cdots + (t_{m-1}-t_m) x_{m-1} } I_{\mathcal{A}} ~d x_1 \dots d x_{m-1}\nonumber\\ &= & m! (m-1)! e^{t_m}\int_{[0,1]^{m-1}} f(x_1,\cdots,x_{m-1}) I_{\mathcal{A}} ~d x_1 \dots d x_{m-1}, \label{hodge} \end{eqnarray} where \begin{eqnarray} & & \mathcal{A}=\Big\{(x_1, \cdots, x_{m-1})\in [0,1]^{m-1};\, x_1 > \cdots > x_{m-1} > 1- \sum_{i=1}^{m-1} x_i \ge 0\Big\}; \nonumber\\ & & f(x_1,\cdots,x_{m-1})=e^{(t_1-t_m) x_1 + \cdots + (t_{m-1}-t_m) x_{m-1} }. \label{master-could} \end{eqnarray} {\it Step 3: Difference between LHS and RHS of (\ref{tiger_cat}).} Denote \begin{eqnarray*} & & \mathcal{A}_n= \Big\{ (k_1, \cdots, k_{m-1})\in \{1,\cdots, n\}^{m-1};\, \frac{k_1}{n} > \cdots > \frac{k_{m-1}}{n} > 1- \sum_{i=1}^{m-1} \frac{k_i}{n} > 0\Big\}; \\ & & f_n (k_1,\cdots,k_{m-1}) := e^{(t_1-t_m)k_1/n +\cdots + (t_{m-1}-t_m) k_{m-1}/n} \end{eqnarray*} for all $(k_1, \cdots, k_{m-1}) \in \mathcal{A}_n$. From (\ref{cat_sun}), we obtain \begin{eqnarray*} & & Ee^{(t_1k_1 +\cdots + t_mk_m)/n} \\ &\sim & e^{t_m} \frac{m!(m-1)!}{n^{m-1}} \sum_{k_1> \cdots > k_m>0} e^{(t_1-t_m)k_1/n +\cdots + (t_{m-1}-t_m) k_{m-1}/n}\\ &= & m!(m-1)! e^{t_m} \sum_{k_1 = 1}^n \cdots \sum_{k_{m-1}=1}^n \int_{\frac{k_1-1}{n}}^{\frac{k_1}{n}} \cdots \int_{\frac{k_{m-1}-1}{n}}^{\frac{k_{m-1}}{n}} f_n (k_1,\cdots,k_{m-1}) I_{ \mathcal{A}_n } ~d x_1 \dots d x_{m-1}.
\end{eqnarray*} Writing the integral in (\ref{hodge}) similarly, we get \begin{eqnarray} & & Ee^{t_1\xi_{1} +\cdots + t_m\xi_{m}} - Ee^{(t_1k_1 +\cdots + t_mk_m)/n} \nonumber\\ &\sim & m! (m-1)! e^{t_m} \sum_{k_1 = 1}^n \cdots \sum_{k_{m-1}=1}^n \int_{\frac{k_1-1}{n}}^{\frac{k_1}{n}} \cdots \int_{\frac{k_{m-1}-1}{n}}^{\frac{k_{m-1}}{n}} \nonumber\\ & & \quad \quad \quad \quad \big( f (x_1,\cdots,x_{m-1}) I_{ \mathcal{A} }-f_n (k_1,\cdots,k_{m-1}) I_{ \mathcal{A}_n } \big)~d x_1 \dots d x_{m-1}\nonumber \end{eqnarray} which again is identical to \begin{eqnarray} & & m! (m-1)! e^{t_m} \sum_{k_1 = 1}^n \cdots \sum_{k_{m-1}=1}^n \int_{\frac{k_1-1}{n}}^{\frac{k_1}{n}} \cdots \int_{\frac{k_{m-1}-1}{n}}^{\frac{k_{m-1}}{n}} \nonumber\\ & & \quad \quad \quad \quad \quad \quad \quad \quad f (x_1,\cdots,x_{m-1}) \left( I_{ \mathcal{A} } - I_{ \mathcal{A}_n } \right) ~d x_1 \dots d x_{m-1}\label{chicken_say}\\ && \quad + m! (m-1)! e^{t_m} \sum_{k_1 = 1}^n \cdots \sum_{k_{m-1}=1}^n \int_{\frac{k_1-1}{n}}^{\frac{k_1}{n}} \cdots \int_{\frac{k_{m-1}-1}{n}}^{\frac{k_{m-1}}{n}} \nonumber\\ & &\quad \quad \quad \quad \quad \quad \quad \quad\left( f (x_1,\cdots,x_{m-1}) - f_n (k_1,\cdots,k_{m-1}) \right) I_{ \mathcal{A}_n } ~d x_1 \dots d x_{m-1} \ \ \ \ \ \ \ \label{sky_leaves}\\ &= & m! (m-1)! e^{t_m} \left( \mathcal{S}_1 + \mathcal{S}_2\right),\nonumber \end{eqnarray} where $\mathcal{S}_1$ stands for the sum in (\ref{chicken_say}) and $\mathcal{S}_2$ stands for the sum in (\ref{sky_leaves}). The next step is to show that both $\mathcal{S}_1 \to 0$ and $\mathcal{S}_2 \to 0$ as $n\to\infty$, which will complete the proof.
{\it Step 4: Proof that $\mathcal{S}_2 \to 0$.} First, for the term $\mathcal{S}_2$, given that $$\frac{k_1 - 1}{n} \le x_1 \le \frac{k_1}{n}, \cdots, \frac{k_{m-1} - 1}{n} \le x_{m-1} \le \frac{k_{m-1}}{n},$$ we have \begin{eqnarray*} | f(x_1, \cdots, x_{m-1}) - f_n(k_1, \cdots, k_{m-1}) | \leq \frac{1}{n}\exp\Big\{\sum_{i=1}^{m-1} |t_i -t_m|\Big\}\cdot \sum_{i=1}^{m-1} |t_i -t_m|. \end{eqnarray*} Indeed, the above follows from the mean value theorem by considering $|g(1) - g(0) |$, where $$g(s): = \exp\Big\{\sum_{i=1}^{m-1} (t_i - t_m) [ sx_i + (1-s) \frac{k_i}{n}]\Big\}.$$ Thus $$|\mathcal{S}_2| \le \Big(\frac{1}{n}\Big)^{m-1} n^{m-1} \frac{\exp\big\{\sum_{i=1}^{m-1} |t_i -t_m|\big\}\cdot\sum_{i=1}^{m-1}|t_i -t_m| }{n} \to 0$$ as $n\to\infty$. {\it Step 5: Proof that $\mathcal{S}_1 \to 0$.} From (\ref{master-could}), we immediately see that \begin{eqnarray}\label{CCL} \|f\|_{\infty}:=\sup_{(x_1, \cdots, x_{m-1})\in [0,1]^{m-1}}| f (x_1,\cdots,x_{m-1}) | \le e^{|t_1 - t_m| + \cdots + |t_{m-1} - t_m|}. \end{eqnarray} By definition, as $k_i$ ranges from 1 to $n$ for $i=1,\dots, m-1$, the function $I_{ \mathcal{A}_n}$ equals 1 only when the following hold: \begin{eqnarray}\label{Chan} \frac{k_1}{n} > \frac{k_2}{n}, \cdots, \frac{k_{m-2}}{n}> \frac{k_{m-1}}{n}, \frac{k_1 + \cdots + k_{m-2} + 2k_{m-1}}{n} >1, \frac{k_1 + \cdots + k_{m-1}}{n} < 1. \end{eqnarray} Similarly, $I_{\mathcal{A}}$ equals 1 only when \begin{eqnarray}\label{Tang} x_1 > x_2, \cdots, x_{m-2}> x_{m-1}, x_1 + \cdots + x_{m-2} + 2x_{m-1} >1, x_1 + \cdots + x_{m-1} < 1.\ \ \ \end{eqnarray} Let $\mathcal{B}_n$ be a subset of $\mathcal{A}_n$ such that \begin{eqnarray*} \mathcal{B}_n= \mathcal{A}_n \cap \Big\{(k_1, \cdots, k_{m-1})\in \{1,2,\cdots, n\}^{m-1};\, \frac{k_{m-1}}{n} +\sum_{i=1}^{m-1} \frac{k_i}{n} > \frac{m}{n}+1\Big\}.
\end{eqnarray*} Given $(k_1, \cdots, k_{m-1})\in \mathcal{B}_n$, for any \begin{eqnarray}\label{red_book} \frac{k_1 - 1}{n} < x_1 < \frac{k_1}{n}, \cdots, \frac{k_{m-1} - 1}{n} < x_{m-1} < \frac{k_{m-1}}{n}, \end{eqnarray} it is easy to verify from (\ref{Chan}) and (\ref{Tang}) that $I_{\mathcal{A}}=1$. Hence, \begin{eqnarray} & & I_{\mathcal{A}_n}=I_{\mathcal{B}_n} + I_{\mathcal{A}_n\backslash\mathcal{B}_n} \nonumber\\ & \leq & I_{\mathcal{A}} + I\Big\{(k_1, \cdots, k_{m-1})\in \{1,\cdots, n\}^{m-1};\, 1 < \frac{k_{m-1}}{n} + \sum_{i=1}^{m-1} \frac{k_i}{n} \leq \frac{m}{n}+1\Big\} \nonumber\\ & = & I_{\mathcal{A}} + \sum_{j=n+1}^{n+m}I_{E_j}\label{spring_cold} \end{eqnarray} where \begin{eqnarray*} E_j=\Big\{(k_1, \cdots, k_{m-1})\in \{1,\cdots, n\}^{m-1};\, k_1+\cdots + k_{m-2} +2k_{m-1}=j\Big\} \end{eqnarray*} for $n+1 \leq j \leq m+n$. By an argument similar to that in {\it Step 1}, \begin{eqnarray}\label{us_pen} \max_{n+1\leq j \leq m+n}|E_j| =O(n^{m-2}) \end{eqnarray} as $n\to\infty$. On the other hand, consider a subset of $\mathcal{A}_n^c:=\{1,\cdots, n\}^{m-1}\backslash \mathcal{A}_n$ defined by \begin{eqnarray*} \mathcal{C}_n &=& \Big\{(k_1, \cdots, k_{m-1})\in \{1,2,\cdots, n\}^{m-1};\, \mbox{either}\ k_i\leq k_{i+1}-1\ \mbox{for some } 1\leq i \leq m-2,\\ & & \ \mbox{or}\ k_1+\cdots + k_{m-2} + 2k_{m-1} \leq n,\ \mbox{or}\ k_1+\cdots + k_{m-1} \geq m+n-1 \Big\}. \end{eqnarray*} Set $\mathcal{A}^c=[0,1]^{m-1}\backslash \mathcal{A}$. Given $(k_1, \cdots, k_{m-1})\in \mathcal{C}_n$ and any $x_i$'s satisfying (\ref{red_book}), it is not difficult to check that $I_{\mathcal{A}^c}=1$.
Consequently, \begin{eqnarray*} I_{\mathcal{A}_n^c} &= & I_{\mathcal{C}_n} + I\Big\{(k_1, \cdots, k_{m-1})\in \mathcal{A}_n^c;\, k_i> k_{i+1}-1\ \mbox{for all }\ 1\leq i \leq m-2,\\ & & ~~~~~~~~~~~~ k_1+\cdots + k_{m-2} + 2k_{m-1} > n,\ \mbox{and}\ k_1+\cdots + k_{m-1} < m+n-1 \Big\}\\ & \leq & I_{\mathcal{A}^c} + I(\mathcal{D}_{n,1}) + I(\mathcal{D}_{n,2}), \end{eqnarray*} or equivalently, \begin{eqnarray}\label{coca_warm} I_{\mathcal{A}_n} \geq I_{\mathcal{A}} - I(\mathcal{D}_{n,1}) - I(\mathcal{D}_{n,2}), \end{eqnarray} where \begin{eqnarray*} & & \mathcal{D}_{n,1}=\bigcup_{i=1}^{m-2}\big\{(k_1, \cdots, k_{m-1})\in \{1,2,\cdots, n\}^{m-1};\, k_i=k_{i+1}\big\};\\ & & \mathcal{D}_{n,2}=\bigcup_{i=n}^{ n+m-2}\big\{(k_1, \cdots, k_{m-1})\in \{1,2,\cdots, n\}^{m-1};\, k_1+\cdots +k_{m-1}=i\big\}. \end{eqnarray*} By the same argument as in (\ref{cow_sheep}), we have $\max_{1\leq i \leq 2}|\mathcal{D}_{n,i}|=O(n^{m-2})$ as $n\to\infty$. Joining (\ref{spring_cold}) and (\ref{coca_warm}), and assuming (\ref{red_book}) holds, we arrive at \begin{eqnarray*} |I_{\mathcal{A}_n}- I_{\mathcal{A}}|\leq I(\mathcal{D}_{n,1})+ I(\mathcal{D}_{n,2}) + \sum_{i=n+1}^{n+m}I_{E_i} \end{eqnarray*} and $\sum_{i=1}^2|\mathcal{D}_{n,i}| + \sum_{i=n+1}^{n+m}|E_i|=O(n^{m-2})$ as $n\to\infty$ by (\ref{us_pen}). Recall $\mathcal{S}_1$ in (\ref{chicken_say}). Observing that the $\mathcal{D}_{n,i}$'s and $E_i$'s do not depend on the $x_i$'s, we obtain from (\ref{CCL}) that \begin{eqnarray*} |\mathcal{S}_1| &\leq & \|f\|_{\infty}\cdot\sum_{k_1 = 1}^n \cdots \sum_{k_{m-1}=1}^n \Big[\sum_{i=1}^2I(\mathcal{D}_{n,i}) + \sum_{i=n+1}^{n+m}I_{E_i}\Big]\int_{\frac{k_1-1}{n}}^{\frac{k_1}{n}} \cdots \int_{\frac{k_{m-1}-1}{n}}^{\frac{k_{m-1}}{n}}1 ~d x_1 \dots d x_{m-1}\\ & = & \|f\|_{\infty}\cdot \Big(\sum_{i=1}^2|\mathcal{D}_{n,i}| + \sum_{i=n+1}^{n+m}|E_i|\Big)\cdot \frac{1}{n^{m-1}}\\ & = & O(n^{-1}) \end{eqnarray*} as $n\to\infty.$ The proof is completed.
\end{proof} \subsection{Proof of Theorem \ref{cancel_temple}}\label{sec:proof:restricted-uniform} We first rewrite the eigenvalues of the Laplace-Beltrami operator given in (\ref{Jingle}) in terms of the $k_i$'s alone instead of a mix of the $k_i$'s and $k_i'$'s. An essentially identical expression can be found on p. 596 of Dumitriu {\it et al}. (2007), so we omit the proof. \begin{lemma}\label{theater} Let $\alpha>0$. Let $\lambda_{\kappa}$ be as in (\ref{Jingle}). For $\kappa=(k_1, \cdots, k_m) \vdash n$, we have \begin{eqnarray}\label{green} \lambda_{\kappa}=\big(m-\frac{\alpha}{2}\big)n +\sum_{i=1}^m (\frac{\alpha}{2}k_i-i)k_i. \end{eqnarray} \end{lemma} Let $\eta$ follow the chi-square distribution $\chi^2(v)$ with density function \begin{eqnarray}\label{his_her} (2^{v/2}\Gamma(v/2))^{-1}x^{\frac{v}{2}-1}e^{-x/2}, ~~~~~~~ x>0. \end{eqnarray} The following lemma is on p. 486 from Kotz {\it et al}. (2000). \begin{lemma}\label{Kotz} Let $m\geq 2$ and $\eta_1, \cdots, \eta_m$ be independent random variables with $\eta_i \sim \chi^2(v_i)$ for each $i$. Set $X_i=\eta_{i}/(\eta_1+\cdots + \eta_m)$ for each $i.$ Then $(X_1, \cdots, X_{m-1})$ has density \begin{eqnarray*} f(x_1, \cdots, x_{m-1})=\frac{\Gamma(\frac12\sum_{j=1}^{m}v_j)}{\prod_{j=1}^{m}\Gamma(\frac12v_j)}\Big[\prod_{j=1}^{m-1}x_j^{(v_j/2)-1}\Big] \Big(1-\sum_{j=1}^{m-1}x_j\Big)^{(v_m/2)-1} \end{eqnarray*} on the set $U=\{(x_1, \cdots, x_{m-1})\in [0,1]^{m-1};\, \sum_{i=1}^{m-1}x_i\leq 1\}$. \end{lemma} \begin{proof}[Proof of Theorem \ref{cancel_temple}] By Lemma \ref{theater}, since $m$ is fixed and $k_1\leq n$, we have \begin{eqnarray*} \frac{\lambda_{\kappa}}{n^2}=\frac{\alpha}{2}\cdot\sum_{i=1}^m\Big(\frac{k_i}{n}\Big)^2 + o(1) \end{eqnarray*} as $n\to\infty.$ By Theorem \ref{finite_theorem}, under the uniform distribution on either $\mathcal{P}_n(m)$ or $\mathcal{P}_n(m)'$, $\frac{1}{n}(k_1, \cdots, k_m)$ converges weakly to $(Z_1, \cdots, Z_m)$, which has the uniform measure on $\Delta$.
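The special case $v_1=\cdots=v_m=2$ of Lemma \ref{Kotz}, which is used below, says that $(\eta_1/S, \cdots, \eta_{m-1}/S)$ is uniform on $U$. This can be checked by simulation (a Monte Carlo sanity check outside the proof; the values $m=3$ and the moments $EX_1=\frac13$, $EX_1X_2=\frac{1}{12}$ of the uniform distribution on the triangle $U$ are illustrative):

```python
import numpy as np

# Monte Carlo check of Lemma Kotz with v_1 = ... = v_m = 2: since chi^2(2)/2
# is standard exponential, (xi_1/S_m, ..., xi_{m-1}/S_m) with xi_i i.i.d.
# exponential should be uniform on U = {x in [0,1]^{m-1} : sum x_i <= 1}.
rng = np.random.default_rng(2024)
m, N = 3, 200_000
xi = rng.exponential(size=(N, m))
X = xi[:, :m - 1] / xi.sum(axis=1, keepdims=True)
# For the uniform distribution on the triangle U (m = 3):
# E[X_1] = 1/3 and E[X_1 X_2] = 1/12.
assert abs(X[:, 0].mean() - 1 / 3) < 0.005
assert abs((X[:, 0] * X[:, 1]).mean() - 1 / 12) < 0.005
```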
Let $\xi_1, \cdots, \xi_m$ be independent random variables with the common density $e^{-x}I(x\geq 0)$. Set \begin{eqnarray*} S_m=\xi_1 + \cdots + \xi_m~~~ \mbox{and} ~~~ X_i=\frac{\xi_{(i)}}{S_m}, \ \ \ 1\leq i \leq m \end{eqnarray*} where $\xi_{(1)}>\cdots>\xi_{(m)}$ are the order statistics. By the continuous mapping theorem, we only need to show that $(Z_1, \cdots, Z_m)$ has the same distribution as that of $(X_1, \cdots, X_m)$. Recall $W$ in Lemma \ref{paint_swim}. From (\ref{pen_sky}), the volume of $W$ is $(m!(m-1)!)^{-1}$. Therefore, by Lemma \ref{paint_swim}, it suffices to prove that \begin{eqnarray}\label{Big_sing} E\varphi(X_1, \cdots, X_{m-1})=m!(m-1)!\int_{W}\varphi(x_1, \cdots, x_{m-1})\,dx_1\cdots dx_{m-1} \end{eqnarray} for any bounded and measurable function $\varphi$ defined on $[0, 1]^{m-1}.$ Recalling (\ref{his_her}), we know $\chi^2(2)/2$ has the exponential density function $e^{-x}I(x\geq 0)$. Taking $v_1=v_2=\cdots =v_m=2$ in Lemma \ref{Kotz}, we see that the density function of $\big(\frac{\xi_{1}}{S_m}, \cdots, \frac{\xi_{m-1}}{S_m}\big)$ on $U$ is equal to the constant $\Gamma(m)=(m-1)!$. Furthermore, \begin{eqnarray*} E\varphi(X_1, \cdots, X_{m-1}) = \sum_{\pi}E\Big[\varphi\Big(\frac{\xi_{\pi(1)}}{S_m}, \cdots, \frac{\xi_{\pi(m-1)}}{S_m}\Big)I(\xi_{\pi(1)}>\cdots > \xi_{\pi(m)})\Big], \end{eqnarray*} where the sum is taken over all permutations $\pi$ of $\{1, \cdots, m\}.$ Write $S_m=\xi_{\pi(1)}+\cdots + \xi_{\pi(m)}.$ By the i.i.d.
property of $\xi_i$'s, we get \begin{eqnarray*} & & E\varphi(X_1, \cdots, X_{m-1})\\ & = & m!\cdot E\Big[\varphi\Big(\frac{\xi_{(1)}}{S_m}, \cdots, \frac{\xi_{(m-1)}}{S_m}\Big) I\Big(\frac{\xi_{(1)}}{S_m}> \cdots > \frac{\xi_{(m-1)}}{S_m}> 1-\frac{\sum_{i=1}^{m-1}\xi_{(i)}}{S_m}\Big)\Big]\\ & =& m!(m-1)!\int_{U}\varphi(x_1, \cdots, x_{m-1})I\Big(x_1> \cdots > x_{m-1}> 1-\sum_{i=1}^{m-1}x_i\Big)\,dx_1\cdots dx_{m-1} \end{eqnarray*} since $\big(\frac{\xi_{(1)}}{S_m}, \cdots, \frac{\xi_{(m-1)}}{S_m}, 1-\frac{\sum_{i=1}^{m-1}\xi_{(i)}}{S_m}\big)$ is a function of $\big(\frac{\xi_{1}}{S_m}, \cdots, \frac{\xi_{m-1}}{S_m}\big)$, which has the constant density $(m-1)!$ on $U$ as shown earlier. The last term above is clearly equal to the right hand side of (\ref{Big_sing}). The proof is then completed. \end{proof} \subsection{Proof of Theorem \ref{Gamma_surprise}}\label{sec:proof:restricted-Jack} We start with a result on the restricted Jack probability measure $P_{n,m}^\alpha$ as in (\ref{eq:restrictedjack}). \begin{lemma}\label{Sho_friend}(Matsumoto, 2008). Let $\alpha>0$ and $\beta=2/\alpha.$ For a given integer $m\geq 2$, let $\kappa=(k_{n,1}, \cdots, k_{n,m})\vdash n$ be chosen with probability $P_{n,m}^{ \alpha}(\kappa).$ Then, as $n\to\infty$, \begin{eqnarray*} \Big(\sqrt{\frac{\alpha m}{n}}\big(k_{n,i}-\frac{n}{m}\big)\Big)_{1\leq i \leq m} \end{eqnarray*} converges weakly to a limiting distribution with density function \begin{eqnarray}\label{dance} g(x_1, \cdots, x_m)=\mbox{const}\cdot e^{-\frac{\beta}{2}\sum_{i=1}^mx_i^2}\cdot \prod_{1\leq j < k \leq m}|x_j - x_k|^{\beta} \end{eqnarray} for all $x_1\geq x_2\geq \cdots \geq x_m$ such that $x_1+\cdots + x_m=0.$ \end{lemma} The idea of the proof of Theorem \ref{Gamma_surprise} below is that, by virtue of Lemma \ref{Sho_friend}, we can write $\lambda_{\kappa}$ in (\ref{Jingle}) in terms of the trace of a ``Wishart''-type matrix.
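Before turning to the proof, here is a numerical sanity check (not part of the argument; the values $m=3$, $\beta=2$, $t=0.3$ are illustrative) that $\big(1-\frac{2t}{\beta}\big)^{-\frac14(m-1)(m\beta+2)}$, the closed form arrived at in the proof below, is the moment generating function of the Gamma density $h(x)=\frac{1}{\Gamma(v)(2/\beta)^v}x^{v-1}e^{-\beta x/2}$ with $v=\frac14(m-1)(m\beta+2)$:

```python
import math
import numpy as np

# Check numerically that (1 - 2t/beta)^{-v} with v = (m-1)(m*beta+2)/4
# matches the moment generating function of the Gamma density
# h(x) = x^{v-1} e^{-beta*x/2} / (Gamma(v) (2/beta)^v).
m, beta, t = 3, 2.0, 0.3                   # need t < beta/2
v = 0.25 * (m - 1) * (m * beta + 2)        # here v = 4
x = np.linspace(1e-8, 80.0, 400_001)
h = x**(v - 1) * np.exp(-beta * x / 2) / (math.gamma(v) * (2 / beta)**v)
integrand = np.exp(t * x) * h
# trapezoidal rule for E e^{tX} = int_0^inf e^{tx} h(x) dx
mgf_numeric = float(np.sum((integrand[1:] + integrand[:-1]) / 2) * (x[1] - x[0]))
mgf_closed = (1 - 2 * t / beta)**(-v)
assert abs(mgf_numeric / mgf_closed - 1) < 1e-4
```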
From this, we get the Gamma density by evaluating the moment generating function (or the Laplace transform) of the trace through (\ref{dance}). \begin{proof}[Proof of Theorem \ref{Gamma_surprise}] Let \begin{eqnarray*} Y_{n,i}=\sqrt{\frac{\alpha m}{n}}\big(k_{n,i}-\frac{n}{m}\big) \end{eqnarray*} for $1\leq i \leq m$. By Lemma \ref{Sho_friend}, under $P_{n,m}^{\alpha}$, we know $(Y_{n,1}, \cdots, Y_{n,m})$ converges weakly to a random vector $(X_1, \cdots, X_m)$ with density function $g(x_1, \cdots, x_m)$ as in (\ref{dance}). An inspection of the proof of Lemma \ref{Sho_friend} shows that its conclusion still holds for $Q_{n,m}^{\alpha}$. Solving for the $k_{n,i}$'s, we have \begin{eqnarray*} k_{n,i}=\frac{n}{m} + \sqrt{\frac{n}{\alpha m}}Y_{n,i} \end{eqnarray*} for $1\leq i \leq m$. Substituting these into \eqref{green}, we see that \begin{eqnarray*} & & \lambda_\kappa-\big(m-\frac{\alpha}{2}\big)n\\ & = & \sum_{i=1}^m\Big[\frac{\alpha}{2}\big(\frac{n}{m} +\sqrt{\frac{n}{m\alpha}}Y_{n,i}\big)-i\Big]\cdot \big(\frac{n}{m} +\sqrt{\frac{n}{m\alpha}}Y_{n,i}\big)\\ & = & \frac{\alpha}{2}\sum_{i=1}^m\big(\frac{n}{m} +\sqrt{\frac{n}{m\alpha}}Y_{n,i}\big)^2-\sum_{i=1}^m i\big(\frac{n}{m} +\sqrt{\frac{n}{m\alpha}}Y_{n,i}\big)\\ & = & \frac{\alpha}{2}\cdot\frac{n^2}{m} + \sqrt{\alpha}\cdot \big(\frac{n}{m}\big)^{3/2}\sum_{i=1}^mY_{n,i}+\frac{n}{2m}\sum_{i=1}^mY_{n,i}^2 - \frac{n(m+1)}{2}- \sqrt{\frac{n}{m\alpha}} \sum_{i=1}^m i Y_{n,i}\\ & = & \frac{\alpha}{2}\cdot\frac{n^2}{m} - \frac{n(m+1)}{2}+\frac{n}{2m}\sum_{i=1}^mY_{n,i}^2 - \sqrt{\frac{n}{m\alpha}} \sum_{i=1}^m i Y_{n,i} \end{eqnarray*} since $\sum_{i=1}^mY_{n,i}=0.$ With the notation $a_n$ and $b_n$, \begin{eqnarray*} \frac{\lambda_\kappa-a_n}{b_n}=\sum_{i=1}^mY_{n,i}^2 - \frac{2}{\sqrt{\alpha}}\sqrt{\frac{m}{n}} \sum_{i=1}^m i Y_{n,i}.
\end{eqnarray*} Since $(Y_{n,1}, \cdots, Y_{n,m})$ converges weakly to the random vector $(X_1, \cdots, X_m)$, taking $$h_1(y_1, \cdots, y_m)=\sum_{i=1}^miy_i \quad \text{and} \quad h_2(y_1, \cdots, y_m)=\sum_{i=1}^my_i^2,$$ respectively, we obtain by the continuous mapping theorem that \begin{eqnarray*} \sum_{i=1}^miY_{n,i}\to \sum_{i=1}^miX_i\ \ \ \mbox{and}\ \ \ \ \sum_{i=1}^mY_{n,i}^2\to \sum_{i=1}^mX_i^2 \end{eqnarray*} weakly as $n\to\infty$. By the Slutsky lemma, \begin{eqnarray*} \frac{\lambda_\kappa-a_n}{b_n}=\sum_{i=1}^mY_{n,i}^2 + O_p\big(n^{-1/2}\big) \to \sum_{i=1}^mX_i^2 \end{eqnarray*} weakly as $n\to\infty.$ Now let us calculate the moment generating function of $\sum_{i=1}^mX_i^2$. Recall (\ref{dance}). Let $C_m$ be the normalizing constant such that \begin{eqnarray*} g(x_1, \cdots, x_m)=C_m\cdot e^{-\frac{\beta}{2}\sum_{i=1}^mx_i^2}\cdot \prod_{1\leq j < k \leq m}|x_j - x_k|^{\beta} \end{eqnarray*} is a probability density function on the subset of $\mathbb{R}^{m}$ such that $x_1\geq x_2\geq \cdots \geq x_m$ and $x_1+\cdots + x_m=0.$ We then have \begin{eqnarray} Ee^{t\sum_{i=1}^mX_i^2}&=& \int_{\mathbb{R}^{m-1}}e^{t\sum_{i=1}^mx_i^2}g(x_1, \cdots, x_m)\,dx_1 \cdots dx_{m-1} \nonumber\\ & = & C_m \int_{\mathbb{R}^{m-1}}e^{-\frac{\beta}{2}\sum_{i=1}^m(1-\frac{2t}{\beta})x_i^2}\prod_{1\leq j < k \leq m}|x_j - x_k|^{\beta}\,dx_1 \cdots dx_{m-1}\nonumber\\ & = & \Big(1-\frac{2t}{\beta}\Big)^{-\frac{1}{2}\cdot(\frac{m(m-1)}{2}\beta + (m-1))}\cdot \int_{\mathbb{R}^{m-1}}g(y_1, \cdots, y_m)\,dy_1 \cdots dy_{m-1}\nonumber\\ & = & \Big(1-\frac{2t}{\beta}\Big)^{-\frac{1}{4}(m-1)\cdot(m\beta + 2)} \label{laugh_sniff} \end{eqnarray} for $t<\frac{\beta}{2}$, where the transform $y_i=(1-\frac{2t}{\beta})^{1/2}x_i$ is taken in the third step for $i=1,\cdots, m-1.$ It is easy to check that the term in (\ref{laugh_sniff}) is also the moment generating function of the Gamma distribution with density function $h(x)=\frac{1}{\Gamma(v)\, (2/\beta)^{v}}x^{v-1}e^{-\beta x/2}$ for all
$x\geq 0$, where $v=\frac{1}{4}(m-1)\cdot(m\beta + 2).$ By the uniqueness theorem, we know the conclusion holds. \end{proof} \subsection{Proof of Theorem \ref{vase_flower}}\label{proof_vase_flower} Let $\{X_n;\, n\geq 1\}$ be random variables and $\{w_n;\, n\geq 1\}$ be non-zero constants. If $\{X_n/w_n;\, n\geq 1\}$ is bounded in probability, i.e., $\lim_{K\to\infty}\sup_{n\geq 1}P(|X_n/w_n|\geq K)=0$, we then write $X_n=O_p(w_n)$ as $n\to\infty.$ If $X_n/w_n$ converges to $0$ in probability, we write $X_n=o_p(w_n)$. The following lemma is Theorem 2 from Pittel (1997). \begin{lemma}\label{wuliangye} Let $\kappa=(k_1, \cdots, k_m)$ be a partition of $n$ chosen according to the uniform measure on $\mathcal{P}(n)$. Then \begin{eqnarray*} k_j= \begin{cases} \big(1+ O_p((\log n)^{-1})\big) E(j)\ \ \ \ \ \text{if\ \ $1\leq j \leq \log n$;}\\ E(j) + O_p\big((nj^{-1}\log n)^{1/2}\big)\ \ \ \ \ \text{if\ \ $\log n\leq j \leq n^{1/2}$;} \\ E(j) + O_p\big((e^{-cjn^{-1/2}}n^{1/2}\log n)^{1/2}\big) \ \ \ \ \ \text{if\ \ $n^{1/2}\leq j \leq \kappa_n$};\\ (1+O_p(a_n^{-1}))E(j)\ \ \ \ \ \text{if\ \ $\kappa_n \leq j \leq k_n$} \end{cases} \end{eqnarray*} uniformly as $n\to\infty$, where $c=\pi/\sqrt{6}$, \begin{eqnarray*} & & E(x)=\frac{\sqrt{n}}{c}\log \frac{1}{1-e^{-cxn^{-1/2}}} \ \ \ \mbox{for $x>0$}, \\ & & \kappa_n=\left[\frac{\sqrt{n}}{4c}\log n\right] \ \ \ \mbox{and}\ \ \ \ k_n=\Big[\frac{\sqrt{n}}{2c}(\log n-2\log\log n-a_n)\Big] \end{eqnarray*} with $a_n\to\infty$ and $a_n=o(\log \log n)$ as $n\to\infty.$ \end{lemma} Based on Lemma \ref{wuliangye}, we get the following law of large numbers. This is a key estimate in the proof of Theorem \ref{vase_flower}. \begin{lemma}\label{happy_uniform} Let $\kappa=(k_1, \cdots, k_m)$ be a partition of $n$ chosen according to the uniform measure on $\mathcal{P}(n)$. Then $n^{-3/2}\sum_{j=1}^m k_j^2 \to a$ in probability as $n\to\infty$, where \begin{eqnarray}\label{learn} a=c^{-3}\int_0^1\frac{\log^2 (1-t)}{t}\,dt \end{eqnarray} and $c=\pi/\sqrt{6}$.
The above conclusion also holds if ``$\sum_{j=1}^m k_j^2$" is replaced by ``$2\sum_{j=1}^m jk_j$". \end{lemma} \begin{proof}[Proof of Lemma \ref{happy_uniform}] Define \begin{eqnarray*} F(x)=\log \frac{1}{1-e^{-cxn^{-1/2}}} \end{eqnarray*} for $x>0.$ Obviously, both $E(x)$ and $F(x)$ are decreasing in $x\in (0, \infty).$ {\it Step 1}. We first claim that \begin{eqnarray}\label{rubber_band} \max_{1\leq j \leq \frac{1}{6}\sqrt{n}\log n}\big|\frac{k_j}{E(j)}-1\big| \to 0 \end{eqnarray} in probability as $n\to\infty.$ (The choice of $1/6$ is rather arbitrary here. Actually, any number strictly less than $1/2c$ would work). We prove this next. Notice \begin{eqnarray*} \max_{x\geq 1}E(x)=E(1) &=& -\frac{\sqrt{n}}{c} \log \big(1-e^{-cn^{-1/2}}\big)\nonumber\\ & \sim & -\frac{\sqrt{n}}{c} \log \big(cn^{-1/2}\big) \sim \frac{1}{2c}\sqrt{n}\log n \end{eqnarray*} as $n\to\infty$ since $1-e^{-x}\sim x$ as $x\to 0.$ Observe \begin{eqnarray*} \frac{\sqrt{nj^{-1}\log n}}{E(j)}=-c\sqrt{\log n}\cdot \frac{j^{-1/2}}{\log \big(1-e^{-cjn^{-1/2}}\big)}. \end{eqnarray*} Therefore, \begin{eqnarray*} \max_{\log n \leq j \leq (\log n)^2}\frac{\sqrt{nj^{-1}\log n}}{E(j)}\leq \frac{c}{F(\log^2n)} \to 0 \end{eqnarray*} and \begin{eqnarray*} \max_{\log^2n \leq j \leq n^{1/2}}\frac{\sqrt{nj^{-1}\log n}}{E(j)} \leq c\frac{(\log n)^{-1/2}}{F(n^{1/2})} \to 0 \end{eqnarray*} as $n\to\infty$. By Lemma \ref{wuliangye}, \begin{eqnarray}\label{mine} \max_{\log n \leq j \leq \sqrt{n}}\Big|\frac{k_j}{E(j)}-1\Big| =o_p(1) \end{eqnarray} as $n\to\infty$. Now we consider the case for $n^{1/2}\leq j \leq \kappa_n$ where $\kappa_n$ is as in Lemma \ref{wuliangye}. Trivially, $\frac{1}{4c} >\frac{1}{6}$. Notice that \begin{eqnarray*} \max_{n^{1/2}\leq j \leq (1/6)\sqrt{n}\log n}\frac{(e^{-cjn^{-1/2}}n^{1/2}\log n)^{1/2}}{E(j)} &\leq & \frac{(e^{-c}n^{1/2}\log n)^{1/2}}{E((1/6)\sqrt{n}\log n)} \\ &= &\frac{(ce^{-c/2})n^{-1/4}(\log n)^{1/2}}{F((1/6)\sqrt{n}\log n)}. 
\end{eqnarray*} Evidently, \begin{eqnarray}\label{child_baby} F\big(\frac{1}{6}\sqrt{n}\log n\big)=-\log \big(1-e^{-(c/6)\log n}\big) \sim \frac{1}{n^{c/6}} \end{eqnarray} as $n\to\infty.$ This says \begin{eqnarray*} \max_{n^{1/2}\leq j \leq (1/6)\sqrt{n}\log n}\Big|\frac{k_j}{E(j)}-1\Big| =o_p(1) \end{eqnarray*} as $n\to\infty$ by Lemma \ref{wuliangye}. This together with (\ref{mine}) and the first expression of $k_j$ in Lemma \ref{wuliangye} concludes (\ref{rubber_band}), which is equivalent to saying that \begin{eqnarray} \label{skill} k_j=E(j) +\epsilon_{n,j}E(j) \end{eqnarray} uniformly for all $1\leq j\leq (1/6)\sqrt{n}\log n$, where the $\epsilon_{n,j}$'s satisfy \begin{eqnarray}\label{gymnastics} H_n:=\sup_{1\leq j\leq (1/6)\sqrt{n}\log n}|\epsilon_{n,j}| \to 0 \end{eqnarray} in probability as $n\to\infty$. {\it Step 2}. We approximate the two sums in (\ref{sing}) and (\ref{song}) below by integrals in this step. The assertions (\ref{skill}) and (\ref{gymnastics}) imply that \begin{eqnarray} & & \sum_{1\leq j \leq (1/6)\sqrt{n}\log n }k_j^2 =\Big(\sum_{1\leq j \leq (1/6)\sqrt{n}\log n}E(j)^2\Big) (1+o_p(1)); \label{sing}\\ & & \sum_{1\leq j \leq (1/6)\sqrt{n}\log n }jk_j =\Big(\sum_{1\leq j \leq (1/6)\sqrt{n}\log n}jE(j)\Big) (1+o_p(1))\label{song} \end{eqnarray} as $n\to\infty.$ Since $E(x)$ is decreasing in $x$, we have \begin{eqnarray*} \int_1^mE(x)^2\,dx=\sum_{j=1}^{m-1}\int_j^{j+1}E(x)^2\,dx \leq \sum_{j=1}^{m-1}E(j)^2 \end{eqnarray*} for any $m\geq 2.$ Consequently, \begin{eqnarray*} \sum_{1\leq j \leq (1/6)\sqrt{n}\log n}E(j)^2 \geq \int_1^{m_1}E(x)^2\,dx \end{eqnarray*} with $m_1=\big[\frac{1}{6}\sqrt{n}\log n\big]$. Similarly, \begin{eqnarray*} \int_0^{m+1}E(x)^2\,dx=\sum_{j=0}^{m}\int_j^{j+1}E(x)^2\,dx \geq \sum_{j=1}^{m+1}E(j)^2 \end{eqnarray*} for any $m\geq 1.$ The two inequalities imply \begin{eqnarray}\label{sneeze} \int_1^{m_1}E(x)^2\,dx \leq \sum_{1\leq j \leq (1/6)\sqrt{n}\log n}E(j)^2 \leq \int_0^{\infty}E(x)^2\,dx.
\end{eqnarray} By the same argument, \begin{eqnarray}\label{rain_snow} \int_1^{m_1}E(x)\,dx \leq \sum_{1\leq j \leq (1/6)\sqrt{n}\log n}E(j) \leq \int_0^{\infty}E(x)\,dx. \end{eqnarray} Now we estimate $\sum_{1\leq j \leq (1/6)\sqrt{n}\log n}j E(j)$. Using the inequality \begin{eqnarray*} jE(j+1)\leq \int_j^{j+1}xE(x)\,dx \leq (j+1) E(j) \end{eqnarray*} we have \begin{eqnarray*} (j+1)E(j+1)-E(j+1)\leq \int_j^{j+1}xE(x)\,dx \leq j E(j) + E(j) \end{eqnarray*} for all $j\geq 0$. Summing these inequalities over $j$ and using (\ref{rain_snow}), we get \begin{eqnarray} \int_1^{m_1}x E(x)\,dx - \int_0^{\infty}E(x)\,dx &\leq & \sum_{1\leq j \leq (1/6)\sqrt{n}\log n}jE(j) \nonumber\\ & \leq & \int_0^{\infty}xE(x)\,dx + \int_0^{\infty}E(x)\,dx.\label{did} \end{eqnarray} {\it Step 3}. In this step, we evaluate the integrals $\int E(x)\,dx$, $\int E(x)^2\,dx$ and $\int xE(x)\,dx$. First, \begin{eqnarray*} \int_0^{\infty} E(x)\,dx= \frac{\sqrt{n}}{c}\int_0^{\infty}\log \frac{1}{1-e^{-cxn^{-1/2}}}\,dx. \end{eqnarray*} Set \begin{eqnarray}\label{same_city} t=e^{-cxn^{-1/2}};\ \ \mbox{then}\ \ x=\frac{\sqrt{n}}{c}\log \frac{1}{t}\ \ \mbox{and}\ \ dx=-\frac{\sqrt{n}}{ct}dt. \end{eqnarray} Hence \begin{eqnarray}\label{scissor} \int_0^{\infty} E(x)\,dx=\frac{n}{c^2}\int_0^1\frac{\log (1-t)}{-t}\,dt=O(n) \end{eqnarray} as $n\to \infty$ since the second integral above is finite. By the same substitution, we have \begin{eqnarray*} & & \int_0^{\infty} E(x)^2\,dx=\frac{n^{3/2}}{c^3}\int_0^1\frac{\log^2 (1-t)}{t}\,dt; \\ & & \int_0^{\infty} xE(x)\,dx=\frac{n^{3/2}}{c^3}\int_0^1\frac{1}{t}\log \frac{1}{t}\log\frac{1}{1-t}\,dt. \end{eqnarray*} By the two identities above (3.44) in Pittel (1997), we have \begin{eqnarray}\label{little_brother} \int_0^1\frac{\log^2 (1-t)}{t}\,dt=2\int_0^1\frac{1}{t}\log \frac{1}{t}\log\frac{1}{1-t}\,dt.
\end{eqnarray} From the same calculation as in (\ref{same_city}), we see that \begin{eqnarray*} \int_1^{m_1}E(x)^2\,dx=\frac{n^{3/2}}{c^3}\int_{e^{-cm_1n^{-1/2}}}^{e^{-cn^{-1/2}}}\frac{\log^2 (1-t)}{t}\,dt \sim \frac{n^{3/2}}{c^3}\int_0^1\frac{\log^2 (1-t)}{t}\,dt \end{eqnarray*} as $n\to\infty$ since $m_1=\big[\frac{1}{6}\sqrt{n}\log n\big]$. By the same reasoning, \begin{eqnarray*} \int_1^{m_1}xE(x)\,dx \sim \frac{n^{3/2}}{2c^3}\int_0^1\frac{\log^2 (1-t)}{t}\,dt. \end{eqnarray*} Combining the above two integrals and the one in (\ref{scissor}) with (\ref{sneeze}), (\ref{rain_snow}) and (\ref{did}), we conclude \begin{eqnarray} & & \sum_{1\leq j \leq (1/6)\sqrt{n}\log n}E(j)^2 \sim \frac{n^{3/2}}{c^3}\int_0^1\frac{\log^2 (1-t)}{t}\,dt; \label{music_hi}\\ & & \sum_{1\leq j \leq (1/6)\sqrt{n}\log n}j E(j) \sim \frac{n^{3/2}}{2c^3}\int_0^1\frac{\log^2 (1-t)}{t}\,dt \label{sun_hello} \end{eqnarray} as $n\to\infty.$ {\it Step 4}. We will get the desired conclusion in this step. Now connecting (\ref{music_hi}) and (\ref{sun_hello}) with (\ref{sing}) and (\ref{song}), we obtain \begin{eqnarray} & & \sum_{1\leq j \leq (1/6)\sqrt{n}\log n }k_j^2 = a n^{3/2} (1+o_p(1)); \label{sing2}\\ & & \sum_{1\leq j \leq (1/6)\sqrt{n}\log n }jk_j =\frac{a}{2} n^{3/2}(1+o_p(1)) \label{green_carpet} \end{eqnarray} as $n\to\infty$, where ``$a$'' is as in (\ref{learn}). Erd\"os and Lehner (1941) obtained that \begin{eqnarray}\label{seabed} \frac{\pi}{ \sqrt{6n} } m - \log \frac{ \sqrt{6n} }{\pi} \to \mu \end{eqnarray} weakly as $n\to\infty$, where $\mu$ is a probability measure with cdf $F_{\mu}(v) = e^{-e^{-v}}$ for every $v \in \mathbb{R}$. See also Fristedt (1993). This implies that \begin{eqnarray}\label{red_oak} P\big(m > \frac{1}{c} \sqrt{n}\log n \big) \to 0 \end{eqnarray} as $n\to\infty$.
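As a numerical aside (not part of the proof), the constant $\int_0^1\log^2(1-t)/t\,dt$ defining $a$ can be checked by the substitution $t=1-e^{-s}$, which removes the endpoint singularity; its value is $2\zeta(3)$, as computed at the end of this section:

```python
import numpy as np

# Numerical check: int_0^1 log^2(1-t)/t dt = int_0^inf s^2 e^{-s}/(1-e^{-s}) ds
# (substitution t = 1 - e^{-s}), and this equals 2*zeta(3) = 2.4041138...
s = np.linspace(1e-8, 40.0, 400_001)
f = s**2 * np.exp(-s) / (-np.expm1(-s))    # smooth integrand after substitution
integral = float(np.sum((f[1:] + f[:-1]) / 2) * (s[1] - s[0]))   # trapezoidal rule
zeta3 = sum(1.0 / k**3 for k in range(1, 100_000))
assert abs(integral - 2 * zeta3) < 1e-4
```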
Now, for any $\epsilon>0$, by (\ref{sing2}), \begin{eqnarray} & & P\Big(\big|a-n^{-3/2}\sum_{j=1}^m k_j^2\big|\geq \epsilon\Big) \nonumber\\ & \leq & P\Big(\big|a-n^{-3/2}\sum_{1\leq j \leq \frac{1}{6}\sqrt{n}\log n} k_j^2\big|\geq \epsilon/2\Big) + P\Big( n^{-3/2} \sum_{ \frac{1}{6}\sqrt{n}\log n \leq j \leq m }k_j^2 \geq \epsilon/2 \Big) \nonumber\\ & \leq & P\Big(m > \frac{1}{c} \sqrt{n}\log n \Big) + P \Big(n^{-3/2} \sum_{ \frac{1}{6}\sqrt{n}\log n \leq j \leq m }k_j^2 \geq \epsilon/2, m \leq \frac{1}{c} \sqrt{n}\log n \Big) +o(1) \nonumber\\ & \leq & P \Big(n^{-3/2} \sum_{ \frac{1}{6}\sqrt{n}\log n \leq j \leq \frac{1}{c} \sqrt{n}\log n }k_j^2 \geq \epsilon/2 \Big) +o(1) \label{money_monkey} \end{eqnarray} as $n\to\infty$. Denote by $l_n$ the least integer greater than or equal to $\frac{1}{6}\sqrt{n}\log n$. Since $k_j$ is decreasing in $j$, it follows from (\ref{skill}) and then (\ref{child_baby}) that \begin{eqnarray} k_j \le k_{l_n}&=& E(l_n)(1+o_p(1)) \nonumber\\ & \leq & E\big(\frac{1}{6}\sqrt{n}\log n\big)(1+ o_p(1)) \nonumber\\ & = & c^{-1} n^{(1/2)-(c/6)} (1+ o_p(1)) \label{donkey_elephant} \end{eqnarray} for all $\frac{1}{6}\sqrt{n}\log n \leq j \leq \frac{1}{c} \sqrt{n}\log n$ as $n\to\infty$. This implies \begin{eqnarray*} n^{-3/2} \sum_{ \frac{1}{6}\sqrt{n}\log n \leq j \leq \frac{1}{c} \sqrt{n}\log n }k_j^2 & \le & C\cdot n^{-3/2} \sqrt{n}(\log n) \big(n^{1/2-c/6}\big)^2 (1+o_p(1)) \\ & \sim & Cn^{-c/3} (\log n) (1+ o_p(1))= o_p(1) \end{eqnarray*} as $n\to\infty$, where $C$ is a constant. This together with (\ref{money_monkey}) yields the first conclusion of the lemma.
Similarly, by (\ref{green_carpet}) and (\ref{red_oak}), for any $\epsilon>0$, \begin{eqnarray*} & & P\Big(\big|\frac{a}{2}-n^{-3/2}\sum_{j=1}^m j k_j\big| \geq \epsilon\Big)\\ &\leq & P\Big(\big| \frac{a}{2} -n^{-3/2}\sum_{1\leq j \leq \frac{1}{6}\sqrt{n}\log n} j k_j \big|\geq \epsilon/2 \Big)\\ & & ~~~ + P \Big(n^{-3/2} \sum_{ \frac{1}{6}\sqrt{n}\log n \leq j \leq \frac{1}{c} \sqrt{n}\log n } j k_j \geq \epsilon/2 \Big)+ P\Big(m > \frac{1}{c} \sqrt{n}\log n \Big) \to 0 \end{eqnarray*} as $n\to\infty$ considering \begin{eqnarray*} n^{-3/2}\sum_{ \frac{1}{6}\sqrt{n}\log n \leq j \leq \frac{1}{c}\sqrt{n}\log n }jk_j&\leq & C\cdot n^{-3/2}\cdot n^{(1/2)-(c/6)}(\sqrt{n}\log n)^2(1+o_p(1))\\ & = & Cn^{-c/6}(\log n)^2(1+o_p(1)) \to 0 \end{eqnarray*} in probability as $n\to\infty$ by (\ref{donkey_elephant}) again. We then get the second conclusion of the lemma. \end{proof} Finally, we are ready to prove Theorem \ref{vase_flower}. \begin{proof}[Proof of Theorem \ref{vase_flower}] Let $a$ be as in (\ref{learn}). Set \begin{eqnarray*} & & U_n=\frac{\pi}{ \sqrt{6n} } m - \log \frac{ \sqrt{6n} }{\pi};\\ & & V_n=a-n^{-3/2}\sum_{j=1}^m k_j^2;\ \ \ W_n=\frac{a}{2}-n^{-3/2}\sum_{j=1}^m jk_j. \end{eqnarray*} By (\ref{seabed}) and Lemma \ref{happy_uniform}, $U_n$ converges weakly to the distribution with cdf $F_{\mu}(v) = e^{-e^{-v}}$ as $n\to \infty$, and both $V_n$ and $W_n$ converge to $0$ in probability. Solving for $m$, $\sum_{j=1}^m k_j^2$ and $\sum_{j=1}^m jk_j$ in terms of $U_n$, $V_n$ and $W_n$, respectively, and substituting them into the expression for $\lambda_{\kappa}$ from Lemma \ref{theater}, we get \begin{eqnarray*} \lambda_{\kappa} & = & -\frac{\alpha}{2}n +nm+\sum_{j=1}^m (\frac{\alpha}{2}k_j-j)k_j\\ & = & -\frac{\alpha}{2}n +n\big(U_n+ \log \frac{ \sqrt{6n} }{\pi}\big)\cdot \frac{\sqrt{6n}}{\pi} + \frac{\alpha}{2}(a-V_n)n^{3/2}- (\frac{a}{2} - W_n)n^{3/2}.
\end{eqnarray*} Therefore, \begin{eqnarray}\label{fruit_sky} c\frac{\lambda_{\kappa}}{n^{3/2}}-\log \frac{\sqrt{n}}{c}=U_n+ \big(\frac{\alpha-1}{2}\big)ac -\frac{c\alpha}{2}V_n + cW_n + o(1) \end{eqnarray} as $n\to\infty$. We finally evaluate $a$ in (\ref{learn}). Indeed, by (\ref{little_brother}), the Taylor expansion and integration by parts, \begin{eqnarray*} (ac)\cdot c^2 &= & \int_0^1\frac{\log^2 (1-t)}{t}\,dt\\ & = & 2 \int_{0}^1 \frac{1}{t} \log t \log (1-t)\,dt\\ & = & -2\int_0^1 \frac{1}{t} \log t \sum_{n=1}^\infty \frac{t^n}{n}\,dt = -2\sum_{n=1}^{\infty} \frac{1}{n} \int_0^1 t^{n-1} \log t\,dt\\ & = & 2\sum_{n=1}^\infty \frac{1}{n^3} = 2\zeta(3). \end{eqnarray*} This and (\ref{fruit_sky}) prove the theorem by the Slutsky lemma. \end{proof} \subsection{Proof of Theorem \ref{difficult_easy}}\label{Proof_difficult_easy} \begin{proof}[Proof of Theorem \ref{difficult_easy}] Frobenius (1900) shows that $$\frac{a(\kappa') - a(\kappa)}{\binom{n}{2}} = \frac{\chi^{\kappa}_{(2,1^{n-2})}}{\dim(\kappa)},$$ where $\chi^{\kappa}_{(2,1^{n-2})}$ is the value of $\chi^{\kappa}$, the irreducible character of $\mathcal{S}_n$ associated to $\kappa$, on the conjugacy class indexed by $(2,1^{n-2}) \vdash n$. By Theorem 6.1 from Ivanov and Olshanski (2001) for the special case \begin{eqnarray*} p_2^{\#^{(n)}}(\kappa):= n(n-1) \frac{\chi^{\kappa}_{(2,1^{n-2})}}{\dim(\kappa)} \end{eqnarray*} or Theorem 1.2 from Fulman (2004), we have \begin{eqnarray*} \frac{a(\kappa')-a(\kappa)}{n} \to N\big(0, \frac{1}{2}\big) \end{eqnarray*} weakly as $n\to\infty$. It is known from Baik {\it et al}. (1999), Borodin {\it et al}. (2000), Johansson (2001) and Okounkov (2000) that \begin{eqnarray}\label{green:white} \frac{k_1-2\sqrt{n}}{n^{1/6}} \to F_2\ \ \mbox{and}\ \ \frac{m-2\sqrt{n}}{n^{1/6}} \to F_2 \end{eqnarray} weakly as $n\to\infty$, where $F_2$ is as in (\ref{tai}).
Therefore, by using (\ref{Jingle}) for the case $\alpha =1$, \begin{equation*} \begin{split} \frac{\lambda_{\kappa} - 2n^{3/2}}{n^{7/6}} &= \frac{n(m-1) + a(\kappa')-a(\kappa) - 2n^{3/2}}{n^{7/6}}\\ &=\frac{m-2\sqrt{n}}{n^{1/6}} - n^{-1/6} + \frac{a(\kappa')-a(\kappa)}{n^{7/6}} \end{split} \end{equation*} converges weakly to $F_2$ as $n\to\infty$, where $F_2$ is as in (\ref{tai}). \end{proof} \subsection{Proof of Theorem \ref{thm:LLLplan}}\label{non_stop_thousand} The proof of Theorem \ref{thm:LLLplan} is involved. The reason is that, when $\alpha=1$, the term $a(\kappa')-a(\kappa)$ is negligible, as shown in the proof of Theorem \ref{difficult_easy}. When $\alpha \ne 1$, reviewing (\ref{Jingle}), it will be seen next that the term $\alpha a(\kappa')-a(\kappa)$, under the Plancherel measure, is much larger and contributes essentially to $\lambda_{\kappa}$. \begin{figure}[!ht] \begin{center} \includegraphics[width=7cm]{plancherelcurve_n100_} \caption{The ``zig-zag'' curve is the graph of $y=g_{\kappa}(x)$ and the smooth one is $y=\Omega(x)$. Facts: $A=(-\frac{m}{\sqrt{n}}, \frac{m}{\sqrt{n}})$, $D=(\frac{k_1}{\sqrt{n}}, \frac{k_1}{\sqrt{n}})$, and $g_{\kappa}(x)=\Omega(x)$ if $x\geq \max\{\frac{k_1}{\sqrt{n}}, 2\}$ or $x\leq -\max\{\frac{m}{\sqrt{n}}, 2\}$.} \label{fig:kerov} \end{center} \end{figure} We first recall some notation. Let $\kappa=(k_1, k_2, \cdots, k_m)$ with $k_m\geq 1$ be a partition of $n$. Set coordinates $u$ and $v$ by \begin{eqnarray}\label{rotate_135} u=\frac{j-i}{\sqrt{n}}\ \ \mbox{and}\ \ v=\frac{i+j}{\sqrt{n}}. \end{eqnarray} This is the same as flipping and then rotating the diagram of $\kappa$ counterclockwise by $135^{\circ}$ and scaling it by a factor of $\sqrt{2/n}$ so that the area of the new diagram is equal to $2$. Denote by $g_{\kappa}(x)$ the boundary curve of the new Young diagram. Such a graph is shown in Figure \ref{fig:kerov}.
It follows that $g_{\kappa}(x)$ is a Lipschitz function on $\mathbb{R}.$ For a piecewise smooth and compactly supported function $h(x)$ defined on $\mathbb{R}$, its Sobolev norm is given by \begin{eqnarray}\label{Sobolev} \|h\|_{\theta}^2=\iint_{\mathbb{R}^2}\Big(\frac{h(s)-h(t)}{s-t}\Big)^2\,dsdt. \end{eqnarray} Let $\kappa=(k_1, k_2, \cdots, k_m)$ with $k_m\geq 1$ be a partition of $n$. For $x\geq 0$, the notation $\lceil x\rceil$ stands for the least positive integer greater than or equal to $x$. Define $k(x)=k_{\lceil x\rceil}$ for $x\geq 0$ and \begin{eqnarray}\label{Logan} f_{\kappa}(x)=\frac{1}{\sqrt{n}}k(\sqrt{n}x), \ \ \ \ \ x\geq 0. \end{eqnarray} Recall from \eqref{jilin} that $\Omega(x) = \frac{2}{\pi}(x\arcsin\frac{x}{2} + \sqrt{4-x^2})$ for $|x|\leq 2$ and $\Omega(x)=|x|$ otherwise. The following is a large deviation bound on a rare event under the Plancherel measure. \begin{lemma}\label{season_mate} Define $L_{\kappa}(x)=\frac{1}{2}g_{\kappa}(2x)$ and $\bar{\Omega}(x)=\frac{1}{2}\Omega(2x)$ for $x\in\mathbb{R}$. Then, $$P(\mathcal{F}) \leq \exp\big\{C\sqrt{n}-n \inf_{\kappa\in \mathcal{F}}I(\kappa)\big\}$$ for any $n\geq 2$ and any subset $\mathcal{F}$ of the partitions of $n,$ where $C>0$ is an absolute constant and \begin{eqnarray}\label{rate_function_5} I(\kappa)=\|L_{\kappa}-\bar{\Omega}\|_{\theta}^2\, -\, 4\int_{|s|>1}(L_{\kappa}(s)-\bar{\Omega}(s))\cosh^{-1} |s|\,ds. \end{eqnarray} \end{lemma} \begin{proof}[Proof of Lemma \ref{season_mate}] For any non-increasing function $F(x)$ defined on $(0, \infty)$ such that $\int_0^{\infty}F(x)\,dx=1$, define \begin{eqnarray*} \theta_F=1+2\int_0^{\infty}\int_0^{F(x)}\log \big(F(x)+F^{-1}(y)-x-y\big)\,dy\,dx \end{eqnarray*} where $F^{-1}(y)=\inf\{x>0;\, F(x)\leq y\}$. According to (1.8) from Logan and Shepp (1977), $P(\kappa)\leq C \sqrt{n}\cdot\exp\big\{-n \theta_{f_{\kappa}}\big\}$ for all $n\geq 2$, where $C$ is a numerical constant and $f_{\kappa}$ is defined as in (\ref{Logan}).
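The union bound applied next combines this with the total number of partitions of $n$. As an illustrative aside (not part of the proof), $p(n)$ can be computed exactly by Euler's pentagonal-number recurrence and compared with the Hardy--Ramanujan asymptotic $\frac{1}{4\sqrt{3}\,n}\exp\{2\pi\sqrt{n/6}\}$; a sketch in Python:

```python
import math

def partition_counts(N):
    # Euler's pentagonal-number recurrence:
    # p(n) = sum_{k>=1} (-1)^(k-1) [ p(n - k(3k-1)/2) + p(n - k(3k+1)/2) ].
    p = [1] + [0] * N
    for n in range(1, N + 1):
        s, k, sign = 0, 1, 1
        while k * (3 * k - 1) // 2 <= n:
            s += sign * p[n - k * (3 * k - 1) // 2]
            if k * (3 * k + 1) // 2 <= n:
                s += sign * p[n - k * (3 * k + 1) // 2]
            k += 1
            sign = -sign
        p[n] = s
    return p

p = partition_counts(1000)
asym = math.exp(2 * math.pi * math.sqrt(1000 / 6)) / (4 * math.sqrt(3) * 1000)
print(p[10], round(asym / p[1000], 3))  # p(10) = 42; the ratio tends to 1 slowly
```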
By the Euler-Hardy-Ramanujan formula, $p(n)$, the total number of partitions of $n$, satisfies that \begin{eqnarray}\label{Raman} p(n)\sim \frac{1}{4\sqrt{3}\,n}\cdot \exp\Big\{\frac{2\pi}{\sqrt{6}}\sqrt{n}\Big\} \end{eqnarray} as $n\to\infty$. Thus, for any subset $\mathcal{F}$ of the partitions of $n,$ we have \begin{eqnarray*} P(\mathcal{F}) &\leq & C p(n)\cdot\sqrt{n}\exp\Big\{-n \inf_{\kappa\in \mathcal{F}}\theta_{f_{\kappa}}\Big\} \nonumber\\ & \leq & C' \exp\Big\{C'\sqrt{n}-n \inf_{\kappa\in \mathcal{F}}\theta_{f_{\kappa}}\Big\} \end{eqnarray*} where $C'$ is another numerical constant independent of $n$. For any curve $y=\Lambda(x)$, make the following transform \begin{eqnarray*} X=\frac{x-y}{2}\ \ \mbox{and}\ \ Y=\frac{x+y}{2}. \end{eqnarray*} We name the new curve by $y=L_{\Lambda}(x)$. Taking $\Lambda=f_{\kappa}$, by (\ref{rotate_135}) and the definition $L_{\kappa}(x)=\frac{1}{2}g_{\kappa}(2x)$, we have $L_{f_{\kappa}}(x)=L_{\kappa}(-x)$ for all $x\in \mathbb{R}.$ By Lemmas 2, 3 and 4 from Kerov (2003), \begin{eqnarray*} \theta_{f_{\kappa}} &=& \|L_{f_{\kappa}}-\bar{\Omega}\|_{\theta}^2 + 4\int_{|s|>1}(L_{f_{\kappa}}(s)-\bar{\Omega}(s))\cosh^{-1} |s|\,ds\\ & = & \|L_{\kappa}-\bar{\Omega}\|_{\theta}^2 -4\int_{|s|>1}(L_{\kappa}(s)-\bar{\Omega}(s))\cosh^{-1} |s|\,ds \end{eqnarray*} considering $\Omega(x)$ is an even function. We then get the desired result. \end{proof} The next lemma says that the second term on the right hand side of (\ref{rate_function_5}) is small. \begin{lemma}\label{rate_function} Let $L_{\kappa}(x)$ and $\bar{\Omega}(x)$ be as in Lemma \ref{season_mate}. Let $\{t_n>0;\, n\geq 1\}$ satisfy $t_n\to\infty$ and $t_n=o(n^{1/3})$ as $n\to\infty.$ Set $H_n=\{\kappa=(k_1, \cdots, k_m)\vdash n;\, k_m\geq 1,\, 2\sqrt{n}-t_nn^{1/6} \leq m,\, k_1 \leq 2\sqrt{n}+t_nn^{1/6}\}$. Then, as $n\to\infty$, $P(H_n)\to 1$ and \begin{eqnarray}\label{five_words} \int_{|s|>1}(L_{\kappa}(s)-\bar{\Omega}(s))\cosh^{-1} |s|\,ds\cdot\, I_{H_n}=O(n^{-2/3}t_n^2). 
\end{eqnarray} \end{lemma} \begin{proof}[Proof of Lemma \ref{rate_function}] Since $m$ and $k_1$ have the same probability distribution under the Plancherel measure, by (\ref{green:white}), $\lim_{n\to\infty}P(H_n)=1.$ Review the definitions of $L_{\kappa}$ and $\bar{\Omega}$ in Lemma \ref{season_mate}. Trivially, \begin{eqnarray*} \mbox{LHS of } (\ref{five_words})=\frac{1}{4}\int_{|x|>2}(g_{\kappa}(x)-\Omega(x))\cosh^{-1} \frac{|x|}{2}\,dx\cdot\, I_{H_n}. \end{eqnarray*} By definition, $g_{\kappa}(x)=\Omega(x)$ if $x\geq \frac{k_1}{\sqrt{n}}\vee 2$ or $x\leq -\big(\frac{m}{\sqrt{n}}\vee 2\big).$ It follows that \begin{eqnarray} & & \mbox{LHS of } (\ref{five_words}) \nonumber\\ & \leq & C_n\cdot \Big[\int_2^{2+n^{-1/3}t_n}\big|g_{\kappa}(x)-\Omega(x)\big|\,dx+ \int^{-2}_{-2-n^{-1/3}t_n}\big|g_{\kappa}(x)-\Omega(x)\big|\,dx\Big] \label{strive} \end{eqnarray} where \begin{eqnarray*} C_n &= & \sup\Big\{\cosh^{-1}\frac{|x|}{2};\, -(\frac{m}{\sqrt{n}}\vee 2) \leq x \leq \frac{k_1}{\sqrt{n}} \vee 2\Big\}\cdot\, I_{H_n}\\ & \leq & \sup\Big\{\cosh^{-1}\frac{|x|}{2};\, -3\leq x \leq 3\Big\}<\infty \end{eqnarray*} for $n$ sufficiently large. Now \begin{eqnarray} & & \int_2^{2+n^{-1/3}t_n}\big|g_{\kappa}(x)-\Omega(x)\big|\,dx\cdot\, I_{H_n} \nonumber\\ & \leq & n^{-1/3}t_n\cdot \max\Big\{\big|g_{\kappa}(x)-\Omega(x)\big|;\, 2\leq x \leq 2+n^{-1/3}t_n\Big\}\cdot\, I_{H_n}.\label{Lie} \end{eqnarray} By the triangle inequality, the Lipschitz property of $g_{\kappa}(x)$ and the fact $\Omega(x)=|x|$ for $|x|\geq 2$, we see \begin{eqnarray*} \big|g_{\kappa}(x)-\Omega(x)\big| &\leq & \big|g_{\kappa}(x)-g_{\kappa}(2+2n^{-1/3}t_n)\big| + \big|g_{\kappa}(2+2n^{-1/3}t_n)-\Omega(x)\big|\\ & \leq & \big|x-(2+2n^{-1/3}t_n)\big|+ \big|2+2n^{-1/3}t_n-x\big|\\ & \leq & 2\big[(2+2n^{-1/3}t_n)-x\big] \leq 4n^{-1/3}t_n \end{eqnarray*} for $2\leq x \leq 2+n^{-1/3}t_n$ and $\kappa \in H_n$, since $g_{\kappa}(2+2n^{-1/3}t_n) = 2+2n^{-1/3}t_n$ for such $\kappa$.
This and (\ref{Lie}) imply that the first integral in (\ref{strive}) is of order $O(n^{-2/3}t_n^2)$. By the same argument, the second integral in (\ref{strive}) admits the same upper bound. This yields the conclusion. \end{proof} To prove Lemma \ref{wisdom}, we need to examine $g_{\kappa}(x)$ more closely. For $(k_1, k_2, \ldots, k_m) \vdash n$, assume \begin{eqnarray}\label{give_word} & & k_1=\cdots =k_{l_1}> k_{l_1+1}=\cdots=k_{l_2}>\cdots > k_{l_{p-1}+1}=\cdots=k_m \geq 1 \ \ \mbox{with}\nonumber\\ & & 0=l_0<l_1<\cdots < l_p=m \end{eqnarray} for some $p\geq 1.$ To ease notation, let $\bar{k}_i=k_{l_i}$ for $i=1,2,\cdots, p$ and $\bar{k}_{p+1}=0$. So the partition $\kappa$ is determined by the $\{\bar{k}_i, l_i\}$'s. It is easy to see that the corners on the curve $y=g_{\kappa}(x)$ (see, e.g., points $A, B, C, D$ in Figure \ref{fig:kerov}), listed from leftmost to rightmost, are \begin{eqnarray*} \big(-\frac{l_p}{\sqrt{n}}, \frac{l_p}{\sqrt{n}}\big), \cdots,\big(\frac{\bar{k}_i-l_i}{\sqrt{n}},\frac{\bar{k}_i+l_i}{\sqrt{n}}\big), \big(\frac{\bar{k}_{i+1}-l_i}{\sqrt{n}}, \frac{\bar{k}_{i+1}+l_i}{\sqrt{n}}\,\big), \cdots, \big(\frac{\bar{k}_1}{\sqrt{n}}, \frac{\bar{k}_1}{\sqrt{n}}\,\big) \end{eqnarray*} for $i=1, 2, \cdots, p$. As a consequence, \begin{eqnarray}\label{Russian} g_{\kappa}(x)=\begin{cases} \frac{2\bar{k}_{i}}{\sqrt{n}} -x, \ \ \text{if $\frac{\bar{k}_{i}-l_i}{\sqrt{n}} \leq x \leq \frac{\bar{k}_{i}-l_{i-1}}{\sqrt{n}}$;}\\ \frac{2l_{i}}{\sqrt{n}} +x, \ \ \text{if $\frac{\bar{k}_{i+1}-l_{i}}{\sqrt{n}} \leq x \leq \frac{\bar{k}_{i}-l_{i}}{\sqrt{n}}$} \\ \end{cases} \end{eqnarray} for all $1\leq i \leq p$, and $g_{\kappa}(x)=|x|$ for other $x\in \mathbb{R}$.
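As a sanity check of (\ref{Russian}) (an illustrative sketch, not used in the proofs), the function it defines should be $1$-Lipschitz and should enclose area $2$ above $y=|x|$, matching the normalization in (\ref{rotate_135}). In Python, with an arbitrarily chosen partition:

```python
import math, random

def g_profile(kappa):
    # Build g_kappa from the displayed formula for a partition
    # kappa = (k_1 >= ... >= k_m >= 1) of n.
    n = sum(kappa)
    rn = math.sqrt(n)
    kbar, l = [], []          # distinct parts kbar_1 > ... > kbar_p and l_1 < ... < l_p = m
    for part in kappa:
        if kbar and kbar[-1] == part:
            l[-1] += 1
        else:
            kbar.append(part)
            l.append((l[-1] if l else 0) + 1)
    kbar.append(0)            # kbar_{p+1} = 0
    l = [0] + l               # prepend l_0 = 0
    p = len(kbar) - 1
    def g(x):
        for i in range(1, p + 1):
            if (kbar[i - 1] - l[i]) / rn <= x <= (kbar[i - 1] - l[i - 1]) / rn:
                return 2 * kbar[i - 1] / rn - x   # segment of slope -1
            if (kbar[i] - l[i]) / rn <= x <= (kbar[i - 1] - l[i]) / rn:
                return 2 * l[i] / rn + x          # segment of slope +1
        return abs(x)
    return g

kappa = (5, 3, 3, 2, 1, 1)    # an arbitrary partition of n = 15
n, m = sum(kappa), len(kappa)
g = g_profile(kappa)
a, b = -m / math.sqrt(n), kappa[0] / math.sqrt(n)

# Area between y = g(x) and y = |x| (midpoint rule); it should equal 2.
N = 100_000
h = (b - a) / N
area = h * sum(g(a + (i + 0.5) * h) - abs(a + (i + 0.5) * h) for i in range(N))

# g is 1-Lipschitz.
rng = random.Random(0)
lip = all(abs(g(x) - g(y)) <= abs(x - y) + 1e-9
          for x, y in ((rng.uniform(a - 1, b + 1), rng.uniform(a - 1, b + 1))
                       for _ in range(1000)))
print(round(area, 3), lip)  # 2.0 True
```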
In particular, taking $i=1$ and $p$, respectively, we get \begin{eqnarray*} g_{\kappa}(x)=\begin{cases} \frac{2k_1}{\sqrt{n}} - x, \ \ \text{if $\frac{k_1-l_1}{\sqrt{n}} \leq x \leq \frac{k_1}{\sqrt{n}}$};\\ \frac{2m}{\sqrt{n}} + x, \ \ \text{if $-\frac{m}{\sqrt{n}} \leq x \leq \frac{k_m-m}{\sqrt{n}}$} \end{cases} \end{eqnarray*} for $l_0=0$, $l_p=m$, $\bar{k}_1=k_1$, and $\bar{k}_{p}=k_m$. We need to estimate $\sum_{i=1}^{m}ik_i$ in the proof of Theorem \ref{thm:LLLplan}. The following lemma links it to $g_{\kappa}(x)$. We will then be able to evaluate the sum through Kerov's central limit theorem (Ivanov and Olshanski, 2001). \begin{lemma}\label{wisdom} Let $\kappa=(k_1, k_2, \cdots, k_m) \vdash n$ with $k_m\geq 1$ and $g_{\kappa}(x)$ be as in (\ref{Russian}). Then \begin{eqnarray*} \sum_{i=1}^{m}ik_i &= & \frac{1}{8}n^{3/2}\int_{-m/\sqrt{n}}^{k_1/\sqrt{n}}(g_{\kappa}(x)-x)^2\,dx -\frac{1}{6}m^3 + \frac{1}{2}n. \end{eqnarray*} \end{lemma} \begin{proof}[Proof of Lemma \ref{wisdom}] Easily, \begin{eqnarray} & & \int_{-m/\sqrt{n}}^{k_1/\sqrt{n}}(g_{\kappa}(x)-x)^2\,dx \nonumber\\ & = & \sum_{i=1}^p\int_{(\bar{k}_i-l_{i})/\sqrt{n}}^{(\bar{k}_i-l_{i-1})/\sqrt{n}}(g_{\kappa}(x)-x)^2\,dx + \sum_{i=1}^{p}\int_{\frac{\bar{k}_{i+1}-l_{i}}{\sqrt{n}}}^{\frac{\bar{k}_{i}-l_{i}}{\sqrt{n}}}(g_{\kappa}(x)-x)^2\,dx. \label{long_sum} \end{eqnarray} By (\ref{Russian}), the slopes of $g_{\kappa}(x)$ in the first sum of (\ref{long_sum}) are equal to $-1$. Hence, it is equal to \begin{eqnarray*} 4\sum_{i=1}^p\int_{(\bar{k}_i-l_{i})/\sqrt{n}}^{(\bar{k}_i-l_{i-1})/\sqrt{n}}\big(\frac{\bar{k}_i}{\sqrt{n}}-x\big)^2\,dx & = & 4\sum_{i=1}^p\int_{l_{i-1}/\sqrt{n}}^{l_{i}/\sqrt{n}}t^2\,dt\\ & = & 4\int_{l_{0}/\sqrt{n}}^{l_{p}/\sqrt{n}}t^2\,dt=\frac{4m^3}{3n^{3/2}} \end{eqnarray*} because $l_0=0$ and $l_p=m.$ In the second sum in (\ref{long_sum}), $g_{\kappa}(x)$ has slopes equal to $1$. 
As a consequence, it is identical to \begin{eqnarray*} \sum_{i=1}^{p}\int_{\frac{\bar{k}_{i+1}-l_{i}}{\sqrt{n}}}^{\frac{\bar{k}_{i}-l_{i}}{\sqrt{n}}}\frac{4l_i^2}{n}\,dx =\frac{4}{n^{3/2}}\sum_{i=1}^{p}(\bar{k}_i-\bar{k}_{i+1})l_i^2. \end{eqnarray*} In summary, \begin{eqnarray}\label{father_home} \int_{-m/\sqrt{n}}^{k_1/\sqrt{n}}(g_{\kappa}(x)-x)^2\,dx =\frac{4m^3}{3n^{3/2}} + \frac{4}{n^{3/2}}\sum_{i=1}^{p}(\bar{k}_i-\bar{k}_{i+1})l_i^2. \end{eqnarray} Now, let us evaluate the sum. Set $k_{j}=0$ for $j>m$ for convenience and $\Delta_i=k_i-k_{i+1}$ for $i=1,2,\cdots.$ Then $\Delta_i=0$ unless $i=l_1, \cdots, l_p$. Observe \begin{eqnarray*} \sum_{i=1}^{\infty}ik_i=\sum_{i=1}^{\infty}i\sum_{j=i}^{\infty}\Delta_j&=&\sum_{j=1}^{\infty}\Delta_j\sum_{i=1}^{j}i\\ & = & \frac{1}{2}\sum_{j=1}^{\infty}j^2\Delta_j+ \frac{1}{2}\sum_{j=1}^{\infty}j\Delta_j. \end{eqnarray*} Furthermore, \begin{eqnarray*} \sum_{j=1}^{\infty}j\Delta_j= \sum_{j=1}^{\infty}\sum_{i=1}^j\Delta_j=\sum_{i=1}^{\infty}\sum_{j=i}^{\infty}\Delta_j=\sum_{i=1}^{\infty}k_i=n. \end{eqnarray*} The above two assertions say that $\sum_{j=1}^{\infty}j^2\Delta_j=-n+2\sum_{i=1}^{\infty}ik_i$. Now, \begin{eqnarray*} \sum_{j=1}^{\infty}j^2\Delta_j=\sum_{i=1}^{p}l_i^2(k_{l_i}-k_{l_i+1})=\sum_{i=1}^{p}l_i^2(\bar{k}_i-\bar{k}_{i+1}) \end{eqnarray*} by the fact $k_{l_i+1}=k_{l_{i+1}}=\bar{k}_{i+1}$ from (\ref{give_word}). This together with (\ref{father_home}) shows \begin{eqnarray*} \int_{-m/\sqrt{n}}^{k_1/\sqrt{n}}(g_{\kappa}(x)-x)^2\,dx=\frac{4m^3}{3n^{3/2}}+ \frac{4}{n^{3/2}}\big(-n+2\sum_{i=1}^{\infty}ik_i \big). \end{eqnarray*} Solve this equation to get \begin{eqnarray*} \sum_{i=1}^{\infty}ik_i=\frac{1}{8}n^{3/2}\int_{-m/\sqrt{n}}^{k_1/\sqrt{n}}(g_{\kappa}(x)-x)^2\,dx -\frac{1}{6}m^3 + \frac{1}{2}n. \end{eqnarray*} The proof is complete. \end{proof} Under the Plancherel measure, both $m/\sqrt{n}$ and $k_1/\sqrt{n}$ go to $2$ in probability. 
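The summation identities used in the proof above, namely $\sum_{j\geq 1}j\Delta_j=n$ and $\sum_{j\geq 1}j^2\Delta_j=2\sum_{i\geq 1}ik_i-n$ with $\Delta_j=k_j-k_{j+1}$, are purely combinatorial and can be confirmed exactly by brute force over all partitions of small $n$ (an illustrative check only):

```python
def partitions(n, max_part=None):
    # All partitions of n as non-increasing tuples.
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

ok = True
for n in range(1, 13):
    for kappa in partitions(n):
        m = len(kappa)
        k = list(kappa) + [0]                              # k_{m+1} = 0
        delta = [k[j] - k[j + 1] for j in range(m)]        # Delta_j = k_j - k_{j+1}
        s1 = sum((j + 1) * delta[j] for j in range(m))     # sum_j j * Delta_j
        s2 = sum((j + 1) ** 2 * delta[j] for j in range(m))
        ok = ok and s1 == n and s2 == 2 * sum((i + 1) * kappa[i] for i in range(m)) - n
print(ok)  # True
```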
In light of this fact, the next lemma writes the integral in Lemma \ref{wisdom} in a slightly cleaner form. The main tools of the proof are the Tracy-Widom law of the largest part of a random partition, a large deviation bound and Kerov's central limit theorem. \begin{lemma}\label{pen_name} Let $g_{\kappa}(x)$ be as in (\ref{Russian}) and set \begin{eqnarray*} Z_n=\int_{-m/\sqrt{n}}^{k_1/\sqrt{n}}(g_{\kappa}(x)-x)^2\,dx-\int_{-2}^{2}(\Omega(x)-x)^2\,dx \end{eqnarray*} where $\Omega(x)$ is as in (\ref{jilin}). Then, for any $\{a_n>0;\, n\geq 1\}$ with $\lim_{n\to\infty}a_n=\infty$, we have $$\frac{n^{1/4}}{a_n} Z_n \to 0$$ in probability as $n\to\infty$. \end{lemma} \begin{proof}[Proof of Lemma \ref{pen_name}] Without loss of generality, we assume \begin{eqnarray}\label{song_spring} a_n=o(n^{1/4}) \end{eqnarray} as $n\to\infty$. Set \begin{eqnarray*} Z_n'=\int_{-2}^{2}(g_{\kappa}(x)-x)^2\,dx-\int_{-2}^{2}(\Omega(x)-x)^2\,dx. \end{eqnarray*} Write \begin{eqnarray}\label{news_paper} \frac{n^{1/4}}{a_n}Z_n= \frac{n^{1/4}}{a_n}Z_n'+ \frac{1}{n^{1/12} a_n}R_{n,1}+\frac{1}{n^{1/12} a_n}R_{n,2} \end{eqnarray} where \begin{eqnarray*} & & R_{n,1}=n^{1/3}\int_{-m/\sqrt{n}}^{-2}(g_{\kappa}(x)-x)^2 \,dx;\\ & & R_{n,2}=n^{1/3}\int_{2}^{k_1/\sqrt{n}}(g_{\kappa}(x)-x)^2 \,dx. \end{eqnarray*} We will show that the three terms on the right hand side of (\ref{news_paper}) go to zero in probability. {\it Step 1}. We will prove the stronger result that both $R_{n,1}$ and $R_{n,2}$ are of order $O_p(1)$ as $n\to\infty.$ By Theorem 5.5 from Ivanov and Olshanski (2001), \begin{eqnarray}\label{Minnesota_lib_tc} \delta_n:= \sup_{x\in \mathbb{R}}|g_{\kappa}(x)-\Omega(x)|\to 0 \end{eqnarray} in probability as $n\to\infty$, where $\Omega(x)$ is defined in (\ref{jilin}).
Observe that \begin{eqnarray*} \frac{1}{2}|g_{\kappa}(x)-x|^2 \leq \delta_n^2+(\Omega(x)-x)^2 \end{eqnarray*} for each $x\in \mathbb{R}.$ Denote $C=\sup_{-3\leq x\leq 0}(\Omega(x)-x)^2$ and $C_n=\sup(\Omega(x)-x)^2$ with the supremum taken over $x$ between $-\frac{m}{\sqrt{n}}$ and $-2$. Then $P(C_n>2C)\leq P(\frac{m}{\sqrt{n}}>3)\to 0$ by (\ref{green:white}). Therefore, $C_n=O_p(1)$. It follows that \begin{eqnarray} |R_{n,1}| \leq 2n^{1/3} \big|\frac{m}{\sqrt{n}}-2\big|\cdot(\delta_n^2 +C_n)=O_p(1)\label{catalog} \end{eqnarray} by (\ref{green:white}) again. Similarly, $R_{n,2}=O_p(1)$ as $n\to\infty$. In the rest of the proof, we only need to show $\frac{n^{1/4}}{a_n} Z_n'$ goes to zero in probability. This again takes several steps. {\it Step 2}. In this step we will reduce $Z_n'$ to a workable form. By the same argument as the one preceding (\ref{catalog}), we have \begin{eqnarray*} Z_n' &= & \int_{-2}^2(g_{\kappa}(x)-\Omega(x)) (g_{\kappa}(x)-\Omega(x)+2(\Omega(x)-x))\,dx\\ & = & \int_{-2}^2|g_{\kappa}(x)-\Omega(x)|^2\,dx + \int_{-2}^2f_1(x)(g_{\kappa}(x)-\Omega(x)) \,dx\\ & \le & \int_{-2}^2|g_{\kappa}(x)-\Omega(x)|^2\,dx + \sqrt{\int_{-2}^2 f_1(x)^2 \,dx} \cdot \sqrt{ \int_{-2}^2 |g_{\kappa}(x)-\Omega(x)|^2 \,dx} \end{eqnarray*} where $f_1(x):=2(\Omega(x)-x)$ for all $x \in \mathbb{R}$, and the last inequality above follows from the Cauchy-Schwarz inequality. To show $\frac{n^{1/4}}{a_n} Z_n'$ goes to zero in probability, since $f_1(x)$ is a bounded function on $\mathbb{R}$, it suffices to prove \begin{eqnarray}\label{mellon} Z_n'':&=& \frac{n^{1/2}}{a_n^2} \int_{-2}^2|g_{\kappa}(x)-\Omega(x)|^2\,dx \to 0 \end{eqnarray} in probability by (\ref{song_spring}).
Set \begin{eqnarray}\label{hapiness} H_n &= & \Big\{\kappa=(k_1, \cdots, k_m)\vdash n;\, 2\sqrt{n}-n^{1/6}\log n \leq m,\, k_1 \leq 2\sqrt{n}+n^{1/6}\log n\ \mbox{and}\ \nonumber\\ & & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \big|n^{1/3}\int_{-2}^2(g_{\kappa}(x)-\Omega(x))\,dx\big|\leq 1 \Big\}. \end{eqnarray} {\it Step 3}. We prove in this step that \begin{eqnarray}\label{garlic_sweet} \lim_{n\to\infty}P(H_n^c)=0. \end{eqnarray} Note that $g_{\kappa}(s)=\Omega(s)=|s|$ if $s\geq \max\{\frac{k_1}{\sqrt{n}}, 2\}$ or $s\leq -\max\{\frac{m}{\sqrt{n}}, 2\}$. Also, the areas enclosed by $t=|s|$ and $t=g_{\kappa}(s)$ and that by $t=|s|$ and $t=\Omega(s)$ are both equal to $2$; see Figure \ref{fig:kerov}. It is trivial to see that $\int_{a}^b(g_{\kappa}(s)-\Omega(s))\,ds=\int_{\mathbb{R}}(g_{\kappa}(s)-\Omega(s))\,ds=0$ for $a:= -\max\{\frac{m}{\sqrt{n}}, 2\}$ and $b:= \max\{\frac{k_1}{\sqrt{n}}, 2\}$. Define $h_{\kappa}(s)=g_{\kappa}(s)-\Omega(s).$ We see \begin{eqnarray*} -\int_{-2}^2h_{\kappa}(s)\,ds=\int_a^{-2}h_{\kappa}(s)\,ds + \int_{2}^{b}h_{\kappa}(s)\,ds. \end{eqnarray*} Thus, \begin{eqnarray*} \big|n^{1/3}\int_{-2}^2h_{\kappa}(s)\,ds\big| &\leq & \big|n^{1/3}\int_a^{-2}h_{\kappa}(s)\,ds\big| + \big|n^{1/3}\int_{2}^bh_{\kappa}(s)\,ds\big| \nonumber\\ & \leq & 2n^{1/3}\max_{s\in \mathbb{R}}|h_{\kappa}(s)| \cdot (|a+2|+|b-2|). \end{eqnarray*} From (\ref{Minnesota_lib_tc}), $\max_{s\in \mathbb{R}}|h_{\kappa}(s)|\to 0$ in probability. Further, $|a+2| \leq |\frac{m}{\sqrt{n}}-2|$ and $|b-2| \leq |\frac{k_1}{\sqrt{n}}-2|$. By (\ref{green:white}) again, we obtain $n^{1/3}\int_{-2}^2h_{\kappa}(s)\,ds \to 0$ in probability. This and the first conclusion of Lemma \ref{rate_function} imply that $\lim_{n\to\infty}P(H_n^c)=0$. {\it Step 4}. Review $H_n$ in (\ref{hapiness}) and the limit in (\ref{garlic_sweet}).
It is seen from Lemma \ref{season_mate} that there exists an absolute constant $C>0$ such that \begin{eqnarray*} P(Z_n''>\epsilon) & \leq & e^{C\sqrt{n}-n \cdot \inf I(\kappa)} + P(H_n^c) \\ & = & e^{C\sqrt{n}-n \cdot \inf I(\kappa)} + o(1) \end{eqnarray*} where $I(\kappa)$ is as in Lemma \ref{season_mate} and the infimum is taken over all $\kappa\in H_n \cap \{Z_n''> \epsilon\}$. We claim \begin{eqnarray}\label{tea_jilin_s} n^{1/2}\cdot \inf I(\kappa)\to\infty \end{eqnarray} as $n\to\infty$. If this is true, we then obtain (\ref{mellon}), and the proof is completed. Review \begin{eqnarray*} I(\kappa)=\|L_{\kappa}-\bar{\Omega}\|_{\theta}^2 - 4\int_{|s|>1}(L_{\kappa}(s)-\bar{\Omega}(s))\cosh^{-1} |s|\,ds. \end{eqnarray*} Lemma \ref{rate_function} says that the last term above is of order $O(n^{-2/3}(\log n)^2)$ as $\kappa\in H_n$ by taking $t_n=\log n$. To get (\ref{tea_jilin_s}), it suffices to show \begin{eqnarray}\label{waterfall} n^{1/2}\cdot \inf_{\kappa\in H_n;\, Z_n''\geq \epsilon}\|L_{\kappa}-\bar{\Omega}\|_{\theta}^2 \to\infty \end{eqnarray} as $n\to\infty$. By the definitions of $L_{\kappa}$ and $\bar{\Omega}$, we see from (\ref{Sobolev}) that \begin{eqnarray*} \|L_{\kappa}-\bar{\Omega}\|_{\theta}^2 & \geq & \frac{1}{4}\int_{-2}^2\int_{-2}^2\Big(\frac{h_{\kappa}(s)-h_{\kappa}(t)}{s-t}\Big)^2\,dsdt\\ & \geq & \frac{1}{4^3}\int_{-2}^2\int_{-2}^2(h_{\kappa}(s)-h_{\kappa}(t))^2\,dsdt\\ & = & \frac{1}{4} E(h_{\kappa}(U)-h_{\kappa}(V))^2 \end{eqnarray*} where $U$ and $V$ are independent random variables with the uniform distribution on $[-2, 2].$ By the Jensen inequality, the last integral is bounded below by $E(h_{\kappa}(U)-Eh_{\kappa}(V))^2=E[h_{\kappa}(U)^2]-[Eh_{\kappa}(V)]^2$. 
Consequently, \begin{eqnarray*} \|L_{\kappa}-\bar{\Omega}\|_{\theta}^2 & \geq & \frac{1}{16}\int_{-2}^2h_{\kappa}(u)^2\,du - \frac{1}{64}\Big(\int_{-2}^2h_{\kappa}(u)\,du\Big)^2 \\ & \geq & \frac{\epsilon}{16}n^{-1/2}\cdot a_n^2 -\frac{1}{64}n^{-2/3} \end{eqnarray*} for $\kappa \in H_n \cap \{Z_n''\geq \epsilon\}$. This implies (\ref{waterfall}). \end{proof} With the above preparation we proceed to prove Theorem \ref{thm:LLLplan}. \begin{proof}[Proof of Theorem \ref{thm:LLLplan}] By Lemma \ref{theater}, \begin{eqnarray*} \lambda_{\kappa}=\big(m-\frac{\alpha}{2}\big)n +\sum_{i=1}^m (\frac{\alpha}{2}k_i-i)k_i. \end{eqnarray*} Thus \begin{eqnarray*} &&\frac{\lambda_{\kappa} -2n^{3/2}- (\alpha-1) (\frac{128}{27} \pi^{-2} )n^{3/2} }{ n^{5/4} \cdot a_n }\\ &=&\frac{m-2\sqrt{n}}{n^{1/4} \cdot a_n} - \frac{\alpha}{2n^{1/4} \cdot a_n} +\frac{\sum_{i=1}^m(\frac{\alpha}{2}k_i-i)k_i-(\alpha-1) ( \frac{128}{27} \pi^{-2} )n^{3/2} }{n^{5/4} \cdot a_n}. \end{eqnarray*} We claim \begin{eqnarray}\label{cold_leave} \frac{\sum_{i=1}^m(\frac{\alpha}{2}k_i-i)k_i-(\alpha-1) (\frac{128}{27} \pi^{-2} )n^{3/2} }{n^{5/4} \cdot a_n} \to 0 \end{eqnarray} in probability as $n\to\infty.$ If this is true, by (\ref{green:white}), we finish the proof. Now let us show (\ref{cold_leave}). We first claim \begin{eqnarray}\label{we} \frac{1}{n}\sum_{i=1}^m\big(\frac{1}{2}k_i-i\big)k_i \to N\big(-\frac{1}{2}, \sigma^2\big) \end{eqnarray} for some $\sigma^2\in (0, \infty)$. To see why this is true, we get from (\ref{kernel_sea}) and Lemma \ref{theater} that \begin{eqnarray*} a(\kappa')-a(\kappa)=\frac{1}{2}n+\sum_{i=1}^m\big(\frac{1}{2}k_i-i\big)k_i. \end{eqnarray*} By Theorem 1.2 from Fulman (2004), there is $\sigma^2\in (0, \infty)$ such that \begin{eqnarray*} \frac{a(\kappa')-a(\kappa)}{n} \to N(0, \sigma^2) \end{eqnarray*} weakly as $n\to\infty$. Then (\ref{we}) follows. 
Second, from (\ref{green:white}), we know $\xi_n:=(m-2\sqrt{n})n^{-1/6}$ converges weakly to $F_2$ as $n\to\infty.$ Write \begin{eqnarray*} m^3 = (2\sqrt{n} + n^{1/6}\xi_n)^3=n^{1/2}\xi_n^3 + 6n^{5/6}\xi_n^2 + 12 n^{7/6}\xi_n + 8n^{3/2}. \end{eqnarray*} This implies that $\frac{m^3-8n^{3/2}}{n^{5/4}} \to 0$ in probability as $n \to \infty$. Let $Z_n$ be as in Lemma \ref{pen_name} and $\Omega(x)$ as in (\ref{jilin}). It is seen from Lemmas \ref{wisdom} and \ref{pen_name} that \begin{eqnarray*} \sum_{i=1}^m i k_i &=& \frac{1}{8}n^{3/2}\int_{-m/\sqrt{n}}^{k_1/\sqrt{n}}(g_{\kappa}(x)-x)^2\,dx -\frac{1}{6}m^3 + \frac{1}{2}n\\ &=& \frac{1}{8}n^{3/2} \left( Z_n + \int_{-2}^2 (\Omega(x)-x)^2 \,dx \right)-\frac{1}{6}m^3 + \frac{1}{2}n \end{eqnarray*} with $\frac{n^{1/4}}{8a_n} Z_n \to 0$ in probability as $n\to\infty.$ The last two assertions imply \begin{eqnarray} & & \frac{1}{{n^{5/4} \cdot a_n}}\Big[\sum_{i=1}^mik_i - \frac{1}{8} n^{\frac{3}{2}} \int_{-2}^2 (\Omega(x)-x)^2 \,dx + \frac{4}{3} n^{\frac{3}{2}}\Big] \label{field_sleep}\\ & = & \frac{n^{1/4}}{8a_n} Z_n - \frac{1}{6a_n}\cdot \frac{m^3-8n^{3/2}}{n^{5/4}} + \frac{1}{2a_n n^{1/4}} \to 0\nonumber \end{eqnarray} in probability as $n\to\infty$. It is trivial and yet a bit tedious to verify \begin{eqnarray}\label{integral} \int_{-2}^2 (\Omega(x)-x)^2 \,dx = \frac{32}{3} + \frac{1024}{27\pi^2}. \end{eqnarray} The calculation of \eqref{integral} is included in Appendix \ref{appendix:integral}. Put this into (\ref{field_sleep}) to see \begin{eqnarray}\label{you} \frac{\sum_{i=1}^mik_i - \frac{128}{27\pi^2} n^{3/2}}{n^{5/4} \cdot a_n} \to 0 \end{eqnarray} in probability as $n\to\infty$. Third, observe \begin{eqnarray*} \sum_{i=1}^m(\frac{\alpha}{2}k_i-i)k_i = \alpha\sum_{i=1}^m\big(\frac{1}{2}k_i-i\big)k_i+(\alpha-1)\sum_{i=1}^mik_i. 
\end{eqnarray*} Therefore, \begin{eqnarray*} & &\frac{\sum_{i=1}^m(\frac{\alpha}{2}k_i-i)k_i-(\alpha-1) (\frac{128}{27} \pi^{-2} )n^{3/2} }{n^{5/4} \cdot a_n}\\ &=& \alpha \frac{\sum_{i=1}^m\big(\frac{1}{2}k_i-i\big)k_i}{n^{5/4} \cdot a_n} + (\alpha-1) \frac{\sum_{i=1}^mik_i - ( \frac{128}{27} \pi^{-2} )n^{3/2}}{n^{5/4} \cdot a_n} \to 0 \end{eqnarray*} in probability by \eqref{we} and \eqref{you}. We finally arrive at (\ref{cold_leave}). \end{proof} \section{Appendix}\label{appendix:last} In this section we will prove \eqref{mean_variance}, verify \eqref{integral} and derive the density functions of the random variable appearing in Theorem \ref{cancel_temple} for two cases. These are placed in three subsections. \subsection{Proof of \eqref{mean_variance}}\label{appendix:mean_variance} Recall $(2s-1)!!=1\cdot 3\cdots (2s-1)$ for integer $s\geq 1$. Set $(-1)!!=1$. The following is Lemma 2.4 from Jiang (2009). \begin{lemma}\label{Jiang2009} Suppose $p\geq 2$ and $Z_1, \cdots, Z_p$ are i.i.d. random variables with $Z_1 \sim N(0,1).$ Define $U_i=\frac{Z_i^2}{Z_1^2 + \cdots + Z_p^2}$ for $1\leq i \leq p$. Let $a_1, \cdots, a_p$ be non-negative integers and $a=\sum_{i=1}^pa_i$. Then \begin{eqnarray*} E\big(U_1^{a_1}\cdots U_p^{a_p}\big) = \frac{\prod_{i=1}^p(2a_i-1)!!}{\prod_{i=1}^a(p+2i-2)}. \end{eqnarray*} \end{lemma} \noindent\textbf{Proof of (\ref{mean_variance})}. Recall (\ref{pro_land}). Write $(r-1)s^2=\sum_{i=1}^r x_i^2-r\bar{x}^2$. In our case, \begin{eqnarray*} & & \bar{x} = \frac{1}{|\mathcal{P}_n(m)|}\sum_{\kappa\in \mathcal{P}_n(m)}\lambda_{\kappa}=E\lambda_{\kappa};\\ & & s^2=\frac{1}{|\mathcal{P}_n(m)|-1}\sum_{\kappa\in \mathcal{P}_n(m)}(\lambda_{\kappa}-\bar{x})^2 \sim E(\lambda_{\kappa}^2)- (E\lambda_{\kappa})^2 \end{eqnarray*} as $n\to\infty$, where $E$ denotes expectation with respect to the uniform measure on $\mathcal{P}_n'(m)$.
Therefore, \begin{eqnarray}\label{interview_Jin} \frac{\bar{x}}{n^2}=\frac{E\lambda_{\kappa}}{n^2}\ \ \mbox{and}\ \ \frac{s^2}{n^4}\sim E\Big(\frac{\lambda_{\kappa}}{n^2}\Big)^2 - \Big(\frac{E\lambda_{\kappa}}{n^2}\Big)^2. \end{eqnarray} From Lemma \ref{theater}, we see the trivial bound $0\leq \lambda_{\kappa}/n^2\leq 1+\frac{\alpha}{2}m$ for each partition $\kappa =(k_1, \cdots, k_m)\vdash n$ with $k_m\geq 1.$ By Theorem \ref{cancel_temple}, under $\mathcal{P}_n'(m)$, \begin{eqnarray*} \frac{\lambda_{\kappa}}{n^2}\to \frac{\alpha}{2}\cdot Y\ \ \mbox{with}\ \ Y:=\frac{\xi_1^2+\cdots + \xi_m^2}{(\xi_1+\cdots + \xi_m)^2} \end{eqnarray*} as $n\to\infty$, where $\{\xi_i;\, 1\leq i \leq m\}$ are i.i.d. random variables with density $e^{-x}I(x\geq 0)$. By the bounded convergence theorem and (\ref{interview_Jin}), \begin{eqnarray}\label{do_tired} \frac{\bar{x}}{n^2}\to \frac{\alpha}{2} EY\ \ \mbox{and}\ \ \frac{s^2}{n^4} \to \frac{\alpha^2}{4} [E(Y^2)-(EY)^2] \end{eqnarray} as $n\to\infty$. Now we evaluate $EY$ and $E(Y^2)$. Easily, \begin{eqnarray} & & EY=m\cdot E\frac{\xi_1^2}{(\xi_1+\cdots + \xi_m)^2};\nonumber\\ & & E(Y^2)=m\cdot E\frac{\xi_1^4}{(\xi_1+\cdots + \xi_m)^4} +m(m-1)\cdot E\frac{\xi_1^2\xi_2^2}{(\xi_1+\cdots + \xi_m)^4}.\label{moon_wind} \end{eqnarray} Let $Z_1, \cdots, Z_{2m}$ be i.i.d. random variables with $N(0, 1)$ and $U_i=\frac{Z_i^2}{Z_1^2 + \cdots + Z_{2m}^2}$ for $1\leq i \leq 2m$. Evidently, $(Z_1^2+Z_2^2)/2$ has density function $e^{-x}I(x\geq 0)$. Then, \begin{eqnarray*} \Big(\frac{\xi_i}{\xi_1+\cdots + \xi_m}\Big)_{1\leq i \leq m}\ \ \mbox{and}\ \ \ (U_{2i-1}+U_{2i})_{1\leq i \leq m} \end{eqnarray*} have the same distribution.
Consequently, by taking $p=2m$ in Lemma \ref{Jiang2009}, \begin{eqnarray} EY &=& m\cdot E(U_1+U_2)^2 \nonumber\\ &= & 2m[E(U_1^2) +E(U_1U_2)] \nonumber\\ & = & 2m\big[\frac{3}{4m(m+1)} + \frac{1}{4m(m+1)}\big]=\frac{2}{m+1}.\label{easy_absolute} \end{eqnarray} Similarly, \begin{eqnarray*} E\frac{\xi_1^4}{(\xi_1+\cdots + \xi_m)^4} &=& E[(U_1+U_2)^4]\\ & = & 2E(U_1^4) +8 E(U_1^3U_2) + 6 E(U_1^2U_2^2)\\ & = & \frac{105}{8}\frac{1}{m(m+1)(m+2)(m+3)} + \frac{15}{2}\frac{1}{m(m+1)(m+2)(m+3)}\\ & & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \frac{27}{8}\frac{1}{m(m+1)(m+2)(m+3)}\\ &= & \frac{24}{m(m+1)(m+2)(m+3)} \end{eqnarray*} and \begin{eqnarray*} E\frac{\xi_1^2\xi_2^2}{(\xi_1+\cdots + \xi_m)^4} &= & E[(U_1+U_2)^2(U_3+U_4)^2]\\ & = & 4E(U_1^2U_2^2) + 8E(U_1^2U_2U_3)+ 4E(U_1U_2U_3U_4)\\ & = & \frac{9}{4}\frac{1}{m(m+1)(m+2)(m+3)} + \frac{3}{2}\frac{1}{m(m+1)(m+2)(m+3)}\\ & & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \frac{1}{4}\frac{1}{m(m+1)(m+2)(m+3)}\\ &= & \frac{4}{m(m+1)(m+2)(m+3)}. \end{eqnarray*} It follows from (\ref{moon_wind}) and (\ref{easy_absolute}) that \begin{eqnarray*} & & E(Y^2)=\frac{4m+20}{(m+1)(m+2)(m+3)};\\ & & E(Y^2)-(EY)^2 = \frac{4m+20}{(m+1)(m+2)(m+3)}-\Big(\frac{2}{m+1}\Big)^2\\ & & ~~~~~~~~~~~~~~~~~~~~~= \frac{4m-4}{(m+1)^2(m+2)(m+3)}. \end{eqnarray*} This and (\ref{do_tired}) say that \begin{eqnarray*} \frac{\bar{x}}{n^2}\to\frac{\alpha}{m+1}\ \ \mbox{and} \ \ \frac{s^2}{n^4}\to \frac{(m-1)\alpha^2}{(m+1)^2(m+2)(m+3)}. \end{eqnarray*} \subsection{Verification of \eqref{integral}}\label{appendix:integral} \begin{proof}[Verification of \eqref{integral}] Trivially, $\Omega(x)$ in (\ref{jilin}) is an even function and $\Omega(x)' = \frac{2}{\pi} \arcsin\frac{x}{2}$ for $|x| < 2$. 
Then \begin{equation*} \begin{split} & \int_{-2}^2 (\Omega(x) - x)^2 \,dx = \int_{-2}^2 \Omega(x)^2 \,dx + \int_{-2}^2 x^2 \,dx\\ & = x \cdot \Omega(x)^2 \Bigr|_{-2}^2 - \int_{-2}^2 x \cdot 2\Omega(x) \cdot \Omega(x)' \,dx + \frac{x^3}{3} \Bigr|_{-2}^2\\ & = \frac{64}{3} - \frac{16}{\pi^2} \int_{0}^2 x \arcsin {\frac{x}{2}} \cdot ( x \arcsin\frac{x}{2} +\sqrt{4-x^2} )\,dx. \end{split} \end{equation*} Now, set $x=2\sin \theta$ for $0\leq \theta \leq \frac {\pi}{2}$, the above integral becomes \begin{eqnarray}\label{sine_cosine} &&\int_{0}^{\frac{\pi}{2}} 2\theta \sin \theta (2\theta \sin \theta + 2\cos\theta) 2\cos \theta \, d\theta \nonumber\\ & = & 2\int_{0}^{\frac{\pi}{2}}(\theta \sin \theta+\theta \sin (3\theta)+\theta^2\cos\theta-\theta^2\cos (3\theta))\,d\theta \end{eqnarray} by trigonometric identities. It is easy to verify that \begin{eqnarray*} & & \theta\sin \theta=(\sin\theta-\theta\cos\theta)';\ \ \ \ \theta\sin (3\theta)=\frac{1}{9}(\sin(3\theta)-3\theta\cos(3\theta))';\\ & & \theta^2\cos\theta =(\theta^2\sin \theta +2\theta\cos \theta-2\sin \theta)';\\ & & \theta^2\cos (3\theta)=\frac{1}{27}(9\theta^2\sin (3\theta) +6\theta\cos (3\theta)-2\sin (3\theta))'. \end{eqnarray*} Thus, the term in (\ref{sine_cosine}) is equal to \begin{eqnarray*} 2\Big(1+(-\frac{1}{9})+ (\frac{\pi^2}{4}-2)-\frac{1}{27}(-\frac{9\pi^2}{4}+2)\Big)=\frac{2}{3}\pi^2-\frac{64}{27}. \end{eqnarray*} It follows that \begin{eqnarray*} \int_{-2}^2 (\Omega(x) - x)^2 \,dx =\frac{64}{3} - \frac{16}{\pi^2}\Big(\frac{2}{3}\pi^2-\frac{64}{27}\Big)=\frac{32}{3}+\frac{1024}{27\pi^2}. \end{eqnarray*} This completes the verification. \end{proof} \subsection{Derivation of density functions in Theorem \ref{cancel_temple}}\label{appendix:integral2} In this section, we will derive explicit formulas for the limiting distribution in Theorem \ref{cancel_temple}. 
For convenience, we rewrite the conclusion by $$\frac{2}{\alpha}\cdot \frac{\lambda_\kappa}{n^2}\to \nu, $$ where $\nu$ is different from $\mu$ in Theorem \ref{cancel_temple} by a factor of $\frac{2}{\alpha}$. We will only evaluate the cases $m=2, 3$. We first state the conclusions and prove them afterwards. \noindent{\it Case 1.} For $m=2$, the support of $\nu$ is $[\frac{1}{2}, 1]$ and the cdf of $\nu$ is \begin{equation}\label{eq:cdf-ru} F(t) =\sqrt{2t-1} \end{equation} for $t\in [\frac{1}{2}, 1]$. Hence the density function is given by \begin{eqnarray*} f(t)= \frac{1}{\sqrt{2t-1}},\ \ t\in [\frac{1}{2}, 1]. \end{eqnarray*} \noindent{\it Case 2.} For $m=3$, the support of $\nu$ is $[\frac{1}{3}, 1]$, and the cdf of $\nu$ is \begin{eqnarray}\label{eq:cdf-haha} F(t) = \begin{cases} \frac{2}{\sqrt{3}} \pi (t-\frac{1}{3}), & \text{if } \frac{1}{3} \le t < \frac{1}{2}; \\ \frac{2}{\sqrt{3}} \left( (t-\frac{1}{3}) (\pi - 3\arccos\frac{1}{\sqrt{6t-2}}) + \frac{\sqrt{6}}{2} \sqrt{t-\frac{1}{2}}\, \right), & \text{if } \frac{1}{2} \le t < 1. \end{cases} \end{eqnarray} By differentiation, we get the density function \begin{eqnarray*} f(t)= \begin{cases} \frac{2}{\sqrt{3}} \pi, & \text{if } \frac{1}{3} \le t < \frac{1}{2}; \\ \frac{2}{\sqrt{3}} \big( \pi - 3\arccos\frac{1}{\sqrt{6t-2}} \big), & \text{if } \frac{1}{2} \le t \le 1.\\ \end{cases} \end{eqnarray*} The above are the two density functions claimed below the statement of Theorem \ref{cancel_temple}. Now we prove them. From a comment below Theorem \ref{cancel_temple}, the limiting law of $\frac{2}{\alpha}\cdot \frac{\lambda_\kappa}{n^2}$ is the same as the distribution of $\sum_{i=1}^m Y_i^2$, where $(Y_1,\ldots,Y_m)$ has uniform distribution over the set $$\mathcal{H}:=\Big\{(y_1,\ldots,y_{m}) \in [0,1]^{m};\, \sum_{i=1}^{m}y_i = 1 \Big\}.$$ By (\ref{silk_peer}) the volume of $\mathcal{H}$ is $\frac{\sqrt{m}}{(m-1)!}$. 
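Before turning to the proofs, the two cdfs can be cross-checked by Monte Carlo via the representation of the limit as $\sum_{i=1}^m Y_i^2$ with $(Y_1,\ldots,Y_m)$ uniform on $\mathcal{H}$ (a sketch; sample size, seed and test points are illustrative). For $m=2$ one may take $Y_1\sim U(0,1)$ and $Y_2=1-Y_1$; for $m=3$, normalized i.i.d.\ Exp(1) variables are uniform on the simplex:

```python
import math
import random

random.seed(1)
N = 200_000

# m = 2: F(t) = sqrt(2t - 1) on [1/2, 1], as in (eq:cdf-ru)
t2 = 0.75
hits = 0
for _ in range(N):
    y = random.random()
    if y * y + (1 - y) ** 2 <= t2:
        hits += 1
F2_mc = hits / N
F2_exact = math.sqrt(2 * t2 - 1)

# m = 3: piecewise formula (eq:cdf-haha)
def F3(t):
    if t < 0.5:
        return 2 / math.sqrt(3) * math.pi * (t - 1 / 3)
    return 2 / math.sqrt(3) * ((t - 1 / 3) * (math.pi - 3 * math.acos(1 / math.sqrt(6 * t - 2)))
                               + math.sqrt(6) / 2 * math.sqrt(t - 0.5))

t3 = 0.6
hits = 0
for _ in range(N):
    e = [random.expovariate(1.0) for _ in range(3)]
    s = sum(e)
    if sum((x / s) ** 2 for x in e) <= t3:
        hits += 1
F3_mc = hits / N
print(F2_mc, F2_exact, F3_mc, F3(t3))
```

Both empirical frequencies agree with the closed-form cdf values to well within Monte Carlo error.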
Therefore, the cdf of $\sum_{i=1}^m Y_i^2$ is \begin{equation}\label{eq:dist} F(t)=P\Big(\sum_{i=1}^m Y_i^2 \le t\Big) = \frac{(m-1)!}{\sqrt{m}} \cdot \text{volume of } \Big\{ \sum_{i=1}^m y_i^2 \le t\Big\}\cap \mathcal{H},\ \ t\geq 0. \end{equation} Denote $B_m(t):=\{ \sum_{i=1}^m y_i^2 \le t\} \subset \mathbb{R}^m$. Let $V(t)$ be the volume of $B_m(t) \cap \mathcal{H} $. We start with some facts for any $m\geq 2$. First, $V(t) = 0$ for $t < \frac{1}{m}$. In fact, if $(y_1, \cdots, y_m)\in B_m(t)\cap \mathcal{H}$, then $$\frac{1}{m}=\frac{(\sum_{i=1}^m y_i)^2}{m} \le \sum_{i=1}^m y_i^2\le t.$$ Further, for $t>1$, $\mathcal{H}$ is inscribed in $B_m(t)$ and thus $V(t) = \frac{\sqrt{m}}{(m-1)!}$. Now assume $1/m \le t \le 1$. \medskip \noindent{\it The proof of (\ref{eq:cdf-ru})}. Assume $m=2$. If $1/2 \le t \le 1$, then $\{(y_1,y_2) \in [0,1]^2: y_1 + y_2 =1 \}\cap\{ y_1^2 + y_2^2 \le t \}$ is a line segment. Easily, the endpoints of the line segment are $$\Big(\frac{1+\sqrt{2t-1}}{2}, \frac{1-\sqrt{2t-1}}{2}\Big) \quad \text{and} \quad \Big(\frac{1-\sqrt{2t-1}}{2}, \frac{1+\sqrt{2t-1}}{2}\Big),$$ respectively. Thus $V(t) = \sqrt{2(2t-1)}.$ Therefore the conclusion follows directly from \eqref{eq:dist}. \medskip \noindent{\it The proof of (\ref{eq:cdf-haha})}. We first observe that as $t$ increases from $\frac{1}{3}$ to 1, the intersection $B_{3}(t)\cap \mathcal{H}$ expands and passes through $\mathcal{H}$ as $t$ exceeds some critical value $t_0$; see Figure \ref{fig:region}. We claim that $t_0 = \frac{1}{2}$. 
Indeed, the center $C$ of the intersection of $B_{3}(t)$ and the hyperplane $\mathcal{I}:=\{(y_1, y_2, y_3)\in \mathbb{R}^3; y_1+y_2+y_3=1\} \supset \mathcal{H}$ is $C=(\frac{1}{3}, \frac{1}{3}, \frac{1}{3}).$ Thus, the distance from the origin to $\mathcal{I}$ is $d=((\frac{1}{3})^2+(\frac{1}{3})^2+(\frac{1}{3})^2)^{1/2} = \frac{1}{\sqrt{3}}.$ By the Pythagorean theorem, the radius of the intersection (disc) on $\mathcal{I}$ is $$R(t) = \sqrt{t - d^2} = \sqrt{t - \frac{1}{3}}.$$ Let $t_0$ be the value such that the intersection $B_3(t)\cap \mathcal{H}$ is exactly inscribed in $\mathcal{H}$. By symmetry, the point of tangency on the edge in the plane $y_3=0$ is $M = (\frac{1}{2},\frac{1}{2}, 0)$; see Figure \ref{fig:region}(b). Therefore $|CM| = \sqrt{\frac{1}{6}}.$ Solving for $t_0$ from $|CM| = R(t_0),$ we have $t_0 = \frac{1}{2}$. \begin{figure}[h!] \centering \includegraphics[width=11cm]{intersection.png} \caption{Shaded region indicates the volume $V(t)$ of the intersection as $t$ changes from $1/3$ to $1$ for $m=3$.} \label{fig:region} \end{figure} \medskip When $\frac{1}{3} \le t < \frac{1}{2}$, the intersection lies entirely in $\mathcal{H}$; see Figure \ref{fig:region}(a). Then $$V(t) = \pi R(t)^2 = \pi (t - \frac{1}{3}).$$ When $\frac{1}{2} \le t \le 1$, the volume of the intersection part [see Figure \ref{fig:region}(c)] is given by $$V(t) = \pi R(t)^2 - 3 \cdot V_{\text{cs}}(h(t),R(t)),$$ where $V_{\text{cs}}(h(t),R(t))$ is the area of the circular segment with radius $R(t)$ and height $$h(t)=R(t)- |CM|= \sqrt{t-\frac{1}{3}}-\sqrt{\frac{1}{6}}.$$ Therefore, it is easy to check $$V(t)= \pi (t - \frac{1}{3}) - 3(t - \frac{1}{3}) \arccos \frac{1}{\sqrt{6t-2}} + 3 \sqrt{\frac{1}{6}\big(t-\frac{1}{2}\big)}\,.$$ This and \eqref{eq:dist} yield the desired conclusion. \medskip \noindent\textbf{Acknowledgements}. We thank Professors Valentin F\'{e}ray, Sho Matsumoto and Andrei Okounkov very much for communications and discussions.
The research of the second author was supported in part by the Institute for Mathematics and its Applications with funds provided by the National Science Foundation.
\section{Introduction} Musculoskeletal disorders (MSDs) make up the vast proportion of occupational diseases \cite{Eurogip}. Inappropriate physical load is viewed as a risk factor for MSDs \cite{Chaffin1999}. Biomechanical analysis of joint moments and muscle loads provides insight into MSDs. Over the past decades, many tools have been developed for biomechanical simulation and analysis. OpenSim \cite{Delp2007} is virtual human modeling software that has been widely used for motion simulation and body/muscle dynamic analysis \cite{Thelen2006,Kim2017}. A simulation in OpenSim is based on a generic virtual human model that consists of bodies, muscles, joint constraints, etc., as shown in Figure 1. \begin{figure}[htbp] \centering \includegraphics[width=0.4\textwidth]{Fig1.jpg}\\ \caption{A generic OpenSim model.} \label{Fig1} \end{figure} A simulation generally starts by scaling the generic model to the subject's geometric and mass data. The subject's body geometric data are obtained using a motion capture system, which records the spatial positions of reflective markers attached to specific locations on the subject; the generic OpenSim model is then adjusted geometrically so as to minimize the position deviations between the virtual markers and the corresponding real markers. This turns the generic model into a subject-specific model. The geometric adjustment increases the accuracy of the posture simulation and kinematic analysis that follow. For accurate dynamic analysis, the body segment inertial parameters, such as segment masses, should also be adjusted specifically to each subject. In OpenSim, this adjustment is carried out by scaling the mass of each segment of the generic model proportionally with respect to the whole-body mass of the subject. This method of determining segment masses is based on the assumption that the mass distribution among body segments is similar across humans, which is not always the case.
For example, the mean mass proportion of the thigh has been reported to be 10.27\% \cite{Clauser1969}, 14.47\% \cite{DeLeva1996}, 9.2\% \cite{okada1996}, and 12.72\% \cite{Durkin2003}, which indicates significant individual differences. Therefore, the scaling method used by OpenSim is likely to introduce errors into the subsequent dynamic analysis, and these errors need to be estimated. This paper aims at estimating the errors caused by the scaling method used in OpenSim. Firstly, the subject's segment masses are determined from a 3D geometric model constructed with the help of a 3D scan. The resulting data are taken as an approximation to the true values of the subject's segment masses. Secondly, this data set, as well as the proportionally scaled segment mass data, is used to specify a generic OpenSim model, and the errors caused by proportional scaling are calculated. Finally, the influence of the errors on dynamic analysis is examined in a simulation of a walking task. The method to approximate the subject's segment masses, the model specification and the dynamic simulation are described in Section 2. Results are presented in Section 3, discussed in Section 4, and followed by conclusions in Section 5. \section{Methodology} \subsection{Approximating segment masses with a 3D scan} A whole-body 3D scan was conducted on a male subject (31 years old, 77.0~kg, 1.77~m) with a low-cost 3D scanner (Sense$^{TM}$ 3D scanner). Before scanning, reflective markers were placed on the subject to indicate the location of each joint plate, as shown in Figure 2. The locations of the joint plates were set according to Drillis (1966) \cite{Drillis1966}, which was meant to facilitate the segmentation of the 3D model. During scanning, care was taken to ensure that there was no contact between the limbs and the torso. The scanned 3D model was stored in an STL mesh file, as shown in Figure~3.
\begin{figure}[htbp] \centering \includegraphics[width=0.3\textwidth]{markers.jpg}\\ \caption{Body markers that indicate the location of joint plates.} \label{Figure 3} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.6\textwidth]{Mesh.jpg}\\ \caption{The 3D geometric model of the subject generated by 3D scan.} \label{Mesh} \end{figure} Then the 3D model was segmented into 15 parts in the way described by Drillis (1966) \cite{Drillis1966}, using the body markers and body part lengths as references. An example of a segmented body part (the pelvis) is shown in Figure~4. The volume of each body part was then calculated. \begin{figure}[htbp] \centering \includegraphics[width=0.3\textwidth]{Pelvis.jpg}\\ \caption{The mesh of the pelvis segmented from the whole-body 3D model.} \label{Figure 4} \end{figure} To analyze the results obtained, the water displacements of eight distal body parts (hands, lower arms, feet, lower legs) were also measured, as described by Drillis (1966) \cite{Drillis1966}. \subsection{Specification of OpenSim models to the subject} In this step, a generic OpenSim model is specified to our subject in terms of body segment masses. The model was developed by Delp et al. (1990) \cite{Delp1990} (http://simtk-confluence.stanford.edu:8080/display/OpenSim/Gait+2392+and+\\2354+Models). It consists of 12 bodies, 23 degrees of freedom, and 52 muscles. The unscaled version of the model represents a subject that is about 1.8~m tall and has a mass of 75.16~kg. \begin{figure}[htbp] \centering \includegraphics[width=0.6\textwidth]{Generic_model2.jpg}\\ \caption{The OpenSim model used for error analysis.} \label{Figure 5} \end{figure} The approximate body segment mass data obtained from the process in Section 2.1, as well as the proportionally weight-scaled body segment mass data, is used to specify the generic model. The former is taken as the yardstick to estimate the error of the latter.
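The comparison between the two mass sets amounts to a few lines of arithmetic. The sketch below (Python; the dictionary names are ours) recomputes the error measures reported in Table~\ref{tab2} from the measurement-based masses and the proportionally scaled masses:

```python
# Measurement-based ("approximate") vs. proportionally scaled segment masses (kg),
# values as reported in Table 2
approx = {"pelvis": 10.92, "torso": 38.91, "upper_leg_l": 8.71, "upper_leg_r": 8.62,
          "lower_leg_l": 3.51, "lower_leg_r": 3.62, "talus": 0.07, "calcaneus": 0.87}
scaled = {"pelvis": 11.98, "torso": 34.82, "upper_leg_l": 9.46, "upper_leg_r": 9.46,
          "lower_leg_l": 3.76, "lower_leg_r": 3.76, "talus": 0.10, "calcaneus": 1.28}

body_mass = 77.0  # kg
abs_err = {k: scaled[k] - approx[k] for k in approx}
rel_err = {k: 100 * abs_err[k] / approx[k] for k in approx}   # % of segment mass
body_err = {k: 100 * abs_err[k] / body_mass for k in approx}  # % of whole-body mass

for k in approx:
    print(f"{k}: {abs_err[k]:+.2f} kg ({rel_err[k]:+.1f}% of segment, "
          f"{body_err[k]:+.2f}% of body mass)")
```

The torso shows the largest absolute error, while the foot segments show the largest relative errors, as discussed in Section 3.2; small rounding differences from the printed table values are expected.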
\subsection{Dynamic simulation in OpenSim} A simulation is conducted on the two specific models. The simulation data come from previous research \cite{John2012}. The subject walks two steps in 1.2~s with an ordinary gait. Spatial posture data were collected at a frequency of 60~Hz and ground reaction forces at a frequency of 600~Hz. Inverse dynamic analysis is conducted on both models, and joint moments are calculated and compared between the two models. \section{Results} \subsection{The estimation of body segment masses}% A significant difference was found between the volumes calculated from the 3D scanned geometric model and those measured by water displacement. For the lower leg, the difference is as large as 27\% (4.5~l vs. 3.3~l). To approximate the real segment masses, the assumption is made that the volume distribution of the 3D scanned model among the head, torso, pelvis, and upper limbs is the same as that of the real subject. Density data of the body parts \cite{Wei1995} were used to calculate the whole-body density of the subject, which, together with the body weight, gives an estimate of the whole-body volume. The overall volume was then distributed to each segment according to the relative volume ratios of the 3D geometric model. In this way, segment volumes and masses are approximated. The relevant data are shown in Figure~6. The whole-body volume calculated from the 3D geometric model is 7.31\% larger than the estimated whole-body volume (81.81~l vs. 76.24~l). The whole-body density is estimated to be 1.01~g/ml. The mass proportion of the thigh is about 11.30\%, which lies between the values reported by Clauser et al. (1969) (10.27\%) \cite{Clauser1969}, Okada (1996) (9.2\%) \cite{okada1996} and by De Leva (1996) (14.47\%) \cite{DeLeva1996}, Durkin \& Dowling (2003) (12.72\%) \cite{Durkin2003}.
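The estimation procedure just described can be sketched as follows (Python; segment values from Table~\ref{tab1}; for brevity this sketch uses the scanned volumes for all segments, whereas the paper substitutes water-displacement volumes for the distal parts, so the numbers agree only approximately):

```python
# Scanned segment volumes (litres) and segment densities (kg/l) from Table 1;
# left/right segments are pooled.
volumes = {"pelvis": 11.29, "head": 5.65, "torso": 29.90,
           "upper_arm": 1.63 + 1.72, "lower_arm": 1.38 + 1.34,
           "upper_leg": 8.74 + 8.65, "lower_leg": 4.42 + 4.53,
           "foot": 1.06 + 0.89, "hand": 0.49 + 0.58}
density = {"pelvis": 1.01, "head": 1.07, "torso": 0.92, "upper_arm": 1.06,
           "lower_arm": 1.10, "upper_leg": 1.04, "lower_leg": 1.08,
           "foot": 1.08, "hand": 1.11}

body_mass = 77.0  # kg
total_volume = sum(volumes.values())
# volume-weighted whole-body density, then whole-body volume from the known mass
whole_body_density = sum(volumes[k] * density[k] for k in volumes) / total_volume
estimated_volume = body_mass / whole_body_density  # litres

# distribute the overall volume in proportion to each segment's scanned share
est_seg_volume = {k: estimated_volume * volumes[k] / total_volume for k in volumes}
est_seg_mass = {k: est_seg_volume[k] * density[k] for k in volumes}
print(round(whole_body_density, 3), round(estimated_volume, 2))
```

By construction the estimated segment masses sum exactly to the measured body mass, and the whole-body density and volume come out close to the reported 1.01~g/ml and 76.24~l.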
\newgeometry{left=4cm,bottom=2.5cm} \begin{landscape} \begin{table} \centering \caption{Volume, density and mass of the whole body and segments.} {\begin{tabular}{p{3.5cm}|p{1.3cm}p{1.2cm}p{1.2cm}p{1.2cm}p{1.2cm}p{1.2cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1.2cm}p{1.2cm}p{1.2cm}p{1.2cm}} \toprule {} & {\noindent{\textbf{Overall}}} & {\noindent{\textbf{Pelvis}}} & {\noindent{\textbf{Head}}}& {\noindent{\textbf{Torso}}} & {\noindent{\textbf{Upper arm-l}}} & {\noindent{\textbf{Upper arm-r}}} & {\noindent{\textbf{Lower arm-l}}} & {\noindent{\textbf{Lower arm-r}}} & {\noindent{\textbf{Upper leg-l}}}& {\noindent{\textbf{Upper leg-r}}}& {\noindent{\textbf{Lower leg-l}}}& {\noindent{\textbf{Lower leg-r}}}& {\noindent{\textbf{Foot-l}}}& {\noindent{\textbf{Foot-r}}}& {\noindent{\textbf{Hand-l}}}& {\noindent{\textbf{Hand-r}}}\\ \midrule Volume(l)-3D scanned model &81.81&11.29&5.65&29.90&1.63&1.72&1.38&1.34&8.74&8.65&4.42&4.53&1.06&0.89&0.49&0.58\\ Volume(l)-water displacement& & & & & & &1.01&1.12& & & 3.25&3.35&1.00&1.00&0.46&0.46\\ Volume(l)-estimated& 76.24 & 10.81 & 5.41&28.64&1.56&1.65&1.01&1.12&8.37&8.29&3.25&3.35&1.00&1.00&0.46&0.46 \\ Density(kg/l)\cite{Wei1995}&1.01&1.01&1.07&0.92&1.06&1.06&1.10&1.10&1.04&1.04&1.08&1.08&1.08&1.08&1.11&1.11\\ Estimated mass(kg) & &10.92&5.79&26.35&1.66&1.75&1.11&1.23&8.71&8.62&3.51&3.62&1.08&1.08&0.51&0.51\\ \bottomrule \end{tabular}} \label{tab1} \end{table} \begin{table}[htbp] \centering \caption{Errors of proportionally scaled segment masses with respect to the approximate masses.} {\begin{tabular}{m{4cm}|m{1.5cm}p{1.5cm}p{2cm}p{2cm}p{2cm}p{2cm}p{1.5cm}p{2cm}} \toprule {} & {\noindent{\textbf{Pelvis}}} & {\noindent{\textbf{Torso}}} & {\noindent{\textbf{Upper leg-l}}}& {\noindent{\textbf{Upper leg-r}}}& {\noindent{\textbf{Lower leg-l}}}& {\noindent{\textbf{Lower leg-r}}}& {\noindent{\textbf{Talus}}}& {\noindent{\textbf{Calcaneus}}}\\ \midrule Approximate mass(kg)&10.92&38.91&8.71&8.62&3.51&3.62&0.07&0.87\\ Proportionally scaled 
mass (kg)&11.98 &34.82 &9.46 &9.46 &3.76 &3.76 &0.10 &1.28\\ Absolute error (kg)&1.06 &-4.08 &0.75&0.84&0.25&0.15&0.03&0.41 \\ Percentage of absolute error in overall body weight&1.37\% &-5.30\%&0.98\%&1.10\%&0.33\%&0.19\%&0.04\%&0.53\%\\ Relative error&9.66\%&-10.49\%&8.67\%&9.80\%&7.26\%&4.06\%&47.42\%&47.42\%\\ \bottomrule \end{tabular}} \label{tab2} \end{table} \end{landscape} \restoregeometry \subsection{Error analysis of the OpenSim model scaled to the subject}% Segment masses generated by proportional scaling and by 3D modeling are used to specify the OpenSim generic model, which brings about two specific models (denoted the scaled model and the approximate model). Figure~7 shows the segment mass data of the two models. The relative errors of the scaled model's segment masses are between 4.06\% and 47.42\%. The most significant error emerges from the foot data, which, however, represents only a small part of the overall body mass. \subsection{Motion simulation and dynamic analysis}% Both the proportionally scaled segment mass data and the approximate segment mass data are used to specify the OpenSim generic model, bringing about two specific models (denoted the scaled model and the approximate model). Motion simulation is conducted on the two models. The simulated motion includes two steps of walking, lasting 1.2~s. Since the two models differ only in segment masses, the kinematic analysis shows no difference between them. As an example, the angle, velocity and acceleration of right hip flexion are shown in Figure~8. \begin{figure}[htbp] \centering \includegraphics[width=0.8\textwidth]{ik}\\ \caption{Coordinate (q), velocity ($\omega$), acceleration (d$\omega$/dt) of right hip flexion in the motion.} \label{Figure8} \end{figure} Inverse dynamic analysis on the two models generates different results. Figure 9 shows the right hip flexion moments calculated from the two models.
With the approximate model as yardstick, the error of the calculated right hip flexion moment of the scaled model has a mean of 1.89~Nm, which is 10.11\% of its mean absolute value. A total of 18 joint moments was calculated. The means of error percentage vary from 0.65\% to 12.68\%, with an average of 5.01\%. The relevant data are shown in Table~\ref{tab3}. \begin{figure}[htbp] \centering \includegraphics[width=1\textwidth]{Comp_moment.jpg}\\ \caption{Right hip flexion moments calculated from the two models.} \label{moment} \end{figure} \begin{table}[htbp] \caption{Errors of the joint moments calculated from the scaled model, with respect to those from the approximate model} {\begin{tabular}{p{3.5cm}p{2.5cm}p{3cm}p{3cm}} \toprule {} & {\noindent{\textbf{Mean of error (Nm)}}} & {\noindent{\textbf{Mean of instant moment (Nm)}}} & {\noindent{\textbf{Mean of error percentage}}}\\ \midrule Pelvis tilt &6.73&64.12&10.50\%\\ Pelvis list &4.06&37.78&10.75\%\\ Pelvis rotation & 1.02 & 17.78 & 5.74\% \\ Right hip flexion &1.89&18.73&10.11 \% \\ Right hip adduction &0.48&18.57&2.60 \% \\ Right hip rotation &0.07&3.47&2.13 \% \\ Right knee angle &0.65&19.21&3.40 \% \\ Right ankle angle &0.31&38.68&0.80 \% \\ Right subtalar angle &0.06&9.02&0.65 \%\\ Left hip flexion &2.27&17.90&12.68 \%\\ Left hip adduction &0.80&24.99&3.18 \%\\ Left hip rotation &0.10&4.28&2.23 \%\\ Left knee angle &0.83&19.37&4.28 \%\\ Left ankle angle &0.36&30.18&1.19 \%\\ Left subtalar angle &0.07&7.65&0.91 \%\\ Lumbar extension &5.47&60.85&9.00 \%\\ Lumbar bending &3.05&37.37&8.17 \%\\ Lumbar rotation &0.33&17.98&1.85 \%\\ \bottomrule \end{tabular}} \label{tab3} \end{table} \section{Discussion} \subsection{The use of 3D scan in the estimation of body segment masses} In previous research, the inertial parameters of human body segments are usually determined by three means: (i) applying predictive equations generated from databases \cite{DeLeva1996}, (ii) medical scanning of live subjects \cite{Lee2009}, and (iii)
segment geometric modelling \cite{Davidson2008}. The use of the first, as stated by Durkin \& Dowling (2003) \cite{Durkin2003}, is limited by its sample population. Furthermore, the difference in segmentation methods makes it difficult to combine various equations \cite{Pearsall1994a}. The second method, medical scanning, such as dual energy X-ray absorptiometry, is more accurate in obtaining body segment inertial parameters \cite{Durkin2002}, but it is more expensive and time-consuming. In this study, body segment masses are estimated from segment density data and segment volumes, with a 3D scan used to estimate the segment volumes. In this process, errors may arise from two sources. First, the density of each segment is assumed to be constant across individuals, which may introduce errors. The traditional body composition method defines two distinct body compartments: fat and lean (fat-free) body. Fat has a density of 0.90~g/ml, while the lean body has a density of 1.10~g/ml \cite{Lukaski1987}. A subject's body fat ratio may therefore influence the segment densities. However, the range of density variation is smaller than that of the mass distribution, so the use of density and volume may reduce the estimation error of segment mass. For example, in the current study, the thigh, with a volume of 8.35~l, holds a mass proportion that would vary from 9.74\% (all fat, density = 0.90~g/ml) to 11.90\% (fat-free, density = 1.10~g/ml). This range is much narrower than that found in previous research (from 9.2\% \cite{okada1996} to 14.47\% \cite{DeLeva1996}). Second, the 3D scan is used to build the 3D geometric model and calculate segment volumes. A significant difference exists between the volumes calculated and the volumes measured by water displacement. To approximate the real volumes, the assumption is made that the 3D geometric model has the same volume distribution as that of the subject, which may introduce errors.
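The quoted thigh bounds follow from a one-line computation (a sketch using the values stated in the text; small rounding differences from the printed percentages are expected):

```python
thigh_volume = 8.35     # litres, single thigh, from the text
body_mass = 77.0        # kg
fat, lean = 0.90, 1.10  # densities in kg/l (Lukaski 1987)

lo = 100 * thigh_volume * fat / body_mass   # all fat: lower bound, %
hi = 100 * thigh_volume * lean / body_mass  # fat-free: upper bound, %
print(f"{lo:.2f}% .. {hi:.2f}%")
```

This reproduces the roughly 9.7--11.9\% range, which is indeed far narrower than the 9.2--14.47\% spread of the reference data.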
In summary, as a simple and low-cost method of segment mass determination, the use of density data and a 3D geometric model is likely to reduce the estimation error. A 3D scan is an easy way to construct a 3D geometric model, but attention should be paid to the model's volume errors. The method used in this study to approximate the real segment volumes from the 3D scanned model needs to be examined in future research. \subsection{The error and error significance of the proportionally scaled model} Proportional scaling is an efficient way to specify a generic model. In this study, the relative errors of the segment masses of the scaled model range from 4.06\% to 47.42\%. The error of the torso mass is 4.08~kg, which amounts to 5.30\% of the overall body weight. In the subsequent motion simulation, these errors bring about differences in the calculated joint moments. The means of the differences in the calculated joint moments range from 0.65\% to 12.68\%. This suggests that a careful specification of segment masses will increase the accuracy of the dynamic simulation. \section{Conclusions} This study aims at estimating the errors caused by the scaling method used in OpenSim and their influence on dynamic analysis. A 3D scan is used to construct the subject's 3D geometric model, from which segment masses are determined. The determined segment mass data are taken as the yardstick to assess the OpenSim proportionally scaled model: errors are calculated, and the influence of the errors on dynamic analysis is examined. As a result, the segment mass error reaches up to 5.30\% of the overall body weight (torso). An influence on the dynamic calculations has been found, with mean differences in the joint moments ranging from 0.65\% to 12.68\%.
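As a check on the summary statistics, the 18 mean error percentages of Table~\ref{tab3} can be aggregated directly (Python sketch):

```python
# mean error percentages of the 18 joint moments, in the order of Table 3
pct = [10.50, 10.75, 5.74, 10.11, 2.60, 2.13, 3.40, 0.80, 0.65,
       12.68, 3.18, 2.23, 4.28, 1.19, 0.91, 9.00, 8.17, 1.85]

mean_pct = sum(pct) / len(pct)
print(len(pct), min(pct), max(pct), round(mean_pct, 2))  # 18 0.65 12.68 5.01
```

This reproduces the reported range (0.65\% to 12.68\%) and the reported average of 5.01\%.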
Conclusions can be drawn that (i) the use of segment volume and density data may be more accurate than mass distribution reference data in the estimation of body segment masses and (ii) a careful specification of segment masses will significantly increase the accuracy of the dynamic simulation. The present work is a study of the determination of the inertial parameters of human body segments in biomechanical simulation. It explores new, more precise and simpler ways to implement biomechanical analysis. This work is a step towards characterizing muscular capacities for the analysis of work tasks and predicting muscle fatigue. \section{Acknowledgements} This work was supported by the INTERWEAVE Project (Erasmus Mundus Partnership Asia-Europe) under Grant numbers IW14AC0456 and IW14AC0148, by the National Natural Science Foundation of China under Grant number 71471095, and by the Chinese State Scholarship Fund. The authors also thank D.Zhang Yang for his support. \bibliographystyle{splncs}
\section{Introduction} One of the most important outcomes expected of a successful theory of quantum gravity is a clear and unambiguous solution to the problems associated with the curvature singularities that are predicted by classical general relativity. This expectation is natural since quantum mechanics is known to cure classical singularities in other contexts, such as the hydrogen atom. In recent years there has been much work suggesting that Loop Quantum Gravity (LQG) \cite{Thiemann07} may indeed resolve gravitational singularities at least in the case of symmetry-reduced models, such as spatially homogeneous \cite{cosm sing res} and inhomogeneous \cite{MartinBenito:2008ej} cosmologies and spherically symmetric black holes \cite{husain06,Ashtekar:2005qt,bh-interior}. Given the simplifications that these models entail, it is pertinent to ask which features of the LQG quantization scheme are crucial to the observed singularity resolution. There are two distinct, but related, features of the LQG quantization program that appear to play a role in achieving singularity resolution. The first is the fundamental discreteness that underlies LQG due to its focus on holonomies of connections and associated graphs embedded in a spatial manifold~\cite{Thiemann}. An analogous approach in a purely quantum mechanical context is so-called polymer quantization~\cite{AFW,halvorson}, in which the Hamiltonian dynamics occurs on a discrete spatial lattice and the basic observables are the operators associated with location on the lattice and translation between lattice points. Polymer quantization provides a quantization scheme that is mathematically and physically distinct from Schr\"odinger quantization. The second apparently key ingredient in the LQG singularity resolution mechanism is the regularization of the singular terms in the Hamiltonian using a trick first introduced in this context by Thiemann~\cite{Thiemann}. 
The regularization is achieved by first writing a classical inverse triad as the (singular) Poisson bracket of classical phase space functions whose quantum counterparts are known, and then defining the inverse triad operator as the commutator of these quantum counterparts. When applied to simple models this procedure gives rise to quantum operators with bounded spectra. The singularity is therefore kinematically ``removed'' from the spectra of relevant physical operators, such as the inverse scale factor. One question that arises concerns the role or perhaps the necessity of the Thiemann trick in singularity resolution in LQG. Recall that in the case of the hydrogen atom the singularity resolution is achieved by defining self-adjoint operators in a Hilbert space. This requires a careful choice of boundary conditions on the wave function \cite{fewster-hydrogen} but does not require modification of the singular $1/r$ potential. An example more relevant to quantum gravity is the reduced Schr\"odinger quantization of the ``throat dynamics'' of the Schwarzschild interior, which on imposition of suitable boundary conditions produced a discrete, bounded-from-below spectrum for the black hole mass~\cite{louko96}. Polymer quantization of the hydrogen atom was recently examined in~\cite{HLW}, retaining only the s-wave sector and regularizing the $1/r$ potential in a way that lets $r$ take values on the whole real axis. The choice of symmetric versus antisymmetric boundary conditions at the singularity was found to have a significant effect on the ground state even after the singularity itself had been regularized. In particular, in the limit of small lattice separation the ground state eigenenergy showed evidence of convergence towards the ground state energy of the conventionally-quantized Schr\"odinger theory only for the antisymmetric boundary condition. In the present paper, we perform a similar polymer quantization of the more singular $1/r^2$ potential.
When the potential is regularized, we shall find that the choice of the boundary condition again has a significant effect on the lowest-lying eigenenergies. However, we shall also find that the polymer theory with the antisymmetric boundary condition is well defined even without regularizing the potential, and with this boundary condition the regularized and unregularized potentials yield closely similar spectra. The boundary condition at the singularity is hence not only a central piece of input in polymer quantization, but it can even provide, along with the modification to the kinetic term, the pivotal singularity avoidance mechanism. While this is expected from general arguments that we will make explicit later on, it is interesting and reassuring to see the mechanism work in the special case of the $1/r^2$ potential, whose degree of divergence is just at the threshold where a conventional Schr\"odinger quantization will necessarily result in a spectrum that is unbounded below. The polymer treatment of this system thus turns a Hamiltonian that is unbounded below into one that has a well-defined ground state. The $1/r^2$ potential is interesting in its own right: it has a classical scale invariance that is broken by the quantum theory. In addition, this potential appears frequently in black hole physics, for example in the near horizon and near singularity behaviour of the quasinormal mode potential~\cite{motl-quasi1,gk-quasi1}, in the near horizon behaviour of scalar field propagation \cite{camblong-bh} and in the Hamiltonian constraint in Painlev\'e-Gullstrand coordinates~\cite{GKS}. It may therefore conceivably be of direct relevance to quantum gravity.
There is a substantial literature on Schr\"odinger quantization of this potential in $L_2(\mathbb{R}_+)$ (see for example \cite{case,frank-potential,narnhofer,Gupta:1993id,camblong00,% ordonez,inequivalence} and the references in~\cite{fulop07}), but we are not aware of previous work on polymer quantization of this potential. Our paper is organized as follows. In Section \ref{sec:schrodinger quantization} we review the Schr\"odinger quantization of the $1/r^2$ potential in $L_2(\mathbb{R}_+)$. In Section \ref{sec:Polymer Quantization} we formulate the polymer quantization of this system on a lattice of fixed size and describe the numerical method. We also include in this section a computation of the semi-classical polymer spectrum from the Bohr-Sommerfeld quantization condition, with a fixed polymerization length scale. The numerical results are presented in Section~\ref{sec:results} and the conclusions are collected in Section~\ref{sec:conclusions}. \section{Schr\"odinger quantization} \label{sec:schrodinger quantization} We consider the classical Hamiltonian \begin{equation} H = p^2 - \frac{\lambda}{r^2} , \label{eq:H-classical} \end{equation} where the phase space is $(r,p)$ with $r>0$ and $\lambda\in\mathbb{R}$ is a constant. We shall take $r$, $p$ and $\lambda$ all dimensionless, and on quantization we set $\hbar=1$. If physical dimensions are restored, $r$ and $p$ will be expressed in terms of a single dimensionful scale but $\lambda$ remains dimensionless. That the coupling constant is dimensionless is a special feature of a scale-invariant potential. Quantization of $H$ \nr{eq:H-classical} is of course subject to the usual ambiguities. In particular, if one views $H$ as an effective Hamiltonian that comes from a higher-dimensional configuration space via symmetry reduction, with $r$ being a radial configuration variable, the appropriate Hilbert space may be $L_2(\mathbb{R}_+; m(r)dr)$, where $m$ is a positive-valued weight function.
If for example $m(r) = r^a$, where $a\in\mathbb{R}$, then the ordering \begin{equation} {\widehat H} = - \left(\frac{\partial^2}{\partial r^2}+\frac{a}{r}\frac{\partial}{\partial r} \right) - \frac{\lambda}{r^2} \label{hamiltonian1} \end{equation} makes the quantum Hamiltonian ${\widehat H}$ symmetric. If the wave function in $L_2(\mathbb{R}_+; m(r)dr)$ is denoted by~$\psi$, we may map $\psi$ to $\widetilde\psi \in L_2(\mathbb{R}_+; dr)$ by $\widetilde\psi(r) = r^{a/2} \psi(r)$, and ${\widehat H}$ is then mapped in $L_2(\mathbb{R}_+; dr)$ to the Hamiltonian \begin{equation} {\widehat {\widetilde H}} = - \frac{\partial^2}{\partial r^2} - \frac{\widetilde\lambda}{r^2} , \end{equation} where \begin{equation} \widetilde\lambda := \lambda - \frac{a}{2}\left(\frac{a}{2}-1\right) . \end{equation} We shall consider any such mappings to have been done and take the quantum Hamiltonian to be simply \begin{equation} {\widehat H} = - \frac{\partial^2}{\partial r^2} - \frac{\lambda}{r^2} , \label{quantum hamiltonian} \end{equation} acting in the Hilbert space $L_2(\mathbb{R}_+; dr)$. To guarantee that the time evolution generated by ${\widehat H}$ \nr{quantum hamiltonian} is unitary, ${\widehat H}$ must be specified as a self-adjoint operator on $L_2(\mathbb{R}_+; dr)$~\cite{thirring-quantumbook}. A~comprehensive analysis of how to do this was given in \cite{narnhofer} (see also \cite{case,frank-potential,Gupta:1993id,camblong00,ordonez,inequivalence}). We shall review the results of \cite{narnhofer} in a way that displays the spectrum explicitly for all the qualitatively different ranges of~$\lambda$. Before proceeding, we mention that several recent quantizations of the $1/r^2$ potential first regularize the potential using various renormalization techniques \cite{Gupta:1993id,camblong00,ordonez}.
In particular, when the spectrum of a self-adjoint extension is unbounded below, these renormalization techniques need not lead to an equivalent quantum theory~\cite{inequivalence}. We shall here discuss only the self-adjoint extensions. To begin, recall \cite{thirring-quantumbook} that the deficiency indices of ${\widehat H}$ are found by considering normalizable solutions to the eigenvalue equation ${\widehat H}\psi = \pm i \psi$. An elementary analysis shows that ${\widehat H}$ is essentially self-adjoint for $\lambda \le - 3/4$, but for $\lambda > - 3/4$ a boundary condition at $r=0$ is needed to make ${\widehat H}$ self-adjoint. Physically, this boundary condition will ensure that no probability flows in or out at $r=0$. \subsection{$\lambda> 1/4$} For $\lambda > 1/4$, we write $\lambda = 1/4 + \alpha^2$ with $\alpha>0$. For $E>0$, the linearly independent (non-normalizable) solutions to the eigenvalue equation \begin{equation} {\widehat H}\psi = E \psi \label{eq:eigenvalue} \end{equation} are $\sqrt{r} \, J_{\pm i \alpha}(\sqrt{E} \, r)$. These oscillate infinitely many times as $r\to0$. To find the boundary condition, we consider the linear combinations \begin{equation} \psi_E(r) := \sqrt{r} \left[ e^{i\beta} E^{-i\alpha/2} J_{i \alpha}(\sqrt{E} \, r) + e^{-i\beta} E^{i\alpha/2} J_{-i \alpha}(\sqrt{E} \, r) \right] , \label{eq:psi_E} \end{equation} where $\beta$ is a parameter that a priori could depend on~$E$. As $\psi_E$ is periodic in $\beta$ with period~$2\pi$, and as replacing $\beta$ by $\beta + \pi$ multiplies $\psi_E$ by~$-1$, we may understand $\beta$ periodic with period~$\pi$. For concreteness, we could choose for example $\beta \in [0, \pi)$. For the probability flux through $r=0$ to vanish, we need \begin{equation} \overline{\psi_E} \, \partial_r \psi_{E'} - \overline{\psi_{E'}} \, \partial_r \psi_{E} \to 0 \ \ \text{as $r\to0$} \end{equation} for all $E$ and~$E'$, where the overline denotes the complex conjugate.
Using the small argument asymptotic form (equation (9.1.7) in~\cite{AS}) \begin{equation} J_{\nu}(z)\to\frac{(z/2)^\nu}{\Gamma(\nu+1)} \ \ \text{as $z\to0$}, \end{equation} this is seen to require $\sin(\beta-\beta')=0$, and hence $\beta$ must be independent of~$E$. The choice of the constant $\beta$ hence specifies the boundary condition at the origin. To find the eigenvalues, we consider the normalizable solutions to~\nr{eq:eigenvalue}. Such solutions exist only when $E<0$, and they are $\sqrt{r} \, K_{i \alpha}(\sqrt{-E} \, r)$. These solutions must satisfy at $r\to0$ the same boundary condition as $\psi_E$~\nr{eq:psi_E}. Using the small argument asymptotic form (equations (9.6.2) and (9.6.7) in~\cite{AS}) \begin{equation} K_\nu(z)=K_{-\nu}(z)\to \frac{\pi}{\sin(\nu\pi)}\left[ \frac{(z/2)^{-\nu}}{\Gamma(-\nu+1)}-\frac{(z/2)^\nu}{\Gamma(\nu+1)}\right] \ \ \text{as $z\to0$}, \end{equation} this shows that the eigenvalues are \begin{equation} E_n = E_0 \exp(- 2\pi n/\alpha) , \ \ n \in \mathbb{Z} , \label{tower} \end{equation} where \begin{equation} E_0 = - \exp[(2\beta +\pi)/\alpha] . \label{E0} \end{equation} This spectrum is an infinite tower, with $E_n \to 0_-$ as $n\to\infty$ and $E_n \to -\infty$ as $n\to-\infty$. The spectrum is unbounded from below. We note that Schr\"odinger quantization of a regulated form of the potential yields a semi-infinite tower of states that is similar to \nr{tower} as $n\to\infty$ but has a ground state~\cite{camblong00}. The energy of the ground state goes to $-\infty$ when the regulator is removed. \subsection{$\lambda = 1/4$} For $\lambda = 1/4$, the solutions to the eigenvalue equation \nr{eq:eigenvalue} for $E>0$ are $\sqrt{r} \, J_0(\sqrt{E} \, r)$ and $\sqrt{r} \, N_0(\sqrt{E} \, r)$.
We consider the linear combinations \begin{equation} \psi_E(r) := \sqrt{r} \left\{ (\cos\beta) J_0(\sqrt{E} \, r) + (\sin\beta) \left[ \frac\pi2 N_0 (\sqrt{E} \, r) - \ln\left( \frac{\sqrt{E} \, e^\gamma}{2}\right) J_0 (\sqrt{E} \, r) \right] \right\} , \label{eq:psi_E-m} \end{equation} where $\gamma$ is Euler's constant and $\beta$ is again a parameter that may be understood periodic with period $\pi$ and could a priori depend on~$E$. As above, we find that $\beta$ must be a constant independent of $E$ and its value determines the boundary condition at the origin. Normalizable solutions to \nr{eq:eigenvalue} exist only for $E<0$. They are $\sqrt{r} \, K_0(\sqrt{-E} \, r)$, and they must satisfy the same boundary condition as $\psi_E$ \nr{eq:psi_E-m} at $r\to0$. Using the small argument expansion of $K_0$~\cite{AS}, we find that for $\beta=0$ there are no normalizable states, while for $0 < \beta < \pi$ there is exactly one normalizable state, with the energy \begin{equation} E_0 = - 4 \exp( -2\gamma + 2 \cot\beta) . \end{equation} \subsection{$-3/4 < \lambda < 1/4$} For $-3/4 < \lambda < 1/4$, we write $\lambda = 1/4 - \nu^2$ with $0<\nu<1$. The solutions to the eigenvalue equation \nr{eq:eigenvalue} for $E>0$ are $\sqrt{r} \, J_{\pm \nu}(\sqrt{E} \, r)$. Considering the linear combinations \begin{equation} \psi_E(r) := \sqrt{r} \left[ (\cos\beta) E^{-\nu/2} J_{\nu}(\sqrt{E} \, r) + (\sin\beta) E^{\nu/2} J_{-\nu}(\sqrt{E} \, r) \right] , \label{eq:psi_E-nu} \end{equation} we find as above that $\beta$ is a constant, understood periodic with period~$\pi$, and its value specifies the boundary condition at the origin. Normalizable solutions to \nr{eq:eigenvalue} exist only for $E<0$. They are $\sqrt{r} \, K_{\nu}(\sqrt{-E} \, r)$ and must satisfy the same boundary condition as $\psi_E$ \nr{eq:psi_E-nu} at $r\to0$. 
Using the small argument asymptotic form of $K_{\nu}$~\cite{AS}, we find that there are no normalizable states for $0\le \beta \le \pi/2$, while for $\pi/2 < \beta < \pi$ there is exactly one normalizable state, with the energy \begin{equation} E_0 = - {(- \cot\beta)}^{1/\nu} . \end{equation} We note that in the special case of a free particle, $\lambda=0$, the Bessel functions reduce to trigonometric and exponential functions. \subsection{$\lambda \le -3/4$} For $\lambda \le -3/4$, we write $\lambda = 1/4 - \nu^2$ with $\nu \ge 1$. ${\widehat H}$~is now essentially self-adjoint. Any prospective normalizable solution to \nr{eq:eigenvalue} would need to have $E<0$ and take the form $\sqrt{r} \, K_{\nu}(\sqrt{-E} \, r)$, but since now $\nu\ge1$, these solutions are not normalizable and hence do not exist. \section{Polymer quantization} \label{sec:Polymer Quantization} In this section we develop the polymer quantization of the $1/r^2$ potential. We proceed as in~\cite{HLW}, briefly reiterating the main steps for completeness. It is necessary to extend the $r$ coordinate to negative values with the replacement $r \rightarrow x \in \mathbb{R}$ in order to use central finite difference schemes at the origin. This will allow us to introduce at the origin both a symmetric boundary condition (with the regulated potential developed in subsection~\ref{subsec:fqpt}) and an antisymmetric boundary condition. The polymer Hilbert space on the full real line is spanned by the basis states \begin{equation} \psi_{x_0}(x) = \left\{ \begin{array}{ll} 1, & x=x_0\\ 0, & x\neq x_0 \end{array} \right. \end{equation} with the inner product \begin{equation} (\psi_x , \psi_{x^\prime}) = \delta_{x,x^\prime}, \label{eq:bohr-ip} \end{equation} where the object on the right hand side is the Kronecker delta. The position operator acts by multiplication as \begin{equation} (\hat{x} \psi)(x) = x \psi(x). \label{xact} \end{equation} Defining a momentum operator takes more care.
Consider the translation operator~$\widehat{U}_\mu$, which acts as \begin{equation} (\widehat{U}_\mu \psi)(x) = \psi(x+\mu). \label{Uact} \end{equation} In ordinary Schr\"odinger quantization we would have $\widehat{U}_\mu=e^{i\mu \hat{p}}$. Following~\cite{AFW}, we hence define the momentum operator and its square as \begin{subequations} \begin{eqnarray} \hat{p} &=& \frac{1}{2i{\bar{\mu}}}(\widehat{U}_{\bar{\mu}} - \widehat{U}_{\bar{\mu}}^\dagger),\\ \widehat{p^2} &=& \frac{1}{\mu^2}(2 - \widehat{U}_\mu - \widehat{U}_\mu^\dagger), \end{eqnarray} \end{subequations} where ${\bar{\mu}} := \mu/2$. We may thus write the polymer Hamiltonian as \begin{equation} \widehat{H}_{\mathrm{pol}} = \widehat{T}_{\mathrm{pol}} + \widehat{V}_{\mathrm{pol}} \ , \label{Hpol} \end{equation} where \begin{subequations} \begin{eqnarray} \widehat{T}_{\mathrm{pol}} &=& \frac{1}{\mu^2} (2 - \widehat{U}_\mu - \widehat{U}_\mu^\dagger), \label{Tpol} \\ \widehat{V}_{\mathrm{pol}} &=& - \frac{\lambda}{{\hat{x}^2}}. \label{Vpol} \end{eqnarray} \end{subequations} Considering the action of $\hat{x}$ and $\widehat{U}_\mu$, we see that the dynamics generated by (\ref{Hpol}) separates the polymer Hilbert space into an infinite number of superselection sectors, each having support on a regular $\mu$-spaced lattice $\{\Delta + n\mu \mid n \in \mathbb{Z}\}$. The value of $\Delta \in [0,\mu)$ picks the sector. Since we wish to study singularity resolution, we concentrate on the $\Delta=0$ sector, which we expect the singularity of the potential to affect most. We shall discuss the regularization of $\widehat{V}_{\mathrm{pol}}$ at this singularity in subsection~\ref{subsec:fqpt}. \subsection{Semiclassical polymer theory} Before analyzing the full polymer quantum theory, we examine the semiclassical polymer spectrum using the Bohr-Sommerfeld quantization condition.
Following \cite{cosm sing res,Ashtekar:2005qt,bh-interior}, we take the classical limit of the polymer Hamiltonian (\ref{Hpol}) by keeping the polymerization scale $\mu$ fixed and making the replacement $\widehat{U}_\mu \to e^{i\mu p}$, where $p$ is the classical momentum. Note that this is different from the continuum limit in which $\mu$ goes to zero and the quantum theory is expected to be equivalent to Schr\"odinger quantization \cite{corichi07}. We assume $\lambda>0$. It follows, as will be verified below, that the classical polymer orbits never reach the origin, and we may hence assume the configuration variable $x$ to be positive and revert to the symbol~$r$. The classical polymer Hamiltonian thus takes the form \begin{equation} H_{\mathrm{pol}} = \frac{\sin^2({\bar{\mu}} p)}{{\bar{\mu}}^2} - \frac{\lambda}{r^2} . \label{eq:class-pol-ham} \end{equation} Note that $H_{\mathrm{pol}}$ reduces to the classical non-polymerized Hamiltonian \nr{eq:H-classical} in the limit ${\bar{\mu}}\to0$. A first observation is that the kinetic term in $H_{\mathrm{pol}}$ is non-negative and bounded above by~$1/{{\bar{\mu}}^2}$. Denoting the time-independent value of $H_{\mathrm{pol}}$ on a classical solution by~$E$, it follows that $E$ is bounded above by \begin{equation} E < E_{\mathrm{max}} := \frac{1}{{\bar{\mu}}^2} , \end{equation} and on a given classical solution $r$ is bounded below by $r\ge r_-$, where \begin{equation} r_- := \left(\frac{\lambda {\bar{\mu}}^2}{1-{\bar{\mu}}^2 E }\right)^{1/2} . \end{equation} An elementary analysis shows that every classical solution has a bounce at $r = r_-$. For $E\ge0$ this is the only turning point, and the solution is a scattering solution, with $r\to\infty$ as $t\to\pm\infty$. For $E<0$ there is a second turning point at $r=r_+ > r_-$, where \begin{equation} r_+ := \left(\frac{\lambda}{-E}\right)^{1/2} , \end{equation} and the solution is a bound solution, with $r$ oscillating periodically between $r_+$ and~$r_-$.
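The turning-point formulas can be verified with a short numerical check (an illustrative sketch of our own, not part of the original computation; the values $\lambda=2$, ${\bar{\mu}}=1/2$, $E=-1$ are ad hoc): at $r=r_-$ the kinetic term attains its maximum $1/{\bar{\mu}}^2$, while at $r=r_+$ it vanishes.

```python
import numpy as np

# Illustrative parameters (not from the text): lambda = 2, mubar = 1/2, E = -1.
lam, mubar, E = 2.0, 0.5, -1.0

def H_pol(r, p):
    """Classical polymer Hamiltonian sin^2(mubar p)/mubar^2 - lam/r^2."""
    return np.sin(mubar * p)**2 / mubar**2 - lam / r**2

r_minus = np.sqrt(lam * mubar**2 / (1 - mubar**2 * E))  # inner bounce radius
r_plus = np.sqrt(lam / (-E))                            # outer turning point

# At r_-, the kinetic term is maximal: mubar p = pi/2.
assert abs(H_pol(r_minus, np.pi / (2 * mubar)) - E) < 1e-12
# At r_+, the kinetic term vanishes: p = 0, as in the non-polymerized theory.
assert abs(H_pol(r_plus, 0.0) - E) < 1e-12
```

As expected, $r_+$ carries no dependence on ${\bar{\mu}}$, while $r_-$ shrinks with the polymerization scale.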
Note that $r_+$ is independent of~${\bar{\mu}}$, and the outer turning point in fact coincides with the turning point of the non-polymerized classical theory. The classical polymer solutions are thus qualitatively similar to the classical non-polymerized solutions at large~$r$, both for $E\ge0$ and for $E<0$. What is different is that the polymer energy is bounded from above, and more importantly that the polymer solutions bounce at $r=r_-$. In this sense the classical polymer theory has resolved the singularity at $r=0$. The resolution depends on the polymerization scale: for fixed~$E$, $r_- = {\bar{\mu}} \sqrt{\lambda} \bigl[ 1 + O({\bar{\mu}}^2) \bigr] \to 0$ as ${\bar{\mu}}\to0$, and for fixed~${\bar{\mu}}$, $r_- \to {\bar{\mu}} \sqrt{\lambda}$ as $E\to0$. As the $E<0$ solutions are periodic, we can use the Bohr-Sommerfeld quantization condition to estimate the semiclassical quantum spectrum. A~subtlety here is that semiclassical estimates already in ordinary Schr\"odinger quantization with a $1/r^2$ term involve a shift in the coefficient of this term~\cite{AH}. Anticipating a similar shift here, we look at the Bohr-Sommerfeld quantization condition with $\lambda$ replaced by~${\lambda_{\mathrm{eff}}}$, and we will then determine ${\lambda_{\mathrm{eff}}}$ by comparison with the Schr\"odinger quantization. For a classical solution with given~$E$, formula \nr{eq:class-pol-ham} implies (with $\lambda$ replaced by~${\lambda_{\mathrm{eff}}}$) \begin{equation} r = \frac{{\bar{\mu}} \sqrt{\lambda_{\mathrm{eff}}}}{\sqrt{ \sin^2({\bar{\mu}} p) - {\bar{\mu}}^2 E}}. 
\end{equation} Taking $E<0$, the phase space integral $J(E) := \oint r\,dp$ over a full cycle is hence \begin{align} J(E) &= \oint r\,dp \nonumber \\ &= 2 \sqrt{\lambda_{\mathrm{eff}}} \int_0^{\pi/(2{\bar{\mu}})} \frac{{\bar{\mu}} \, dp}{\sqrt{ \sin^2({\bar{\mu}} p) - {\bar{\mu}}^2 E}} \qquad (y = {\bar{\mu}} p) \nonumber \\ &= 2 \sqrt{\lambda_{\mathrm{eff}}} \int_0^{\pi/2} \frac{dy}{\sqrt{ \sin^2(y) - {\bar{\mu}}^2 E}} \nonumber \\ &= \frac{2 \sqrt{\lambda_{\mathrm{eff}}}}{\sqrt{1 - {\bar{\mu}}^2 E}} \int_0^{\pi/2} \frac{dy}{\sqrt{1 - {(1 - {\bar{\mu}}^2 E)}^{-1} \cos^2(y)}} \nonumber \\ &= \frac{2 \sqrt{\lambda_{\mathrm{eff}}}}{\sqrt{1 - {\bar{\mu}}^2 E}} \, K \left( {(1 - {\bar{\mu}}^2 E)}^{-1/2} \right) , \label{eq:J-finalform} \end{align} where $K$ is the complete elliptic integral of the first kind~\cite{gradstein}. In the limit ${\bar{\mu}}^2 E \to 0$, the expansion (8.113.3) in \cite{gradstein} yields \begin{align} J(E) &= 2 \sqrt{\lambda_{\mathrm{eff}}} \left[ \ln\left( \frac{4}{{\bar{\mu}}\sqrt{-E}} \right) + O\left({\bar{\mu}}\sqrt{-E}\right) \right] . \label{eq:J-asymptotic} \end{align} The Bohr-Sommerfeld quantization condition now states that the eigenenergies of the highly excited states are given asymptotically by $J(E) = 2\pi n$, where $n \gg 1$ is an integer. By~\nr{eq:J-asymptotic}, this gives the asymptotic eigenenergies \begin{equation} E_n = - \frac{16}{{\bar{\mu}}^2} \exp \bigl (- 2\pi n/\sqrt{\lambda_{\mathrm{eff}}} \bigr) , \ \ n\to\infty . \label{eq:pol-WKB-evs} \end{equation} The Bohr-Sommerfeld estimate \nr{eq:pol-WKB-evs} agrees with the spectrum \nr{tower} obtained from conventional Schr\"odinger quantization for $\lambda > 1/4$, provided ${\lambda_{\mathrm{eff}}} = \lambda - \frac14$ and we choose in \nr{tower} the self-adjoint extension for which \begin{equation} \beta = - \frac{\pi}{2} + \alpha \ln(4/{\bar{\mu}}) \ \ \ (\text{mod $\pi$}) .
\label{eq:BS-beta} \end{equation} The shift ${\lambda_{\mathrm{eff}}} = \lambda - \frac14$ is exactly that which arises in ordinary Schr\"odinger quantization of potentials that include a $1/r^2$ term: there the reason is the matching of the small-$r$ behaviour of the exact eigenstates to the WKB approximation. For a lucid analysis of this phenomenon in the quasinormal mode context, see the discussion between equations (23) and (28) in~\cite{AH}. Note, however, that in our system the Bohr-Sommerfeld condition cannot be applied directly to the unpolymerized theory, since $J(E)$ \nr{eq:J-finalform} diverges as ${\bar{\mu}}\to0$. \subsection{Full quantum polymer theory} \label{subsec:fqpt} We now return to the full polymer quantum theory, with the Hamiltonian \nr{Hpol} and $\lambda \in{\mathbb{R}}$. We write the basis states in Dirac notation as $\left| m\mu \right\rangle$, where $m\in{\mathbb{Z}}$. Writing a state in this basis as $\psi = \sum_m c_m \left|m\mu\right\rangle$, it follows from \nr{eq:bohr-ip} that the inner product reads $\left( \psi^{(1)}, \psi^{(2)} \right) = \sum_m \overline{{c_m}^{(1)}} \, c_m^{(2)}$. The Hilbert space is thus~$L_2({\mathbb{Z}})$. It will be useful to decompose this Hilbert space as the direct sum $L_2({\mathbb{Z}}) = L_2^s({\mathbb{Z}}) \oplus L_2^a({\mathbb{Z}})$, where the states in the symmetric sector $L_2^s({\mathbb{Z}})$ satisfy $c_m = c_{-m}$ and the states in the antisymmetric sector $L_2^a({\mathbb{Z}})$ satisfy $c_m = - c_{-m}$. The action of $\widehat{T}_{\mathrm{pol}}$ \nr{Tpol} reads \begin{equation} \widehat{T}_{\mathrm{pol}} \left( \sum_m c_m \left|m\mu\right\rangle \right) = \frac{1}{\mu^2} \sum_m \left( 2 c_m - c_{m+1} - c_{m-1} \right) \left|m\mu\right\rangle . \label{eq:T-action} \end{equation} $\widehat{T}_{\mathrm{pol}}$ is clearly a bounded operator on $L_2({\mathbb{Z}})$.
$\widehat{T}_{\mathrm{pol}}$ is manifestly symmetric, and an explicit solution of the eigenvalue equation $\widehat{T}_{\mathrm{pol}} \psi = E \psi$, given in equation \nr{recsol} below, shows that there are no normalizable solutions with $E=\pm i$. $\widehat{T}_{\mathrm{pol}}$ is hence essentially self-adjoint (\cite{reed-simonII}, Theorem X.2). It is also positive, since $\left( \psi, \widehat{T}_{\mathrm{pol}}\psi \right) > 0$ for any $\psi \ne 0$ by the Cauchy-Schwarz inequality. The action of $\widehat{V}_{\mathrm{pol}}$ \nr{Vpol} reads \begin{equation} \widehat{V}_{\mathrm{pol}} \left( \sum_m c_m \left|m\mu\right\rangle \right) = -\frac{\lambda}{\mu^2} \sum_m f_m^{\mathrm{pol}} \, c_m \left|m\mu\right\rangle , \label{eq:V-action} \end{equation} where \begin{equation} f_m^{\mathrm{pol}} := \frac{1}{m^2}. \label{eq:fpol-def} \end{equation} As \nr{eq:V-action} is ill-defined on any state for which $c_0\ne0$, $\widehat{V}_{\mathrm{pol}}$ is not a densely-defined operator on $L_2({\mathbb{Z}})$. We consider two ways to handle this singularity. The first way is to regulate $\widehat{V}_{\mathrm{pol}}$ explicitly. Recall that for $x \in {\mathbb{R}}\setminus \{0\}$ we can write \begin{equation} \frac{\sgn(x)}{\sqrt{|x|}} = 2 \frac{d(\sqrt{|x|})}{dx} , \label{divsub} \end{equation} and on our lattice this can be implemented as the finite difference expression \begin{equation} \frac{\sgn(x)}{\sqrt{|x|}} \rightarrow \frac{1}{\mu} \left(\sqrt{|x_{m+1}|} - \sqrt{|x_{m-1}|} \right) + O(\mu^2).
\label{eq:reg-invsqrt} \end{equation} We hence define the regulated polymer version of $\sgn(x)/\sqrt{|x|}$ by dropping the $O(\mu^2)$ term in~\nr{eq:reg-invsqrt}, and we define the regulated polymer potential $\widehat{V}_{\mathrm{pol}}^{\mathrm{reg}}$ by raising this to the fourth power, \begin{equation} \frac{\lambda}{{(x_m)}^2} \rightarrow \frac{\lambda}{\mu^4} \left(\sqrt{|x_{m+1}|} - \sqrt{|x_{m-1}|} \right)^4 , \end{equation} or \begin{equation} \widehat{V}_{\mathrm{pol}}^{\mathrm{reg}} \left( \sum_m c_m \left|m\mu\right\rangle \right) = - \frac{\lambda}{\mu^2} \sum_m f_m^{\mathrm{reg}} \, c_m \left|m\mu\right\rangle , \label{eq:Vreg-action} \end{equation} where \begin{equation} f_m^{\mathrm{reg}} := \left(\sqrt{|m+1|} - \sqrt{|m-1|} \right)^4. \label{eq:freg-def} \end{equation} $\widehat{V}_{\mathrm{pol}}^{\mathrm{reg}}$ is clearly a bounded essentially self-adjoint operator on $L_2({\mathbb{Z}})$, and its operator norm is $4|\lambda|/(\mu^2)$. The regulated polymer Hamiltonian can now be defined by \begin{equation} \widehat{H}_{\mathrm{pol}}^{\mathrm{reg}} = \widehat{T}_{\mathrm{pol}} + \widehat{V}_{\mathrm{pol}}^{\mathrm{reg}} . \end{equation} It follows by the Kato-Rellich theorem (\cite{reed-simonII}, Theorem X.12) that $\widehat{H}_{\mathrm{pol}}^{\mathrm{reg}}$ is essentially self-adjoint on $L_2({\mathbb{Z}})$ and bounded below by $-4|\lambda|/(\mu^2)$. Further, both $\widehat{T}_{\mathrm{pol}}$ and $\widehat{V}_{\mathrm{pol}}^{\mathrm{reg}}$ leave $L_2^s({\mathbb{Z}})$ and $L_2^a({\mathbb{Z}})$ invariant, and their boundedness and self-adjointness properties mentioned above hold also for their restrictions to $L_2^s({\mathbb{Z}})$ and $L_2^a({\mathbb{Z}})$. It follows that $\widehat{H}_{\mathrm{pol}}^{\mathrm{reg}}$ restricts to both $L_2^s({\mathbb{Z}})$ and $L_2^a({\mathbb{Z}})$ as a self-adjoint operator bounded below by $-4|\lambda|/(\mu^2)$. 
We denote both of these restrictions by $\widehat{H}_{\mathrm{pol}}^{\mathrm{reg}}$, leaving the domain to be understood from the context. The second way to handle the singularity of $\widehat{V}_{\mathrm{pol}}$ \nr{eq:V-action} is to restrict at the outset to the antisymmetric subspace $L_2^a({\mathbb{Z}})$, where $\widehat{V}_{\mathrm{pol}}$ is essentially self-adjoint and its operator norm is $|\lambda|/(\mu^2)$. It follows as above that the unregulated polymer Hamiltonian \begin{equation} \widehat{H}_{\mathrm{pol}} = \widehat{T}_{\mathrm{pol}} + \widehat{V}_{\mathrm{pol}} \end{equation} on $L_2^a({\mathbb{Z}})$ is essentially self-adjoint and bounded below by $-|\lambda|/(\mu^2)$. Two comments are in order. First, $\widehat{H}_{\mathrm{pol}}^{\mathrm{reg}}$ can be written in terms of operators as \begin{equation} \widehat{H}_{\mathrm{pol}}^{\mathrm{reg}} = \frac{1}{\mu^2}(2 - \widehat{U}_\mu - \widehat{U}_\mu^\dagger) - \frac{\lambda}{\mu^4}\left( \widehat{U}_\mu \sqrt{|\hat{x}|} \widehat{U}_\mu^\dagger - \widehat{U}_\mu^\dagger \sqrt{|\hat{x}|} \widehat{U}_\mu \right)^4. \label{Hreg} \end{equation} The potential in (\ref{Hreg}) can hence be viewed as arising by the substitution \begin{equation} \frac{\sgn(x)}{\sqrt{|x|}} \rightarrow \frac{2}{i\mu}\widehat{U}_\mu^\dagger \left\{ \sqrt{|x|},\widehat{U}_\mu \right\} , \end{equation} in place of~(\ref{divsub}). This method is similar to Thiemann's regularization of inverse triad operators in loop quantum gravity~\cite{Thiemann}. Second, the regulated potential vanishes at the origin but is greater in absolute value than the unregulated potential for $|m|\geq1$. However, the difference is significant only for the lowest few~$|m|$, and the two potentials quickly converge as $|m|\to\infty$, as shown in Figure~\ref{x2plot}. The regulated and unregulated potentials hence differ significantly only near the singularity.
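The comparison between $f_m^{\mathrm{reg}}$ and $f_m^{\mathrm{pol}}$ can be checked directly (a minimal numerical sketch of our own; the cutoff $m\le1000$ is an arbitrary illustrative choice): the regulated sequence dominates the unregulated one for $|m|\geq1$, its maximum $f_1^{\mathrm{reg}}=4$ reproduces the operator norm bound $4|\lambda|/\mu^2$, and the two sequences converge rapidly.

```python
import numpy as np

m = np.arange(1, 1001)
f_reg = (np.sqrt(m + 1) - np.sqrt(np.abs(m - 1)))**4  # regulated factor
f_pol = 1.0 / m**2                                    # unregulated factor 1/m^2

# The regulated potential dominates the unregulated one for all m >= 1 ...
assert np.all(f_reg > f_pol)
# ... its maximum sits at m = 1 with value (sqrt 2)^4 = 4, giving the
# operator norm 4|lambda|/mu^2 of the regulated potential ...
assert f_reg[0] == f_reg.max() and abs(f_reg[0] - 4.0) < 1e-12
# ... and the two sequences agree to better than a percent already at m = 10.
assert abs(f_reg[9] / f_pol[9] - 1) < 0.01
```

The domination $f_m^{\mathrm{reg}} > f_m^{\mathrm{pol}}$ follows analytically from $2\sqrt{m} > \sqrt{m+1}+\sqrt{m-1}$.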
\subsection{Eigenstates and the numerical method} \label{sec:num-method} We are now ready to look for the eigenstates of the Hamiltonian. Writing the eigenstate as $\psi = \sum_m c_m \left|m\mu\right\rangle$ and denoting the eigenvalue by~$E$, the regulated eigenvalue equation $\widehat{H}_{\mathrm{pol}}^{\mathrm{reg}} \psi = E \psi$ and the unregulated eigenvalue equation $\widehat{H}_{\mathrm{pol}} \psi = E \psi$ both give a recursion relation that takes the form \begin{equation} c_m\left( 2 - \mu^2 E -\lambda f_m \right) = c_{m+1} + c_{m-1} , \label{recursion} \end{equation} where $f_m = f_m^{\mathrm{reg}}$ \nr{eq:freg-def} for the regulated potential and $f_m = f_m^{\mathrm{pol}}$ \nr{eq:fpol-def} for the unregulated potential. Note that the polymerization scale $\mu$ enters this recursion relation only in the combination~$\mu^2 E$, whether or not the potential is regulated. This is a direct consequence of the scale invariance of the potential. From now on, we take $\lambda>0$ and $E<0$. We use the ``shooting method'' that was applied in \cite{HLW} to the polymerized $1/|x|$ potential. For large~$|m|$, (\ref{recursion}) is approximated by \begin{equation} c_m\left( 2 - \mu^2 E \right) = c_{m+1} + c_{m-1}. \label{recapprox} \end{equation} The linearly independent solutions to \nr{recapprox} are \begin{equation} c_m = \left[1 - \frac{\mu^2 E}{2} + \sqrt{\left(1 - \frac{\mu^2 E}{2}\right)^2-1}\right]^{\pm m}. \label{recsol} \end{equation} The upper (respectively lower) sign gives coefficients that increase (decrease) exponentially as $m \rightarrow \infty$. We can therefore use \nr{recsol} with the lower sign to set the initial conditions at large positive $m$ \cite{elaydi}. To set up the shooting problem, we choose a value for $\mu^2 E$ and begin with some $m_0 \gg \sqrt{\frac{\lambda}{\mu^2|E|}}$ to find $c_{m_0}$ and $c_{m_0 - 1}$ using the approximation~(\ref{recsol}). We then iterate downwards with~(\ref{recursion}). 
In the antisymmetric sector, we stop the iteration at $c_0$ and shoot for values of $\mu^2 E$ for which $c_0=0$. This shooting problem is well defined both for the unregulated potential \nr{eq:V-action} and for the regulated potential~\nr{eq:Vreg-action}, since the computation of $c_0$ via (\ref{recursion}) does not require evaluation of $f_m$ at $m=0$. In the symmetric sector, we stop the iteration at $c_{-1}$ and shoot for values of $\mu^2 E$ for which $c_{-1}=c_1$. As the computation of $c_{-1}$ requires evaluation of $f_m$ at $m=0$, the symmetric sector is well defined only for the regulated potential. \section{Results} \label{sec:results} We shall now compare the spectra of full polymer quantization, Bohr-Sommerfeld polymer quantization and ordinary Schr\"{o}dinger quantization. We are particularly interested in the sensitivity of the results to the choice of the symmetric versus the antisymmetric sector. First of all, we find that when the potential is regulated, the choice of the symmetric versus antisymmetric boundary condition in the full polymer quantum theory has no significant qualitative effect for sufficiently large~$\lambda$, the only difference being slightly lower eigenvalues for the symmetric boundary condition. The lowest five eigenvalues in the two sectors are shown in Table \ref{evorder} for $\lambda=2$. This is in sharp contrast with what was found in \cite{HLW} for the $1/r$ potential, where the symmetric sector contained a low-lying eigenvalue that appeared to tend to $-\infty$ as the polymerization scale was decreased.
\begin{table}[htb] \begin{center} \begin{tabular}{|c|c|c|} \hline & antisymmetric & symmetric\\ \hline $E_0$&-6.14&-6.37\\ $E_1$&$-2.35\cdot10^{-2}$&$-2.43\cdot10^{-2}$\\ $E_2$&$-2.03\cdot10^{-4}$&$-2.10\cdot10^{-4}$\\ $E_3$&$-1.76\cdot10^{-6}$&$-1.82\cdot10^{-6}$\\ $E_4$&$-1.52\cdot10^{-8}$&$-1.57\cdot10^{-8}$\\ \hline \end{tabular} \end{center} \caption{The lowest five eigenvalues of the regulated potential with antisymmetric and symmetric boundary conditions ($\lambda=2$, $\mu=1$).} \label{evorder} \end{table} Another key feature is that for sufficiently large $\lambda$ there is indeed a negative energy ground state. For $3\le\lambda\le4$, the plots of the lowest eigenvalues as a function of $\lambda$ in Figures \ref{x2urEvsL} and \ref{x2rEvsL} show that the analytic lower bound obtained in subsection \ref{subsec:fqpt} is accurate within a factor of $1.2$ for the regulated potential in the antisymmetric sector and within a factor of two for the unregulated potential. Figures \ref{x2urEvsL} and \ref{x2rEvsL} also indicate that the lowest eigenvalues converge towards zero as $\lambda$ decreases, for both the unregulated and regulated potentials, with the unregulated eigenvalues reaching zero slightly before the regulated. Near $E_n=0$ the relationship is quadratic in $\lambda$ while the plots straighten out to a linear relationship for larger~$\lambda$. The numerics become slow when the energies are close to zero. We were unable to investigate systematically whether bound states exist for $\lambda\leq1/4$, and in particular to make a comparison with the single bound state that occurs in Schr\"odinger quantization with certain self-adjoint extensions. For $\lambda$ slightly below~$1/4$, we do find one bound state, but we do not know whether the absence of further bound states is a genuine property of the system or an artefact of insufficient computational power. This would be worthy of further investigation.
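For concreteness, the shooting procedure of Section \ref{sec:num-method} can be sketched in a few lines (our own illustrative implementation; the recursion depth $m_0=200$ and the bisection bracket are ad hoc choices guided by Table \ref{evorder}):

```python
import numpy as np

def f_reg(m):
    """Regulated potential factor (sqrt|m+1| - sqrt|m-1|)^4."""
    return (np.sqrt(abs(m + 1)) - np.sqrt(abs(m - 1)))**4

def c0(mu2E, lam, f, m0=200):
    """Iterate c_{m-1} = (2 - mu^2 E - lam f_m) c_m - c_{m+1} downward
    from the exponentially decaying asymptotic solution; return c_0."""
    # Decaying branch of the large-|m| recursion: c_m ~ t^{-m} with t > 1.
    t = 1 - mu2E / 2 + np.sqrt((1 - mu2E / 2)**2 - 1)
    c_plus, c = 1.0, t  # c_{m0} and c_{m0-1}, normalized to c_{m0} = 1
    for m in range(m0 - 1, 0, -1):
        c_plus, c = c, (2 - mu2E - lam * f(m)) * c - c_plus
    return c

def shoot(lam, f, lo, hi, tol=1e-12):
    """Bisect on mu^2 E for c_0 = 0 (the antisymmetric boundary condition)."""
    clo = c0(lo, lam, f)
    assert clo * c0(hi, lam, f) < 0, "bracket must contain a sign change"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if c0(mid, lam, f) * clo > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Regulated potential, lambda = 2, mu = 1; bracket isolates the ground state.
E0 = shoot(2.0, f_reg, -7.5, -4.5)
```

For these parameters the bisection converges to an antisymmetric-sector ground-state energy near the tabulated value in Table \ref{evorder}.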
The eigenvalues show a similar dependence on $\lambda$ for both regulated and unregulated potentials, with the energies for the regulated potential being lower than those for the unregulated version as one would expect from comparing the potentials as in Figure~\ref{x2plot}. For $\lambda > 1/4$, we find that the eigenvalues $E_n$ depend on $n$ exponentially, except for the lowest few eigenvalues ($n=0,1$). The coefficient in the exponent is in close agreement with the exact Schr\"odinger spectrum (\ref{tower}) and with the Bohr-Sommerfeld polymer spectrum (\ref{eq:pol-WKB-evs}) with ${\lambda_{\mathrm{eff}}} = \lambda - 1/4$. Representative spectra are shown as semi-log plots in Figures \ref{x2urEvsn} and~\ref{x2rEvsn}, where the linear fit is computed using only the points with $n\geq2$. By matching the linear fit to the Schr\"odinger spectrum (\ref{tower}) and reading off the self-adjointness parameter~$\beta$, we can determine the self-adjoint extension of the Schr\"odinger Hamiltonian that matches the polymer theory for the highly-excited states. The results, shown in Figures \ref{x2urBvsA} and~\ref{x2rBvsA}, indicate that the self-adjointness parameter $\beta$ depends linearly on the coupling parameter~$\alpha$, and the slope in this relation is within 10 per cent of the slope obtained from the Bohr-Sommerfeld estimate~\nr{eq:BS-beta}, $\ln 8 \approx 2.0794$ (for $\mu =1$, ${\bar{\mu}} =1/2$). Finally, our numerical eigenvalues $E_n$ are in excellent agreement with the analytic approximation scheme of~\cite{GKS}, provided this scheme is understood as the limit of large $\lambda$ with fixed~$n$. If the numerical results shown in Figures \ref{x2urBvsA} and \ref{x2rBvsA} are indicative of the complementary limit of large $n$ with fixed~$\lambda$, they show that the approximation scheme of \cite{GKS} does not extend to this limit.
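The Bohr-Sommerfeld ingredients entering this comparison can be cross-checked numerically: the sketch below (our own, with illustrative parameter values) confirms the closed form \nr{eq:J-finalform} for $J(E)$ against direct quadrature, and the logarithmic asymptotics \nr{eq:J-asymptotic} for small ${\bar{\mu}}^2|E|$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk  # complete elliptic integral; argument is m = k^2

lam_eff, mubar = 2.0 - 0.25, 0.5  # lambda_eff = lambda - 1/4 for lambda = 2

def J_quad(E):
    """Direct quadrature of the action integral J(E)."""
    integrand = lambda y: 1.0 / np.sqrt(np.sin(y)**2 - mubar**2 * E)
    val, _ = quad(integrand, 0.0, np.pi / 2)
    return 2 * np.sqrt(lam_eff) * val

def J_closed(E):
    """Closed form via the complete elliptic integral K."""
    s = 1 - mubar**2 * E
    return 2 * np.sqrt(lam_eff) / np.sqrt(s) * ellipk(1.0 / s)

# The closed form agrees with direct quadrature ...
assert abs(J_closed(-1.0) - J_quad(-1.0)) < 1e-6
# ... and approaches 2 sqrt(lam_eff) ln(4/(mubar sqrt(-E))) as mubar^2 E -> 0.
E = -4e-4
J_asym = 2 * np.sqrt(lam_eff) * np.log(4 / (mubar * np.sqrt(-E)))
assert abs(J_closed(E) / J_asym - 1) < 1e-2
```

Note that SciPy's `ellipk` takes the parameter $m=k^2$, so the modulus ${(1-{\bar{\mu}}^2E)}^{-1/2}$ in \nr{eq:J-finalform} enters as $(1-{\bar{\mu}}^2E)^{-1}$.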
\section{Conclusions} \label{sec:conclusions} We have compared Schr\"odinger and polymer quantizations of the $1/r^2$ potential on the positive real line. The broad conclusion is that these quantization schemes are in excellent agreement for the highly-excited states and differ significantly only for the low-lying states. In particular, the polymer spectrum is bounded below, whereas the Schr\"odinger spectrum is known to be unbounded below when the coefficient of the potential term is sufficiently negative. We also find that the Bohr-Sommerfeld semiclassical quantization condition reproduces correctly the distribution of the highly-excited polymer eigenvalues. At some level this agreement is not surprising, since one expects that for any mathematically consistent quantization scheme, in an appropriate large-$n$ semiclassical limit, the spectra should agree. With the antisymmetric boundary condition, the Schr\"odinger quantization and both the regulated and unregulated polymer quantizations meet these criteria, so it is perhaps not surprising that they agree at least for energies close to zero. It is somewhat surprising that they agree so well for low $n$ (where ``low'' is in the context of the polymer spectra, which are bounded below). A central conceptual point was the regularization of the classical $r=0$ singularity in the polymer theory. We did this first by explicitly regulating the potential, using a finite differencing scheme that mimics the Thiemann trick used with the inverse triad operators in LQG~\cite{Thiemann}: this method allows a choice of either symmetric or antisymmetric boundary conditions at the origin. We then observed, as previously noted in~\cite{GKS}, that the singularity can alternatively be handled by leaving the potential unchanged but just imposing the antisymmetric boundary condition at the origin. The numerics showed that all of these three options gave very similar spectra, and the agreement was excellent for the highly-excited states.
To what extent is the agreement of these three regularization options specific to the $1/r^2$ potential? Consider the polymer quantization of the Coulomb potential, $-1/r$. When the Coulomb potential is explicitly regulated, it was shown in \cite{HLW} that the choice between the symmetric and antisymmetric boundary conditions makes a significant difference for the ground state energy. We have now computed numerically the lowest five eigenenergies for the unregulated $-1/r$ potential with the antisymmetric boundary condition, with the results shown in Table~\ref{xurev}. Comparison with the results in \cite{HLW} shows that the regularization of the potential makes no significant difference with the antisymmetric boundary condition. As noted in \cite{HLW}, for sufficiently small lattice spacing the antisymmetric boundary condition spectrum tends to that which is obtained in Schr\"odinger quantization with the conventional hydrogen s-wave boundary condition~\cite{fewster-hydrogen}. \begin{table}[htb] \begin{center} \begin{tabular}{|c|c|} \hline $n$ & $E_n$\\ \hline 0&-0.250\\ 1&-0.0625\\ 2&-0.0278\\ 3&-0.0156\\ 4&-0.0100\\ \hline \end{tabular} \end{center} \caption{The lowest five eigenvalues for the unregulated Coulomb potential with the antisymmetric boundary condition ($\lambda=1$, $\mu=0.1$).} \label{xurev} \end{table} We conclude that in polymer quantization of certain singular potentials, a suitably-chosen boundary condition suffices to produce a well-defined and arguably physically acceptable quantum theory, without the need to explicitly modify the classical potential near its singularity: the antisymmetric boundary condition effectively removes the $r=0$ eigenstate from the domain of the operator $1/r^2$ by requiring $c_0=0$ in the basis state expansion $\sum_m c_m |m\mu \rangle$. 
A similar observation has been made previously in polymer quantization of a class of cosmological models, as a way to obtain singularity avoidance without recourse to the Thiemann trick \cite{ashtekar07,corichi08}, and related discussion of the self-adjointness of polymer Hamiltonians arising in the cosmological context has been given in~\cite{Kaminski07}. While we are not aware of a way to relate our system, with the $-1/r^2$ potential and no Hamiltonian constraints, directly to a specific cosmological model, it is nonetheless reassuring that the various techniques we have used in this case for dealing with the singularity all lead to quantitatively similar spectra. Whether this continues to hold in polymer quantization of theories that are more closely related to LQG is an important open question that is currently under investigation~\cite{KLZ}. \section*{Acknowledgements} We thank Jack Gegenberg, Sven Gnutzmann and Viqar Husain for helpful discussions and correspondence. GK and JZ were supported in part by the Natural Sciences and Engineering Research Council of Canada. JL was supported in part by STFC (UK) grant PP/D507358/1.
\section{Introduction} \IEEEPARstart{U}{ltrasound} elastography (USE), a non-invasive clinical diagnostic technique, measures tissue stiffness, a mechanical property that correlates significantly with tissue pathology. As an adjunct to conventional B-mode ultrasound, USE improves diagnostic quality and can provide important qualitative and quantitative information about tissue stiffness that may be helpful in many clinical applications, e.g., investigating diseases such as fibrosis, cirrhosis, and hepatitis in the liver, and diagnosing cancer in organs such as the breast and prostate \cite{ferraioli2012accuracy,chang2011clinical,barr2017wfumb}. In addition to extracting mechanical properties of tissues, USE has been successfully used to determine the state of muscles and tendons \cite{hatta2015quantitative}, assess stiffness in the brain, and assist in the removal of blood clots \cite{ghanouni2015transcranial}. The USE imaging workflow starts with a stimulation that displaces the tissue, followed by the acquisition of echo signals, usually known as ultrasound radio frequency (RF) signals, and finally the processing of the RF data to reconstruct the elastography image. To generate tissue displacement, different excitation approaches, e.g., manual force, acoustic radiation force (ARF), and vibration, are used in clinically applicable USE techniques. Among these techniques, Quasi-static Elastography (QSE), Acoustic Radiation Force Impulse Imaging (ARFI), Shear Wave Elastography (SWE), Supersonic Shear Imaging (SSI), and Transient Elastography (TE) are widely applied for cancer diagnosis and clinical management \cite{barr2017wfumb,nightingale2001feasibility,berg2012shear,bercoff2004supersonic,sandrin2003transient}. Several studies report that QSE combined with B-mode ultrasound improves the diagnosis and evaluation of breast lesions \cite{tan2008improving}. However, QSE has limitations, as it is highly operator dependent and incapable of imaging deeper organs \cite{li2017quality}. 
Moreover, the quantitative information provided by SWE yields better diagnostic performance than QSE \cite{yang2017qualitative}. Therefore, SWE has emerged as a new imaging tool that has high reproducibility, is capable of imaging deeper organs, and has low operator dependency compared to QSE \cite{sim2015value}. It has shown strong performance in breast, liver, and prostate lesion detection and diagnosis \cite{sang2017accuracy,yang2017qualitative,li2017quality}. In SWE imaging, an automated stimulation of tissue displacement by ARF is induced to generate a propagating shear wave. As a result of this propagation, tissues are displaced in the direction normal to the wave propagation. The initial challenge is to track such small tissue displacements over time. In most SWE algorithms, tissue displacement is estimated by normalized cross-correlation between the tracked ultrasound reference and the displaced echo data \cite{pinton2005real}. Ultrasound tracking, however, suffers from jitter and depends on the transducer bandwidth, signal-to-noise ratio (SNR), kernel length, the correlation coefficient between the RF lines being tracked, and the magnitude of the tracked RF lines relative to the tracking frequency \cite{cespedes1995theoretical,hollender2015single}. When the shear wave speed (SWS) is estimated from such noisy tracked tissue motion, the estimate is prone to error. Therefore, denoising schemes, e.g., particle filters, directional filters, and EMD-based denoising, are adopted in different reported articles to make the motion estimation robust \cite{deffieux2011effects,8169096}. Two types of approaches are reported for estimating the SWS from the denoised motion data. The first category, called the time-of-flight (ToF) algorithms, locally estimates the wave arrival time using the maximum displacement peaks \cite{rouze2012parameters,amador2017improved} or the cross-correlation of time signals \cite{bercoff2004supersonic, song2012comb}. 
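The normalized cross-correlation tracking step described above can be illustrated with a toy 1-D example. The function name, kernel and search sizes, and synthetic RF line below are hypothetical choices for illustration, not the cited implementation:

```python
import numpy as np

def ncc_displacement(ref, disp, kernel=16, search=4):
    """Estimate the axial shift (in samples) of `disp` relative to `ref`
    by maximizing the normalized cross-correlation over a search window.
    A toy 1-D illustration of the tracking step used in SWE."""
    center = len(ref) // 2
    k = ref[center - kernel // 2:center + kernel // 2]
    best_rho, best_lag = -np.inf, 0
    for lag in range(-search, search + 1):
        s = disp[center - kernel // 2 + lag:center + kernel // 2 + lag]
        rho = np.dot(k - k.mean(), s - s.mean()) / (
            np.linalg.norm(k - k.mean()) * np.linalg.norm(s - s.mean()) + 1e-12)
        if rho > best_rho:
            best_rho, best_lag = rho, lag
    return best_lag

# Synthetic RF line, displaced copy shifted by 2 samples
rng = np.random.default_rng(0)
rf = rng.standard_normal(256)
shifted = np.roll(rf, 2)
print(ncc_displacement(rf, shifted))  # 2
```

In practice the tracked displacement is subsample-accurate (e.g., via interpolation of the correlation peak), which this integer-lag sketch omits.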
Although ToF-based algorithms are fast and implementable in real time, they are not noise-robust because of noise amplification during the inversion operation and the misplacement of peaks \cite{wang2013precision}. The second category of methods for shear wave velocity estimation operates in the frequency domain. These approaches estimate the phase velocity either from the local maximum wave number or from a two-dimensional Fourier transform of the time-space signal \cite{8485657,bernal2011material}. In both categories of approaches, the number of ARF pushes makes a difference in the quality of the reconstructed images, as described for LPVI and CSUE \cite{8485657,song2012comb}. Although the current state of the art is the LPVI technique, its efficacy, like that of other conventional approaches, largely depends on window selection. Moreover, the multiple pushes that may be required for improved SWE imaging create a risk of tissue heating. In recent years, Deep Neural Network (DNN) based methods have outperformed conventional state-of-the-art algorithms in signal and image processing tasks. DNNs have made it possible to automatically detect metastatic brain tumors and diagnose liver fibrosis and cardiac diseases \cite{7426413,8051114}. Notable imaging quality and accuracy have also been achieved in MRI image reconstruction \cite{8067520}, classification and segmentation problems \cite{8051114}, and image denoising \cite{8340157} with the incorporation of DNNs. In ultrasound elastography, DNN-based classification and QSE image reconstruction \cite{wu2018direct} have been published. It is reported that DNN-based QSE image reconstruction algorithms can effectively extract, represent, and integrate highly semantic features without manual intervention \cite{wu2018direct}. Therefore, a DNN-based SWE image reconstruction algorithm can be an alternative to the existing conventional algorithms. 
In this work, we propose SHEAR-net, a DNN-based, noise-robust, high-quality SWE image reconstruction method employing tracked tissue motion data induced by a single ARF pulse. The attributes of this proposed work are: \begin{itemize} \item A novel architecture called the S-net, which is a combination of 3-D CNN, convolutional LSTM, and 2-D CNN layers. The S-net solves the inclusion localization problem and reconstructs a sharp inclusion boundary using the wave patterns reflected from tissue boundaries. The temporal correlation among these patterns is captured using recurrent layers; \item A shear modulus estimation block using dense layers with skip connections that takes the concatenated feature maps of the S-net and the recurrent blocks as input; \item Dynamic training of the S-net, recurrent layers, and modulus estimation block with a multi-task learning (MTL) loss function. The latter makes the SHEAR-net an end-to-end learning DNN approach; \item The proposed technique retains almost the same image quality with half of the ARF intensity required by the conventional algorithms and estimates the shear modulus with greater accuracy for tissue displacements $\geq$0.5 $\mu$m; \item A larger ROI for imaging to visualize multiple inclusions with just a single push. \end{itemize} The test results reveal that SHEAR-net, the first-ever DNN-based SWE imaging technique, can outperform the state-of-the-art algorithms both in quality and in reconstruction time. \section{Materials and method} In this section, the newly proposed deep learning based approach, SHEAR-net, is adopted for shear modulus (SM) estimation from single-push ARF-induced tissue displacement. First, we present the network architecture in Sec. II-A; its representative block diagram is shown in Fig. \ref{pipe}. We also explain its functionality by dividing the SHEAR-net into sub-blocks. The blocks are optimized by a multi-task learning loss function as described in Sec. II-B. 
The first task for our problem is to localize the inclusion position from the raw displacement data and for that, we propose the S-net. The displacement data is passed into the RNN-Block (RB) as illustrated in Fig. \ref{pipe} and temporal correlation among the time frames is extracted without changing the image dimension. The output of the RB is concatenated with the output of S-net and finally, the Modulus Estimator (ME) as shown in Fig. \ref{pipe} calculates the absolute modulus for each pixel. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{PIPELINE} \caption{The detailed block diagram of the proposed SHEAR-net.} \label{pipe} \end{figure} \subsection{Proposed SHEAR-net} \subsubsection{S-net} We have designed an S-net, a combination of 3-D CNN block, recurrent block, and 2-D CNN block as shown in Fig. \ref{re} to achieve sharper edges in the inclusion boundary and localize the inclusion by reconstructing a binary mask. The raw displacement data $\mathbf{D}$, is given as \begin{equation} \mathbf{D}=[\mathbf{D}_1~\mathbf{D}_2\cdot\cdot\cdot\mathbf{D}_{T_d}],~\mathbf{D}_t\in\mathbb{R}^{h\times w\times1}|~t=1,~2,\cdot\cdot\cdot,T_d \label{eqn1} \end{equation} where $\mathbf{D}_t$ denotes the 3-D tissue displacement data with spatial dimension of $h \times w \times1$ at $t$ time frame and $T_d$ is the total frame count. The S-net first extracts a low level of spatio-temporal features $\mathbf{X}_{l_{st}}$ as the following \begin{equation} \begin{split} \mathbf{X}_{l_{st}}&=[\mathbf{X}^1_{l_{st}}~\mathbf{X}^2_{l_{st}}\cdot\cdot\cdot\mathbf{X}^{T_d}_{l_{st}}],\\ \mathbf{X}^t_{l_{st}}&=\mathcal{F}_3(\mathbf{D_t};\theta)\in\mathbb{R}^{\frac{h}{m}\times \frac{w}{m}\times f}|~t=1,~2,\cdot\cdot\cdot,T_d \end{split} \end{equation} \begin{figure*}[!t] \centering \includegraphics[width=6.5 in]{renew} \caption{The detailed feature map diagram of the proposed S-net. 
It is a 3-D CNN--recurrent block--2-D CNN combination.} \label{re} \end{figure*} where $\mathbf{X}^t_{l_{st}}$ denotes the encoded spatio-temporal feature maps with spatial dimension of $\frac{h}{m} \times \frac{w}{m} \times f$ at the $t$-th time frame, and $\mathcal{F}_3(\cdot)$, $m$, $f$, and $\theta$ represent the 3-D CNN operation, the shrink coefficient on the spatial domain, the number of feature maps, and the network parameters, respectively. The feature maps are then passed into the recurrent block that has multiple ConvLSTM units as shown in Fig. \ref{re1} (c). Although the LSTM is a powerful tool to handle temporal correlation in a given sequence, for more general solutions of spatio-temporal forecasting problems, the convolutional LSTM is superior in state transitions and in holding spatial information. As illustrated in Fig. \ref{re1} (c), the 2-D ConvLSTM block takes in the current input $X_t$ with the previous cell state $C_{t-1}$ and hidden state $H_{t-1}$ to generate the current cell state $C_t$ and hidden state $H_t$. The relation between the inputs and the gates is governed by \begin{align} i_t &=\sigma(W_{xi}*X_t+W_{hi}*H_{t-1}+b_i),\nonumber \\ f_t &=\sigma(W_{xf}*X_t+W_{hf}*H_{t-1}+b_f),\nonumber \\ C_t &=f_t\circ C_{t-1}+i_t\circ \tanh(W_{xc}*X_t+W_{hc}*H_{t-1}+b_c),\nonumber \\ o_t &=\sigma(W_{xo}*X_t+W_{ho}*H_{t-1}+b_o),\nonumber \\ H_t &=o_t \circ \tanh(C_t),\label{eqn_lstm} \end{align} where $*$ and $\circ$ are defined as the convolution and element-wise matrix multiplication operators, respectively; $W_{xi}$, $W_{hi}$, $W_{xf}$, $W_{hf}$, $W_{xc}$, $W_{hc}$, $W_{xo}$, $W_{ho}$, $b_i$, $b_f$, $b_c$, and $b_o$ are the convolutional parameters of the network, and $i_t$, $f_t$, and $o_t$ represent the input, forget, and output gates, respectively. In the network, `$\tanh$' is used as the activation function and $\sigma$ represents the sigmoid operation. 
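A minimal numerical sketch of one ConvLSTM update following the standard gate algebra (with hidden state $H_t = o_t \circ \tanh(C_t)$) is given below. For brevity the kernels are taken as 1$\times$1 scalars, so the convolution $*$ reduces to elementwise multiplication; a real ConvLSTM layer would use larger spatial kernels, and all names and shapes here are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def convlstm_step(X, H_prev, C_prev, W):
    """One ConvLSTM update per the gate equations: input, forget,
    cell, and output gates, then the hidden state o * tanh(C)."""
    i = sigmoid(W['xi'] * X + W['hi'] * H_prev + W['bi'])
    f = sigmoid(W['xf'] * X + W['hf'] * H_prev + W['bf'])
    g = np.tanh(W['xc'] * X + W['hc'] * H_prev + W['bc'])
    C = f * C_prev + i * g
    o = sigmoid(W['xo'] * X + W['ho'] * H_prev + W['bo'])
    H = o * np.tanh(C)
    return H, C

rng = np.random.default_rng(1)
h, w = 4, 4
W = {k: rng.standard_normal() for k in
     ('xi', 'hi', 'bi', 'xf', 'hf', 'bf', 'xc', 'hc', 'bc', 'xo', 'ho', 'bo')}
H = C = np.zeros((h, w))
for X in rng.standard_normal((5, h, w)):   # 5 time frames
    H, C = convlstm_step(X, H, C, W)
print(H.shape)  # (4, 4)
```

Because $o_t \in (0,1)$ and $\tanh(C_t) \in (-1,1)$, every entry of the hidden state stays strictly inside $(-1, 1)$.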
For a general input $\mathbf{X}=[\mathbf{X}_1\cdot\cdot\cdot\mathbf{X}_{T_d}]$, the S-net uses a ConvLSTM block for each time frame $t$ and extracts hidden states $\mathbf{H}$ and high-level spatio-temporal features $\mathbf{O}^t_{H_{st}}$ for each time frame as in the following \begin{align} \mathbf{O}_{H_{st}}&=[\mathbf{O}^1_{H_{st}}\mathbf{O}^2_{H_{st}}\cdot\cdot\cdot\mathbf{O}^{T_d}_{H_{st}}],\mathbf{R}=\mathcal{M}(\mathbf{O}_{H_{st}};\theta) \\ \mathbf{O}^t_{H_{st}}&=\mathcal{M}(\mathbf{X}_t;\theta)\in\mathbb{R}^{\frac{h}{m}\times \frac{w}{m}\times f}|~t=1,~2,\cdot\cdot\cdot,T_d\\ \mathbf{H}&=\mathcal{M}(\mathbf{X};\theta)\in\mathbb{R}^{\frac{h}{m}\times \frac{w}{m}\times f} \label{eqn3} \end{align} where $\mathcal{M}(\cdot)$ indicates the convolutional LSTM operation. A series of convolutional blocks extracts deeper high-level spatio-temporal features using (5), and the final ConvLSTM layer shrinks the temporal length to 1 and outputs the 3-D feature maps $\mathbf{R}$ in (4). After this 4-D to 3-D feature map conversion by the recurrent block, the S-net uses consecutive 2-D CNN and upsampling layers to extract high-level spatial features and restore the original image dimension by \begin{equation} \mathbf{X}_{H_{s}}=\mathcal{U}(\mathcal{F}_2(\mathbf{R};\theta))\in\mathbb{R}^{h\times w\times f}, \end{equation} where $\mathcal{U}(\cdot)$ and $\mathcal{F}_2(\cdot)$ represent the upsampling and 2-D CNN operations, respectively. Finally, the binary mask is reconstructed as \begin{equation} \mathbf{M}=\mathcal{F}_2(\mathbf{X}_{H_{s}};\theta)\in\mathbb{R}^{h\times w\times 1}, \end{equation} which is the target feature extracted by the S-net to localize the inclusion; in this convolutional layer, we use a `sigmoid' activation to keep the output within 0 to 1. \subsubsection{RME Block} The RME block consists of recurrent layers with skip connections and a modulus estimator for the reconstruction of the SM image. 
The recurrent layers, in this case, take the tissue displacement data $\mathbf{D}$ as input and learn the different reflection patterns corresponding to the wave propagation, such as waves reflected from inclusion boundaries and from tissue boundaries, as shown in Fig. \ref{ref}. The two recurrent layers use the tissue motion data and extract the reflection patterns by (4) and (5). A skip connection between the hidden states of the first layer calculated from (6) and the output of the second layer from (4) ensures smooth temporal feature propagation and is given by \begin{equation} \mathbf{R}^\tau=\mathbf{cat}(\mathbf{R},\mathbf{H})\in\mathbb{R}^{h\times w\times 2f}, \label{eqn9} \end{equation} where $\mathbf{cat}(\cdot)$ denotes the concatenation operation along the feature map axis. Some of the feature maps from \eqref{eqn9} are demonstrated in Fig. \ref{ref}; these reflected wave patterns are important for the SHEAR-net to inherently learn the stiffness variation. These high-level spatio-temporal feature maps are concatenated with the binary mask of the S-net by \begin{equation} \mathbf{O}^\tau=\mathbf{cat}(\mathbf{R}^\tau,\mathbf{M})\in\mathbb{R}^{h\times w\times 2f+1}, \end{equation} \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{renew1} \caption{Block diagram of the RME block consisting of (a) and (b); (a) recurrent layers of ConvLSTM and (b) ME, and (c) a single ConvLSTM block.} \label{re1} \end{figure} The final block, the ME shown in Fig. \ref{re1} (b), estimates the shear modulus from the concatenated feature maps $\mathbf{O}^\tau$. This feature map concatenation directs the ME to look for the reflection patterns from inclusion boundaries in the positions with high activation estimated by the S-net. The ME includes a convolutional layer and 3 dense blocks with skip connections. As seen in Fig. 
\ref{re1} (b), each dense block produces two outputs, $\mathbf{E}_{ij}$, given by \begin{equation} \begin{split} \mathbf{E}_{i1}&=\mathcal{F}_2(\mathbf{O}^\tau;\theta)\in\mathbb{R}^{h\times w\times f}\\ \mathbf{E}_{i2}&=\mathcal{F}_2(\textbf{cat}(\mathbf{E}_{i1},\textbf{O}^\tau);\theta)\in\mathbb{R}^{h\times w\times f} \end{split} \end{equation} which are concatenated together, and finally the 2-D CNN layers reconstruct the SM image from the concatenated feature maps as \begin{equation} \mathbf{P}=\mathcal{F}_2(\textbf{cat}(\mathbf{E}_{11},~\mathbf{E}_{12},\cdot\cdot\cdot,\mathbf{E}_{i2});\theta)\in\mathbb{R}^{h\times w\times 1}. \end{equation} Each input feature map in (12) has a different receptive field and therefore captures high-level spatial features for accurate SM estimation. The skip connections in the dense layers allow this smooth flow of features to concatenate with the forward layers and help to reduce the vanishing gradient problem. \subsection{Multi-Task Learning (MTL) Loss Function} The proposed SHEAR-net optimizes its weights based on a novel MTL loss function. The first task, of the S-net, is to localize the inclusion boundary, and the second task is to estimate the shear modulus value of each pixel using the RB with the ME (RME). However, optimizing one task is not independent of the other, as the output of the S-net is concatenated with the output of the RB as an input to the ME. \begin{figure}[!ht] \centering \includegraphics[width=3 in]{reflection} \caption{Illustrating outputs of the recurrent layer that extracts temporal correlation from the time frames and finds features related to the inclusion boundary, as seen from the shades.} \label{ref} \end{figure} For SWE imaging, we have already discussed the efficacy of the S-net. The goal is to label each pixel of the image either as an object (an inclusion in our case) or as background. In most cases, a major portion of the ROI belongs to the background class. 
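The dense-block computation with skip connections described above can be sketched numerically, with 1$\times$1 convolutions standing in for the 2-D CNN layers $\mathcal{F}_2(\cdot)$. All names, channel counts, and the ReLU choice below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def conv1x1(x, w):
    """1x1 convolution with ReLU: x is (h, w, c_in), w is (c_in, c_out)."""
    return np.maximum(np.tensordot(x, w, axes=([2], [0])), 0.0)

def dense_block(o_tau, w1, w2):
    """One dense block of the ME: two conv layers, the second re-using
    the block input via channel concatenation (the skip connection)."""
    e1 = conv1x1(o_tau, w1)
    e2 = conv1x1(np.concatenate([e1, o_tau], axis=2), w2)
    return e1, e2

rng = np.random.default_rng(2)
h, w, c = 8, 8, 5                      # e.g. 2f+1 input maps
o_tau = rng.standard_normal((h, w, c))
f = 4
e1, e2 = dense_block(o_tau, rng.standard_normal((c, f)),
                     rng.standard_normal((f + c, f)))
# final 2-D conv collapses the concatenated maps to one SM channel
P = conv1x1(np.concatenate([e1, e2], axis=2),
            rng.standard_normal((2 * f, 1)))
print(P.shape)  # (8, 8, 1)
```

The concatenation before the final layer is what gives the estimator inputs with different receptive fields, mirroring the role of the dense connections in the ME.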
As the ratio of the inclusion area to the background area is very small, the network has low accuracy in estimating the shear modulus inside the inclusion compared to that of the background in most cases. Therefore, we have adopted the IoU loss function to direct the SHEAR-net towards the inclusion area and emphasize the overlap of the ground truth and the predicted region. If $P$ and $G$ are the sets of predicted and ground truth binary labels, respectively, then the IoU function (also known as the Jaccard similarity index) is defined as \begin{equation} J_c(P,G) = \dfrac{|P \cap G|}{|P \cup G|} = \dfrac{|P \cap G|}{|P - G| + |P \cap G| + |G - P|}. \end{equation} The IoU loss is defined as \begin{equation} \label{eqn_iou_loss} \text{IoU}= 1 - J_c(P, G) = 1 - \dfrac{|P \cap G|}{|P \cup G| + \epsilon}, \end{equation} where $\epsilon$ is a small constant added to avoid division by zero, $10^{-7}$ in our case. For optimizing the RME, we have defined a loss function called the modulus loss, $L_m$, for estimating the shear modulus. The modulus loss is given by \begin{equation} L_m=\sum_{j=1}^{m}(G_j-P_j)^2, \label{mod} \end{equation} where $G_j$ and $P_j$ denote the ground truth and SHEAR-net predicted pixels, respectively, and $m$ is the number of pixels in the image. Now, our target is to optimize both loss functions at the same time. 
To this end, the joint loss function for the proposed SHEAR-net is given by \begin{table}[t] \centering \caption{\label{param}Simulation Parameters For Shear Wave Generation} \begin{tabu}{|c|c|} \hline \textbf{Parameter} & \textbf{Value}\\ \hline ARF intensity, A & $1\times10^6$~N/m$^3$ \\\hline $\sigma_x$, $\sigma_y$, $\sigma_z$ & 0.21 mm, 0.21 mm, 0.43 mm \\\hline Focusing point & $x_0$, $y_0$, $z_0$\\\hline \multirow{2}{*}{\textbf{Medium}} & Nearly incompressible\\\cline{2-2} & linear, isotropic, elastic solid\\\hline Poisson's ratio, $\nu$ & 0.499\\\hline Density, $\rho$ & 1000~kg/m$^3$\\\hline Time for ARF excitation & 200 $\mu$s\\\hline Time for shear wave propagation & 18 ms\\\hline FEM size & $40\times20\times40$~mm\\\hline Inclusion radius & 2.5~mm\\\hline Inclusion co-ordinate & 8~mm, 0~mm, 20~mm\\\hline Mesh element and size & tetrahedral and 0.2 mm\\\hline \end{tabu} \end{table} \begin{equation} J=\alpha L_m+(1-\alpha)\text{IoU}. \label{weight} \end{equation} For our work, the value of the weight factor, $\alpha$, is selected to be 0.5 after observing the consistency in the results for a different range of values. \begin{table}[b] \centering \caption{\label{var}Variable Parameters For Simulation Phantom Generation} \begin{tabular}{|c|c|} \hline \textbf{Parameter} & \textbf{Value}\\ \hline ARF intensity, A & $1\times10^6, 2\times10^6$~N/m$^3$\\\hline Inclusion radius & random number within 1--5 mm\\\hline Inclusion co-ordinate & randomly generated\\\hline Background stiffness (BS) & 10, 20~kPa\\\hline Inclusion (sphere, oval) modulus& 2, 4, 6, 8, 10 times of BS\\\hline \end{tabular} \end{table} \section{Experimental Setup} \subsection{Simulation of Shear Wave Propagation} In order to generate a shear wave (SW) in an elastic medium (see Table \ref{param}), we need to apply an ARF. It is reported that an ARF modeled with a Gaussian distribution \cite{palmeri2017guidelines} is highly correlated with the impulse response generated by a transducer. 
We have used COMSOL Multiphysics 5.1 to simulate the SW propagation, and further processing was done in MATLAB (Mathworks, Natick, MA) software to generate the 2-D tissue displacement. In our simulation, the ARF was modeled as a Gaussian impulse given by \begin{equation} \centering \text{ARF}=A\exp\left(-\left(\frac{(x-x_0)^2}{2\sigma_x^2}+\frac{(y-y_0)^2}{2\sigma_y^2}+\frac{(z-z_0)^2}{2\sigma_z^2}\right)\right), \end{equation} where $x_0$, $y_0$, $z_0$ represent the ARF focusing point and $\sigma_x$, $\sigma_y$, $\sigma_z$ define the ARF beam width in the $x$, $y$, $z$ directions, respectively. For safety, the ARF intensity $A$ is chosen to keep the maximum displacement around 20 $\mu$m to mimic the displacement required for real-life tissue imaging. The parameters used for the simulation of shear wave propagation are given in Table \ref{param}. \subsection{Simulation Dataset} First, we import the simulation data (i.e., the gold standard and training data) into MATLAB using the COMSOL-MATLAB interface for ultrasound tracking. In Field II, we have designed an L12-4 probe and tracked 2-D tissue displacement data using \cite{palmeri2006ultrasonic} over a time span of 8 ms. From the tracked motion data we have extracted 2-D displacement data of size $96\times48$ for 49 time frames. This size is taken because of memory constraints. The gold standard for the training of the network is the shear modulus image that is generated in COMSOL Multiphysics 5.1 and processed in MATLAB. Our goal is to create a dataset that has varied samples of breast and liver phantoms with different tissue stiffness and different inclusion shapes, sizes, and positions. For this reason, we have simulated a variety of finite element models that can mimic human breast (female) and liver tissues. These models particularly mimic breast fibroadenoma and homogeneous liver tissue. Table \ref{var} presents the parameters for the varied simulation data. For this initial study, we have generated data for a homogeneous inclusion in a homogeneous background. 
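The Gaussian ARF push model above can be evaluated directly; the sketch below uses the intensity and beam widths of Table \ref{param}, converted to SI units, with the focus placed at the origin for simplicity:

```python
import numpy as np

# Gaussian ARF body-force model with Table I values (SI units)
A = 1e6                                  # ARF intensity [N/m^3]
sx, sy, sz = 0.21e-3, 0.21e-3, 0.43e-3   # beam widths [m]
x0 = y0 = z0 = 0.0                       # focal point at origin

def arf(x, y, z):
    """Gaussian ARF impulse amplitude at position (x, y, z)."""
    return A * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                        + (y - y0) ** 2 / (2 * sy ** 2)
                        + (z - z0) ** 2 / (2 * sz ** 2)))

print(arf(0.0, 0.0, 0.0))       # peak value A at the focus
print(arf(0.21e-3, 0.0, 0.0))   # one beam width off-axis: A*exp(-0.5)
```

One lateral beam width away from the focus the force falls to $e^{-1/2} \approx 0.61$ of its peak, which sets the effective push-beam extent.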
The stiffness values were chosen to match previously reported models \cite{johns1987x}. The ARF intensity is varied so that a maximum tissue displacement of 20 $\mu$m or 10 $\mu$m is obtained. The ARF excitation point is kept fixed at the center for all the models. However, the center of the inclusion is not aligned with the center of focus in most of the data. This brings position variation into the dataset along with the variation of tissue stiffness. The total number of samples in our dataset is 800. The numbers of samples for spherical inclusion, fibroadenoma, and liver-mimicking tissue are 300, 300, and 200, respectively. \subsection{CIRS Phantom Dataset} For this study, we have downloaded CIRS experimental phantom (Model 049A, CIRS Inc., Norfolk, VA, USA) data from ftp://ftp.mayo.edu, received from the Ultrasound Research Laboratory, Department of Radiology, Mayo Clinic, USA. From the provided data, we have used the Type III and Type IV phantoms for our study, having a background stiffness of 25 kPa each and inclusion stiffnesses of 45 kPa and 80 kPa, respectively. Both types of phantoms had 4 different inclusion sizes, i.e., 2 mm, 4 mm, 6 mm, and 10 mm. The phantom has a sound speed of 1540 m/s and an ultrasound attenuation of 0.5 dB/cm/MHz, and the inclusions are centered around 30 mm and 60 mm from the phantom surface. In this study, the ARF pulse is focused at 30 mm with a duration of 400 $\mu$s, and the push frequency is 4.09 MHz. The push beam is generated by 32 active elements shifted by 16 elements from the end of the L7-4 probe and placed on each side of the inclusion. A single-push acquisition is used in our study, and the acquired data are processed using the auto-correlation algorithm to get the motion data with a frame rate of 11.765 kHz and a spatial resolution of 0.154 mm. 
All the CIRS phantom data are pre-processed using a 15-point locally weighted smoothing window \cite{cleveland1981lowess}, as tissue displacements are affected by high-frequency ultrasound tracking noise, also known as jitter. \subsection{Training, Optimization} Our model is implemented using the Keras library with a TensorFlow backend. We have split the dataset into training, validation, and test sets with 380, 160, and 121 simulation phantoms, 49 time frames each, respectively. We have also split our limited CIRS phantom data in the same way, with 8 in the training, 4 in the validation, and 4 in the test set. With end-to-end learning, we have trained the full SHEAR-net from scratch. Normalized 2-D tissue displacement data for 49 time frames are directly used as input for training without augmentation, since any augmentation that misplaces the displacement patterns will slow down the convergence rate. We have used a batch size of 16 because of memory constraints and ADAM as the optimizer with an initial learning rate of $5\times10^{-3}$. The learning rate is varied with cosine annealing and converges at around 100 epochs. As for the training labels, two sets of labels are generated: label 1 for the S-net and label 2 for the RME. Label 1 is a binary mask having a pixel value of 1 inside the inclusion and 0 outside. Label 2 is the true modulus image with the absolute shear modulus of each pixel. For each label, the SHEAR-net optimizes the loss function in \eqref{weight} and outputs the predicted modulus image. Note that for all the 49 time frames of a sample, we have used the above two labels. Given the input sequence and the target image, the loss function is calculated using forward propagation, and the parameters of the network are updated using backpropagation. This is repeated for a number of iterations, 120 in our case. 
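The cosine-annealed learning-rate schedule mentioned above can be sketched as follows. The exact schedule (minimum rate, restart policy) is not specified in the text, so this is one standard form under assumed defaults:

```python
import math

def cosine_annealed_lr(epoch, total_epochs=100, lr0=5e-3, lr_min=0.0):
    """Cosine-annealed learning rate decaying from lr0 to lr_min over
    `total_epochs` (assumed 100, matching the reported convergence)."""
    t = min(epoch, total_epochs) / total_epochs
    return lr_min + 0.5 * (lr0 - lr_min) * (1 + math.cos(math.pi * t))

print(cosine_annealed_lr(0))    # 0.005 at the start
print(cosine_annealed_lr(100))  # 0.0 at the end of the schedule
```

The rate starts at the reported $5\times10^{-3}$ and decays smoothly, which tends to stabilize the late-training updates of both MTL branches.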
\subsection{Evaluation metrics} We evaluate our proposed method's performance on the test set by computing the signal-to-noise ratio (SNR) as defined in \cite{wu2018direct}, the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) as defined in \cite{schlemper2018deep}, and the S{\o}rensen--Dice coefficient (DSC) ($ = {(2|P \cap G|)}/({|P| + |G|})$) as quantitative evaluation indices. We have also computed the runtime of our proposed method using a PC with an NVIDIA GeForce GTX 1080 Ti GPU and an Intel Core i7-7700K CPU @4.20 GHz. \begin{table}[t] \centering \caption{\label{homo_1}Quantitative Performance Evaluation Of The Reconstructed SM Image Using The LPVI Technique And The Proposed SHEAR-net For Two Homogeneous Phantoms} \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{\textbf{Type I}} & \multicolumn{2}{c|}{\textbf{Type II}} \\ \cline{2-5} \multirow{-2}{*}{ \textbf{Indices}}& \textbf{LPVI} & \textbf{SHEAR-net} & \textbf{LPVI} & \textbf{SHEAR-net} \\ \hline \textbf{PSNR{[}dB{]}} & 15.23 & 22.52 & 16.23 & 20.98 \\ \hline \textbf{SNR{[}dB{]}} & 23.11 & 39.41 & 24.56 & 35.82 \\ \hline \textbf{SSIM} & 0.45 & 0.94 & 0.63 & 0.87 \\ \hline \begin{tabular}[c]{@{}c@{}}\textbf{Background}\\ \textbf{mean$\pm{}$SD [kPa]}\end{tabular} & \begin{tabular}[c]{@{}c@{}}8.191\\ $\pm{1.773}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}6.816\\ $\pm{0.199}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}3.521\\ $\pm{0.926}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}3.282\\ $\pm{0.112}$\end{tabular} \\ \hline \end{tabular} \end{table} \begin{figure}[!t] \centering \includegraphics[width=3 in]{homo} \caption{Reconstructed 2-D SM images of two types of homogeneous simulation phantoms using the LPVI technique and the proposed SHEAR-net.} \label{homo_2} \end{figure} \section{Results} In this section, we present the results of our proposed SHEAR-net for both the simulation and experimental phantom data and also compare its performance with the most recent state-of-the-art algorithm: 
local phase velocity imaging (LPVI) \cite{8485657}. \subsection{Simulation Study} Our simulation test set contains homogeneous phantoms mimicking liver tissue and phantoms with spherical and oval-shaped inclusions that mimic breast tissue with fibroadenoma. We first show the results on homogeneous phantoms. Figure \ref{homo_2} presents 2-D SM images reconstructed using the LPVI method and our proposed SHEAR-net. The true mean SM for Type I is 6.663 kPa and that for Type II is 3.335 kPa. The reconstructed SM images of these two phantoms by the SHEAR-net show more homogeneity in comparison to those from the LPVI algorithm. The estimated values of the mean SM$\pm{}$standard deviation (SD) as presented in Table \ref{homo_1} are evidence of this fact. The other quantitative indices, i.e., PSNR, SNR, and SSIM, presented in Table \ref{homo_1} indicate that SHEAR-net has the ability to reconstruct significantly better-quality 2-D SM images compared to those of the LPVI technique. \begin{table}[!ht] \centering \caption{Comparative Results Between LPVI And SHEAR-net For Inclusions Of Different Shape And Modulus } \label{tmi_1} \begin{tabular}{| c|c|c|c|c|} \hline & \multicolumn{2}{c|}{ {\begin{tabular}[c]{@{}c@{}}\textbf{Type I (Spherical} \\ \textbf{inclusion)}\end{tabular}}} & \multicolumn{2}{c|}{ {\begin{tabular}[c]{@{}c@{}}\textbf{Type II (Oval}\\ \textbf{inclusion)}\end{tabular}}} \\ \cline{2-5} \multirow{-2}{*}{ \textbf{Index}}& \textbf{LPVI} & \textbf{SHEAR-net} & \textbf{LPVI} & \textbf{SHEAR-net} \\ \hline \textbf{PSNR{[}dB{]}} & 18.06 & 25.01 & 17.11 & 28.44 \\ \hline \textbf{SNR{[}dB{]}} & 19.12 & 35.98 & 18.21 & 33.57 \\ \hline \textbf{SSIM} & 0.77 & 0.90 & 0.55 & 0.97 \\ \hline \textbf{DSC} & 0.59 & 0.78 & 0.62 & 0.81 \\ \hline \begin{tabular}[c]{@{}c@{}}\textbf{Background} \\ \textbf{mean$\pm{}$SD [kPa]}\end{tabular} & \begin{tabular}[c]{@{}c@{}}5.163\\ $\pm{0.107}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}3.438\\ $\pm{0.025}$\end{tabular} & 
\begin{tabular}[c]{@{}c@{}}11.306\\ $\pm{0.106}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}3.393\\ $\pm{0.025}$\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}\textbf{Inclusion}\\ \textbf{mean$\pm{}$SD [kPa]}\end{tabular} & \begin{tabular}[c]{@{}c@{}}14.236\\ $\pm{0.060}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}15.822\\ $\pm{0.014}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}25.584\\ $\pm{0.070}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}23.455\\ $\pm{0.016}$\end{tabular} \\ \hline \end{tabular} \end{table} \begin{figure}[!t] \centering \includegraphics[width=3 in]{tmi} \caption{ Qualitative comparison of the reconstructed SM image from the tracked motion data of simulation phantoms with inclusion. White cross marks indicate the focusing point of the ARF.} \label{tmi_2} \end{figure} Next, we present the results on simulation data with inclusion. Our simulation dataset contains two shapes of inclusion: Type I- spherical inclusion, and Type II- oval inclusion. Figure \ref{tmi_2} presents the 2-D reconstructed SM images using the LPVI method and our proposed SHEAR-net and Table \ref{tmi_1} shows numerical indices evaluated for these reconstructed images. The spherical inclusion sample has a mean SM of 3.33 kPa for the background and 16.587 kPa for the inclusion. Oval inclusion sample has a mean SM of 3.33 kPa for the background and 23.499 kPa for the inclusion. The illustrations and mean$\pm{}$SD both indicate that both the techniques have a good inclusion coverage area. However, the reconstructed SM images by the LPVI technique has more background noise and a little contrast variation inside the inclusion. On the contrary, the reconstructed images by the SHEAR-net demonstrate more overall homogeneity and less noise in the background and thus we get high PSNR and SNR values. These images have a more accurate mean with small SD both inside and outside the inclusion. 
Moreover, SHEAR-net reconstructs a sharper boundary around the inclusion irrespective of its shape and has higher structural similarity than the LPVI technique, as is evident from the DSC and SSIM values. \begin{table}[t] \centering \caption{\label{force_1}Quantitative Comparison Between LPVI And The SHEAR-net For Experiment With Force Variation} \begin{tabular}{| c|c|c|c|c|} \hline & \multicolumn{2}{c|}{ {\begin{tabular}[c]{@{}c@{}}\textbf{Type I (100\% }\\ \textbf{force)}\end{tabular}}} & \multicolumn{2}{c|}{ {\begin{tabular}[c]{@{}c@{}}\textbf{Type II (50\%}\\ \textbf{force)}\end{tabular}}} \\ \cline{2-5} \multirow{-2}{*}{ \textbf{Index}}& \textbf{LPVI} & \textbf{SHEAR-net} & \textbf{LPVI} & \textbf{SHEAR-net} \\ \hline \textbf{PSNR {[}dB{]}} & 20.34 & 25.01 & 18.06 & 23.39 \\ \hline \textbf{SNR {[}dB{]}} & 22.52 & 35.98 & 19.12 & 30.66 \\ \hline \textbf{SSIM} & 0.89 & 0.91 & 0.77 & 0.90 \\ \hline \textbf{DSC} & 0.73 & 0.78 & 0.59 & 0.76 \\ \hline \begin{tabular}[c]{@{}c@{}}\textbf{Background} \\ \textbf{mean$\pm{}$SD [kPa]}\end{tabular} & \begin{tabular}[c]{@{}c@{}}6.607\\ $\pm{0.107}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}3.412\\ $\pm{0.025}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}11.306\\ $\pm{0.107}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}3.438\\ $\pm{0.025}$\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}\textbf{Inclusion}\\ \textbf{mean$\pm{}$SD [kPa]}\end{tabular} & \begin{tabular}[c]{@{}c@{}}15.338\\ $\pm{0.060}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}15.822\\ $\pm{0.014}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}14.236\\ $\pm{0.060}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}15.157\\ $\pm{0.014}$\end{tabular} \\ \hline \end{tabular} \end{table} \begin{figure}[!t] \centering \includegraphics[width=3 in]{force} \caption{Effect of ARF intensity variation on the reconstruction of SM image. Dashed boxes indicate regions where tissue displacement is $<1\mu m$.} \label{force_2} \end{figure} \begin{table}[!t] \centering \caption{\label{size_1}Reconstruction Results For Experiment With Inclusion Size Variation} \begin{tabular}{| c|c|c|c|c|}\hline {\begin{tabular}[c]{@{}c@{}}\textbf{Inclusion}\\ \textbf{diameter}\end{tabular} }& \multicolumn{2}{c|}{ {\textbf{Type I (10 mm)}}} & \multicolumn{2}{c|}{ {\textbf{Type II (3 mm)}}} \\ \hline \textbf{Index}& \textbf{LPVI} & \textbf{SHEAR-net} & \textbf{LPVI} & \textbf{SHEAR-net} \\ \hline \textbf{PSNR {[}dB{]}} & 19.39 & 22.95 & 8.92 & 16.84 \\ \hline \textbf{SNR {[}dB{]}} & 16.76 & 21.77 & 12.31 & 20.43 \\ \hline \textbf{SSIM} & 0.78 & 0.90 & 0.49 & 0.87 \\ \hline \textbf{DSC} & 0.65 & 0.71 & 0.49 & 0.87 \\ \hline \begin{tabular}[c]{@{}c@{}}\textbf{Background}\\ \textbf{mean$\pm{}$SD [kPa]}\end{tabular} & \begin{tabular}[c]{@{}c@{}}17.005\\ $\pm{0.111}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}6.707\\ $\pm{0.26}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}14.057\\ $\pm{0.165}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}6.583\\ $\pm{0.25}$\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}\textbf{Inclusion}\\ \textbf{mean$\pm{}$SD [kPa]}\end{tabular} & \begin{tabular}[c]{@{}c@{}}28.546\\ $\pm{0.065}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}28.496\\ $\pm{0.015}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}15.864\\ $\pm{0.057}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}9.125\\ $\pm{0.013}$\end{tabular} \\ \hline \end{tabular} \end{table} \begin{figure}[!ht] \centering \includegraphics[width=3 in]{size} \caption{SM reconstruction of inclusions of different sizes: (a) 10 mm and (b) 3 mm using the LPVI technique and the proposed SHEAR-net.} \label{size_2} \end{figure} Now, we evaluate the quality of the reconstructed SM images to demonstrate the robustness of the technique against ARF intensity, inclusion size, and inclusion position variation.
The effects of these variations on the reconstructed SM images are discussed in the sequel. First, Fig. \ref{force_2} and Table \ref{force_1} show the results for ARF intensity variation. The phantom for this experiment has a mean SM of 3.337 kPa for the background and 16.587 kPa for the inclusion. We have induced two different ARF intensities: 100\% force refers to the intensity that creates a maximum tissue displacement of 20 $\mu$m, and 50\% force refers to a maximum tissue displacement of 10 $\mu$m. Reducing the force results in more background noise in the LPVI technique compared to our proposed SHEAR-net, as is evident from the PSNR and SNR values presented in Table \ref{force_1}. Moreover, when the ARF intensity is lowered, the change in the values of DSC, SSIM, and mean$\pm{}$SD both inside and outside the inclusion is more drastic for the LPVI technique than for the SHEAR-net. Another important feature of SHEAR-net to notice in the reconstructed images is the dashed region with $0.5\mu m<d<1\mu m$, where $d$ indicates the tissue displacement. The mean values in the dashed region for the LPVI technique and SHEAR-net when the force is 100\% are 7.756 kPa and 3.523 kPa, respectively. When the force is halved, SHEAR-net retains almost the same mean value (i.e., 3.597 kPa) of SM inside the dashed region. However, the mean value (i.e., 13.256 kPa) of SM for the LPVI method has a very large deviation from the true SM value.
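For reference, two of the quantitative indices used throughout the tables can be sketched as follows. This is a simplified NumPy illustration only; the exact SNR and PSNR normalisations follow \cite{wu2018direct} and \cite{schlemper2018deep} rather than this stand-in.

```python
import numpy as np

def psnr_db(recon, ref):
    """Peak signal-to-noise ratio in dB, with the peak taken from the
    reference SM map (simplified stand-in for the tabulated index)."""
    mse = np.mean((recon - ref) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

def dice(pred_mask, true_mask):
    """Sorensen--Dice coefficient, DSC = 2|P n G| / (|P| + |G|),
    on boolean inclusion masks."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    return 2.0 * inter / (pred_mask.sum() + true_mask.sum())
```

A DSC of 1 indicates perfect overlap between the segmented and ground-truth inclusion regions; the values in the tables are averages of such per-image scores.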
\begin{table}[!t] \centering \caption{\label{pos_1}Results To Evaluate Robustness Against Inclusion Position Variation} \begin{tabular}{| c|c|c|c|c|} \hline & \multicolumn{2}{c|}{ {\begin{tabular}[c]{@{}c@{}}\textbf{Type I (Position} \\ \textbf{top)}\end{tabular}}} & \multicolumn{2}{c|}{ {\begin{tabular}[c]{@{}c@{}} \textbf{Type II (Position}\\ \textbf{bottom)}\end{tabular}}} \\ \cline{2-5} \multirow{-2}{*}{ \textbf{Index}}& \textbf{LPVI} & \textbf{SHEAR-net} & \textbf{LPVI} & \textbf{SHEAR-net} \\ \hline \textbf{PSNR {[}dB{]}} & 18.67 & 29.11 & 16.61 & 28.41 \\ \hline \textbf{SNR {[}dB{]}} & 15.33 & 30.89 & 13.22 & 31.29 \\ \hline \textbf{SSIM} & 0.36 & 0.99 & 0.22 & 0.98 \\ \hline \textbf{DSC} & 0.21 & 0.81 & 0.23 & 0.79 \\ \hline \begin{tabular}[c]{@{}c@{}}\textbf{Background} \\ \textbf{mean$\pm{}$SD [kPa]}\end{tabular} & \begin{tabular}[c]{@{}c@{}}48.708\\ $\pm{0.100}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}6.607\\ $\pm{0.024}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}48.144\\ $\pm{0.106}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}6.638\\ $\pm{0.025}$\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}\textbf{Inclusion}\\ \textbf{mean$\pm{}$SD [kPa]}\end{tabular} & \begin{tabular}[c]{@{}c@{}}67.301\\ $\pm{0.132}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}56.968\\ $\pm{0.021}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}53.517\\ $\pm{0.025}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}45.189\\ $\pm{0.005}$\end{tabular} \\ \hline \end{tabular} \end{table} Next, we present the comparative results for varying inclusion size. In Fig. \ref{size_2}, we show the reconstructed images for inclusions of two different diameters, and Table \ref{size_1} lists the quantitative indices obtained for them. Type I with a 10 mm diameter inclusion has a mean SM of 6.691 kPa for the background and 28.299 kPa for the inclusion. Type II with a 3 mm diameter inclusion has a mean SM of 6.671 kPa for the background and 12.108 kPa for the inclusion.
Compared to the LPVI technique, the SHEAR-net is considerably less sensitive to inclusion size variation. It is evident that the SHEAR-net is able to reconstruct SM images of a small inclusion with a diameter of around 3 mm as well as of a moderate-size inclusion with a diameter of 10 mm. Although the LPVI technique can reconstruct SM images of relatively large inclusions, it shows below-average performance for the small inclusions. The SHEAR-net is thus found to be more robust to inclusion size variation than LPVI. Finally, we examine inclusion position variation, to assess the robustness of the techniques in imaging inclusions positioned 10-15 mm away from the ARF focus point. We use a Type I phantom that has a mean SM of 6.675 kPa for the background and 57.271 kPa for the inclusion, and a Type II phantom that has a mean SM of 6.674 kPa for the background and 45.965 kPa for the inclusion. The results of this observation are presented in Table \ref{pos_1}. A qualitative comparison is illustrated in Fig. \ref{pos_2}. From the quantitative indices it is evident that the SHEAR-net is able to reconstruct high-quality SM images with the inclusion center 10-15 mm away from the ARF focus point. On the contrary, the LPVI technique suffers from the inability to reconstruct SM images in regions where the tissue displacement is below 1 $\mu$m, as discussed earlier; it fails completely for the inclusions, as shown in Fig. \ref{pos_2}. \begin{figure}[!t] \centering \includegraphics[width=3 in]{position} \caption{Illustrating the robustness of SHEAR-net when the inclusions are centered 10-15 mm away from the ARF focus point.
The LPVI technique fails completely to reconstruct the lesions.} \label{pos_2} \end{figure} \vspace{-0.5 cm} \subsection{CIRS Phantom with Inclusion Study} Figure \ref{real_2} demonstrates the 2-D reconstructed SM images of CIRS experimental phantom data with inclusion, and Table \ref{real_1} presents the evaluated quantitative indices for these images. Each dataset has a mean SM value of 8.83 kPa in the background, and the inclusions of Type III and Type IV have mean SM values of 15.01 kPa and 24.68 kPa, respectively. We can observe contrast variation inside the inclusion in the zoomed-in view of the images of the Type III phantom reconstructed using the LPVI technique. Also, the LPVI technique reconstructs a noisier background and shows little structural similarity for the Type IV inclusion. Therefore, its DSC and SSIM values are small compared to the proposed SHEAR-net. On the contrary, the homogeneity of the background and the better coverage of the inclusion in the images reconstructed by the proposed SHEAR-net are evident from the mean$\pm{}$SD values. Although the index values in Table \ref{real_1} for the Type III phantom are lower than those of the Type IV phantom for both techniques, the performance of the LPVI technique declines more than that of the proposed SHEAR-net when the stiffness difference between the background and inclusion is small. The proposed SHEAR-net might be able to reconstruct better-quality images if more CIRS experimental phantom data were available for training.
\begin{table}[!t] \centering \caption{\label{real_1}Results Of 2-D SM Reconstruction Using The LPVI And The SHEAR-net On 2 CIRS Phantom Data} \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{ \textbf{Type III}} & \multicolumn{2}{c|}{ \textbf{Type IV}} \\ \cline{2-5} \multirow{-2}{*}{ \textbf{Index}} & \textbf{LPVI} & \textbf{SHEAR-net} & \textbf{LPVI} & \textbf{SHEAR-net} \\ \hline \textbf{PSNR {[}dB{]}} & 10.62 & 14.22 & 20.78 & 21.61 \\ \hline \textbf{SNR {[}dB{]}} & 11.21 & 15.31 & 23.5 & 25.80 \\ \hline \textbf{SSIM} & 0.69 & 0.74 & 0.82 & 0.86 \\ \hline \textbf{DSC} & 0.54 & 0.67 & 0.65 & 0.72 \\ \hline \textbf{\begin{tabular}[c]{@{}c@{}}Background\\ mean$\pm{}$SD {[}kPa{]}\end{tabular}} & \begin{tabular}[c]{@{}c@{}}8.519\\ $\pm{}0.032$\end{tabular} & \begin{tabular}[c]{@{}c@{}}9.124\\ $\pm{}0.026$\end{tabular} & \begin{tabular}[c]{@{}c@{}}8.499\\ $\pm{}0.031$\end{tabular} & \begin{tabular}[c]{@{}c@{}}10.89\\ $\pm{}0.025$\end{tabular} \\ \hline \textbf{\begin{tabular}[c]{@{}c@{}}Inclusion\\ mean$\pm{}$SD {[}kPa{]}\end{tabular}} & \begin{tabular}[c]{@{}c@{}}11.47\\ $\pm{}0.011$\end{tabular} & \begin{tabular}[c]{@{}c@{}}14.44\\ $\pm{}0.013$\end{tabular} & \begin{tabular}[c]{@{}c@{}}19.09\\ $\pm{}0.007$\end{tabular} & \begin{tabular}[c]{@{}c@{}}23.18\\ $\pm{}0.017$\end{tabular} \\ \hline \end{tabular} \end{table} \begin{figure}[!t] \centering \includegraphics[width=3 in]{real} \caption{2-D reconstructions of CIRS experimental phantom data: (a) Type III inclusion, (b) Type IV inclusion; the zoomed-in view of the marked region is illustrated to clearly visualize the stiffness variation inside the lesion.} \label{real_2} \end{figure} Lastly, we present the average results of 125 test cases in Table \ref{res}, including 121 simulation and 4 CIRS phantom datasets. We test the robustness of the proposed technique for ARF variation and inclusion shape, size, stiffness, and position variation.
The proposed SHEAR-net shows superior performance over the LPVI technique in terms of every index. It is evident from the PSNR and SNR values that the reconstructed images have a more homogeneous background with less noise than those of the LPVI technique. In addition, the average SSIM score of 0.94 and DSC score of 0.758 demonstrate the proposed technique's efficacy in reconstructing images with high structural similarity and a good coverage area of the inclusion, respectively. On the contrary, the performance of the LPVI technique on the test set shows that it is less robust to the variations mentioned earlier. Moreover, the reconstruction time of the SHEAR-net is 0.17 s, making it, to our knowledge, currently the fastest SM image reconstruction algorithm. Finally, we have observed that multiple pushes can improve the performance of the LPVI technique; however, Table \ref{res} demonstrates that the proposed SHEAR-net can perform above that mark with just a single push. \begin{table}[!h] \centering \caption{\label{res}Quantitative Comparison Between LPVI And SHEAR-net On 125 Test Phantoms} \begin{tabular}{| c|c|c|c|c|c|}\hline Index & \begin{tabular}[c]{@{}l@{}}PSNR\\{[}dB]\end{tabular} & \begin{tabular}[c]{@{}l@{}}SNR\\{[}dB]\end{tabular} & SSIM & DSC & \begin{tabular}[c]{@{}l@{}}Run Time\\{~}{~}{~}(s)\end{tabular} \\ \hline \textbf{LPVI} & 18.56 & 20.65&0.79&0.65&3712\\\hline \textbf{SHEAR-net} & 22.604&25.94&0.758&0.76&0.17\\\hline \end{tabular} \end{table} \vspace{-0.5 cm} \subsection{Discussion} In this study, we present a new technique called the SHEAR-net for shear modulus imaging in soft tissues. This is the first DNN-based algorithm for SM image reconstruction from single ARF pulse induced 2-D ultrasound tissue displacement data. The study shows promising results using the SHEAR-net in SM image reconstruction with high noise robustness, accurate shape representation, position independence, and a large 2-D ROI.
The proposed technique can accurately estimate SM from tissue displacement data induced by a single ARF pulse. Moreover, we have shown that the SHEAR-net is able to retain almost the same imaging quality even at half the ARF intensity level that is generally used in conventional imaging for displacement generation. We have used our algorithm to produce results on simulation and CIRS phantom data. Due to resource limitations, we could not study the efficacy of the SHEAR-net on \emph{in-vivo} data.\\ \begin{figure}[!t] \centering \includegraphics[width=3 in]{disc} \caption{A unique feature of SHEAR-net: it is able to reconstruct multiple inclusions with clear contrast variation indicating different stiffness values. (a) The gold standard of the simulation phantoms. (b) Reconstructed SM images using SHEAR-net.} \label{three} \end{figure} Now, we discuss the insight behind the parameters chosen for performance evaluation. Firstly, we look at the significance of the level of ARF intensity in SWE imaging. The intensity and tissue displacement are proportional to each other. High-intensity ARF creates high-magnitude tissue displacement, and the wave propagation covers a greater distance. This results in better-quality SM image reconstruction, as evident from the qualitative illustration in Fig. \ref{force_2} and the quantitative evaluation in Table \ref{force_1}. However, high-intensity ARF excitation raises the risk associated with tissue heating \cite{liu2014thermal,doherty2013acoustic}. The clinically allowable level of ARF excitation generates a tissue displacement of 20 $\mu$m \cite{liu2014thermal,doherty2013acoustic}. Therefore, we have set this value as 100$\%$ force in our simulation. We have observed that a higher force contributes to better-quality SWE imaging but, due to tissue heating, it is not practically implementable.
Our proposed SHEAR-net has the unique feature of SWE image reconstruction at 50$\%$ force while maintaining almost the same quality as at 100$\%$ force and the same ROI dimension. This makes SWE imaging with SHEAR-net safer, more reliable, and more practical. Reducing the ARF intensity results in a shorter propagation distance of the induced shear waves. Our observations have shown that the best imaging area for the conventional approaches is an ROI where the tissue displacement is $\geq$1$\mu$m. We have illustrated in Fig. \ref{force_2} that as the tissue displacement decreases below 1$\mu$m, the quality of the reconstructed SM images gradually decreases. The quantitative values given in Table \ref{force_1} also support this observation. On the contrary, the SHEAR-net can reconstruct SM images for tissue displacements $\geq$0.5$\mu$m. This creates a promising opportunity to generate shear waves with lower ARF intensities and also maintain a larger ROI for SM image reconstruction. Note that with the conventional algorithms, the window for SWE imaging in modern ultrasound machines, e.g., the Siemens Acuson S2000, is small. Multiple acquisitions and windows are required to visualize the whole 4$\times$4 cm$^2$ region, as seen in the B-mode image. With the proposed SHEAR-net, the observation window can be greater than the diagnostic windows in modern ultrasound machines, making it possible to observe a 4$\times$2 cm$^2$ area in a single excitation. Another unique feature of the SHEAR-net, as illustrated in Fig. \ref{three}, is its ability to reconstruct SM images with multiple inclusions from the tracked tissue displacement data of a single push. For this demonstration, we have experimented with three sets of multiple-inclusion phantoms with different SM values for each inclusion. Note that these multiple-inclusion data were not included in our training and validation set.
The LPVI technique fails completely to reconstruct an inclusion centered 10-15 mm away from the ARF focus point. From this observation, we can conclude that the SHEAR-net is the only existing technique that can reconstruct the SM images of multiple inclusions with a single push and maintain visually differentiable contrast for each stiffness. Moreover, the presence of different inclusions with shape and stiffness variation does not impact the reconstruction quality. Finally, the whole process is independent of the ARF focus point, as all the results produced in this paper have the same focus point for ARF excitation irrespective of the position of the inclusion center. The temporal resolution is an important factor for both the conventional approaches and the proposed SHEAR-net. Due to memory constraints, we have used a sampling rate of 8 kHz. However, in the conventional approaches a sampling frequency of up to 12.5 kHz is used to obtain high temporal resolution and better-quality image reconstruction. We have observed that a higher sampling rate increases the SWE image quality. Therefore, using an 8 kHz sampling frequency in a constrained environment is one limitation of the proposed SHEAR-net. Another limitation is the lack of diversity in the data. For this initial study, our dataset includes random inclusion positions, sizes, and stiffness variations for two specific shapes. Therefore, our future investigation will include experimental phantoms and \emph{in-vivo} clinical ultrasound SWE data with diverse and more complex tissue structures. We have observed that the conventional algorithms perform better imaging with increased spatial resolution. However, we could not increase the spatial resolution for SHEAR-net because of the memory constraint. For our dataset, we have used images of size 96$\times$48. Finally, due to the memory constraint, our training batch size was 16. However, we have also tested with sizes 14 and 15.
Our observations showed that increasing the batch size also improved the performance of the SHEAR-net. \section{Conclusion} This paper has introduced SHEAR-net, a novel deep learning based technique for reconstructing shear modulus images from single ARF pulse induced 2-D tissue displacement data over a time period. The proposed architecture relies on a novel S-net to localize inclusions and an RME block to estimate the SM value at each point. The SHEAR-net has demonstrated promising qualitative and quantitative performance in both the simulation and CIRS phantom studies. It has reconstructed SM images from tissue displacement generated by half of the ARF intensity generally used for the conventional algorithms. We have shown that the SHEAR-net can accurately estimate the SM for tissue displacements of $\geq$0.5$\mu$m and thus can maintain a larger ROI compared to the conventional algorithms. Moreover, the half ARF intensity level allows the SHEAR-net to perform imaging without the safety concern associated with tissue heating. In addition, the proposed technique can reconstruct multiple inclusions with contrast variation within an ROI. Our algorithm has estimated shear modulus from the noisy tracked tissue displacements in real time. Memory constraints on spatial and temporal resolution presently cap the performance of the proposed SHEAR-net. Future work will focus on overcoming these limitations and on a detailed \emph{in-vivo} study. \section*{Acknowledgment} This work has been supported by HEQEP UGC (CP$\#$096/BUET/Win-2/ST(EEE)/2017), Bangladesh. We thank Dr. Matthew Urban, Ultrasound Research Laboratory, Department of Radiology, Mayo Clinic, USA for supporting us with the CIRS phantom data. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} This work considers the adaptation of particle filtering~\cite{Gordon1993,Doucet2001} to modern graphics processing units (GPUs). Such devices are typical of a trend away from faster clock speeds toward expanding parallelism to maximise throughput at the processor level. Capitalising on this wider concurrency for a given application is rarely trivial, requiring adaptation of current best-practice sequential algorithms, development of new algorithms, or even revival of old ideas since bested in a serial context. This work makes such an attempt for the particle filter, with a focus on resampling strategies. Our motivation arises from Bayesian inference in large state-space models, in particular those of marine biogeochemistry~\cite{Evans1985,Jones2010}. The broad methodology is that of particle Markov chain Monte Carlo (PMCMC)~\cite{Andrieu2010}, a meld of MCMC over parameters with sequential Monte Carlo (SMC) over state. For Metropolis-Hastings in parameter space, the particle filter is a candidate for computing the marginal likelihood (over the state) of each newly proposed parameter configuration. This requires iteration of the particle filter, so the use of GPUs to reduce overall runtime is attractive. Beyond these applications, the material here may be of relevance to GPU adaptation of bootstrap methods, or in the presence of hard runtime constraints as in real-time applications. For some sequence of time points $t = 1,\ldots,T$ and observations at those times $\mathbf{y}_1,\ldots,\mathbf{y}_T$, the particle filter estimates the time marginals of a latent state, $p(\mathbf{X}_t|\mathbf{y}_{1:t})$, proceeding recursively as: \begin{enumerate} \item \emph{(Initialisation)} At $t = 0$, draw $P$ samples $\mathbf{x}_0^1,\ldots,\mathbf{x}_0^P \sim p(\mathbf{X}_0)$. 
\item \emph{(Propagation)} For $t = 1,\ldots,T$, $i = 1,\ldots,P$ and some proposal distribution $q(\mathbf{X}_t|\mathbf{X}_{t-1})$, draw $\mathbf{x}_t^i \sim q(\mathbf{X}_t|\mathbf{x}_{t-1}^i)$. \item \emph{(Weighting)} Weight each particle with $w_t^i = \frac{p(\mathbf{y}_t|\mathbf{x}_t^i) p(\mathbf{x}_t^i|\mathbf{x}_{t-1}^i) p(\mathbf{x}_{t-1}^i|\mathbf{y}_{1:t-1})}{q(\mathbf{x}_t^i|\mathbf{x}_{t-1}^i)}$, so that the weighted sample set $\{\mathbf{x}_t^i,w_t^i\}$ represents the filter density $p(\mathbf{X}_t|\mathbf{y}_{1:t})$. \end{enumerate} The basic particle filter suffers from \emph{degeneracy} -- the tendency, after several iterations, to heap all weight on a single particle. The usual strategy around this is to introduce a further step: \begin{enumerate} \item[4.] \emph{(Resampling)} Redraw, with replacement, $P$ samples from the weighted sample set $\{\mathbf{x}_t^i,w_t^i\}$, using weights as unnormalised probabilities, and assign a new weight of $1/P$ to each. \end{enumerate} Initialisation, propagation and weighting are very readily parallelised, noting that these are independent operations on each particle. Resampling, on the other hand, may require synchronisation and collective operations across particles, particularly to sum weights for normalisation. The focus of this work is on this resampling stage. Parallel implementation of resampling schemes has been considered before, largely in the context of distributed memory parallelism~\cite{Bolic2004,Bolic2005}. Likewise, implementation of the particle filter on GPUs has been considered~\cite{Lenz2008,Hendeby2007}, although with an emphasis on low-level implementation issues rather than potential algorithmic adaptations as here. 
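As a concrete serial illustration of steps 1--4 (a sketch only, not the GPU implementation considered in this paper), consider a minimal 1-D bootstrap filter, where the proposal $q$ is the transition prior so the weight update reduces to the likelihood; the Gaussian random-walk state and observation model here is an assumed toy example:

```python
import numpy as np

def bootstrap_filter(y, P=1000, sigma_x=1.0, sigma_y=0.5, seed=0):
    """Minimal 1-D bootstrap particle filter: propagate with the transition
    prior, weight by the Gaussian likelihood, multinomially resample at each
    step, and return the filter means E[X_t | y_{1:t}]."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(P)                        # initialisation: x_0 ~ p(X_0)
    means = []
    for yt in y:
        x = x + sigma_x * rng.standard_normal(P)      # propagation
        w = np.exp(-0.5 * ((yt - x) / sigma_y) ** 2)  # weighting (likelihood)
        w /= w.sum()
        means.append(np.sum(w * x))                   # weighted filter mean
        x = x[rng.choice(P, size=P, p=w)]             # multinomial resampling
    return np.array(means)
```

The resampling line is the only step that couples particles; the rest is embarrassingly parallel, which is exactly why the resampling stage dominates the GPU design discussion that follows.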
\section{Resampling algorithms}\label{sec:resampling} For each particle $i$ at time $t$, consider the resampling algorithm to be the means by which its number of \emph{offspring}, $o_i$, for propagation to $t + 1$ is selected, or alternatively the means by which its \emph{ancestor}, $a_i$, from time $t - 1$ is selected. Figure \ref{fig:radial} describes a number of popular resampling schemes, while Code \ref{code:resamplers} provides pseudocode. \begin{figure*}[tp] \centering \includegraphics[width=\textwidth]{radial} \caption{Visualisation of popular resampling strategies. Arcs along the perimeter of the circles represent particles by weight, arrows indicate selected particles and are positioned \textbf{(a)} uniformly randomly in the multinomial resampler, \textbf{(b \& c)} by evenly slicing the circle into strata and randomly selecting an offset (stratified resampler) or using the same offset (systematic resampler) into each stratum, or \textbf{(d)} initialising multiple Markov chains and simulating to convergence in the Metropolis resampler.} \label{fig:radial} \end{figure*} \begin{code}[t] {\small \begin{minipage}{\linewidth} \input{multinomial.tab} \end{minipage} \begin{minipage}[t]{0.5\linewidth} \input{systematic.tab} \end{minipage} \hfill \begin{minipage}[t]{0.5\linewidth} \input{metropolis.tab} \end{minipage} } \caption{Pseudocode for the multinomial, systematic and Metropolis resamplers.}\ \label{code:resamplers} \end{code} The stratified and systematic~\cite{Kitagawa1996} resamplers output offspring, while the multinomial and Metropolis resamplers output ancestors. Converting between the two is straightforward. The multinomial, stratified and systematic resamplers require a collective prefix-sum of weights. While this can be performed quite efficiently on GPUs~\cite{Sengupta2008}, it is not ideal, requiring thread communication and multiple kernel launches. 
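For contrast with the GPU considerations above, the prefix-sum formulations of these schemes take only a few lines on a serial host; a NumPy sketch in the ancestor-index formulation (equivalent to the offspring formulation of Code \ref{code:resamplers}):

```python
import numpy as np

def systematic_ancestors(w, rng):
    """Systematic resampling: one uniform offset shared across all P strata."""
    P = len(w)
    positions = (rng.random() + np.arange(P)) / P
    cumsum = np.cumsum(w / w.sum())       # the collective prefix sum of weights
    return np.searchsorted(cumsum, positions)

def stratified_ancestors(w, rng):
    """Stratified resampling: an independent uniform offset in each stratum."""
    P = len(w)
    positions = (rng.random(P) + np.arange(P)) / P
    cumsum = np.cumsum(w / w.sum())
    return np.searchsorted(cumsum, positions)
```

In both cases the `searchsorted` step is the binary search into the cumulative weights; it is this prefix sum and search that require the thread communication and multiple kernel launches mentioned above.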
The Metropolis resampler requires only ratios between weights, so that threads can operate independently in a single kernel launch; this is more suited to GPU architectures. The Metropolis resampler is parameterised by $B$, the number of iterations to be performed for each particle before settling on its chosen ancestor. Selecting $B$ is a tradeoff between speed and reliability: while runtime reduces with fewer steps, the sample will be biased if $B$ is too small to ensure convergence. Bias may not be such a problem for tracking applications where performance is judged by outcomes, but will violate assumptions that lead to unbiased state estimates in a particle MCMC framework~\cite{Andrieu2010}. Convergence depends largely on the particle of greatest weight, $p_{\text{max}}$, being exposed by a sufficient number of proposals that the probability of it being returned by any one chain approaches $w_{\text{max}} = \max_i w_i/\sum_j w_j$. Following \citet[\S2.1]{Raftery1992}, construct a binary 0-1 process $Z_n = \delta(U_n = p_{\text{max}})$ over the sequence $U_n$ generated by a single chain of the Metropolis resampler. It seems sensible now to require that $P(Z_B = 1\,|\,Z_0)$ be within some $\epsilon$ of $w_{\text{max}}$. $Z_n$ is a Markov chain with transition matrix given by: \begin{equation} T = \left(\begin{array}{cc} 1 - \alpha & \alpha \\ \beta & 1 - \beta \end{array}\right)\,, \end{equation} where $\alpha$ is the probability of transitioning from 1 to 0, and $\beta$ from 0 to 1. As a uniform proposal across all particle indices is used (\proc{Metropolis-Resampler}, line \ref{line:proposal}), the chance of selecting $p_{\text{max}}$ is $1/P$; being of greatest weight, it will always be accepted through the Metropolis criterion (line \ref{line:accept}), and so $\beta = 1/P$.
For $\alpha$, we have: \begin{equation} \alpha = \sum_{i = 1, i \neq p_{\text{max}}}^{P} \left(\frac{1}{P} \cdot \frac{w_i/\sum_j w_j}{w_{\text{max}}}\right) = \frac{1}{Pw_{\text{max}}}(1 - w_{\text{max}}) \,. \end{equation} The $l$-step transition matrix is then: \begin{equation} T^l = \frac{1}{\alpha + \beta}\left(\begin{array}{cc} \alpha & \beta \\ \alpha & \beta \end{array}\right) + \frac{\lambda^l}{\alpha + \beta}\left(\begin{array}{cc} \alpha & -\alpha \\ -\beta & \beta \end{array}\right)\,, \end{equation} where $\lambda = (1 - \alpha - \beta)$, and we require: \begin{equation} \lambda^B \leq \frac{\epsilon(\alpha + \beta)}{\max(\alpha,\beta)}\,, \end{equation} satisfied when: \begin{equation} B \geq \frac{\log \frac{\epsilon(\alpha + \beta)}{\max(\alpha,\beta)}}{\log \lambda}\,. \end{equation} In practice one may select $P$ and an upper tolerance for $w_{\text{max}}$ based on expected weight variances given the quality of $q(\cdot)$ proposals, and compute an appropriate $B$ as above to use throughout the filter. \section{Experiments}\label{sec:experiments} Weight sets are simulated to assess the speed and accuracy of each resampling algorithm. Assume that weights are approximately Dirichlet distributed, $\mathbf{w} \sim \text{Dir}(\boldsymbol{\alpha})$, with $\alpha_1 = \alpha_2 = \ldots = \alpha_P = \alpha$. Data sets are simulated for all combinations of $P = 256,512,\ldots,65536$ and $\alpha = 10,1,.1,.01$. Experiments are conducted on an NVIDIA Tesla S2050 using double precision, CUDA 3.1, gcc 4.3 and all compiler optimisations. Implementation of the Metropolis resampler uses a custom kernel, with random numbers provided by the Tausworthe~\cite{LEcuyer1996} generator of the Thrust library~\cite{Thrust}. The multinomial and systematic resamplers use vector operations of Thrust. To remove the overhead of dynamic memory allocations, temporaries allocated by Thrust have been replaced with pooled memory, reusing previously allocated arrays. 
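The Metropolis resampler and the bound on $B$ derived above can be sketched as follows; this is a serial NumPy stand-in for the per-thread GPU kernel, with $\epsilon$ and the tolerance on $w_{\text{max}}$ chosen by the user as just discussed:

```python
import numpy as np

def metropolis_B(P, w_max, eps=0.01):
    """Lower bound on the chain length B such that P(Z_B = 1 | Z_0) is
    within eps of w_max, using beta = 1/P and
    alpha = (1 - w_max) / (P w_max)."""
    beta = 1.0 / P                           # chance of proposing p_max
    alpha = (1.0 - w_max) / (P * w_max)      # chance of leaving p_max
    lam = 1.0 - alpha - beta
    return int(np.ceil(np.log(eps * (alpha + beta) / max(alpha, beta))
                       / np.log(lam)))

def metropolis_resample(w, B, rng):
    """One independent Metropolis chain per particle over ancestor indices,
    accepting a uniformly proposed index j with probability w[j]/w[k];
    only weight ratios are needed, no normalisation or prefix sum."""
    P = len(w)
    k = np.arange(P)                         # current ancestor of each chain
    for _ in range(B):
        j = rng.integers(0, P, size=P)       # uniform proposal per chain
        accept = rng.random(P) < w[j] / w[k]
        k = np.where(accept, j, k)
    return k
```

For example, $P = 1024$, $w_{\text{max}} = 0.1$ and $\epsilon = 0.01$ give $B$ in the hundreds, which is consistent with the observation below that $B$ can grow too large for the Metropolis resampler to be competitive when weights are badly skewed.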
Figure \ref{fig:results} gives both the accuracy and runtime of the multinomial, systematic and Metropolis resamplers across $P$ and $\alpha$. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{results} \caption{Experimental results over 1000 resamplings: \textbf{(top row)} mean error, $\sum_{i=1}^{P} (\frac{o_i}{P} - \frac{w_i}{\sum_j w_j})^2$, and \textbf{(bottom row)} runtime across various numbers of particles, $P$, with Dirichlet $\alpha$ parameter of \textbf{(left to right)} 10, 1, .1 and .01.} \label{fig:results} \end{figure} The Metropolis resampler converges to a multinomial resampling as the number of steps increases, such that the error in its outcomes should match, but can never beat, that of the multinomial resampler. Figure \ref{fig:results} shows that the Metropolis resampler has converged in all cases, so the estimates of $B$ from our analysis appear reliable. The systematic resampler is known to minimise the variance, and thus error, of resampling outcomes. The results here confirm this expectation. The multinomial resampler performs slightly faster than the systematic resampler with sorting enabled, but slightly slower when not. This is due to the binary search proceeding faster when weights are sorted. Certainly any thought of pre-sorting to hasten the binary search seems futile -- the saving is much less than the additional overhead of the sort on the GPU. Judging by the lack of scaling over $P$, runtime of the systematic and sorted multinomial resamplers appears dominated by the overhead of multiple kernel launches. At $\alpha$ of .1 and .01, the unsorted systematic resampler beats the others in runtime as well as error. At $\alpha$ of 10 and 1, the Metropolis resampler is fastest for $P \leq 4096$; in all other cases $B$ must be too large to be competitive.
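The Dirichlet weight sets used in these experiments can be reproduced with stdlib gamma draws (normalised $\text{Gamma}(\alpha, 1)$ variates are $\text{Dir}(\alpha)$ distributed); a small sketch with hypothetical names, together with the usual $1/\sum_i w_i^2$ effective-sample-size estimate:

```python
import random

def dirichlet_weights(P, alpha, rng=random):
    """Symmetric Dirichlet(alpha) weights via normalised Gamma(alpha, 1) draws."""
    g = [rng.gammavariate(alpha, 1.0) for _ in range(P)]
    s = sum(g)
    return [x / s for x in g]

def ess(weights):
    """Effective sample size of a normalised weight set: 1 / sum(w_i^2)."""
    return 1.0 / sum(w * w for w in weights)

# For large P the expected ESS is roughly P * alpha / (alpha + 1):
# about .5P at alpha = 1, about .1P at alpha = .1, about .01P at alpha = .01.
```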
Note that the modest variance in weights at these $\alpha$ means that the fairest comparison is against the unsorted approaches, as pre-sorting is unnecessary for numerical accuracy. At $\alpha = 1$, the effective sample size (ESS) is approximately $.5P$, making this quite a realistic scenario. At $\alpha = .1$, ESS is about $.1P$, and at $\alpha = .01$ about $.01P$: more severe cases. Performance of the Metropolis resampler hinges almost entirely on the rate at which random numbers can be generated, and this should be the first focus for further runtime gains. It is possible to reduce $B$, proportionately reducing runtime, but care should be taken that resampling results are not biased as a result. Nevertheless, this may be worthwhile under tight performance constraints, and indeed such configurability might be considered an advantage. \bibliographystyle{abbrvnat}
\section{Batched Triangle Listing}\label{sec:batched-triangle} In this section, we extend the algorithms of the previous sections to handle batches of more than one update. Our results are in the batched model (formally defined in~\cref{sec:prelims}) where, in each batch of updates, $O(1)$ updates occur \emph{adjacent to any node in the graph}. Such a model is realistic since many updates may occur in total over an entire real-world network (such as a social network), but the updates adjacent to each individual node are often few in number. We show that using our techniques above, we can also perform $\mathsf{List}\xspace(K_3)$ in one round using $O(\log n)$ bandwidth. When provided a batch of updates, each node with one or more new neighbors sends the ID of the new neighbor(s) to all its neighbors. Furthermore, each node sends the IDs of all nodes deleted from its adjacency list to all its neighbors. We show that this simple algorithm allows us to perform $\mathsf{List}\xspace(K_3)$ in one round under any type of update. The pseudocode for this algorithm is given in~\cref{alg:batched-triangle}. All messages are sent in the same round in~\cref{alg:batched-triangle}, and multiple messages sent (in the pseudocode) between two nodes are concatenated into a single message.
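The insertion half of this exchange is easy to simulate centrally; a minimal Python sketch (names are ours; `adj` is the adjacency *after* the batch is applied, and deletions, which only remove already-listed triangles, are omitted):

```python
def batched_list_triangles(adj, inserted):
    """One round of the batched exchange for insertions: each node v sends
    the IDs of its newly inserted neighbours (its New set) to all of its
    neighbours, and a node w receiving (New, u) from v lists {u, v, w}
    whenever u appears in w's own adjacency list."""
    new_at = {v: set() for v in adj}        # New set each node announces
    for u, v in inserted:
        new_at[u].add(v)
        new_at[v].add(u)
    listed = set()
    for v in adj:                           # v broadcasts its New set ...
        for w in adj[v]:                    # ... to every neighbour w
            for u in new_at[v]:
                if u in adj[w]:             # w's local adjacency check
                    listed.add(frozenset({u, v, w}))
    return listed
```

For example, inserting edge $\{1, 2\}$ into a graph already containing $\{1, 3\}$ and $\{2, 3\}$ yields the single listed triangle $\{1, 2, 3\}$.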
\SetKwFunction{FnBListTriangle}{BatchedListTriangles} \begin{algorithm}[!t] \caption{\label{alg:batched-triangle} Batched Listing Triangles} \textbf{Input:} Updates $\mathcal{U}$ which is a set of updates consisting of node insertions, node deletions, edge insertions, and/or edge deletions.\\ \textbf{Output:} Each triangle is listed by at least one of its nodes and no set of $3$ nodes which is not a triangle is wrongly listed.\\ \Fn{\FnBListTriangle{$\mathcal{U}$}}{ \If{$\mathcal{U}$ adds node(s) $u \in I_v$ to $N(v)$}{\label{batched-triangle:add-adj} $v$ sends $(\textsc{New}\xspace, \{ID_u \mid u \in I_v\})$ to all $w \in N(v)$.\label{batched-triangle:send-new}\\ } \If{$w$ receives $(\textsc{New}\xspace, S_v)$ from $v$}{\label{batched-triangle:receive-new} \For{$ID_u \in S_v$}{ \If{$u \in N(w)$}{ $w$ starts listing new triangle $\{u, v, w\}$.\label{batched-triangle:start-listing}\\ } } } \If{$\mathcal{U}$ deletes node(s) $u \in D_v$ from $N(v)$}{\label{batched-triangle:delete-adj} $v$ sends $(\textsc{Delete}\xspace, \{ID_u \mid u \in D_v\})$ to all $w \in N(v)$.\label{batched-triangle:send-del}\\ \For{$u \in D_v$}{ \For{$w \in N(v) \cup D_v$}{ \If{$v$ lists triangle $\{u, v, w\}$}{\label{batched-triangle:list-triangle} $v$ stops listing triangle $\{u, v, w\}$.\label{batched-triangle:stop-listing} } } } } \If{$w$ receives $(\textsc{Delete}\xspace, S_v)$ from $v$}{\label{batched-triangle:receive-2-del} \For{$ID_u \in S_v$}{ \For{all triangles $\{u, v, w\}$ that $w$ lists}{\label{batched-triangle:list-new-2} $w$ stops listing triangle $\{u, v, w\}$.\label{batched-triangle:stop-list-2} } } } } \end{algorithm} \triangleslist* \begin{proof} For any inserted edge $\{u, v\}$, node $u$ sends $ID_v$ to all its neighbors (and similarly for node $v$). Any neighbor $w$ of both $u$ and $v$ (where neither $u$ nor $v$ is deleted from its adjacency list) lists the new triangle $\{u, v, w\}$. Edges $\{u, w\}$ and/or $\{v, w\}$ may also be new edges and the proof still holds.
Now, we consider node insertions. An inserted node, $u$, creates a triangle if it is adjacent to two newly inserted edges, $\{u, v\}, \{u, w\}$, and the edge $\{v, w\}$ exists in the graph or is newly inserted. If $\{v, w\}$ already exists, then $v$ sends $ID_u$ to $w$ and $w$ sends $ID_u$ to $v$; thus, both $v$ and $w$ now list triangle $\{u, v, w\}$. If $\{v, w\}$ is newly inserted, then $v$ and $w$ would still send $ID_u$ to each other and also send each other's IDs to $u$. Thus, all three nodes now list the triangle. Finally, we consider node/edge deletions. For any destroyed triangle $\{u, v, w\}$, suppose without loss of generality that $u$ lists it. Either $u$ is adjacent to an edge deletion, in which case $u$ stops listing $\{u, v, w\}$; or $u$ is still adjacent to both $v$ and $w$ and edge $\{v, w\}$ is deleted. In the second case, $v$ and $w$ both send $u$ each other's ID and node $u$ stops listing $\{u, v, w\}$. A deleted node is adjacent to all remaining nodes of the triangle, so each remaining node sees an adjacent edge deletion and stops listing the triangle. Since each node is adjacent to $O(1)$ updates and sends a message of size equal to the number of adjacent updates times the size of each node's ID (where the size of each ID is $O(\log n)$), the total bandwidth used is $O(\log n)$. \end{proof} \section{Batched Clique Listing}\label{sec:batched-clique} A slightly modified algorithm allows us to perform $\mathsf{List}\xspace(K_s)$ under the same constraints; the proof is more involved than in the $K_3$ case, though it uses the same concepts we developed in the previous sections. The modification computes the cliques using the triangles listed after a batch of updates, similarly to~\cref{alg:list-cliques}. Unlike the triangle case, but as with listing cliques under single updates (\cref{thm:list-clique}), we cannot handle edge insertions in $O(\log n)$ bandwidth. We give the new pseudocode for this procedure in~\cref{alg:batched-clique}.
This part of the procedure is identical to that given in~\cref{alg:list-cliques}. \SetKwFunction{FnBListClique}{BatchedListCliques} \begin{algorithm}[!t] \caption{\label{alg:batched-clique} Batched Listing Cliques} \textbf{Input:} Updates $\mathcal{U}$ which is a set of updates consisting of node insertions, node deletions, and/or edge deletions.\\ \textbf{Output:} Each $K_s$ is listed by at least one of its nodes and no set of $s$ nodes which is not a $K_s$ is wrongly listed.\\ \Fn{\FnBListClique{$\mathcal{U}$}}{ Run \FnBListTriangle{$\mathcal{U}$} (\cref{alg:batched-triangle}).\\ \For{all $v \in V$}{\label{batched-clique:iterate-node} \For{every $S \subseteq N(v)$ of $s-1$ neighbors}{\label{batched-clique:every-subset} \If{$v$ lists $\{v\} \cup A$ as a triangle for every subset of $2$ nodes $A = \{a, b\}\subseteq S$}{\label{batched-clique:all-subsets-triangle} $v$ lists $S \cup \{v\}$ as a $K_s$.\label{batched-clique:list-new-clique} } } $v$ stops listing every clique containing a destroyed triangle.\\ } } \end{algorithm} To prove the main theorem in this section,~\cref{lem:batched-clique}, we only need to prove a slightly different version of \cref{lem:node-insertion-triangle} that still holds in this setting. Namely, we modify~\cref{lem:node-insertion-triangle} so that it holds for triangles created by two or more updates in the same round. \begin{observation}\label{lem:batched-obs-1} Under node insertions and node/edge deletions, suppose $v$ is inserted in round $r$ and $u$ is inserted in round $r' \geq r$. If $\{u, v, w\}$ is a triangle in round $r'$, then in round $r'$, $v$ lists $\{u, v, w\}$ using $O(\log n)$ bandwidth. \end{observation} \begin{proof} The key difference between this observation and~\cref{lem:node-insertion-triangle} is that this observation also holds when $u$ is inserted in the same round $r$ as $v$. Suppose triangle $\{u, v, w\}$ is created in round $r'$. We first consider the case when $r' > r$.
In this case, $w$ sends $ID_u$ to $v$ and $v$ sees $u$ in its adjacency list. Thus, $v$ lists triangle $\{u, v, w\}$. Now, suppose $r' = r$. In this case, $w$ still sends $ID_u$ to $v$ and $v$ sees $u$ in its adjacency list; thus, $v$ lists $\{u, v, w\}$. There are multiple scenarios for edge deletions that destroy triangle $\{u, v, w\}$. First, if one edge, $\{u, v\}$, is deleted, then $u$ and $v$ send each other's IDs to $w$ in deletion messages. Thus, $w$ stops listing $\{u, v, w\}$ if it previously listed $\{u, v, w\}$. If two edges are deleted, then every node is adjacent to at least one deletion and will stop listing the triangle. A node deletion causes at least one adjacent edge to be deleted from every remaining node in the triangle; thus, any remaining node(s) will stop listing the triangle. \end{proof} \begin{observation}\label{lem:batched-obs-2} Under node insertions and node/edge deletions, if triangle $\{u, v, w\}$ is created in round $r$ from two or more node insertions, then all three nodes in the triangle list it. \end{observation} \begin{proof} Without loss of generality, suppose $u$ and $v$ are inserted in round $r$. Then, $u$ and $v$ see each other in their adjacency lists. Node $w$ sends $ID_u$ and $ID_v$ to both $u$ and $v$. Hence, both $u$ and $v$ list triangle $\{u, v, w\}$. Node $w$ receives $ID_u$ from $v$ and $ID_v$ from $u$ and sees both $u$ and $v$ in its adjacency list. Hence, node $w$ lists $\{u, v, w\}$. When all three nodes are inserted in the same round, each node receives the other nodes' IDs via messages and successfully lists the triangle. The remainder of this proof is identical to the second part of the proof of~\cref{lem:batched-obs-1}. \end{proof} \cliqueslist* \begin{proof} The key to this proof, as in the case for one update, is that there exists one node that lists all triangles incident to it in a new clique and hence lists the clique.
By~\cref{lem:batched-obs-1}, the node inserted in the earliest round for each $K_s$ lists it. If all nodes in a $K_s$ are inserted in the current round, then by~\cref{lem:batched-obs-2} and~\cref{obs:list-clique-triangles}, all nodes in the clique list it. We now show that each node which lists a clique will stop listing it when it is destroyed. By~\cref{lem:triangle-deletion}, at least one triangle is destroyed for every node in a clique $C$ if an edge deletion destroys $C$. Then, a triangle is destroyed by either an edge deletion or a node deletion (or both). For the deletion of a node $v$, an edge incident to every other node of any triangle containing $v$ is deleted. Thus, each node listing a triangle containing $v$ will stop listing it when $v$ is deleted. For any edge deletion $\{u, v\}$ that destroys a triangle $\{u, v, w\}$, if edges $\{u, w\}$ and $\{v, w\}$ are not deleted, then $w$ receives a delete message from $u$ and a delete message from $v$, each containing the other's ID. Then, $w$ knows that triangle $\{u, v, w\}$ is destroyed and stops listing it. We know every node $v$ that lists $C$ lists all of the triangles composed of nodes in $C$ that are incident to $v$. We just showed that every node that lists a triangle successfully stops listing it when it is destroyed. It follows that each node which lists $C$ stops listing $C$ when it is destroyed. Since we are guaranteed $O(1)$ incident updates to every node, by~\cref{alg:batched-clique}, each node sends $O(1)$ IDs for each batch of updates and the size of each message is still $O(\log n)$. \end{proof} \section{Open Questions} We hope the observations we make in this paper will be useful for future work. The open question we find most interesting in this area concerns techniques for listing other induced subgraphs that are not cliques in one round under very small bandwidth.
A preliminary look at listing $k$-paths and $k$-cycles in one round under $O(1)$ bandwidth shows it to be impossible if node deletions are allowed, for the following reason. Suppose we are given a subgraph $H$ with radius greater than $2$ and suppose $v$ is currently listing $H$. Then a deletion of a node $u$ that destroys the subgraph, where $u$ is at a distance greater than $2$ from $v$, will not be detected by $v$ in one round. However, results could perhaps be found for other types of subgraphs, or when there are no node/edge deletions. Another interesting open question is whether these results could be extended to an \emph{arbitrarily} large number of edge/node updates, specifically extending our results given in the batched setting. \section{Clique Detection and Listing}\label{sec:clique} In this section, we show our main result by providing a simple upper bound for the $1$-round bandwidth complexity of $\mathsf{List}\xspace(K_s)$ under node insertions and node/edge deletions. Specifically, we show that $\mathsf{List}\xspace(K_s)$ has $1$-round $O(1)$-bandwidth complexity. This directly resolves Open Question $4$ in~\cite{BC19}. We describe our algorithm below and give its pseudocode in~\cref{alg:list-cliques}. To begin, as a simple warm-up illustration of our technique, we re-prove Theorem 3.3.3 of~\cite{BC19} using only the node insertion portion of the algorithm.
\SetKwFunction{FnListCliques}{ListCliques} \begin{algorithm}[!t] \caption{\label{alg:list-cliques} Listing Cliques} \textbf{Input:} Update $U$ which can be a node insertion, node deletion, or edge deletion.\\ \textbf{Output:} Each $K_s$ is listed by at least one of its nodes and no set of $s$ nodes which is not a $K_s$ is wrongly listed.\\ \Fn{\FnListCliques{$U$}}{ \If{$U$ adds $u$ to $N(v)$}{\label{clique:add-adj} $v$ sends $\textsc{New}\xspace$ to all $w \in N(v)$.\label{clique:send-new}\\ } \If{$w$ receives $\textsc{New}\xspace$ from $v$ and $u$ is added to $N(w)$}{\label{clique:receive-new} $w$ starts listing new triangle $\{u, v, w\}$.\label{clique:start-listing}\\ } \If{$U$ deletes $u$ from $N(v)$}{\label{clique:destroy-adj} $v$ sends $\textsc{Delete}\xspace$ to all $w \in N(v)$.\label{clique:send-del}\\ \For{$w \in N(v)$}{ \If{$v$ lists triangle $\{u, v, w\}$}{\label{clique:list-triangle} $v$ stops listing triangle $\{u, v, w\}$.\label{clique:stop-listing} } } } \If{$v$ receives $\textsc{Delete}\xspace$ from $u$ and $w$}{\label{clique:receive-2-del} \If{$v$ lists triangle $\{u, v, w\}$}{\label{clique:list-new-2} $v$ stops listing triangle $\{u, v, w\}$.\label{clique:stop-list-2} } } \For{all $v \in V$}{\label{clique:iterate-node} \For{every $S \subseteq N(v)$ of $s-1$ neighbors}{\label{clique:every-subset} \If{$v$ lists $\{v\} \cup A$ as a triangle for every subset of $2$ nodes $A = \{a, b\}\subseteq S$ }{\label{clique:all-subsets-triangle} $v$ lists $S \cup \{v\}$ as a $K_s$.\label{clique:list-new-clique} } } $v$ stops listing every clique containing a destroyed triangle.\\ } } \end{algorithm} In~\cref{alg:list-cliques}, for every update which adds a node $u$ to an adjacency list $N(v)$ (\cref{clique:add-adj}), $v$ sends an $O(1)$ sized message to every neighbor $w \in N(v)$ (\cref{clique:send-new}). If any neighbor $w$ also gets $u$ added to its adjacency list $N(w)$ (\cref{clique:receive-new}), then $w$ lists triangle $\{u, v, w\}$. 
If instead $u$ is deleted from $N(v)$ (\cref{clique:destroy-adj}), then $v$ first sends $\textsc{Delete}\xspace$ to all its neighbors $w \in N(v)$ (\cref{clique:send-del}). Furthermore, for every neighbor $w \in N(v)$, if $v$ lists triangle $\{u, v, w\}$ (\cref{clique:list-triangle}), then $v$ stops listing $\{u, v, w\}$ (\cref{clique:stop-listing}). If a node $v$ receives a $\textsc{Delete}\xspace$ message from two of its neighbors (\cref{clique:receive-2-del}) and if it lists $\{u, v, w\}$ as a triangle (\cref{clique:list-new-2}), then it stops listing $\{u, v, w\}$ (\cref{clique:stop-list-2}). Finally, after receiving all the messages for this round, every vertex $v \in V$ determines the subsets $S \subseteq N(v)$ of $s-1$ of its neighbors (\cref{clique:every-subset}) where, for every subset of $2$ nodes $\{a, b\} \subseteq S$, the set $\{a, b\} \cup \{v\}$ is a listed triangle (\cref{clique:all-subsets-triangle}); each such $S \cup \{v\}$ is a $K_s$ and $v$ lists it as a $K_s$. Note that~\cref{alg:list-cliques} treats a node that is deleted and reinserted like a new node when it is reinserted (i.e.\ nodes do not distinguish a completely new neighbor from one that is deleted and reinserted at a later time). We now prove a set of useful observations which we use to prove our main theorem regarding $\mathsf{List}\xspace(K_s)$. The first observation below states that any node inserted in round $r$ will list every triangle containing it that is formed after its insertion. This is the crux of our analysis since this, combined with our other observations, implies that every clique \emph{is listed by its earliest inserted node.} \begin{observation}\label{lem:node-insertion-triangle} Under node insertions and node/edge deletions, suppose $v$ is inserted in round $r$ and $u$ is inserted in round $r' > r$. If $\{u, v, w\}$ is a triangle in round $r'$, then in round $r'$, $v$ lists $\{u, v, w\}$ using $O(1)$ bandwidth.
\end{observation} \begin{proof} We prove this via induction. We assume our observation holds for the $j$-th node insertion and prove it for the $(j+1)$-st insertion. The observation trivially holds for the initial graph since all nodes know the topology of the initial graph and thus list all triangles they are part of. Let $v$ be a node inserted in round $r \leq j$ and $u$ be a node inserted in round $j + 1$. If $u$ forms a triangle with $v$ and another node $w$ when node $u$ is inserted, then $u$ must be inserted with an edge to both $v$ and $w$, and edge $\{v, w\}$ must already exist in the graph. Edge $\{v, w\}$ must already exist since edges are only inserted as a result of node insertions; hence, if an edge between $v$ and $w$ exists, it must have been inserted when either $v$ or $w$ was inserted (or it existed in the initial graph), whichever one was inserted later. When a node receives a new neighbor, it sends $\textsc{New}\xspace$ to all its neighbors indicating it received a new neighbor (\cref{clique:send-new} of~\cref{alg:list-cliques}). Since both $v$ and $w$ are adjacent and send $\textsc{New}\xspace$ to each other, $v$ receives $\textsc{New}\xspace$ from $w$ and knows that $u$ must be its new neighbor. Thus, $v$ lists the triangle containing $w$ and $u$. Because this holds for all vertices inserted in rounds $r \leq j$, we have proven that every node inserted in round $r \leq j$ correctly lists every triangle created in round $j + 1$ that contains it. We now show that $v$ does not incorrectly list a set of three nodes $\{u, v, w\}$ as a triangle when the set is not a triangle. That is, we show that on an edge or node deletion, $v$ stops listing triangles that are destroyed. We first show that any node which lists a triangle successfully stops listing it after an edge deletion. First, if an edge deletion removes a neighbor of $v$ that is part of a triangle that $v$ lists, then $v$ stops listing this triangle (\cref{clique:stop-listing}).
Node $v$ also sends $\textsc{Delete}\xspace$ to all its neighbors (\cref{clique:send-del}). Suppose the edge deletion happens between $u$ and $v$, and $w$ lists triangle $\{u, v, w\}$. Then, $w$ receives $\textsc{Delete}\xspace$ from $u$ and $v$ in round $j + 1$ and knows that edge $\{u, v\}$ is deleted; $w$ then stops listing $\{u, v, w\}$ (\cref{clique:stop-list-2,clique:list-new-2,clique:receive-2-del}). Hence, after an edge deletion in round $j + 1$, all nodes which list a triangle destroyed by the deletion stop listing the triangle. What remains to be shown is that a node which lists a triangle successfully stops listing it after a node deletion. When a triangle is destroyed due to a node deletion, one of its nodes must have been deleted. Hence, any node $u$ which lists a triangle $\{u, v, w\}$ and sees the deletion of one of its two neighbors from its adjacency list will stop listing it. \end{proof} \begin{observation}[\cite{BC19}]\label{obs:list-clique-triangles} If a node lists all the triangles containing it within a clique of size $k \geq 3$, then the node lists the clique. \end{observation} \begin{proof} This observation was made in~\cite{BC19}. If a node lists all triangles containing it within a clique, then it lists all edges between the nodes in each of these triangles. From this information, it can compute and list every clique that may be formed from the set of nodes in these triangles. \end{proof} \begin{theorem}[Theorem 3.3.3~\cite{BC19}]\label{thm:list-node-insert} The deterministic $1$-round bandwidth complexity of $\mathsf{List}\xspace(K_s)$ under node insertions is $O(1)$. \end{theorem} \begin{proof} By~\cref{lem:node-insertion-triangle}, under node insertions, any node $v$ inserted in round $r$ lists every triangle containing it and a node $u$ inserted in round $r' > r$, in one round using $O(1)$ bandwidth, deterministically. Then, suppose $K_s$ is formed in round $j$ after the $j$-th node insertion.
Each node of $K_s$ in the initial graph lists all subsequent triangles formed in $K_s$ from node insertions (or the first inserted node can do this if no nodes in $K_s$ were present in the initial graph). By~\cref{obs:list-clique-triangles}, this node lists $K_s$ in round $j$ using $O(1)$ bandwidth. \end{proof} \begin{corollary}\label{cor:list} Given node insertions, the $i$-th node $v$ inserted for clique $K_s$ lists the smaller $K_{s - i + 1}$ clique of $K_s$ composed of $v$ and the nodes inserted after $v$. \end{corollary} \begin{proof} By~\cref{lem:node-insertion-triangle}, each node $v$ lists all triangles containing $v$ and incident to nodes inserted after it. Hence, each node lists all incident cliques whose other nodes were inserted after it. \end{proof} \begin{observation}\label{lem:min-can-list} If a node $v$ lists a clique of size $k \geq 3$, then $v$ lists all triangles containing $v$ in the clique. \end{observation} \begin{proof} This observation is easily shown: if a node lists a clique, then it lists all nodes contained in the clique by definition. Then, because the node knows that such a clique exists, it can find all combinations of $3$ nodes in the clique and list such combinations as all triangles in the clique. \end{proof} \begin{observation}\label{lem:triangle-deletion} Any node/edge deletion in a clique results in (at least) one adjacent triangle deletion (where the triangle is composed of nodes in the clique) for every node in the clique. \end{observation} \begin{proof} A clique consists of the set of triangles formed by all subsets of $3$ vertices from the clique. Suppose an edge deletion occurs between vertices $u$ and $v$ of a clique $C$. Any other vertex $w \in C$, $w \neq u, v$, must be adjacent to $u$ and $v$, and hence formed a triangle with $u$ and $v$. Since the edge between $u$ and $v$ is destroyed, $w$ cannot form a triangle with $u$ and $v$, and hence a triangle adjacent to $w$ is destroyed. A node deletion deletes every edge adjacent to the deleted node, so the same argument applies to each such deleted edge.
\end{proof} Given the above observations and the relevant lemmas and theorems from~\cite{BC19}, we are now ready to prove the main theorem of this section, which resolves Open Question $4$ of~\cite{BC19}. \insertsthm* \begin{proof} To begin, we stipulate that, upon receiving a deletion message, a node never starts listing a clique that it did not list previously. \cref{lem:node-insertion-triangle} shows that every node $v$ which is inserted in round $r$ lists every triangle containing it and a node $u$ inserted in round $r' > r$. Using this observation, we handle the updates in the following way (reiterating the strategy employed by our key observations). First, we show that any created clique is listed by at least one of the nodes it contains. By~\cref{lem:node-insertion-triangle} and~\cref{obs:list-clique-triangles}, the node that is inserted first in a $K_s$ clique lists the $K_s$ clique that is created by node insertions. We now show the crux of our proof: that each node which lists $K_s$ also successfully stops listing it after it is destroyed. As proven previously in~\cref{lem:node-insertion-triangle}, any node which lists a triangle stops listing it when it is destroyed. Given any edge deletion or node deletion which destroys a clique $C$, it must destroy at least one triangle adjacent to every node in the clique $C$ by~\cref{lem:triangle-deletion}. Every node which lists the clique lists all triangles containing it in the clique by~\cref{lem:min-can-list}. Then, every node $v$ which lists a clique will know if one of its listed triangles is destroyed; then $v$ will also know that the clique is destroyed if a triangle that makes up the clique is destroyed. Finally, to conclude our proof, we explain one additional scenario that may occur: a clique which is destroyed may be added back again later on.
Any clique which is destroyed due to an edge deletion cannot be added again unless a node deletion followed by another node insertion occurs, since we only allow edge insertions associated with node insertions. Thus, by~\cref{cor:list}, the earliest inserted node(s) in this newly formed clique (or the nodes present in the initial graph, if any) list this clique. Hence, we have proven that any created clique is listed by at least one of its nodes and that any node which lists the clique also stops listing it after it is destroyed, concluding the proof of our theorem. The bandwidth is $O(1)$ since each node sends either no message, \textsc{New}\xspace, or \textsc{Delete}\xspace to each of its neighbors each round. \end{proof} \section{Introduction} Detecting and listing subgraphs under limited bandwidth conditions is a fundamental problem in distributed computing. Although the static version of this problem has been studied by many researchers in the past~\cite{Abboud17,CCG21,CGL20, CPSZ21,ChangPZ19,CS19,CS20, DruckerKO13, EFFKO19,FischerGKO18,FraigniaudMORT17,GonenO17, HPZZ20,IzumiG17,KorhonenR17,Pandurangan18} in both the upper bound and lower bound settings, the dynamic version of such problems often requires different techniques. Triangle and clique listing are problems where every occurrence of a triangle or clique in the graph is listed by at least one node in the triangle or clique. More specifically, the question we seek to answer is: given a change in the topology of the graph under one or more updates, can we accurately list all cliques in the updated graph? For certain settings, this question has an easy answer. For example, in the \textsc{Local}\xspace model, where communication is synchronous and error-free and messages can have unrestricted size, it is trivial for any node to list all $k$-cliques adjacent to it after any set of edge insertions/deletions and/or node insertions/deletions.
In the \textsc{Local}\xspace model, every node would broadcast the entirety of its adjacency list to all its neighbors each round. Thus, each node can reconstruct the edges between its neighbors from these messages and it is easy to solve the clique listing problem using the reconstructed neighborhood. In the traditional \textsc{Congest}\xspace model, messages are passed between neighboring nodes in synchronous rounds where each message has size $O(\log n)$. Detecting and listing triangles and cliques in the \textsc{Congest}\xspace model turn out to be much harder problems. This note focuses on triangle and clique \emph{listing} for which a number of previous works provided key results. The summary of these results can be found in~\cref{table:congest-results}. In the table, $\Delta$ is the maximum degree in the input graph. All of these results focus on the \emph{static} setting where the topology of the graph does not change. \begin{table}[htb!] \centering \footnotesize \begin{tabular}{| c | c |} \toprule Problem & Rounds \\ \hline Triangle Listing & \shortstack{$\tilde{O}(n^{3/4})$~\cite{IzumiG17}\\ $\tilde{O}(n^{1/2})$~\cite{ChangPZ19} \\ $\tilde{O}(n^{1/3})$~\cite{CS19}}\\ \hline Triangle Listing Lower Bound & $\tilde{\Omega}(n^{1/3})$~\cite{Pandurangan18,IzumiG17}\\ \hline Deterministic Triangle Listing & \shortstack{$n^{2/3 + o(1)}$~\cite{CS20}\\ $O(\Delta/\log n + \log \log \Delta)$~\cite{HPZZ20}}\\ \hline $4$-Clique Listing & $n^{5/6 + o(1)}$~\cite{EFFKO19}\\ \hline $4$-Clique Listing Lower Bound & $\tilde{\Omega}(n^{1/2})$~\cite{CzumajK20}\\ \hline $5$-Clique Listing & \shortstack{$n^{73/75 + o(1)}$~\cite{EFFKO19}\\ $n^{3/4 + o(1)}$~\cite{CGL20}}\\ \hline $k$-Clique Listing & \shortstack{$\tilde{O}(n^{k/(k+2)})$ ($k \geq 4, k \neq 5$)~\cite{CGL20}\\ $\widetilde{O}(n^{1-2/k})$~\cite{CCG21}\\ $n^{1-2/k + o(1)}$~\cite{CLV22}}\\ \hline $k$-Clique Listing Lower Bound & \shortstack{$\tilde{\Omega}(n^{1-2/k})$~\cite{FischerGKO18}\\ $\tilde{\Omega}(n^{1/2}/k)$ ($k 
\leq n^{1/2}$)~\cite{CzumajK20}\\ $\tilde{\Omega}(n/k)$ ($k > n^{1/2}$)~\cite{CzumajK20}}\\ \bottomrule \end{tabular} \caption{Results for subgraph listing problems in the \textsc{Congest}\xspace model.} \label{table:congest-results} \end{table} Additional works also provide lower bounds under very small bandwidth settings~\cite{Abboud17, IzumiG17,FischerGKO18,Pandurangan18}. A detailed description of all the aforementioned results can be found in the recent comprehensive survey of Censor-Hillel~\cite{censorhillel21}. In this paper, we focus on the \emph{dynamic} setting for subgraph listing problems, for which we are able to obtain $1$-round algorithms that require $O(1)$ or $O(\log n)$ bandwidth. In the dynamic setting, edge and node updates are applied to an initial (potentially empty) graph. The updates change the topology of the graph. All nodes in the initial graph know the graph's complete topology. After the application of each round of updates, the nodes collectively must report an accurate list of the correct set of triangles or cliques. We build on the elegant results of~\cite{BC19}, who define and investigate in great depth the question of detecting and listing triangles and cliques in dynamic networks. The model they present in their paper can capture real-world behavior such as nodes joining or leaving the network or communication links which appear or disappear between pairs of nodes at different points in time. In this paper, we make several observations about triangle listing which may be used to re-prove results provided in their paper as well as resolve an open question stated in their work. Our paper is structured as follows. First, we formally define the model in~\cref{sec:prelims}. Then, we summarize our results in~\cref{sec:contributions}. \cref{sec:clique} gives our main result for listing $k$-cliques under node insertions/deletions and edge deletions.
\cref{sec:wedges} describes our result for listing wedges under node deletions and edge insertions/deletions. \cref{sec:batched-triangle} describes our result on triangle listing under batched updates, and~\cref{sec:batched-clique} gives our result on clique listing under batched updates. \section*{Acknowledgements} We are very grateful to our anonymous reviewers whose suggestions greatly improved the presentation of our results. In particular, we thank one anonymous reviewer for the simple argument that $k$-path and $k$-cycle listing in one round is difficult for radius greater than $2$. \bibliographystyle{alpha} \section{Preliminaries}\label{sec:prelims} The model we use in our paper is the same as the model used in~\cite{BC19}. For completeness, we restate their model as well as the problem definitions in this section. The network we consider in this dynamic setting can be modeled as a sequence of graphs: $(G_0, \dots, G_r)$. The initial graph $G_0$ represents the starting state of the network. All nodes in the initial graph $G_0$ know the graph's complete topology. Each subsequent graph in the sequence is either identical to the preceding graph or differs from it by a single topology change: either an edge insertion, edge deletion, node insertion (along with its adjacent edges), or node deletion (along with its adjacent edges). Later, we also consider graphs where each node may be incident to $O(1)$ topology changes per round. We denote the neighbors of node $v$ in the $i$-th graph by $N_i(v)$; we omit the subscript $i$ when the current graph is obvious from context. We assume the network is synchronous. In each round, each node can send to each one of its neighbors a message containing $B$ bits, where $B$ is defined as the \emph{bandwidth} of the network. The messages sent to different neighbors can be different. In this paper, we focus on problems that can be solved with bandwidth $B = O(1)$ or $B = O(\log n)$.
We assume that each node has a unique ID and each node in the network knows the IDs of all its neighbors. In other words, we assume that each node has an adjacency list containing the IDs of all its neighbors and knows when a new node (previously not in its adjacency list) becomes connected to it. Furthermore, we assume that the size of any ID is $O(\log{n})$ bits. Since in both~\cite{BC19} and our work the number of vertices in the graph can change dynamically, we define $n$ to be the number of vertices in the graph in the current round. Each round proceeds in three synchronous parts as follows. \begin{enumerate} \item At the start of each round, the topological change occurs. Nodes are inserted into/deleted from adjacency lists and new communication links are established or destroyed between pairs of nodes. A node can determine whether an update occurred on an adjacent edge by comparing its current list of neighbors with its list of neighbors from the previous round. However, this also means that a node $u$ cannot distinguish between the insertion of an edge $(u, v)$ and the insertion of node $v$. \item Then, nodes exchange messages using the (potentially new) communication links. Nodes that could previously communicate can no longer do so once their links are destroyed by new deletions. \item Finally, nodes receive messages and list the triangles or cliques for this round. \end{enumerate} In this paper, we only deal with $1$-round algorithms, i.e., algorithms in which the output of each node in the graph is correct after $1$ round of communication following the last topological change. We define the \emph{$1$-round bandwidth complexity} of an algorithm to be the minimum bandwidth $B$ for which such a $1$-round algorithm exists. Our paper only deals with \emph{deterministic} $1$-round algorithms. We say a subgraph is \emph{created} if it contains an edge newly inserted in the current round.
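As an illustration only (none of the names below appear in the paper), the three synchronous parts of a round can be sketched in Python; `DynamicNode`, `run_round`, and the message tags are our own simplifications of the model above, restricted to edge updates.

```python
# Sketch of one synchronous round of the dynamic model (illustration only;
# class and function names are our own). Restricted to edge updates.

class DynamicNode:
    def __init__(self, node_id):
        self.id = node_id
        self.neighbors = set()       # current adjacency list (neighbor IDs)
        self.prev_neighbors = set()  # adjacency list from the previous round
        self.inbox = []              # messages received this round

    def detect_local_changes(self):
        """A node detects adjacent updates by comparing adjacency lists."""
        inserted = self.neighbors - self.prev_neighbors
        deleted = self.prev_neighbors - self.neighbors
        return inserted, deleted

def run_round(nodes, update):
    # Part 1: the topological change occurs.
    kind, u, v = update
    if kind == "insert_edge":
        nodes[u].neighbors.add(v)
        nodes[v].neighbors.add(u)
    elif kind == "delete_edge":
        nodes[u].neighbors.discard(v)
        nodes[v].neighbors.discard(u)

    # Part 2: nodes exchange messages over (potentially new) links;
    # messages to different neighbors may differ.
    for node in nodes.values():
        inserted, deleted = node.detect_local_changes()
        for w in node.neighbors:
            for x in inserted:
                nodes[w].inbox.append(("NEW", node.id, x))
            for x in deleted:
                nodes[w].inbox.append(("DELETE", node.id, x))

    # Part 3: nodes process their inboxes and produce this round's output
    # (listing logic omitted); finally, the round's state is committed.
    for node in nodes.values():
        node.prev_neighbors = set(node.neighbors)
```

Note that, exactly as in the model, a node here cannot tell from a `NEW` message alone whether the endpoint is a freshly inserted node or an existing node gaining an edge.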
We say a subgraph is \emph{destroyed} if it is listed by at least one node in the previous round but no longer exists in the current round. When a node is deleted, it can no longer send any messages or list any subgraphs in the round where it is deleted (because it no longer exists). In this paper, we solve the triangle and $k$-clique \emph{listing} problems with $O(1)$ or $O(\log n)$ bandwidth. We denote a clique with $s$ vertices by $K_s$. Specifically, the problem $\mathsf{List}\xspace(H)$ is defined as follows: given a generic unlabeled graph $H$, each labeled subgraph of the current graph $G_i \in (G_0, G_1, \dots, G_r)$ that is isomorphic to $H$ must be listed by at least one node. A node \emph{lists} a subgraph if it lists the IDs of all nodes in the subgraph. Furthermore, every listed subgraph must be isomorphic to $H$. Naturally, all of our listing algorithms also apply to the \emph{detection} setting where at least one node in the network detects the appearance of one or more copies of $H$ (the problem denoted by $\mathsf{Detect}\xspace(H)$ in~\cite{BC19}). As shown in Observation 1 of~\cite{BC19}, $B_{\mathsf{Detect}\xspace(H)} \leq B_{\mathsf{List}\xspace(H)}$, so we only discuss $\mathsf{List}\xspace(H)$ in the following sections; the results also transfer to $\mathsf{Detect}\xspace(H)$. \subsection{Batched Updates Model} We define a new model where more than one update can occur in each round. Specifically, for every node $v \in V_i$ in the current graph $G_i = (V_i, E_i)$, at most $O(1)$ updates are incident to $v$. This means that any node in $G_{i-1} = (V_{i-1}, E_{i-1})$ is incident to $O(1)$ new edge insertions and deletions in $G_i$. A node $u \in V_{i} \setminus V_{i-1}$ is a newly inserted node and \emph{all} edges incident to $u$ are newly inserted edges. Hence, $u$ is incident to $O(1)$ edges when it is inserted. We denote the subgraph $H$ listing problem in this model as $\mathsf{BatchedList}\xspace(H)$.
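As a small sanity check one might use alongside this model (our own helper, not part of the paper), the $O(1)$-incidence constraint on a batch of edge updates can be verified directly; the function name and the constant bound `c` are hypothetical.

```python
# Our own helper (not from the paper): verify that a batch of edge updates
# respects the O(1)-incidence constraint, i.e. every node is incident to at
# most c updates in one round. The bound c is a hypothetical constant.
from collections import Counter

def batch_respects_incidence(updates, c):
    """updates: iterable of (u, v) pairs (edge insertions or deletions)."""
    touched = Counter()
    for u, v in updates:
        touched[u] += 1
        touched[v] += 1
    return all(count <= c for count in touched.values())
```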
Notably, we do not need the assumption that each newly inserted node $v$ can tell which of its neighbors are also newly inserted nodes. This is because all edges incident to newly inserted nodes are newly inserted edges. Thus, a newly inserted node $v$ can send its entire adjacency list to all its neighbors. An important part of our model is that all nodes, including the newly inserted nodes, are incident to $O(1)$ updates, so newly inserted nodes can send their adjacency lists to all their neighbors using $O(\log n)$ bandwidth. \section{Our Contributions}\label{sec:contributions} All of the algorithms in our paper are deterministic $1$-round algorithms that use $O(1)$ or $O(\log n)$ bandwidth. Specifically, we show the following main results in this paper. We first answer Open Question 4 posed in~\cite{BC19}. The previous algorithm given in~\cite{BC19} under this setting required $O(\log n)$ bandwidth. \begin{restatable}{thm}{insertsthm}\label{thm:list-clique} The deterministic $1$-round bandwidth complexity of listing cliques, $\mathsf{List}\xspace(K_s)$, under node insertions and node/edge deletions is $O(1)$. \end{restatable} To prove this result, we also show a number of simple but new observations about triangle listing in~\cref{sec:clique} which may prove useful for future research in this area. In addition to cliques, we give an algorithm for wedges, a problem that has not been considered in previous literature. An \emph{induced wedge} among a set of three nodes $\{u, v, w\}$ in the input graph is a path of length $2$ that uses all three nodes such that the induced subgraph of the three nodes is not a cycle. We denote a wedge by $\Gamma$. \begin{restatable}{thm}{wedgethm}\label{lem:wedge-listing} The deterministic $1$-round bandwidth complexity of listing wedges, $\mathsf{List}\xspace(\Gamma)$, under edge insertions and node/edge deletions, is $O(\log n)$.
\end{restatable} Censor-Hillel \etal~\cite{CKS21} studied the setting of \emph{highly} dynamic networks in which the number of topology changes per round is \emph{unlimited}. In this setting, they showed $O(1)$-amortized round complexity algorithms for $k$-clique listing and $4$- and $5$-cycle listing. In our $1$-round \textsc{Congest}\xspace setting, we show the following two theorems when multiple updates occur in the same round. Our results are the first to consider more than one update in the $1$-round, low-bandwidth setting. \begin{restatable}{thm}{triangleslist}\label{lem:batched} The deterministic $1$-round bandwidth complexity of listing triangles, $\mathsf{BatchedList}\xspace(K_3)$, when each node is incident to $O(1)$ updates, is $O(\log n)$ under node/edge insertions and node/edge deletions. \end{restatable} \begin{restatable}{thm}{cliqueslist}\label{lem:batched-clique} The deterministic $1$-round bandwidth complexity of listing cliques of size $s$, $\mathsf{BatchedList}\xspace(K_s)$, when each node is incident to $O(1)$ updates, is $O(\log n)$ under node insertions and node/edge deletions. \end{restatable} \section{Wedge Listing}\label{sec:wedges} An \emph{induced wedge} $(u, v, w)$ in the input graph $G = (V, E)$ is a path of length $2$ where edges $(u, v)$ and $(v, w)$ exist and no edge exists between $u$ and $w$ in the induced subgraph consisting of nodes $\{u, v, w\} \subseteq V$. We denote a wedge by $\Gamma$. In this section, we provide an algorithm for listing \emph{induced} wedges. Listing non-induced wedges is a much simpler problem since each node can simply list pairs of its neighbors without knowing whether an edge exists between each pair. For simplicity, from here onward, we say ``wedge" to mean induced wedge. \cref{lem:wedge-listing} is the main theorem in this section for listing wedges. We give the pseudocode for the algorithm used in the proof of this theorem in~\cref{alg:list-wedges}.
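For reference, the static counterpart of this definition can be checked by brute force. The sketch below (our own, not the paper's $1$-round algorithm) lists all induced wedges of a graph given as an adjacency dictionary, and can serve as a ground truth when testing a dynamic algorithm.

```python
# Brute-force reference for the static definition above (our own sketch, not
# the paper's 1-round algorithm): list all induced wedges (u, v, w) with
# center v, where edges {u, v} and {v, w} exist but {u, w} does not.
from itertools import combinations

def induced_wedges(adj):
    """adj: dict mapping each node to the set of its neighbors."""
    wedges = set()
    for v in adj:
        for u, w in combinations(sorted(adj[v]), 2):
            if w not in adj[u]:  # missing edge {u, w} => induced wedge
                wedges.add((u, v, w))
    return wedges
```

On a path $0$--$1$--$2$ this returns the single wedge $(0, 1, 2)$; on a triangle it returns nothing.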
Our algorithm uses listing triangles as a subroutine since an induced wedge can be formed from an edge deletion to a triangle. Listing wedges may be useful as a subroutine in future work that proves additional bounds for listing other small subgraphs using small bandwidth. \SetKwFunction{FnListWedges}{ListWedges} \begin{algorithm}[!t] \caption{\label{alg:list-wedges} Listing Wedges} \textbf{Input:} Update $U$ which can be a node deletion, edge insertion, or edge deletion.\\ \textbf{Output:} Each induced wedge is listed by at least one of its nodes and no set of three nodes which is not a wedge is wrongly listed as a wedge.\\ \Fn{\FnListWedges{$U$}}{ \If{$U$ adds $u$ to $N(v)$}{ $v$ sends $(\textsc{New}\xspace, ID_u)$ to all $w \in N(v)$.\label{wedge:edge-ins}\\ \For{every wedge $(v, x, u)$ that $v$ lists}{ $v$ stops listing wedge $(v, x, u)$. \label{wedge:stop-list-adj}\\ $v$ starts listing triangle $\{v, x, u\}$. \label{wedge:ins-triangle-list} } } \ElseIf{$U$ deletes $u$ from $N(v)$}{ $v$ sends $(\textsc{Delete}\xspace, ID_u)$ to all $w \in N(v)$.\label{wedge:edge-del}\\ \For{every triangle $\{v, x, u\}$ that $v$ lists}{ $v$ starts listing new wedge $(v, x, u)$.\label{wedge:del-list-new-wedge}\\ $v$ stops listing triangle $\{v, x, u\}$.\label{wedge:stop-del-list-triangle} } \For{every wedge $(x, v, u)$ and/or $(u, v, x)$ that $v$ lists}{\label{wedge:del-destroyed-wedge} $v$ stops listing $(u, v, x)$ and/or $(x, v, u)$.\label{wedge:del-stop-list}\\ } } \If{node $x$ receives $(\textsc{New}\xspace, ID_u)$ from $v$}{\label{wedge:receive-ins} \If{$x$ is not adjacent to $u$}{\label{wedge:not-adj} $x$ starts listing new wedge $(x, v, u)$.\label{wedge:list-new}\\ } \If{$x$ lists wedge $(v, x, u)$}{\label{wedge:lists-wedge-ins} $x$ stops listing wedge $(v, x, u)$.\label{wedge:stops-list-wedge-ins}\\ $x$ starts listing triangle $\{v, x, u\}$.\label{wedge:starts-list-triangle-ins}\\ }% } \If{$x$ receives $(\textsc{Delete}\xspace, ID_u)$ from $v$}{\label{wedge:receive-del} 
\If{$x$ lists wedge $(x, v, u)$}{\label{wedge:listed-wedge} $x$ stops listing wedge $(x, v, u)$.\label{wedge:stop-list}\\ } \If{$x$ lists triangle $\{x, v, u\}$}{\label{wedge:list-triang} $x$ starts listing new wedge $(v, x, u)$.\label{wedge:del-new}\\ $x$ stops listing triangle $\{v, x, u\}$.\label{wedge:del-new-stop-triangle} } } } \end{algorithm} We describe our algorithm below. All messages are sent \emph{in the same round}; multiple messages sent between $u$ and $v$ are concatenated with each other and sent as one message. On an update $U$, in~\cref{alg:list-wedges}, each node sends $O(1)$ bits indicating whether a neighbor was deleted from or added to its adjacency list. Let us denote these bits by $\textsc{Delete}\xspace$ and $\textsc{New}\xspace$, respectively. Furthermore, it sends the ID of the neighbor that was deleted or inserted. Let this ID be $\textsc{ID}\xspace_u$ for node $u$. This procedure is shown in~\cref{wedge:edge-ins,wedge:edge-del} in~\cref{alg:list-wedges} and all of the message sending is done in $1$ round. An edge insertion can form a triangle; thus, in the same round as the $(\textsc{New}\xspace, ID_u)$ message, node $v$ also starts listing the new triangle $\{v, x, u\}$ (\cref{wedge:ins-triangle-list,wedge:stop-list-adj}) and stops listing the wedge. A node/edge deletion can destroy a wedge (\cref{wedge:del-destroyed-wedge}), in which case the node stops listing the wedge (\cref{wedge:del-stop-list}). Similarly, for an edge/node deletion that destroys a triangle, $v$ starts listing wedge $(v, x, u)$ for every triangle $\{v, x, u\}$ that $v$ lists (\cref{wedge:del-list-new-wedge,wedge:stop-del-list-triangle}). Any node $x$ that receives $(\textsc{New}\xspace, ID_u)$ from a neighbor $v$ (\cref{wedge:receive-ins}) lists $(x, v, u)$ as a new wedge (\cref{wedge:list-new}) if it is not adjacent to $u$ (\cref{wedge:not-adj}).
If $x$ lists wedge $(v, x, u)$ (\cref{wedge:lists-wedge-ins}), then $x$ stops listing the wedge (\cref{wedge:stops-list-wedge-ins}) and starts listing triangle $\{v, x, u\}$ (\cref{wedge:starts-list-triangle-ins}). Any node $x$ that receives $(\textsc{Delete}\xspace, ID_u)$ from a neighbor $v$ (\cref{wedge:receive-del}) first checks if a wedge it lists is destroyed. If $x$ lists wedge $(x, v, u)$, then it stops listing wedge $(x, v, u)$ since it is destroyed (\cref{wedge:stop-list}). Then, it checks if the deletion destroys a triangle it lists. If it lists triangle $\{x, v, u\}$, then it lists a new wedge $(v, x, u)$, since a destroyed triangle creates a new wedge (\cref{wedge:del-new,wedge:del-new-stop-triangle}), and stops listing the triangle. \wedgethm* \begin{proof} We prove via induction that this algorithm achieves the following guarantees: \begin{enumerate}[(a)] \item any wedge is listed by at least one node, \item no set of three nodes that is not a wedge is incorrectly listed as a wedge, \end{enumerate} after any edge insertion or node/edge deletion. In the base case, the graph is either the input graph, in which every wedge is listed by at least one node, or the empty graph. We assume that at step $j$, each wedge in the graph is listed by some node in the wedge. We show that every wedge formed in round $j+1$ by some update is listed by at least one node in the wedge; then, we show that each wedge that is destroyed in round $j + 1$ is no longer listed by any node. We prove this by casework over the type of update: \begin{itemize} \item Edge insertion: Given an edge insertion $\{u, v\}$, node $v$ sees some node $u$ added to its adjacency list and sends $(\textsc{New}\xspace, \textsc{ID}\xspace_u)$ to all its neighbors in $N(v)$ (similarly for $u$). Suppose $x \in N(v)$ is a neighbor of $v$ but not of $u$. Then, $x$ can distinguish between triangle $\{x, v, u\}$ and wedge $(x, v, u)$ by seeing whether $u$ is in its adjacency list (i.e.
whether edge $\{x, u\}$ exists). Hence, $x$ lists wedge $(x, v, u)$. An edge insertion can cause a wedge to be destroyed if it forms a triangle from a wedge. Suppose $(x, v, u)$ is a wedge that becomes a triangle due to edge insertion $\{x, u\}$. If $x$ lists the wedge, then $x$ knows $\{x, v, u\}$ is now a triangle because it sees $u$ added to its adjacency list. The same argument holds for $u$. If $v$ lists the wedge, then $v$ receives $(\textsc{New}\xspace, ID_x)$ and $(\textsc{New}\xspace, ID_u)$ and knows that $\{x, v, u\}$ is now a triangle and stops listing it as a wedge. \item Edge deletion: Suppose an edge deletion $\{u, v\}$ destroys wedge $(x, v, u)$. Since $u$ and $v$ can see that the other is no longer a neighbor, if either of them lists wedge $(x, v, u)$, it now knows that $(x, v, u)$ has been destroyed and stops listing it. Node $v$ sends to $x$ the tuple $(\textsc{Delete}\xspace, \textsc{ID}\xspace_u)$ using $O(\log n)$ bandwidth and $1$ round of communication. Hence, after this one round of communication, $x$, if it lists $(x, v, u)$, now knows $(x, v, u)$ has been destroyed and stops listing it. We now show that if an edge deletion destroys a triangle, then at least one of its nodes lists the new wedge. Under our current set of update operations, any triangle is formed after $3$ edge insertions. After the first two edge insertions, at least one node $x$ lists the wedge that is formed, by our argument provided above for edge insertions. Then, after the third insertion, $x$ lists the triangle formed (and stops listing the wedge). Suppose without loss of generality that node $x$ lists triangle $\{x, v, u\}$. If $\{x, u\}$ is deleted, $x$ sees $u$ removed from its adjacency list and lists $(x, v, u)$ as a new wedge. The same argument holds for edge deletion $\{x, v\}$. If instead $\{u, v\}$ is deleted, then node $x$ receives $(\textsc{Delete}\xspace, ID_v)$ and $(\textsc{Delete}\xspace, ID_u)$ and lists $(v, x, u)$ as a new wedge.
\item Node deletion: Suppose, without loss of generality, that $u$ is deleted and there exists some wedge containing $u$, $v$, and $w$. Then, if $u$ is deleted and the wedge consists of edges $\{v, u\}$ and $\{u, w\}$, then both $v$ and $w$ notice that $u$ has been removed from their adjacency lists. Hence, if either of these nodes listed the wedge, it knows the wedge is destroyed and stops listing it. If the wedge consists of edges $\{u, v\}$ (resp. $\{u, w\}$) and $\{v, w\}$, then $w$ would know of the deletion of edge $\{u, v\}$ (resp. $v$ of edge $\{u, w\}$) after receiving messages, since $v$ (resp. $w$) sent $(\textsc{Delete}\xspace, \textsc{ID}\xspace_u)$ to all its neighbors. \end{itemize} Hence, in step $j+1$, after the $(j+1)$-st synchronous round, all wedges formed in step $j+1$ are listed by at least one vertex. The bandwidth is $O(\log n)$ since vertex IDs have $O(\log n)$ bits. \end{proof}
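To illustrate the bookkeeping used in the proof, the sketch below re-implements, for a single node's local state, the transition triggered by a received $(\textsc{Delete}\xspace, \textsc{ID}\xspace_u)$ message: dropping a destroyed wedge centered at the sender and converting a destroyed triangle into a new wedge. The dictionary-based state and function name are our own simplifications.

```python
# Our own single-node re-implementation of the bookkeeping from the proof:
# node x, upon receiving (DELETE, u) from neighbor v, drops a destroyed
# wedge centered at v and converts a destroyed triangle {x, v, u} into the
# new wedge (v, x, u) centered at x. The state layout is hypothetical.

def on_delete_received(state, v, u):
    """state: {"id": x, "wedges": set of (a, center, b), "triangles": set}."""
    x = state["id"]
    # A wedge (x, v, u) centered at v is destroyed by deleting {v, u}.
    state["wedges"].discard((x, v, u))
    # A listed triangle {x, v, u} becomes the wedge (v, x, u) centered at x.
    tri = frozenset({x, v, u})
    if tri in state["triangles"]:
        state["triangles"].discard(tri)
        state["wedges"].add((v, x, u))
```

Starting from a listed triangle $\{0, 1, 2\}$ and deleting edge $\{1, 2\}$, node $0$ processes the two $\textsc{Delete}\xspace$ messages and ends up listing exactly the wedge $(1, 0, 2)$.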
\section{Introduction} The problem of inhomogeneous charge and magnetic ordering in strongly correlated electron systems (SCES) is one of the most intensively studied problems of contemporary condensed matter physics. The motivation behind these studies is the inhomogeneous charge ordering (e.g., striped phases), which has been experimentally observed in many rare-earth and transition-metal compounds like $La_{1.6}Nd_{0.4}Sr_{x}CuO_{4}$, $YBa_{2}Cu_{3}O_{6+x}$, $Bi_{2}Sr_{2}Cu_{2}O_{8+x}$~\cite{tran,mook}. Some of these compounds also exhibit high-temperature superconductivity (HTSC). Theoretical studies on these materials have proposed that these SCES have a natural tendency toward phase separation~\cite{loren,leman}. A class of SCES, like cobaltates~\cite{qian06,tera97,tekada03}, $GdI_{2}$~\cite{tara08} and its doped variant $GdI_{2}H_{x}$~\cite{tulika06}, $NaTiO_{2}$~\cite{clarke98,pen97,khom05}, MgV$_{2}$O$_{4}$~\cite{rmn13} etc., has attracted great interest recently as these materials exhibit a number of remarkable cooperative phenomena such as valence and metal-insulator transitions, charge, orbital and spin/magnetic order, excitonic instability and possible non-Fermi-liquid states~\cite{tara08}. These are layered triangular lattice systems and are characterized by the presence of localized (denoted by $f$-) and itinerant (denoted by $d$-) electrons. The geometrical frustration of the underlying triangular lattice, coupled with strong quantum fluctuations, gives rise to a huge degeneracy at low temperatures, resulting in competing ground states that are close in energy. Therefore, for these systems one would expect a fairly complex ground state magnetic phase diagram and the presence of soft local modes strongly coupled with the itinerant electrons. It has recently been proposed that these systems may very well be described by different variants of the two-dimensional Falicov-Kimball model (FKM)~\cite{tara08,tulika06} on a triangular lattice.
The FKM was introduced to study metal-insulator transitions in rare-earth and transition-metal compounds~\cite{fkm69,fkm70}. The model has also been used to describe a variety of many-body phenomena such as the tendency toward formation of charge and spin density waves, mixed valence, electronic ferroelectricity and crystallization in binary alloys~\cite{lemanski05,farkov02}. Many experimental results show that charge order generally occurs together with spin/magnetic order. Therefore, the FKM on a triangular lattice has recently been studied including a spin-dependent on-site interaction between $f$- and $d$- electrons together with a local Coulomb interaction between $f$- electrons. Several interesting ground state phases, namely long-range N\'eel order, ferromagnetism or a mixture of both, have been reported~\cite{umesh6,sant}. It was later realized that even though including the local spin-dependent interactions in the FKM on the triangular lattice gives many interesting phases, there are other important interactions, e.g., the super-exchange interaction ($J_{se}$) between $f$- electrons occupying nearest-neighboring sites, which gives rise to many other interesting phases relevant for real materials such as $GdI_{2}$, $NaTiO_{2}$, etc.
Therefore, we have generalized the FKM Hamiltonian to include the super-exchange interaction $J_{se}$ between $f$- electrons; the Hamiltonian is given as, \begin{eqnarray} H =-\,\sum\limits_{\langle ij\rangle\sigma}(t_{ij}+\mu\delta_{ij})d^{\dagger}_{i\sigma}d_{j\sigma} +\,(U-J)\sum\limits_{i\sigma}f^{\dagger}_{i\sigma}f_{i\sigma}d^{\dagger}_{i\sigma}d_{i\sigma} \nonumber \\ +\,U\sum_{i\sigma}f^{\dagger}_{i,-\sigma}f_{i,-\sigma}d^{\dagger}_{i\sigma}d_{i\sigma} +\,J_{se}\sum\limits_{\langle ij \rangle \sigma} (-\,f_{i\sigma}^{\dagger} f_{i\sigma} f_{j,-\sigma}^{\dagger} f_{j,-\sigma} \nonumber \\ + \,f_{i\sigma}^{\dagger} f_{i\sigma} f_{j\sigma}^{\dagger} f_{j\sigma}) +\,U_{f}\sum\limits_{i\sigma}f^{\dagger}_{i\sigma}f_{i\sigma}f^{\dagger}_{i,-\sigma}f_{i,-\sigma} +\,E_{f}\sum\limits_{i\sigma}f^{\dagger}_{i\sigma}f_{i\sigma} \end{eqnarray} \noindent Here $\langle ij\rangle$ denotes nearest-neighboring ($NN$) lattice sites $i$ and $j$. The $d^{\dagger}_{i\sigma}, d_{i\sigma}\,(f^{\dagger}_{i\sigma},f_{i\sigma})$ are, respectively, the creation and annihilation operators for $d$- ($f$-) electrons with spin $\sigma=\{\uparrow,\downarrow\}$ at site $i$. The first term is the band energy of the $d$- electrons. Here $\mu$ is the chemical potential. The hopping parameter $t_{\langle ij\rangle} = t$ for $NN$ hopping and zero otherwise. The interaction between $d$- electrons is neglected. The second term is the on-site interaction between $d$- and $f$- electrons of the same spin with coupling strength $(U - J)$, where $U$ is the usual spin-independent Coulomb interaction and $J$ is the exchange interaction. Inclusion of the exchange term enables us to study the magnetic structure of the $f$- electrons and the band magnetism of the $d$- electrons. The third term is the on-site interaction $U$ between $d$- and $f$- electrons of opposite spins. The fourth term is the super-exchange interaction between localized electrons occupying nearest-neighboring sites.
This interaction favors an anti-ferromagnetic arrangement of the $f$- electrons over a ferromagnetic one. The fifth term is the on-site Coulomb repulsion $U_f$ between $f$- electrons of opposite spins. The last term is the dispersionless energy level $E_f$ of the $f$- electrons. \section{Methodology} The Hamiltonian $H$ (Eq.~$1$) preserves the states of the $f$- electrons, i.e., the $d$- electrons traveling through the lattice change neither the occupation numbers nor the spins of the $f$- electrons. Therefore, the local $f$- electron occupation number $\hat{n}_{fi\sigma}=f_{i\sigma}^{\dagger}f_{i\sigma}$ is invariant and $\big[\hat{n}_{fi\sigma},H\big]=0$ for all $i$ and $\sigma$. This shows that $\omega_{i\sigma}=f_{i\sigma}^{\dagger}f_{i\sigma}$ is a good quantum number taking only the values $1$ or $0$ according to whether the site $i$ is occupied or unoccupied by an $f$- electron of spin $\sigma$, respectively. Following the local conservation of the $f$- electron occupation, $H$ can be rewritten as, \begin{eqnarray} H&=&\sum\limits_{\langle ij \rangle \sigma}\, h_{ij}(\{\omega_{\sigma}\})\,d_{i\sigma}^{\dagger}d_{j\sigma} +\,J_{se}\sum\limits_{\langle ij \rangle \sigma}{\{-\,\omega_{i\sigma}\omega_{j,-\sigma}} \nonumber \\&& +\,{\omega_{i\sigma}\omega_{j\sigma}\}} +\,U_{f}\sum\limits_{i\sigma}{\omega_{i\sigma}\omega_{i,-\sigma}} +\,E_{f}\sum\limits_{i\sigma}\,{\omega_{i\sigma}} \end{eqnarray} \noindent where $h_{ij}(\{\omega_{\sigma}\})=\big[-t_{ij}+\{(U-J)\omega_{i\sigma}+U\omega_{i,-\sigma}-\mu\}\delta_{ij}\big]$ and $\{\omega_{\sigma}\}$ is a chosen configuration of $f$- electrons of spin $\sigma$. We set the energy scale by taking $t_{\langle ij \rangle} = 1$. The value of $\mu$ is chosen such that the filling is ${\frac{(N_{f}~ + ~N_{d})}{4N}}$ (e.g.
$N_{f} + N_{d} = N$ is the one-fourth-filled case and $N_{f} + N_{d} = 2N$ is the half-filled case, etc.), where $N_{f} = (N_{f_{\uparrow}}+N_{f_{\downarrow}})$, $N_{d} = (N_{d_{\uparrow}} + N_{d_{\downarrow}})$ and $N$ are the total numbers of $f$- electrons, $d$- electrons and sites, respectively. For a lattice of $N$ sites, $H(\{\omega_{\sigma}\})$ (given in Eq.~2) is a $2N\times 2N$ matrix for a fixed configuration $\{\omega_{\sigma}\}$. For one particular value of $N_f(= N_{f_{\uparrow}} + N_{f_{\downarrow}})$, we choose values of $N_{f_{\uparrow}}$ and $N_{f_{\downarrow}}$ and their configurations $\{\omega_{\uparrow}\} = \{{\omega_{1\uparrow}, \omega_{2\uparrow},\ldots, \omega_{N\uparrow}}\}$ and $\{\omega_{\downarrow}\} = \{{\omega_{1\downarrow}, \omega_{2\downarrow},\ldots, \omega_{N\downarrow}}\}$. Choosing the parameters $U$, $J$ and $J_{se}$, the eigenvalues $\lambda_{i\sigma}$ ($i = 1 \ldots N$) of $h(\{\omega_{\sigma}\})$ are calculated by numerical diagonalization on a triangular lattice of finite size $N(=L^{2}, L = 12)$ with periodic boundary conditions (PBC). The partition function of the system is written as, \begin{equation} \it{Z}=\,\sum\limits_{\{\omega_{\sigma}\}}\,Tr\,\left(e^{-\beta H(\{\omega_{\sigma}\})}\right) \end{equation} \noindent where the trace is taken over the $d$- electrons and $\beta=1/k_{B}T$. The trace is calculated from the eigenvalues $\lambda_{i\sigma}$ of the matrix $h(\{\omega_{\sigma}\})$ (first term in Eq.~2).
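As a scaled-down illustration of this diagonalization step (our own sketch: a $3\times 3$ lattice instead of $L = 12$, example parameter values, and an arbitrarily chosen $f$- configuration), the matrix $h(\{\omega_{\sigma}\})$ for one spin channel can be built and diagonalized with NumPy:

```python
# Scaled-down sketch of the diagonalization step (our own illustration:
# L = 3 instead of 12, example parameters, an arbitrary f-configuration).
import numpy as np

L = 3
N = L * L
t, U, J, mu = 1.0, 5.0, 5.0, 0.0  # example values, with U = J as in the text

def site(x, y):
    return (x % L) * L + (y % L)

# Nearest-neighbor bonds of the periodic triangular lattice: three forward
# directions per site generate every bond exactly once (6 neighbors per site).
bonds = set()
for x in range(L):
    for y in range(L):
        for dx, dy in [(1, 0), (0, 1), (1, 1)]:
            bonds.add(tuple(sorted((site(x, y), site(x + dx, y + dy)))))

def h_matrix(omega_up, omega_dn, sigma):
    """N x N matrix h({omega_sigma}) for the d-electrons of one spin."""
    own = omega_up if sigma == "up" else omega_dn
    opp = omega_dn if sigma == "up" else omega_up
    h = np.zeros((N, N))
    for i, j in bonds:
        h[i, j] = h[j, i] = -t                        # hopping term
    for i in range(N):
        h[i, i] = (U - J) * own[i] + U * opp[i] - mu  # on-site term
    return h

omega_up = np.array([1, 0, 0, 0, 1, 0, 0, 0, 1])  # a chosen f-configuration
omega_dn = np.zeros(N, dtype=int)
lam = np.linalg.eigvalsh(h_matrix(omega_up, omega_dn, "up"))  # ascending
E_band = np.sum(lam[:4])  # band energy of, e.g., N_d = 4 up-spin electrons
```

The remaining terms of the total internal energy ($J_{se}$, $U_f$ and $E_f$) are diagonal in $\{\omega_{\sigma}\}$ and can be added to `E_band` directly; the annealing procedure then compares such energies across $f$- configurations.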
\noindent The ground state total internal energy $E(\{\omega_{\sigma}\})$ is calculated as, \begin{eqnarray} E(\{\omega_{\sigma}\})=\sum\limits_{i\sigma}^{N_{d}}\lambda_{i\sigma}(\{\omega_{\sigma}\}) + J_{se}\sum\limits_{\langle ij \rangle \sigma}{\{-\omega_{i\sigma}\omega_{j,-\sigma}} +{\omega_{i\sigma}\omega_{j\sigma}\}} \nonumber \\ + U_{f}\sum\limits_{i\sigma}\omega_{i\sigma}\omega_{i,-\sigma} + E_{f}\sum\limits_{i\sigma}\omega_{i\sigma} \end{eqnarray} Our aim is to find the unique ground state configuration (the state with minimum total internal energy $E(\{\omega_{\sigma}\})$) of the $f$- electrons out of the exponentially many possible configurations for a chosen $N_{f}$. In order to achieve this goal, we have used a classical Monte Carlo simulated annealing algorithm for the static classical variables $\{\omega_{\sigma}\}$, ramping the temperature down from a high value to a very low value. Details of the method can be found in our earlier papers~\cite{sant1,umesh1,umesh2,umesh3,umesh4,umesh5}. \section{Results and discussion} \begin{figure*} { \begin{center} \includegraphics[trim=0.5mm 0.5mm 0.5mm 0.1mm,clip,width=7.0cm,height=4.8cm]{Fig1.eps}~~~~\includegraphics[trim=0.5mm 0.5mm 0.5mm 0.05cm,clip,width=8.9cm,height=4.5cm]{Fig2.eps} {{\bf (i)}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{\bf (ii)}} \end{center} } \caption{(Color online) {\bf (i)} Variation of the magnetic moment of $d$- and $f$- electrons with the number of $d$- electrons $N_d$ for $n_{f}=1$, $U = 5$, $J = 5$, $U_{f} = 10$ and for different values of $J_{se}$. Magnetic moments of $d$- and $f$- electrons are shown by dashed and solid lines respectively. Variation of $N_{d}^{\ast}$ with $J_{se}$ is shown in the inset.} {\bf (ii)} (a) Up-spin and (b) down-spin $d$- electron densities are shown on each site for $J_{se} = 0.10$, $U = 5$, $U_{f} = 10$, $J = 5$, $n_{f} = 1$ and $N_{d} = 40$.
The color coding and radii of the circles indicate the $d$- electron density profile. Triangle-up and triangle-down correspond to the sites occupied by up-spin and down-spin $f$- electrons respectively. \end{figure*} \begin{figure*} { \begin{center} \includegraphics[trim=0.5mm 0.5mm 0.5mm 0.1mm,clip,width=8.9cm,height=6.0cm]{Fig3.eps}~~~~\includegraphics[trim=0.5mm 0.0mm 0.5mm 0.1mm,clip,width=8.9cm,height=6.0cm]{Fig4.eps} {{\bf (i)}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{\bf (ii)}} \end{center} \begin{center} \includegraphics[trim=0.4mm 0.4mm 0.4mm 0.1mm,clip,width=8.9cm,height=4.0cm]{Fig5.eps}~~~~\includegraphics[trim=0.4mm 0.4mm 0.4mm 0.1mm,clip,width=8.9cm,height=4.0cm]{Fig6.eps} {{\bf (iii)}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{\bf (iv)}} \end{center} } \caption{(Color online) {\bf One-fourth filled case ($n_{f} + n_{d} = 1$) :} Triangle-up and triangle-down correspond to the sites occupied by up-spin and down-spin $f$- electrons respectively. Open green circles correspond to the unoccupied sites. The color coding and radii of the circles indicate the $d$-electron density profile. {\bf (i)} The ground-state magnetic configurations of $f$- electrons for $J_{se} = 0.01$, $n_{f} = \frac{1}{2}$, $n_{d} = \frac{1}{2}$, $U = 5$, $U_{f} = 10$ and for various values of $J$. {\bf (ii)} Variation of magnetic moment of $d$- and $f$- electrons with exchange correlation $J$ at different values of $J_{se}$ for $n_{f} = \frac{1}{2}$, $n_{d} = \frac{1}{2}$ at $U=5$, $U_{f} = 10$. Magnetic moment of $f$- and $d$- electrons are shown by solid and dash lines respectively. Note that ${m_{d}} = 0$ for finite values of $J_{se}$.} {\bf (iii)} (a) Up-spin and (b) down-spin $d$- electron densities are shown on each site for $n_{f} = \frac{1}{2}$, $n_{d} = \frac{1}{2}$, $U = 5$, $U_{f} = 10$ , $J = 0$ and $J_{se} = 0.01$. 
{\bf (iv)} (a) Up-spin and (b) down-spin $d$- electron densities are shown on each site for $n_{f} = \frac{1}{2}$, $n_{d} = \frac{1}{2}$, $U = 5$, $U_{f} = 10$, $J = 3$ and $J_{se} = 0.01$. \end{figure*} We have studied the effect of $J_{se}$ on the variation of the magnetic moment of $d$- electrons ${m_{d}}$ $\big(= \frac{M_{d}}{N} = \frac{N_{d_{\uparrow}}\, -\, N_{d_{\downarrow}}}{N}\big)$ and the magnetic moment of $f$- electrons ${m_{f}}$ $\big(= \frac{M_{f}}{N} = \frac{N_{f_{\uparrow}}\, -\, N_{f_{\downarrow}}}{N}\big)$ at fixed values of $U$, $U_{f}$ and $J$ for $n_{f} = 1$ $\big(n_{f} = \frac{N_{f}}{N} = \frac{N_{f_{\uparrow}}\, +\, N_{f_{\downarrow}}}{N}\big)$. We have also studied the density of $d$- electrons at each site for the above case. Fig.~$1$(i) shows the variation of the magnetic moment of $d$- and $f$- electrons with the number of $d$- electrons $N_d$ for three different values of $J_{se}$, i.e., $J_{se} = 0$, $0.05$ and $0.1$, at fixed values of $J = 5$, $U = 5$ and $U_{f} = 10$. We study the presence of magnetic phases for various values of $N_d$, going from $N_d = 144$ down to $N_d = 0$ at different values of $J_{se}$. We observed that for $N_d = 144$ the phase is AFM and remains AFM down to $N_d = N_{d}^{*}$; below $N_{d}^{*}$ it is no longer AFM and a net magnetic moment exists. The value of $N_{d}^{*}$ depends upon $J_{se}$. The variation of $N_{d}^{*}$ with $J_{se}$ for $U = 5 = J$, $U_f = 10$ and $n_{f} = 1$ is shown in the inset of Fig.~$1$(i). We have noted that the ground state is N\'eel-ordered anti-ferromagnetic (AFM) for $N_d = 144$, irrespective of the value of $J_{se}$. As seen from the Hamiltonian, for $U = 5 = J$ there is no on-site Coulomb repulsion between $d$- and $f$- electrons of the same spin, so up-spin $d$- electrons are more likely to occur at sites with up-spin $f$- electrons, and the same is true for down-spin $d$- electrons.
Fig.$1$(i) also shows that the magnetic moments of the $f$- and $d$- electrons start increasing below $N_d = 70$ for $J_{se} = 0$, below $N_d = 65$ for $J_{se} = 0.05$ and below $N_d = 44$ for $J_{se} = 0.1$. Thus the larger the value of $J_{se}$, the lower the value of $N_d$ below which the magnetic moments of the $f$- and $d$- electrons start increasing. At $N_d = 144$, an FM arrangement of $f$- electrons is not energetically favourable, as the motion of the $d$- electrons is prohibited by the Pauli exclusion principle, whereas in an AFM arrangement the system gains superexchange energy due to virtual hopping of $d$- electrons. As we lower $N_d$, sites with an empty $d$-level appear and it becomes possible for $d$- electrons to move and gain kinetic energy. This kinetic-energy gain then competes with the superexchange to decide the ground-state magnetic ordering. As we lower $N_d$ from 144, the hopping of $d$- electrons increases. However, the overall phase remains AFM (with zero magnetic moment) due to the dominant superexchange interactions until $N_d$ reaches 65 (which we call $N_{d}^{\ast}$) for $J_{se} = 0.05$. Below $N_{d}^{\ast}$, the system develops a finite magnetic moment, as a parallel spin arrangement of $f$- electrons at neighbouring sites facilitates the hopping of $d$- electrons further. As we keep decreasing $N_d$, a stage is reached where the system becomes totally FM, because the band-energy gain of the $d$- electrons in the FM background now totally overcomes the superexchange energy gain of the $f$- electrons. With a further decrease of $N_d$ (in this case $N_d < 12$), the system re-enters the AFM phase, as the band-energy gain of the few $d$- electrons is no longer sufficient to overcome the energy gain due to the superexchange interactions of the AFM phase. Therefore the system prefers the AFM arrangement of $f$- electrons at low $N_d$ values, which we call the re-entrant AFM phase. We have also studied the density of $d$- electrons at each lattice site.
Fig.$1$(ii) shows the density of $d$- electrons at each site for the parameters taken in Fig.$1$(i), for $J_{se} = 0.1$ at $N_d = 40$. The density of $d$- electrons at each site strongly depends on the value of the exchange correlation $J$ (between $d$- and $f$- electrons) and the superexchange interaction $J_{se}$ between the localized electrons. It is clear from the Hamiltonian that for $U = 5 = J$ there is no on-site Coulomb repulsion between $d$- and $f$- electrons of the same spin. Hence the up-spin $d$- electron density is higher at sites having up-spin $f$- electrons. The effects of the superexchange interaction $J_{se}$, the exchange interaction $J$ and the on-site Coulomb repulsion $U$ on the density of $d$- electrons can be understood as follows. It is seen from Fig.$1$(ii) that some sites have a higher $d$- electron density than others. The superexchange interaction $J_{se}$ arranges the $f$- electrons in a pattern that corresponds to the minimum-energy state, due to which each nearest neighbour (NN) of a site (each site has 6 NNs) has the opposite $f$- electron spin. If all the NNs have spins opposite to the concerned site, then it becomes very difficult for $d$- electrons to hop from the concerned site to the NN sites, as there is a finite on-site Coulomb repulsion between $d$- and $f$- electrons of opposite spins. The random distribution of $f$- electrons and the random values of the $d$- electron densities are a consequence of the competition between the superexchange interaction $J_{se}$, the exchange interaction $J$ and the on-site Coulomb repulsion $U$. We have also studied the ground-state magnetic phase diagram of up-spin and down-spin $f$- electrons, the magnetic moments of the $d$- and $f$- electrons, and the density of $d$- electrons on each site for a range of values of the parameters $J$, $U$, $U_{f}$ and $J_{se}$ for two cases: (i) $n_{f} + n_{d} = 1$ (one-fourth filled case) and (ii) $n_{f} + n_{d} = 2$ (half-filled case).
We have chosen a large value of $U_{f}$ so that double occupancy by $f$- electrons is avoided. \subsection{One-fourth filled case ($n_{f} + n_{d} = 1$):} In Fig.$2$(i) the ground-state magnetic configurations of up-spin and down-spin $f$- electrons are shown (for the one-fourth filled case) for $J_{se} = 0.01$, $U = 5$, $U_{f} = 10$ and for different $J$ values. Due to the superexchange interaction between $f$- electrons, no ordered configurations of $f$- electrons are observed, as explained already. Fig.$2$(ii) shows the variation of the magnetic moment of the $d$- electrons ($m_{d}$) and the magnetic moment of the $f$- electrons ($m_{f}$) with the exchange correlation $J$ at three different values of $J_{se}$, i.e., $J_{se} = 0$, $0.01$ and $0.05$, for $U = 5$ and $U_{f} = 10$. For $J = 0$, the on-site interaction energy between $d$- and $f$- electrons is the same irrespective of their spins. Hence the ground-state configuration is of AFM type, as the possible hopping of $d$- electrons minimizes the energy of the system. This is clearly seen in the variation of the $d$- electron density at each site in Fig.$2$(iii). For a finite but small value of $J$, the on-site interaction energy between $d$- and $f$- electrons of the same spin is smaller than the on-site interaction energy between $d$- and $f$- electrons of opposite spins. So a few sites with an FM arrangement of spin-up $f$- electrons will be occupied by some down-spin $d$- electrons and some up-spin $d$- electrons. With this arrangement a finite hopping is possible for the $d$- electrons, which increases their kinetic energy, and hence the total energy of the system goes down. Therefore $m_d$ and $m_f$ increase with increasing $J$.
It is also clearly seen from Fig.$2$(ii) that as the value of $J_{se}$ increases, the magnetic moments of both the $d$- and $f$- electrons decrease, as $J_{se}$ favours an AFM arrangement of the $f$- electrons and thereby also an AFM arrangement of the $d$- electrons, because the on-site interaction between $d$- and $f$- electrons of the same spin is $(U - J)$ while that between $d$- and $f$- electrons of opposite spins is $U$. There is no magnetic moment of the $d$- electrons for $J_{se} = 0.01$ and $J_{se} = 0.05$. Figs.$2$(iii) and $2$(iv) show the density of $d$- electrons at fixed values of $U = 5$ and $U_{f} = 10$, for $J = 0$ and $J = 3$ respectively. When $J = 0$ the interaction between $d$- and $f$- electrons is the same irrespective of their spins, so the density of $d$- electrons is the same at all sites occupied by $f$- electrons, while it is maximum at the unoccupied sites. As $J$ increases, the density of $d$- electrons at sites where $f$- electrons of the same spin are present increases, and at empty sites it decreases, because the interaction $(U-J)$ between $d$- and $f$- electrons of the same spin decreases with increasing $J$. The random distribution of the $d$- electron densities at some sites is, of course, due to the superexchange interaction between the $f$- electrons.
\begin{figure*} { \begin{center} \includegraphics[trim=0.5mm 0.5mm 0.5mm 0.0mm,clip,width=8.9cm,height=6.0cm]{Fig7.eps}~~~~\includegraphics[trim=0.5mm 0.0mm 0.5mm 0.0mm,clip,width=8.9cm,height=6.0cm]{Fig8.eps} {{\bf (i)}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{\bf (ii)}} \end{center} \begin{center} \includegraphics[trim=0.4mm 0.4mm 0.4mm 0.1mm,clip,width=8.9cm,height=4.0cm]{Fig9.eps}~~~~\includegraphics[trim=0.4mm 0.4mm 0.4mm 0.1mm,clip,width=8.9cm,height=4.0cm]{Fig10.eps} {{\bf (iii)}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{\bf (iv)}} \end{center} } \caption{(Color online) {\bf Half-filled case ($n_{f} + n_{d} = 2$):} Triangle-up and triangle-down correspond to the sites occupied by up-spin and down-spin $f$- electrons respectively. The color coding and radii of the circles indicate the $d$- electron density profile. {\bf (i)} The ground-state magnetic configurations of $f$- electrons for $J_{se} = 0.01$, $n_{f} = 1$, $n_{d} = 1$, $U = 5$, $U_{f} = 10$ and for various values of $J$. {\bf (ii)} Variation of the magnetic moments of the $d$- and $f$- electrons with the exchange correlation $J$ at different values of $J_{se}$ for $n_{f} = 1$, $n_{d} = 1$ at $U=5$, $U_{f} = 10$. The magnetic moments of the $f$- and $d$- electrons are shown by solid and dashed lines respectively (note that $m_d = 0$ for finite values of $J_{se}$). {\bf (iii)} (a) Up-spin and (b) down-spin $d$- electron densities are shown on each site for $n_{f} = 1$, $n_{d} = 1$, $U = 5$, $U_{f} = 10$, $J = 0.50$ and $J_{se} = 0.01$.
{\bf (iv)} (a) Up-spin and (b) down-spin $d$- electron densities are shown on each site for $n_{f} = 1$, $n_{d} = 1$, $U = 5$, $U_{f} = 10$, $J = 2$ and $J_{se} = 0.01$.} \end{figure*} \subsection{Half-filled case ($n_{f}+n_{d}=2$):} In Fig.$3$(i) the ground-state magnetic configurations of up-spin and down-spin $f$- electrons are shown (for the half-filled case) for $J_{se} = 0.01$, $U = 5$, $U_{f} = 10$ and for different $J$ values. Due to the superexchange interaction between $f$- electrons, no ordered configurations of $f$- electrons are observed. Fig.$3$(ii) shows the variation of the magnetic moment of the $d$- electrons ($m_{d}$) and the magnetic moment of the $f$- electrons ($m_{f}$) with the exchange correlation $J$ at two different values of $J_{se}$, i.e., $J_{se} = 0$ and $0.01$, for $U=5$ and $U_{f} = 10$. For $J = 0$, the on-site interaction energies between $d$- and $f$- electrons are the same irrespective of their spins. Hence the ground-state configuration is of AFM type, as the possible hopping of $d$- electrons minimizes the energy of the system, as explained already. For a finite but small value of $J$, the on-site interaction energy between $d$- and $f$- electrons of the same spin is smaller than that between $d$- and $f$- electrons of opposite spins, so a few sites with an FM arrangement of spin-up $f$- electrons will be occupied by some down-spin $d$- electrons and some up-spin $d$- electrons. With this arrangement a finite hopping is possible for the $d$- electrons, which increases their kinetic energy, and hence the total energy of the system decreases. Therefore $m_d$ and $m_f$ increase with increasing $J$. It is also clearly seen from Fig.$2$(ii) (one-fourth filled case) that as the value of $J_{se}$ increases, the magnetic moments of both the $d$- and $f$- electrons decrease, as $J_{se}$ favours an AFM arrangement of the $f$- and $d$- electrons.
For larger values of $J$, the on-site interaction between $d$- and $f$- electrons with the same spin decreases, as the coupling strength is $U - J$ (see the Hamiltonian). So the $d$- electrons prefer sites having $f$- electrons with the same spin. In this case, the AFM arrangement of $f$- electrons is favoured over the FM one, as the system gains superexchange energy due to the possibility of virtual hopping of $d$- electrons to neighbouring sites. Thus the system re-enters the AFM phase at higher values of $J$. Figs.$3$(iii) and $3$(iv) show the density of $d$- electrons at fixed values of $U = 5$ and $U_{f} = 10$, for $J = 0.5$ and $J = 2$ respectively. Fig.$3$(iii) shows that the density of $d$- electrons with up-spin is slightly higher at sites with up-spin $f$- electrons than at sites with down-spin $f$- electrons. This is because the on-site interaction energy ($U = 5$) between $d$- and $f$- electrons of opposite spins is larger than the on-site interaction energy ($U - J = 4.50$) between $d$- and $f$- electrons of the same spin. In conclusion, the ground-state magnetic properties of the two-dimensional spin-1/2 FKM on a triangular lattice are studied for a range of parameter values, such as the $d$- and $f$- electron fillings, the superexchange interaction $J_{se}$, the on-site Coulomb correlation $U$ and the exchange correlation $J$. Extending the model to include the superexchange interaction $J_{se}$ between $f$- electrons occupying nearest-neighbouring sites leads to interesting results: in particular, it strongly favours the AFM arrangement of $f$- electrons and also plays a role in inducing AFM coupling between $d$- electrons. We have found that the magnetic moments of the $d$- and $f$- electrons depend strongly on the values of $J$ and on the number of $d$- electrons $N_d$.
In the half-filled case, a very small value of $J_{se}$ ($J_{se} > 0.01$) makes the system AFM, and hence no net magnetic moment of the $d$- or $f$- electrons is observed for $J_{se} > 0.01$, whereas in the one-fourth filled case a higher $J_{se}$ is needed to make the system AFM. These results are quite relevant for the study of the triangular-lattice systems mentioned above. \emph{Acknowledgments.} SK acknowledges MHRD, India for a research fellowship. UKY acknowledges support from UGC, India via the Dr.~D.S. Kothari Post-doctoral Fellowship scheme.
\section{Introduction} Delta Scuti stars are very complex pulsators. They are located on and above the main sequence, and they pulsate mainly in p-type and g-type non-radial modes, besides the radial ones. The modes are excited by the $\kappa$-mechanism in the He ionization zone (\citealt{Unno81, Aerts10}). The amplitudes of the radial modes are remarkably lower than in the classical radial pulsators, although $\delta$ Scuti stars lie in the extension of the classical instability strip to the main sequence. They are close to the Sun on the HR diagram, but due to the excitation of low-order modes, no high-level regularity of the modes has been predicted for them. Classical pulsators, with a simple structure of the excited modes, and the Sun, with stochastically excited high-order modes that are predicted to have regular frequency spacings, have the advantage for mode identification. The space missions yielded the detection of a huge number of $\delta$ Scuti stars with a much higher signal-to-noise ratio than we had before (\citealt{Baglin06, Auvergne09, Borucki10}). They allowed the detection of a much larger set of modes. In the era of ground-based observations, we had hoped to match the increased number of modes by comparing them directly to model frequencies. Unfortunately, this hope has not been realized, due to the still-existing discrepancy between the numbers of observed and predicted frequencies. Up to now we have not been able to avoid the traditionally used methods of mode identification, based on color amplitude ratios and phase differences \citep{Watson88, Viskum98, Balona99, Garrido00}. The basic problem in the mode identification of $\delta$ Scuti stars is the rotational splitting of modes due to intermediate and fast rotation.
Starting from the first-order effect in slow rotators \citep{Ledoux51}, the second-order \citep{Vorontsov81, Vorontsov83, Dziembowski92} and third-order effects \citep{Soufi98} were intensively investigated theoretically in the frame of perturbative theory and were applied to individual stars \citep{Templeton00, Templeton01, Pamyatnykh98}. The theoretical investigation of intermediate and fast rotating stars has improved rapidly since the work of \citet{Lignieres06} and \citet{Roxburgh06}. In the following years, a series of papers \citep{Lignieres08, Lignieres09, Lignieres10, Reese08, Reese09} investigated different aspects of the ray-dynamics approach for fast rotating stars. Instead of the traditional quantum numbers ($l$, $n$), they introduced the modified quantum numbers ($\hat{l}$, $\hat{n}$), including the odd and even parity of modes in fast rotating stars. They reached a level at which echelle diagrams have recently been published; for example, see \citet{Ouazzani15}. In the ray-dynamics approach different families of modes, named low-frequency modes, whispering gallery modes, chaotic modes and island modes, were recognized. These modes represent different pulsational behavior. The low-frequency modes are counterparts of the high-order g-modes. They have negligible amplitude in the outer layers, so they should not be detected observationally. The whispering gallery modes are counterparts of the high-degree acoustic modes. They probe the outer layers, but due to their low visibility they might not be detected. Chaotic modes do not have counterparts in the non-rotating case. Due to the lack of symmetry in the cancellation and their significant amplitude throughout the stellar interior, these modes are expected to be detected observationally. However, they appear only in very fast rotating models. Island modes are counterparts of the low-degree acoustic modes. They probe the outer layers of the star and present good geometric visibility.
Therefore these modes should be easily detected observationally. Low-$\hat{l}$ modes are expected to be the most visible modes in the seismic spectra of rapidly rotating stars. For a given parity, the mode frequencies line up along ridges of given $\hat{l}$ values. However, the first difficulty in studying the island modes is to identify them among all the other types of modes present in the spectrum of rapidly rotating stars (chaotic and whispering gallery modes). The regular arrangement of the excited modes in stars having high-order p modes (the Sun and solar-type oscillations in red giants) or high-order g modes (white dwarfs) allowed those fields to reach the level of asteroseismology. The radial distribution of the physical parameters (pressure, temperature, density, sound speed and chemical composition) was derived for the Sun. Mode trapping allowed the masses of the H and He layers in white dwarfs to be derived. Using space data, many investigations aimed to find regularity in $\delta$ Scuti stars: in MOST data \citep{Matthews07}, in CoRoT data \citep{Garcia Hernandez09, Garcia Hernandez13, Mantegazza12} and in {\it Kepler} data \citep{Breger11, Kurtz14}. The most comprehensive study \citep{Garcia15} reported regularities for 11 stars in a sample of 15 {\it Kepler} $\delta$ Scuti stars, providing the large separation for them. They revealed two echelle ridges with 6 and 4 frequency members for KIC 1571717. Up to now this is the most extended survey for regularities in $\delta$ Scuti stars. Our goal was to survey the possible regularities of $\delta$ Scuti stars in a much larger sample of CoRoT targets. In addition, as a new method, we searched for complete sequence(s) of quasi-equally spaced frequencies with two approaches, namely visual inspection and an algorithmic search. We present in this paper our detailed results for the whole sample. \section{CoRoT data}\label{data} The CoRoT satellite was launched in 2006 \citep{Baglin06}.
LRa01, the first long run in the direction of the anti-center, started on October 15, 2007 and finished on March 3, 2008, resulting in a $\Delta T = 131$ day time span. Both chromatic and monochromatic data were obtained on the EXO field with a regular sampling of 8 minutes, although for some stars an oversampling mode (32 s) was applied. After the CoRoT pipeline \citep{Auvergne09} was applied, the reduced N2 data were stored in the CoRoT data archive. The light curves of the EXO field were systematically searched for $\delta$ Scuti and $\gamma$ Doradus light curves by one of us \citep{Hareter13}. We did not rely on the automatic classification tool (CVC, \citealt{Debosscher09}) because of ambiguities and the risk of misclassifications that might have appeared in the original version. Rather, we selected the targets by visual inspection of the light curves and their Fourier transforms, and kept those for which classification spectra (AAOmega, \citealt{Guenther12, Sebastian12}) were available. A recent check of the new version of the CVC (CoRoT N2 Public Archive\footnote{\url{http://idoc-corotn2-public.ias.u-psud.fr/invoquerSva.do?sva=browseGraph}}, updated 2013 February) revealed that most of our stars (57) were classified as $\delta$ Scuti stars with high probability. Some GDOR (4), MISC (11), ACT (5) and $\beta$ Cep (3) classifications also appeared. The initial sample of our investigation consists of 90 $\delta$ Scuti stars extracted from the early version of the N2 data in the archive. Nowadays a modified version of the N2 data on LRa01 can be found in the archive. Comparing our list and the new version, we noticed that the light curves of 14 stars from our initial sample had been omitted from the new version. The low peak-to-peak amplitude of the light curve might, in some cases, explain the decision, but we did not find any reason why targets with peak-to-peak amplitudes from 0.01 to 0.05 mag had been excluded. We therefore kept these stars in our initial sample.
Because the CoRoT N2 data are still affected by several instrumental effects, we used a custom IDL code that removes the outliers and corrects for jumps and trends. The jumps were detected by using a two-sample t-test with sliding samples of 50 data points, and the trends were corrected by fitting low-order polynomials. The outliers were clipped using an iterative median filter, where a 3$\sigma$ rejection criterion was employed. The range of the light variation for most of the stars is 0.003--0.04 mag, with the highest population around 0.01 mag. The brightness range is from 12.39 to 15.12 mag, covering almost three magnitudes. The frequencies were extracted using the software SigSpec \citep{Reegen07} in the frequency range from 0 to 80 d$^{-1}$. The significance limit was set initially to 5. The resulting list of frequencies for the 90 $\delta$ Scuti stars served as the initial database for our frequency search \citep{Hareter13}. \begin{deluxetable}{rrrr} \tablecaption{List of excluded targets \label{omit}} \tablehead{ \colhead{No} & \colhead{CoRoT ID} & \colhead{SSF} & \colhead{FF} } \startdata 16 & 102713193 & 52 & $-$ \\ 17 & 102614844 & 78 & $-$ \\ 42 & 102646094 & 45 & $-$ \\ 44 & 102746628 & 51 & $-$ \\ 57 & 102763839 & 93 & $-$ \\ 58 & 102664100 & 35 & $-$ \\ 59 & 102766985 & 61 & $-$ \\ 60 & 102668347 & 123 & $-$ \\ 61 & 102668428 & 57 & $-$ \\ 64 & 102706982 & 68 & $-$ \\ \tableline 41 & 102645677 & 106 & 14 \\ 46 & 102749985 & 63 & 9 \\ 85 & 102589213 & 70 & 10 \enddata \tablecomments{The columns contain the running number (No), the official CoRoT ID, the number of SigSpec frequencies (SSF), and the number of filtered frequencies (FF), respectively.} \end{deluxetable} \subsection{The final sample of targets and filtering} We filtered the SigSpec frequencies using some simple guidelines (tested for CoRoT data by \citealt{Balona14}).
We removed \begin{itemize} \item low frequencies close to 0 d$^{-1}$, in most cases up to 2 d$^{-1}$, since we were primarily interested in the $\delta$ Scuti frequency region; \item the possible technical peaks connected to the orbital period of the spacecraft ($f_{\mathrm {orb}}$= 13.97 d$^{-1}$); \item frequencies of lower significance in groups of closely spaced peaks, because they are most likely due to numerical inaccuracies during the pre-whitening cascade; we kept only the highest-amplitude ones; \item the low-amplitude, low-significance frequencies in general; the lowest amplitude limit differed from star to star, since the frequencies showed a different amplitude range from star to star, but it was around 0.1 mmag in general. \end{itemize} We might dismiss true pulsation modes in the filtering process, but finding regularities among fewer frequencies is more convincing. Accidental coincidences would appear with higher probability if we used a larger set of frequencies. After finding a narrow path toward solving the pulsation-rotation connection, we may widen the path to a road. In 10 stars only a few frequencies remained after the filtering process. In each case a dominant peak remained in the $\delta$ Scuti frequency range, justifying the positive classification as a $\delta$ Scuti star. The limited number of frequencies in these stars was not enough for our main purpose (to find regularities between the frequencies), so we omitted them from further investigation. In addition, we did not find any regularities in three stars. The 13 stars that were omitted for either reason are listed in Table~\ref{omit}. Our finally accepted sample, for which we found regularities with one of our methods, is listed in Table~\ref{bigtable}. In both tables the CoRoT ID of the stars is given in the second column. For the sake of simpler treatment during the investigation we introduced a running number (first column in the tables).
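The removal rules itemized above can be sketched as a simple frequency filter. This is an illustrative reconstruction, not the code actually used; the closeness tolerance `DF_CLOSE` and the amplitude cutoff `A_MIN` are assumed values (the text says only "around 0.1 mmag", varying from star to star):

```python
F_ORB = 13.97   # CoRoT orbital frequency in d^-1 (from the text)
F_LOW = 2.0     # low-frequency cutoff in d^-1 (from the text)
A_MIN = 1e-4    # ~0.1 mmag amplitude cutoff in mag (assumed fixed here)
DF_CLOSE = 0.05 # tolerance for "closely spaced" peaks in d^-1 (assumed)

def filter_frequencies(peaks):
    """peaks: list of (frequency [d^-1], amplitude [mag]) tuples."""
    # drop low frequencies, peaks near the first few orbital harmonics,
    # and low-amplitude peaks
    kept = [(f, a) for f, a in peaks
            if f > F_LOW and a >= A_MIN
            and min(abs(f - k * F_ORB) for k in (1, 2, 3)) > DF_CLOSE]
    # in each group of closely spaced peaks keep only the strongest one
    kept.sort()
    out = []
    for f, a in kept:
        if out and f - out[-1][0] < DF_CLOSE:
            if a > out[-1][1]:
                out[-1] = (f, a)
        else:
            out.append((f, a))
    return out
```

For example, a peak at 13.97 d$^{-1}$ (the orbital frequency) or at 0.5 d$^{-1}$ would be rejected, and of two peaks at 20.00 and 20.02 d$^{-1}$ only the higher-amplitude one would survive.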
We refer to the stars by their running numbers in the rest of the paper. The 96 running numbers instead of 90 are due to a special test checking the ambiguity of our results. Double running numbers denote stars (see the CoRoT IDs in Table~\ref{bigtable}) for which the filtering of the SigSpec frequencies and the search for periodic spacing were done independently for the same star (6 stars). The running numbers representing the same stars were identified (connected to each other) only at the end of the searching process. The independent cleaning, due to the not-fixed limiting amplitude and to subjectivity, resulted in different numbers of frequencies and consequently in different values of the spacing, of the number of frequencies in the echelle ridges, and of the number of echelle ridges. The number of independent $\delta$ Scuti stars in our sample for which we obtained positive results with one of our methods concerning the regular spacing is 77. The $T_{\mathrm{eff}}$, $\log g$ and radial velocity ($v_{\mathrm {rad}}$), derived by one of us \citep{Hareter13}, are presented in the third, fourth and fifth columns of Table~\ref{bigtable}. \begin{figure*} \includegraphics[width=17cm]{fig1.eps} \caption[]{ Sequences with quasi-equal spacing, and shifts of the sequences for star No. 65 (CoRoT 102670461). Average spacings: 1st -- black dots, 3.431$\pm$0.091 d$^{-1}$; 2nd -- red squares, 3.467$\pm$0.073 d$^{-1}$; 3rd -- green triangles, 3.488$\pm$0.036 d$^{-1}$; 4th -- blue stars, 3.484$\pm$0.030 d$^{-1}$. The mean spacing of the star is 3.459$\pm$0.030 d$^{-1}$. The shifts of the 2nd, 3rd and 4th sequences relative to the first one are also given, in the same color as the sequences. } \label{fig1} \end{figure*} The filtering guidelines yielded a much-reduced number of frequencies. For comparison we list the number of SigSpec frequencies (SSF) and the number of filtered frequencies (FF) in the 6th and 7th columns.
Only about 20-30\% of the original peaks were kept in our final list of frequencies. Once the effectiveness of our method for finding regularities has been confirmed, the application could be extended to include frequencies of lower amplitude. For possible additional investigations we attach the filtered frequencies of each star to this paper in electronic form\footnote{See the web page of this journal: \url{http://}}. Table~\ref{sample_data} shows an excerpt from this data file as an example. Additional information on the flags is discussed later. \section{Search for periodic spacing}\label{data_processing} \begin{deluxetable*}{ccrcccccc} \centering \tablecaption{Sample from the data file \label{sample_data}} \tablehead{ \colhead{No} & \colhead{CoRoT ID} & \colhead{$f$} & \colhead{$A(f)$} & \colhead{VI1} & \colhead{VI2} & \colhead{SSA1} & \colhead{SSA2} & \colhead{SSA3} \\ \colhead{} & \colhead{} & \colhead{(d$^{-1}$)} & \colhead{(mmag)} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} } \startdata 1 & 102661211 & 10.0232 & 8.462 & 0 & $-$ & 1 & 1 & $-$ \\ 1 & 102661211 & 7.8170 & 3.606 & 2 & $-$ & 2 & 5 & $-$ \\ 1 & 102661211 & 14.7389 & 1.990 & 3 & $-$ & 6 & 0 & $-$ \\ 1 & 102661211 & 12.0054 & 1.602 & 6 & $-$ & 2 & 4 & $-$ \\ 1 & 102661211 & 8.7854 & 1.437 & 5 & $-$ & 4 & 0 & $-$ \\ $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ \enddata \tablecomments{ This table is published in its entirety in the electronic edition of ApJS; a portion is shown here for guidance regarding its form and content. The columns contain the local id, the CoRoT ID, the used frequency, the Fourier amplitude of the frequency, and the echelle-ridge flags of the frequency obtained from the different search methods (VI or SSA), respectively. The value 0 means that the frequency is not on any echelle ridge, while the sign $-$ denotes a nonexistent search result.
See the text for details.} \end{deluxetable*} Investigations of the regular behavior of frequencies in $\delta$ Scuti stars and derivations of the large separation have been carried out in the past (see \citealt{Paparo15}). Even in earlier years, clustering of non-radial modes around the frequencies of radial modes over many radial orders was reported for a number of $\delta$ Scuti stars: 44 Tau, BL Cam, FG Vir \citep{Breger09}, giving the large separation. The clustering supposes that the sequence of low-order $l$=1 modes, slightly shifted with respect to the frequencies of the radial modes, also reveals the large separation in the mean value \citep{Breger99}. In all cases the histogram of the frequency differences or the Fourier transform (FT), using the frequencies as input data, was employed. Both methods are sensitive to the most probable spacing among the frequency differences. In our sequence search method we searched for sequence(s) of frequencies with quasi-equal spacing. The visual inspection (VI) of the targets in the whole sample led us to establish the constraints for the Sequence Search Algorithm (SSA). We present here a description of both the VI and the SSA and the results for the individual targets. \subsection{Visual inspection (VI)}\label{visual} In the visual inspection of the frequency distribution of our targets, we recognized that an almost equal spacing exists between the pair(s) of frequencies of the highest amplitudes. The pairs proved to be connected to each other, producing a sequence. New members with frequencies of lower amplitude were then deliberately searched for, so the sequence was extended to both the lower and higher frequency regions. Following the process with other pairs of frequencies of higher amplitude, we could localize more than one sequence, sometimes many sequences, in a star. We noticed such an arrangement from star to star over the whole sample.
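The histogram-of-frequency-differences technique mentioned above can be sketched in a few lines. This is a minimal illustration with an assumed bin width, not the code of the cited works:

```python
from collections import Counter

def spacing_histogram(freqs, bin_width=0.1, d_max=10.0):
    """Count pairwise frequency differences f_j - f_i (j > i) in bins
    of bin_width d^-1, up to a maximum difference d_max."""
    hist = Counter()
    fs = sorted(freqs)
    for i, fi in enumerate(fs):
        for fj in fs[i + 1:]:
            d = fj - fi
            if d <= d_max:
                hist[round(d / bin_width) * bin_width] += 1
    return hist

# a comb of frequencies spaced by ~3.46 d^-1 (illustrative values) makes
# that bin the most populated one
freqs = [10.02, 13.48, 16.94, 20.41, 23.87]
hist = spacing_histogram(freqs)
best = max(hist, key=hist.get)
```

The peak of the histogram then serves as the candidate spacing; its first harmonic (here near 6.9 d$^{-1}$) also shows up, which is one reason the methods are only sensitive to the *most probable* spacing.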
We present here another example of the sequences, in addition to \cite{Paparo15} (Paper I), to show how equal the spacings are between the members of a sequence, how the sequences are arranged with respect to each other, and how we find a sequence if one consecutive member is missing. A new parameter appears in this process, namely the shifts of a frequency (member) with respect to the consecutive lower and higher frequencies of the reference sequence (the first one is accepted as reference). Fig.~\ref{fig1} shows four sequences of similar regular spacing for CoRoT 102670461 (running number: 65). The sequences consist of 8, 6, 4 and 4 members, respectively, altogether including more than 45\% of the filtered frequencies. We allowed one member of a sequence to be missing if half of the spacing to the second consecutive member matched the regular spacing. In this particular case the missing members of the sequences are in the 20-23.5 d$^{-1}$ interval, which is in general the middle of the interval of the usually excited modes in $\delta$ Scuti stars. The frequencies of the highest amplitudes normally appear in this region. The mean value of the spacing is given independently for each sequence in the figure caption. The mean values differ only in the second digit. The general spacing value, calculated from the average over the sequences, is 3.459$\pm$0.030 d$^{-1}$. Fig.~\ref{fig1} also displays the shifts that we discussed before. They do not have random values but represent characteristic ones. Although the shifts are not the same for each member in a sequence, their mean values are characteristic of each sequence. We obtained 1.562$\pm$0.097 and 1.894$\pm$0.047 d$^{-1}$ for the second, 2.225$\pm$0.087 and 1.208$\pm$0.112 d$^{-1}$ for the third, and 2.665$\pm$0.127 and 0.821$\pm$0.132 d$^{-1}$ for the fourth sequence relative to the first (reference) sequence. The frequencies of the second sequence lie almost midway between those of the first sequence, as we would expect in a comb-like structure.
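Our reading of the chain-growing rule described above (extend by the spacing; tolerate one missing member when the doubled gap still matches) can be sketched as follows. The function name and the tolerance `tol` are assumptions for illustration, not the actual SSA implementation:

```python
def find_sequence(freqs, start, spacing, tol=0.15):
    """Greedily extend a chain from `start` in steps of `spacing` (d^-1),
    allowing an isolated missing member (a gap of 2 * spacing)."""
    fs = sorted(freqs)
    seq = [start]
    target = start + spacing
    while target <= fs[-1] + tol:
        hit = min(fs, key=lambda f: abs(f - target))
        if abs(hit - target) <= tol:
            seq.append(hit)
            target = hit + spacing
        else:
            # one member may be missing: try a full spacing further on
            target += spacing
            hit2 = min(fs, key=lambda f: abs(f - target))
            if abs(hit2 - target) <= tol:
                seq.append(hit2)
                target = hit2 + spacing
            else:
                break
    return seq
```

With a spacing of 3.46 d$^{-1}$ and one member absent near the middle of the interval, the chain still recovers all remaining members, mirroring the missing-member situation described for star No. 65.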
The third sequence is shifted by 0.635 d$^{-1}$ relative to the second one, while the fourth one is shifted by 0.297 d$^{-1}$ relative to the third one (practically half of the shift between the second and third sequences), although this value is determined only by averaging two independent values, due to the missing members in the sequences. The shift of the fourth sequence relative to the second one is 1.069 d$^{-1}$. According to the AAO spectral classification \citep{Guenther12, Sebastian12}, CoRoT 102670461 has $T_{\mathrm {eff}}$=7325$\pm$150 K, $\log g$=3.575$\pm$0.793 and A8V spectral type, and it is classified as a $\delta$ Scuti type variable \citep{Debosscher09}. Following the process used by \citet{Balona15} for {\it Kepler} stars (discussed later in detail), we derived a possible equatorial rotational velocity (100 km~s$^{-1}$) and a first-order rotational splitting (0.493 d$^{-1}$). Knowing the rotational splitting, another regularity appears. The shift of the fourth sequence relative to the second one (1.069 d$^{-1}$) agrees remarkably well with twice the value of the estimated rotational splitting. The appearance of twice the value of the rotational frequency is predicted by theory \citep{Lignieres10}. \begin{figure} \includegraphics[width=9cm]{fig2.eps} \caption[]{ Echelle diagram of the star No. 65, consistent with the four sequences of Fig.~\ref{fig1}. 45\% of the filtered frequencies are located on the echelle ridges (the other frequencies are shown by small dots). } \label{fig2} \end{figure} The sequences in Fig.~\ref{fig1} are practically a horizontal representation of the widely used echelle diagram. In Fig.~\ref{fig2} we present the echelle diagram of the star No. 65, modulo 3.459 d$^{-1}$, in agreement with Fig.~\ref{fig1}. Fig.~\ref{fig2} displays all the filtered frequencies (small and large dots), but only 45\% of them are located on the echelle ridges (large dots).
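The modulo computation behind such an echelle diagram is simple. A minimal sketch with a synthetic sequence (not real CoRoT data; the starting frequency is arbitrary) is:

```python
import numpy as np

def echelle_coordinates(freqs, spacing):
    """Return (frequency modulo spacing, frequency) for an echelle diagram.

    Frequencies belonging to a sequence with the given spacing share a
    nearly constant modulo value and so line up on a vertical ridge.
    """
    freqs = np.asarray(freqs, dtype=float)
    return freqs % spacing, freqs

# Synthetic sequence with spacing 3.459 d^-1 starting at 10.1 d^-1.
seq = 10.1 + 3.459 * np.arange(5)
mod, _ = echelle_coordinates(seq, 3.459)
# All five members share the same modulo value (about 3.182 d^-1),
# i.e. they form a single echelle ridge.
```

Plotting the modulo values on the horizontal axis against the frequencies on the vertical axis reproduces a diagram of the type shown in Fig.~\ref{fig2}.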
The first, second, third and fourth sequences in the order of Fig.~\ref{fig1} correspond to the echelle ridges at about 0.1, 0.55, 0.75 and 0.85 d$^{-1}$ in modulo value, respectively. We found sequence(s) in 65 independent targets by the visual inspection. The number of frequencies included in the sequences (EF$_{\mathrm {VI}}$), the number of sequences (SN$_{\mathrm {VI}}$) and the spacings (SP$_{\mathrm {VI}}$) are given in the 8th, 9th and 10th columns of Table~\ref{bigtable}, respectively. To document the work behind these columns, we added flags in five additional columns to the frequencies of the sequences in the electronic table (see also Table~\ref{sample_data}). For a given star, as many columns contain flags as spacing values were found by our methods. VI denotes the sequence of the visual inspection, while a flag of $1,2,\dots$ means that the frequency belongs to the 1st, 2nd, $\dots$ sequence. The $0$ flag marks those frequencies that are not located in any sequence. If the visual inspection resulted in more than one spacing, then the VI1 and VI2 columns were filled in. Using the flagged frequencies, a diagram similar to the one presented in Fig.~\ref{fig1} for the star No. 65 could be prepared for every target, giving the individual spacings and the shifts. The distribution of the spacings obtained by visual inspection over the whole sample shows two dominant peaks: one between 2.3-2.4 d$^{-1}$ (10 stars) and an equally populated one between 3.2-3.5 d$^{-1}$ (7, 7 and 6 stars in each 0.1 d$^{-1}$ bin). We summarize the results on the independent spacings as follows. The ambiguity of the personal decision is shown by six cases (double numbering), where both the filtering process and the visual inspection were done independently. The most serious effect was probably the actual personal condition of the investigator. Since we do not want to embellish our method, we honestly present the differences between the solutions.
Different EF$_{\mathrm {VI}}$, SN$_{\mathrm {VI}}$ and sometimes different spacing (SP$_{\mathrm {VI}}$) values were derived. However, in half of the cases the independent investigations resulted in similar spacings (stars No. 1=55, 2=66 and 8=92). In two stars one of the searches had negative results (stars No. 81 and 13) while the other search was positive (stars No. 11 and 74). There was only one case (star No. 14=96) where a completely different spacing value was obtained (1.844 versus 2.429, 3.387 d$^{-1}$). In a few cases (stars No. 50, 54 and 77) a spacing and twice its value were also found. However, those cases are more remarkable (stars No. 78, 92 and 96) where both of the two most popular spacings were found. They argue against the simplest explanation, namely that the sequences represent consecutive radial orders with the same $l$ value. The visual inspection is not the fastest way of searching for regular spacing in a large sample. We developed an algorithmic search using the constraints that we learned in the visual inspection, as a first trial on the long way to disentangling pulsation and rotation in $\delta$ Scuti stars. Following this concept, we could test that the sequence search algorithm (SSA) works properly. Any extension could come only after the positive test of this first trial. \subsection{The Algorithm (SSA)} We present here the Sequence Search Algorithm (SSA), developed for the treatment of even larger samples than ours. We define the $i$th {\it frequency sequence} with $n$ elements for a given star by the following set: $S_i=\{ f^{(1)}, f^{(2)}, f^{(3)}, \dots, f^{(n)} \}$, where $i$ and $n$ are positive integers ($i, n \in \mathbb{N}$). The $S_i$ set is ordered $\{ f^{(1)} < f^{(2)} < \dots <f^{(n)} \}$ and \begin{equation}\label{ser_def} f^{(j)}+kD-\Delta f \le f^{(j+1)} \le f^{(j)}+kD+\Delta f \end{equation} is true for each ($f^{(j)}, f^{(j+1)}$) pair, $j \in \mathbb{N}$, $k=1$ or $k=2$.
$D$ denotes the {\it spacing} and $\Delta f$ the {\it tolerance value}. The upper frequency indices indicate serial numbers within the found sequence. We define independent lower frequency indices as well, which show the position in the frequency list ordered by decreasing amplitude, viz. $A(f_1) > A(f_2) > A(f_3) > \dots$. Since we do not have definite knowledge that all modes are excited above an amplitude limit, we allowed ``gaps'' in the sequences. This means that the sequence's defining inequality Eq.~(\ref{ser_def}) is fulfilled for some $j$ indices at $k=2$. Formulated in another way, $S_i=\{ f^{(1)}, f^{(2)}, \dots, f^{(j)}, \emptyset, f^{(j+1)}, \dots, f^{(n)} \}$ is considered as a sequence, where $\emptyset$ means the empty set. We also allow more than one gap in a sequence, but two subsequent gaps are forbidden. {\LongTables \begin{deluxetable*}{rrrrrrrrrrrrrr} \tablecaption{List of our sample \label{bigtable}} \tablehead{ \colhead{No} & \colhead{CoRoT ID} & \colhead{$T_{\mathrm {eff}}$} & \colhead{$\log g$} & \colhead{$v_{\mathrm {rad}}$} & \colhead{SSF} & \colhead{FF} & \colhead{EF$_{\mathrm {VI}}$} & \colhead{SN$_{\mathrm {VI}}$} & \colhead{SP$_{\mathrm {VI}}$} & \colhead{EF$_{\mathrm {A}}$} & \colhead{SN$_{\mathrm {A}}$} & \colhead{SP$_{\mathrm {A}}$} & \colhead{SP$_{\mathrm {FT}}$} \\ \colhead{} & \colhead{} & \colhead{(K)} & \colhead{} & \colhead{(km~s$^{-1}$)} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{(d$^{-1}$)} & \colhead{} & \colhead{} & \colhead{(d$^{-1}$)} & \colhead{(d$^{-1}$)} } \startdata 1=55 & 102661211 & 7075 & 3.575 & 45.0 & 163 & 52 & 25 & 6 & 2.251 & 28,29 & 6,5 & 2.092,1.510 & 0.886 \\ 2=66 & 102671284 & 8550 & 3.650 & 87.5 & 130 & 19 & 8 & 1 & 2.137 & 5 & 1 & 2.161 & 2.137 \\ 3 & 102702314 & 7000 & 2.975 & 95.0 & 141 & 25 & 12 & 3 & 2.169 & 10 & 2 & 2.046 & 0.933 \\ 4 & 102712421 & 7400 & 3.950 & 32.5 & 103 & 25 & 13 & 2 & 2.362 & 11 & 2 & 2.356 & 2.294 \\ 5 & 102723128 & 6975 & 3.900 & 2.5 & 72 & 18 & 7 &
2 & 1.798 & 8 & 2 & 1.668 & 1.852 \\ 6 & 102703251 & 9100 & 3.800 & 42.5 & 118 & 27 & 6 & 1 & 1.850 & 15 & 3 & 1.767 & 1.866 \\ 7 & 102704304 & 7050 & 3.250 & 55.0 & 184 & 53 & $-$ & $-$ & $-$ & 30 & 6 & 1.795 & 0.779 \\ 8=92 & 102694610 & 8000 & 3.700 & 55.0 & 193 & 55 & 14 & 3 & 2.470 & 35 & 8 & 2.481 & 4.237 \\ 9 & 102706800 & 7125 & 3.325 & 52.5 & 122 & 49 & 27 & 5 & 2.758 & 21,22 & 4,5 & 2.784,3.506 & 1.786 \\ 10 & 102637079 & 7325 & 3.850 & 35.0 & 162 & 43 & 21 & 3 & 2.629 & 21 & 4 & 2.614 & 1.374 \\ 11=81 & 102687709 & 7950 & 4.400 & 47.5 & 107 & 19 & 9 & 2 & 3.481 & 5 & 1 & 3.570 & 3.472 \\ 12 & 102710813 & 8350 & 4.150 & 70.0 & 94 & 13 & 4 & 1 & 2.573 & 4 & 1 & 2.569 & 3.125 \\ 13=74 & 102678628 & 7100 & 3.225 & 45.0 & 230 & 49 & $-$ & $-$ & $-$ & 16 & 4 & 2.674 & 2.809 \\ 14=96 & 102599598 & 7600 & 4.000 & 65.0 & 99 & 18 & 4 & 1 & 1.844 & 5 & 1 & 1.866 & 3.472 \\ 15 & 102600012 & 8000 & 4.400 & 12.5 & 107 & 27 & 9 & 1 & 2.475 & 4,4 & 1,1 & 7.342,2.438 & 2.809 \\ 18 & 102618519 & 7500 & 4.500 & 35.0 & 102 & 54 & 10 & 1 & 2.362 & 18,11,16 & 4,2,3 & 6.001,2.359,3.345 & 2.232 \\ 19 & 102580193 & 7525 & 4.150 & 50.0 & 125 & 43 & 7 & 1 & 3.531 & 8,8 & 2,2 & 6.175,4.023 & 3.205 \\ 20 & 102620865 & 11250 & 3.975 & 50.0 & 244 & 40 & $-$ & $-$ & $-$ & 19 & 4 & 1.974 & 1.097 \\ 21 & 102721716 & 7700 & 4.150 & 25.0 & 149 & 52 & 21 & 3 & 2.537 & 9,5 & 2,1 & 7.492,2.636 & 2.427 \\ 22 & 102622725 & 6000 & 4.300 & $-$20 & 144 & 23 & 15 & 4 & 3.497 & 5,5 & 1,1 & 1.877,2.598 & 4.464 \\ 23 & 102723199 & 6225 & 3.225 & 40.0 & 113 & 22 & 9 & 3 & 3.364 & 10 & 2 & 1.461 & 1.316 \\ 24 & 102623864 & 7900 & 4.000 & 50.0 & 117 & 30 & 16 & 4 & 2.226 & 8 & 2 & 3.320 & 2.294 \\ 25 & 102624107 & 8400 & 4.050 & 57.5 & 70 & 32 & 4 & 1 & 3.215 & 8 & 2 & 3.299 & 2.100 \\ 26 & 102724195 & 7550 & 3.900 & 42.5 & 58 & 28 & 14 & 3 & 3.362 & 10,9 & 2,2 & 3.200,2.728 & 1.208 \\ 27 & 102728240 & 7450 & 4.200 & 25.0 & 168 & 55 & 20 & 4 & 3.255 & 18,18 & 4,4 & 5.995,3.178 & 1.623 \\ 28 & 102702932 & 
6975 & 3.350 & 47.5 & 155 & 48 & 16 & 4 & 3.247 & 26 & 6 & 2.655 & 0.806 \\ 29 & 102603176 & 12800 & 4.300 & 35.0 & 308 & 64 & 29 & 6 & 2.342 & 35 & 7 & 2.389 & 0.984 \\ 30 & 102733521 & 7125 & 3.625 & 50.0 & 174 & 43 & 18 & 3 & 3.267 & 16,17 & 4,3 & 3.437,2.297 & 1.667 \\ 31 & 102634888 & 7175 & 4.000 & 40.0 & 179 & 39 & $-$ & $-$ & $-$ & 15 & 3 & 2.622 & 1.344 \\ 32 & 102735992 & 7225 & 3.800 & 62.5 & 83 & 38 & 10 & 2 & 3.117 & 16 & 4 & 3.253 & 1.552 \\ 33 & 102636829 & 7000& 3.200 & 42.5 & 93 & 43 & 9 & 2 & 2.303 & 21,23 & 5,5 & 2.396,1.540 & 1.282 \\ 34 & 102639464 & 9450 & 3.900 & 52.5 & 141 & 31 & $-$ & $-$ & $-$ & 5 & 1 & 3.099 & 3.333 \\ 35 & 102639650 & 7500 & 3.900 & 32.5 & 78 & 28 & 16 & 3 & 3.484 & 8,9 & 2,2 & 3.492,2.609 & 3.387 \\ 36 & 102641760 & 7950 & 4.300 & 40.0 & 135 & 32 & $-$ & $-$ & $-$ & 9 & 2 & 2.723 & 2.632 \\ 37 & 102642516 & 7275 & 3.700 & 45.0 & 72 & 20 & 8 & 2 & 2.335 & 5 & 1 & 2.586 & 3.012 \\ 38 & 102742700 & 7550 & 3.875 & 15.0 & 121 & 28 & 14 & 3 & 2.443 & 5 & 1 & 2.910 & 2.404 \\ 39 & 102743992 & 7950 & 4.300 & 42.5 & 126 & 20 & 6 & 1 & 2.454 & 6 & 1 & 4.382 & 4.386 \\ 40 & 102745499 & 7900 & 3.850 & 80.0 & 119 & 22 & 10 & 3 & 2.603 & 8 & 2 & 1.747 & 1.323 \\ 43 & 102649349 & 9425 & 3.950 & 65.0 & 121 & 16 & 4 & 1 & 1.949 & 5 & 1 & 1.947 & 2.119 \\ 45 & 102647323 & 8200 & 4.300 & 67.5 & 100 & 32 & 7 & 1 & 2.379 & 8,9 & 2,2 & 3.306,1.407 & 3.846 \\ 47 & 102650434 & 8500 & 3.875 & 72.5 & 210 & 34 & 13,14 & 3,4 & 1.597,2.525 & 11 & 2 & 1.611 & 1.092 \\ 48 & 102651129 & 8350 & 3.750 & 40.0 & 88 & 35 & 12 & 2 & 3.413 & 13 & 3 & 3.464 & 3.521 \\ 49 & 102753236 & 7600 & 4.100 & 32.5 & 375 & 37 & 12 & 2 & 3.767 & 14 & 3 & 2.317 & 2.604 \\ 50 & 102655408 & 7375 & 4.000 & 42.5 & 75 & 28 & 14,6 & 3,1 & 3.394,1.550 & 8 & 2 & 3.936 & 2.747 \\ 51 & 102655654 & 7200 & 3.675 & 72.5 & 97 & 16 & 4 & 1 & 3.377 & 4 & 1 & 1.867 & 3.378 \\ 52 & 102656251 & 7950 & 4.200 & 60.0 & 128 & 22 & 4 & 1 & 3.288 & 10 & 2 & 2.747 & 1.623 \\ 53 & 102657423 & 8150 
& 3.425 & 52.5 & 161 & 36 & 10 & 2 & 2.523 & 18 & 4 & 2.492 & 2.403 \\ 54 & 102575808 & 7250 & 3.325 & 17.5 & 202 & 47 & 22,37 & 4,6 & 4.659,2.289 & 17,18 & 4,4 & 2.300,3.275 & 4.717 \\ 55=1 & 102661211 & 7075 & 3.575 & 45.0 & 163 & 43 & 9 & 3 & 2.337 & 21,24 & 5,5 & 2.544,2.262 & 0.874 \\ 56 & 102761878 & 7375 & 3.700 & 32.5 & 80 & 11 & 4 & 1 & 2.564 & $-$ & $-$ & $-$ & 4.310 \\ 62 & 102576929 & 8925 & 4.050 & 32.5 & 104 & 20 & 7 & 2 & 6.365 & 9 & 2 & 1.834 & 1.748 \\ 63 & 102669422 & 7300 & 3.675 & 50.0 & 82 & 35 & 14 & 2 & 3.390 & 18 & 4 & 3.285 & 1.712 \\ 65 & 102670461 & 7325 & 3.575 & 50.0 & 142 & 49 & 22 & 4 & 3.459 & 21 & 4 & 3.437 & 1.282 \\ 66=2 & 102671284 & 8550 & 3.650 & 87.5 & 130 & 39 & 10 & 2 & 2.152 & 16 & 4 & 2.406 & 2.119 \\ 67 & 102607188 & 8100 & 4.200 & 40.0 & 95 & 23 & $-$ & $-$ & $-$ & 4 & 1 & 3.101 & 3.425 \\ 68 & 102673795 & 8050 & 3.750 & 27.5 & 65 & 13 & $-$ & $-$ & $-$ & 5 & 1 & 1.929 & 2.119 \\ 69 & 102773976 & 7525 & 4.400 & 17.5 & 52 & 13 & $-$ & $-$ & $-$ & 4 & 1 & 4.682 & 3.731 \\ 70 & 102775243 & 7950 & 4.250 & 50.0 & 126 & 31 & 10,4 & 2,1 & 4.167,3.002 & 8 & 2 & 3.059 & 3.676 \\ 71 & 102775698 & 9550 & 3.750 & 22.5 & 473 & 56 & 24 & 4 & 3.351 & 30,28 & 6,6 & 3.277,2.218 & 1.131 \\ 72 & 102675756 & 7350 & 3.175 & 77.5 & 342 & 40 & 23 & 4 & 2.277 & 23,25 & 5,5 & 2.249,1.977 & 2.137 \\ 73 & 102677987 & 7700 & 3.950 & 37.5 & 102 & 26 & 13 & 3 & 3.293 & 8,10 & 2,2 & 3.416,2.417 & 1.176 \\ 74=13 & 102678628 & 7100 & 3.225 & 20.0 & 230 & 68 & 32 & 6 & 3.343 & 37 & 8 & 2.940 & 0.647 \\ 75 & 102584233 & 6400 & 3.725 & 75.0 & 58 & 12 & 6 & 2 & 3.287 & $-$ & $-$ & $-$ & 3.472 \\ 76 & 102785246 & 7425 & 3.800 & 30.0 & 76 & 37 & 20 & 5 & 3.527 & 21,21 & 4,4 & 1.772,2.067 & 1.761 \\ 77 & 102686153 & 7125 & 3.525 & 45.0 & 106 & 31 & 10,19 & 2,6 & 2.867,5.713 & 9,9 & 2,2 & 2.521,3.692 & 2.033 \\ 78 & 102786753 & 7100 & 3.425 & 55.0 & 238 & 59 & 22,11 & 4,2 & 2.543,3.297 & 29 & 6 & 2.392 & 1.101 \\ 79 & 102787451 & 7300 & 4.000 & 37.5 & 76 & 13 & 
6 & 2 & 3.428 & 4 & 1 & 3.357 & 3.676 \\ 80 & 102587554 & 7375 & 3.700 & 47.5 & 82 & 34 & 13,14 & 3,2 & 4.293,2.487 & 11,15,12 & 2,3,3 & 4.247,1.734,3.365 & 1.712 \\ 81=11 & 102687709 & 7950 & 4.400 & 47.5 & 107 & 36 & $-$ & $-$ & $-$ & 8 & 2 & 3.480 & 4.032 \\ 82 & 102688156 & 7725 & 4.400 & 55.0 & 96 & 21 & 7 & 1 & 2.308 & 5 & 1 & 4.098 & 4.032 \\ 83 & 102788412 & 8000 & 3.925 & 70.0 & 47 & 10 & 5 & 1 & 2.357 & $-$ & $-$ & $-$ & 6.250 \\ 84 & 102688713 & 7300 & 4.150 & 47.5 & 111 & 40 & 4 & 1 & 3.584 & 17 & 4 & 2.699 & 2.500 \\ 86 & 102589546 & 7250 & 3.700 & 27.5 & 178 & 35 & 17 & 3 & 2.599 & 13,12 & 3,2 & 4.890,2.591 & 2.551 \\ 87 & 102690176 & 7425 & 3.525 & 60.0 & 111 & 35 & 20 & 4 & 2.551 & 17 & 4 & 1.458 & 4.386 \\ 88 & 102790482 & 7225 & 3.475 & 52.5 & 125 & 48 & 15 & 3 & 2.704 & 19 & 4 & 2.837 & 2.358 \\ 89 & 102591062 & 7600 & 3.650 & 30.0 & 101 & 10 & 6 & 1 & 2.551 & $-$ & $-$ & $-$ & 6.944 \\ 90 & 102691322 & 7650 & 4.050 & 37.5 & 45 & 18 & $-$ & $-$ & $-$ & 4,4 & 1,1 & 7.170,3.645 & 3.497 \\ 91 & 102691789 & 7800 & 3.750 & 75.0 & 58 & 20 & 9 & 2 & 2.648 & 5 & 1 & 2.803 & 6.250 \\ 92=8 & 102694610 & 8000 & 3.700 & 55.0 & 193 & 53 & 30,22 & 5,5 & 2.454,3.471 & 35,38 & 7,7 & 2.576,1.880 & 4.032 \\ 93 & 102794872 & 7575 & 4.150 & 32.5 & 157 & 58 & 8 & 1 & 4.346 & 20 & 4 & 4.219 & 1.706 \\ 94 & 102596121 & 7700 & 4.000 & 22.5 & 92 & 33 & $-$ & $-$ & $-$ & 7 & 1 & 3.445 & 2.564 \\ 95 & 102598868 & 7750 & 3.900 & 35.0 & 76 & 26 & 6 & 2 & 3.003 & 10,8 & 2,2 & 2.462,3.294 & 2.564 \\ 96 & 102599598 & 7600 & 4.000 & 65.0 & 99 & 55 & 22,19 & 5,4 & 2.429,3.387 & 42,37 & 9,7 & 2.584,1.835 & 1.552 \enddata \tablecomments{ Columns: (1) the running number (No.), (2) the CoRoT ID, (3) the effective temperature ($T_{\mathrm{eff}}$), (4) the surface gravity ($\log g$), (5) the radial velocity ($v_{\mathrm {rad}}$), (6) the number of SigSpec frequencies (SSF), (7) the number of filtered frequencies (FF), (8) the number of frequencies included in the sequences from the VI 
(EF$_{\mathrm {VI}}$), (9) the number of sequences from the VI (SN$_{\mathrm {VI}}$), (10) the dominant spacing from the VI (SP$_{\mathrm {VI}}$), (11) the number of frequencies included in the sequences from the SSA (EF$_{\mathrm {A}}$), (12) the number of sequences from the SSA (SN$_{\mathrm {A}}$), (13) the dominant spacing from the SSA (SP$_{\mathrm {A}}$), (14) the spacing from the FT (SP$_{\mathrm {FT}}$). } \end{deluxetable*} } \begin{figure*} \includegraphics[width=15.5cm]{fig3.eps} \caption[]{ Echelle diagrams using the best spacing obtained by SSA. The labels mark the running number of stars in our sample. The spacings used for modulo calculation are 2.092, 2.161, 2.046, 2.356, 1.668, 1.767, 1.795, 2.481, 2.784, and 2.614 d$^{-1}$ for the increasing running numbers, respectively. } \label{fig3} \end{figure*} \begin{figure*} \includegraphics[width=15.5cm]{fig4.eps} \caption[]{ Echelle diagrams using the best spacing obtained by SSA. The labels mark the running number of stars in our sample. The spacings used for modulo calculation are 3.570, 2.569, 2.674, 1.866, 7.342, 6.001, 6.175, 1.478, 7.492, and 1.877 d$^{-1}$ for the increasing running numbers, respectively. } \label{fig4} \end{figure*} \begin{figure*} \includegraphics[width=15.5cm]{fig5.eps} \caption[]{ Echelle diagrams using the best spacing obtained by SSA. The labels mark the running number of stars in our sample. The spacings used for modulo calculation are 1.461, 3.320, 3.299, 3.200, 5.995, 2.655, 2.389, 3.082, 2.622, and 1.671 d$^{-1}$ for the increasing running numbers, respectively. } \label{fig5} \end{figure*} \begin{figure*} \includegraphics[width=15.5cm]{fig6.eps} \caption[]{ Echelle diagrams using the best spacing obtained by SSA. The labels mark the running number of stars in our sample. The spacings used for modulo calculation are 2.396, 3.099, 3.492, 2.723, 2.586, 2.910, 4.382, 1.747, 1.947, and 3.306 d$^{-1}$ for the increasing running numbers, respectively. 
} \label{fig6} \end{figure*} \begin{figure*} \includegraphics[width=15.5cm]{fig7.eps} \caption[]{ Echelle diagrams using the best spacing obtained by SSA. The labels mark the running number of stars in our sample. The spacings used for modulo calculation are 1.611, 3.464, 2.317, 3.936, 1.867, 2.748, 2.492, 2.300, 2.544, and 1.834 d$^{-1}$ for the increasing running numbers, respectively. } \label{fig7} \end{figure*} \begin{figure*} \includegraphics[width=15.5cm]{fig8.eps} \caption[]{ Echelle diagrams using the best spacing obtained by SSA. The labels mark the running number of stars in our sample. The spacings used for modulo calculation are 3.285, 3.437, 2.406, 3.101, 1.929, 4.682, 3.059, 3.495, 2.249, and 3.416 d$^{-1}$ for the increasing running numbers, respectively. } \label{fig8} \end{figure*} \begin{figure*} \includegraphics[width=15.5cm]{fig9.eps} \caption[]{ Echelle diagrams using the best spacing obtained by SSA. The labels mark the running number of stars in our sample. The spacings used for modulo calculation are 2.940, 1.772, 2.521, 2.392, 3.357, 4.247, 3.480, 4.098, 2.699, and 4.890 d$^{-1}$ for the increasing running numbers, respectively. } \label{fig9} \end{figure*} \begin{figure*} \includegraphics[width=15.5cm]{fig10.eps} \caption[]{ Echelle diagrams using the best spacing obtained by SSA. The labels mark the running number of stars in our sample. The spacings used for modulo calculation are 1.862, 2.837, 7.170, 2.803, 2.576, 4.219, 3.445, 2.462, and 2.464 d$^{-1}$ for the increasing running numbers, respectively. } \label{fig10} \end{figure*} The SSA scans through the frequency lists and selects frequency sequences defined by Eq.~(\ref{ser_def}) with a parameter set $D$, $\Delta f$ and $n$. The search begins from the highest amplitude frequency $f_1$ that we called {\it basis frequency}. 
The search proceeds with the frequency $\hat{f}_1$, the closest neighbor of $f_1$, if $\vert \hat{f}_1- f_1 \vert \le D$ and $\vert \hat{f}_1- f_1 \vert > \Delta f$. If $\hat{f}_1$ is too close (viz. $\vert \hat{f}_1- f_1 \vert \le \Delta f$), the algorithm steps to the next frequency $\hat{f}_2$, and so on. We collect the sequences $S_1, S_2, \dots, S_N$ ($N \le i$) found by the search from the frequencies $f_1$, $\hat{f}_1$, $\hat{f}_2$, $\dots$, $\hat{f}_{i-1}$ as a {\it pattern} belonging to a given $D$ and basis frequency $f_1$. Next, the algorithm goes to the second highest-amplitude frequency $f_2$ and (if it is not an element of the previous pattern) begins to collect a new pattern. On the basis of the VI (Sec.~\ref{visual}) we demanded that at least one of the two highest amplitude frequencies must be in a pattern, so we did not use lower amplitude frequencies ($f_i$, $i \ge 3$) as basis frequencies. Starting from the parameter range obtained by the VI, we performed numerical experiments to determine the optimal input parameters. We found the smallest difference between the results of the automatic and visual sequence searches at $\Delta f=0.1$~d$^{-1}$. Since we do not have any other reference point, we fixed $\Delta f$ at this value. If we chose $n$ (the length of the sequence) to be small ($n \le 3$), we obtained a huge number of short sequences for most of the stars. To avoid this, we set $n=4$. The crucial parameter of the algorithm is the spacing $D$. Our program determines $D$ in parallel with the sequences. The primary searching interval was $D_{\mathrm {min}} = 1.5 \le D \le 7.8 = D_{\mathrm {max}}$ (in d$^{-1}$). The lower limit was fixed according to our results obtained by the VI. To reduce the computation time we applied an adaptive grid instead of an equidistant one.
We calculated the spacings between the ten highest amplitude frequencies for each star: $D_{1,2}=\vert f_1 - f_2 \vert$, $D_{1,3}=\vert f_1 - f_3 \vert$, $\dots$, $D_{9,10}=\vert f_9 - f_{10} \vert$. The $D_{l,m}$ values could be either too high or too low for a large separation; therefore we selected those for which $D_{\mathrm {min}} \le D_{l,m} \le D_{\mathrm {max}}$ and restricted our further investigations to these selected $D_{l,m}$. We then defined a fine grid around all such spacings with $D_{l,m,h}=D_{l,m}\pm h \delta f$, where $h=1,\dots,15$ and $\delta f = 0.01$~d$^{-1}$. The SSA script was run for all $D=D_{l,m,h}$, searching for possible sequences at each grid value. The SSA script calculates (1) the total number of frequencies in all sequences for a given $D$, which is the frequency number of the pattern, (2) the number of found sequences, (3) the actual standard deviation of the echelle ridges, and (4) the amplitude sum of the pattern frequencies. These four output values helped us to recognize the dominant spacing, since in many stars the algorithm revealed two or three characteristic spacings. The parameters analogous to those derived by the VI, namely the number of frequencies in the sequences (EF$_{\mathrm A}$), the number of sequences (SN$_{\mathrm A}$) and the spacings (SP$_{\mathrm A}$), are given in the 11th, 12th and 13th columns of Table~\ref{bigtable}. The algorithmic search recognized many more spacing values. Obviously, when we have more spacings, the appropriate sets of frequencies and the numbers of frequencies are also given. The best solutions are given first in these columns. The sequences obtained by the SSA are also flagged in the electronic table in additional columns (see Table~\ref{sample_data}). SSA1, SSA2, etc. correspond to the first, second, etc. value of the spacing. The flags are similar to those in the case of the VI ($0$ -- not included; $1, 2, 3, \dots$ mark the frequencies of the 1st, 2nd, 3rd, $\dots$ sequences).
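The core of the search loop can be sketched as follows. This is our own simplified reconstruction, not the production SSA code: it greedily grows a single sequence from one basis frequency for one trial spacing $D$, honoring the gap rule of Eq.~(\ref{ser_def}); the scan over basis frequencies and over the adaptive $D$ grid would wrap around this routine. All numerical values in the example are synthetic.

```python
def grow_sequence(freqs, start, D, df):
    """Greedily grow one sequence from `start` for a trial spacing D.

    Implements f(j+1) in [f(j) + k*D - df, f(j) + k*D + df] with k=1,
    or k=2 to bridge a single missing member (a "gap"); two subsequent
    gaps are forbidden. `freqs` must be sorted in increasing order.
    """
    seq = [start]
    current, gap_used_last = start, False
    while True:
        nxt = None
        for k in (1, 2):
            if k == 2 and gap_used_last:
                break  # two subsequent gaps are forbidden
            target = current + k * D
            cand = [f for f in freqs if abs(f - target) <= df]
            if cand:
                nxt = min(cand, key=lambda f: abs(f - target))
                gap_used_last = (k == 2)
                break
        if nxt is None:
            return seq
        seq.append(nxt)
        current = nxt

# Synthetic list: spacing 2.5 d^-1 with the member at 15.0 missing.
freqs = sorted([10.0, 11.3, 12.5, 17.5, 18.9, 20.0])
seq = grow_sequence(freqs, 10.0, D=2.5, df=0.1)
# seq is [10.0, 12.5, 17.5, 20.0]: the gap at 15.0 was bridged once.
```

A sequence would then be accepted only if it reaches the minimum length ($n=4$ in our runs), and the four output statistics listed above would be accumulated over all sequences of the pattern.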
The summary of the results of the SSA is the following. The SSA found independent solutions for 73 stars. Unexpectedly, the test cases showed seemingly more diversity. As we noticed from the beginning, the filtering process resulted, in some cases, in quite different numbers of frequencies used in the SSA. Compared to the number of SigSpec frequencies, the differences in the resulting frequency content of the double-checked stars are not remarkable, in most cases less than 10\%. In any case, there is a block of the highest-amplitude frequencies that is common to both files of the double-checked cases. This guarantees that the SSA uses the same basis frequencies for the sequence search. Given the differences in the frequency content of the double-checked cases, we checked the sensitivity of the SSA to the frequency content. It is obvious that if we have a larger frequency content, then we find more sequences and more frequencies located on the echelle ridges. Of course, this also influences the mean spacings. Nevertheless, as Table~\ref{bigtable} shows, the spacings differ by less than 10\%. The comparison of the two approaches, the VI and the SSA, gives the following results. They resulted in similar spacings for 42 stars. In the SSA we found six cases with half of the VI value. In 23 cases different spacing values were found. This seemingly large number contains the cases where one of the two approaches did not find any sequences in the star (12 for the VI and 4 for the SSA; there is no overlap between these subsets). The best spacings found by the algorithm for the CoRoT targets (the first value of the 13th column) are used to create the echelle diagrams presented in Figs.~\ref{fig3}-\ref{fig10}. All filtered frequencies are plotted (small and large dots), while the frequencies located on an echelle ridge are marked by large dots.
Taking into account the fixed $\pm$0.1 d$^{-1}$ tolerance, we may not expect to find any effects caused by changes in the chemical composition (glitches) or by evolution (avoided crossings). However, we may conclude that we found an unexpectedly large number of regular frequency spacings in our sample of CoRoT $\delta$ Scuti stars. Any relation that we find among the echelle ridges, the physical parameters and the estimated rotational splitting confirms that the echelle ridges are not accidental arrangements of unrelated frequencies. \subsection{Fourier Transform (FT)}\label{FT} \begin{figure} \includegraphics[width=9cm]{fig11.eps} \caption[]{ Fourier Transform of star No. 65. The highest peak at 1.282 d$^{-1}$ agrees with a shift of sequences in the VI. The lower amplitude peak agrees with the 3.459 or 3.437 d$^{-1}$ spacings obtained by the VI and SSA, respectively. } \label{fig11} \end{figure} \begin{figure} \includegraphics[width=9cm]{fig12.eps} \caption[]{ Some characteristic examples of the FT in our sample. The labels mark the running number of the star. The simplest and the most complex examples are in the top and the middle panels. The bottom panels show examples with a very low value of the spacing. Both the VI and SSA resulted in higher values. The highest peak probably represents a shift between sequences. } \label{fig12} \end{figure} The Fourier Transform (FT) of the frequencies involved in the pulsation is nowadays widely used for searching for period spacings and finding the large separation, from \citet{Handler97} to \citet{Garcia15}. It is worthwhile to compare the spacing obtained by the FT and by our sequence search method. We followed the approach described by \citet{Handler97} (instead of the one introduced by \citealt{Moya10}) and derived the FT spacing (the highest peak) for our sample, given in the 14th column of Table~\ref{bigtable}. The FT of the star No. 65 is shown in Fig.~\ref{fig11}.
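The idea of the FT of a frequency list can be sketched as follows. This is a minimal illustration under our own simplified weighting (unit weight per frequency, scanning trial spacings directly); the exact recipe of \citet{Handler97} may differ in details such as the weighting and the scanned range, and all numerical values are synthetic.

```python
import numpy as np

def spacing_ft(freqs, s_min, s_max, ds=0.001):
    """Scan trial spacings s and return |FT| of the frequency list.

    Each frequency is treated as a unit-weight event; a common spacing
    s makes the phases 2*pi*f/s coherent and produces a peak.
    """
    freqs = np.asarray(freqs, dtype=float)
    s = np.arange(s_min, s_max, ds)
    phase = 2.0 * np.pi * freqs[None, :] / s[:, None]
    amp = np.abs(np.exp(1j * phase).sum(axis=1)) / len(freqs)
    return s, amp

# Synthetic single sequence with spacing 3.0 d^-1; the scan starts at
# 2.0 d^-1 to avoid the subharmonic peaks at 3.0/m.
freqs = 10.2 + 3.0 * np.arange(6)
s, amp = spacing_ft(freqs, s_min=2.0, s_max=8.0)
best = s[amp.argmax()]
# best is close to 3.0 d^-1, with amplitude near 1 (full coherence).
```

The sketch makes the key limitation visible: the FT responds to any frequently repeated difference, including shifts between sequences, not only to the spacing within a single sequence.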
The highest peak suggests a large separation at 1.282 d$^{-1}$, which does not agree with the spacings obtained by the VI and SSA (3.459 and 3.437 d$^{-1}$, respectively). The FT spacing is closer to the characteristic leftward shift of the third sequence relative to the first one (1.209 d$^{-1}$). The FT shows a peak near our value, but it is definitely not the highest peak. \begin{figure} \includegraphics[width=9cm]{fig13.eps} \caption[]{ Comparison of the FT diagram and echelle diagrams with three different spacings obtained by the SSA for stars No. 18 and 80. The top panels give the FT diagram and are marked by the value of the highest peak. The other panels are marked by the spacing used for constructing the echelle diagrams. The highest peak of the FT and the best solution of the SSA do not agree. } \label{fig13} \end{figure} A general comparison of the FT spacing to our spacing values, both visual (VI) and algorithmic (SSA), reveals that the two methods (three approaches) do not yield a unique solution. There are cases when the VI, SSA and FT spacings are the same (stars No. 2, 4, 5, 6, 11, 48 and 79), whether the spacings are around 2.2$\pm$0.1 d$^{-1}$ (stars No. 2 and 4), around 1.7$\pm$0.2 d$^{-1}$ (stars No. 5 and 6), or around 3.5$\pm$0.1 d$^{-1}$ (stars No. 11, 48 and 79). As the echelle diagrams show, these stars have the simplest regular structure. There are cases when the VI and SSA spacings are the same (stars No. 8, 12, 25 and 43), but the FT shows a different spacing. There are cases when the VI and FT spacings are the same (stars No. 19 and 24), or the SSA and FT spacings are the same (stars No. 13, 23, 32 and 39). In Fig.~\ref{fig12} we present some characteristic examples of the FT, representing the simplest cases (upper panels), the most complicated cases, where it is hard to decide which peak is the highest (middle panels), and cases when the FT shows a completely different spacing than the VI and SSA (bottom panels).
Our example for the visual inspection, star No. 65, belongs to this group. We omitted the low-frequency region, applying the Nyquist frequency to the FT. We present the numbers of spacings in 1 d$^{-1}$ bins for the different methods in Table~\ref{distrib}. The numbers in a bin are slightly different for the VI and SSA, but the FT shows a remarkably higher number in the 0-1 and 1-2 d$^{-1}$ regions of the spacings. In a later phase the 1-2 d$^{-1}$ bin was divided into two parts to avoid the artifact of the lower limit of the SSA for the spacing (1.5 d$^{-1}$). The VI and SSA have low numbers in the 1-1.5 d$^{-1}$ bin, while the FT has a much higher value. The VI definitely interpreted such a spacing as a shift of the sequences. The SSA has a lower value, probably due to the lower limit that we adopted from the VI. In the 1.5-2.0 d$^{-1}$ bin the VI still has a lower population, but the SSA and FT found similar populations. In both cases there is no additional search for the shifts of the sequences. It is worthwhile to see how the different SSA spacings, when more than one is obtained, are related to the FT spacing. We present two cases. The FT diagram and the echelle ridges for which we used the different spacings are shown in Fig.~\ref{fig13}. The left panels belong to star No. 18, the right panels to star No. 80. The panels are labeled with the actual spacing value that we used for the calculation of the modulo values. The top panels show the FT. The second panels give the dominant SSA spacing, which results in the straightest echelle ridges, but the other values also fulfill the requirements of the SSA. The FT agrees with one of the SSA spacings, but not necessarily with the dominant SSA spacing.
\begin{deluxetable}{rrrr} \tablecaption{Spacing distributions \label{distrib}} \tablehead{ \colhead{Range} & \colhead{$N_{\mathrm {VI}}$} & \colhead{$N_{\mathrm {SSA}}$} & \colhead{$N_{\mathrm {FT}}$} } \startdata 0-1 & $-$ & $-$ & 7 \\ 1-2 & 5 & 16 & 25 \\ (1-1.5 & $-$ & 2 & 13) \\ (1.5-2 & 5 & 14 & 12) \\ 2-3 & 35 & 31 & 23 \\ 3-4 & 26 & 19 & 16 \\ 4-5 & 3 & 6 & 9 \\ 6-7 & 1 & 3 & 3 \\ 7- & $-$ & 3 & $-$ \enddata \tablecomments{ Distribution of spacings obtained by the different methods in 1 d$^{-1}$ bins. The columns show the spacing range and the number of spacings found by the VI, SSA, and FT methods within the given range. } \end{deluxetable} We conclude that the different methods (with different requirements) are able to catch different regularities among the frequencies. The different spacing values are not mistakes of any method; rather, the methods are sensitive to different regularities. The VI and SSA concentrate on the continuous sequence(s), while the FT is sensitive to the number of similar frequency differences, disregarding how many sequences there are among the frequencies. When we have a second sequence with a midway shift, the FT shows this shift instead of the spacing of a single sequence. The spacing of a single sequence will then be double the value of the highest peak in the FT. If the shifts of the sequences are asymmetric, the FT shows the lower and the larger value with equal probability. When we have many peaks in the FT, it reflects that we have many echelle ridges with different shifts with respect to each other. The sequence method helps to explain the fine structure of the FT. \section{Tests for rejecting artifacts and confirming sequences}\label{test} The comparison of the spacings obtained by the three different approaches results in a satisfactory agreement if we consider the different requirements. However, the spacing is the only point where we are able to compare them, since this is the only output of the FT.
We cannot compare the unexpectedly large number of echelle ridges (sequences), since we identified them here for the first time. What we can do, and what we really did, is to perform every test that can rule out possible artifacts and confirm the existence of so many sequences with almost equal spacing in $\delta$ Scuti stars. (1) We started with a very basic test: can we get the echelle ridges as a play of randomness on normally distributed frequencies? Three tests, the one-dimensional Kolmogorov-Smirnov (K-S) test, the Cram\'er-von~Mises test, and the $\chi^2$-test, were applied to our frequency lists for the stars and to randomly generated frequency lists. The frequency distributions of 14 stars showed significant differences from the normal distribution, but in the mathematical sense most of our frequency lists proved to be randomly distributed. This surprising mathematical result inspired further checks. The classical K-S test and its more sensitive refinements, such as the Anderson-Darling or Cram\'er-von~Mises tests, are successfully applied to small samples; indeed, these tests are the suggested tools for samples with few ($\sim 20$) elements. Our frequency lists have 9-68 elements; the average is 32.8. We prepared a 30-element, equidistantly distributed artificial frequency list. In our phrasing, all 30 frequencies form one single sequence. None of the tests, however, found significant differences from randomness. If we increase the number of synthetic data points to 100-200 elements (depending on the test used), the tests detect the structure, viz. a significant (95\%) difference from the normal distribution. As an additional control case, we tested 30 frequencies of a pulsating model of FG Vir (discussed in the Part I paper). All tests indicated that the model frequencies ($l$=0, 1, and 2) were also randomly distributed, although these frequencies were the result of a pulsation code, and a sequence of grouped frequencies was reported for FG Vir \citep{Breger05}.
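The small-sample behaviour described above can be reproduced with standard statistical routines (a sketch on synthetic, equidistant numbers, not on our actual frequency lists; testing against a normal distribution with moments fitted to the sample is our simplification):

```python
import numpy as np
from scipy import stats

# A perfectly regular (equidistant) synthetic "frequency list": in our
# phrasing, one single sequence of 30 frequencies.
freqs = np.linspace(5.0, 50.0, 30)

# Compare against a normal distribution with fitted mean and scatter.
mu, sigma = freqs.mean(), freqs.std(ddof=1)
ks = stats.kstest(freqs, 'norm', args=(mu, sigma))
cvm = stats.cramervonmises(freqs, 'norm', args=(mu, sigma))

# Despite the obvious structure, neither test rejects the normal
# hypothesis at the 95% level for such a small sample.
```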
Adding the rotational triplets and multiplets (64 frequencies) to the list (altogether 94 frequencies), the tests revealed a significant difference from the normal distribution. We conclude that these statistical tests would give correct results for our specific distributions only if we had two to four times more data points than we have. The present negative results have no meaning; they are only small-sample effects. In other words, such global statistical tests are not suitable tools for detecting or rejecting structures in our frequency lists. (2) If the echelle ridges that we found were coincidences only, we should be able to find similar regularities for random frequency distributions as well. To check this hypothesis, we chose three stars that represent our results well: stars No. 39, 10 and 92 show a single sequence with 6 frequencies, four sequences with 21 frequencies (average sequence length 5.25), and seven sequences with 35 frequencies (average length 5), respectively. We prepared 100 artificial data sets for each of these stars. The data sets contain random numbers as frequencies within the interval spanned by the real frequencies. The number of random ``frequencies" is the same as the number of the real frequencies, and the real stellar amplitudes are randomly assigned to the synthetic frequencies. We ran the SSA on these synthetic data with the same parameters as we used for the real data, and compared two parameters of the test and real data: the total number of frequencies located on echelle ridges, and the average length of the sequences. In the most complex case (star No.~92) we did not find a regular structure in the simulated data for which the total number of frequencies located on the echelle ridges is as high as in the real star (35). In the two simpler cases, only 5\% (for star No.~39) and 2\% (for star No.~10) of the echelle ridges proved to be as long as in the real stars.
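The structure of such a Monte Carlo test can be sketched as follows. The ridge-counting helper is a simplified stand-in for the full SSA run, and the frequency interval, tolerance, and minimum ridge length are illustrative assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(42)

def ridge_members(freqs, spacing, tol=0.05, min_len=4):
    """Count frequencies on echelle ridges for one trial spacing: modulo
    values clustering within `tol` (in units of the spacing) are treated
    as one ridge; only ridges with at least `min_len` members count.
    Wrap-around at modulo 1 is ignored for simplicity."""
    mods = np.sort(np.mod(freqs, spacing) / spacing)
    total, i = 0, 0
    while i < len(mods):
        j = i
        while j + 1 < len(mods) and mods[j + 1] - mods[i] < tol:
            j += 1
        if j - i + 1 >= min_len:
            total += j - i + 1
        i = j + 1
    return total

def best_ridge_count(freqs, spacings):
    return max(ridge_members(freqs, s) for s in spacings)

# 100 synthetic sets of random "frequencies" drawn uniformly from the
# interval spanned by the real frequencies, as in the test described above.
n_freq, lo, hi = 21, 5.0, 45.0
spacings = np.arange(1.5, 7.0, 0.05)
counts = [best_ridge_count(rng.uniform(lo, hi, n_freq), spacings)
          for _ in range(100)]
```

The fraction of synthetic sets whose `counts` reach the value found for the real star then estimates the coincidence probability.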
These Monte Carlo tests show that coincidence as the origin of a few of the echelle ridges found in our sample stars cannot be ruled out completely, but the probability of such a scenario is low ($<$5\%) and depends strongly on the complexity of the echelle structure (the more echelle ridges, the lower the probability). In our sample, this could concern a maximum of one to three stars. (3) An obvious basic test was whether any regularity could be caused by instrumental effects (after removing most of them) and whether the data sampling resulted in a systematic spacing of the frequencies. A well-known effect in ground-based observations (especially from single sites) is the 1, 2, $\dots$ d$^{-1}$ alias structure around the pulsation frequencies. In this case, we worked on continuous observations with the CoRoT space telescope. In principle, this excludes the problem of the alias structure, but the continuity is interrupted from time to time by gaps of unequal length caused by passages through the South Atlantic Anomaly (SAA). In the spectral window pattern the only noticeable alias peak is at 2.006 d$^{-1}$, and sometimes an even lower peak appears around 4 d$^{-1}$. The expected alias structure around any pulsation peak is only 2 percent in amplitude. A test on a synthetic light curve was presented by \citet{Benko15}: comparison of the equally spaced and gapped data shows no difference in the frequencies. For an alias structure to mimic a sequence containing four members, at least a quintuplet alias structure around the highest-amplitude frequencies would be required, which is very improbable for the CoRoT data. We may conclude that our sequences are not caused by any alias structure of the CoRoT data. Table~\ref{bigtable} contains some spacings with near-integer values, but in most cases the different methods yielded different values. In an alias sequence we must have strictly equal spacings and mostly only one echelle ridge.
(4) Linear combinations of the higher-amplitude modes create a systematic arrangement of the frequencies, reflecting the spacing between the highest-amplitude modes. A high-amplitude $\delta$ Scuti star, CoRoT~101155310 \citep{Poretti11}, was used as a control case for two reasons. First, no systematic spacing was found for the 13 independent frequencies by our SSA algorithm, which means the star does not show any of the instrumental effects discussed in the previous paragraph. Second, when the list was completed with the linear-combination frequencies, our algorithm found a dominant spacing around 2.67 d$^{-1}$, which is near the frequency difference of the highest-amplitude modes. Our visual inspection and algorithmic search were based on the investigation of the spacing of the highest-amplitude peaks. It was therefore necessary to check the frequency lists for linear combinations. Half of our targets (44) showed linear combinations, with one (15) or two (12) $f_{\mathrm a}+f_{\mathrm b}=f_{\mathrm c}$ connections. In some cases (stars No. 21, 54, 66, 78, 7, 74 and 8) 9-14 linear-combination frequencies were found. Comparing these to the frequencies in the echelle ridges, we found that the linear combinations were not included in the echelle ridges. There is only a single case (star No. 71) where the echelle ridge at around 0.18 modulo value contains three members of a linear combination. In other cases, only two members fit the echelle diagrams. Star No. 38 is a critical case, where omitting a member of the linear-combination frequencies would force us to delete the single echelle ridge. We conclude that the echelle structure is not seriously modified in the other targets. We also compared the echelle frequencies connected to linear combinations across our stars. The frequencies differ from star to star, so the connections between the frequencies do not have any technical origin.
(5) To guard against the human brain's well-known tendency to search everywhere for structures (in the visual inspection), and against any artifact in the algorithm, we used well-known $\delta$ Scuti stars as test cases. Spacings of consecutive radial orders were published for different $\delta$ Scuti stars: 44 Tau \citep{Breger08}, BL Cam \citep{Rodriguez07}, and FG Vir \citep{Breger05}, summarized by \citet{Breger09}, and KIC 8054146 \citep{Breger12}. We checked these stars with our SSA algorithm to see whether we would find similar results or not. \begin{deluxetable}{crrrr} \tablecaption{Comparison of spacings \label{comp} } \tablehead{ \colhead{Star} & \colhead{SP$_{\mathrm p}$} & \colhead{SP$_{\mathrm A}$} & \colhead{EF$_{\mathrm A}$} & \colhead{SN$_{\mathrm A}$} \\ \colhead{} & \colhead{(d$^{-1}$)} & \colhead{(d$^{-1}$)} & \colhead{} & \colhead{} } \startdata 44 Tau & 2.25 & 4.62 & 22 & 5 \\ BL Cam & 7.074 & 7.11 & 8 & 2 \\ FG Vir & 3.7 & 3.86 & 15 & 3 \\ KIC 8054146 & 2.763 & 2.82,3.45 & 7,12 & 1,2 \enddata \tablecomments{The first two columns contain the star name and the published spacing SP$_{\mathrm p}$. The last three columns show the results of our SSA search: the spacing (SP$_{\mathrm A}$), the total number of frequencies in all sequences (EF$_{\mathrm A}$), and the number of found sequences (SN$_{\mathrm A}$), respectively.} \end{deluxetable} We summarize the results in Table~\ref{comp}. The published and the SSA spacings are in good agreement (2nd and 3rd columns). For 44 Tau we found double the value of the published spacing. In the case of KIC 8054146 we found a second spacing by SSA in addition to the first, matching spacing. We also present the number of frequencies involved in the sequences and the number of sequences (4th and 5th columns). We conclude that our algorithm finds the proper spacing of the data and that the VI does not simply reflect pattern-seeking by the human brain.
We confirmed that the echelle ridges belong to the pulsating stars and reflect regularities connected to the stars. \begin{figure} \includegraphics[width=8cm]{fig19.eps} \caption[]{ Theoretical HR diagram derived from the parameters obtained from the AAO spectroscopy. The location of the targets was used to derive the estimated rotational velocities and rotational frequencies. } \label{fig19} \end{figure} \section{Rotation-pulsation connection} A basic problem of mode identification in $\delta$ Scuti stars is partly the lack of the regular arrangement of the frequencies predicted by theory. A further complication is caused by the rotational splitting of the non-radial modes, especially for fast-rotating stars. Application of our sequence search method to $\delta$ Scuti stars revealed an unexpectedly large number of echelle ridges in many targets. Knowing the regular spacing of the frequencies, we wondered whether we could find a connection between the echelle ridges and the rotational frequency of the stars. \subsection{Estimated rotational velocities} We do not have independently measured rotational velocities for our targets. Nowadays the space missions have enormously increased the number of stars investigated photometrically with extremely high precision, but ground-based spectroscopy cannot keep up with this increase. However, for our targets we have at least AAO spectroscopy for classification purposes \citep{Guenther12, Sebastian12}. Based on the AAO spectroscopy, one of us \citep{Hareter13} derived the T$_{\mathrm{eff}}$ and $\log g$ values for our sample, using the same rotational velocity (100 km~s$^{-1}$) for all the stars (see Table~\ref{bigtable}). The error bars are also given in \citet{Hareter13}. To give insight into the errors of the AAO spectroscopy, we present the most typical error ranges for T$_{\mathrm{eff}}$ and $\log g$. For 70\% of the stars the error of T$_{\mathrm{eff}}$ falls in the range of 50-200 K.
In a few cases ($\le 9$) the errors are over 1000 K. For $\log g$ the typical range is 0.2-0.8, which covers 83\% of the stars. The physical parameters were used to plot our targets on the theoretical HR diagram, as shown in Fig.~\ref{fig19}. To obtain a more sophisticated knowledge of the rotation of our targets, we followed the procedure of \citet{Balona15}, who determined 10 boxes on the theoretical HR diagram. Using the catalog of projected rotational velocities \citep{Glebocki00}, they determined the distribution of $v\sin i$ for each box. The true distribution of equatorial velocities in the boxes was derived by a polynomial approximation \citep{Balona75}. Using the characteristic equatorial rotational velocities of the boxes, we derived the estimated equatorial rotational velocity and the rotational frequency for each target, presented in Table~\ref{est}. To obtain the rotational frequency, an estimate of the stellar radius is required; we followed \citet{Balona15} in using the polynomial fit of \citet{Torres10}, developed from studies of 94 detached eclipsing binary systems plus $\alpha$ Cen. The polynomial fit is a function of T$_{\mathrm{eff}}$, $\log g$, and [Fe/H]; we assume solar metallicity for our estimates. The radii calculated this way are also given in Table~\ref{est}. We also included the mass and the mean density in the table, calculated using the \citet{Torres10} polynomial fits for mass and radius. Although these are only estimated values, they allow us to compare the rotational frequency and the shifts between the sequences to search for a connection between them, if there is any.
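The conversion from the estimated equatorial velocity and the radius and mass to the tabulated rotational frequency and mean density is a direct unit conversion; a minimal sketch (the solar constants are our assumed values, and whether exactly this route was used is an inference from the tabulated numbers):

```python
import math

R_SUN_KM = 6.957e5   # solar radius in km (assumed value)
DAY_S = 86400.0      # seconds per day
RHO_SUN = 1.408      # approximate solar mean density, g cm^-3

def rot_frequency(radius_rsun, v_eq_kms):
    """Rotational frequency in d^-1 from equatorial velocity and radius:
    Omega = v_eq / (2 pi R), converted from s^-1 to d^-1."""
    return v_eq_kms * DAY_S / (2.0 * math.pi * radius_rsun * R_SUN_KM)

def mean_density(mass_msun, radius_rsun):
    """Mean density in g cm^-3, scaled from the solar value."""
    return RHO_SUN * mass_msun / radius_rsun**3

# Star No. 49: R = 1.936 R_sun, V_eq = 130 km/s, M = 1.725 M_sun
omega = rot_frequency(1.936, 130.0)   # ~1.33 d^-1
rho = mean_density(1.725, 1.936)      # ~0.33 g cm^-3
```

The results match the corresponding entries of Table~\ref{est} to the quoted precision.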
\begin{deluxetable*}{rrrrrrrrrrrrr} \tabletypesize{\scriptsize} \tablecaption{Estimated stellar properties \label{est} } \tablehead{ \colhead{Star} & \colhead{$R$} & \colhead{$V_{\mathrm{eq}}$} & \colhead{$\Omega_{\mathrm{rot}}$} & \colhead{$M$} & \colhead{$\rho$} & \colhead{} & \colhead{Star} & \colhead{$R$} & \colhead{$V_{\mathrm{eq}}$} & \colhead{$\Omega_{\mathrm{rot}}$} & \colhead{$M$} & \colhead{$\rho$} \\ \colhead{} & \colhead{(R$_\sun$)} & \colhead{(km~s$^{-1}$)} & \colhead{(d$^{-1}$)} & \colhead{(M$_\sun$)} & \colhead{(g~cm$^{-3}$)} & \colhead{} & \colhead{} & \colhead{(R$_\sun$)} & \colhead{(km~s$^{-1}$)} & \colhead{(d$^{-1}$)} & \colhead{(M$_\sun$)} & \colhead{(g~cm$^{-3}$)} } \startdata 1=55 & 3.911 & 80 & 0.404 & 2.046 & 0.0482 & & 49 & 1.936 & 130 & 1.327 & 1.725 & 0.3349 \\ 2=66 & 3.985 & 110 & 0.545 & 2.528 & 0.0563 & & 50 & 2.176 & 110 & 0.999 & 1.728 & 0.2360 \\ 3 & 9.641 & 80 & 0.164 & 3.044 & 0.0048 & & 51 & 3.414 & 80 & 0.463 & 1.976 & 0.0699 \\ 4 & 2.340 & 70 & 0.591 & 1.776 & 0.1952 & & 52 & 1.745 & 130 & 1.472 & 1.764 & 0.4681 \\ 5 & 2.409 & 70 & 0.574 & 1.677 & 0.1690 & & 53 & 5.402 & 110 & 0.402 & 2.726 & 0.0244 \\ 6 & 3.335 & 140 & 0.829 & 2.527 & 0.0959 & & 54 & 5.797 & 100 & 0.341 & 2.486 & 0.0180 \\ 7 & 6.370 & 80 & 0.248 & 2.519 & 0.0137 & & 55=1 & 3.911 & 80 & 0.404 & 2.046 & 0.0482 \\ 8=92 & 3.540 & 110 & 0.614 & 2.248 & 0.0714 & & 56 & 3.347 & 110 & 0.649 & 2.013 & 0.0756 \\ 9 & 5.726 & 100 & 0.345 & 2.428 & 0.0182 & & 62 & 2.311 & 150 & 1.282 & 2.184 & 0.2492 \\ 10 & 2.679 & 110 & 0.811 & 1.840 & 0.1348 & & 63 & 3.448 & 110 & 0.630 & 2.014 & 0.0692 \\ 11=81 & 1.347 & 130 & 1.907 & 1.660 & 0.9574 & & 65 & 4.008 & 100 & 0.493 & 2.146 & 0.0470 \\ 12 & 1.928 & 150 & 1.537 & 1.920 & 0.3770 & & 66=2 & 3.985 & 110 & 0.545 & 2.528 & 0.0563 \\ 13=74 & 6.651 & 100 & 0.297 & 2.588 & 0.0124 & & 67 & 1.767 & 130 & 1.454 & 1.809 & 0.4620 \\ 14=96 & 2.222 & 130 & 1.156 & 1.800 & 0.2309 & & 68 & 3.304 & 140 & 0.837 & 2.204 & 0.0860 \\ 15 & 1.352 &
130 & 1.899 & 1.674 & 0.9532 & & 69 & 1.297 & 90 & 1.371 & 1.541 & 0.9955 \\ 18 & 1.143 & 90 & 1.555 & 1.499 & 1.4126 & & 70 & 1.633 & 130 & 1.573 & 1.734 & 0.5611 \\ 19 & 1.796 & 90 & 0.990 & 1.669 & 0.4055 & & 71 & 3.702 & 140 & 0.747 & 2.767 & 0.0768 \\ 20 & 2.986 & 110 & 0.728 & 3.073 & 0.1626 & & 72 & 7.355 & 100 & 0.269 & 2.812 & 0.0100 \\ 21 & 1.825 & 130 & 1.407 & 1.721 & 0.3988 & & 73 & 2.406 & 130 & 1.068 & 1.875 & 0.1897 \\ 22 & 1.248 & 40 & 0.633 & 1.153 & 0.8345 & & 74=13 & 6.651 & 100 & 0.297 & 2.588 & 0.0124 \\ 23 & 6.039 & 80 & 0.262 & 2.151 & 0.0138 & & 75 & 2.913 & 40 & 0.271 & 1.629 & 0.0929 \\ 24 & 2.282 & 110 & 0.952 & 1.897 & 0.2247 & & 76 & 2.907 & 110 & 0.748 & 1.923 & 0.1103 \\ 25 & 2.220 & 150 & 1.335 & 2.015 & 0.2595 & & 77 & 4.235 & 80 & 0.373 & 2.131 & 0.0395 \\ 26 & 2.547 & 150 & 1.163 & 1.870 & 0.1593 & & 78 & 4.910 & 70 & 0.282 & 2.260 & 0.0269 \\ 27 & 1.668 & 130 & 1.540 & 1.616 & 0.4903 & & 79 & 2.161 & 70 & 0.640 & 1.704 & 0.2378 \\ 28 & 5.430 & 80 & 0.291 & 2.318 & 0.0204 & & 80 & 3.347 & 110 & 0.649 & 2.013 & 0.0756 \\ 29 & 2.096 & 110 & 1.037 & 3.232 & 0.4947 & & 81=11 & 1.347 & 130 & 1.907 & 1.660 & 0.9574 \\ 30 & 3.649 & 80 & 0.433 & 2.005 & 0.0581 & & 82 & 1.320 & 130 & 1.945 & 1.596 & 0.9770 \\ 31 & 2.135 & 70 & 0.648 & 1.664 & 0.2409 & & 83 & 2.558 & 140 & 1.081 & 1.997 & 0.1680 \\ 32 & 2.851 & 70 & 0.485 & 1.853 & 0.1126 & & 84 & 1.759 & 90 & 1.011 & 1.601 & 0.4146 \\ 33 & 6.839 & 100 & 0.289 & 2.583 & 0.0114 & & 86 & 3.307 & 110 & 0.657 & 1.967 & 0.0766 \\ 34 & 2.962 & 140 & 0.934 & 2.524 & 0.1368 & & 87 & 4.360 & 100 & 0.453 & 2.255 & 0.0383 \\ 35 & 2.536 & 110 & 0.857 & 1.853 & 0.1601 & & 88 & 4.610 & 100 & 0.429 & 2.242 & 0.0322 \\ 36 & 1.530 & 130 & 1.679 & 1.707 & 0.6716 & & 89 & 3.679 & 100 & 0.537 & 2.158 & 0.0611 \\ 37 & 3.315 & 110 & 0.656 & 1.976 & 0.0764 & & 90 & 2.083 & 130 & 1.233 & 1.777 & 0.2770 \\ 38 & 2.640 & 110 & 0.823 & 1.893 & 0.1449 & & 91 & 3.234 & 110 & 0.672 & 2.113 & 0.0880 \\ 39 & 1.530 & 130 & 
1.679 & 1.707 & 0.6716 & & 92=8 & 3.540 & 110 & 0.614 & 2.248 & 0.0714 \\ 40 & 2.823 & 110 & 0.770 & 2.038 & 0.1276 & & 93 & 1.805 & 130 & 1.423 & 1.684 & 0.4035 \\ 43 & 2.754 & 140 & 1.004 & 2.456 & 0.1655 & & 94 & 2.243 & 110 & 0.969 & 1.832 & 0.2288 \\ 45 & 1.562 & 150 & 1.897 & 1.779 & 0.6573 & & 95 & 2.594 & 110 & 0.838 & 1.937 & 0.1564 \\ 47 & 2.861 & 140 & 0.967 & 2.220 & 0.1335 & & 96 & 2.222 & 130 & 1.156 & 1.800 & 0.2309 \\ 48 & 3.387 & 140 & 0.817 & 2.315 & 0.0839 & & & & & & & \enddata \tablecomments{The table contains the running number, the radius of the star, the estimated rotational velocity, estimated rotational frequency, mass, and mean density. The radius and mass used in these estimates were calculated from the spectroscopic parameters using the formulas of \citet{Torres10}, assuming solar metallicity. The same parameters for stars after running numbers 48 can be found in the 7th, 8th, 9th, 10th, 11th, and 12th columns. } \end{deluxetable*} \subsection{Echelle ridges and rotation} We have three parameters that we can compare for our targets, namely the shift of the sequences, the rotational frequencies derived, and the spacing, or in some cases the spacings. \subsubsection{Midway shift of the sequences} \begin{deluxetable}{llll} \tablecaption{Midway shifts \label{midway} } \tablehead{ \colhead{Star} & \colhead{Spacing} & \colhead{No. of ridges} & \colhead{Mod. 
of ridges} \\ \colhead{} & \colhead{(d$^{-1}$)} & \colhead{(\%)} & \colhead{} } \startdata 7 & 1.795 & 1-2 {\it (8)} & 0.36-0.89 \\ 8 & 2.481 & 7-8 {\it (1)} & 0.07-0.58 \\ & & 5-4 {\it (1)} & 0.29-0.79 \\ 10 & 2.614 & 4-3 {\it (3)} & 0.24-0.76 \\ 13 & 2.674 & 2-4 {\it (5)} & 0.90-0.37 \\ 18 & 6.001 & 1-2 {\it (9)} & 0.90-0.35 \\ 20 & 1.478 & 1-2 {\it (1)} & 0.06-0.57 \\ 27 & 5.995 & 2-3 {\it (2)} & 0.11-0.62 \\ 28 & 2.655 & 3-5 {\it (6)} & 0.93-0.22 \\ & & 4-2 {\it (11)} & 0.05-0.60 \\ & & 3-6 {\it (12)} & 0.93-0.37 \\ 29 & 2.389 & 6-4 {\it (0)} & 0.77-0.26 \\ 32 & 1.671 & 1-3 {\it (6)} & 0.63-0.17 \\ 33 & 2.396 & 4-3 {\it (1)} & 0.67-0.19 \\ 54 & 2.300 & 1-2 {\it (3)} & 0.54-0.03 \\ 66 & 2.406 & 4-2 {\it (0)} & 0.93-0.43 \\ 71 & 3.495 & 1-4 {\it (3)} & 0.65-0.14 \\ & & 3-2 {\it (1)} & 0.85-0.34 \\ 72 & 2.249 & 1-3 {\it (4)} & 0.16-0.68 \\ & & 2-4 {\it (9)} & 0.43-0.89 \\ 74 & 2.940 & 7-5 {\it (5)} & 0.64-0.15 \\ & & 8-2 {\it (2)} & 0.77-0.28 \\ 76 & 1.772 & 1-3 {\it (9)} & 0.79-0.25 \\ & & 2-3 {\it (5)} & 0.73-0.25 \\ 77 & 2.521 & 1-2 {\it (3)} & 0.51-0.04 \\ 78 & 2.392 & 6-4 {\it (5)} & 0.75-0.22 \\ 87 & 1.867 & 3-2 {\it (5)} & 0.05-0.54 \\ 92 & 2.576 & 1-2 {\it (7)} & 0.85-0.31 \\ & & 3-5 {\it (1)} & 0.13-0.62 \\ & & 7-6 {\it (4)} & 0.19-0.70 \\ 96 & 2.464 & 1-7 {\it (5)} & 0.49-0.02 \\ & & 6-5 {\it (3)} & 0.76-0.27 \\ & & 3-2 {\it (3)} & 0.92-0.41 \enddata \tablecomments{ The table contains the running numbers, the spacing, the numbering of the echelle ridges and the modulo value of the echelle ridges for identification in Figs.~\ref{fig3}-\ref{fig10}. The ratio of the shift of the sequences and half of the spacing is given by italics in 3rd column. } \end{deluxetable} In the framework of the sequence search method we derived the shifts between each pair of sequences as we described in Sec.~\ref{visual}. The independent shifts were averaged for the members in the sequence. In the rest of the paper we refer to the average value when we mention the shift. 
There are two expectations for the shifts. First, similarly to the spacing in the asymptotic regime, the sequences of consecutive radial orders of different $l$ values are shifted relative to each other; for example, the $l$=0 and $l$=1 sequences are shifted to midway between the large separation in the asymptotic regime. The other possible expectation for the shift is the rotational splitting. We checked the shifts of each target for both effects. Of course, we have shifts only when we found more than one echelle ridge; only one echelle ridge was found in 20 stars. In 34 stars we have no positive result for the midway shift. However, we found shifts of half of the regular spacing (shifted to midway) in 22 stars. We present them in Table~\ref{midway}. The table contains the running numbers, the spacing, the numbering of the echelle ridges, and the modulo values of the echelle ridges for identification in Figs.~\ref{fig3}-\ref{fig10}. To show how precise the midway shift is, we give the percentage deviation in italics. In some stars there are two pairs with a midway shift (stars No. 8, 71, 72, 74 and 76), while in stars No. 28, 92 and 96 three pairs appear with a midway shift compared to the spacing. Of course, it could happen that the shift to midway represents a 1:2 ratio of the estimated rotational frequency and the spacing, but we mention these cases independently, as a similarity to the behavior in the asymptotic regime. In general, the ratio of the dominant spacing to the rotational frequency is in the 1.5-4.5 interval for most of our targets (52 stars). \subsubsection{Shift of sequences with the rotational frequency} The pulsation-rotation connection appears in a prominent way when one, two or even more shifts between pairs of the echelle ridges agree with the rotational frequency. We found 31 stars where a doublet, triplet or multiplet appears with a splitting near the rotational frequency.
In Table~\ref{doublet} we give the running number of stars, the estimated rotational frequency, the shifts between the sequences, the numbering of echelle ridges connected to each other, and the modulo values of these echelle ridges for identification on Figs~\ref{fig3}-\ref{fig10}. \begin{deluxetable*}{lllll} \tablecaption{Doublets, triplets and multiplets \label{doublet} } \tablehead{ \colhead{Star} & \colhead{$\Omega_{\mathrm{rot}}$} & \colhead{Shift} & \colhead{No. of ridges} & \colhead{Mod. of ridges} \\ \colhead{} & \colhead{(d$^{-1}$)} & \colhead{(d$^{-1}$)} & \colhead{} & \colhead{} } \startdata 1 & 0.404 & 0.455 {\it (13)} & 4-5 & 0.17-0.39 \\ & & 0.482 {\it (19)} & 3-2 & 0.52-0.73 \\ 7 & 0.248 & 0.224 {\it (11)} & 6-2 & 0.78-0.89 \\ & & 0.224-0.563 {\it(11-14*)} & 6-2-5 & 0.78-0.89-0.22 \\ 8 & 0.614 & 0.521-0.551 {\it (18-11)} & 6-8-4 & 0.37-0.58-0.79 \\ & & 0.325-0.355 {\it (6*-16*)} & 5-2-8 & 0.29-0.43-0.58 \\ 9 & 0.345 & 0.664 {\it (4*)} & 3-2 & 0.91-0.14 \\ 10 & 0.811 & 0.874 {\it (8)} & 1-2 & 0.16-0.50 \\ & & 0.874-0.662 {\it (8-23)} & 1-2-3 & 0.16-0.50-0.75 \\ 13 & 0.297 & 0.317 {\it (7)} & 1-2 & 0.78-0.90 \\ 18 & 1.555 & 1.430 {\it (9)} & 1-4 & 0.90-0.15 \\ & & 1.649 {\it (6)} & 3-2 & 0.09-0.35 \\ & & 1.315 {\it (18)} & 4-2 & 0.15-0.35 \\ & & 1.430-1.315 {\it (9-18)} & 1-4-2 & 0.90-0.15-0.35 \\ 20 & 0.727 & 0.886 {\it (22)} & 3-4 & 0.30-0.75 \\ 27 & 1.540 & 1.533 {\it (0)} & 1-4 & 0.90-0.16 \\ 28 & 0.291 & 0.308 {\it (6)} & 3-4 & 0.93-0.05 \\ 29 & 1.037 & 0.997-1.041 {\it (4-0)} & 1-6-2 & 0.35-0.77-0.20 \\ & & 0.546-0.451 {\it (5*-15*)} & 1-7-6 & 0.35-0.56-0.77 \\ 30 & 0.433 & 0.459 {\it (6)} & 2-1 & 0.77-0.91 \\ & & 0.901 {\it (4*)} & 4-2 & 0.51-0.77 \\ 31 & 0.648 & 0.587 {\it (10)} & 1-3 & 0.64-0.86 \\ 32 & 0.485 & 0.478 {\it (1)} & 1-4 & 0.63-0.77 \\ & & 0.571 {\it (18)} & 3-2 & 0.17-0.33 \\ 33 & 0.289 & 0.251 {\it (15)} & 5-3 & 0.08-0.19 \\ 49 & 1.327 & 1.568 {\it (18)} & 1-3 & 0.39-0.05 \\ 53 & 0.402 & 0.428 {\it (6)} & 4-1 & 0.48-0.58 \\ & & 
0.361 {\it (11)} & 2-3 & 0.68-0.80 \\ 54 & 0.341 & 0.611 {\it (12*)} & 1-3 & 0.54-0.79 \\ 55 & 0.404 & 0.818 {\it (1*)} & 2-5 & 0.45-0.78 \\ & & 0.729-0.818 {\it (11*-1*)} & 3-2-5 & 0.17-0.45-0.78 \\ 62 & 1.282 & 1.277 {\it (0)} & 1-2 & 0.61-0.92 \\ 63 & 0.630 & 0.611 {\it (3)} & 4-2 & 0.67-0.87 \\ 66 & 0.545 & 0.563 {\it (3)} & 1-4 & 0.69-0.93 \\ & & 0.269-0.301 {\it (1*-10*)} & 1-3-4 & 0.69-0.80-0.93 \\ 71 & 0.747 & 0.654 {\it (16)} & 1-3 & 0.65-0.85 \\ & & 0.672 {\it (11)} & 4-2 & 0.14-0.34 \\ & & 0.859 {\it (15)} & 5-6 & 0.75-0.02 \\ 72 & 0.269 & 0.603-0.563 {\it (12*-5*)} & 1-2-3 & 0.16-0.43-0.68 \\ 73 & 1.068 & 1.247 {\it (17)} & 2-1 & 0.20-0.57 \\ 74 & 0.297 & 0.359-0.356-0.346 {\it (21-20-17)} & 4-3-5-2 & 0.91-0.05-0.15-0.28 \\ 76 & 0.748 & 0.811-0.844 {\it (8-13)} & 1-3-2 & 0.79-0.25-0.72 \\ 78 & 0.282 & 0.364-0.345-0.297-0.361 {\it (29-22-5-28)} & 2-5-4-1-3 & 0.93-0.11-0.22-0.35-0.50 \\ 84 & 1.011 & 0.506 {\it (0*)} & 4-2 & 0.42-0.60 \\ 86 & 0.657 & 0.610 {\it (8)} & 3-1 & 0.26-0.32 \\ 87 & 0.453 & 0.525-0.933-0.406 {\it (16-3*-12)} & 4-2-1-3 & 0.17-0.54-0.78-0.05 \\ 88 & 0.429 & 0.357-0.322 {\it (20-33)} & 4-2-1 & 0.28-0.42-0.54 \\ 92 & 0.614 & 0.567 {\it (8)} & 4-7 & 0.96-0.19 \\ & & 0.605-0.699 {\it (1-14)} & 5-1-3 & 0.62-0.85-0.13 \\ & & 0.766-0.605-0.699 {\it (25-1-14)} & 2-5-1-3 & 0.31-0.62-0.85-0.13 \\ 93 & 1.423 & 1.285 {\it (11)} & 4-1 & 0.78-0.10 \\ & & 1.159-1.285 {\it (23-11)} & 3-4-1 & 0.51-0.78-0.10 \\ 96 & 1.156 & 1.123-1.254 {\it (3-8)} & 1-3-2 & 0.49-0.92-0.41 \enddata \tablecomments{The table contains the running number, the estimated rotational frequency, the shifts between the rotationally connected echelle ridges, the numbering of the rotationally connected echelle ridges, and the modulo values of the echelle ridges for identification purposes on Figs.~\ref{fig3}-\ref{fig10}. The percentage deviation of each shift from the estimated rotational frequency is given in italics; an asterisk marks the special cases explained in the text.
} \end{deluxetable*} Of course, we may not expect the estimated rotational frequency and the split (shift) of the doublet and triplet components to agree to high precision. As a guideline we followed \citet{Goupil00}, who derived about a 30\% deviation of the splitting of a component from equally spaced splitting. We accepted the doublets, triplets and multiplets if the deviation of the shifts is less than 20\% compared to the estimated rotational frequency. To show how reliable the doublets, triplets and multiplets are, we included the percentage deviation between the actual shift and the estimated rotational frequency. In most cases presented in Table~\ref{doublet} the deviations are even less than 10\% (13 stars). We included some examples with deviations higher than 20\% representing triplets (stars No. 10, 8 and 93) or complete or incomplete multiplets (stars No. 78 and 92). To get a complete view of the connection between the shifts and the estimated rotational frequencies, we also included cases where the shifts are twice (stars No. 9, 30, 54, 55 and 72) or half (stars No. 8, 29, 66 and 84) the value of the estimated rotational frequency. The deviations are marked by an asterisk in these cases. A missing component in an incomplete multiplet (star No. 87) is also marked by an asterisk. The file with the flags attached to this paper allows interested readers to derive the shifts between the pairs of echelle ridges. The numbering of the flags agrees with the numbering in the electronic table.
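The bookkeeping behind Tables~\ref{midway} and \ref{doublet}, accepting a shift as a midway shift or as a rotational splitting at $\Omega_{\mathrm{rot}}$, $2\Omega_{\mathrm{rot}}$ or $\Omega_{\mathrm{rot}}/2$ within the 20\% tolerance, can be sketched as follows (our reconstruction of the criteria, not the original code; the label strings are invented):

```python
def pct_dev(value, reference):
    """Percentage deviation of `value` from `reference`."""
    return abs(value - reference) / reference * 100.0

def classify_shift(shift, spacing, omega_rot, tol_pct=20.0):
    """Label a shift between two echelle ridges: 'midway' if it is close
    to half the spacing, or a rotational splitting at Omega, 2*Omega or
    Omega/2 (the latter two flagged with '*', as in the tables)."""
    labels = []
    if pct_dev(shift, spacing / 2.0) <= tol_pct:
        labels.append('midway')
    for factor, mark in ((1.0, ''), (2.0, '*'), (0.5, '*')):
        if pct_dev(shift, factor * omega_rot) <= tol_pct:
            labels.append(f'rot x{factor:g}{mark}')
    return labels

# Star No. 1: Omega_rot = 0.404 d^-1, spacing 2.092 d^-1, shift 0.455 d^-1
# deviates by ~13% from Omega_rot, so it is accepted as a rotational doublet.
```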
\subsubsection{Difference of spacings and the rotational frequency} \begin{deluxetable*}{rrrrrrrrr} \tablecaption{Possible large separation \label{lsep}} \tablehead{ \colhead{No} & \colhead{$SP_1$} & \colhead{$SP_2$} & \colhead{$SP_1-SP_2$} & \colhead{$\Omega_{\mathrm{rot}}$} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)} \\ \colhead{} & \colhead{(d$^{-1}$)} & \colhead{(d$^{-1}$)} & \colhead{(d$^{-1}$)} & \colhead{(d$^{-1}$)} & \colhead{(d$^{-1}$)} & \colhead{(d$^{-1}$)} & \colhead{(d$^{-1}$)} & \colhead{(d$^{-1}$)} } \startdata 35 & 3.492 & 2.609 & 0.883 & 0.857 & *3.492 & 2.609 & 1.752 & 4.349 \\ 45 & 3.306 & 1.407 & 1.889 & 1.897 & 3.306 & 1.407 & -- & *5.203 \\ 47 (VI) & 2.525 & 1.597 & 0.928 & 0.967 & 2.525 & 1.597 & 0.63 & *3.492 \\ 72 & 2.249 & 1.977 & 0.272 & 0.269 & 2.249 & 1.977 & *1.708 & 2.518 \\ 73 & 3.416 & 2.417 & 0.999 & 1.068 & *3.416 & 2.417 & 1.349 & 4.484 \\ 95 & 3.294 & 2.262 & 0.832 & 0.838 & *3.294 & 2.462 & 1.624 & 4.132 \\ \tableline 1 & 2.092 & 1.510 & 0.582 & 0.404 & *2.092 & 1.510 & 1.106 & 2.496 \\ 22 & 2.598 & 1.877 & 0.721 & 0.633 & 2.598 & 1.877 & 1.244 & *3.231 \\ 92 & 2.576 & 1.880 & 0.696 & 0.614 & *2.576 & 1.880 & 1.266 & 3.190 \\ 96 (VI) & 3.387 & 2.429 & 0.958 & 1.156 & *3.387 & 2.429 & 1.273 & 4.543 \\ \tableline 9 & 3.506 & 2.784 & 0.722 & 0.345 & 3.506 & 2.784 & *2.439 & 3.851 \\ 54 & 3.275 & 2.300 & 0.975 & 0.341 & 3.275 & 2.300 & *1.959 & 3.616 \enddata \tablecomments{The columns contain the running numbers (No), the spacings, the difference of the spacings, the rotational frequency and the possible large separations in agreement with the Equations (2), (3), (4), and (5).} \end{deluxetable*} There are 25 stars in our sample where SSA found more than one spacing between the frequencies (see Table~\ref{bigtable}). 
Based on the results obtained for the model frequencies of FG Vir, namely that one of the spacings agrees with the large separation and the other with the sum of the large separation and the rotational frequency, we generalized how to obtain the large separation when neither spacing represents the large separation itself but both spacings are combinations of the large separation and the rotational frequency (Part I paper). We recall the equations: \begin{eqnarray} SP_1 & =& \Delta\nu, \ {\mathrm {and}} \ SP_2 = \Delta\nu - \Omega_{\mathrm{rot}}, \\ SP_2 & =& \Delta\nu, \ {\mathrm {and}} \ SP_1 = \Delta\nu + \Omega_{\mathrm{rot}}, \\ SP_1 & =& \Delta\nu + 2\cdot\Omega_{\mathrm{rot}}, \ {\mathrm {and}} \ SP_2 = \Delta\nu + \Omega_{\mathrm{rot}}, \\ SP_2 & =& \Delta\nu - 2\cdot\Omega_{\mathrm{rot}}, \ {\mathrm {and}} \ SP_1 = \Delta\nu - \Omega_{\mathrm{rot}}, \end{eqnarray} where $SP_1$ and $SP_2$ are the larger and smaller values of the spacings, respectively, found by SSA, $\Delta\nu$ is the large separation in the traditionally used sense, and $\Omega_{\mathrm{rot}}$ is the estimated rotational frequency. The four possible values of the large separation ($\Delta\nu$) are (2) $\Delta\nu = SP_1$, (3) $\Delta\nu = SP_2$, (4) $\Delta\nu = SP_2 - \Omega_{\mathrm {rot}}$, or (5) $\Delta\nu = SP_1 + \Omega_{\mathrm {rot}}$. We applied these equations to the $SP_1$ and $SP_2$ spacings of CoRoT 102675756, star No. 72 of our sample, in the Part I paper. Having obtained the possible values of the large separation, we plotted them on the mean density versus large separation diagram, along with the relation derived from stellar models by \citet{Suarez14}. We concluded that the most probable value of the large separation is the one closest to the relation.
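Equations (2)-(5) give four candidate values of the large separation from the two spacings and the estimated rotational frequency; a direct transcription:

```python
def large_separation_candidates(sp1, sp2, omega_rot):
    """Candidate large separations following Equations (2)-(5):
    (2) dnu = SP1, (3) dnu = SP2,
    (4) dnu = SP2 - Omega_rot, (5) dnu = SP1 + Omega_rot."""
    return {
        '(2)': sp1,
        '(3)': sp2,
        '(4)': sp2 - omega_rot,
        '(5)': sp1 + omega_rot,
    }

# Star No. 72 (CoRoT 102675756): SP1 = 2.249, SP2 = 1.977, Omega = 0.269 d^-1
cands = large_separation_candidates(2.249, 1.977, 0.269)
# candidates (4) and (5) evaluate to 1.708 and 2.518 d^-1 (up to rounding)
```

The candidate closest to the mean density versus large separation relation is then selected, as described in the text.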
In this paper we applied this concept to our targets in which the difference of the spacings agrees with the estimated rotational frequency exactly or nearly, or in which the spacing difference is twice or three times the rotational frequency. We mention the last group out of curiosity, as a special relation between the spacing difference and the estimated rotational frequency. We emphasize that our results are not forced to fulfill the theoretical expectation. To keep the homogeneity, we everywhere used $\Omega_{\mathrm{rot}}$, the estimated rotational frequency, to calculate the large separation, and not the actual difference of the spacings where we have one. In addition to the SSA solutions, we included solutions for two stars (No. 47 and 96) from the VI that agreed with the aforementioned requirements. We calculated the four possible large separations for these stars and present them in Table~\ref{lsep}. The three groups, defined by the level of agreement between the difference of the spacings and the rotational frequency, are divided by horizontal lines. The columns give the running number, $SP_1$, $SP_2$, $SP_1-SP_2$, $\Omega_{\mathrm{rot}}$, and the four possible large separations according to Equations (2), (3), (4) and (5). Fig.~\ref{figexact} shows the location of the best-fitting large separations (marked by an asterisk in Table~\ref{lsep}) on the mean density versus large separation diagram, along with the relation given by \citet{Suarez14}. The three groups are shown by different symbols, and the large separations obtained from the different equations are marked by different colors. The stars with $\Delta\nu = SP_1$ (black symbols) agree very well with the middle part of the theoretically derived line. These are the stars with intermediate rotational frequency.
The stars with higher and lower rotational frequency, marked by blue and green symbols and derived via $\Delta\nu =SP_2 - \Omega_{\mathrm{rot}}$ and $\Delta\nu = SP_1 + \Omega_{\mathrm {rot}}$, respectively, deviate more from the theoretical line. The small black open circles represent $\Delta\nu = SP_2 - 2\cdot \Omega_{\mathrm {rot}}$ (next to the green symbols) or $\Delta\nu = SP_1 + 2\cdot \Omega_{\mathrm {rot}}$ values (next to the blue symbols). \begin{figure} \includegraphics[width=9cm]{2spacJoyceFinal2Omega.eps} \caption[]{ Location of the stars on the log mean density vs. log large separation diagram, along with the relation based on stellar models from \citet{Suarez14}. The three groups are those in which the difference of the spacings is: equal to the rotational frequency (triangles); near to that value (squares); or twice or three times the rotational frequency (circles, presented for curiosity). The color code corresponds to how $\Delta\nu$ was calculated: black Eq.~(2), green Eq.~(4), and blue Eq.~(5). Open circles show $\Delta\nu$ calculated with $\pm 2\cdot\Omega_{\mathrm{rot}}$. \label{figexact} } \end{figure} \begin{figure} \includegraphics[width=9cm]{AllStarsFinalOmega.eps} \caption[]{ Location of the whole sample on the log mean density vs. log large separation diagram, along with the relation based on stellar models from \citet{Suarez14}. The new symbols represent the stars for which there is no agreement between the rotational frequency and the difference of the spacings (inverted triangles) or the stars with only one spacing (diamonds). The color code is the same as in the previous figure, with the addition of red corresponding to $\Delta\nu = SP_2$. } \label{figall} \end{figure} We found numerical agreement between the difference of the spacings and the rotational frequency in only half of the stars (12 stars) for which SSA found more than one spacing. We do not know why we do not have numerical agreement for the other stars. 
A reason may be the uncertainties in the estimated rotational velocities. Nevertheless, we proceeded to apply the conclusion based on Equations (2)-(5) for deriving the possible large separation to the stars for which we do not have an agreement (14 stars) and to the stars (53) for which SSA found only one spacing. Plotting in Fig.~\ref{figall} the best-fitting value of the large separation for both groups on the mean density versus large separation diagram, along with the relation of \citet{Suarez14}, we found that the large separations are closely distributed along the \citet{Suarez14} line. The figure contains not only the two new groups but the whole sample. Different symbols are used for the two groups (inverted triangles and diamonds, respectively), but the color code according to the calculation of $\Delta\nu$ is kept in the same sense as in Fig.~\ref{figexact}. The distribution of the whole sample is consistent. The stars with $\Delta\nu=SP_1$ agree with the middle part of the line, whether they fulfill the equations or not, although some stars with $\Delta\nu=SP_1$ from the group with two spacings appear on the upper part of the plot. The deviation of the stars with higher and lower rotational frequency can also be noticed. We may have a slight selection effect in the lower $\Delta\nu$ region due to the limitation of the spacing search at 1.5 d$^{-1}$. We may conclude that we found an unexpectedly clear connection between the pulsational frequency spacings and the estimated rotational frequency in many targets of our sample. The tight connection confirms that our echelle ridges are not formed by frequencies accidentally located along them; they represent the pulsation and rotation of our targets. Of course, well-determined rotational frequencies for as large a sample as ours would be needed to confirm the results with higher precision than we have here. 
However, this way of investigation seems to be a meaningful approach to disentangle the pulsation and rotation in the mostly fast-rotating $\delta$ Scuti stars. The frequencies along the ridges could be identified with the island modes in the ray dynamics approach, while frequencies widely distributed in the echelle diagrams could be the chaotic modes. Both have observable amplitude in fast-rotating stars, but only the island modes show regularity such as the echelle ridges \citep{Ouazzani15}. For the authors it is not trivial to give a deeper interpretation of the results in the ray dynamics approach, but hopefully colleagues will interpret them in forthcoming papers. \section{Summary} We aimed to survey the possible regularities in $\delta$ Scuti stars on a large sample in order to determine whether or not we can use the regular arrangement of high-precision space-based frequencies for mode identification. Ninety stars observed by the CoRoT space telescope were investigated for regular spacing(s). We introduced the sequence search method with two approaches, visual inspection and algorithmic search. The visual inspection supported the choice of the parameter range and the tolerance value for quasi-equal spacing. The method proved to be successful in determining the dominant spacing and in finding sequences/echelle ridges in 77 stars, with from one up to nine ridges per star. Comparing the spacings obtained by SSA and FT, we concluded that the different methods (with different requirements) are able to catch different regularities among the frequencies. Not only does the spacing in a sequence represent regularity among the frequencies; the shift of the sequences does, too. The sequence search method thus yielded very useful parameters beside the most probable spacing, namely the shift of the sequences and the difference of the spacings. The determination of the averaged shift between the pairs of echelle ridges opens a new field of investigation. 
Comparing the shift to the spacing, we determined a midway shift of at least one pair of the echelle ridges in 22 stars. Comparing the shifts to the estimated rotational frequency, we recognized rotationally split doublets (in 21 stars), triplets (in 9 stars) and multiplets (in 4 stars), not only for a few frequencies but for whole echelle ridges, in $\delta$ Scuti stars that are pulsating in the non-asymptotic regime. The numerical agreement between the difference of the spacings and the rotational frequency obtained for FG Vir (Part I paper) and in many of our sample stars (12) revealed a possibility for deriving the large separation ($\Delta\nu$) in $\delta$ Scuti stars pulsating in the non-asymptotic regime. Generalizing to those stars for which there is no numerical agreement between the difference of the spacings and the rotational frequency (14), or for which only one spacing was obtained by SSA (53), we found an arrangement of each target along the theoretically determined mean density versus large separation relation \citep{Suarez14}, calculating $\Delta\nu$ as $\Delta\nu = SP_1$, $\Delta\nu = SP_2$, $\Delta\nu = SP_2 -\Omega_{\mathrm{rot}}$ or $\Delta\nu = SP_1+\Omega_{\mathrm{rot}}$. The large separation agrees with the dominant spacing for the stars rotating at an intermediate rate. The large separation for the sample stars with higher mean density and fast rotation agrees with $SP_1+\Omega_{\mathrm{rot}}$, and for the stars with lower mean density and slow rotation with $SP_2-\Omega_{\mathrm{rot}}$ (if two spacings were found; otherwise the only spacing was used in the calculation). The consistent interpretation of our results using the physical parameters of the targets and the agreement with the theoretically expected relation suggest that the unexpectedly large number of echelle ridges represents the pulsation and rotation of our targets, and not frequencies accidentally located along the echelle ridges. 
Although we could not at this moment reach the mode identification level using only the frequencies obtained from space data, this step in disentangling the pulsation-rotation connection is very promising. The huge database obtained by space missions (MOST, CoRoT and {\it Kepler}) allows us to search for regular spacings in an even larger sample and to provide more knowledge on how to reach the asteroseismological level for $\delta$ Scuti stars. \acknowledgments{ This work was supported by the grant ESA PECS No 4000103541/11/NL/KLM. The authors are extremely grateful to the referee for encouraging us to include the rotation (if possible) in our interpretation. The other remarks are also acknowledged. }
\section{Introduction} Progress in 3d human pose and shape estimation has been sustained over the past years as several statistical 3d human body models\cite{ghum2020,pavlakoscvpr2019,SMPL2015}, as well as learning and inference techniques, have been developed\cite{sminchisescu_ijrr03,bogo2016,SMPL2015,ghum2020,dmhs_cvpr17,zanfir2018monocular,Rhodin_2018_ECCV,Kanazawa2018,kolotouros2019learning,ExPose:2020,zanfir2020neural}. More recently, there has been interest in human interactions, self-contact \cite{Mueller:CVPR:2021, fieraru2021learning}, and human-object interactions, as well as in frameworks to jointly reconstruct multiple people\cite{jiang2020coherent, zhang2021body, zanfir2018monocular, fieraru2021remips}. As errors steadily decreased on some of the standard 3d estimation benchmarks like Human3.6M\cite{Ionescu14pami} and HumanEva\cite{sigal2010humaneva}, other laboratory benchmarks recently appeared \cite{Fieraru_2020_CVPR,fieraru2021learning}, more complex in terms of motion, occlusion and scenarios, \eg capturing interactions. While good quality laboratory benchmarks remain essential to monitor and track progress, as a rich source of motion data to construct pose and dynamic priors, or to initially bootstrap models trained on more complex imagery, overall there is an increasing need to bridge the gap between the inevitably limited subject, clothing, and scene diversity of the lab, and the complexity of the real world. It is also desirable to go beyond skeletons and 3d markers towards more holistic models of humans with estimates of shape, clothing, or gestures. While several recent self-supervised and weakly-supervised techniques emerged, with promising results in training with complex real-world image data\cite{zanfir2020weakly,joo2020exemplar,kolotouros2019learning}, their quantitative evaluation is still a challenge, as accurate 3d ground truth is currently very difficult to capture outside the lab. 
This either pushes quantitative assessment back to the lab, or makes it dominantly qualitative and inherently subjective. It is also difficult to design visual capture scenarios systematically in order to improve performance, based on identified failure modes. HSPACE (Synthetic Parametric Humans Animated in Complex Environments) is a large-scale dataset that contains high resolution images and video of multiple people together with their ground truth 3d representation based on GHUM -- a state-of-the art full-body, expressive statistical pose and shape model. HSPACE contains multiple people in diverse poses and motions (including hand gestures), at different scene positions and scales, and with different body shapes, ethnicity, gender and age. People are placed in synthetic complex scenes and under natural or artificial illumination, as simulated by accurate light transport algorithms. The dataset also features occlusion due to other people, objects, or the environment, and camera motion. In order to produce HSPACE, we rely on a corpus of 100 diverse 3d human scans (purchased from RenderPeople\cite{Renderpeople.com}), with varying parametric shape, animated with over 100 real human motion capture snippets (from the CMU human motion capture dataset), and placed in 100 different synthetic indoor and outdoor environments, available from various free and commercial sources. We automatically animate the static human scans and consistently place multiple people and motions, sampled from our asset database, into various scenes. We then render the resulting scenes for different cinematic camera viewpoints at 4K/HDR, using a realistic, high-quality game engine. Our contribution is the construction of a large-scale automatic system, which required considerable time as well as human and material resources to perfect. 
The system supports our construction of a 3d dataset, HSPACE, unique in its large scale, complexity and diversity, as well as its accuracy and ground-truth granularity. Such features complement and considerably extend the current dataset portfolio of our research community, being essential for progress in the field. To make the approach practical and scalable we also develop: (1) procedures to fit GHUM to complex 3d human scans of dressed people, with the capacity to retarget and animate both the body and clothing, automatically, with realistic results, and (2) an automatic 3d scene placement methodology to avoid collisions over time between people, as well as between people and the environment. Finally, we present large-scale studies revealing insight into practical uses of synthetic data, the importance of using weakly-supervised real data in bridging the sim-to-real gap, and the potential for improvement as model capacity increases. The dataset and an evaluation server will be made available for research and performance evaluation. \section{Related Work} There are quite a few people datasets with various degrees of supervision: 2d joint annotations, semantic segmentations \cite{MsCOCO, OpenImages}, or 3d annotations obtained by fitting a statistical body model or from multi-camera views\cite{zhang2020object, STRAPS2020BMVC, joo2020exemplar, mehta2017monocular}, dense pose \cite{Guler2018DensePose}, indoor mocap datasets with 3d pose ground truth for single or multiple people \cite{sigal2010humaneva, Ionescu14pami, Fieraru_2020_CVPR, fieraru2020three, fieraru2021learning, fieraru2021aifit}, and in-the-wild datasets where IMUs and mobile devices were used to recover 3d pseudo ground truth joints \cite{vonMarcard2018}. All these datasets contain real images; however, the variability of the scenes and the humans is limited and the 3d ground truth accuracy is subject to annotator bias, joint positioning errors (for mocap) or IMU sensor data optimization errors. 
It is also difficult to increase the diversity of a real dataset, as one cannot capture the same exact sequence from, e.g., a different camera viewpoint. In order to address the above-mentioned issues, efforts have been made to generate data synthetically using photorealistic 3d assets (scenes, characters, motions). Some synthetic datasets compose statistical body meshes or 3d human scans with realistic human textures on top of random background images, HDRI backdrops or 3d scenes with limited variability \cite{varol2017learning, yan2021ultrapose, Patel:CVPR:2021, zhu2020simpose}, or rely on game engine simulations to recover human motions and trajectories \cite{caoHMP2020}. Table \ref{tbl:datasets_comparison} reviews some of the most popular datasets along several important diversity axes. Our HSPACE dataset addresses some of the limitations in the state of the art by diversifying over people, poses, motions and scenes, all within a realistic rendering environment and by providing a rich set of 2d and 3d annotations. \begin{table*}[!htbp] \setlength{\tabcolsep}{0.8em} \begin{center} \scalebox{0.78}{ \begin{tabular}{lllllllllll} Dataset & \#Frames & \#Views & \#Subj. & \#Motions & Complexity & Image & GT format \\ \hline\hline HumanEva \cite{sigal2010humaneva} & $\approx 40k$ & 4/7 & 4 & 6 & 1 subject, no occlusion & lab & 3DJ \\ Human3.6m \cite{Ionescu14pami} & $\approx 3.6M$ & 4 & 11 & 15 & 1 subject, minor occlusion & lab & 3DJ, GHUM/L \\ CHI3D \cite{fieraru2020three} & $\approx 728k$ & 4 & 6 & 120 & multiple interacting subjects & lab & 3DJ, GHUM/L, 
CS \\ HumanSC3D \cite{fieraru2021learning} & $\approx 1.3M$ & 4 & 6 & 172 & 1 subject, frequent self-contact & lab & 3DJ, GHUM/L, CS \\ Fit3D \cite{fieraru2021aifit} & $\approx 3M$ & 4 & 13 & 47 & 1 subject, extreme poses & lab & 3DJ, GHUM/L \\ TotalCapture \cite{trumble2017total} & $\approx 1.9M$ & 8 & 5 & 10 & 1 subject, no occlusion & lab & 3DJ \\ PanopticStudio \cite{Joo_2015_ICCV} & $\approx1.5M$ & $480$ & $\approx100$ & $\approx 120$ & multiple subjects, furniture & lab & 3DJ \\ HUMBI \cite{yu2020humbi} & $\approx 300K$ & $107$ & 772 & 772 & 1 subject, no occlusion & lab & meshes, SMPL \\ 3DPW \cite{vonMarcard2018} & $\approx 51k$ & $1$ & 18 & $60$ & multiple subjects in the wild & natural & SMPL \\ MuPoTS-3D \cite{singleshotmultiperson2018} & $\approx 8k$ & $1$ & 8 & $\approx 50$ & multiple subjects in the wild & natural & 3DJ \\ EFT \cite{joo2020exemplar} & $\approx 120K$ & 1 & \textgreater 1000 & 0 & multiple subjects, in the wild & natural & SMPL \\ STRAPS \cite{STRAPS2020BMVC} & 331 & 1 & 62 & 0 & 1 subject, in the wild & natural & SMPL \\ \hline MPI-INF-3DHP-Train \cite{mehta2017monocular} & $\approx 1.3M$ & $14$ & 8 & 8 & 1 subject, minor occlusion & composite & 3DJ \\ SURREAL \cite{varol2017learning} & $6.5M$ & 1 & 145 & \textgreater{}2000 & 1 subject, no occlusion & composite & SMPL \\ 3DPeople \cite{pumarola20193dpeople} & 2.5M & 4 & 80 & 70 & 1 subject, no occlusion & composite & 3DJ \\ UltraPose\cite{yan2021ultrapose} & $\approx 500k$ & 1 & \textgreater{}1000 & 0 & 1 subject, minor occlusions & composite & DeepDaz, DensePose \\ AGORA\cite{Patel:CVPR:2021} & $\approx 14k$ & 1 & \textgreater{}350 & 0 & multiple subjects, occlusion & realistic & SMPL-X, masks \\ \hline \textbf{HSPACE} & $1M$ & 5 (var) & 100$\times$16 & 100 & multiple subjects, occlusion & realistic & GHUM/L, masks \\ \hline \end{tabular} } \end{center} \vspace{-5mm} \caption{\small Comparison of different human sensing datasets. 
From left to right, the columns represent the dataset name, number of different frames, average number of views for each frame, number of different subjects, number of motions, the complexity of the scenes, whether the images are captured in indoor lab environments or in natural scenes in the wild, or are a composite of synthetic and natural images, as well as the type of ground truth offered, e.g. 3d joints, type of statistical body model (SMPL or GHUM), or 3d surface contact signatures (CS).} \label{tbl:datasets_comparison} \end{table*} \section{Methodology} Our methodology consists of (1) procedures to fit the GHUM body model to a dressed human scan, as well as realistically repose and reshape it (repose and reshape logic), and (2) methods to place multiple moving (dressed) scans, animated using GHUM, into a scene in a way that is physically consistent, so that people do not collide with each other or with the environment (dynamic placement logic). \noindent{\bf Statistical GHUM Body Model.} We rely on GHUM \cite{ghum2020}, a recently introduced statistical body model, in order to represent and animate the human scans in the scene. The shape space $\boldsymbol{\beta}$ of the model is represented by a variational auto-encoder. The pose space $\bm{\theta} = \left( \boldsymbol{\theta}_{b}, \boldsymbol{\theta}_{lh}, \boldsymbol{\theta}_{rh} \right)$ is represented using normalizing flows \cite{zanfir2020weakly} with separate components for global rotation $\mathbf{r} \in \mathbb{R}^{6}$ \cite{zhou2018continuity} and translation $\mathbf{t} \in \mathbb{R}^{3}$. The output of the model is a mesh $\mathbf{M}\left(\bm{\theta}, \bm{\beta}\right) = \left(\mathbf{V}, \mathbf{F}\right)$, where $\mathbf{V} \in \mathbb{R}^{10,168 \times 3}$ are the vertices and $\mathbf{F}$ are the $20,332$ faces. 
\subsection{Fitting GHUM to Clothed Human Scans} The first stage in our pipeline is to fit the GHUM\cite{ghum2020} model to an initial 3d scan of a person $\mathcal{M}_{s} = \left(\mathbf{V}_s, \bm F_s, \mathbf{T}_s\right)$ containing vertices $\mathbf{V}_s \in \mathbb{R}^{N_s \times 3}$, faces $\bm F_s \in \mathbb{N}^{N_{ts} \times 3}$ and texture information $\mathbf{T}_s$ consisting of per-vertex $UV$ coordinates and normal, diffuse and specular maps. The task is to find a set of parameters $\left( \bm{\theta}, \bm{\beta}, \mathbf{r}, \mathbf{t} \right)$ such that the target GHUM\cite{ghum2020} mesh $\mathcal{M}_t\left(\bm{\theta}, \bm{\beta}, \mathbf{r}, \mathbf{t}\right) = \left(\mathbf{V}_t, \bm F_t\right)$ is an accurate representation of the underlying geometry of $\mathcal{M}_s$. For the sake of simplicity, we drop the dependence on the parameters $\mathbf{r}$ and $\mathbf{t}$. As illustrated in fig. \ref{fig:fitting_pipeline}, we uniformly sample camera views around the subject and render it using the texture information associated with $\mathcal{M}_s$. Image keypoints for the body, face, and hands are predicted for each view using a standard regressor \cite{ghum2020,bazarevsky2020blazepose}, and we triangulate to obtain a 3d skeleton $\mathbf{J}_s \in \mathbb{R}^{N_j \times 3}$ for the source mesh. The fitting of the GHUM mesh $\mathcal{M}_t\left(\bm{\theta}, \bm{\beta}\right)$ to $\mathcal{M}_s$ is formulated as a nonlinear optimization problem with the following objective \begin{align} L\left(\bm{\theta}, \bm{\beta}\right) =& \lambda_{j}L_{j}\left(\mathbf{J}_t, \mathbf{J}_s\right) + L_{m}\left(\mathbf{V}_t, \mathbf{V}_s\right) + \nonumber \\ & l\left( \boldsymbol{ \theta } \right) + l\left( \boldsymbol{ \beta } \right). 
\\ \bm{\theta}^{*}, \bm{\beta}^{*} =& \argmin(L\left(\bm{\theta}, \bm{\beta}\right)) \label{eq:fitting} \end{align} In \eqref{eq:fitting}, $\mathbf{J}_t \in \mathbb{R}^{N_j \times 3}$ are the skeleton joints for the posed mesh $\mathcal{M}_t\left(\bm{\theta}, \bm{\beta}\right)$ and $L_{j}\left(\mathbf{J}_t, \mathbf{J}_s\right) = \frac{1}{N_j} \sum_{i=1}^{N_j}\|\mathbf{J}_{s, i} - \mathbf{J}_{t, i}\|_2$ is the mean per-joint 3d position error between the joints of the source and those of the target. $L_{m}\left(\mathbf{V}_t, \mathbf{V}_s\right)$ is an adaptive iterative closest point loss between the target vertices $\mathbf{V}_t$ and the source vertices $\mathbf{V}_s$. At each optimization step we split the vertices $\mathbf{V}_t$ into two disjoint subsets: the vertices $\mathbf{V}_t^i$ that are inside $\mathcal{M}_s$ and the vertices $\mathbf{V}_t^o$ that are outside of $\mathcal{M}_s$. In order to classify a vertex as inside or outside, we rely on a fast implementation of the generalized winding number test \cite{Jacobson:WN:2013, fieraru2021remips}. Given the closest distance $d$ between a point $\mathbf{p}$ and a vertex set $\mathbf{V}$ \begin{equation} c(\mathbf{p},\mathbf{V})=\min_{\mathbf{q} \in \mathbf{V}} d(\mathbf{p},\mathbf{q}) \end{equation} we define $L_m$ as follows \begin{equation} L_m=\lambda_i\sum_{\mathbf{p} \in \mathbf{V}_t^i} c(\mathbf{p},\mathbf{V}_s) + \lambda_o \sum_{\mathbf{p} \in \mathbf{V}_t^o} c(\mathbf{p},\mathbf{V}_s) \label{eq:inside_outside_loss} \end{equation} We set $\lambda_i < \lambda_{o}$, enforcing the reconstructed mesh $\mathcal{M}_t$ to lie inside $\mathcal{M}_s$, but close to the surface. We add regularization for pose and shape based on their native latent space priors $l(\bm{\theta})= \|\bm{\theta}\|_2^2, \;\; l(\bm{\beta})=\|\bm{\beta}\|_2^2$ in order to penalize deviations from the mean of their Gaussian distributions. 
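The structure of the objective above can be sketched schematically in NumPy (our own sketch, not the authors' implementation; the winding-number inside test is replaced by a caller-supplied predicate, and the brute-force nearest-neighbor search stands in for whatever accelerated structure is used in practice):

```python
import numpy as np

# Schematic rendition of Eq. (1): mean per-joint error, the adaptive
# inside/outside closest-point loss L_m, and Gaussian latent priors.
# `is_inside` is an assumed stand-in for the generalized winding-number
# test of Jacobson et al.

def joint_loss(J_t, J_s):
    # L_j: mean per-joint 3d position error
    return np.mean(np.linalg.norm(J_t - J_s, axis=1))

def closest_distances(P, V):
    # c(p, V) = min_q ||p - q|| for each row p of P (brute force)
    d = np.linalg.norm(P[:, None, :] - V[None, :, :], axis=2)
    return d.min(axis=1)

def mesh_loss(V_t, V_s, is_inside, lam_i=1.0, lam_o=10.0):
    # lam_i < lam_o pulls GHUM vertices inside the scan, near its surface
    inside = is_inside(V_t)
    c = closest_distances(V_t, V_s)
    return lam_i * c[inside].sum() + lam_o * c[~inside].sum()

def total_loss(J_t, J_s, V_t, V_s, is_inside, theta, beta, lam_j=1.0):
    # l(theta), l(beta): squared norms of the latent codes
    return (lam_j * joint_loss(J_t, J_s)
            + mesh_loss(V_t, V_s, is_inside)
            + np.sum(theta**2) + np.sum(beta**2))
```

The weight values shown are placeholders; the paper only specifies $\lambda_i < \lambda_o$.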
\subsection{Reposing and Reshaping Clothed People} \label{sec:reposing} \begin{figure}[!htbp] \begin{center} \includegraphics[width=0.9\linewidth]{Figures/local_coordinate.png} \end{center} \vspace{-4mm} \caption{\small \textbf{Reposing and Reshaping Clothed People.} We compute displacements from GHUM to the scanned mesh in a local coordinate system. For each vertex of the scan, we consider its nearest neighbor point on the GHUM mesh. This point is parameterized by barycentric coordinates. When the GHUM mesh is generated for different pose and shape parameters, its local geometry rotates and scales. We want the displacements between the scan and the updated GHUM geometry to be preserved. We use a tangent-space coordinate system, which provides equivariance to rotations. Furthermore, because the tangent space is computed based on triangle surface area, we are also invariant to scale deformations.} \label{fig:local_coordinate} \end{figure} We design an automated process for generating large-scale animations of the same subject's scan, but with different shape characteristics. The animation process needs to be compatible with LBS pipelines, such as Unreal Engine, in order to automate the rigging and rendering process for large-scale data creation. This is a non-physical process in the absence of explicit clothing models, but we aim for automation and scalability rather than perfect simulation fidelity. We aim not only for animation diversity, but also for shape diversification. We support transformations in the tangent space of the local surface geometry that can accommodate changes in both shape and pose; this is different from inverse skinning methods~\cite{huang2020arch}, which only handle the latter. \vspace{3mm} \noindent{\bf Tangent-space representation.} Given a source scan with mesh $\mathbf{M}_s$ and its fitted GHUM mesh $\mathbf{M}_t$, we compute a displacement field $\mathbf{D} \in \mathbb{R}^{N_{s} \times 3}$ from $\mathbf{M}_s$ to $\mathbf{M}_t$. 
For each vertex $\mathbf{v}_{k} \in \mathbf{V}_s$ we compute its closest point $\widetilde{\mathbf{v}}_{k}$ on $\mathbf{M}_t$ and denote by $\mathbf{a}_{k} \in \mathbb{R}^{3}$ its barycentric coordinates on the projection face $\mathbf{f}_{k} \in \bm F_t$. From all values $\mathbf{a}_{k}, k \in 1, \ldots, N_{s}$, we build a sparse connection matrix $\mathbf{A} \in \mathbb{R}^{N_{s} \times N_{t}}$. The displacement field $\mathbf{D}$ from $\mathbf{M}_s$ to $\mathbf{M}_t$ is defined as \begin{align} \mathbf{D} = \mathbf{V}_s - \widetilde{\mathbf{V}}_{s}, \label{eq:displacement} \end{align} where $\widetilde{\mathbf{V}}_{s} \in \mathbb{R}^{N_s \times 3}$ are the stacked closest points $\widetilde{\mathbf{v}}_{k}$, with $\widetilde{\mathbf{V}}_{s} = \mathbf{A} \mathbf{V}_t$. We want each displacement vector $\mathbf{d}_k$ in $\mathbf{D}$ to reside in a local coordinate system determined by the supporting local geometry $\{\mathbf{a}_{k}, \mathbf{f}_{k} \in \bm F_{t}\}$. Hence, we compute associated normal $\mathbf{n}_k$, tangent $\mathbf{t}_k$ and bitangent $\mathbf{b}_k$ vectors. The normals and tangents are interpolated, using $\mathbf{a}_{k}$, from the per-vertex information available for the faces $\mathbf{f}_{k} \in \bm F_{t}$. Per-vertex tangents are a function of the UV coordinates; for details on the use of UV coordinates to obtain tangents, see \cite{premecz2006iterative}. After Gram–Schmidt orthonormalization of the tangents and normals, we derive a rotation matrix $\mathbf{R}_k = [\mathbf{t}_k; \mathbf{n}_k; \mathbf{t}_k \times \mathbf{n}_k] \in \mathbb{R}^{3 \times 3}$ representing a local coordinate system for each displacement vector $\mathbf{d}_k$. We stack the rotation matrices for all displacement vectors and construct $\mathbf{R} \in {\mathbb{R}^{N_s \times 3 \times 3}}$. 
\noindent{\bf Controlling shape and pose.} For a target set of pose and shape parameters $\left(\bm{\theta}^{\prime}, \bm{\beta}^{\prime}\right)$ of GHUM, let $\mathbf{M}_t'\left(\bm{\theta}^{\prime}, \bm{\beta}^{\prime}\right) = \left( \mathbf{V}_t', \bm F_t \right)$ be the new target GHUM posed mesh with vertices $\mathbf{V}_t'$. The task is to find $\mathbf{M}_s'\left(\mathbf{V}_s', \bm F_s\right)$, which corresponds to the same change in pose and shape applied to $\mathbf{M}_s$. For that, we first compute $\widetilde{\mathbf{V}}_{s}^{\prime} = \mathbf{A} \mathbf{V}_t'$. Using $\widetilde{\mathbf{V}}_{s}^{\prime}$ and $\mathbf{M}_t'$, we obtain updated local orientations $\mathbf{R}^{\prime}$ for each $\widetilde{\mathbf{v}}_{k}^{\prime} \in \widetilde{\mathbf{V}}_{s}^{\prime}$ from the normal, tangent and bitangent vectors, analogously to $\mathbf{R}$. Note that $\mathbf{R}^{\prime}\mathbf{R}^{-1}$ gives the change of orientation of the supporting faces $\mathbf{f}_{k} \in \bm F_t$ from $\widetilde{\mathbf{v}}_{k}$ to $\widetilde{\mathbf{v}}_{k}^{\prime}$. We use it to rotate the displacement field $\mathbf{D}$ \begin{align} \mathbf{V}_s' = \widetilde{\mathbf{V}}_{s}^{\prime} + \mathbf{R}^{\prime}\mathbf{R}^{-1}\mathbf{D} \end{align} and obtain the corresponding mesh $\mathbf{M}_s'\left(\mathbf{V}_s', \bm F_s\right)$. \paragraph{Rendering engine compatibility} Rendering engines use linear blend skinning to display real-time realistic animations, so we cannot incorporate tangent-space transformations to drive the animation. Instead, we use the tangent-space transformations to compute a new target rest mesh (this is equivalent to unposing and reshaping), with different body shapes sampled from the latent distribution of the GHUM model, and then continue the animation by LBS. 
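The displacement-transfer step above can be sketched in NumPy (a sketch of ours, not the paper's code; the barycentric matrix is dense here for simplicity, where the paper uses a sparse one, and the frame construction assumes a tangent and normal are already available per vertex):

```python
import numpy as np

# Sketch of the reposing update V_s' = Vtilde_s' + R' R^{-1} D, with
# per-vertex orthonormal frames built by Gram-Schmidt.

def make_frame(t, n):
    # orthonormal local frame [t; n; t x n] after Gram-Schmidt
    t = t - np.dot(t, n) * n
    t = t / np.linalg.norm(t)
    n = n / np.linalg.norm(n)
    return np.stack([t, n, np.cross(t, n)], axis=0)

def repose_scan(A, V_t_new, D, R_old, R_new):
    """A: (Ns, Nt) barycentric weights; V_t_new: (Nt, 3) reposed GHUM
    vertices; D: (Ns, 3) displacements; R_old, R_new: (Ns, 3, 3)
    per-vertex frames before and after reposing."""
    V_tilde = A @ V_t_new
    # R'_k R_k^{-1} d_k for every vertex k; frames are rotations,
    # so R^{-1} = R^T, hence the shared last index j below
    delta = np.einsum('kij,klj,kl->ki', R_new, R_old, D)
    return V_tilde + delta
```

With identical frames the displacements are carried over unchanged; a rotated frame rotates the stored displacement accordingly.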
We compute the skinning weights for $\mathbf{M}_s'$ as $\mathbf{W}_{s}^{\prime} = \mathbf{A} \mathbf{W}_{t}^{\prime}$, where $\mathbf{W}_{t}^{\prime} \in \mathbb{R}^{N_t \times N_j}$ are the skinning weights for $\mathbf{M}_t'$. The skeleton animation posing values, the skinning matrix $\mathbf{W}_{s}^{\prime}$ and the updated rest mesh $\mathbf{M}_s'$ are sufficient for animation export. The limitations of our animation method lie in the hair and clothing simulation, which lacks physical realism. However, this geometric animation process is efficient and easy to compute and, as can be seen in fig. \ref{fig:sample_sequences}, the results are visually plausible within limits. Our quantitative experiments show that such a synthesis methodology improves performance on challenging tasks like 3d pose and shape estimation. \subsection{Scene Placement Logic} \label{sec:scene_placement} In order to introduce multiple animated scans into scenes, we develop a methodology for automatic scene placement based on free space calculations. Typically, we sample several people, their shapes, and their motions, as well as a bounded, square region of the synthetic scene, so that it can be comfortably observed by 4 cameras placed in the corners of the square at different elevations. This is important as some synthetic scenes can be very large, and sampling may generate people spread too far apart or not even visible in any of the virtual cameras. The union of tightly bounding parallelepipeds for each human shape at each timestep of its animation defines a \emph{motion volume}. These are aligned with a global three-dimensional grid. The objective is to estimate a set of positions and planar orientations for the motion volumes, such that no two persons occupy the same unit volume at the same motion timestep (as otherwise trajectories from different people can collide). 
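The occupancy objective can be illustrated with the following sketch (ours, not the paper's implementation; the grid resolution and all names are assumptions):

```python
# Illustrative sketch: discretize each animated bounding box onto a
# global grid and count (cell, timestep) pairs claimed by two people.

def occupied_cells(boxes, cell=1.0):
    """boxes: per-timestep (lo, hi) corners of one person's bounding
    box; returns the set of (t, ix, iy, iz) grid cells it covers."""
    cells = set()
    for t, (lo, hi) in enumerate(boxes):
        ranges = [range(int(lo[d] // cell), int(hi[d] // cell) + 1)
                  for d in range(3)]
        for ix in ranges[0]:
            for iy in ranges[1]:
                for iz in ranges[2]:
                    cells.add((t, ix, iy, iz))
    return cells

def count_collisions(all_people):
    """Number of (cell, timestep) conflicts between any two people;
    a placement is accepted only when this count is zero."""
    n = 0
    for i in range(len(all_people)):
        for j in range(i + 1, len(all_people)):
            n += len(all_people[i] & all_people[j])
    return n
```

In the actual pipeline this count is one term of the loss minimized over per-sequence translations and rotations.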
Given a scene (3d bounding boxes around any objects, including the floor/ground), we sample a set of random motion volumes and initially place them into the scene such that the mid point of the motion paths is in the middle of the scene. We define a loss function which is the sum of {\it a)} the number of collisions between the sequences (defined as their time-varying 3d bounding boxes intersecting each other or intersecting object bounding boxes) and {\it b)} the number of time steps when they are outside the scene bounding box. The input to the loss function is a set of per-sequence translation variables, as well as rotations around the axis of the ground normal. We then minimize this loss function using a derivative-free covariance matrix adaptation (CMA) optimization method \cite{Hansen2006} over the initial translation and rotation of the motion volumes, and only accept solutions where the physical loss is 0 (\ie there are no collisions and all sequences are inside the scene bounding box). While the scene placement model can be improved in a number of ways, including the use of physical models or environmental semantics, it provides an automation baseline for initial synthesis. See fig.~\ref{fig:placement} for an illustration. \begin{figure}[!htbp] \begin{center} \includegraphics[width=\linewidth]{Figures/placement_all_numbered.png} \end{center} \vspace{-4mm} \caption{\small Dynamic placement logic ensures that multiple moving people follow plausible human motions, and are positioned in a scene in a way that is consistent with spatial occupancy from other objects or people. An optimization algorithm ensures no two people occupy the same scene location at the same motion timestep. 
Trajectories are shown in color, with start/end denoted by A/B.} \label{fig:placement} \end{figure} \noindent{\bf Automatic Pipeline.} We designed a pipeline such that, given a query for a specific body scan asset, animation, and scene, we automatically produce a high-quality rendering at a physically plausible location. \subsection{HSPACE dataset} \noindent{\bf Dataset Statistics.} Our proposed HSPACE dataset was created using $100$ unique photogrammetry scans of people from the commercial dataset RenderPeople~\cite{renderpeople}. We reshape the scans using our proposed methodology (see section~\ref{sec:reposing}), with $16$ shape parameters uniformly sampled from GHUM's VAE shape space. For animation, we use $100$ CMU motion capture sequences for which we have corresponding GHUM pose parameters. For background variation, we use $100$ complex, high-quality 3d scenes. These include both indoor and outdoor environments. To create a sequence in our dataset, we randomly sample from all factors of variation and place the animations in the scene using our scene placement method (see section~\ref{sec:scene_placement}). In total, we collect $1,000,000$ unique rendered frames, each containing $5$ subjects on average. An example of a scene with multiple dynamic people is shown in fig.\ref{fig:teaser_figure}. \noindent{\bf Rendering.} HSPACE images and videos are rendered using Unreal Engine 5 at 4k resolution. The rendering uses ray-tracing, high-resolution light mapping, screen-space ambient occlusion, per-category shader models (e.g. Burley subsurface scattering for human skin), temporal anti-aliasing, and motion blur. For each frame we capture the ground-truth 3d pose of the various people inserted in the scene and save render passes for the final rendered RGB output, as well as segmentation masks. On average, our system renders at 1 frame/s, including saving data to disk.
Our entire dataset was rendered on 10 virtual machines with GPU support running in the cloud. \begin{figure*}[!htbp] \begin{center} \includegraphics[width=0.78\linewidth]{Figures/appearance_shape.png} \end{center} \vspace{-4mm} \caption{\small Three scans with different appearance and body mass index, synthesised using GHUM statistical shape parameters, based on a single scan of each subject. Notice plausible body shape variations and reasonable automatic clothing deformation as body mass varies.} \label{fig:appearance_shape} \end{figure*} \begin{figure*}[!htbp] \begin{center} \includegraphics[width=0.78\linewidth]{Figures/processing_pipeline.png} \end{center} \vspace{-4mm} \caption{\small Main processing pipeline for our synthetic human animations. Given a single 3d scan of a dressed person, we automatically fit GHUM to the scan, and build a representation that supports the plausible animation of both the body and the clothing based on different 3d human motion capture signals. Shape can be varied too -- notice also the plausible positioning for the fringes of the long blouse outfit.} \label{fig:fitting_pipeline} \end{figure*} \begin{figure*}[!htbp] \begin{center} \includegraphics[width=0.94\linewidth]{Figures/animation_sequence_00.jpeg} \includegraphics[width=0.94\linewidth]{Figures/animation_sequence_01.jpeg} \end{center} \vspace{-4mm} \caption{\small Frames from HSPACE sequences with companion GHUM ground truth. Highly dynamic motions work best with characters wearing tight-fitting clothing: the animated sequences look natural and smooth (bottom rows), but notice also good performance for less tight clothing (top rows). See our Sup. Mat. for videos.} \label{fig:sample_sequences} \end{figure*} \begin{figure*}[!htbp] \begin{center} \includegraphics[width=0.96\linewidth]{Figures/sample_render_passes2.jpeg} \end{center} \vspace{-4mm} \caption{\small Human scans animated and placed in 3d scenes with complex lighting and background motion (e.g.
curtains, vegetation).} \label{fig:sample_render_passes} \end{figure*} \section{Experiments} We validate the utility of HSPACE for both training and evaluation of 3d human pose and shape reconstruction models. We split HSPACE 80/20$\%$ into training and testing subsets, respectively. We use different people and animation assets for each split. We additionally employ a dataset with images in-the-wild, \textbf{Human Internet Images (HITI)} (100,000 images), of more than 20,000 different people performing activities with highly complex poses (e.g. yoga, sports, dancing). This dataset was collected in-house and is annotated with both 2d keypoints and body part segmentation. We use it in our experiments for training in a weakly supervised regime. The test version of this dataset, \textbf{Human Internet Images (HITI-TEST)}, consists of 40,000 images with fitted GHUM parameters under multiple depth-ordering constraints that we can use as pseudo ground truth for evaluation in-the-wild (see our Sup. Mat. for details). \noindent{\bf Evaluation of GHUM Fitting to Human Scans.} In order to evaluate our GHUM fitting procedure, we compute errors of the nonlinear optimization fit in \eqref{eq:fitting} with keypoints only ($L_{j}$), as well as for the full optimization ($L_{j} + L_{m}$), which also includes the mesh term $L_{m}$. Results are given in table \ref{tbl:ghum_fitting_errrors}. \begin{table}[!htbp] \small \centering \begin{tabular}[t]{|l||r|r|} \hline \textbf{Fitting Method} & V2V & Chamfer \\ \hline $L_{j}$ & $10$ & $13$ \\ \hline $L_{j} + L_{m}$ & $\textbf{8}$ & $\textbf{11}$ \\ \hline \end{tabular} \caption{\small Fitting evaluation with vertex-to-vertex errors and bidirectional Chamfer distance. Values are reported in mm. Please see fig. \ref{fig:sample_sequences} and our Sup. Mat. for qualitative visualizations.} \label{tbl:ghum_fitting_errrors} \end{table} In all experiments we train models for 3d human pose and shape estimation based on the THUNDR architecture~\cite{Zanfir_2021_ICCV}.
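For reference, the two-term objective evaluated in the table can be sketched as a keypoint term $L_j$ plus a mesh term $L_m$. The function below is an illustrative stand-in (squared joint distances for $L_j$ and a symmetric Chamfer term for $L_m$), not the paper's exact formulation from \eqref{eq:fitting}:

```python
import numpy as np

def fitting_loss(pred_joints, scan_keypoints, pred_vertices, scan_vertices,
                 w_m=1.0):
    """Illustrative two-term scan-fitting objective (assumed form).

    L_j: mean squared distance between model joints and scan keypoints.
    L_m: symmetric (bidirectional) Chamfer term between model vertices
         and scan vertices, standing in for the mesh term.
    """
    L_j = np.mean(np.sum((pred_joints - scan_keypoints) ** 2, axis=-1))
    # pairwise distances, then nearest-neighbour means in both directions
    d = np.linalg.norm(pred_vertices[:, None, :] - scan_vertices[None, :, :],
                       axis=-1)
    L_m = d.min(axis=1).mean() + d.min(axis=0).mean()
    return L_j + w_m * L_m
```

With identical joint and vertex inputs the loss is zero; the weight `w_m` trades off keypoint accuracy against surface fit.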
We report standard 3d reconstruction errors used in the literature: mean per joint position errors with and without Procrustes alignment (MPJPE, MPJPE-PA) for the 3d pose, mean per vertex errors with and without Procrustes alignment (MPVPE, MPVPE-PA) for the 3d shape, as well as global translation errors. We present the experimental results on the test set of \textbf{HSPACE} in table \ref{tbl:ablation_space}. First, we report results for several state-of-the-art 3d pose and shape estimation models such as HUND\cite{zanfir2020neural}, THUNDR\cite{Zanfir_2021_ICCV}, SPIN\cite{kolotouros2019learning} and VIBE\cite{kocabas2019vibe}. The first two methods estimate GHUM mesh parameters, while the last two output SMPL mesh parameters. Both SPIN\cite{kolotouros2019learning} and VIBE\cite{kocabas2019vibe} use orthographic projection camera models, so we cannot report translation errors for them. We train a weakly supervised (WS) version of THUNDR on the HITI training dataset and fine-tune it on HSPACE in a fully supervised (FS) regime. This model performs better than all the other state-of-the-art methods. The best reconstruction results are obtained by a modified temporal version of THUNDR (labeled T-THUNDR in table \ref{tbl:ablation_space}) with the same number of parameters as the single-frame version. We provide details of this architecture in the Sup. Mat. We also train and evaluate on a widely used dataset in the literature, \textbf{Human3.6M}~\cite{Ionescu14pami}. This is an indoor benchmark with ground-truth 3d joints obtained from a motion capture system. We report results on protocol P1 (100,000 images), where subjects S1, S5-S8 are used for training, and subjects S9 and S11 for testing. In table~\ref{tbl:H36MP1} we show that a variant of the THUNDR~\cite{Zanfir_2021_ICCV} architecture refined on HSPACE training data achieves the lowest reconstruction errors under all metrics.
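The Procrustes-aligned metrics can be computed with a standard similarity alignment (Kabsch/Umeyama). The following is a generic sketch of MPJPE and MPJPE-PA, not the authors' evaluation code:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error (no alignment); pred, gt: (J, 3)."""
    return np.mean(np.linalg.norm(pred - gt, axis=-1))

def mpjpe_pa(pred, gt):
    """MPJPE after similarity (Procrustes) alignment of pred onto gt."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    Xp, Xg = pred - mu_p, gt - mu_g
    # optimal rotation via SVD of the cross-covariance (Kabsch/Umeyama)
    U, S, Vt = np.linalg.svd(Xp.T @ Xg)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # avoid reflections
        Vt[-1] *= -1
        S = S.copy()
        S[-1] *= -1
        R = Vt.T @ U.T
    scale = S.sum() / (Xp ** 2).sum()
    aligned = scale * Xp @ R.T + mu_g
    return mpjpe(aligned, gt)
```

By construction, MPJPE-PA discards global rotation, scale, and translation, isolating articulated pose error; the translation metric (MPJPE-T) is what retains the global placement error.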
We also performed a comprehensive study in order to understand the impact of increasing the amount of synthetic data on model performance, together with other important factors: the sim-to-real gap, the importance of real data, and the influence of model capacity. One of the most practical approaches would be to use large amounts of supervised synthetic data together with potentially large amounts of real images without supervision. The question is whether this combination helps, and how the different factors (synthetic data, real data, model capacity, initialisation, and curriculum ordering) affect performance. We trained a battery of models with different fractions of weakly supervised real data (10\%, 30\%, or 100\% of HITI-TRAIN), fully supervised synthetic data (0\%, 10\%, 30\%, 60\%, 100\% of HSPACE-TRAIN), and two model sizes (a small THUNDR model with a transformer component of 1.9M parameters, and a big THUNDR model with a transformer component of 3.8M parameters). All models were evaluated on HSPACE-TEST (first and second columns in figure \ref{fig:thundr_ws_fs_ablations}) as well as on HITI-TEST for complex real images. Results are presented in fig. \ref{fig:thundr_ws_fs_ablations}. Empirically, we found that models trained on synthetic data alone do not perform best, not even when tested on synthetic data. Moreover, pre-training with real data and refining on synthetic data produces better results than the reverse order. Large volumes of synthetic data improve model performance in conjunction with increasing amounts of weakly annotated real data. This is important because it is a practical setting, and the symbiosis of synthetic and real data during training appears to address the sim-to-real gap. An increase in model capacity, however, seems necessary in order to take advantage of larger datasets.
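The ordering that worked best, weakly supervised (WS) pretraining on real images followed by fully supervised (FS) refinement on synthetic data, can be written down as a simple two-stage schedule. The helper below is purely illustrative; the stage descriptors and dataset names mirror the experiment grid above:

```python
def make_schedule(real_frac, synth_frac, ws_epochs=10, fs_epochs=10):
    """Two-stage curriculum: WS on real data first, FS on synthetic second.

    real_frac / synth_frac: fractions of HITI-TRAIN / HSPACE-TRAIN used,
    mirroring the ablation grid (e.g. 0.1, 0.3, 1.0). Epoch counts are
    placeholders, not values from the paper.
    """
    stages = []
    if real_frac > 0:
        stages.append({"stage": "WS", "data": "HITI-TRAIN",
                       "fraction": real_frac,
                       "labels": "2d keypoints + segmentation",
                       "epochs": ws_epochs})
    if synth_frac > 0:
        stages.append({"stage": "FS", "data": "HSPACE-TRAIN",
                       "fraction": synth_frac,
                       "labels": "full 3d GHUM supervision",
                       "epochs": fs_epochs})
    return stages
```

The point of the sketch is the fixed stage order: swapping the two entries corresponds to the "reverse order" variant that performed worse in our study.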
\begin{table}[!htbp] \small \centering \begin{tabular}[t]{|l||r|r|r|} \hline \textbf{Method} & {MPJPE-PA} & {MPJPE} & {MPJPE-T} \\ \hline \hline HMR \cite{Kanazawa2018} & $58.1$ & $88.0$ & NR \\ \hline HUND \cite{zanfir2020neural} & $53.0$ & $72.0$ & $160.0$ \\ \hline THUNDR \cite{Zanfir_2021_ICCV} & $39.8$ & $55.0$ & $143.9$ \\ \hline \hline THUNDR (HSPACE) & $\mathbf{39.0}$ & $\mathbf{53.3}$ & $\mathbf{132.5}$ \\ \hline \end{tabular} \caption{\small Results obtained by refining THUNDR \cite{Zanfir_2021_ICCV} on the HSPACE training set and evaluating on Human3.6M under the training/testing assumptions of protocol P1 (100K testing samples). Refining on HSPACE improves over the previous SOTA under MPJPE-PA, MPJPE, and translation errors (MPJPE-T).} \label{tbl:H36MP1} \end{table} \begin{figure*}[!htbp] \begin{center} \includegraphics[width=0.245\linewidth ]{Figures/Plots/mpjpe_pa_space_test.png} \includegraphics[width=0.245\linewidth ]{Figures/Plots/mpjpe_space_test.png} \includegraphics[width=0.245\linewidth ]{Figures/Plots/mpjpe_pa_hiti_test.png} \includegraphics[width=0.245\linewidth ]{Figures/Plots/mpjpe_hiti_test.png} \end{center} \caption{\small Performance on the HSPACE-TEST set (plots in the first and second columns) and the HITI-TEST set (plots in the third and fourth columns) for THUNDR (WS+FS) models with different capacities (SMALL for a THUNDR model with a transformer component of 1.9M parameters and BIG for a THUNDR model with a transformer component of 3.8M parameters; see supplementary material for more details) trained with various percentages of HITI (real) and HSPACE (synthetic) data. The THUNDR models were first trained in a weakly supervised (WS) regime on the percentage of HITI data indicated in the legend and then refined in a fully supervised (FS) regime on different amounts of HSPACE data. We report MPJPE-PA and MPJPE metrics.
We observe performance improvements when adding greater amounts of both synthetic and real data, as well as when increasing the model capacity.} \label{fig:thundr_ws_fs_ablations} \end{figure*} \begin{table*}[htbp] \small \centering \begin{tabular}[t]{|l||r|r|r|r|r|r|r|r|} \hline \textbf{Method} & {MPJPE-PA} & {MPJPE} & {MPVPE-PA} & {MPVPE} & {MPJPE-T} & R\#{2D} & R\#{2D-3D} & S\#{3D} \\ \hline SPIN \cite{kolotouros2019learning} & $79$ & $125$ & N/A & N/A & N/A & 111K & 300K & $0$\\ \hline VIBE \cite{kocabas2019vibe} & $120$ & $260$ & N/A & N/A & N/A & 150K & 250K & $0$ \\ \hline HUND \cite{zanfir2020neural} & $84$ & $130$ & $96$ & $150$ & $280$ & 80K & 150K & $0$ \\ \hline THUNDR \cite{Zanfir_2021_ICCV} & $65$ & $100$ & $80$ & $120$ & $230$ & 80K & 150K & $0$ \\ \hline \hline THUNDR (HITI + HSPACE) & $\mathbf{50}$ & $\textbf{76}$ & $\textbf{60}$ & $\textbf{90}$ & \textbf{180} & 100K & $0$ & $800K$ \\ \hline T-THUNDR (HITI + HSPACE) & $\mathbf{47}$ & $\textbf{71}$ & $\textbf{58}$ & $\textbf{81}$ & \textbf{171} & 100K & $0$ & $800K$ \\ \hline \end{tabular} \caption{\small Results on the \textbf{HSPACE} test set. None of the current state-of-the-art methods performs well on the HSPACE test set; however, performance improves significantly when training on HSPACE. We report mean per joint position errors with and without Procrustes alignment (MPJPE-PA, MPJPE) and mean per vertex position errors with and without Procrustes alignment (MPVPE-PA, MPVPE), computed against ground-truth GHUM meshes, as well as translation error (MPJPE-T) computed against the pelvis joint. We also report the number of real images and the type of annotations used during the training of the listed models: the number of real images with 2d annotations (R\#2D), the number of real images with paired 2d-3d annotations (R\#2D-3D), and the number of synthetic images with full 3d supervision (S\#3D). See our Sup. Mat.
for additional detail and for qualitative visualisations of 3d human pose and shape reconstruction.} \label{tbl:ablation_space} \end{table*} \noindent{\bf Ethical Considerations.} Our dataset creation methodology aims at diversity and coverage, in order to build synthetic ground truth for different human body proportions, poses, motions, ethnicities, ages, and clothing. By generating people in new synthetic poses, and by controlling different body proportions in various scenes, we can produce considerable diversity while relying largely on synthetic assets and on varying the parameters of a statistical human pose and shape model (GHUM). This supports, in turn, our long-term goal of building inclusive models that work well for everyone, especially in cases where real human data, as well as forms of 3d ground truth, are difficult to collect. \section{Conclusions} We have introduced HSPACE, a large-scale dataset of humans animated in complex synthetic indoor and outdoor environments. We combine diverse individuals of varying age, gender, proportions, and ethnicity with many motions and scenes, parametric variations in body shape, and gestures, in order to generate an initial dataset of over 1 million frames. Human animations are obtained by fitting an expressive human body model, GHUM, to single scans of people, followed by re-targeting and re-posing procedures that support realistic animation, statistical variation of body proportions, and jointly consistent scene placement for multiple moving people. All assets are generated automatically and are compatible with existing real-time rendering engines. The dataset and an evaluation server will be made available for research.
Our quantitative evaluation of 3d human pose and shape estimation in synthetic and mixed (sim-real) regimes underlines (1) the importance of large-scale synthetic datasets, (2) the need for real data within weakly supervised training regimes, and (3) the need to increase (match) model capacity for domain transfer and for continued performance improvement as datasets grow. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} The Milky Way is an excellent testing ground for our understanding of galaxy evolution, due to the ability to resolve individual stars and study stellar populations in greater detail than in other galaxies. However, understanding the composition, structure, and origin of the Milky Way disk remains one of the outstanding questions facing astronomy, and there is great debate about this topic (e.g., \citealt{Rix2013}). The ability to resolve individual stars allows one to trace the fossil record of the Milky Way across the disk, as the stars contain the chemical footprint of the gas from which they formed. Observations of stars in the Milky Way have led to the discovery of several chemical and kinematic properties of the disk of the Galaxy, such as the thick disk (e.g., \citealt{Yoshii1982,Gilmore1983}), chemical abundance gradients (e.g., \citealt{Hartkopf1982,Cheng2012b,Anders2014,Hayden2014,Schlesinger2014}), and the G-dwarf problem \citep{VandenBergh1962,Pagel1975} from the study of the metallicity distribution function (MDF) of the solar neighborhood \citep{VandenBergh1962,Casagrande2011,Lee2011,Schlesinger2012}. Much of the previous work on the Milky Way disk has focused on the solar neighborhood, or on tracer populations (e.g., Cepheid variables, HII regions) that span a narrow range in age and number only a few hundred objects even in the most thorough studies. Large-scale surveys such as SEGUE \citep{Yanny2009}, RAVE \citep{Steinmetz2006}, APOGEE \citep{Majewski2015}, GAIA-ESO \citep{Gilmore2012}, and HERMES-GALAH \citep{Freeman2012} aim to expand observations across the Milky Way and will greatly increase the spatial coverage of the Galaxy with large numbers of stars. In this paper we use observations from the twelfth data release of SDSS-III/APOGEE \citep{Alam2015} to measure the distribution of stars in the [$\alpha$/Fe] vs.
[Fe/H] plane and the metallicity distribution function across the Milky Way galaxy with large numbers of stars over the whole radial range of the disk. [$\alpha$/H] denotes the combined abundance of the $\alpha$ elements O, Mg, Si, S, Ca, and Ti, which are assumed to vary together in solar proportions. Using standard chemical abundance bracket notation, [$\alpha$/Fe] is [$\alpha$/H]$-$[Fe/H]. Different stellar populations can be identified in chemical abundance space, with the $\alpha$ abundance of stars separating differing populations. The distribution of stars in the [$\alpha$/Fe] vs. [Fe/H] plane shows two distinct stellar populations in the solar neighborhood (e.g., \citealt{Fuhrmann1998,Prochaska2000,Reddy2006,Adibekyan2012,Haywood2013,Anders2014,Bensby2014,Nidever2014,Snaith2014a}), with one track having roughly solar-[$\alpha$/Fe] ratios across a large range of metallicities, and the other track having a high-[$\alpha$/Fe] ratio at low metallicity that is constant with [Fe/H] until [Fe/H] $\sim-0.5$, at which point there is a knee and the [$\alpha$/Fe] ratio decreases at a constant rate as a function of [Fe/H], eventually merging with the solar-[$\alpha$/Fe] track at [Fe/H] $\sim0.2$ dex. The knee in the high-[$\alpha$/Fe] sequence is likely caused by the delay time for the onset of Type Ia SNe (SNeIa): prior to formation of the knee, core collapse supernovae (SNII) are the primary source of metals in the ISM, while after the knee SNeIa begin to contribute metals, enriching the ISM primarily in iron peak elements and lowering the [$\alpha$/Fe] ratio. Stars on the [$\alpha$/Fe]-enhanced track have much larger vertical scale-heights than solar-[$\alpha$/Fe] stars (e.g., \citealt{Lee2011,Bovy2012,Bovy2012b,Bovy2012a}) and make up the stellar populations belonging to the thick disk. \citet{Nidever2014} used the APOGEE Red Clump Catalog \citep{Bovy2014} to analyze the stellar distribution in the [$\alpha$/Fe] vs.
[Fe/H] plane across the Galactic disk, and found that the high-[$\alpha$/Fe] (thick disk) sequence was similar over the radial range covered in their analysis ($5<R<11$ kpc). The thick disk stellar populations are in general observed to be more metal-poor and $\alpha$-enhanced, and to have shorter radial scale-lengths, larger vertical scale-heights, and hotter kinematics than most stars in the solar neighborhood (e.g., \citealt{Bensby2003,AllendePrieto2006,Bensby2011,Bovy2012a,Cheng2012a,Anders2014}), although there do exist thick disk stars with solar-[$\alpha$/Fe] abundances and super-solar metallicities \citep{Bensby2003,Bensby2005,Adibekyan2011,Bensby2014,Nidever2014,Snaith2014a}. However, the exact structure of the disk is still unknown, and it is unclear whether the disk is the superposition of multiple components (i.e., a thick and thin disk), or if the disk is a continuous sequence of stellar populations (e.g., \citealt{Ivezic2008,Bovy2012,Bovy2012b,Bovy2012a}), or if the structure varies with location in the Galaxy. Meanwhile, \citet{Nidever2014} found that the position of the locus of low-[$\alpha$/Fe] (thin disk) stars depends on location within the Galaxy (see also \citealt{Edvardsson1993}), and it is possible that in the inner Galaxy the high- and low-[$\alpha$/Fe] populations are connected, rather than distinct. Most previous observations were confined to the solar neighborhood, and use height above the plane or kinematics to separate thick and thin disk populations. However, kinematical selections often misidentify stars \citep{Bensby2014}, and can remove intermediate or transitional populations, which may bias results \citep{Bovy2012a}. Observations of the metallicity distribution function (MDF) at different locations in the Galaxy can provide information about the evolutionary history across the disk.
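The bracket notation defined earlier reduces to simple logarithmic arithmetic. The following minimal Python sketch makes it concrete; the number densities and function names are purely illustrative, not APOGEE measurements:

```python
import math

def bracket(n_x, n_h, n_x_sun, n_h_sun):
    """[X/H]: log10 of the number-density ratio X/H, relative to the Sun."""
    return math.log10(n_x / n_h) - math.log10(n_x_sun / n_h_sun)

def alpha_fe(alpha_h, fe_h):
    """[alpha/Fe] = [alpha/H] - [Fe/H], as defined in the text."""
    return alpha_h - fe_h

# A star with the solar mix of alpha elements and iron sits at the origin:
print(alpha_fe(bracket(1.0, 1.0, 1.0, 1.0), bracket(1.0, 1.0, 1.0, 1.0)))
# An iron-poor star with [alpha/H] = -0.1 and [Fe/H] = -0.5 is alpha-enhanced:
print(round(alpha_fe(-0.1, -0.5), 2))
```

In this convention a high-[$\alpha$/Fe] (thick disk) star is one whose $\alpha$-to-iron ratio exceeds the solar mix, regardless of its overall metallicity.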
The MDF has generally only been well characterized in the solar neighborhood (e.g., \citealt{VandenBergh1962,Nordstrom2004,Ak2007,Casagrande2011,Siebert2011,Schlesinger2012}) and in the Galactic bulge (\citealt{Zoccali2008,Gonzalez2013,Ness2013}). The first observations of the MDF outside of the solar neighborhood were made using APOGEE observations, and found differences in the MDF as a function of Galactocentric radius \citep{Anders2014}. Metallicity distribution functions have long been used to constrain models of chemical evolution. Early chemical evolution models (e.g., \citealt{Schmidt1963,Pagel1975}) that attempted to explain the observed metal distribution in local G-dwarfs were simple, closed-box systems (no gas inflow or outflow) that overpredicted the number of metal-poor stars relative to observations. This result is commonly known as the ``G-dwarf problem'' (e.g., \citealt{Pagel1975,Rocha-Pinto1996,Schlesinger2012}). Solutions to the G-dwarf problem include gas inflow and outflow (e.g., \citealt{Pagel1997}); observations of the MDF led to the realization that gas dynamics play an important role in the chemical evolution of galaxies. However, it is not clear if the G-dwarf problem exists at all locations in the Galaxy, as there have been limited observations outside of the solar circle. Simulations and models of the chemical and kinematical evolution of the Milky Way have become increasingly sophisticated (e.g., \citealt{Hou2000,Chiappini2001,Schonrich2009a,Kubryk2013,Minchev2013}) and attempt to explain both the chemical and dynamical history of the Galaxy. Recent chemical evolution models (e.g., \citealt{Hou2000,Chiappini2001}) treat the chemistry of gas and stars in multiple elements across the entire disk, rather than just the solar neighborhood.
Several simulations and models find that ``inside-out'' (e.g., \citealt{Larson1976,Kobayashi2011}) and ``upside-down'' (e.g., \citealt{Bournaud2009,Bird2013}) formation of the Galactic disk reproduces observed trends in the Galaxy such as the radial gradient and the lower vertical scale heights of progressively younger populations. Centrally concentrated hot old disks, as seen in cosmological simulations, would result in a decrease in scale-height with radius, which is not observed. In order to explain the presence of stars at high altitudes in the outer disk, in an inside-out formation scenario, \citet{Minchev2015} suggested that disk flaring of mono-age populations is responsible. Such a view also explains the inversion of metallicity and [$\alpha$/Fe] gradients when the vertical distance from the disk midplane is increased (e.g., \citealt{Boeche2013,Anders2014,Hayden2014}). Alternatively, the larger scale heights of older populations could be a consequence of satellite mergers. The radial mixing of gas and stars from their original birth radii has also been proposed as an important process in the evolution of the Milky Way disk (e.g., \citealt{Wielen1996,Sellwood2002,Roskar2008,Schonrich2009a,Loebman2011,Solway2012,Halle2015}). Radial mixing occurs through blurring, in which stars have increasingly eccentric orbits and therefore variable orbital radii, and churning, where stars experience a change in angular momentum and migrate to new locations while maintaining a circular orbit. However, there is much debate on the relative strength of mixing processes throughout the disk. Recent observations and modeling of the solar neighborhood have suggested that the local chemical structure of the disk can be explained by blurring alone \citep{Snaith2014a}, and that churning is not required, but see \citet{Minchev2014a}.
To disentangle these multiple processes and characterize the history of the Milky Way disk, it is crucial to map the distribution of elements throughout the disk, beyond the solar neighborhood. This is one of the primary goals of the SDSS-III/APOGEE survey, which observed 146,000 stars across the Milky Way during three years of operation. APOGEE is a high-resolution (R$\sim$22,500) spectrograph operating in the $H$-band, where extinction is 1/6 that of the $V$ band. This allows observations of stars lying directly in the plane of the Galaxy, giving an unprecedented coverage of the Milky Way disk. The main survey goals were to obtain a uniform sample of giant stars across the disk with high resolution spectroscopy to study the chemical and kinematical structure of the Galaxy, in particular the inner Galaxy where optical surveys cannot observe efficiently due to high extinction. The APOGEE survey provides an RV precision of $\sim100$ m s$^{-1}$ \citep{Nidever2015}, and chemical abundances to within 0.1--0.2 dex for 15 different chemical elements \citep{GarciaPerez2015}, in addition to excellent spatial coverage of the Milky Way from the bulge to the edge of the disk. In this paper we present results from the Twelfth Data Release (DR12; \citealt{Alam2015}) of SDSS-III/APOGEE on the distribution of stars in the [$\alpha$/Fe] vs. [Fe/H] plane and their metallicity distribution functions, across the Milky Way and at a range of heights above the plane. In Section 2 we discuss the APOGEE observations, data processing, and sample selection criteria. In Section 3 we present our observed results for the distribution of stars in the [$\alpha$/Fe] vs. [Fe/H] plane and metallicity distribution functions. In Section 4 we discuss our findings in the context of chemical evolution models. In the Appendix we discuss corrections for biases due to survey targeting, sample selection, population effects, and errors in the [$\alpha$/Fe] determination.
\section{Data and Sample Selection} Data are taken from DR12, which contains stellar spectra and derived stellar parameters for stars observed during the three years of APOGEE. APOGEE is one of the main SDSS-III surveys \citep{Eisenstein2011}, which uses the SDSS 2.5m telescope \citep{Gunn2006} to obtain spectra for hundreds of stars per exposure. These stars cover a wide spatial extent of the Galaxy, and span the magnitude range $8<H<13.8$ for primary science targets. Target selection is described in detail in the APOGEE targeting paper \citep{Zasowski2013} and the APOGEE DR10 paper \citep{Ahn2014}. Extinction and dereddening for each individual star are determined using the Rayleigh-Jeans Color Excess method (RJCE, \citealt{Majewski2011}), which uses 2MASS photometry \citep{Skrutskie2006} in conjunction with near-IR photometry from the Spitzer/IRAC \citep{Fazio2004} GLIMPSE surveys \citep{Benjamin2003,Churchwell2009} where available, or from WISE \citep{Wright2010}. In-depth discussion of the observing and reduction procedures can be found in \citet{Hayden2014}, the DR10 paper \citep{Ahn2014}, the APOGEE reduction pipeline paper \citep{Nidever2015}, the DR12 calibration paper \citep{Holtzman2015}, the APOGEE linelist paper \citep{Zamora2015}, and the APOGEE Stellar Parameters and Chemical Abundances Pipeline (ASPCAP, \citealt{GarciaPerez2015}) paper. For this paper, we select cool (T$_\textrm{eff}<5500$ K) main survey (e.g., no ancillary program or \textit{Kepler} field) giant stars ($1.0<\log{g}<3.8$) with S/N$>80$. Additionally, stars flagged as ``Bad'' due to being near the spectral library grid edge(s) or having poor spectral fits are removed. The cuts applied to the H-R diagram for DR12 are shown in Figure \ref{survey}. ASPCAP currently has a cutoff temperature of $3500$ K on the cool side of the spectral grid (see \citealt{GarciaPerez2015,Zamora2015}), which could potentially bias our results against metal-rich stars.
We correct for this metallicity bias by imposing the lower limit on surface gravity of $\log{\textrm{g}}>1.0$, as this mitigates much of the potential bias due to the temperature grid edge in the observed metallicities across the disk; for a detailed discussion see the Appendix. Applying these restrictions to the DR12 catalog, we have a main sample of 69,919 giants within 2 kpc of the midplane of the Milky Way. We restrict our study to the disk of the Milky Way ($R>3$ kpc). For a detailed discussion of the MDF of the Galactic bulge with APOGEE observations see \citet{GarciaPerez2015a}. [Fe/H] has been calibrated to literature values for a large set of reference stars and clusters, and all other spectroscopic parameters are calibrated using clusters to remove trends with temperature, as described in \citet{Holtzman2015}. These corrections are similar to those applied to the DR10 sample by \citet{Meszaros2013}, as there are slight systematic offsets observed in the ASPCAP parameters compared to reference values, and in some cases trends with other parameters (e.g., abundance trends with effective temperature). The accuracy of the abundances has improved from DR10, in particular for [$\alpha$/Fe], as self-consistent model atmospheres rather than scaled solar atmospheres were used in the latest data release (see \citealt{Zamora2015}), improving the accuracy of many parameters. The typical uncertainties in the spectroscopic parameters from \citet{Holtzman2015} are: 0.11 dex in $\log{g}$, 92 K in T$_{\textrm{eff}}$, and 0.05 dex in [Fe/H] and [$\alpha$/Fe]. Giants are not perfectly representative of underlying stellar populations, as they are evolved stars. There is a bias against the oldest populations when using giants as a tracer population, with the relative population sampling being $\propto\tau^{-0.6}$, where $\tau$ is age (e.g., \citealt{Girardi2001}).
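The sample cuts described above amount to a handful of vectorized masks. The sketch below is illustrative only: the column names (`teff`, `logg`, `snr`, `bad_flag`, `R`) are hypothetical stand-ins, not the actual DR12 catalog fields, and the rows are invented so that each cut removes exactly one star.

```python
import numpy as np

# Hypothetical mini-catalog; one row is built to fail each cut.
catalog = np.array(
    [(4800.0, 2.5, 120.0, False, 8.0),   # passes every cut
     (5600.0, 2.5, 120.0, False, 8.0),   # too warm (Teff >= 5500 K)
     (4800.0, 0.5, 120.0, False, 8.0),   # log g too low (grid-edge bias)
     (4800.0, 2.5, 50.0, False, 8.0),    # S/N too low
     (4800.0, 2.5, 120.0, True, 8.0),    # flagged as "Bad"
     (4800.0, 2.5, 120.0, False, 2.0)],  # bulge region, R <= 3 kpc
    dtype=[("teff", "f8"), ("logg", "f8"), ("snr", "f8"),
           ("bad_flag", "?"), ("R", "f8")])

# Cuts from the text: cool giants with good spectra, restricted to the disk.
keep = ((catalog["teff"] < 5500.0)
        & (catalog["logg"] > 1.0) & (catalog["logg"] < 3.8)
        & (catalog["snr"] > 80.0)
        & ~catalog["bad_flag"]
        & (catalog["R"] > 3.0))

sample = catalog[keep]
print(len(sample))  # only the first row survives all cuts
```

Applied to the real catalog, the analogous masks yield the 69,919-star disk sample used throughout the paper.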
It is difficult to correct for this population sampling effect, as it depends on the detailed star formation histories at different locations throughout the Galaxy, and likely requires detailed population synthesis models. Because of this, we do not correct for the non-uniform age sampling of giants and our MDFs are slightly biased against the oldest (and potentially more metal-poor) stars of the disk. For additional discussion on population sampling, see the Appendix. \begin{figure}[t!] \centering \includegraphics[width=3.3in]{hrpaper.png} \caption{The spectroscopic H-R diagram for the full calibrated APOGEE sample, where the mean metallicity in each $\log{g}, T_{\textrm{eff}}$ bin is shown. The gray box denotes the selected sample of 69,919 stars presented in this paper.\label{survey}} \end{figure} \subsection{Distances} Distances for each star are determined from the derived stellar parameters and PARSEC isochrones from the Padova-Trieste group \citep{Bressan2012} based on Bayesian statistics, following methods described by \citet{Burnett2010}, \citet{Burnett2011}, and \citet{Binney2014}; see also \citet{Santiago2015}. The isochrones range in metallicity from $-2.5<\textrm{[Fe/H]}<+0.6$, with a spacing of 0.1 dex, and ages ($\tau$) ranging from 100 Myr to 20 Gyr with spacing of 0.05 dex in $\log{\tau}$. We calculate the probability of all possible distances using the extinction-corrected magnitude (from RJCE, as referenced above), the stellar parameters from ASPCAP, and the PARSEC isochrones using Bayes' theorem: \begin{equation}\nonumber P(\mathrm{model}|\mathrm{data}) = \frac{P(\mathrm{data}|\mathrm{model})P(\mathrm{model})}{P(\mathrm{data})} \end{equation} \noindent where model refers to the isochrone parameters (T$_{\textrm{eff}}$, $\log{g}$, $\tau$, [Fe/H], etc.) and physical location (in our case the distance modulus). Data refers to the observed spectroscopic and photometric parameters for the star.
For our purposes, we are interested in the distance modulus ($\mu$) only, so $P(\mathrm{model}|\mathrm{data})$ is: \begin{equation}\nonumber P(\mathrm{model}|\mathrm{data}) = P(\mu) \propto \int \prod_{j=1}^{n} \exp\left(\frac{-(o_j-I_j)^2}{2\sigma_{o_j}^2}\right)\,dI \label{b1} \end{equation} \noindent where $o_j$ is the observed spectroscopic parameter, $I_j$ is the corresponding isochrone parameter, and $\sigma_{o_j}$ is the error in the observed spectroscopic parameter. Additional terms can be added if density priors are included, but we did not include density priors for the distances used in this paper; our effective prior is flat in distance modulus. The distance modulus most likely to be correct given the observed parameters and the stellar models is determined by creating a probability distribution function (PDF) of all distance moduli. We use isochrone points within $3\sigma$ of our observed spectroscopic temperature, gravity, and metallicity to compute the distance moduli, where the errors in the observed parameters are given in the data section above. To generate the PDF, the equation above is integrated over all possible distance moduli, although in practice we use a range of distance moduli between the minimum and maximum magnitudes from the isochrone grid matches to reduce the required computing time. The peak, median, or average of the PDF can be used to estimate the most likely distance modulus for a given star. For this paper, we use the median of the PDF to characterize the distance modulus. The error in the distance modulus is given by the standard deviation of the PDF: \begin{equation}\nonumber \sigma_{\mu} = \sqrt{\frac{\int P(\mu)\,\mu^2\,d\mu}{\int P(\mu)\,d\mu}-\langle\mu\rangle^2} \end{equation} \noindent The radial distance from the Galactic center is computed assuming a solar distance of 8 kpc. Distance accuracy was tested by comparing to clusters observed by APOGEE and using simulated observations from TRILEGAL \citep{Girardi2005}.
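The PDF construction above can be sketched end to end. Everything in the sketch below is a toy setup: the "isochrone grid" is random stand-in data rather than the PARSEC grid, and the observed star and its dereddened magnitude are hypothetical. Only the 3-sigma preselection, the product-of-Gaussians likelihood, the flat prior in mu, and the median/variance summaries follow the equations in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n_iso = 5000

# Toy "isochrone grid": each point carries (Teff, log g, [Fe/H]) and an
# absolute H-band magnitude M_H; the values are illustrative only.
iso = {
    "teff": rng.uniform(4000.0, 5500.0, n_iso),
    "logg": rng.uniform(1.0, 3.8, n_iso),
    "feh":  rng.uniform(-1.0, 0.5, n_iso),
    "M_H":  rng.uniform(-3.0, 1.0, n_iso),
}

# Hypothetical observed parameters, with the typical DR12 uncertainties
# quoted in the text, plus an invented dereddened apparent magnitude H0.
obs = {"teff": 4700.0, "logg": 2.4, "feh": -0.2}
sig = {"teff": 92.0, "logg": 0.11, "feh": 0.05}
H0 = 11.5

# Keep isochrone points within 3 sigma of the observed parameters.
close = np.ones(n_iso, dtype=bool)
for k in obs:
    close &= np.abs(iso[k] - obs[k]) < 3.0 * sig[k]

# Product-of-Gaussians likelihood for each surviving isochrone point.
w = np.ones(close.sum())
for k in obs:
    w *= np.exp(-0.5 * ((iso[k][close] - obs[k]) / sig[k]) ** 2)

# Each matching point implies a distance modulus mu = H0 - M_H; build the
# PDF over mu (flat prior, as in the text), then summarize it.
mu = H0 - iso["M_H"][close]
order = np.argsort(mu)
mu, w = mu[order], w[order]
cdf = np.cumsum(w) / w.sum()
mu_med = float(np.interp(0.5, cdf, mu))             # median of the PDF
mu_mean = np.average(mu, weights=w)
mu_var = np.average(mu**2, weights=w) - mu_mean**2  # sigma_mu^2
dist_kpc = 10.0 ** (mu_med / 5.0 - 2.0)             # mu = 5 log10(d / 10 pc)
```

The real pipeline differs mainly in scale (the full isochrone library, per-star photometric errors) but the logic, including the choice of the median over the peak or mean, is the same.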
On average, the distances are accurate at the $15$--$20$\% level. A more detailed discussion of the distances can be found in \citet{Holtzman2015a}. The stellar distribution in the $R$-$z$ plane for the sample used in this paper is shown in Figure \ref{rzmap}. \begin{figure}[t!] \centering \includegraphics[width=3.3in]{rzmap.png} \caption{The Galactic $R$-$z$ distribution for the sample of 69,919 stars used in this analysis. $R$ is the projected planar distance from the Galactic center, while $z$ is the distance from the plane of the Galaxy. Each star is plotted at the location implied by the median of its distance modulus PDF.} \label{rzmap} \end{figure} \section{Results} \subsection{[$\alpha$/Fe] vs. [Fe/H] } \begin{figure*}[t!] \centering \includegraphics[width=6in]{snalpha2.png} \caption{The observed [$\alpha$/Fe] vs. [Fe/H] distribution for the solar neighborhood ($7<R<9$ kpc, $0<|z|<0.5$ kpc). The left panel is down-sampled and shows only 20\% of the observed data points in the solar circle. The right panel shows the entire sample in the solar neighborhood, with contours denoting the $1$, $2$, and $3\sigma$ levels of the overall density. There are two sequences in the distribution of stars in the [$\alpha$/Fe] vs. [Fe/H] plane, one at solar-[$\alpha$/Fe] abundances, and one at high-[$\alpha$/Fe] abundances that eventually merges with the solar-[$\alpha$/Fe] sequence at [Fe/H] $\sim0.2$.} \label{alpham1} \end{figure*} \begin{figure*}[ht!] \centering \includegraphics[trim=80bp 0 20bp 0,clip,width=7.3in]{alphaz.png} \caption{The distribution of stars in the [$\alpha$/Fe] vs. [Fe/H] plane as a function of $R$ and $|z|$. \textbf{Top:} The observed [$\alpha$/Fe] vs. [Fe/H] distribution for stars with $1.0<|z|<2.0$ kpc. \textbf{Middle:} The observed [$\alpha$/Fe] vs. [Fe/H] distribution for stars with $0.5<|z|<1.0$ kpc. \textbf{Bottom:} The observed [$\alpha$/Fe] vs. [Fe/H] distribution for stars with $0.0<|z|<0.5$ kpc.
The grey line on each panel is the same, showing the similarity of the shape of the high-[$\alpha$/Fe] sequence with $R$. The extended solar-[$\alpha$/Fe] sequence observed in the solar neighborhood is not present in the inner disk ($R<5$ kpc), where a single sequence starting at high-[$\alpha$/Fe] and low metallicity and ending at solar-[$\alpha$/Fe] and high metallicity fits our observations. In the outer disk ($R>11$ kpc), there are very few high-[$\alpha$/Fe] stars.} \label{alphaz} \end{figure*} We present results of the distribution of stars in the [$\alpha$/Fe] vs. [Fe/H] plane in the solar neighborhood ($7<R<9$ kpc, $0<|z|<0.5$ kpc) in Figure \ref{alpham1}. The stellar distribution in the [$\alpha$/Fe] vs. [Fe/H] plane in the solar neighborhood is characterized by two distinct sequences, one starting at high-[$\alpha$/Fe] and the other at approximately solar-[$\alpha$/Fe] abundances. We use the term sequences or tracks to describe the behavior of the low- and high-[$\alpha$/Fe] populations, but this description does not necessarily imply that the sequences are evolutionary in nature. The high-[$\alpha$/Fe] sequence has a negative slope, with the [$\alpha$/Fe] ratio decreasing as [Fe/H] increases, and eventually merges with the low-[$\alpha$/Fe] sequence at [Fe/H] $\sim+0.2$. The low-[$\alpha$/Fe] sequence has a slight decrease in [$\alpha$/Fe] abundance as metallicity increases, except for the most metal-rich stars ([Fe/H] $>0.2$) where the trend flattens. The lower envelope of the distribution has a concave-upward, bowl shape. However, these trends are small, and the sequence is within $\sim0.1$ dex in $\alpha$ abundance across nearly a decade in metallicity. It is unclear which sequence the most metal-rich stars belong to in the solar neighborhood, because the low- and high-[$\alpha$/Fe] sequences appear to merge at these super-solar metallicities.
These observations are similar to previous studies of the solar neighborhood (e.g., \citealt{Adibekyan2012,Ramirez2013,Bensby2014,Nidever2014,Recio-Blanco2014}), which also find two distinct sequences in the [$\alpha$/Fe] vs. [Fe/H] plane. While some surveys targeted kinematic ``thin'' and ``thick'' disk samples in a way that could amplify bimodality, our sample (and that of \citealt{Adibekyan2011}) has no kinematic selection. APOGEE allows us to extend observations of the distribution of stars in the [$\alpha$/Fe] vs. [Fe/H] plane across much of the disk ($3<R<15$ kpc, $|z|<2$ kpc), as shown in Figure \ref{alphaz}. The most striking feature of the stellar distribution in the [$\alpha$/Fe] vs. [Fe/H] plane in the inner disk ($3<R<5$ kpc) is that the separate low-[$\alpha$/Fe] sequence evident in the solar neighborhood is absent---there appears to be a single sequence starting at low metallicities and high-[$\alpha$/Fe] abundances, which ends at approximately solar-[$\alpha$/Fe] and high metallicity ([Fe/H] $\sim+0.5$). The metal-rich solar-[$\alpha$/Fe] stars dominate the overall number density of stars close to the plane. These stars are confined to the midplane, while for $|z|>1$ kpc the majority of the stars in the inner Galaxy are metal-poor with high-[$\alpha$/Fe] abundances. In the $5<R<7$ kpc annulus, the population of low-[$\alpha$/Fe], sub-solar metallicity stars becomes more prominent, revealing the two-sequence structure found in the solar neighborhood. However, the locus of the low-[$\alpha$/Fe] sequence is significantly more metal-rich ([Fe/H]$\sim0.35$) than our local sample. The mean metallicity of the low-[$\alpha$/Fe] sequence and its dependence on radius largely drives the observed Galactic metallicity gradients. As $|z|$ increases, the relative fraction of high- and low-[$\alpha$/Fe] stars changes; for $|z|>1$ kpc, the bulk of the stars belong to the high-[$\alpha$/Fe] sequence and have sub-solar metallicities.
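As a toy illustration of how the relative fraction of the two sequences can be quantified as a function of height, the sketch below draws an invented two-component mock sample and applies the [$\alpha$/Fe] $>0.18$ division used later in the paper for the high-[$\alpha$/Fe] MDFs; the summary statistics follow the convention of the MDF tables (excess kurtosis as the fourth standardized moment minus 3). None of the mock numbers are fits to the APOGEE data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented two-component mock of a solar-annulus sample.
n = 20000
is_high = rng.random(n) < 0.3
feh = np.where(is_high, rng.normal(-0.45, 0.25, n), rng.normal(0.0, 0.2, n))
afe = np.where(is_high, rng.normal(0.25, 0.04, n), rng.normal(0.02, 0.04, n))
z = np.abs(rng.normal(0.0, np.where(is_high, 0.9, 0.3)))  # |z| in kpc

# Chemical split used in the text for the high-alpha MDFs.
high = afe > 0.18

def mdf_stats(x):
    """Mean, sigma, skewness, and excess kurtosis (fourth standardized
    moment minus 3, so a Gaussian gives 0), as in the MDF tables."""
    m, s = x.mean(), x.std()
    t = (x - m) / s
    return m, s, (t**3).mean(), (t**4).mean() - 3.0

# High-alpha fraction and mean metallicity as a function of |z|.
for zlo, zhi in [(0.0, 0.5), (0.5, 1.0), (1.0, 2.0)]:
    sel = (z >= zlo) & (z < zhi)
    frac = high[sel].mean()
    mean_feh = mdf_stats(feh[sel])[0]
    print(f"{zlo}-{zhi} kpc: high-alpha fraction {frac:.2f}, "
          f"<[Fe/H]> = {mean_feh:+.2f}")
```

Because the mock high-[$\alpha$/Fe] component is given the larger vertical scale, its fraction grows with height, qualitatively mirroring the trend described in the text.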
Towards the outer disk ($R>9$ kpc), the locus of the low-[$\alpha$/Fe] sequence shifts towards lower metallicities. Much like the rest of the Galaxy, the low-[$\alpha$/Fe] sequence dominates the number density close to the plane of the disk. As $|z|$ increases, the high-[$\alpha$/Fe] fraction increases (for $9<R<13$ kpc), but it never becomes the dominant population at high $|z|$ as it does in the solar neighborhood or inner disk. For $R>13$ kpc, there are almost no high-[$\alpha$/Fe] stars present; at all heights above the plane most stars belong to the low-[$\alpha$/Fe] sequence. For $R>11$ kpc, the relative number of super-solar metallicity stars is low compared to the rest of the Galaxy; these stars are confined to the inner regions ($R<11$ kpc) of the disk. The spread in metallicity for the very outer disk ($R>13$ kpc) is small: most stars are within [Fe/H] $\sim-0.4\pm0.2$ dex at all heights above the plane. \citet{Nidever2014} used the APOGEE Red Clump Catalog \citep{Bovy2014} to categorize the distribution of stars in the [$\alpha$/Fe] vs. [Fe/H] plane outside of the solar neighborhood. The \citet{Nidever2014} sample is a subset of the same data presented in this paper. The red clump offers more precise distance and abundance determinations compared to the entire DR12 sample, but it covers a more restricted distance and metallicity range. \citet{Nidever2014} find that the high-[$\alpha$/Fe] sequence found in the solar neighborhood is similar in shape in all areas of the Galaxy where it could be observed ($5<R<11$ kpc, $0<|z|<2$ kpc). Here we expand these observations to larger distances and find a similar result: the shape of the high-[$\alpha$/Fe] sequence does not vary significantly with radius, although the very inner Galaxy does show a hint of small differences. There appears to be a slight shift towards lower-[$\alpha$/Fe] for the same metallicities by $\sim0.05$ dex compared to the high-[$\alpha$/Fe] sequence observed in the rest of the disk.
This variation may be caused by temperature effects; the stars in the inner Galaxy are all cool (T$_{\textrm{eff}}<4300$ K), and there is a slight temperature dependence of [$\alpha$/Fe] abundance for cooler stars, as discussed further in the Appendix. Although the high-[$\alpha$/Fe] sequence appears similar at all observed locations, as noted above, the number of stars along the high-[$\alpha$/Fe] sequence begins to decrease dramatically for $R>11$ kpc: there are almost no stars along the high-[$\alpha$/Fe] sequence in the very outer disk ($13<R<15$ kpc). To summarize our results for the distribution of stars in the [$\alpha$/Fe] vs. [Fe/H] plane: \begin{itemize} \item There are two distinct sequences in the solar neighborhood, one at high-[$\alpha$/Fe], and one at solar-[$\alpha$/Fe], which appear to merge at [Fe/H] $\sim+0.2$. At sub-solar metallicities there is a distinct gap between these two sequences. \item The abundance pattern for the inner Galaxy can be described as a single sequence, starting at low metallicity and high-[$\alpha$/Fe] and ending at approximately solar-[$\alpha$/Fe] and [Fe/H] $=+0.5$. The most metal-rich stars are confined to the midplane. \item The high-[$\alpha$/Fe] sequence appears similar at all locations in the Galaxy where it is observed ($3<R<13$ kpc). \item Stars with high-[$\alpha$/Fe] ratios and the most metal-rich stars ([Fe/H] $>0.2$) have spatial densities that are qualitatively consistent with short radial scale-lengths or a truncation at larger radii and have low number density in the outer disk. \item The relative fraction of stars between the low- and high-[$\alpha$/Fe] sequences varies with disk height and radius.
\end{itemize} \subsection{Metallicity Distribution Functions} \begin{deluxetable*}{crccccc} \tabletypesize{\footnotesize} \tablecolumns{7} \tablewidth{0pt} \tablecaption{Metallicity Distribution Functions in the Milky Way \label{tab:tabmdf}} \tablehead{ \colhead{$R$ Range (kpc)} & \colhead{N*} & \colhead{$<$[Fe/H] $>$} & \colhead{Peak [Fe/H] } & \colhead{$\sigma_{\textrm{[Fe/H] }}$} & \colhead{Skewness} & \colhead{Kurtosis}} \startdata \multicolumn{7}{c}{$1.00< |z| < 2.00$}\\ \hline $ 3 < R < 5 $ & 465 & -0.42 & -0.27 & 0.29 & -0.48$\pm$0.14 & 1.05$\pm$0.28 \\ $ 5 < R < 7 $ & 846 & -0.36 & -0.33 & 0.29 & -0.32$\pm$0.13 & 1.27$\pm$0.26 \\ $ 7 < R < 9 $ & 4136 & -0.31 & -0.27 & 0.28 & -0.53$\pm$0.06 & 1.52$\pm$0.17 \\ $ 9 < R < 11 $ & 1387 & -0.29 & -0.27 & 0.25 & -0.37$\pm$0.13 & 1.69$\pm$0.36 \\ $ 11 < R < 13 $ & 827 & -0.29 & -0.38 & 0.23 & -0.40$\pm$0.21 & 2.58$\pm$0.62 \\ $ 13 < R < 15 $ & 207 & -0.39 & -0.43 & 0.17 & -0.60$\pm$0.73 & 3.84$\pm$3.16 \\ \hline \multicolumn{7}{c}{$0.50< |z| < 1.00$}\\ \hline $ 3 < R < 5 $ & 841 & -0.19 & -0.33 & 0.32 & -0.50$\pm$0.11 & 0.50$\pm$0.31 \\ $ 5 < R < 7 $ & 1408 & -0.12 & -0.18 & 0.29 & -0.50$\pm$0.09 & 0.39$\pm$0.35 \\ $ 7 < R < 9 $ & 4997 & -0.10 & -0.02 & 0.25 & -0.49$\pm$0.06 & 0.67$\pm$0.22 \\ $ 9 < R < 11 $ & 3702 & -0.15 & -0.23 & 0.21 & -0.22$\pm$0.10 & 0.99$\pm$0.47 \\ $ 11 < R < 13 $ & 2169 & -0.23 & -0.27 & 0.19 & +0.28$\pm$0.11 & 0.92$\pm$0.39 \\ $ 13 < R < 15 $ & 568 & -0.33 & -0.33 & 0.19 & -0.60$\pm$0.39 & 4.63$\pm$1.50 \\ \hline \multicolumn{7}{c}{$0.00< |z| < 0.50$}\\ \hline $ 3 < R < 5 $ & 2410 & +0.08 & +0.23 & 0.24 & -1.68$\pm$0.12 & 4.01$\pm$0.83 \\ $ 5 < R < 7 $ & 5195 & +0.11 & +0.23 & 0.22 & -1.26$\pm$0.08 & 2.53$\pm$0.52 \\ $ 7 < R < 9 $ & 13106 & +0.01 & +0.02 & 0.20 & -0.53$\pm$0.04 & 0.86$\pm$0.26 \\ $ 9 < R < 11 $ & 19930 & -0.11 & -0.12 & 0.19 & -0.02$\pm$0.03 & 0.49$\pm$0.14 \\ $ 11 < R < 13 $ & 6730 & -0.21 & -0.23 & 0.18 & +0.17$\pm$0.06 & 0.79$\pm$0.21 \\ $ 13 < R < 15 $ & 912 & -0.31 & 
-0.43 & 0.18 & +0.47$\pm$0.13 & 1.00$\pm$0.29 \\ \enddata \vspace{-0.4cm} \tablecomments{Statistics for the different MDFs across the Milky Way disk. The kurtosis is defined as the fourth standardized moment minus 3, such that a normal distribution has a kurtosis of 0.} \label{tabmdf} \end{deluxetable*} \begin{figure}[ht!] \centering \includegraphics[width=3.3in]{mdf2.png} \caption{The observed MDF for the entire sample as a function of Galactocentric radius over a range of distances from the plane. The shape and skewness are functions of radius and height. Close to the plane, the inner Galaxy ($3<R<5$ kpc) has a negatively skewed distribution with a peak metallicity at $\sim0.25$ dex, while the outer disk ($R>11$ kpc) has a positively skewed distribution with peak metallicity of $\sim-0.4$ dex. For $|z|>1$ kpc, the MDF is fairly uniform across all radii.\\} \label{mdf1} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width=3.3in]{highmdf2.png} \caption{The observed MDF for stars with [$\alpha$/Fe] $>0.18$ as a function of Galactocentric radius over a range of distances from the plane. There is little variation in the MDF with Galactocentric radius.
There is a shallow negative vertical gradient, as stars with $|z|>1$ kpc are slightly more metal-poor than stars close to the plane.\\} \label{highmdf} \end{figure} \begin{deluxetable*}{crccccc} \tabletypesize{\footnotesize} \tablecolumns{7} \tablewidth{0pt} \tablecaption{$\alpha$ Distribution Functions in the Milky Way \label{tab:tabadf}} \tablehead{ \colhead{$R$ Range (kpc)} & \colhead{N*} & \colhead{$<[\alpha$/H]$>$} & \colhead{Peak [$\alpha$/H]} & \colhead{$\sigma_{[\alpha/H]}$} & \colhead{Skewness} & \colhead{Kurtosis}} \startdata \multicolumn{7}{c}{$1.00< |z| < 2.00$}\\ \hline $ 3 < R < 5 $ & 468 & -0.20 & -0.08 & 0.26 & -1.12$\pm$0.23 & 3.02$\pm$0.91 \\ $ 5 < R < 7 $ & 853 & -0.16 & -0.08 & 0.27 & -1.25$\pm$0.18 & 4.17$\pm$0.67 \\ $ 7 < R < 9 $ & 4145 & -0.14 & -0.08 & 0.24 & -1.11$\pm$0.09 & 3.70$\pm$0.34 \\ $ 9 < R < 11 $ & 1393 & -0.16 & -0.18 & 0.23 & -0.93$\pm$0.22 & 4.68$\pm$0.90 \\ $ 11 < R < 13 $ & 831 & -0.20 & -0.27 & 0.22 & -0.82$\pm$0.29 & 4.61$\pm$1.06 \\ $ 13 < R < 15 $ & 210 & -0.30 & -0.38 & 0.21 & -1.91$\pm$0.57 & 9.19$\pm$1.80 \\ \hline \multicolumn{7}{c}{$0.50< |z| < 1.00$}\\ \hline $ 3 < R < 5 $ & 844 & -0.03 & +0.02 & 0.27 & -1.12$\pm$0.20 & 3.34$\pm$0.87 \\ $ 5 < R < 7 $ & 1409 & +0.02 & +0.07 & 0.23 & -0.73$\pm$0.15 & 1.73$\pm$0.74 \\ $ 7 < R < 9 $ & 4997 & -0.01 & +0.02 & 0.21 & -0.39$\pm$0.06 & 0.78$\pm$0.25 \\ $ 9 < R < 11 $ & 3703 & -0.08 & -0.12 & 0.19 & -0.05$\pm$0.10 & 0.98$\pm$0.48 \\ $ 11 < R < 13 $ & 2170 & -0.16 & -0.23 & 0.18 & 0.34$\pm$0.19 & 1.74$\pm$1.00 \\ $ 13 < R < 15 $ & 568 & -0.25 & -0.27 & 0.17 & -0.42$\pm$0.45 & 4.65$\pm$1.88 \\ \hline \multicolumn{7}{c}{$0.00< |z| < 0.50$}\\ \hline $ 3 < R < 5 $ & 2414 & +0.16 & +0.27 & 0.20 & -1.91$\pm$0.20 & 7.17$\pm$1.53 \\ $ 5 < R < 7 $ & 5195 & +0.16 & +0.23 & 0.19 & -1.07$\pm$0.10 & 2.54$\pm$0.78 \\ $ 7 < R < 9 $ & 13109 & +0.06 & +0.07 & 0.18 & -0.34$\pm$0.07 & 1.20$\pm$0.50 \\ $ 9 < R < 11 $ & 19930 & -0.07 & -0.08 & 0.17 & +0.27$\pm$0.03 & 0.35$\pm$0.11 \\ $ 11 < R < 13 
$ & 6731 & -0.16 & -0.23 & 0.16 & +0.38$\pm$0.07 & 1.17$\pm$0.37 \\ $ 13 < R < 15 $ & 914 & -0.25 & -0.33 & 0.17 & +0.04$\pm$0.39 & 3.75$\pm$1.82 \\ \enddata \vspace{-0.4cm} \tablecomments{Statistics for the different ADFs across the Milky Way disk. The kurtosis is defined as the fourth standardized moment minus 3, such that a normal distribution has a kurtosis of 0.} \label{tabadf} \end{deluxetable*} \begin{figure}[ht!] \centering \includegraphics[width=3.3in]{amdf2.png} \caption{The observed distribution of [$\alpha$/H] for the entire sample as a function of Galactocentric radius over a range of distances from the plane. The results are quite similar to those of the MDF, except for stars with $|z|>1$ kpc and $R<9$ kpc, where the ADF has larger abundances than the MDF at the same locations.\\} \label{amdf} \end{figure} With three years of observations, there are sufficient numbers of stars in each Galactic zone to measure MDFs in a number of radial bins and at different heights above the plane. We present the MDFs in radial bins of 2 kpc between $3<R<15$ kpc, and at a range of heights above the plane between $0<|z|<2$ kpc, in Figure \ref{mdf1}. The MDFs are computed with bins of 0.05 dex in [Fe/H] for each zone. Splitting the sample into vertical and radial bins allows us to analyze the changes in the MDF across the Galaxy, but also minimizes selection effects due to the volume sampling of the APOGEE lines of sight and our target selection. Close to the plane (top panel of Figure \ref{mdf1}, $|z|<0.5$ kpc), radial gradients are evident throughout the disk. The peak of the MDF is centered at high metallicities in the inner Galaxy ([Fe/H] $=+0.32$ for $3<R<5$ kpc), roughly solar in the solar neighborhood ([Fe/H] $=+0.02$ for $7<R<9$ kpc), and low metallicities in the outer disk ([Fe/H] $=-0.48$ for $13<R<15$ kpc).
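The per-zone statistics tabulated in Table \ref{tabmdf} (mean, peak, dispersion, skewness, excess kurtosis) can be reproduced from a sample of metallicities with standard tools. A minimal sketch using scipy.stats, with the 0.05 dex histogram binning described in the text (the function name and the Gaussian demonstration sample are illustrative, not the paper's pipeline):

```python
import numpy as np
from scipy import stats

def mdf_statistics(feh, bin_width=0.05):
    """Summary statistics for a metallicity sample, as in Table 'tabmdf'.

    Returns the mean, the peak (mode of a 0.05 dex histogram), the
    dispersion, the skewness, and the excess kurtosis (fourth
    standardized moment minus 3, so a normal distribution gives 0).
    """
    bins = np.arange(feh.min(), feh.max() + bin_width, bin_width)
    counts, edges = np.histogram(feh, bins=bins)
    imax = np.argmax(counts)
    return {
        "mean": np.mean(feh),
        "peak": 0.5 * (edges[imax] + edges[imax + 1]),
        "sigma": np.std(feh),
        "skewness": stats.skew(feh),
        "kurtosis": stats.kurtosis(feh, fisher=True),
    }

# For a Gaussian sample, skewness and excess kurtosis should both be ~0.
rng = np.random.default_rng(0)
s = mdf_statistics(rng.normal(-0.1, 0.2, 100_000))
```

Applying the same function per $(R, |z|)$ zone yields the columns of Tables \ref{tabmdf} and \ref{tabadf}.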
The radial gradients observed in the MDF are similar to those measured across the disk in previous studies with APOGEE, as is the shift in the peak of the MDF (e.g., \citealt{Anders2014,Hayden2014}). The most striking feature of the MDF close to the plane is the change in shape with radius. The inner disk has a large negative skewness ($-1.68\pm0.12$ for $3<R<5$ kpc, see Table \ref{tabmdf}), with a tail towards low metallicities, while the solar neighborhood is more Gaussian in shape with a slight negative skewness ($-0.53\pm0.04$), and the outer disk is positively skewed with a tail towards high metallicities ($+0.47\pm0.13$ for $13<R<15$ kpc). The shape of the observed MDF of the solar neighborhood is in good agreement with the MDF measured by the GCS \citep{Nordstrom2004,Holmberg2007,Casagrande2011}, which finds a peak just below solar metallicity and a negative skewness similar to our observations. There is a slight offset in the peak metallicity, with the APOGEE observations being more metal-rich by $\sim0.1$ dex, but the shapes are extremely similar. Close to the plane, the distributions are all leptokurtic, with the inner Galaxy ($3<R<7$ kpc) being more strongly peaked than the rest of the disk. As $|z|$ increases, the MDF exhibits less variation with radius. For $|z|>1$ kpc (bottom panel of Figure \ref{mdf1}), the MDF is uniform with a roughly Gaussian shape across all radii, although it is more strongly peaked for the very outer disk ($R>13$ kpc). However, the populations comprising the MDFs at these heights are not the same. In the inner disk, at large heights above the plane the high-[$\alpha$/Fe] sequence dominates the number density of stars. In the outer disk ($R>11$ kpc), the stars above the plane are predominantly of solar [$\alpha$/Fe] abundance. The uniformity of the MDF at these large heights is surprising given the systematic change in the [$\alpha$/Fe] of the populations contributing to the MDF.
The MDF is similar for all heights above the plane in the outer disk ($R>11$ kpc). At these larger heights above the plane, the MDFs are leptokurtic as well, but the trend with radius is reversed compared to the distributions close to the plane. For $|z|>1$ kpc, the distributions in the outer disk ($R>11$ kpc) are more strongly peaked than the MDFs for the rest of the disk. As noted above, the high-[$\alpha$/Fe] sequence is fairly constant in shape with radius. The MDF for stars with high-[$\alpha$/Fe] abundance ([$\alpha$/Fe] $>0.18$) is presented in Figure \ref{highmdf}. The high-[$\alpha$/Fe] sequence appears uniform across the radial range in which it is observed ($3<R<11$ kpc), but it does display variation with height above the plane. Close to the plane, the MDF is peaked at [Fe/H] $=-0.3$ over the entire radial extent where there are large numbers of high-[$\alpha$/Fe] stars, and the shape is also the same at all locations. However, as $|z|$ increases, the MDF shifts to slightly lower metallicities, with a peak at [Fe/H] $=-0.45$ for $|z|>1$ kpc. At any given height, there is little variation in the shape or peak of the MDF with radius for these high-[$\alpha$/Fe] stars. Simple chemical evolution models often use instantaneous recycling approximations, in which metals are immediately returned to the gas reservoir after star formation occurs. This approach may not be a good approximation for all chemical elements in the stellar population, but it is more accurate for $\alpha$ elements, which are produced primarily in SNII. We present the distribution of [$\alpha$/H] (ADF) across the disk in Figure \ref{amdf}. The ADF is similar in appearance to the MDF, with radial gradients across the disk and the most $\alpha$-rich stars belonging to the inner Galaxy. The observed change in skewness with radius of the ADF (Table \ref{tabadf}) is similar to that of the MDF (Table \ref{tabmdf}).
Close to the plane, the ADF of the inner disk ($3<R<5$ kpc) is more strongly peaked than the MDF in the same zone, with a significantly larger kurtosis. The main difference between the ADF and the MDF is above the plane of the disk: we observe radial gradients in the ADF at all heights, which is not the case for the MDF. At large $|z|$, the ADF is significantly more positive than the MDF from the inner Galaxy out to the solar neighborhood by $\sim0.25$ dex. This result does not hold true in the outer disk above the plane ($R>11$ kpc, $|z|>1$ kpc), where the ADF again has similar abundance trends to that of the MDF. To summarize our results for the MDFs across the disk: \begin{itemize} \item Metallicity gradients are clearly evident in the MDFs, with the most metal-rich populations in the inner Galaxy. \item The shape and skewness of the MDF in the midplane are strongly dependent on location in the Galaxy: the inner disk has a large negative skewness, the solar neighborhood MDF is roughly Gaussian, and the outer disk has a positive skewness. \item The MDF becomes more uniform with height. For stars with $|z|>1$ kpc it is roughly Gaussian with a peak metallicity of [Fe/H] $\sim-0.4$ across the entire radial range covered by this study ($3<R<15$ kpc). \item The MDF for the outer disk ($R>11$ kpc) is uniform at all heights above the plane, showing less variation in metallicity with $|z|$ than the rest of the disk. \item The MDF for stars with [$\alpha$/Fe] $>0.18$ is uniform with $R$, but has a slight negative vertical gradient. \item The ADF has many of the same features observed in the MDF, but shows differences for stars out of the plane ($|z|>1$ kpc) and with $R<9$ kpc, where stars tend to have higher [$\alpha$/H] than [Fe/H].
\end{itemize} \section{Discussion} Consistent with previous studies, we find that: (a) the solar-neighborhood MDF is approximately Gaussian in [Fe/H], with a peak near solar metallicity, (b) the distribution of stars in the [$\alpha$/Fe] -[Fe/H] plane is bimodal, with a high-[$\alpha$/Fe] sequence and a low-[$\alpha$/Fe] sequence, (c) the fraction of stars with high [$\alpha$/Fe] increases with $|z|$, and (d) there is a radial gradient in the mean or median value of [Fe/H] for stars near the midplane. Like \cite{Nidever2014}, we find that the location of the high-[$\alpha$/Fe] sequence is nearly independent of radius and height above the plane, a result we are able to extend to larger and smaller $R$ and to lower metallicity. Two striking new results of this study are (e) that the [$\alpha$/Fe] -[Fe/H] distribution of the inner disk ($3<R<5$ kpc) is consistent with a single evolutionary track, terminating at [Fe/H] $\sim+0.4$ and roughly solar [$\alpha$/Fe] , and (f) that the midplane MDF changes shape, from strong negative skewness at $3<R<7$ kpc to strong positive skewness at $11<R<15$ kpc, with the solar annulus lying at the transition between these regimes. The midplane inner disk MDF has the characteristic shape predicted by one-zone chemical evolution models, with most stars formed after the ISM has been enriched to an ``equilibrium'' metallicity controlled mainly by the outflow mass loading parameter $\eta$ (see \citealt{Andrews2015}, for detailed discussion). Traditional closed box or leaky box models (see, e.g., \S 5.3 of \citealt{Binney1998}) are a limiting case of such models, with no accretion. These models predict a metallicity distribution $dN/d\ln Z \propto Z \exp(-Z/p_{\rm eff})$ where the effective yield is related to the IMF-averaged population yield by $p_{\rm eff}=p/(1+\eta)$. 
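The closed/leaky-box prediction $dN/d\ln Z \propto Z \exp(-Z/p_{\rm eff})$ is equivalent to $dN/dZ \propto \exp(-Z/p_{\rm eff})$, so stellar metallicities in such a model follow an exponential distribution. A short numerical sketch (the solar metal fraction and the value of $p_{\rm eff}$ are illustrative assumptions) shows that this produces the negatively skewed logarithmic MDF characteristic of the inner midplane:

```python
import numpy as np
from scipy import stats

# Closed/leaky box: dN/dln Z ∝ Z exp(-Z/p_eff), i.e. dN/dZ ∝ exp(-Z/p_eff),
# so metallicities Z can be drawn from an exponential with scale p_eff.
Z_SUN = 0.014        # assumed solar metal mass fraction (illustrative)
p_eff = 1.2 * Z_SUN  # illustrative effective yield, p_eff = p / (1 + eta)

rng = np.random.default_rng(1)
Z = rng.exponential(scale=p_eff, size=200_000)
feh = np.log10(Z / Z_SUN)  # [Fe/H] proxy: log metallicity relative to solar

# In log space the distribution peaks near log10(p_eff/Z_sun) with a long
# tail toward low [Fe/H], i.e. negative skewness, as in the inner-disk MDF.
skewness = stats.skew(feh)
```

Raising $\eta$ lowers $p_{\rm eff}$ and shifts the peak to lower [Fe/H] while leaving the negatively skewed shape intact, which is why a closed or leaky box cannot produce the positively skewed outer-disk MDFs discussed below.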
The positively skewed MDFs of the outer disk could be a distinctive signature of radial migration, with a high-metallicity tail populated by stars that were born in the inner Galaxy. The change of [$\alpha$/Fe] distributions with height could be a consequence of heating of the older stellar populations or of forming stars in progressively thinner, ``cooler'' populations as turbulence of the early star-forming disk decreases. As discussed by \citet{Nidever2014}, the constancy of the high-[$\alpha$/Fe] sequence implies uniformity of the star formation efficiency and outflow mass loading during the formation of this population. In the next subsection we discuss the qualitative comparison between our results and several recent models of Milky Way chemical evolution. We then turn to a more detailed discussion of radial migration with the aid of simple quantitative models. We conclude with a brief discussion of vertical evolution. \subsection{Comparison to Chemical Evolution Models} Metallicity distribution functions are useful observational tools for constraining the chemical history of the Milky Way. The first chemical evolution models were simple closed-box systems, with no gas inflow or outflow, and often employed approximations such as instantaneous recycling. These models over-predicted the number of metal-poor stars in the solar neighborhood compared to observations of G-dwarfs (e.g., \citealt{Schmidt1963,Pagel1975}), a discrepancy known as the ``G-Dwarf Problem''. These first observations made it clear that the chemical evolution of the solar neighborhood could not be described by a simple closed-box model; inflow and outflow of gas, along with more realistic yields (i.e., no instantaneous recycling), were required to reproduce observations. The MDF can therefore be used to inform and tune chemical evolution models and provide information such as the star formation history and relative gas accretion or outflow rates at every location where the MDF can be measured.
APOGEE observations provide the first thorough characterizations of the MDF of the disk outside of the solar neighborhood, allowing a more complete picture of the chemical evolutionary history of the Galaxy. Additions such as gas inflow and outflow to chemical evolution models have been able to better reproduce observations of the solar neighborhood, in particular the MDF and the distribution of stars in the [$\alpha$/Fe] vs. [Fe/H] plane. The two-infall model from \citet{Chiappini1997,Chiappini2001} treats the disk as a series of annuli, into which gas accretes. In this model, an initial gas reservoir forms the thick disk, following the high-[$\alpha$/Fe] evolutionary track. As the gas reservoir becomes depleted, a lull in star formation occurs. SNeIa gradually lower the [$\alpha$/Fe] ratio of the remaining gas reservoir as a second, more gradual infall of pristine gas dilutes the reservoir, lowering the overall metallicity but retaining the low-[$\alpha$/Fe] abundance of the ISM. Once the surface density of the gas is high enough, star formation resumes, forming the metal-poor end of the solar-[$\alpha$/Fe] sequence. The MDF from the two-infall model is in general agreement with our observations of the solar neighborhood (see Figure 7 of \citealt{Chiappini1997}), with a peak metallicity near solar and a slight negative skewness towards lower metallicities. Additionally, this model reproduces general trends found in the distribution of stars in the [$\alpha$/Fe] vs. [Fe/H] plane, in particular with the dilution of the metallicity of the existing gas reservoir with pristine gas to form the low-[$\alpha$/Fe] sequence.
This is one possible explanation for the observed low-[$\alpha$/Fe] sequence: pristine gas accretes onto the disk and mixes with enriched gas from the inner Galaxy, keeping solar [$\alpha$/Fe] ratios (or intermediate [$\alpha$/Fe] ratios that are later lowered to solar values by SNeIa) but lowering the metallicity from +0.5 dex in the inner disk to the lower metallicities found in the outer Galaxy. The ability of this model to explain phenomenologically both the MDF and the stellar distribution in the [$\alpha$/Fe] vs. [Fe/H] plane of the solar neighborhood highlights the potential importance of gas flow in the evolutionary history of the Galaxy. The chemical evolution model from \citet{Schonrich2009a} includes radial migration in the processes governing the evolution of the disk. Their models find that the peak of the MDF is a strong function of radius, with the inner Galaxy being more metal-rich than the outer Galaxy. The peaks of their MDFs are similar to those observed in APOGEE at different radii (see Figure 11 of \citealt{Schonrich2009a}). However, their distributions are significantly more Gaussian and less skewed than we observe in our sample. The model distributions from \citet{Schonrich2009a} appear to have a shift from negative to positive skewness from the inner Galaxy to the outer Galaxy, as we observe, but the magnitude of the skewness is not large. The \citet{Schonrich2009a} results for [O/Fe] vs. [Fe/H] are not in good agreement with the distribution of stars in the [$\alpha$/Fe] vs. [Fe/H] plane observed with the APOGEE sample (see Figure 9 of \citealt{Schonrich2009a}). Their models do not have two distinct sequences in this plane, but a more continuous distribution of populations that start at low metallicity with high-[$\alpha$/Fe] ratios and end at high metallicity and solar [$\alpha$/Fe], similar to many other models with inside-out formation.
Their model has a much larger dynamic range than the APOGEE sample in [$\alpha$/Fe] abundance, with their [O/Fe] ratio extending to $\sim+0.6$, while the APOGEE [$\alpha$/Fe] abundances extend only to $\sim+0.3$, with the thin-disk sequence in APOGEE shifted to slightly higher metallicities and lower [$\alpha$/Fe] abundances than the thin-disk sequence presented by \citet{Schonrich2009a}. Recent N-body smoothed-particle hydrodynamics simulations from \citet{Kubryk2013} track the impact of migration on the stellar distribution of their disk galaxy. Their MDFs (see Figure 10 of \citealt{Kubryk2013}) are fairly uniform throughout the disk, appearing similar to the MDF of the solar neighborhood in the APOGEE observations, with a peak near solar abundances and a negative skewness. \citet{Kubryk2013} do not find significant shifts in the peak or skewness with radius in their simulations, contrary to what is observed in the APOGEE observations. They postulate that the uniformity of the peak of the MDF is due to the lack of gas infall in the simulation, highlighting the importance that gas dynamics can play in the MDF across the disk. The most sophisticated chemical evolution models to date use cosmological simulations of a Milky Way analog galaxy, and paint a chemical evolution model on top of the simulated galaxy. Recent simulations by \citet{Minchev2013} match many observed properties of the disk, such as the stellar distribution in the [$\alpha$/Fe] vs. [Fe/H] plane (see Figure 12 of \citealt{Minchev2013}), and the flattening of the radial gradient with height (e.g., \citealt{Anders2014,Hayden2014}, see Figure 10 of \citealt{Minchev2014a}). Their simulations also provide detailed metallicity distributions for a large range of radii. We find that the MDFs from the simulations closely match the APOGEE observations of the MDF in the solar neighborhood and the uniformity of the MDF above the plane (see Figure 9 of \citealt{Minchev2014a}).
While we have good agreement with the MDFs from \citet{Minchev2014a} above the plane at all radii, the MDFs from their simulation do not reproduce the change in the peak and skewness of the MDF observed close to the plane in the inner and outer disk. The MDFs presented in \citet{Minchev2014a} have the same peak metallicity at all radii, which is not observed in APOGEE close to the plane. The metal-rich components of the MDFs from the simulation are in the wings of the distributions, leading to positively skewed MDFs in the inner Galaxy, and roughly Gaussian shapes in the outer disk. The APOGEE observations have the opposite behavior, with negatively skewed distributions in the inner Galaxy and positively skewed distributions in the outer disk. For this paper, we did not correct for the APOGEE selection function. In the future, we plan to do a more detailed comparison between APOGEE observations and the simulation from \citet{Minchev2013}, in which the selection function is taken into account. \subsection{Radial Mixing} Simple chemical evolution models (closed or leaky box) are unable to produce the positively skewed MDFs that we observe in the outer disk. Models that include radial mixing (e.g., \citealt{Schonrich2009a}) are able to at least produce a more Gaussian-shaped MDF across the disk. The fraction of stars that undergo radial migration is difficult to predict from first principles because it depends in detail on spiral structure, bar perturbations, and perturbations by and mergers with satellites (e.g., \citealt{Roskar2008,Bird2012,Bird2013,Minchev2013}). To test the effects of blurring and churning on our observed MDFs, we create a simple model of the MDF across the disk. We assume that the intrinsic shape of the MDF is uniform across the disk, and that the observed change in skewness with radius is due to mixing of populations from different initial birth radii.
The disk is modeled as a Dehnen distribution function \citep{Dehnen1999} with a velocity dispersion of $31.4$ km s$^{-1}$, a radial scale length of 3 kpc, and a flat radial velocity dispersion profile. These distribution function parameters adequately fit the kinematics of the main APOGEE sample \citep{BovyVcirc}. We model the initial MDF as a skew-normal distribution with a peak at $+0.4$ dex in the inner Galaxy, a dispersion of $0.1$ dex, and a skewness of $-4$; we assume a radial gradient of $-0.1$ dex kpc$^{-1}$ to shift the peak of the MDFs as a function of radius, keeping the dispersion and skewness fixed. We then compute the distribution of guiding radii under blurring or churning and determine the effect on the observed MDFs. \subsubsection{Blurring} \begin{figure}[ht!] \centering \includegraphics[width=3.0in]{blur.png} \caption{The MDF as a function of $R$ for our simple blurring model. The dashed lines show the initial MDFs (which the stars on circular orbits would have) and the solid lines represent the MDFs including the effect of blurring. The skewness of each blurred MDF is indicated. The magnitude of the skewness diminishes with radius, but it does not change sign.} \label{blur} \end{figure} To determine the effect of blurring on the observed MDF, we use the model as described above and compute the distribution of guiding radii $R_{g}$ due to blurring. Assuming a flat rotation curve and axisymmetry, $R_g\, V_c$ is equal to $R\,V_{\mathrm{rot}}$, where $V_{\mathrm{rot}}$ is the rotational velocity in the Galactocentric frame. The blurring distribution $p_b(R_g|R)$ is given by \begin{equation}\nonumber p_b(R_g|R) \propto p(V_{\mathrm{rot}} = \frac{R_g}{R}V_c|R)\,. \end{equation} The probability $p(V_{\mathrm{rot}}|R)$ can be evaluated using the assumed Dehnen distribution function.
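The blurred MDF can be sampled directly under the stated assumptions. The sketch below is a simplification of the model in the text: it replaces the full Dehnen distribution function with a Gaussian in $V_{\mathrm{rot}}$ of dispersion $31.4$ km s$^{-1}$, assumes $V_c = 220$ km s$^{-1}$, and takes the quoted $-4$ as the skew-normal shape parameter; all of these are our assumptions for illustration:

```python
import numpy as np
from scipy import stats

V_C = 220.0      # assumed flat circular velocity [km/s]
SIGMA_V = 31.4   # velocity dispersion from the text [km/s]

def intrinsic_feh(r_g, rng):
    """Skew-normal intrinsic MDF: shape parameter -4, 0.1 dex scale,
    located at +0.4 dex at R_g = 3 kpc with a -0.1 dex/kpc gradient."""
    loc = 0.4 - 0.1 * (r_g - 3.0)
    return stats.skewnorm.rvs(a=-4.0, loc=loc, scale=0.1, random_state=rng)

def blurred_mdf(R, n=200_000, seed=2):
    """Sample [Fe/H] at radius R after blurring: R_g = R * V_rot / V_c,
    with p(V_rot|R) approximated as a Gaussian around V_c (a stand-in
    for the Dehnen distribution function used in the text)."""
    rng = np.random.default_rng(seed)
    v_rot = rng.normal(V_C, SIGMA_V, size=n)
    r_g = R * v_rot / V_C
    return intrinsic_feh(r_g, rng)

# Blurring broadens the MDF and dilutes the skewness with radius,
# but it does not flip the sign of the skewness.
skew_inner = stats.skew(blurred_mdf(5.0))
skew_outer = stats.skew(blurred_mdf(13.0))
```

Comparing `intrinsic_feh` at a fixed radius with `blurred_mdf` at the same radius corresponds to the dashed versus solid curves of Figure \ref{blur}.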
The resulting MDF is \begin{equation}\nonumber p([\mathrm{Fe/H}]|R) = \int \mathrm{d}R_g\,p([\mathrm{Fe/H}]|R_g)\,p_b(R_g|R)\,. \end{equation} In Figure \ref{blur} we compare the initial MDF and the MDF with the effects of blurring included. While blurring does reduce the observed skewness of the MDFs, the MDFs are still negatively skewed at all radii. This model is simplistic, and our underlying assumption regarding the intrinsic shape of the MDF may not be correct. However, because the intrinsic MDF is unlikely to have positive skewness anywhere in the Galaxy, it appears that blurring alone is unable to reproduce the change in sign of the MDF skewness seen in the APOGEE observations. \citet{Snaith2014a} find that solar neighborhood observations can be adequately explained by the blurring of inner and outer disk populations, but we conclude that such a model cannot explain the full trends with Galactocentric radius measured by APOGEE. \subsubsection{Radial Migration} We expand our simple model to include churning to determine if radial migration is better able to reproduce the observations. We model the effect of churning on the guiding radii by assuming a diffusion of initial guiding radii $R_{g,i}$ to final guiding radii $R_{g,f}$, for stars of age $\tau$, given by \begin{equation}\nonumber \begin{split} p(& R_{g,f}|R_{g,i},\tau) = \\ & \mathcal{N}\left(R_{g,f}|R_{g,i},0.01+0.2\,\tau\,R_{g,i}\,e^{-(R_{g,i}-8\,\mathrm{kpc})^2/16\,\mathrm{kpc}^2}\right)\,, \end{split} \end{equation} where $\mathcal{N}(\cdot|m,V)$ is a Gaussian with mean $m$ and variance $V$. The spread increases as the square root of time and the largest spread in guiding radius occurs around 8 kpc.
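A forward-sampling sketch of this churning kernel illustrates the mechanism. The birth-radius distribution (an exponential disk with a 3 kpc scale length), the uniform $0$--$10$ Gyr age distribution, and the omission of blurring and of the age--metallicity relation of the full model are our simplifying assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 1_000_000

# Birth guiding radii from an exponential disk, p(R) ∝ R exp(-R/3 kpc)
# (a Gamma(2, 3) distribution), and ages drawn uniformly over 0-10 Gyr.
r_gi = rng.gamma(shape=2.0, scale=3.0, size=n)
tau = rng.uniform(0.0, 10.0, size=n)

# Churning kernel from the text: Gaussian diffusion of guiding radii with
# variance 0.01 + 0.2 * tau * R_gi * exp(-(R_gi - 8)^2 / 16)  [kpc^2].
var = 0.01 + 0.2 * tau * r_gi * np.exp(-((r_gi - 8.0) ** 2) / 16.0)
r_gf = rng.normal(r_gi, np.sqrt(var))

# Intrinsic MDF at the birth radius: skew-normal, shape -4, 0.1 dex scale,
# located at +0.4 dex at R = 3 kpc with a -0.1 dex/kpc gradient.
loc = 0.4 - 0.1 * (r_gi - 3.0)
feh = stats.skewnorm.rvs(a=-4.0, loc=loc, scale=0.1, random_state=rng)

# Outer-disk MDF (R ~ 13 kpc): stars observed there vs. stars born there.
observed = feh[np.abs(r_gf - 13.0) < 0.5]
local_only = feh[np.abs(r_gi - 13.0) < 0.5]

# Inward-born migrators add a metal-rich tail, pushing the outer-disk
# skewness upward relative to the no-migration case.
skew_churned = stats.skew(observed)
skew_local = stats.skew(local_only)
```

This toy version only demonstrates the direction of the effect; the change of sign of the skewness in Figure \ref{churning} requires the full model described below, including blurring and the age--metallicity relation.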
Analysis of numerical simulations has found that migration can be accurately described as a Gaussian diffusion (e.g., \citealt{Brunetti2011,Kubryk2014,VeraCiro2014}); we use the simple analytic form above to approximate the effect of churning seen in these more realistic simulations. To obtain the churning distribution $p_c(R_g|R,\tau)$ of initial $R_g$ at a given radius $R$ and age $\tau$, we need to convolve $p(R_{g,f}|R_{g,i},\tau)$ with the blurring distribution $p_b(R_g|R)$ above: \begin{equation}\nonumber p_c(R_g|R,\tau) = \int \mathrm{d}R_{g,f}\, p_b(R_{g,f}|R)\,p(R_{g,f}|R_g,\tau)\,, \end{equation} where we have used the fact that churning leaves the total surface density approximately unchanged and therefore that $p(R_{g}|R_{g,f},\tau) \approx p(R_{g,f}|R_{g},\tau)$. In order to determine the MDF due to churning, we must also integrate over the age distribution at a given radius and therefore need (a) the metallicity as a function of age at a given initial radius and (b) the age distribution at every $R$. For the former, we assume that metallicity increases logarithmically with time at each radius, starting at [Fe/H]$=-0.9$ and rising to the peak of the skew-normal intrinsic MDF plus its dispersion; this relation is shown for a few radii in Figure \ref{agemetchurn}. We approximate the final age distribution at each radius as the initial age distribution, which is approximately the case in a more detailed calculation that takes the effects of churning into account. The initial age distribution $p(\tau|R)$ at each radius is simply a consequence of the assumed initial MDF and initial age--metallicity relation. \begin{figure*}[ht!] \centering \includegraphics[width=6.5in]{agemet.png} \caption{The age--metallicity relation at different radii for our simple model.
The blue line shows the metallicity of the ISM as a function of age at different radii, while the solid black lines denote the 1, 2, and 3$\sigma$ contours of the distribution of stars in the age--metallicity plane; the solid red line gives the mean metallicity as a function of age. The solar neighborhood and the outer disk both display a wide range of metallicities for all but the oldest stars. \vspace{0.35cm}} \label{agemetchurn} \end{figure*} We obtain the churning MDF as \begin{equation}\nonumber p([\mathrm{Fe/H}]|R) = \int \mathrm{d}R_g\, p_c(R_g|R,\tau)\,p(\tau|R)/\left|\mathrm{d}[\mathrm{Fe/H}]/\mathrm{d}\tau\right|\,, \end{equation} where the age $\tau$ is a function of $R_g$ and [Fe/H]. We evaluate this expression at different radii and obtain MDFs that include the effects of both blurring and churning, displayed in Figure \ref{churning}. With the addition of churning, we are able to reproduce the change in skewness observed in the MDFs across the plane of the disk and in particular the change in sign around $R=9$ kpc. Stars need to migrate at least 6 kpc to the outer disk and at least 3 kpc around the solar neighborhood to produce the observed change in skewness. The model used in this section is very simple, and its predictions do not match our observations perfectly; more detailed and realistic modeling is required that consistently takes into account the chemo-dynamical evolution of the disk. However, these tests demonstrate that blurring alone, as suggested by \citet{Snaith2014a}, is unable to reproduce our observations, and that the addition of churning to our model yields significantly better agreement with the observed MDF, in particular the change in skewness with radius. Therefore, the APOGEE MDFs indicate that migration is of global importance in the evolution of the disk.
In principle, the gas can undergo radial mixing as well, causing enrichment (or dilution) of the ISM at other locations in the Galaxy through processes such as outflows and Galactic fountains. However, in this case the gas must migrate before it forms stars, so the timescales are much shorter. Additionally, many processes that would cause gas to migrate, such as non-axisymmetric perturbations, will also induce stellar migration and therefore cannot be decoupled from the stars. It is likely that a combination of both gas and stellar migration is required to reproduce our observations in chemo-dynamical models for the Milky Way. \begin{figure}[t!] \centering \includegraphics[width=3.0in]{churning.png} \caption{The MDF as a function of $R$ with the inclusion of blurring and churning. The dashed lines show the initial MDF and the solid lines display the MDF including the effects of churning and blurring. The redistribution of guiding radii due to churning is able to significantly change the skewness compared to the initial MDFs and can explain the changes in skewness and its sign observed in the APOGEE sample.} \label{churning} \end{figure} Finally, we can also calculate the predicted age--metallicity relation at different locations throughout the disk from this model: \begin{equation}\nonumber p(\tau,[\mathrm{Fe/H}]|R) = p_c(R_g|R,\tau)\,p(\tau|R)/\left|\mathrm{d}[\mathrm{Fe/H}]/\mathrm{d}R_g\right|\,, \end{equation} where $R_g$ is the initial guiding radius corresponding to $\tau$ and [Fe/H]. The age--metallicity relation from the model is shown in Figure \ref{agemetchurn}. This relation is in qualitative agreement with observations of the age--metallicity relationship of the solar neighborhood (e.g., \citealt{Nordstrom2004,Haywood2013,Bergemann2014}). In our model, the outer disk has a wide range of metallicities at any age due to radial migration, in agreement with models from \citet{Minchev2014a}. \subsection{Vertical Structure: Heating vs.
Cooling} Our measurements confirm the well-established trend of enhanced [$\alpha$/Fe] in stars at greater distances from the plane, and they show that this trend with $|z|$ holds over radii $3<R<11$ kpc. At still larger radii, the fraction of high-[$\alpha$/Fe] stars is small at all values of $|z|$. The fact that stars with thin disk chemistry can be found at large distances above the plane in the outer disk can be explained by viewing the disk as composed of embedded flared mono-age populations, as suggested by \citet{Minchev2015}. The trend of [$\alpha$/Fe] with $|z|$ is particularly striking in the $3-5$ kpc annulus, where the stars lie along the sequence expected for a single evolutionary track but the dominant locus of stars shifts from low-[Fe/H] , high-[$\alpha$/Fe] at $1<|z|<2$ kpc to high-[Fe/H] , low-[$\alpha$/Fe] at $0<|z|<0.5$ kpc. Along an evolutionary track, both [Fe/H] and [$\alpha$/Fe] are proxies for age. Heating of stellar populations by encounters with molecular clouds, spiral arms, or other perturbations will naturally increase the fraction of older stars at greater heights above the plane simply because they have more time to experience heating. However, even more than previous studies of the solar neighborhood, the transition that we find is remarkably sharp, with almost no low-[$\alpha$/Fe] stars at $|z|>1$ kpc and a completely dominant population of low-[$\alpha$/Fe] stars in the midplane. Explaining this sharp dichotomy is a challenge for any model that relies on continuous heating of an initially cold stellar population. One alternative is a discrete heating event associated with a satellite merger or other dynamical encounter that occurred before SNIa enrichment produced decreasing [$\alpha$/Fe] ratios.
Another alternative is ``upside down'' evolution, in which the scale height of the star-forming gas layer decreases with time, in combination with flaring of mono-age populations with radius (e.g., \citealt{Minchev2015}), as the level of turbulence associated with vigorous star formation decreases (e.g., \citealt{Bournaud2009,Krumholz2012,Bird2013}). In this scenario, the absence of low-[$\alpha$/Fe] stars far from the plane implies that the timescale for the vertical compression of the disk must be comparable to the $1-2$ Gyr timescale of SNIa enrichment. This timescale appears at least roughly consistent with the predictions of cosmological simulations (e.g., Figure~18 of \citealt{Bird2013}). More generally, the measurements presented here of the spatial dependence of [$\alpha$/Fe] -[Fe/H] distributions and MDFs provide a multitude of stringent tests for models of the formation and the radial and vertical evolution of the Milky Way disk. \section{Conclusions} The solar neighborhood MDF has proven itself a linchpin of Galactic astronomy, enabling major advances in our understanding of chemical evolution (e.g., \citealt{VandenBergh1962}) in conjunction with stellar dynamics \citep{Schonrich2009a}. Galaxy formation models must ultimately reproduce the empirical MDF as well \citep{Larson1998}. In this paper, we report the MDF and $\alpha$-abundance distributions measured from 69,919 red giant stars observed by APOGEE across much of the disk ($3<R<15$ kpc, $0<|z|<2$ kpc). Our simple dynamical model reveals the exciting prospect that the detailed shape of the MDF is likely a function of the dynamical history of the Galaxy. Our conclusions are as follows: \begin{itemize} \item The inner and outer disk have very different stellar distributions in the [$\alpha$/Fe] vs. [Fe/H] plane.
The inner disk is well characterized by a single sequence, starting at low metallicities and high-[$\alpha$/Fe], and ending at solar-[$\alpha$/Fe] abundances with [Fe/H] $\sim+0.5$, while the outer disk lacks high-[$\alpha$/Fe] stars and is composed primarily of solar-[$\alpha$/Fe] stars. \item The scale-height of the inner Galaxy decreased with time, as the (older) metal-poor high-[$\alpha$/Fe] stars have large vertical scale-heights, while (younger) metal-rich solar-[$\alpha$/Fe] populations are confined to the midplane. \item The peak metallicity and skewness of the MDF are functions of location within the Galaxy: close to the plane, the inner disk has a super-solar metallicity peak with a negative skewness, while the outer disk is peaked at sub-solar metallicities and has a positive skewness. \item Models of the MDF as a function of $R$ that include blurring are unable to reproduce the observed change in skewness of the MDF with location. \item Models with migration included match our observations of the change in skewness; migration is likely to be an important mechanism in the observed structure of the disk. \end{itemize} M.R.H. and J.H. acknowledge support from NSF Grant AST-1109718, J.B. acknowledges support from a John N. Bahcall Fellowship and the W.M. Keck Foundation, D.L.N. was supported by a McLaughlin Fellowship at the University of Michigan, J.C.B. acknowledges the support of the Vanderbilt Office of the Provost through the Vanderbilt Initiative in Data-intensive Astrophysics (VIDA), D.H.W., B.A. and J.A.J. received partial support from NSF AST-1211853, S.R.M. acknowledges support from NSF Grant AST-1109718, T.C.B. acknowledges partial support for this work from grants PHY 08-22648; Physics Frontier Center/{}Joint Institute for Nuclear Astrophysics (JINA), and PHY 14-30152; Physics Frontier Center/{}JINA Center for the Evolution of the Elements (JINA-CEE), awarded by the US National Science Foundation. D.A.G.H. and O.Z.
acknowledge support provided by the Spanish Ministry of Economy and Competitiveness under grant AYA-2011-27754. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. \bibliographystyle{apj}
\section{Keywords:} neutron star, gravitational waves, gamma-ray burst, nucleosynthesis, r-process, kilonova \end{abstract} \section{Introduction}\label{sec:intro} Although the birth of ``multi-messenger'' astronomy dates back to the detection of the first solar neutrinos in the 1960s and was rejuvenated by the report of MeV neutrinos from SN1987A in the Large Magellanic Cloud, the detection of gravitational radiation from the binary neutron star merger of 17 August 2017 (GW170817) marks the transition to maturity of this approach to observational astrophysics, as it is expected to open an effective window into the study of astrophysical sources that is not limited to exceptionally close (the Sun) or rare (Galactic supernova) events. GW170817 is a textbook case for gravitational physics because, with its accompanying short gamma-ray burst (GRB) and afterglow, and its thermal aftermath (``kilonova''), it has epitomized the different epiphanies of the coalescence of a binary system of neutron stars, and finally allowed us to unify them. Owing its name to a typical peak luminosity of $\sim 10^{42}$ erg~s$^{-1}$, i.e. 1000 times larger than that of a typical nova outburst, a kilonova is the characteristic optical and infrared source accompanying a binary neutron star merger, powered by the radioactive decay of the many unstable isotopes of large atomic weight elements synthesized via rapid neutron capture in the promptly formed dynamical ejecta and in the delayed post-merger ejecta. Its evolution, as well as that of the GRB afterglow, was recorded in exquisite detail, thanks to its closeness (40 Mpc). The scope of this paper is to review the electromagnetic multi-wavelength observations of GW170817, with particular attention to the kilonova phenomenon.
The outline of the paper is as follows: Section \ref{sec:bnsm} sets the context of binary systems of neutron stars and describes the predicted outcomes of their coalescences; Section \ref{sec:gw170817} presents the case of GW170817, the only so far confirmed example of double neutron star merger, and the multi-wavelength features of its electromagnetic counterpart (short GRB and kilonova); Section \ref{sec:rddcyhel} focuses on the kilonova, elaborates on its observed optical and near-infrared light curves and spectra, draws the link with nucleosynthesis of heavy elements, and outlines the theoretical framework that is necessary to describe the kilonova properties and implications; Section \ref{sec:concl} summarizes the results and provides an outlook of this line of research in the near future. \section{Binary neutron star mergers}\label{sec:bnsm} Neutron stars are the endpoints of massive star evolution and therefore ubiquitous in the Universe: on average, they represent about 0.1\% of the total stellar content of a galaxy. Since massive stars are mostly in binary systems \citep{Sana2012}, neutron star binaries should form readily, if the supernova explosion of either progenitor massive star does not disrupt the system \citep{Renzo2019}. Alternatively, binary neutron star systems can form dynamically in dense environments like stellar clusters (see Ye et al., 2020, and references therein). Binary systems composed of a neutron star and a black hole are also viable, but rare \citep{Pfahl2005}, which may account for the fact that none has so far been detected in our Galaxy. The prototype binary neutron star system in our Galaxy is PSR B1913+16, where one member was detected as a pulsar in a radio survey carried out at the Arecibo observatory \citep{HulseTaylor1974}, and the presence of its companion was inferred from the periodic changes in the observed pulsation period of 59 ms \citep{HulseTaylor1975}.
Among the various tests of general relativity in the strong-field regime enabled by the radio monitoring of this binary system, which earned its discoverers the Nobel Prize in Physics in 1993, was the measurement of the shrinking of the binary orbit, signalled by the secular decrease of the 7.75-hour orbital period, which could be entirely attributed to energy loss via gravitational radiation (Taylor \& Weisberg, 1982; Weisberg \& Huang, 2016, and references therein). With an orbital decay rate of $\dot{P} = -2.4 \times 10^{-12}$ s s$^{-1}$, the merging time of the PSR B1913+16 system is $\sim$300 Myr. Following the detection of PSR B1913+16, another dozen binary neutron star systems have been detected in our Galaxy (e.g. Wolszczan, 1991; Burgay et al., 2003; Tauris et al., 2017; Martinez et al., 2017). Almost half of these have estimated merging times significantly shorter than a Hubble time. The campaigns conducted by the LIGO interferometers in Sep 2015-Jan 2016 (first observing run) and, together with Virgo, in Nov 2016-Aug 2017 (second observing run), the latter leading to the first detection of gravitational waves from a merging double neutron star system (see Section \ref{sec:gw170817}), constrained the local merger rate density to be 110-3840 Gpc$^{-3}$ yr$^{-1}$ \citep{Abbottapj2019}. This is consistent with previous estimates (see e.g. Burgay et al., 2003), and, under a series of assumptions, marginally consistent with independent estimates based on double neutron star system formation in the classical binary evolution scenario \citep{Chruslinska2018}. Ye et al. (2020) have estimated that the fraction of merging binary neutron stars that have formed dynamically in globular clusters is negligible. Under the assumption that the event detected by LIGO on 25 April 2019 was produced by a binary neutron star coalescence, the local rate of neutron star mergers would be updated to $250-2810$ Gpc$^{-3}$ yr$^{-1}$ \citep{Abbottapj2020a}.
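The quoted merging time of PSR B1913+16 can be checked to order of magnitude from the measured period and decay rate alone. The sketch below uses only the characteristic timescale $P/|\dot{P}|$; the exact merger time requires the general-relativistic formulae for an eccentric orbit (Peters 1964), so agreement is expected only at the factor-of-a-few level:

```python
# Order-of-magnitude inspiral timescale for PSR B1913+16 from the
# measured orbital period and its secular decay rate. This is only the
# characteristic timescale P/|dP/dt|, not the exact eccentric-orbit
# merger time.
P = 7.75 * 3600.0            # orbital period [s]
P_dot = 2.4e-12              # |dP/dt| [s/s], from the quoted decay rate
tau = P / P_dot              # characteristic decay timescale [s]
tau_myr = tau / 3.156e13     # convert to Myr (1 Myr ~ 3.156e13 s)
print(f"{tau_myr:.0f} Myr")  # a few hundred Myr, consistent with ~300 Myr
```

The result lands within a factor of a few of the quoted $\sim$300 Myr, as expected for a timescale estimate that ignores the orbital eccentricity.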
The merger of a binary neutron star system has four predicted outcomes: 1) a gravitational wave signal that is nearly isotropic, with a stronger intensity in the polar direction than in the equatorial plane; 2) a relativistic outflow, which is highly anisotropic and can produce an observable high energy transient; 3) a thermal, radioactive source emitting most of its energy at ultraviolet, optical and near-infrared wavelengths; 4) a burst of MeV neutrinos \citep{Eichler1989, RosswogLiebendoerfer2003} following the formation of the central remnant, and possibly of high-energy ($>$GeV) neutrinos from hadronic interactions within the relativistic jet \citep{FangMetzger2017,Kimura2018}. While neutrinos are extremely elusive and detectable only from very small distances with present instrumentation (see Section \ref{sec:concl}), the first three observables have now all been detected, as detailed in the next three sub-sections. \subsection{Gravitational waves}\label{sec:gws} Coalescing binary systems of degenerate stars and stellar mass black holes are optimal candidates for the generation of gravitational waves detectable from ground-based interferometers as the strong gravity conditions lead to huge velocities and energy losses (Shapiro \& Teukolsky, 1983), and the frequency of the emitted gravitational waves reaches several kHz, where the sensitivity of the advanced LIGO, Virgo and KAGRA interferometers is designed to be maximal \citep{Abbott2018}. The time behavior of binary systems of compact stars consists of three phases: a first inspiral phase in a close orbit that shrinks as gravitational radiation of frequency proportional to the orbital frequency is emitted, a merger phase where a remnant compact body is produced as a result of the coalescence of the two stars, and a post-merger, or ringdown, phase where the remnant still emits gravitational radiation while settling to its new stable configuration.
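At leading (quadrupole) order, the frequency evolution during the inspiral phase is governed by a single combination of the component masses, the chirp mass $\mathcal{M} = (m_1 m_2)^{3/5}/(m_1+m_2)^{1/5}$, via $\dot{f} = \frac{96}{5}\,\pi^{8/3}\,(G\mathcal{M}/c^3)^{5/3} f^{11/3}$, with $f$ the gravitational-wave frequency. A minimal sketch for two canonical $1.4\,M_\odot$ neutron stars (illustrative values, not parameters fitted to any event):

```python
import math

# Chirp mass of a binary and the leading-order gravitational-wave
# frequency "chirp" rate it implies. Masses are illustrative
# (two canonical 1.4 M_sun neutron stars).
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8          # speed of light [m/s]
M_sun = 1.989e30     # solar mass [kg]

def chirp_mass(m1, m2):
    """Chirp mass M_c = (m1*m2)**(3/5) / (m1+m2)**(1/5)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

m1 = m2 = 1.4 * M_sun
Mc = chirp_mass(m1, m2)
print(f"chirp mass = {Mc / M_sun:.2f} M_sun")   # ~1.22 M_sun

# Leading-order (quadrupole) frequency evolution: df/dt grows steeply
# with f, producing the characteristic "chirp" in the detector band.
f = 100.0  # GW frequency [Hz], inside the LIGO/Virgo sensitive band
f_dot = (96.0 / 5.0) * math.pi ** (8.0 / 3.0) \
        * (G * Mc / c ** 3) ** (5.0 / 3.0) * f ** (11.0 / 3.0)
print(f"df/dt at 100 Hz ~ {f_dot:.0f} Hz/s")
```

Because $\dot{f} \propto f^{11/3}$, the sweep through the band accelerates dramatically in the last seconds before merger, which is what makes the chirp mass the best-measured parameter of the system.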
During the inspiral, the amplitude of the sinusoidal gravitational signal rapidly increases as the distance between the two bodies decreases and the frequency increases (chirp), while in the ringdown phase the signal is an exponentially damped sinusoid. This final phase may encode critical information on the equation of state of the newly formed remnant (a black hole or, in the case of light neutron stars, a massive neutron star or metastable supramassive neutron star). The mathematical tool that is used to describe this evolution is the waveform model, that aims at reproducing the dynamics of the system through the application of post-Newtonian corrections of increasing order and at providing the essential parameters that can then be compared with the interferometric observations \citep{Blanchet2014,Nakano2019}. Since the amplitude of gravitational waves depends on the masses of the binary member stars, the signal will be louder, and thus detectable from larger distances, for binary systems that involve black holes than those with neutron stars. The current horizon for binary neutron star merger detection with LIGO is $\sim$200 Mpc, and 25-30\% smaller with Virgo and KAGRA (Abbott et al. 2018). The dependence of the gravitational waves amplitude on the physical parameters of the system implies that gravitational wave sources are standard sirens \citep{Schutz1986}, provided account is taken of the correlation between the luminosity distance and the inclination of the orbital plane with respect to the line of sight \citep{Nissanke2010,Abbott2016}. \subsection{Short gamma-ray bursts}\label{sec:sgrbs} GRBs, flashes of radiation of 100-1000 keV that outshine the entire Universe in this band, have durations between a fraction of a second and hundreds or even thousands of seconds. However, the duration distribution is bimodal, with a peak around 0.2 seconds (short or sub-second GRBs) and one around 20 seconds (long GRBs; Kouveliotou et al., 1993). 
This bimodality is reflected in the spectral hardness, which is on average larger in short GRBs, and in a physical difference between the two groups. While most long GRBs are associated with core-collapse supernovae \citep{Galama1998,WoosleyBloom2006,Levan2016}, sub-second GRBs are produced by the merger of two neutron stars or a neutron star and a black hole, as long predicted based on circumstantial evidence \citep{Blinnikov1984,Eichler1989,Fong2010,Berger2013,Tanvir2013} and then proven by the detection of GW170817 and of its high energy counterpart GRB170817A (Section \ref{sec:gw170817}). The observed relative ratio of long versus short GRBs depends on the detector sensitivity and effective energy band (e.g. Burns et al., 2016). However, the duration overlap of the two populations is very large, so that the minimum of the distribution has to be regarded as a rather vaguely defined value \citep{Bromberg2013}. About 140 short GRBs have so far been localized to a precision better than 10 arc-minutes\footnote{http://www.mpe.mpg.de/$\sim$jcg/grbgen.html}; of these, $\sim$100, $\sim$40 and $\sim$10 have a detected afterglow in X-rays, optical and radio wavelengths, respectively, and $\sim$30 have measured redshifts (these range between $z = 0.111$ and $z = 2.211$, excluding the nearby GRB170817A, see Section \ref{sec:grb170817a}, and GRB090426, $z = 2.61$, whose identification as a short GRB is not robust, Antonelli et al., 2009). Short GRBs are located at projected distances ranging from a fraction of a kiloparsec to several kiloparsecs from the centers of their host galaxies, which are of both early and late type, reflecting the long time delay between the formation of the short GRB progenitor binary systems and their mergers \citep{Berger2014}.
According to the classical fireball model, both prompt event and multi-wavelength afterglow of short GRBs are produced in a highly relativistic jet directed at a small angle with respect to the line of sight, whose aperture can be derived from the achromatic steepening (or ``jet break'') of the observed afterglow light curve \citep{Nakar2007}. In principle, this could be used to reconstruct the collimation-corrected rate of short GRBs, to be compared with predictions of binary neutron star merger rates. However, these estimates proved to be very uncertain, owing to the difficulty of accurately measuring the jet breaks in short GRB afterglows \citep{Fong2015,Jin2018,Wang2018,Lamb2019,Pandey2019}. \subsection{r-process nucleosynthesis}\label{sec:rprocess} Elements heavier than iron cannot form via stellar nucleosynthesis, as not enough neutrons are available for the formation of nuclei and temperatures are not sufficiently high to overcome the repulsive Coulomb barrier that prevents acquisition of further baryons into nuclei \citep{Burbidge1957}. Supernovae (especially the thermonuclear ones) produce large amounts of iron via decay (through $^{56}$Co) of radioactive $^{56}$Ni synthesized in the explosion. Heavier nuclei form via four neutron capture processes \citep{Thielemann2011}, the dominant ones being slow and rapid neutron capture, in brief s- and r-process, respectively, where ``slow'' and ``rapid'' refer to the timescale of neutron capture by the nucleus with respect to that of the competing process of $\beta^-$ decay. In the s-process, neutron captures occur with timescales of hundreds to thousands of years, making $\beta^-$ decay highly probable, while r-process neutron capture occurs on a timescale of $\sim$0.01 seconds, leading to acquisition of many neutrons before $\beta^-$ decay can set in.
As a consequence, the s-process produces less unstable, longer-lived isotopes, close to the so-called valley of $\beta$-stability (the decay time of a radioactive nucleus correlates inversely with its number of neutrons), while the r-process produces the heaviest, neutron-richest and most unstable isotopes of heavy nuclei, up to uranium \citep{Sneden2008,MennekensVanbeveren2014,Thielemann2017,Cote2018,Horowitz2019,Kajino2019,Cowan2020}. Among both s-process and r-process elements, some are particularly stable owing to their larger binding energies per nucleon, which causes their abundances to be relatively higher than others. In the abundance distribution in the solar neighbourhood these are seen as maxima (``peaks'') centered around atomic numbers $Z$ = 39 (Sr-Y-Zr), 57 (Ba-La-Ce-Nd), 82 (Pb) for the s-process and, correspondingly, at somewhat lower atomic numbers $Z$ = 35 (Se-Br-Kr), 53 (Te-I-Xe), 78 (Ir-Pt-Au) for the r-process (e.g. Cowan et al., 2020). Both s-process and r-process naturally occur in environments that are adequately supplied with large neutron fluxes. For the s-process, these are eminently asymptotic giant branch stars, where neutron captures are driven by the $^{13}$C($\alpha$, n)$^{16}$O and $^{22}$Ne($\alpha$, n)$^{25}$Mg reactions \citep{Busso1999}. The r-process requires much higher energy and neutron densities, which are only realized in the most physically extreme environments. While it can be excluded that big-bang nucleosynthesis can accommodate heavy-element formation in any significant amount \citep{Rauscher1994}, there is currently no consensus on the relative amounts of nucleosynthetic yields in the prime r-process candidate sites: core-collapse supernovae and mergers of binary systems composed of neutron stars or a neutron star and a black hole.
Core-collapse supernovae have been proposed starting many decades ago as sites of r-process nucleosynthesis through various mechanisms and in different parts of the explosion, including dynamical ejecta of prompt explosions of O-Ne-Mg cores \citep{Hillebrandt1976,Wheeler1998,Wanajo2002}; C+O layer of O-Ne-Mg-core supernovae \citep{Ning2007}; He-shell exposed to intense neutrino flux \citep{Epstein1988,Banerjee2011}; re-ejection of fallback material \citep{Fryer2006}; neutrino-driven wind from proto-neutron stars \citep{Woosley1994,Takahashi1994}; magnetohydrodynamic jets of rare core-collapse SNe \citep{Nishimura2006,Winteler2012}. Similarly old is the first proposal that the tidal disruption of neutron stars by black holes in close binaries \citep{Lattimer1974,Lattimer1976,SymbalistySchramm1982,Davies1994} and coalescences of binary neutron star systems \citep{Eichler1989} could be at the origin of r-process nucleosynthesis. This should manifest as a thermal optical-infrared source of radioactive nature of much lower luminosity (a factor of 1000) and shorter duration (rise time of a few days) than a supernova \citep{LiPaczynski1998}. The models for r-process element production in core-collapse supernovae all have problems inherent in their physics (mostly related to energy budget and neutron flux density). On the other hand, the binary compact star merger origin may fail to explain observed r-process element abundances in very low-metallicity stars, i.e. at very early cosmological epochs, owing to the non-negligible binary evolution times (see Cowan et al., 2020 for an accurate review of all arguments in favour of and against either channel).
While the event of 17 August 2017 (Section \ref{sec:gw170817}) has now provided incontrovertible evidence that binary neutron star mergers host r-process nucleosynthesis, the role of core-collapse supernovae cannot be dismissed, although their relative contribution with respect to the binary compact star channel must be assessed \citep{RamirezRuiz2015,Ji2016,Shibagaki2016,Cote2019,Safarzadeh2019,Simonetti2019}. It cannot be excluded that both ``weak'' and ``strong'' r-process nucleosynthesis occur, with the former taking place mainly in supernovae and possibly failing to produce atoms up to the third peak of the r-process elemental abundance distribution \citep{Cowan2020}. The hint that heavy elements may be produced in low-rate events with high yields \citep{Sneden2008,Wallner2015,MaciasRamirezRuiz2019} points to binary compact star mergers or very energetic (i.e. expansion velocities larger than 20000 km s$^{-1}$) core-collapse supernovae as progenitors, rather than regular core-collapse supernovae. Along these lines, it has been proposed that accretion disks of collapsars (the powerful core-collapse supernovae that accompany long GRBs, Woosley \& Bloom, 2006) produce neutron-rich outflows that synthesize heavy r-process nuclei \citep{Nakamura2013,Kajino2014,Nakamura2015}. \citet{Siegel2019} calculated that collapsars may supply more than 80\% of the r-process content and computed synthetic spectra for models of r-process-enriched supernovae corresponding to an MHD supernova and a collapsar disk outflow scenario.
Neutrons are tightly packed in neutron stars, but during the coalescence of a binary neutron star system tidal forces disrupt the stars, and the released material promptly forms a rotating disk-like structure (dynamical ejecta; Rosswog et al., 1999; Shibata \& Hotokezaka, 2019) where the neutron density rapidly drops to optimal values for r-process occurrence ($\sim 10^{24-32}$ neutrons~cm$^{-3}$, Freiburghaus et al., 1999) and for copious formation of neutron-rich stable and unstable isotopes of large atomic number elements \citep{FernandezMetzger2016,Tanaka2016,Tanaka2018,Wollaeger2018,Metzger2019}. \section{The binary neutron star merger of 17 August 2017} \label{sec:gw170817} On 17 August 2017, the LIGO and Virgo interferometers detected for the first time a gravitational signal that corresponds to the final inspiral and coalescence of a binary neutron star system \citep{Abbottprl2017a}. The sky uncertainty area associated with the event was 28 square degrees, in principle too large for a uniform search for an electromagnetic counterpart with ground-based and orbiting telescopes. However, its small distance ($40^{+8}_{-14}$ Mpc), estimated via the ``standard siren'' property of gravitational wave signals associated with binary neutron star mergers, suggested that the aftermath could be rather bright, and motivated a large-scale campaign at all wavelengths from radio to very high energy gamma-rays, which was promptly rewarded by success and then followed by a long and intensive monitoring campaign \citep{Abbottapj2017b,Abbottapj2017c}, as described in Section \ref{sec:gwemcp}. Searches for MeV-to-EeV neutrinos directionally coincident with the source, using data from Super-Kamiokande, ANTARES, IceCube, and the Pierre Auger Observatory between 500 seconds before and 14 days after the merger, returned no detections \citep{Albert2017,Abe2018}.
Based on the very detection of electromagnetic radiation, \cite{Bauswein2017} have argued that the merger remnant may not be a black hole or at least the post-merger collapse to a black hole may be delayed. Since the post-merger phase ("ring-down") signal of GW170817 was not detected \citep{Abbottapj2017e}, this hypothesis cannot be tested directly with gravitational data. \cite{Bauswein2017} also derived lower limits on the radii of the neutron stars. Notably, while the gravitational data made it possible to set an upper limit on the tidal-deformability parameter of the binary neutron stars ($\tilde\Lambda \lesssim 800$, Abbott et al. 2017a), the optical observation of kilonova ejecta limited the same parameter from below ($\tilde\Lambda \gtrsim 400$, Radice et al. 2018), based on the consideration that for smaller values of $\tilde\Lambda$ a long-lived remnant would not be favoured, contradicting the result of \citet{Bauswein2017}. The limits on the $\tilde\Lambda$ parameter constrain the neutron star radius to the range 11.8 km $\lesssim R_{1.5} \lesssim 13.1$ km, where $R_{1.5}$ refers to a 1.5 $M_{\odot}$ neutron star \citep{Burgio2018}, and in turn confine the possible ensemble of viable equations of state \citep{Annala2018,LimHolt2018}, a fundamental, yet poorly known descriptor of neutron star physics \citep{OzelFreire2016}. Furthermore, by circumscribing the number of equations of state of the compact stars, their exploration can be brought beyond nucleonic matter, and extended to scenarios of matter presenting a phase transition \citep{Burgio2018,Most2018}. The results on the tidal-deformability of the neutron star progenitors of GW170817 and on the behavior of the remnant thus provide a brilliant confirmation of the added value of a multi-messenger approach over separate observations of individual carriers of information. 
\subsection{The electromagnetic counterpart of GW170817}\label{sec:gwemcp} Independently of the LIGO-Virgo detection of the gravitational wave signal, the Gamma-ray Burst Monitor (GBM) onboard the NASA {\it Fermi} satellite and the Anticoincidence Shield for the gamma-ray Spectrometer (SPI) of the {\it International Gamma-Ray Astrophysics Laboratory} ({\it INTEGRAL}) satellite were triggered by a faint short GRB (duration of $\sim$2 seconds), named GRB170817A \citep{Abbottapj2017b,Goldstein2017,Savchenko2017}. This gamma-ray transient, whose large error box was compatible with that determined by LIGO-Virgo, lags the gravitational merger by 1.7 seconds, a delay that may be dominated by the propagation time of the jet to the gamma-ray production site (Beniamini et al., 2020; see however Salafia et al., 2018). The preliminary estimate of the source distance provided a crucial constraint on the maximum distance of the galaxy that could plausibly have hosted the merger, so that the search strategy was based on targeting galaxies within a $\sim$50 Mpc cosmic volume (see e.g. Gehrels et al., 2016) with telescopes equipped with large (i.e. several square degrees) field-of-view cameras. About 70 ground-based optical telescopes participated in the hunt and each of them adopted a different pointing sequence. This systematic approach enabled many groups to identify the optical counterpart candidate in a timely manner (with optical magnitude $V \simeq 17$), i.e. within $\sim$12 hours of the merger \citep{Arcavi2017,Lipunov2017,SoaresSantos2017,Valenti2017,Tominaga2018}. \cite{Coulter2017} were the first to report a detection with the optical 1m telescope Swope at Las Campanas Observatory. The optical source lies at 10 arc-seconds angular separation, corresponding to a projected distance of $\sim$2 kpc, from the center of the spheroidal galaxy NGC~4993 at 40 Mpc \citep{Blanchard2017,Im2017,Levan2017,Pan2017,Tanvir2017}.
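As a back-of-the-envelope check on the quoted numbers (ignoring extinction and bolometric corrections), an apparent magnitude $V \simeq 17$ at 40 Mpc corresponds to an absolute magnitude intermediate between a nova and a supernova, as expected for a kilonova:

```python
import math

# Distance-modulus check: a counterpart at V ~ 17 mag and d ~ 40 Mpc
# implies an absolute magnitude roughly midway between a nova (~ -8)
# and a Type Ia supernova (~ -19). Rough sketch; extinction ignored.
V = 17.0           # apparent magnitude at discovery (quoted in the text)
d_pc = 40e6        # distance in parsecs (40 Mpc)
mu = 5.0 * math.log10(d_pc / 10.0)   # distance modulus m - M
M_V = V - mu
print(f"M_V ~ {M_V:.1f}")            # about -16
```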
Rapid follow-up of the gravitational wave and GRB signal in X-rays did not show any source comparable to, or brighter than, a typical afterglow of a short GRB. Since both the gravitational data and the faintness of the prompt GRB emission suggested a jet viewed significantly off-axis, this could be expected, as the afterglows from misaligned GRB jets have longer rise-times than those of jets observed at small viewing angles \citep{VanEertenMacFadyen2011}. Therefore, X-ray monitoring with {\it Swift}/XRT, {\it Chandra} and {\it NuSTAR} continued, and $\sim$10 days after merger led to the detection with {\it Chandra} of a faint source ($L_X \simeq 10^{40}$ erg~s$^{-1}$) \citep{Evans2017,Margutti2017,Troja2017}, whose intensity continued to rise up to $\sim$100 days \citep{Davanzo2018,Troja2020}. Similarly, observations at cm and mm wavelengths at various arrays, including VLA and ALMA, failed to detect the source before $\sim$16 days after the gravitational signal, which was interpreted as evidence that a jetted source accompanying the binary neutron star merger must be directed at a significant angle ($\ge$20 degrees) with respect to the line of sight \citep{Alexander2017,Andreoni2017,Hallinan2017,Kim2017,Pozanenko2018}. The {\it Fermi} Large Area Telescope covered the sky region of GW170817 starting only 20 minutes after the merger, and did not detect any emission in the energy range 0.1-1 GeV to a limiting flux of $4.5 \times 10^{-10}$ erg~s$^{-1}$~cm$^{-2}$ in the interval 1153-2027 seconds after the merger \citep{Ajello2018}. Follow-up observation with the atmospheric Cherenkov experiment H.E.S.S. (0.3-8 TeV) from a few hours to $\sim$5 days after merger returned no detection to a limit of a few 10$^{-12}$ erg~s$^{-1}$~cm$^{-2}$ \citep{Abdalla2017}. A summary of the results of the multi-wavelength observing campaign within the first month of gravitational wave signal detection is reported in \cite{Abbottapj2017c}.
While the radio and X-ray detections are attributed to the afterglow of the short GRB, the ultraviolet, optical and near-infrared data are dominated by the kilonova at early epochs (with a possible contribution at $\lesssim$4 days at blue wavelengths from cooling of shock-heated material around the neutron star merger, Piro \& Kollmeier, 2018), and later on by the afterglow, as described in the next two Sections. \subsubsection{The gamma-ray burst and its multi-wavelength afterglow}\label{sec:sgrbag} \label{sec:grb170817a} The short GRB170817A, with an energy output of $\sim 10^{46}$ erg, was orders of magnitude dimmer than most short GRBs \citep{Berger2014}. Together with a viewing angle of $\sim$30 deg estimated from the gravitational wave signal \citep{Abbottprl2017a}, this led to the hypothesis that the GRB was produced by a relativistic jet viewed at a comparable angle. However, the early light curve of the radio afterglow is not consistent with the behavior predicted for an off-axis collimated jet and rather suggests a quasi-spherical geometry, possibly with two components, a more collimated one and a nearly isotropic and mildly relativistic one, which is responsible also for producing the gamma-rays \citep{Mooley2018a}. This confirms numerous predictions whereby the shocked cloud surrounding a binary neutron star merger forms a mildly relativistic cocoon that carries an energy comparable to that of the jet and is responsible for the prompt emission and the early multi-wavelength afterglow \citep{Lazzati2017a,Lazzati2017b,NakarPiran2017,Bromberg2018,Xie2018}, and is supported by detailed numerical simulations \citep{Lazzati2018,Gottlieb2018}. Using milli-arcsecond resolution radio VLBI observations at 75 and 230 days Mooley et al. (2018b) detected superluminal motion with $\beta = 3-5$, while Ghirlanda et al. 
(2019) determined that, at 207 days, the source was still angularly smaller than 2 milli-arcseconds at the 90\% confidence level, which excludes the possibility that a nearly isotropic, mildly relativistic outflow is responsible for the radio emission, as in this case the source's apparent size, after more than six months of expansion, should be significantly larger and resolved by the VLBI observation. These observations point to a structured jet as the source of GRB170817A, with a narrow opening angle ($\theta_{op} \simeq 3.4$ degrees) and an energetic core ($\sim 3 \times 10^{52}$ erg) seen under a viewing angle of $\sim$15 degrees \citep{Ghirlanda2019}. This is further confirmed by later radio observations extending up to 300 days after merger, which show a sharp downturn of the radio light curve, suggestive of a jet rather than a spherical source \citep{Mooley2018c}. The optical/near-infrared kilonova component subsided rapidly (see Section \ref{sec:kn}), leaving room for the afterglow emission: the HST observations at $\sim$100 days after the explosion show a much brighter source than inferred from the extrapolation of the early kilonova curve to that epoch \citep{Lyman2018}. This late-epoch flux is thus not consistent with kilonova emission and rather due to the afterglow produced within an off-axis structured jet \citep{Fong2019}. At X-ray energies, the GRB counterpart is still detected with {\it Chandra} three years after explosion \citep{Troja2020}, but its decay is not fully compatible with a structured jet, indicating that the physical conditions have changed or that an extra component is possibly emerging (e.g. a non-thermal aftermath of the kilonova ejecta, see next Section).
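The superluminal motion reported by Mooley et al. (2018b) follows from the standard projection formula for a relativistic source, $\beta_{\rm app} = \beta\sin\theta/(1-\beta\cos\theta)$. A minimal sketch with illustrative values (the Lorentz factor of 4 and viewing angle of 17 degrees are assumptions chosen to land in the quoted $\beta_{\rm app} = 3-5$ range, not fitted parameters):

```python
import math

# Apparent transverse speed of a relativistic jet component in units
# of c: beta_app = beta * sin(theta) / (1 - beta * cos(theta)).
# A component moving close to c at a small viewing angle appears to
# move superluminally on the sky. Input values are illustrative only.
def beta_apparent(gamma, theta_deg):
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)   # speed in units of c
    th = math.radians(theta_deg)
    return beta * math.sin(th) / (1.0 - beta * math.cos(th))

print(f"beta_app ~ {beta_apparent(4.0, 17.0):.1f}")  # ~3.8, superluminal
```

Even a modest Lorentz factor thus suffices to produce the observed apparent superluminal motion, provided the jet is seen at a small viewing angle.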
\subsubsection{The kilonova}\label{sec:kn} The early ground-based optical and near-infrared and space-based (with {\it Swift}/UVOT) near-ultraviolet follow-up observations, started immediately after identification of the optical counterpart of GW170817, detected a rapid rise ($\sim$1 day timescale, Arcavi et al. 2017) and a wavelength-dependent time decay, quicker at shorter wavelengths \citep{Andreoni2017,Cowperthwaite2017,Diaz2017,Drout2017,Evans2017, McCully2017,Nicholl2017,Tanvir2017,Utsumi2017,Villar2017}. The optical light is polarized at the very low level of ($0.50 \pm 0.07$)\% at 1.46 days, consistent with intrinsically unpolarized emission scattered by Galactic dust, indicating that no significant alignment effect in the emission or geometric preferential direction is present in the source at this epoch, consistent with expectations for kilonova emission \citep{Covino2017}. Starting on the same night the optical counterpart was detected, low resolution spectroscopy was carried out at the Magellan telescope \citep{Shappee2017}. This spectrum shows that the source is not yet transparent, as it is emitting black body radiation, whose maximum lies however blue-ward of the sampled wavelength range, suggesting that the initial temperature may have been higher than $\sim$10000~K. The following night (1.5 days after merger) the spectrum is still described by an almost perfect black body law at $\sim$5000 K, whose maximum was fully resolved by spectroscopy at the Very Large Telescope (VLT) with the X-Shooter spectrograph over the wavelength range 3500-24000 \AA\ \citep{Pian2017}.
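The consistency of the quoted black-body temperatures with the sampled wavelength ranges can be checked with Wien's displacement law; a minimal sketch (the displacement constant is the textbook value, not taken from the source):

```python
# Wien's displacement law: lambda_max = b / T, with b ~ 2.898e-3 m K.
WIEN_B = 2.898e-3  # m K

def peak_wavelength_angstrom(T_kelvin):
    """Blackbody peak wavelength in Angstrom for temperature T (K)."""
    return WIEN_B / T_kelvin * 1e10

# At ~5000 K the peak (~5800 A) falls inside the X-Shooter range
# (3500-24000 A), while at >10000 K it moves blueward of ~2900 A,
# outside the sampled range -- as described in the text.
print(peak_wavelength_angstrom(5000.0))
print(peak_wavelength_angstrom(10000.0))
```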
At this epoch, the expansion velocity of the expelled ejecta, whose total mass was estimated to be 0.02-0.05 M$_\odot$ \citep{Pian2017,Smartt2017,Waxman2018}, was $\sim$20\% of the light speed, which is only mildly relativistic and therefore much less extreme than the ultra-relativistic kinematic regime of the GRB and of its early afterglow, analogous to the observed difference between the afterglows and the supernovae accompanying long GRBs. At 2.5 days after merger the spectrum starts deviating from a black body as the ejecta become increasingly transparent and absorption lines are imprinted on the spectral continuum by the atomic species present in the ejecta \citep{Chornock2017,Pian2017,Smartt2017}. In the following days these features become prominent and evolve as the ejecta decelerate and the photosphere recedes (Figure \ref{fig:1}). In particular, in the spectrum at day 1.5 an absorption feature extending from $\sim$7000 \AA\ to $\sim$8100 \AA\ is detected, which \citet{Smartt2017} preliminarily identified with atomic transitions occurring in neutral Cs and Te, broadened and blueshifted by $\sim 0.2c$, consistent with the expansion velocity of the photosphere. In the second spectrum (2.5 days) the Cs {\sc I} and Te {\sc I} lines are still detected at somewhat longer wavelengths, consistent with a reduced photospheric expansion speed. These identifications were, however, later disproved on the grounds that, at the temperature of the ejecta immediately below the photosphere ($\sim$3700 K), numerous transitions of other lanthanide elements of higher ionisation potential should be observed besides Cs and Te, but are not \citep{Watson2019}.
\citet{Watson2019} reanalysed the absorption feature observed at 7000-8100 \AA\ and an absorption feature at $\sim$3500 \AA\ with the aid of local thermodynamic equilibrium models with abundances from a solar-scaled r-process and from metal-poor stars, and determined that the absorption features can be identified with Sr {\sc II}. In the spectra at successive epochs the line at the longer wavelength is still detected and develops a P Cygni profile. Strontium is a very abundant element and is produced close to the first r-process peak. Its possible detection makes it important to consider lighter r-process elements in addition to the lanthanides in shaping the kilonova emission spectrum \citep{Watson2019}. At $\sim$10 days after merger, the kilonova spectrum fades out of the reach of the largest telescopes. The radioactive source could still be monitored photometrically for another week in the optical and near-infrared \citep{Cowperthwaite2017,Drout2017,Kasliwal2017,Pian2017,Smartt2017,Tanvir2017}; it was last detected at 4.5 $\mu$m with the {\it Spitzer} satellite 74 days post merger \citep{Villar2018}. The kilonova ejecta are also expected to interact with the circum-binary medium and produce low-level radio and X-ray emission that peaks years after the merger \citep{Kathirgamaraju2019}. Searches for this component have not yet returned a detection at radio wavelengths \citep{Hajela2019}, but it may start to be revealed at X-rays \citep{Troja2020}. \subsubsection{The host galaxy of GW170817}\label{sec:hostgal} HST and {\it Chandra} images, combined with VLT MUSE integral field spectroscopy of the optical counterpart of GW170817, show that its host galaxy, NGC 4993, is a lenticular (S0) galaxy at $z = 0.009783$ that has undergone a recent ($\sim$1 Gyr) galactic merger \citep{Levan2017,Palmese2017}. This merger may be responsible for igniting weak nuclear activity.
No globular or young stellar cluster is detected at the location of GW170817, with a limit of a few thousand solar masses for any young system. The population in the vicinity is predominantly old and the extinction from the local interstellar medium is low. Based on these data, the distance of NGC4993 was determined to be ($41.0 \pm 3.1$) Mpc \citep{Hjorth2017}. The HST imaging also made it possible to establish the distance of NGC4993 through the surface brightness fluctuation method with an uncertainty of $\sim$6\% ($40.7 \pm 1.4 \pm 1.9$ Mpc, random and systematic errors, respectively), making it the most precise distance measurement for this galaxy \citep{Cantiello2018}. Combining this with the recession velocity measured from optical spectroscopy of the galaxy, corrected for peculiar motions, returns a Hubble constant $H_0 = 71.9 \pm 7.1$ km~s$^{-1}$~Mpc$^{-1}$. Based only on the gravitational data and the standard siren argument, and assuming that the optical counterpart represents the true sky location of the gravitational-wave source instead of marginalizing over a range of potential sky locations, \cite{Abbottnat2017d} determined a ``gravitational'' distance of $43.8^{+2.9}_{-6.9}$ Mpc that is refined with respect to the one previously reported in \cite{Abbottprl2017a}. Together with the corrected recession velocity of NGC4993 this yields a Hubble constant $H_0 = 70^{+12}_{-8}$ km~s$^{-1}$~Mpc$^{-1}$, comparable to, but less precise than, that obtained from the superluminal motion of the radio counterpart core, $H_0 = 70.3^{+5.3}_{-5.0}$ km~s$^{-1}$~Mpc$^{-1}$ \citep{Hotokezaka2019}. \section{Kilonova light curve and spectrum}\label{sec:rddcyhel} The unstable isotopes formed during coalescence of a binary neutron star system decay radioactively, and the emitted gamma-ray photons are down-scattered to the ultraviolet, optical and infrared thermal radiation that constitutes the kilonova source (Section \ref{sec:kn}).
Its time decline is determined by the convolution of the radioactive decay curves of all unstable nuclei present. This is analogous to the supernova phenomenon, where however the vastly dominant radioactive chain is $^{56}$Ni decaying into $^{56}$Co, and then into $^{56}$Fe. While radioactive nuclei decay, atoms recombine as the source cools, and absorption features are imprinted in the kilonova spectra. Among neutron-rich nuclei, the lanthanide series (atomic numbers 57 to 71) has open f-shells and therefore numerous atomic transitions that suppress the spectrum at shorter wavelengths ($\lesssim$ 8000 \AA). Spectra of the dynamical ejecta of kilonovae may therefore be heavily intrinsically reddened, depending on the relative abundance of lanthanides \citep{BarnesKasen2013,Kasen2013,TanakaHotokezaka2013}. Prior to the clear detection of the kilonova accompanying GW170817 (Section \ref{sec:gw170817}), such a source may have been detected in HST images in the near-infrared H band of the afterglow of GRB130603B \citep{Berger2013,Tanvir2013}. Successive claims for the association of short GRBs with kilonova radiation were similarly uncertain \citep{Jin2015,Jin2016}. If the neutron star coalescence does not instantaneously produce a black hole, and a hypermassive neutron star forms as a transitory remnant, a neutrino wind is emitted that may reduce the neutron abundance and hence the amount of neutron-rich elements \citep{FernandezMetzger2013,Kajino2014,Kiuchi2014,MetzgerFernandez2014,Perego2014,Kasen2015,Lippuner2017}. This ``post-merger'' kilonova component, of preferentially polar direction, is thus relatively poor in lanthanides and gives rise to a less reddened spectrum \citep{Tanaka2017,Kasen2017}.
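The convolution of decay curves described above can be illustrated with the two-step Bateman solution for a parent--daughter chain, using the $^{56}$Ni $\to$ $^{56}$Co $\to$ $^{56}$Fe analogy mentioned in the text; the half-lives below are standard laboratory values and the normalization is arbitrary:

```python
import math

# Bateman solution for A -> B -> C (C stable): number of daughter (B)
# nuclei vs time, starting from N_A(0) = N0 and N_B(0) = 0.
def daughter_abundance(t_days, t_half_A, t_half_B, N0=1.0):
    lam_A = math.log(2) / t_half_A
    lam_B = math.log(2) / t_half_B
    return (N0 * lam_A / (lam_B - lam_A)
            * (math.exp(-lam_A * t_days) - math.exp(-lam_B * t_days)))

# 56Ni (t_1/2 ~ 6.1 d) -> 56Co (t_1/2 ~ 77.2 d) -> 56Fe (stable):
# the 56Co population peaks a few weeks after explosion, then its decay
# powers the late-time decline of the supernova light curve.
for t in (1, 10, 30, 100):
    print(t, daughter_abundance(t, 6.1, 77.2))
```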
The optical/near-infrared spectral behavior of kilonovae is analogous to that of supernovae with the largest kinetic energies ($> 10^{52}$ erg), like those associated with GRBs: the large photospheric velocities broaden the absorption lines and blueshift them in the direction of the observer. Furthermore, broadening causes the lines to blend, which makes it difficult to isolate and identify individual atomic species \citep{Iwamoto1998,Mazzali2000,Nakamura2001}. While these effects can be controlled and de-convolved with the aid of a radiation transport model, as has been done for supernovae of all types \citep{Mazzali2016,Hoeflich2017,Ergon2018,HillierDessart2019,Shingles2020,AshallMazzali2020}, a more fundamental hurdle in modelling kilonova spectra consists in the much larger number of electronic transitions occurring in r-process element atoms than in the lighter ones that populate supernova ejecta, and in our extremely limited knowledge of the individual atomic opacities of these neutron-rich elements, owing to the lack of suitable atomic data. The first systematic atomic structure calculations for lanthanides and for all r-process elements were presented by Fontes et al. (2020) and Tanaka et al. (2020), respectively. \section{Summary and future prospects}\label{sec:concl} The gravitational and electromagnetic event of 17 August 2017 provided the long-awaited confirmation that binary neutron star mergers are responsible for well-identifiable gravitational signals at kHz frequencies, for short GRBs, and for thermal sources, a.k.a. kilonovae or macronovae, produced by the radioactive decay of unstable heavy elements synthesized via the r-process during the coalescence. The intensive and long-term electromagnetic monitoring from ground and space allowed clear detection of the counterpart at all wavelengths. Brief ($\sim$2 s) gamma-ray emission, peaking at $\sim$200 keV and lagging the gravitational signal by 1.7 seconds, is consistent with a weak short GRB.
At ultraviolet-to-near-infrared wavelengths, the kilonova component -- never before detected at this level of accuracy and robustness -- dominates during the first 10 days and decays rapidly below the detection threshold thereafter, while an afterglow component emerges around day $\sim$100. Up to the most recent epochs of observation (day $\sim$1000 at X-rays), the kilonova does not add significantly to the bright radio and X-ray afterglow component. Multi-epoch VLBI observations measured -- for the first time in a GRB -- superluminal motion of the radio source, thus providing evidence of the late-epoch emergence of a collimated off-axis relativistic jet. Doubtlessly, this series of breakthroughs was made possible by the closeness of the source (40 Mpc), almost unprecedented for GRBs, and by the availability of first-class ground-based and space-borne instruments. The many findings and exceptional new physical insight afforded by GW170817/GRB170817A make it a {\it Rosetta stone} for future similar events. When a sizeable sample of sources with good gravitational and electromagnetic detections becomes available, the properties of binary systems containing at least one neutron star, of their mergers, and of their aftermaths can be mapped.
It will then become possible to clarify how the dynamically ejected mass depends on the binary system parameters, mass asymmetry and neutron star equation of state \citep{RuffertJanka2001,Hotokezaka2013}; how the jet forms and evolves, which kinematic regimes and geometry it takes up in time, and how the observed GRB and afterglow phenomenology can help distinguish intrinsic properties from viewing-angle effects \citep{Janka2006,LambKobayashi2018,IokaNakamura2019}; what the detailed chemical content of kilonova ejecta is and how the r-process abundance pattern inferred from kilonova spectra compares with the history of cosmic enrichment in heavy elements \citep{Rosswog2018}; how kilonovae can help constrain binary neutron star rates and how the parent population of short GRBs evolves \citep{GuettaStella2009,Yang2017,Belczynski2018,Artale2019,MatsumotoPiran2020}; and how gravitational and electromagnetic data can be used jointly to determine the cosmological parameters \citep{Schutz1986,DelPozzo2012,Abbottnat2017d}, to mention only some fundamental open problems. Comparison of the optical and near-infrared light curves of the GW170817 kilonova with those of short GRBs with known redshift in fact suggests significant diversity in kilonova luminosities \citep{Gompertz2018,Rossi2020}. Regrettably, short GRBs viewed at random angles, rather than pole-on, are relativistically beamed away from the observer direction, and kilonovae are intrinsically weak. These circumstances make electromagnetic detections very difficult if the sources lie at more than $\sim$100 Mpc, as proven during the third and latest observing run (Apr 2019 - Mar 2020) of the gravitational interferometer network.
In this observing period, two merger events possibly involving neutron stars were reported by the LIGO-Virgo consortium: GW190425, caused by the coalescence of two compact objects with masses each in the range 1.12--2.52 M$_\odot$, at $\sim$160 Mpc \citep{Abbottapj2020a}, and GW190814, caused by a 23 M$_\odot$ black hole merging with a compact object of 2.6 M$_\odot$ at $\sim$240 Mpc \citep{Abbottapj2020b}. In neither case did the search for an optical or infrared counterpart return a positive result \citep{Coughlin2019,Gomez2019,Ackley2020,Andreoni2020,Antier2020,Kasliwal2020}, owing presumably to the large distances and sky error areas, although a short GRB may have been detected by the {\it INTEGRAL} SPI-ACS simultaneously with GW190425 \citep{Pozanenko2019}. Note that all the coalescing stars may have been black holes, as the neutron star nature of the binary members lighter than 3 M$_\odot$ could not be confirmed. The search for electromagnetic counterparts of gravitational radiation signals is currently thwarted primarily by the large uncertainty of their localization in the sky, which is usually no more accurate than several tens of square degrees. Much smaller error boxes are expected to become available when the KAGRA (which had already joined LIGO-Virgo in the last months of the 2019-2020 observing run) and INDIGO interferometers operate at full capacity as part of the network during the next observing run \citep{Abbott2018}. Observing modes, strategies, and simulations are being implemented to optimize the electromagnetic multi-wavelength search and follow-up \citep{Bartos2016,Patricelli2018,Cowperthwaite2019,Graham2019,Artale2020}, and new dedicated space-based facilities are being designed with the critical capabilities of large sky area coverage and rapid turnaround (e.g. {\it ULTRASAT}, Sagiv et al., 2014; {\it THESEUS}, Amati et al., 2018, Stratta et al., 2018; {\it DORADO}, Cenko et al.
2019), to maximize the chance of detection of dim, fast-declining transients. Finally, the possible detection of elusive MeV and $>$GeV neutrinos associated with the kilonova \citep{KyutokuKashiyama2018} and with the GRB \citep{Bartos2019,Aartsen2020}, respectively, will bring an extra carrier of information into play, and thus complete the multi-messenger picture associated with the binary neutron star merger phenomenon. Gravitational waves from binary neutron star inspirals and mergers; gamma-ray photons -- downscattered to UV/optical/infrared light -- from the radioactive decay of unstable nuclides of heavy elements, freshly formed after the merger; multi-wavelength photons from non-thermal mechanisms in the relativistic jet powered by the merger remnant; and thermal and high-energy neutrinos accompanying the remnant cooling and hadronic processes in the jet, respectively: all collectively underpin the role of the four fundamental interactions. This fundamental role of compact star merger phenomenology thus points to the formidable opportunity offered by a multi-messenger approach: bringing the communities of astrophysicists and nuclear physicists closer will foster the cross-fertilization and inter-disciplinary approach that is not only beneficial, but essential, for progress in this field. \section*{Acknowledgments} The author is indebted to T. Belloni, S. Cristallo, Th. Janka, P. Mazzali, A. Possenti, M. Tanaka, and F. Thielemann for discussion, and to the reviewers, F. Burgio and T. Kajino, for their critical comments and suggestions.
She acknowledges hospitality from Liverpool John Moores University; Weizmann Institute of Science, Rehovot, and the Hebrew University of Jerusalem, Israel; National Astronomical Observatory of Japan, Tokyo; Beihang University, Beijing, and Yunnan National Astronomical Observatory, Kunming, China; Max-Planck Institute for Astrophysics and Munich Institute for Astro- and Particle Physics, Garching, Germany, where part of this work was accomplished. \section*{Data Availability Statement} The ESO VLT X-Shooter spectra reported in Figure \ref{fig:1}, first published in Pian et al. (2017) and Smartt et al. (2017), are available in the Weizmann Interactive Supernova Data Repository (https://wiserep.weizmann.ac.il; Yaron \& Gal-Yam 2012).
\section{\label{Introduction}Introduction} Understanding what controls $T_{\rm c}$~in high-temperature superconductors remains a major challenge. Several studies suggest that, in contrast to cuprates, where chemical substitution controls the electron concentration, the dominant effect of chemical substitution in iron-based superconductors is to tune the structural parameters -- such as the As-Fe-As bond angle -- which in turn control $T_{\rm c}$. \cite{zhao_structural_2008, rotter_superconductivity_2008} This idea is supported by the parallel tuning of $T_c$ and the structural parameters of the 122 parent compounds BaFe$_2$As$_2$ and SrFe$_2$As$_2$. \cite{kimber_similarities_2009, alireza_superconductivity_2009} In the case of Ba$_{1-x}$K$_{x}$Fe$_2$As$_2$, at optimal doping ($x=0.4$, $T_{\rm c}$~=~38~K) the As-Fe-As bond angle is $\alpha=109.5\,^{\circ}$, the ideal angle of an undistorted FeAs$_4$ tetrahedral coordination. Underdoping, overdoping, or pressure would tune the bond angle away from this ideal value and reduce $T_{\rm c}$~by changing the electronic bandwidth and the nesting conditions. \cite{kimber_similarities_2009} CsFe$_2$As$_2$~is an iron-based superconductor with $T_{\rm c}$~=~1.8~K and $H_{\rm c2}$~=~1.4~T. \cite{sasmal_superconducting_2008, wang_calorimetric_2013, hong_nodal_2013} Based on the available X-ray data, \cite{sasmal_superconducting_2008} the As-Fe-As bond angle in CsFe$_2$As$_2$~is $109.58\,^{\circ}$, close to the ideal bond angle that yields $T_{\rm c}$~=~38~K in optimally-doped Ba$_{0.6}$K$_{0.4}$Fe$_2$As$_2$. If the bond angle were the key tuning factor for $T_{\rm c}$, CsFe$_2$As$_2$~should have a much higher transition temperature than 1.8 K. In this article, we show evidence that $T_{\rm c}$~in (K,Cs)Fe$_2$As$_2$ may be controlled by details of the inelastic scattering processes that are not directly related to structural parameters, but are encoded in the electrical resistivity $\rho(T)$.
The importance of inter- and intra-band inelastic scattering processes in determining $T_{\rm c}$~and the pairing symmetry of iron pnictides has been emphasized in several theoretical works. \cite{graser_near-degeneracy_2009, maiti_evolution_2011, maiti_gap_2012} Recently, it was shown that a change of pairing symmetry can be induced by tuning the relative strength of different competing inelastic scattering processes, {\it i.e.}~different magnetic fluctuation wavevectors. \cite{fernandes_suppression_2013} In a previous paper, we reported the discovery of a sharp reversal in the pressure dependence of $T_{\rm c}$~in KFe$_2$As$_2$, the fully hole-doped member of the Ba$_{1-x}$K$_x$Fe$_2$As$_2$ series. \cite{tafti_sudden_2013} No sudden change was observed in the Hall coefficient or resistivity across the critical pressure $P_{\rm c}$~=~17.5~kbar, indicating that the transition is not triggered by a change in the Fermi surface. Recent dHvA experiments under pressure confirm that the Fermi surface is the same on both sides of $P_{\rm c}$, ruling out a Lifshitz transition and strengthening the case for a change of pairing state. \cite{terashima_two_2014} We interpret the sharp $T_{\rm c}$~reversal as a phase transition from $d$-wave to $s$-wave symmetry. Bulk measurements such as thermal conductivity\cite{reid_universal_2012, dong_quantum_2010} and penetration depth\cite{hashimoto_evidence_2010} favor $d$-wave symmetry at zero pressure. Because the high-pressure phase is very sensitive to disorder, a likely $s$-wave state is one that changes sign around the Fermi surface, such as the $s_{\pm}$ state that changes sign between the $\Gamma$-centered hole pockets, as proposed by Maiti \etal \cite{maiti_gap_2012} It appears that in KFe$_2$As$_2$~the $s$-wave and $d$-wave states are nearly degenerate, and a small pressure is enough to push the system from one state to the other. In this article, we report the discovery of a similar $T_{\rm c}$~reversal in CsFe$_2$As$_2$.
The two systems have the same tetragonal structure, but their lattice parameters are notably different. \cite{sasmal_superconducting_2008} Our high-pressure X-ray data reveal that at least 30~kbar of pressure is required for the lattice parameters of CsFe$_2$As$_2$~to match those of KFe$_2$As$_2$. Yet, surprisingly, we find that $P_{\rm c}$~is {\it smaller} in CsFe$_2$As$_2$~than in KFe$_2$As$_2$. This observation clearly shows that structural parameters alone are not the controlling factors for $P_{\rm c}$~in (K,Cs)Fe$_2$As$_2$. Instead, we propose that competing inelastic scattering processes are responsible for tipping the balance between pairing symmetries. \begin{figure} \includegraphics[width=3.5in]{resistivity} \caption{\label{resistivity} a) Pressure dependence of $T_{\rm c}$~in CsFe$_2$As$_2$. The blue and red circles represent data from samples 1 and 2, respectively. $T_{\rm c}$~is defined as the temperature where the zero-field resistivity $\rho(T)$ goes to zero. The critical pressure $P_{\rm c}$~marks a change of behaviour from decreasing to increasing $T_{\rm c}$. Dotted red lines are linear fits to the data from sample 2 in the range between $P_{\rm c}$~$- 10$ kbar and $P_{\rm c}$~$+ 5$ kbar. The critical pressure $P_{\rm c}$~$= 14 \pm 1$~kbar is defined as the intersection of the two linear fits. b) Low-temperature $\rho(T)$ data, from sample 2, normalized to unity at $T=2.5$~K. Three isobars are shown at $P<$~$P_{\rm c}$, with pressure values as indicated. The arrow shows that $T_{\rm c}$~decreases with increasing pressure. c) Same as in b), but for $P>$~$P_{\rm c}$, with $\rho$ normalized to unity at $T=1.5$~K. The arrow shows that $T_{\rm c}$~now {\it increases} with increasing pressure. } \end{figure} \section{\label{Experiments}Experiments} Single crystals of CsFe$_2$As$_2$~were grown using a self-flux method.
\cite{hong_nodal_2013} Resistivity and Hall measurements were performed in an adiabatic demagnetization refrigerator, on samples placed inside a clamp cell, using a six-contact configuration. The Hall voltage is measured at plus and minus 10~T from $T=20$ to 0.2~K and antisymmetrized to calculate the Hall coefficient $R_{\rm H}$. Pressures up to 20~kbar were applied and measured with a precision of $\pm~0.1$ kbar by monitoring the superconducting transition temperature of a lead gauge placed beside the samples inside the clamp cell. A pentane mixture was used as the pressure medium. Two samples of CsFe$_2$As$_2$, labelled ``sample 1" and ``sample 2", were measured and excellent reproducibility was observed. High-pressure X-ray experiments were performed on polycrystalline powder specimens of KFe$_2$As$_2$~up to 60~kbar at the HXMA beam line of the Canadian Light Source, using a diamond anvil cell with silicon oil as the pressure medium. Pressure was tuned with a precision of $\pm~2$ kbar using the R$_1$ fluorescent line of a ruby chip placed inside the sample space. XRD data were collected using angle-dispersive techniques, employing high-energy X-rays ($E_i = 24.35$~keV) and a Mar345 image plate detector. Structural parameters were extracted from full-profile Rietveld refinement using the GSAS software. \cite{GSAS_2000} Representative refinements of the X-ray data are presented in appendix \ref{rawX}. \section{\label{Results}Results} Fig.~\ref{resistivity}a shows our discovery of a sudden reversal in the pressure dependence of $T_{\rm c}$~in CsFe$_2$As$_2$~at a critical pressure $P_{\rm c}$~=~$14\pm1$~kbar. The shift of $T_{\rm c}$~as a function of pressure clearly changes direction from decreasing (Fig.~\ref{resistivity}b) to increasing (Fig.~\ref{resistivity}c) across the critical pressure $P_{\rm c}$. $T_{\rm c}$~varies linearly near $P_{\rm c}$, resulting in a $V$-shaped phase diagram similar to that of KFe$_2$As$_2$.
\cite{tafti_sudden_2013} \begin{figure} \includegraphics[width=3.5in]{hall} \caption{\label{hall} Temperature dependence of the Hall coefficient $R_{\rm H}$$(T)$ in CsFe$_2$As$_2$~(sample 2), at five selected pressures, as indicated. The low-temperature data converge to the same value for all pressures, whether below or above $P_{\rm c}$. {\it Inset}: The value of $R_{\rm H}$~extrapolated to $T=0$ is plotted at different pressures. Horizontal and vertical error bars are smaller than symbol dimensions. $R_{\rm H}$$(T=0)$ is seen to remain unchanged across $P_{\rm c}$. } \end{figure} Measurements of the Hall coefficient $R_{\rm H}$~allow us to rule out the possibility of a Lifshitz transition, {\it i.e.}~a sudden change in the Fermi surface topology. Fig.~\ref{hall} shows the temperature dependence of $R_{\rm H}$~at five different pressures. In the zero-temperature limit, $R_{\rm H}(T\to 0)$ is seen to remain unchanged across $P_{\rm c}$~(Fig. \ref{hall}, inset). If the Fermi surface underwent a change, such as the disappearance of one sheet, this would affect $R_{\rm H}(T\to 0)$, which is a weighted average of the Hall response of the various sheets. Similar Hall measurements were also used to rule out a Lifshitz transition in KFe$_2$As$_2$,\cite{tafti_sudden_2013} in agreement with the lack of any change in dHvA frequencies.\cite{terashima_two_2014} Several studies on the Ba$_{1-x}$K$_{x}$Fe$_2$As$_2$ series suggest that lattice parameters, in particular the As-Fe-As bond angle, control $T_{\rm c}$. \cite{rotter_superconductivity_2008, kimber_similarities_2009, alireza_superconductivity_2009, budko_heat_2013} To explore this hypothesis, we measured the lattice parameters of KFe$_2$As$_2$~as a function of pressure, up to 60 kbar, in order to find out how much pressure is required to tune the lattice parameters of CsFe$_2$As$_2$~so they match those of KFe$_2$As$_2$.
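The field antisymmetrization used to extract $R_{\rm H}$ (described in the Experiments section) removes the field-symmetric longitudinal pickup from the measured transverse voltage; a minimal sketch, in which the sample thickness, current, and offset voltage are hypothetical values for illustration only:

```python
def hall_coefficient(V_plus, V_minus, B_tesla, I_amp, t_m):
    """Hall coefficient R_H (m^3/C) from transverse voltages measured at
    +B and -B; the antisymmetrized combination cancels contact-misalignment
    (field-symmetric) contributions."""
    V_hall = 0.5 * (V_plus - V_minus)   # odd-in-field part of the signal
    return V_hall * t_m / (I_amp * B_tesla)

# Illustrative numbers: a misalignment offset V_off appears with the same
# sign in both field polarities and drops out of the combination.
V_off, V_H = 2.0e-6, 0.5e-6             # volts (hypothetical)
R_H = hall_coefficient(V_off + V_H, V_off - V_H,
                       B_tesla=10.0, I_amp=1.0e-3, t_m=30e-6)
print(R_H)  # recovers V_H * t / (I * B), independent of V_off
```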
Cs has a larger atomic size than K, hence one can view CsFe$_2$As$_2$~as a negative-pressure version of KFe$_2$As$_2$. The four panels of Fig.~\ref{lattice} show the pressure variation of the lattice constants $a$ and $c$, the unit cell volume ($V=a^2c$), and the intra-planar As-Fe-As bond angle ($\alpha$) in KFe$_2$As$_2$. The red horizontal line in each panel marks the value of the corresponding lattice parameter in CsFe$_2$As$_2$. \cite{sasmal_superconducting_2008} In order to tune $a$, $c$, $V$, and $\alpha$ in KFe$_2$As$_2$~to match the corresponding values in CsFe$_2$As$_2$, a negative pressure of approximately $-10$, $-75$, $-30$, and $-30$ kbar is required, respectively. Adding these numbers to the critical pressure for KFe$_2$As$_2$~($P_{\rm c}$~=~17.5~kbar), we would naively estimate that the critical pressure in CsFe$_2$As$_2$~should be $P_{\rm c}$~$\simeq 30$~kbar or higher. We find instead that $P_{\rm c}$~=~14~kbar, showing that other factors are involved in controlling $P_{\rm c}$. \begin{figure} \includegraphics[width=3.5in]{lattice} \caption{\label{lattice} Structural parameters of KFe$_2$As$_2$~as a function of pressure, up to 60~kbar: a) lattice constant $a$; b) lattice constant $c$; c) unit cell volume $V=a^2c$; d) the intra-planar As-Fe-As bond angle $\alpha$ as defined in the {\it inset} (see appendix \ref{alphabeta} for the inter-planar bond angle). Experimental errors on the lattice parameters are smaller than the symbol dimensions. The black dotted line in panels a, b, and c is a fit to the standard Murnaghan equation of state, extended smoothly to negative pressures. \cite{murnaghan_finite_1937} From the fits, we extract the moduli of elasticity and report them in appendix \ref{compressibility}. The black dotted line in panel d is a third-order power-law fit.
In each panel, the horizontal red line marks the lattice parameter of CsFe$_2$As$_2$, and the vertical red line gives the negative pressure required for the lattice parameter of KFe$_2$As$_2$~to reach the value in CsFe$_2$As$_2$.} \end{figure} \begin{figure} \includegraphics[width=3.5in]{PhaseDiagram} \caption{\label{PhaseDiagram} Pressure dependence of $T_{\rm c}$~in three samples: pure KFe$_2$As$_2$~(black circles), less pure KFe$_2$As$_2$~(grey circles), and CsFe$_2$As$_2$~(sample 2, red circles). Even though the $T_{\rm c}$~values for the two KFe$_2$As$_2$~samples are different due to different disorder levels, measured by their different residual resistivities $\rho_0$, the critical pressure is the same ($P_{\rm c}$~=~17.5~kbar). This shows that the effect of disorder on $P_{\rm c}$~in KFe$_2$As$_2$~is negligible. For comparable $\rho_0$, the critical pressure in CsFe$_2$As$_2$, $P_{\rm c}$~=~14~kbar, is clearly smaller than in KFe$_2$As$_2$.} \end{figure} \begin{figure}[t] \includegraphics[width=3.5in]{InelasticScattering_filled} \caption{\label{InelasticScattering} a) Resistivity data for the KFe$_2$As$_2$~sample with $\rho_0=1.3$~$\mathrm{\mu\Omega\, cm}$~at five selected pressures. The black vertical arrow shows a cut through each curve at $T=20$~K, and the dashed line is a power-law fit to the curve at $P=23.8$~kbar from 5 to 15~K that is used to extract the residual resistivity $\rho_0$. The inelastic resistivity, defined as $\rho(T = 20~\rm{K}) - \rho_0$, is plotted vs $P / P_{\rm c}$ for b) the less pure KFe$_2$As$_2$~sample, c) the purer KFe$_2$As$_2$~sample, and d) CsFe$_2$As$_2$~(sample 2), where $P_{\rm c}$~=~17.5~kbar for KFe$_2$As$_2$~and $P_{\rm c}$~=~14~kbar for CsFe$_2$As$_2$. In panels (b), (c), and (d) the dashed black line is a linear fit to the data above $P / P_{\rm c} = 1$.
} \end{figure} It is possible that the lower $P_{\rm c}$~in CsFe$_2$As$_2$~could be due to the fact that $T_{\rm c}$~itself is lower than in KFe$_2$As$_2$~at zero pressure, {\it i.e.}~that the low-pressure phase is weaker in CsFe$_2$As$_2$. One hypothesis for the lower $T_{\rm c}$~in CsFe$_2$As$_2$~is a higher level of disorder. To test this idea, we studied the pressure dependence of $T_{\rm c}$~in a less pure KFe$_2$As$_2$~sample. Fig.~\ref{PhaseDiagram} compares the $T$-$P$ phase diagram in three samples: 1) a high-purity KFe$_2$As$_2$~sample, with $\rho_0=0.2~\mu \Omega$~cm (from ref.~\onlinecite{tafti_sudden_2013}); 2) a less pure KFe$_2$As$_2$~sample, with $\rho_0=1.3~\mu \Omega$~cm, measured here; 3) a CsFe$_2$As$_2$~sample (sample 2), with $\rho_0=1.5~\mu \Omega$~cm. Different disorder levels in our samples are due to growth conditions, not to deliberate chemical substitution or impurity inclusions. First, we observe that a 6-fold increase of $\rho_0$ has negligible impact on $P_{\rm c}$~in KFe$_2$As$_2$. Secondly, we observe that $P_{\rm c}$~is 4 kbar smaller in CsFe$_2$As$_2$~than in KFe$_2$As$_2$, for samples of comparable $\rho_0$. These observations rule out the idea that disorder could be responsible for the lower value of $P_{\rm c}$~in CsFe$_2$As$_2$~compared to KFe$_2$As$_2$. \section{\label{Discussion}Discussion} We have established a common trait in CsFe$_2$As$_2$~and KFe$_2$As$_2$: both systems have a sudden reversal in the pressure dependence of $T_{\rm c}$, with no change in the underlying Fermi surface. The question is: what controls that transition? Why does the low-pressure superconducting state become unstable against the high-pressure state? In a recent theoretical work by Fernandes and Millis, it is demonstrated that different pairing interactions in 122 systems can favour different pairing symmetries. 
\cite{fernandes_suppression_2013} In their model, SDW-type magnetic fluctuations, with wavevector $(\pi,0)$, favour $s_{\pm}$ pairing, whereas N\'eel-type fluctuations, with wavevector $(\pi,\pi)$, strongly suppress the $s_{\pm}$ state and favour $d$-wave pairing. A gradual increase in the $(\pi,\pi)$ fluctuations eventually causes a phase transition from an $s_{\pm}$ superconducting state to a $d$-wave state, producing a V-shaped $T_{\rm c}$~vs $P$ curve.\cite{fernandes_suppression_2013} In KFe$_2$As$_2$~and CsFe$_2$As$_2$, it is conceivable that two such competing interactions are at play, with pressure tilting the balance in favor of one versus the other. We explore such a scenario by looking at how the inelastic scattering evolves with pressure, measured via the inelastic resistivity, defined as $\rho(T) - \rho_0$, where $\rho_0$ is the residual resistivity. Fig.~\ref{InelasticScattering}(a) shows raw resistivity data from the KFe$_2$As$_2$~sample with $\rho_0=1.3$~$\mathrm{\mu\Omega\, cm}$~below 30~K. To extract $\rho(T) - \rho_0$ at each pressure, we make a cut through each curve at $T=20$~K and subtract from it the residual resistivity $\rho_0$ obtained from a power-law fit $\rho=\rho_0+AT^n$ to each curve. $\rho_0$ is determined by the disorder level and does not change as a function of pressure. The resulting $\rho(T=20~\rm{K})-\rho_0$ values for this sample are then plotted as a function of normalized pressure $P/P_{\rm c}$~in Fig.~\ref{InelasticScattering}(b). Through a similar process, we extract the pressure dependence of $\rho(20~\rm{K})-\rho_0$ in CsFe$_2$As$_2$~and in the purer KFe$_2$As$_2$~sample with $\rho_0 = 0.2$~$\mathrm{\mu\Omega\, cm}$, shown in Fig.~\ref{InelasticScattering}(c) and (d). In all three samples, at $P/P_{\rm c} > 1$, the inelastic resistivity varies linearly with pressure.
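The extraction procedure just described (a power-law fit $\rho=\rho_0+AT^n$ between 5 and 15~K, then a cut at $T=20$~K) can be sketched numerically. The sketch below is ours, not from the experiment: it uses synthetic data with illustrative values, and fits the exponent $n$ by a simple grid scan with a linear least-squares solve for $(\rho_0, A)$ at each candidate $n$.

```python
import numpy as np

def fit_power_law(T, rho, n_grid):
    """Fit rho(T) = rho0 + A*T**n by scanning n and solving a linear
    least-squares problem for (rho0, A) at each candidate exponent."""
    best = None
    for n in n_grid:
        X = np.column_stack([np.ones_like(T), T**n])
        coef, *_ = np.linalg.lstsq(X, rho, rcond=None)
        sse = np.sum((X @ coef - rho)**2)
        if best is None or sse < best[0]:
            best = (sse, n, coef[0], coef[1])
    _, n_fit, rho0_fit, A_fit = best
    return n_fit, rho0_fit, A_fit

# Synthetic low-temperature resistivity curve, 5-15 K (illustrative values only)
T = np.linspace(5.0, 15.0, 60)
rng = np.random.default_rng(1)
rho = 1.3 + 0.02 * T**1.5 + rng.normal(0.0, 0.003, T.size)  # micro-ohm cm

n_fit, rho0_fit, A_fit = fit_power_law(T, rho, np.linspace(1.0, 3.0, 201))
# Inelastic resistivity at the (arbitrary) cut temperature T = 20 K
inelastic_20K = (rho0_fit + A_fit * 20.0**n_fit) - rho0_fit
```

Since $\rho_0$ is set by the disorder level, only the fitted intercept matters at each pressure; repeating the fit curve-by-curve yields the $\rho(20~\mathrm{K})-\rho_0$ values plotted in the figure.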
As $P$ drops below $P_{\rm c}$, the inelastic resistivity in both KFe$_2$As$_2$~and CsFe$_2$As$_2$~shows a clear rise, over and above the linear regime. Fig.~\ref{InelasticScattering} therefore suggests a connection between the transition in the pressure dependence of $T_{\rm c}$~and the appearance of an additional inelastic scattering process. Note that our choice of $T=20$~K for the inelastic resistivity is arbitrary. Resistivity cuts at any finite temperature above $T_c$ give qualitatively similar results. The Fermi surface of KFe$_2$As$_2$~includes three $\Gamma$-centered hole-like cylinders. A possible pairing state is an $s_{\pm}$ state where the change of sign occurs between the inner cylinder and the middle cylinder, favored by a small-$Q$ interaction. \cite{maiti_gap_2012} By contrast, the intraband inelastic scattering wavevectors that favour $d$-wave pairing are large-$Q$ processes. \cite{thomale_exotic_2011} Therefore, one scenario in which to understand the evolution of the inelastic resistivity with pressure (Fig.~\ref{InelasticScattering}), and its link to the $T_{\rm c}$~reversal, is the following. At low pressure, the large-$Q$ scattering processes that favor $d$-wave pairing make a substantial contribution to the resistivity, as they produce a large change in momentum. These weaken with pressure, causing a decrease in both $T_{\rm c}$~and the resistivity. This decrease persists until the low-$Q$ processes that favor $s_{\pm}$ pairing, less visible in the resistivity, come to dominate, above $P_{\rm c}$. In summary, we discovered a pressure-induced reversal in the dependence of the transition temperature $T_{\rm c}$~on pressure in the iron-based superconductor CsFe$_2$As$_2$, similar to our previous finding in KFe$_2$As$_2$. We interpret the $T_{\rm c}$~reversal at the critical pressure $P_{\rm c}$~as a transition from one pairing state to another.
The fact that $P_{\rm c}$~in CsFe$_2$As$_2$~is smaller than in KFe$_2$As$_2$, even though all lattice parameters would suggest otherwise, shows that structural parameters alone do not control $P_{\rm c}$. We also demonstrate that disorder has a negligible effect on $P_{\rm c}$. Our study of the pressure dependence of the resistivity in CsFe$_2$As$_2$~and KFe$_2$As$_2$~reveals a possible link between $T_{\rm c}$~and inelastic scattering. Our proposal is that the high-pressure phase in both materials is an $s_{\pm}$ state that changes sign between $\Gamma$-centered pockets. As the pressure is lowered, the large-$Q$ inelastic scattering processes that favor $d$-wave pairing in pure KFe$_2$As$_2$~and CsFe$_2$As$_2$~grow until, at a critical pressure $P_{\rm c}$, they cause a transition from one superconducting state to another, with a change of pairing symmetry from $s$-wave to $d$-wave. The experimental evidence for this is the fact that below $P_{\rm c}$~the inelastic resistivity, measured as the difference $\rho(20~\mathrm{K})-\rho_0$, deviates upwards from its linear pressure dependence at high pressure. \section*{ACKNOWLEDGMENTS} We thank A.~V.~Chubukov, R.~M.~Fernandes and A.~J.~Millis for helpful discussions, and S.~Fortier for his assistance with the experiments. The work at Sherbrooke was supported by the Canadian Institute for Advanced Research and a Canada Research Chair, and was funded by NSERC, FRQNT and CFI. Work done in China was supported by the National Natural Science Foundation of China (Grant No. 11190021), the Strategic Priority Research Program (B) of the Chinese Academy of Sciences, and the National Basic Research Program of China. Research at the University of Toronto was supported by NSERC, CFI, the Ontario Ministry of Research and Innovation, and the Canada Research Chair program.
The Canadian Light Source is funded by CFI, NSERC, the National Research Council Canada, the Canadian Institutes of Health Research, the Government of Saskatchewan, Western Economic Diversification Canada, and the University of Saskatchewan.
\section{Introduction} The last decade has seen two major breakthroughs in the field of experimental astronomy. First, the detection in 2015 of gravitational waves from a binary black hole merger \cite{LIGOScientific:2016aoc} heralded the arrival of gravitational wave astronomy as an experimental science. Second, the image of the shadow of the M87 supermassive black hole from the Event Horizon Telescope \cite{EventHorizonTelescope:2019dse} provided the first direct observation of a black hole shadow. By providing the first direct temporal and spatial signals from black holes, these measurements have taken black holes from a purely theoretical realm into concrete physical objects in our matter-filled, inhomogeneous Universe. Hence, it is now necessary to study black holes not only as isolated objects but also within their environment. This is by no means a new consideration and there is an extensive literature on the study of black holes embedded in rich and varied environments (see for example the review articles~\cite{Barausse:2014tra,Barausse:2014pra}, motivated by the detection of gravitational waves). A particular emphasis was put on the impact of dark matter surrounding black holes and its imprint on gravitational waveforms \cite{Macedo:2013qea,Kavanagh:2020cfn,Baryakhtar:2022hbu} (however, see also, e.g., \cite{Kiselev:2003ah,Sotiriou:2011dz,Chadburn:2013mta} for studies of the interaction of black holes with dark energy). Studies of black holes surrounded by matter have focused on two directions: i) the environmental impact on the inspirals of compact objects \cite{Macedo:2013qea,Cardoso:2019upw,Cardoso:2021wlq} and ii) the ringdown emission~\cite{Brown:1997jv,Yoshida:2003zz,Pani:2009ss}. The consensus on the latter was that while the resonance spectrum of black holes surrounded by extra structures could be widely different from that of isolated black holes, the modifications were somehow irrelevant for practical considerations and could not be seen.
Interestingly, this conclusion is being revisited following the recent identification and characterisation of the so-called QNM spectral instability~\cite{PhysRevX.11.031003,PhysRevLett.128.211102}. In the present study, we consider the particular case of what is referred to in the literature as a \emph{dirty black hole} (DBH), that is, a black hole surrounded by a thin shell of matter. DBHs were introduced as practical toy models to investigate the impact of a local environment on black hole effects \cite{Visser:1992qh,Visser:1993qa,Visser:1993nu}. Surprisingly, and despite the many prospects offered by gravitational wave astronomy, the quasinormal mode (QNM) spectrum of DBHs has received very limited attention. Generic properties of the QNM spectrum were discussed in \cite{Medved:2003rga,Medved:2003pr} and the concrete case of a DBH was considered by Leung et al.\ \cite{Leung:1997was,Leung:1999iq}. In their study, Leung et al.\ adopted a perturbative approach which allowed for an analytical investigation but restricted their conclusions to DBH spacetimes where the shell had a limited impact. Similar DBH configurations (that is, with the shell having a limited impact on the scattering) were investigated recently and the impact of the shell on both the absorption and scattering cross sections was computed \cite{Macedo:2015ikq,Leite:2019uql}. These recent studies, as well as the need for concrete resonance calculations beyond the perturbative regime, are the main motivations for this study. There are two main goals of this paper: First, to investigate wave scattering by DBHs beyond the perturbative cases found in the literature. We therefore consider physical DBH configurations where the shell has a substantial effect on the scattering of waves, leaving an imprint on measurable quantities. Second, to compute the spectrum of resonances of DBHs from the Regge pole (RP) paradigm. RPs and the associated complex angular momentum (CAM) technique are the counterparts to QNMs.
While QNMs have a real angular momentum and a complex frequency, RPs have a real frequency and a complex angular momentum. This paradigm allows for a different interpretation of resonances in a system. The RP and CAM approaches have shed new light on many different domains of physics involving resonant scattering theory, notably in quantum mechanics, electromagnetism, optics, nuclear physics, seismology and high-energy physics (see, for example, \cite{deAlfaro:1965zz,Newton:1982qc,Watson18,Sommerfeld49, Nussenzveig:2006,Grandy2000,Uberall1992,AkiRichards2002,Gribov69, Collins77,BaronePredazzi2002,DonnachieETAL2005} and references therein), and have been successfully extended to black hole physics \cite{Andersson_1994,Andersson_1994b,Decanini:2002ha, Decanini:2011xi,Folacci:2018sef,Folacci:2019cmc,Folacci:2019vtt}. As an illustration of the power of the CAM approach, we note that it provides a unifying framework describing the glory and orbiting effects of black hole scattering \cite{Folacci:2019cmc,Folacci:2019vtt}. The paper is organised as follows: in Sec.\ \ref{sec:DBH_spacetime}, we review the derivation of the DBH spacetime and show that there exist static configurations fully characterised by the mass of the shell and its equation of state; in Sec.\ \ref{sec:geodesics} we give a qualitative description of geodesic motion in DBH spacetimes, which paves the way for the later interpretation of our results; in Sec.\ \ref{sec:waves_in_DBH} we review the description of scalar wave propagation on a DBH spacetime. Sec.\ \ref{sec:resonances} contains the main results of the paper, that is, the calculation of the Regge pole spectrum of DBHs. We reveal in particular that, in contrast with the isolated black hole case, the DBH resonance spectrum contains several branches which correspond to the extra structure and complexity of the spacetime. The spectra are computed numerically by adapting the continued fraction method originally developed by Leaver.
We supplement our analysis with a WKB calculation to verify our numerical results. In Sec.\ \ref{sec:scattering}, we compute numerically the scattering cross section of planar waves impinging on a DBH and show that characteristic oscillations appear in some configurations. The cross sections are then described using the RP spectrum identified in Sec.\ \ref{sec:resonances} via the CAM representation. Finally we conclude with a discussion of our results and future prospects in Sec.\ \ref{sec:conclusion}. We use natural units $(c=G=\hbar=1)$ throughout the paper. \section{Dirty Black Hole spacetime}\label{sec:DBH_spacetime} Here we briefly review the Israel formalism \cite{Israel:1966rt} applied to our case where there are two distinct Schwarzschild geometries: \begin{equation} ds^2 = f_\pm(r) dt_\pm^2 - \frac{dr^2}{f_\pm(r)} - r^2 d\Omega^2 \end{equation} where $d\Omega^2 = d\theta^2+ \sin^2\theta d\phi^2$ is the line element on a unit sphere, $f_\pm(r) = 1 - 2M_\pm/r$ is the Schwarzschild potential for each side of the shell, and we have already applied the knowledge that the shell is at a given radius $r=R_s(\tau)$, which implies that the angular and radial coordinates are the same on each side. At this point, we are not assuming the wall is static, hence there is a different local time coordinate on each side of the shell, which has, in general, a time dependent trajectory given by $\left (t_\pm(\tau),R_s(\tau)\right)$ with $\tau$ the proper time on the shell: $f_\pm {\dot t}_\pm^2 - \dot{R}_s^2/f_\pm = 1$. 
The Israel junction conditions read \begin{equation} \Delta K_{ab} - \Delta K h_{ab} = 8\pi S_{ab} = Eu_au_b - P \left ( h_{ab} - u_au_b \right) \end{equation} where $h_{ab} = g_{ab} + n_a n_b$ is the induced metric on the shell (with $n_\pm^a = ( \dot{R}_s, \dot{t}_\pm )$ an outward pointing unit normal on each side of the shell), $\Delta K_{ab} = K^+_{ab} - K^-_{ab}$ is the jump in extrinsic curvature across the shell, and $S_{ab} = \int_-^+ T_{ab}$ is the energy momentum of the wall, here assumed to take a perfect fluid form. Inputting the form of the geometry into the Israel equations results in two independent ``cosmological'' equations \begin{equation} \begin{aligned} \dot{R}_s^2 + 1 &= (4\pi E)^2R_s^2 + \frac{(M_++M_-)}{R_s} + \frac{(M_+-M_-)^2}{(8\pi E)^2R_s^4}, \\ \dot{E} &+ 2 \frac{\dot{R}_s}{R_s} (E+P) =0. \end{aligned} \end{equation} A static shell requires both $\dot{R}_s$ and $\ddot{R}_s$ to be zero, which places constraints between the values of the mass and shell energy-momentum. As is conventional, we assume an equation of state for the shell \begin{equation} P=wE\;, \qquad \Rightarrow \qquad E = \frac{\rho_0}{4\pi R_s^{2(1+w)}} \end{equation} which then gives \begin{equation} \begin{aligned} M_\mp &= \frac{4w(1+2w)R_s - (1+4w)^2M_\pm}{f_\pm(R_s)(1+4w)^2},\\ \rho_0^2 &= \frac{4R_s^{4w}(2wR_s - (1+4w)M_\pm)^2}{f_\pm(R_s)(1+4w)^2}. \end{aligned} \end{equation} For a given equation of state, the minimum value of $R_s$ is when the shell has vanishing energy (and $M_+=M_-$ trivially) \begin{equation} R_{s,min} = \frac{(1+4w)}{2w} M_\pm. \label{minshell} \end{equation} As $R_s$ increases, the shell gradually contributes more mass to the spacetime, leading to a larger disparity between $M_+$ and $M_-$. (If $R_s<R_{s,min}$, $M_+<M_-$, and the energy of the shell is negative.) An important feature of a black hole spacetime of relevance to QNM's and RP's is the \emph{light ring}, or the unstable null circular geodesic orbit at $r =3M$. 
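The threshold behaviour implicit in Eq.~\eqref{minshell} is easy to check numerically. The following minimal sketch (the function name is ours) evaluates $R_{s,min} = \frac{1+4w}{2w}M$ and confirms that it crosses the light-ring radius $3M$ exactly at $w = 1/2$:

```python
def r_s_min(w, M):
    # Minimum static-shell radius (vanishing shell energy): R_min = (1 + 4w)/(2w) * M
    return (1.0 + 4.0 * w) / (2.0 * w) * M

M = 1.0
r_stiff = r_s_min(1.0, M)       # w = 1 (stiff): 2.5 M, inside the light-ring at 3M
r_radiation = r_s_min(0.5, M)   # w = 1/2: exactly 3 M, on the light-ring
```

Since $R_{s,min}/M = 2 + 1/(2w)$ decreases monotonically with $w$, the shell can sit inside the light-ring precisely when $w > 1/2$, as discussed below.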
We now see three different possibilities for the shell location depending on where it is situated with respect to the light-rings of the interior and exterior Schwarzschild masses ($3M_\pm$). (i) $R_s < 3M_-$: If $w>1/2$, i.e.\ the equation of state is stiffer than radiation, then \eqref{minshell} shows that it is possible for the shell to lie `inside' the light-ring of the black hole. Since the local Schwarzschild mass is $M_+$ outside the shell, this means that the $3M_-$ light-ring no longer exists, and the spacetime will have only one light-ring at $r=3M_+$. Conversely, if $w\leq1/2$, then \eqref{minshell} shows that the shell can never lie inside the inner light-ring. (ii) $3M_- \leq R_S \leq 3M_+$: This configuration, where the shell lies between the light-rings, is possible for equations of state with $w\in(\frac{\sqrt{3}-1}{4},1]$, and the spacetime will have both light-rings present. (iii) $R_s>3M_+$: If the shell lies outside the light-ring of the exterior geometry, then again there is only one light-ring in the spacetime, although it is now \emph{inside} the shell at $3M_-$. Demanding positivity of $R_s-3M_+$ for $R_s\geq R_{s,min}$ shows that this requires the equation of state parameter $w<1/2$. Having derived all the possible configurations of the shell, we conclude this section by altering our notation slightly to align with that of Macedo et al.\ \cite{Macedo:2015ikq}, in which a global coordinate system $(t,r,\theta,\phi)$ is used: \begin{equation}\label{lineele} ds^2 = A(r)dt^2-B(r)^{-1}dr^2-r^2d\Omega^2. \end{equation} Note, we did not begin with this global system as it is not possible to define a global time coordinate unless the shell is static, and we wanted to derive all possibly consistent static configurations commensurate with physical equations of state. 
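The three shell locations just enumerated can be collected in a small classifier; a sketch under our own naming conventions, using the interior and exterior masses $M_-$ and $M_+$:

```python
def shell_configuration(Rs, M_minus, M_plus):
    """Classify the shell radius relative to the light-rings at 3 M_- and 3 M_+."""
    if Rs < 3.0 * M_minus:
        return "case (i): one light-ring, at r = 3 M_+"
    if Rs <= 3.0 * M_plus:
        return "case (ii): two light-rings"
    return "case (iii): one light-ring, at r = 3 M_-, inside the shell"
```

For instance, with $M_- = M_{\rm BH}$ and $M_+ = 1.5\,M_{\rm BH}$, a shell at $R_s = 4\,M_{\rm BH}$ falls in case (ii) and a shell at $R_s = 5\,M_{\rm BH}$ in case (iii); these are the two configurations studied later in the paper.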
Replacing $M_- = M_{\text{\tiny BH}}$ and $M_+ = M_\infty$, these metric functions are: \begin{equation} \begin{aligned} A(r) &= \begin{cases} \alpha(1-2M_{\text{\tiny BH}}/r), & r < R_s \\ 1-2M_{\infty}/r, & r > R_s, \end{cases} \\ B(r) &= \begin{cases} 1-2M_{\text{\tiny BH}}/r, & r < R_s \\ 1-2M_{\infty}/r, & r > R_s, \end{cases} \end{aligned} \label{AandB} \end{equation} where $\alpha$ is inserted to ensure the induced metric on the shell is well defined from each side \begin{equation} \alpha = \frac{R_s-2M_\infty}{R_s - 2M_{\text{\tiny BH}}} \; , \end{equation} and $B$ contains a jump at the shell to represent the discontinuity in the extrinsic curvature. A cartoon of the spacetime geometry is given in Fig.\ \ref{fig:Schematic_diagram_DBH}. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Schematic_diagram_DBH} \caption{Schematic diagram of a black hole surrounded by a thin shell (the thickness of the shell is ignored) located at $r=R_s$.} \label{fig:Schematic_diagram_DBH} \end{figure} \section{Geodesic motion}\label{sec:geodesics} DBH spacetimes can exhibit critical effects of geometrical optics. We will see later that this rich geodesic structure can be used to understand features in the scattering of waves presented in this paper. The important notion emerging from geodesic motion relevant to the study of resonances is the \emph{light-ring} or \emph{photon sphere}. It corresponds to a local maximum and an unstable equilibrium point of the effective geodesic potential and offers an intuitive link to QNMs~\cite{Cardoso:2008bp}. It is therefore natural to ask about the structure of the light-rings in DBH spacetimes. We note that the link between QNMs and light-rings is however not exact \cite{Khanna:2016yow} and a simple geodesic analysis is not always sufficient to characterise the resonances of a system, as we will see later.
Geodesic motion on static DBH spacetimes has been studied in, e.g., \cite{Macedo:2015ikq}, hence we simply note the key steps of the calculation. The equation of motion of a null geodesic can, by a judicious choice of affine parameter, be brought into the form \begin{equation}\label{eq:geo_pot} \dot{r}^2 + B(r) \left ( \frac{L^2}{r^2} - \frac{1}{A(r)}\right) = \dot{r}^2 + U_{\text{\tiny{eff}}}(r) = 0 \end{equation} where without loss of generality the geodesic is taken to lie in the equatorial plane, $A$ and $B$ were given in \eqref{AandB}, and $L = r^2\dot{\phi}$ is a conserved (not necessarily integer) quantity along the geodesic related to the angular momentum (the affine parameter is chosen so that $A\dot{t}=1$). The discontinuity of $B$ across the shell means that this potential is also discontinuous; however, the interpretation of the level of the potential as a ``kinetic'' energy remains, and we can use the graph of $U_{\text{\tiny{eff}}}(r)$ not only to interpret geodesic motion, but also to infer more general scattering phenomena. The main feature of the potential, resulting from the discontinuity, is the possible existence of two local maxima. A further interesting feature of these geometries is the possibility of geodesics that remain trapped between the light-rings. While these obviously will not be visible from far away, we might expect the existence of these resonant trajectories to correspond to some imprint in the scattering cross sections. Indeed, even for the case of a single light-ring, when that light-ring is located inside the shell, the sharp discontinuity in the potential at the shell location gives rise to a sharp ``dip'' in the potential (see Fig.\ \ref{fig:geodesic_potential}). While this dip results in a change of direction for the light ray, it does not `trap' the (classical) geodesic; a quantum mechanical system, however, would display an effect. The scattering of scalar waves by the dirty black hole might therefore be expected to detect this dip.
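The discontinuous potential and its local maxima can be exhibited directly from Eqs.~\eqref{AandB} and \eqref{eq:geo_pot}. Below is a minimal numerical sketch; the parameter values correspond to the $R_s = 4M_{\rm BH}$ configuration discussed below, while the grid search for maxima is our own illustrative choice:

```python
import numpy as np

M_bh, M_inf, R_s, L = 1.0, 1.5, 4.0, 5.0
alpha = (R_s - 2.0*M_inf) / (R_s - 2.0*M_bh)   # continuity of the induced metric

def U_eff(r):
    """Piecewise effective potential U = B(r) * (L^2/r^2 - 1/A(r))."""
    if r < R_s:
        f = 1.0 - 2.0*M_bh/r
        return f * (L**2/r**2 - 1.0/(alpha*f))
    f = 1.0 - 2.0*M_inf/r
    return f * (L**2/r**2 - 1.0/f)

r = np.arange(2.2, 8.0, 1e-3)
U = np.array([U_eff(x) for x in r])
# interior local maxima of the sampled potential
peaks = [r[i] for i in range(1, len(r) - 1) if U[i-1] < U[i] > U[i+1]]
```

For these parameters the two maxima sit at $r \approx 3M_{\rm BH}$ and $r \approx 3M_\infty = 4.5M_{\rm BH}$, i.e.\ at the two light-rings, while the jump of $B$ at $r = R_s$ produces the discontinuity (and, for $R_s > 3M_\infty$, the dip) discussed above.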
As we will see in Sec.\ \ref{sec:resonances}, the properties of the local maxima of the geodesic potential can be seen in the resonance spectrum, leading to two separated branches. We also identify a third branch related precisely to the dip referred to above, indicative of quasibound states trapped between the light-ring and the shell. Before turning our attention to the spectrum of resonances of DBHs, we first anticipate the effects expected to be present in the scattering based on the geodesic structure presented above. The link between geodesic motion and scattering effects is best seen through the deflection angle, which we now discuss. \begin{figure} \centering \includegraphics[width=0.5\textwidth, trim = 0 0 0cm 0]{schematic_potential.pdf} \caption{Illustration of the geodesic potential for the two configurations of DBH studied later on, with $M_\infty=1.5M_{\rm{BH}}$ and $R_S=4M_{\rm{BH}}$ (blue curve) or $R_S=5M_{\rm{BH}}$ (red curve). In the latter case, the shell is located outside of the outer light-ring (OLR) and the potential therefore exhibits only a single-light-ring structure. The former has the shell between the OLR and inner light-ring (ILR) and exhibits the two-light-ring structure. (Features of the potential have been exaggerated to better illustrate the structure.)} \label{fig:geodesic_potential} \end{figure} \subsection{Deflection angle and classical scattering} \begin{figure*} \includegraphics[scale=0.5]{critical_geo_dirty_BH.pdf} \caption{Illustration of the various critical effects in a dirty black hole spacetime. Here we have chosen the configuration where the shell lies outside the light-ring of the exterior geometry, i.e., $R_S=5M_{\rm{BH}}$ and $M_{\infty}=1.5M_{\rm{BH}}$. In all pictures, the black disc represents the inner black hole with mass $M_{\rm{BH}}$, the dashed green circle represents the location of the shell at $r=R_{S}$ and the dashed black circle depicts the inner light-ring at $r =3M_\text{BH}$. a) The picture depicts glory scattering.
In this case, the deflection angle function $\Theta(b)$ passes smoothly through $\pi$, i.e., geodesics are scattered in the backward direction. b) The picture illustrates rainbow scattering. In this case, a congruence of geodesics centred around the rainbow impact parameter $b =b_r$ is presented (solid red curves). The rainbow ray (black solid curve) defines an extremal angle, called the rainbow angle, beyond which rays cannot be deflected locally. c) The picture depicts the orbiting phenomenon. In this case, the deflection angle is divergent for a critical impact parameter $b =b_c$ and a particle orbits indefinitely at $r = 3M_\text{BH}$ around the light-ring. d) The picture represents grazing rays. A congruence of rays centred around the grazing impact parameter $b=b_{R_s}$ is shown (solid red curves). The grazing ray (solid black curve) sets the boundary of the so-called edge region.} \label{fig:critical_geo} \end{figure*} The deflection angle is an important geometrical quantity that allows us to analyze the differential scattering cross section in the classical limit. In the case of DBHs, the classical scattering cross section for null geodesics can be defined as \cite{FORD1959259} (see also \cite{Collins73}) \begin{equation}\label{classical_scatt_cross_sec} \frac{d\sigma}{d\Omega} =\frac{b}{\sin(\theta)}\frac{1}{\Big{|} \frac{d\Theta_\text{geo}(b)}{db}\Big{|}} \end{equation} and the geodesic deflection angle is given by \begin{equation} \label{Deflection_angle} \Theta_\text{geo}(b) = 2\int_{0}^{u_0} du\left[B(u)\frac{1-A(u)b^2u^2}{A(u)b^2}\right]^{-1/2} - \pi, \end{equation} where $u=1/r$ and $u_0 = 1/r_0$, with $r_0$ being a turning point.
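Eq.~\eqref{Deflection_angle} can be evaluated numerically. The sketch below is ours and treats only the simplest situation: an impact parameter large enough that the turning point lies outside the shell, so that only the exterior Schwarzschild branch ($A = B = 1-2M_\infty/r$) is probed. The substitution $u = u_0(1-t^2)$, which regularises the inverse-square-root singularity at the turning point, is a standard quadrature trick, not taken from the paper.

```python
import math

def deflection_angle(b, M, N=100_000):
    """Deflection of a null geodesic with impact parameter b in an exterior
    Schwarzschild region of mass M (turning point assumed outside the shell)."""
    # Turning point u0: root of 1 - b^2 u^2 (1 - 2 M u) = 0 below the light-ring
    lo, hi = 0.0, 1.0 / (3.0 * M)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 1.0 - b*b*mid*mid*(1.0 - 2.0*M*mid) > 0.0:
            lo = mid
        else:
            hi = mid
    u0 = 0.5 * (lo + hi)
    # Midpoint rule after u = u0*(1 - t^2), which removes the 1/sqrt singularity
    total = 0.0
    h = 1.0 / N
    for i in range(N):
        t = (i + 0.5) * h
        u = u0 * (1.0 - t*t)
        g = 1.0 - b*b*u*u*(1.0 - 2.0*M*u)
        total += 2.0 * u0 * t / math.sqrt(g)
    return 2.0 * b * h * total - math.pi
```

For $b = 100\,M_\infty$ the routine reproduces the weak-field deflection $4M_\infty/b$ to within a few percent, and the deflection grows rapidly as $b$ approaches the critical impact parameter, consistent with the orbiting behaviour discussed below. Extending the quadrature to the full piecewise $A$, $B$ of Eq.~\eqref{AandB} is straightforward when the trajectory crosses the shell.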
In Eqs.\ \eqref{classical_scatt_cross_sec}~and~\eqref{Deflection_angle}, the geodesic deflection angle is related to the scattering angle $\theta$ by \begin{equation}\label{rela_def_angle_scatt_angle} \Theta(b) +2n\pi =\pm \theta \end{equation} with $n \in \mathbb{N}$ chosen such that $\theta$ remains in its interval of definition, i.e.\ $[0,\pi]$, and $b$ is the impact parameter of the scattered null geodesic. There are four main scattering effects, each associated with a specific property of the deflection angle, which we briefly review for the reader before qualitatively describing the scattering of waves in DBH spacetimes. They can be cast into two classes: the divergent class and the interference class. \subsubsection{The divergent class: glories and rainbows} \textit{- The glory:} If the deflection function passes smoothly through $0$ or $\pi$, i.e.\ if geodesics are scattered in the forward or backward direction, then the semi-classical cross section contributions from these geodesics will diverge. Expanding the deflection function around the glory point to linear order, one can cure this singularity and obtain a semi-classical expression for the glory in terms of Bessel functions \cite{ADAM2002229}. The typical behaviour associated with the glory is therefore an increase of the scattering amplitude in the forward or backward direction. The glory effect is well known in black hole physics and also appears in these DBH spacetimes. Fig.\ \ref{fig:critical_geo}\,a) shows one backscattered ray contributing to the glory effect in the case of a DBH studied in the following. \textit{- Rainbow scattering:} From \eqref{classical_scatt_cross_sec}, we can see that $d\sigma/d\Omega $ can diverge for $\theta\neq 0$ or $\pi$ if $d\Theta_{geo}/db = 0$. An impact parameter $b_r$ for which the deflection function is stationary defines an extremal angle, called the rainbow angle $\theta_r$. For $b \sim b_r$, no rays can be deflected beyond $\theta_r$.
This results in interference on one side of the rainbow angle (known as the illuminated side) and exponential decay on the other (known as the dark side). Similarly to the glory, one can expand the deflection function to second order in the vicinity of the rainbow point to obtain a semi-classical description built on the Airy function. The typical behaviour of rainbow scattering is an enhanced amplitude near the rainbow angle, with oscillations on one side of the rainbow and exponential decay on the other side. This effect was shown to appear in astrophysical settings \cite{Dolan:2017rtj} as well as in gravitational analogues \cite{Torres:2022zua}. Fig.\ \ref{fig:critical_geo}\,b) shows rainbow rays in the case of a DBH studied in the following. \subsubsection{The interference class: orbiting and grazing} \textit{- Orbiting:} Orbiting belongs to the interference class of critical effects and is associated with a critical impact parameter $b_c$ for which the deflection angle diverges: $\lim_{b\rightarrow b_c}\Theta_{geo} =\infty$. This implies that geodesics with an impact parameter close to $b_c$ can be deflected with arbitrary angles. In other words, they orbit around the scatterer. This divergence also implies that there will be infinitely many geodesics deflected at any given angle, causing interference at arbitrary angles. Orbiting is well known in black hole physics and is associated with the existence of the light-ring previously mentioned \cite{Andersson:2000tf}. Fig.\ \ref{fig:critical_geo}\,c) shows the orbiting of geodesics in a DBH spacetime. Since orbiting allows geodesics to be deflected to arbitrary angles, it is naturally linked to the glory effect. Indeed, it was shown, using the CAM approach, that the two effects can be incorporated in a unified semi-classical formula built around the properties of surface waves propagating on the light-ring~\cite{Folacci:2019vtt}.
\textit{- Grazing:} Grazing is another critical effect belonging to the interference class. Grazing (also known as an edge effect) appears when the scatterer exhibits a discontinuity \cite{Nus92,Nussenzveig1664764}. The discontinuity of the scatterer leads to a singular point in the deflection function, which defines an extremal angle beyond which rays cannot be deflected. This is similar to the rainbow effect, the main difference being the fact that the deflection function is not stationary at the grazing angle. Therefore, grazing will lead to interference but does not necessarily imply an increase of the scattering amplitude, as was the case in rainbow scattering. Fig.\ \ref{fig:critical_geo}\,d) shows grazing rays in a DBH spacetime studied in the following. \subsection{Critical effects in DBH spacetimes} Equipped with the terminology and intuition from the semi-classical description of critical effects, we now turn our attention to scattering in DBH spacetimes. As we noted in the previous section, there are three main geometric cases, classified by the location of the shell relative to the light-ring radii of the inner and outer masses. \\ \textbf{Case 1: $R_S<3M_{\rm{BH}}$.} In this case, the shell is located inside the inner light-ring and therefore cannot be probed by geodesics escaping to infinity. From the point of view of geodesics, this case is therefore similar to that of an isolated black hole. \\ \textbf{Case 2: $3M_{\rm{BH}}<R_S<3M_{\infty}$.} If the shell lies between the light-rings, then there is the possibility for geodesics to probe each light-ring separately. Each light-ring is associated with a divergence of the deflection angle, leading to orbiting. In the case where both light-rings are accessible to geodesics, we expect two such divergences to be present, associated with orbiting around the inner and outer light-rings.
Hence we further anticipate that there exists a value of $b$, with $b_{\ell_-}<b<b_{\ell_+}$, for which the deflection angle is extremal. This implies the presence of a fold caustic and leads to rainbow scattering~\cite{FORD1959259}. The deflection angle revealing these critical effects is represented in Fig.\ \ref{fig:deflection_angle_rs_4} for the configuration where the shell is located between the two light-rings. In this configuration, and for the choice of parameters $R_S=4M_{\rm{BH}}$ and $M_{\infty}=1.5M_{\rm{BH}}$, the rainbow angle is $\theta_r\approx 35.8^{\circ}$ and is associated with the impact parameter $b_r$. The presence of rainbow scattering in DBH spacetimes was noted in~\cite{Leite:2019uql}, but its characteristic amplification was not clearly observed in their simulation due to the specific choice of DBH configurations. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Deflection_angle_Rs_4.pdf} \caption{Deflection angle as a function of the impact parameter obtained from \eqref{Deflection_angle}. Here we assume $M_\infty = 1.5 M_\text{BH}$, and the shell position $R_s = 4 M_\text{BH}$. The deflection angle diverges at both critical parameters $b_{\ell_-} = 3 \sqrt{3} M_\text{BH}/\sqrt{\alpha}$ and $b_{\ell_+} = 3\sqrt{3} M_\infty$ associated with a light-ring at $r_{\ell_-} = 3M_\text{BH}$ and $r_{\ell_+} = 3M_\infty$ respectively. There is also a stationary point in the deflection angle, i.e.\ $\Theta^{'}_\text{geo}(b_r) = 0$, leading to the rainbow effect.} \label{fig:deflection_angle_rs_4} \end{figure} \\ \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Deflection_angle_Rs_5.pdf} \caption{Deflection angle as a function of the impact parameter obtained from \eqref{Deflection_angle}. Here we assume $M_\infty = 1.5 M_\text{BH}$, and the shell position $R_s = 5 M_\text{BH}$.
The deflection angle diverges at a critical parameter $b_{\ell_-} = 3 \sqrt{3} M_\text{BH}/\sqrt{\alpha}$ associated with a light-ring at $r_{\ell_-} =3M_\text{BH}$. There is also a stationary point in the deflection angle, i.e.\ $\Theta^{'}_\text{geo}(b_r) = 0$, leading to the rainbow effect. A local maximum occurring at $b = b_{R_s}=\sqrt{R_S^3/(R_S-2M_\infty)}$ is also present, leading to grazing. A congruence of geodesics in this specific DBH configuration is represented in Fig.\ \ref{fig:ray_optic_Rs_5}.} \label{fig:deflection_angle_rs_5} \end{figure} \\ \textbf{Case 3: $R_S > 3M_\infty$}. If the shell lies outside $3M_\infty$, a single light-ring is present in the geometry, which is now located \emph{inside} the shell. In this case, the orbiting effect is qualitatively the same as for an isolated black hole and is associated with the critical impact parameter $b = 3\sqrt{3}M_\text{BH}/\sqrt{\alpha}$. In this configuration, a second local maximum of the potential is present. Contrary to the previous case, this maximum is not associated with a second light-ring but rather with the presence of the shell (see Fig.\ \ref{fig:geodesic_potential}). Since this second local maximum is not an (unstable) equilibrium point, it will not be associated with a divergence of the deflection angle but rather with a singular point (i.e.\ a point where the deflection function is not differentiable). It is important to note that the singular point occurring at $b=b_{R_S} = \sqrt{R_S^3/(R_S-2M_\infty)}$ is a local maximum of the deflection angle but does not satisfy the condition $d\Theta_{\text{geo}}/db = 0$, which implies that it is \emph{not} associated with rainbow scattering, contrary to what is stated in~\cite{Leite:2019uql}. Instead, this singular point leads to grazing, as discussed previously. Similar to the previous case, there is an impact parameter associated with rainbow scattering between the orbiting impact parameter and the grazing one.
In this configuration, and for the choice of parameters $R_S=5M_{\rm{BH}}$ and $M_{\infty}=1.5M_{\rm{BH}}$, the rainbow angle is $\theta_r\approx 157.8^{\circ}$. Fig.\ \ref{fig:deflection_angle_rs_5} depicts the deflection function of such a configuration and Fig.\ \ref{fig:ray_optic_Rs_5} depicts a congruence of geodesics in this DBH set-up. The red and olive rays in Fig.\ \ref{fig:ray_optic_Rs_5} represent the rainbow and grazing rays respectively and are both associated with an extremal angle. Note the qualitative difference between the rainbow and grazing rays, which resides in the concentration of geodesics around each critical trajectory. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{ray_optic_Rs_5} \caption{Null geodesics scattered by a DBH. In this case, we assume $M_\infty = 1.5 M_\text{BH}$ and $R_s = 5M_\text{BH}$, and the incident geodesics are equally spaced. The impact parameter varies between $0.95 b_c < b < 1.35 b_c$ with a fixed step size $\Delta b = 0.04 b_c$. The red solid line is the rainbow ray, with impact parameter $b = b_r = 1.1677 b_c$; we can see a high concentration of rays in this direction, leading to the characteristic amplification of the rainbow effect. The olive curve represents the grazing ray. } \label{fig:ray_optic_Rs_5} \end{figure} \section{Waves on a dirty black hole spacetime}\label{sec:waves_in_DBH} We consider a scalar field, $ \Phi(x)$, propagating on the DBH spacetime, governed by the Klein-Gordon equation \begin{equation} \Box \Phi \equiv \frac{1}{\sqrt{-g}} \partial_{\mu} \left( \sqrt{-g} g^{\mu \nu} \partial_{\nu} \Phi \right) = 0, \end{equation} where $g^{\mu \nu}$ is the inverse metric and $g$ is the metric determinant.
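For concreteness, the critical impact parameters identified in the geodesic analysis above can be evaluated numerically. A minimal Python sketch follows; note that the expression used for the interior normalisation $\alpha$, taken here to follow from continuity of the metric at the shell, is an assumption not spelled out in this section:

```python
import math

def critical_impact_parameters(M_bh, M_inf, R_s):
    """Critical impact parameters of the DBH geometry.

    ASSUMPTION: the interior normalisation alpha is fixed by continuity of
    the metric at the shell, alpha = (1 - 2 M_inf/R_s)/(1 - 2 M_bh/R_s).
    """
    alpha = (1.0 - 2.0*M_inf/R_s) / (1.0 - 2.0*M_bh/R_s)
    b_inner = 3.0*math.sqrt(3.0)*M_bh/math.sqrt(alpha)  # inner light-ring, r = 3 M_bh
    b_outer = 3.0*math.sqrt(3.0)*M_inf                  # outer light-ring, r = 3 M_inf
    b_shell = math.sqrt(R_s**3/(R_s - 2.0*M_inf))       # grazing/creeping at r = R_s
    return b_inner, b_outer, b_shell

# Case 3 configuration of the text (units 2 M_bh = 1): M_inf = 1.5 M_bh, R_s = 5 M_bh
b_in, b_out, b_Rs = critical_impact_parameters(0.5, 0.75, 2.5)
```

Under this assumption for $\alpha$, the Case 3 parameters give $b_{\ell_-}\approx 3.18$ and $b_{R_S}\approx 3.95$, i.e.\ $b_{R_S}\approx 1.24\, b_c$ with $b_c = b_{\ell_-}$, inside the range of rays shown in Fig.\ \ref{fig:ray_optic_Rs_5}.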
Performing a standard separation of variables, \begin{equation} \Phi = \frac{1}{r} \sum_{\omega \ell m} \phi_{\omega \ell}(r) Y_{\ell m}(\theta, \phi) e^{-i \omega t} , \label{eq:sepvariables} \end{equation} leads to a radial equation of the form \begin{equation} \label{H_Radial_equation} \left[\frac{d^{2}}{dr_{\ast}^{2}}+\omega^{2}-V_{\ell}(r)\right]\phi_{\omega\ell}= 0, \end{equation} where $V_{\ell}(r)$ is the effective potential, given by \begin{equation} \label{Potential} \begin{aligned} V_{\ell}(r) = &A(r) \Bigl[ \frac{\ell(\ell+1)}{r^2}\\ &+\frac{2}{r^3} \left ( M_{\text{\tiny BH}} \Theta(R_s-r) + M_\infty \Theta(r-R_s) \right) \Bigr] \end{aligned} \end{equation} with $\Theta$ being the Heaviside step function, and $r_{\ast}$ denotes the \emph{tortoise coordinate} defined by \begin{equation} \label{tortoise_coord} \frac{dr_*}{dr} = \frac{1}{\sqrt{A(r)B(r)}}, \end{equation} \textit{viz.} \begin{equation} \label{tortoise_coordinate} r_{\ast}(r) = \left\{ \begin{aligned} &\frac{1}{\sqrt{\alpha}}\left[r +2M_{\text{\tiny BH}}\ln\left(\frac{r}{2M_{\text{\tiny BH}}} -1\right)\right]+ \kappa & \scriptstyle{2M_{\text{\tiny BH}} < r \leq R_s,}\\ & r + 2M_\infty\ln\left(\frac{r}{2M_\infty}-1\right) & \scriptstyle{r > R_s,} \end{aligned} \right. \end{equation} where the constant $\kappa$ is fixed so that $r_{\ast}(r)$ is continuous. In the following, we introduce the modes $\phi_{\omega\ell}^{\text{in}}$ which are solutions of \eqref{H_Radial_equation} and are defined by their behaviour at the horizon $r =2M_{\text{\tiny BH}}$ (i.e., for $r_* \to -\infty$) and at spatial infinity $r \to +\infty$ (i.e., for $r_* \to +\infty$): \begin{eqnarray} \label{bc_in} & & \phi_{\omega \ell}^{\mathrm {in}}(r_{*}) \sim \left\{ \begin{aligned} &\displaystyle{e^{-i\omega r_\ast}} & \, (r_\ast \to -\infty),\\ &\displaystyle{A^{(-)}_\ell (\omega) e^{-i\omega r_\ast} + A^{(+)}_\ell (\omega) e^{+i\omega r_\ast}} & (r_\ast \to +\infty). \end{aligned} \right.
\nonumber\\ & & \end{eqnarray} Here, the coefficients $A^{(-)}_\ell (\omega)$ and $A^{(+)}_\ell (\omega)$ appearing in \eqref{bc_in} are complex amplitudes and allow us to define the scattering $S$-matrix elements, \begin{equation}\label{Matrix_S} S_{\ell}(\omega) = e^{i(\ell+1)\pi} \, \frac{A_{\ell}^{(+)}(\omega)}{A_{\ell}^{(-)}(\omega)}. \end{equation} It should be noted that, due to the choice of coordinates (keeping $r$ as the areal radius on both sides of the shell), the transverse metric is discontinuous at the shell, meaning that we also have to place a boundary condition on the eigenfunctions $\phi_{\omega\ell}$ at the shell \cite{Macedo:2015ikq}: \begin{equation} \begin{aligned} \bigl[ &\sqrt{B(R_s)} \left (R_s \phi_{\omega \ell}^\prime(R_s) - \phi_{\omega \ell}(R_s) \right )\bigr]_{+} \\ &\qquad = \left[ \sqrt{B(R_s)} \left (R_s \phi_{\omega \ell}^\prime(R_s) - \phi_{\omega \ell}(R_s) \right )\right]_{-} \end{aligned} \label{jump_condition} \end{equation} \section{Resonances of the dirty black holes} \label{sec:resonances} \subsection{Quasinormal modes and Regge poles} Resonant modes are characteristic solutions of the wave equation \eqref{H_Radial_equation} satisfying purely ingoing/outgoing boundary conditions at the horizon/infinity. Their spectrum is the set of poles of the scattering matrix $S_{\ell}(\omega)$ \eqref{Matrix_S}, i.e., the set of simple zeros of $A_\ell^{(-)}(\omega)$. It can be seen either as a set of frequencies $\omega_{\ell n}$ in the complex-$\omega$ plane at which the scattering matrix $S_{\ell}(\omega)$ has a simple pole for $\ell \in \mathbb{N}$ and $\omega_{\ell n} \in \mathbb{C}$ (the so-called \textit{quasinormal mode spectrum}), or as a set of angular momenta $\lambda_n(\omega)\equiv \ell_n+1/2$ in the complex-$\lambda$ plane at which the scattering matrix has a simple pole for $\omega \in \mathbb{R}$ and $\lambda_{n}(\omega) \in \mathbb{C}$ (the so-called \textit{Regge pole spectrum}).
Here $n = 1,2,3,\ldots $ labels the different elements of the spectrum and is referred to as the overtone number. The quasinormal mode spectrum of DBHs has been investigated for configurations where $M_\infty - M_{\text{\tiny BH}}\ll M_{\text{\tiny BH}}$ using perturbative techniques \cite{Leung:1997was,Leung:1999iq}. We shall now consider, for the first time, the Regge poles of generic DBHs. \subsection{Numerical method} To compute the QNM/Regge pole spectrum of dirty black holes, we follow the method of Ould El Hadj et al.\ \cite{OuldElHadj:2019kji}, who calculated the Regge pole spectrum of scalar and gravitational waves (in the axial sector) for a gravitating compact body. Their method is an extension of the continued fraction method originally developed by Leaver~\cite{Leaver:1985ax,leaver1986solutions}. The method involves writing the solution to the wave equation \eqref{H_Radial_equation} as a power series around a point $b$ located outside the shell, \begin{equation}\label{ansatz_CFM} \phi_{\omega\ell}(r) = e^{i\omega r_*(r)} \sum_{n=0}^{+\infty} a_n \left(1 - \frac{b}{r}\right)^n\,, \end{equation} where the coefficients $a_n$ obey a four-term recurrence relation: \begin{equation}\label{Recurrence_4_terms} \alpha_n a_{n+1} + \beta_n a_{n} +\gamma_{n} a_{n-1} +\delta_{n} a_{n-2} = 0, \quad \forall n\geq 2 , \end{equation} where \begin{subequations} \begin{eqnarray}\label{Coeffs_3_termes} && \alpha_n = n (n+1)\left(\!1-\frac{2M_\infty}{b}\!\right), \\ && \beta_n = n\left[\left(\!\frac{6M_\infty}{b}-2\!\right)n + 2ib\omega\right] , \\ && \gamma_n = \left(\!1-\frac{6M_\infty}{b}\!\right)n(n-1)-\frac{2M_\infty}{b}-\ell(\ell+1) , \\ && \delta_n = \left(\!\frac{2M_\infty}{b}\!\right)\left(n-1\right)^2 .
\end{eqnarray} \end{subequations} The initialisation coefficients, $a_0$ and $a_1$, are found directly from \eqref{ansatz_CFM}, \begin{eqnarray}\label{Initial_Conds} && a_0 = e^{-i\omega r_*(b)}\phi_{\omega\ell}(b) , \\ && a_1 = b e^{-i\omega r_*(b)}\left(\frac{d}{dr}\phi_{\omega\ell}(r) \Big{|}_{r=b}-\frac{i\omega b}{b-2M_\infty}\phi_{\omega\ell}(b)\right). \end{eqnarray} In practice, the coefficients $a_0$ and $a_1$ are found numerically by integrating \eqref{H_Radial_equation} from the horizon up to $r = b> R_s$. To apply Leaver's method, we first perform a Gaussian elimination step to reduce the four-term recurrence relation to a three-term recurrence relation: \begin{equation} \hat{\alpha}_n a_{n+1} + \hat{\beta}_n a_n + \hat{\gamma}_n a_{n-1} = 0, \end{equation} where we have defined the new coefficients, for $n\geq 2$: \begin{subequations} \begin{eqnarray} &&\hat{\alpha}_n = \alpha_n, \\ && \hat{\beta}_n = \beta_n - \hat{\alpha}_{n-1}\frac{\delta_n}{\hat{\gamma}_{n-1}}, \ \text{and} \\ && \hat{\gamma}_n = \gamma_n - \hat{\beta}_{n-1}\frac{\delta_n}{\hat{\gamma}_{n-1}}. \end{eqnarray} \end{subequations} The series expansion \eqref{ansatz_CFM} is convergent outside the shell provided that $a_n$ is a minimal solution to the recurrence relation and $b/2<R_s<b$ \cite{Benhar:1998au}. The existence of a minimal solution implies that the following continued fraction holds: \begin{equation}\label{eq:CF} \frac{a_1}{a_0} = \frac{-\hat{\gamma}_1}{\hat{\beta}_1 -}\, \frac{\hat{\gamma}_2 \hat{\alpha}_1}{\hat{\beta}_2 -} \, \frac{\hat{\gamma}_3 \hat{\alpha}_2}{\hat{\beta}_3 -} ... \end{equation} The above relation (or any of its inversions), written in the standard form of continued fractions, is the equation we solve in order to find the RP/QNM spectrum.
In practice, we fix $\omega$ (equivalently $\ell$) and define a function $f(\ell\in \mathbb{C})$ (equivalently $f(\omega \in \mathbb{C})$) that gives the difference between the left-hand side and the right-hand side of the condition \eqref{eq:CF}. We then find the zeros of the function $f$ starting from an initial guess. An alternative to the continued fraction expression is to use the Hill determinant~\cite{mp}, where one looks for the zeros of the following determinant \begin{equation} \label{Determinant_Hill_4_termes} D = \begin{vmatrix} \beta_0 & \alpha_0 & 0 & 0 & 0 & \ldots & \ldots & \ldots \\ \gamma_1 & \beta_1 & \alpha_1 & 0 & 0 & \ldots & \ldots & \ldots \\ \delta_2 & \gamma_2 & \beta_2 & \alpha_2 & 0 & \ldots & \ldots & \ldots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ldots & \ldots \\ \vdots & \vdots & \delta_{n-1} & \gamma_{n-1} & \beta_{n-1} & \alpha_{n-1} & \ddots & \ldots \\ \vdots & \vdots & \vdots & \delta_n & \gamma_{n} & \beta_{n} & \alpha_{n} & \ddots \\ \vdots & \vdots & \vdots & \vdots & \ddots & \ddots & \ddots & \ddots \end{vmatrix} = 0. \end{equation} Writing $D_n$ for the determinant of the leading $(n+1) \times (n+1)$ submatrix of $D$, it satisfies, for $n \geq 3$, the recurrence \begin{equation} \label{derecurrence_4_termes} D_n=\beta_n D_{n-1} - \gamma_{n}\alpha_{n-1}D_{n-2} + \delta_n \alpha_{n-1} \alpha_{n-2} D_{n-3} , \end{equation} with the initial conditions \begin{equation} \label{Determinant_initial_conds} \begin{split} D_0 &=\beta_0, \\ D_1 &=\beta_1\beta_0-\gamma_1\alpha_0 , \\ D_2 &=\beta_0(\beta_1 \beta_2 - \alpha_1 \gamma_2) - \alpha_0(\gamma_1 \beta_2 - \alpha_1 \delta_2) . \end{split} \end{equation} The Regge poles (QNM frequencies) are then found by numerically solving for the roots $\lambda_n$ ($\omega_n$) of $D_n = 0$. We note that, to confirm our results, we have used both methods and they give the same results up to numerical precision.
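The Hill-determinant recurrence is straightforward to implement. The Python sketch below computes the leading principal minors $D_n$ with generic coefficient callables; the $D_2$ seed is written as the direct cofactor expansion of the $3\times 3$ leading block, and in applications the $D_n$ would be rescaled at each step to avoid overflow before root-finding in $\lambda$ or $\omega$:

```python
def hill_minors(alpha, beta, gamma, delta, N):
    """Leading principal minors D_0..D_N of the banded Hill determinant,
    computed with the four-term recurrence for D_n (n >= 3).

    D_2 below is the cofactor expansion of the 3x3 leading block
    [[b0, a0, 0], [g1, b1, a1], [d2, g2, b2]] along its first row.
    """
    D = [beta(0),
         beta(1)*beta(0) - gamma(1)*alpha(0),
         beta(0)*(beta(1)*beta(2) - alpha(1)*gamma(2))
         - alpha(0)*(gamma(1)*beta(2) - alpha(1)*delta(2))]
    for n in range(3, N + 1):
        D.append(beta(n)*D[n-1] - gamma(n)*alpha(n-1)*D[n-2]
                 + delta(n)*alpha(n-1)*alpha(n-2)*D[n-3])
    return D
```

A root finder is then applied to the map $\lambda \mapsto D_N(\lambda)$ (or $\omega \mapsto D_N(\omega)$) at a truncation order $N$ large enough for the roots to have converged.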
\subsection{Results: The Regge pole spectrum} In order to check the robustness of our numerical code, we first compute the QNM spectrum for a DBH configuration considered by Leung et al.\ \cite{Leung:1997was,Leung:1999iq}, i.e., for $M_\infty = 1.02 M_{\text{\tiny BH}}$, $R_s = 2.52 M_{\text{\tiny BH}}$ and assuming $M_{\text{\tiny BH}} =1/2$. \begin{figure}[htb] \centering \includegraphics[scale=0.50]{QNMs_Plot_DBH_n_1_10} \caption{\label{fig:QNMs_Plot_DBH_n_1_10} The ($\ell = 1, n = 1,\ldots,10$) quasinormal modes of the scalar field. The results agree with Fig.\ 4 of \cite{Leung:1999iq}. We assume $2M_{\text{\tiny BH}} = 1$.} \end{figure} \begingroup \begin{table}[htp] \caption{\label{tab:table1} A sample of the first quasinormal frequencies $\omega_{\ell n}$ of the scalar field. The radius of the thin shell is \mbox{$R_s = 2.52M_{\text{\tiny BH}}$} and the ADM mass is $M_\infty = 1.02 M_{\text{\tiny BH}}$. We assume $2M_{\text{\tiny BH}}=1$.} \smallskip \centering \begin{ruledtabular} \begin{tabular}{ccc} $\ell$ & $n$ & $2M_\infty\omega_{\ell n}$ \\ \hline $1$ & $1$ & $0.586628 - 0.190512 i$ \\[-1ex] & $2$ & $0.538645 - 0.601030 i$ \\[-1ex] & $3$ & $0.478474 - 1.064024 i$ \\[-1ex] & $4$ & $0.435656 - 1.555056 i$ \\[-1ex] & $5$ & $0.408698 - 2.055042 i$ \\[-1ex] & $6$ & $0.392365 - 2.556656 i $ \\[-1ex] & $7$ & $0.381985 - 3.058495 i$ \\[-1ex] & $8$ & $0.376255 - 3.559562 i $ \\[-1ex] & $9$ & $0.372953 - 4.060316 i $ \\[-1ex] & $10$ & $0.372266 - 4.560224 i$ \\ \end{tabular} \end{ruledtabular} \end{table} \endgroup In Fig.\ \ref{fig:QNMs_Plot_DBH_n_1_10}, we show the QNM spectrum corresponding to the DBH configuration studied by Leung et al.\ for ($\ell =1, n = 1,\ldots, 10$). We see that it agrees with Fig.\ 4 of~\cite{Leung:1999iq}. The data for the quasinormal frequencies $\omega_{\ell n}$ are listed in Table~\ref{tab:table1}.
In Figs~\ref{fig:PRs_2Mw_3_2Mw_6_Rs_4}, \ref{fig:PRs_2Mw_16_2Mw_32_Rs_4} and \ref{fig:PRs_2Mw_16_2Mw_32_Rs_5}, we present the numerical results for the Regge pole spectrum of DBHs in two configurations: (i) a DBH where the shell is located between the inner and outer light-rings ($3M_\text{\tiny BH} < R_s < 3M_\infty$), so that both light-rings are present in the geometry, and (ii) a DBH where the shell is located outside the outer light-ring ($R_s > 3M_\infty$), so that only a single light-ring is present in the geometry, but this is \emph{inside} the shell. The Regge poles for each configuration are presented for various frequencies. Fig.\ \ref{fig:PRs_2Mw_3_2Mw_6_Rs_4} shows the Regge pole spectrum for the first configuration with parameters $M_\infty = 1.5M_{\text{\tiny BH}}$ and \mbox{$R_s = 4M_{\text{\tiny BH}}$} for two different frequencies, $2M_\infty\omega = 3$ and $6$. For both frequencies, the Regge pole spectrum exhibits two branches, represented in blue and red. For low overtones, the two branches merge, with the distinction becoming clear at higher overtones. Note that the splitting occurs for smaller values of $n$ as one increases the frequency. We can identify the origin of both branches as coming from the existence of two light-rings if $3M_{\rm{BH}}<R_S<3M_\infty$, or from the inner light-ring and the shell if $R_S > 3M_{\infty}$. This can be seen from the low-overtone origin of the branches. Indeed, for the fundamental modes, the Regge poles can be estimated from the critical impact parameters $b_c=b_{\ell_\pm}$ or $b_c = b_{R_S}$, i.e., $\text{Re}(\lambda_n(\omega)) \sim \omega b_{c}$~\cite{PhysRevD.81.024031}, with $b_{\ell_+} = 3\sqrt{3} M_\infty$, $b_{\ell_-} = 3\sqrt{3} M_{\text{\tiny BH}}/\sqrt{\alpha}$ and $b_{R_S} = \sqrt{R_S^3/(R_S-2M_\infty)}$.
For example, for the configuration with $M_{\infty} = 1.5M_{\text{\tiny BH}}$, $R_S = 5M_{\text{\tiny BH}}$ and $2M_{\infty}\omega = 32$, we find that the real parts of the fundamental Regge poles associated with the inner photon sphere and the shell are approximately $66.88$ and $84.33$ respectively, which agree remarkably well with the numerical values presented in Table~\ref{tab:table3}. \begin{figure}[htb] \centering \includegraphics[scale=0.50]{PRs_2Mw_3_2Mw_6_Rs_4} \caption{\label{fig:PRs_2Mw_3_2Mw_6_Rs_4} The Regge poles $\lambda_n(\omega)$ for the scalar field in a DBH spacetime with parameters $M_\infty =1.5M_{\rm{BH}}$, $R_S = 4M_{\rm{BH}}$ and for frequencies $2M_{\infty}\omega = 3$ (upper panel) and $2M_{\infty}\omega = 6$ (lower panel). (We take $2M_{\text{\tiny BH}} = 1$ to produce these plots). In both panels, the blue circle and red square branches correspond to the outer and inner surface waves for the DBH spacetime while the black diamond branch is the one for an isolated black hole of mass $M_{\text{\tiny BH}}$.} \end{figure} \begin{figure}[htb] \centering \includegraphics[scale=0.50]{PRs_2Mw_16_2Mw_32_Rs_4} \caption{\label{fig:PRs_2Mw_16_2Mw_32_Rs_4} The Regge poles $\lambda_n(\omega)$ for the scalar field in the DBH spacetime with parameters $M_\infty =1.5M_{\rm{BH}}$ and $R_s = 4M_{\rm{BH}}$ at frequencies $2M_{\infty}\omega = 16$ (upper panel) and $2M_{\infty}\omega = 32$ (lower panel). We assume $2M_{\text{\tiny BH}} = 1$. In both panels, the blue circle and red square branches correspond to the outer and inner surface waves for the DBH spacetime while the black diamond branch is the one for an isolated black hole.
The purple triangles in the lower panel depict the third branch of broad resonances.} \label{fig:rp1} \end{figure} \begin{figure}[htb] \centering \includegraphics[scale=0.50]{PRs_2Mw_16_2Mw_32_Rs_5} \caption{\label{fig:PRs_2Mw_16_2Mw_32_Rs_5} The Regge poles $\lambda_n(\omega)$ for the scalar field in the DBH spacetime with parameters $M_\infty =1.5M_{\rm{BH}}$ and $R_s = 5M_{\rm{BH}}$ at frequencies $2M_{\infty}\omega = 16$ (upper panel) and $2M_{\infty}\omega = 32$ (lower panel). We assume $2M_{\text{\tiny BH}} = 1$. In both panels, the blue circle and red square branches correspond to the outer (creeping modes) and inner surface waves for the DBH spacetime while the black diamond branch is the one for an isolated black hole. The purple triangles depict the third branch of broad resonances. As the frequency increases the surface wave branches move away and more broad resonances appear in between them.} \label{fig:rp2} \end{figure} Fig.\ \ref{fig:PRs_2Mw_16_2Mw_32_Rs_4} also shows the Regge pole spectrum for the first configuration with the same parameters, but at two higher frequencies, $2M_\infty\omega = 16$ and $32$. The structure of the spectrum remains the same; however, at the highest frequency (lower panel), we see the emergence of a new branch (purple triangles) between the two branches associated with the inner and outer light-rings. Fig.\ \ref{fig:PRs_2Mw_16_2Mw_32_Rs_5} shows the Regge pole spectrum for $2M_\infty\omega =16$ and $32$, now for the second configuration ($R_s >3M_\infty$) with parameters $M_\infty=1.5 M_{\text{\tiny BH}}$ and $R_s = 5 M_{\text{\tiny BH}}$. The structure remains the same for both frequencies, but the number of poles in the middle branch increases with frequency. The Regge poles fall into three distinct classes, and they are relatively similar to the Regge pole spectrum studied in the case of compact objects (see \cite{OuldElHadj:2019kji}).
Therefore, we adopt the terminology already introduced by these authors (see also Nussenzveig~\cite{nussenzveig2006diffraction} for the origin of this terminology): \begin{enumerate} \item \emph{Broad resonances}: poles whose imaginary part is approximately constant, so that the branch runs parallel to the real axis with an approximately uniform spacing. They are sensitive to the position of the shell and to its internal structure ($r < R_s$), i.e., its matter content. \item \emph{Inner surface waves}: strongly damped modes whose behaviour is entirely determined by the geometry of the object. The lowest modes can be associated with waves propagating on the light-ring $r_{\ell_-} = 3M_\text{\tiny BH}$, i.e., with the impact parameter \mbox{$b_{\ell_-} =3\sqrt{3}M_{\text{\tiny BH}}/\sqrt{\alpha}$}. \item \emph{Outer surface waves}: modes that depend essentially on the geometry of the object. They are highly damped and the lowest modes can be associated with: \begin{enumerate}[label=(\roman*)] \item the \textit{surface waves} propagating on the outer light-ring $r_{\ell_+} = 3M_\infty$, i.e.\ with the impact parameter $b_{\ell_+} =3\sqrt{3} M_\infty$, for the configuration where the shell is located between the two light-rings ($3M_\text{BH}< R_s< 3M_\infty$); \item the \textit{creeping modes} propagating along the shell surface at $r=R_s$, with impact parameter $b_{R_S} = \sqrt{\frac{R_s^3}{R_s-2M_\infty}}$, for the configuration where the shell is outside $3M_\infty$. They are generated by the edge (grazing) rays in the edge region. \end{enumerate} \end{enumerate} In Figs~\ref{fig:PRs_2Mw_3_2Mw_6_Rs_4}, \ref{fig:PRs_2Mw_16_2Mw_32_Rs_4} and \ref{fig:PRs_2Mw_16_2Mw_32_Rs_5} the outer surface waves (surface waves and/or creeping modes), broad resonances, and inner surface waves are shown as red squares, purple triangles, and blue circles, respectively.
The black diamonds represent the Regge poles of the isolated Schwarzschild black hole of mass $M_{\text{\tiny BH}}$. The lowest Regge poles are listed in (i) Table~\ref{tab:table2} for DBH with parameters $M_\infty=1.5 M_{\text{\tiny BH}}$ and $R_s = 4 M_{\text{\tiny BH}}$ and for $2M_\infty\omega = 3$, $6$, $16$ and $32$, and (ii) Table~\ref{tab:table3} for DBH with parameters $M_\infty=1.5 M_{\text{\tiny BH}}$ and $R_s = 5 M_{\text{\tiny BH}}$ and for $2M_\infty\omega = 16$ and $32$. \begingroup \begin{table*}[htp] \begin{threeparttable}[htp] \caption{\label{tab:table2} The lowest Regge poles $\lambda_{n}(\omega)$ for the scalar field. The radius of the thin shell is $R_s = 4M_{BH}$ and the ADM mass is $M_\infty = 1.5 M_{BH}$.} \smallskip \centering \begin{ruledtabular} \begin{tabular}{ccccc} $2M_\infty\omega$ & $n$ & $\lambda^{\text{(O-S-W)\tnote{1}}}_n(\omega)$ & $\lambda^{\text{(I-S-W)\tnote{2}}}_n(\omega)$ & $\lambda^{\text{(B-R)\tnote{3}}}_n(\omega)$ \\ \hline $3$ & $1$ & $7.539449 + 0.223849 i$ & $7.095909 + 1.687068 i $& $/ $ \\[-1ex] & $2$ & $7.134677 + 0.790874 i$ & $7.022852 + 3.3708566 i$ & $/ $ \\[-1ex] & $3$ & $6.956727 + 2.438228 i$ & $6.980154 + 5.790685 i $ & $/ $ \\[-1ex] & $4$ & $6.934368 + 4.114637 i$ & $7.049810 + 7.549115 i $ & $/ $ \\[-1ex] & $5$ & $7.154584 + 5.016518 i$ & $7.063185 + 9.371734 i $ & $/ $ \\[-1ex] & $6$ & $7.390189 + 6.5269387i$ & $7.066195 + 11.160583 i $ & $/ $ \\[-1ex] & $7$ & $7.652774 + 7.865429 i$ & $7.067851 + 12.940759 i$ & $/ $ \\[-1ex] & $8$ & $8.001365 + 9.087813 i$ & $7.062229 + 14.719719 i $ & $/ $ \\[-1ex] & $9$ & $8.362859 + 10.289717i$ & $7.051567 + 16.502463 i $ & $/ $ \\[-1ex] & $10$ & $8.715833 + 11.441810i$ & $7.039533 + 18.290428 i $ & $/ $ \\[+1ex] $6$ & $1$ & $15.370841+0.179585 i$ & $14.611707+1.414356 i $ & $/ $ \\[-1ex] & $2$ & $14.548787+0.528288 i $ & $14.209917+3.071463 i$ & $/ $ \\[-1ex] & $3$ & $14.255707+1.927388 i $ & $13.820181+4.931434 i $ & $/ $ \\[-1ex] & $4$ & $14.075994 + 3.476095 i $ & 
$13.520911+6.640027 i $ & $/ $ \\[-1ex] & $5$ & $14.169925+4.925956 i $ & $13.303758+8.406231 i $ & $/ $ \\[-1ex] & $6$ & $14.352709+6.542016 i $ & $13.132015+10.207303 i$ & $/ $ \\[-1ex] & $7$ & $14.572460+8.082838 i $ & $12.995979+12.028643 i $ & $/ $ \\[-1ex] & $8$ & $14.832787+9.558134 i $ & $12.889802+13.859215 i $ & $/ $ \\[-1ex] & $9$ & $15.124109+10.975998 i $ & $12.808015+15.691713 i $ & $/ $ \\[-1ex] & $10$ & $15.437413+12.345040 i $ & $12.745238+17.521595 i $ & $/ $ \\[+1ex] $16$ & $1$ & $41.670935 + 0.046352 i $ & $39.308955 + 0.526143 i$& $/ $ \\[-1ex] & $2$ & $40.757178 + 0.815017 i $ & $39.123982 + 0.945890 i $ & $/ $\\[-1ex] & $3$ & $39.944801 + 2.350849 i$ & $38.477482 + 2.002715 i$& $/ $ \\[-1ex] & $4$ & $39.562328 + 4.058189 i $ & $37.936546 + 3.242729 i$& $/ $ \\[-1ex] & $5$ & $39.354113 + 5.746829 i$ & $37.428640 + 4.601256 i $ & $/ $\\[-1ex] & $6$ & $39.253632 + 7.417623 i $ & $36.946769 + 6.047137 i $& $/ $ \\[-1ex] & $7$ & $39.231469 + 9.068162 i $ & $36.487997 + 7.562487 i $ & $/ $\\[-1ex] & $8$ & $39.269947 + 10.696820 i$ & $36.051294 + 9.134979 i $ & $/ $\\[-1ex] & $9$ & $39.357086 + 12.302735 i$ & $35.636215 + 10.755451 i $& $/ $ \\[-1ex] & $10$ & $39.484150 + 13.885587 i $ & $35.242599 + 12.416733 i $ & $/ $\\[+1ex] $32$ & $1$ & $83.931483+0.001076 i $ & $78.343315+0.569228 i$ & $78.717561+0.863328 i$ \\[-1ex] & $2$ & $82.718881+0.302994 i $ & $77.708251+1.655406 i $ & $80.826502+1.279706 i$\\[-1ex] & $3$ & $81.765639+1.257533 i $ & $77.049161+2.705208 i $ & $ / $\\[-1ex] & $4$ & $80.934661+2.942484 i $ & $76.407917+3.858777 i $ & $ / $\\[-1ex] & $5$ & $80.484718+4.574505 i $ & $75.788245+5.093969 i $ & $ /$\\[-1ex] & $6$ & $80.144197+6.211451 i $ & $75.189451+6.395209 i $ & $/ $ \\[-1ex] & $7$ & $79.887645+7.854207 i $ & $74.609971+7.751822 i $ & $/ $ \\[-1ex] & $8$ & $79.697425+9.498636 i $ & $74.048266+9.156061 i $ & $/ $ \\[-1ex] & $9$ & $79.561618+11.141534 i$ & $73.503036+10.602050 i $ & $/ $ \\[-1ex] & $10$ &$79.471725+12.780560 
i $ & $72.973244+12.085172 i $ & $/ $ \\ \end{tabular} \end{ruledtabular} \begin{tablenotes} \item[1] O-S-W : Outer surface waves \item[2] I-S-W : Inner surface waves \item[3] B-R : Broad resonances \end{tablenotes} \end{threeparttable} \end{table*} \endgroup \begingroup \begin{table*}[htp] \begin{threeparttable}[htp] \caption{\label{tab:table3} The lowest Regge poles $\lambda_{n}(\omega)$ for the scalar field. The radius of the thin shell is $R_s = 5M_{BH}$ and the ADM mass is $M_\infty = 1.5M_{BH}$.} \smallskip \centering \begin{ruledtabular} \begin{tabular}{ccccc} $2M_\infty\omega$ &$n$ & $\lambda^{\text{(O-S-W)\tnote{1}}}_n(\omega)$ & $\lambda^{\text{(I-S-W)\tnote{2}}}_n(\omega)$ & $\lambda^{\text{(B-R)\tnote{3}}}_n(\omega)$ \\ \hline $16$ & $1$ & $41.860118 + 1.784810 i $ & $33.945579 + 0.477813 i $ & $35.086529 + 1.294345 i$ \\[-1ex] & $2$ & $42.120571 + 3.964122 i $ & $33.695542 + 1.278693 i $ & $37.393437 + 1.734247 i$ \\[-1ex] & $3$ & $42.451238 + 6.041693 i $ & $33.057993 + 2.190304 i $ & $/ $ \\[-1ex] & $4$ & $42.818104 + 8.030389 i $ & $32.399192 + 3.251233 i $ & $/ $ \\[-1ex] & $5$ & $43.205922 + 9.949368 i $ & $31.759081 + 4.409909 i $ & $/ $ \\[-1ex] & $6$ & $43.608414 + 11.811534 i $ & $31.143492 + 5.647996 i $ & $/ $ \\[-1ex] & $7$ & $44.022224 + 13.625389 i $ & $30.554809 + 6.952932 i $ & $ /$ \\[-1ex] & $8$ & $44.445123 + 15.396893 i $ & $29.994263 + 8.315144 i $ & $/ $ \\[-1ex] & $9$ & $44.875425 + 17.130499 i $ & $29.462624 + 9.726841 i $ & $/ $ \\[-1ex] & $10$ & $45.311767 + 18.829705 i $ & $28.960402 + 11.181362 i $ & $/ $ \\[+1ex] $32$ & $1$ & $84.123387 + 2.204783 i $ & $67.582661 + 1.284484 i$ & $68.724166 + 1.194667 i $\\[-1ex] & $2$ & $84.576124 + 4.572500 i $ & $66.893567 + 2.072605 i$ & $70.375698 + 1.488244 i $\\[-1ex] & $3$ & $85.005575 + 6.802744 i$ & $66.166349 + 2.972146 i$ & $72.419573 + 1.755117 i $\\[-1ex] & $4$ & $85.428874 + 8.940819 i $ & $65.438785 + 3.951276 i$ & $74.941664 + 2.039113 i $\\[-1ex] & $5$ & $85.849013 + 
11.011312 i$ & $64.718695 + 4.996764 i$ & $78.206962 + 2.394064 i $\\[-1ex] & $6$ & $86.267934 + 13.029013 i$ & $64.009870 + 6.099179 i$ & $/ $ \\[-1ex] & $7$ & $86.687058 + 15.003322 i$ & $63.314252 + 7.251465 i$ & $/ $ \\[-1ex] & $8$ & $87.107350 + 16.940586 i$ & $62.632891 + 8.448162 i$ & $/ $ \\[-1ex] & $9$ & $87.529435 + 18.845339 i$ & $61.966385 + 9.684914 i$ & $/ $ \\[-1ex] & $10$ &$87.953689 + 20.720964 i$ & $61.315094 + 10.958141i$ & $/ $ \\ \end{tabular} \end{ruledtabular} \begin{tablenotes} \item[1] O-S-W : Outer surface waves (creeping modes) \item[2] I-S-W : Inner surface waves \item[3] B-R : Broad resonances \end{tablenotes} \end{threeparttable} \end{table*} \endgroup \subsection{The WKB approximation} In order to gain insight into the physical origin of the broad resonances identified in Figs.~\ref{fig:rp1} and~\ref{fig:rp2}, we now turn to the WKB approximation. \subsubsection{Propagation of WKB modes} We begin by constructing the WKB solution to \eqref{H_Radial_equation} by assuming that solutions are rapidly oscillating functions with slowly varying amplitudes: \begin{equation} \phi_\ell (r_*) = A(r_*)e^{i \int p(r'_*) dr_*'}. \end{equation} Inserting this ansatz into the wave equation, and assuming that $|p'| \ll |p^2| $ and $|A'| \ll |pA|$, we get that \begin{equation} p(r_*)^2 + V(r_*) = 0, \quad A = a |p|^{-1/2}, \end{equation} where $a$ is a constant coefficient, $V = V_\ell - \omega^2$, and we suppress $\omega$ and $\ell$ indices for clarity. The first of the above equations is quadratic in $p$, thus it will admit two solutions that can be interpreted as waves travelling radially inward and outward. Taking into account contributions from both modes, we can write the following WKB solution to the wave equation: \begin{equation} \phi_\ell = |p|^{-1/2}\left( a^{in} e^{-i\int{p dr_*}} + a^{out} e^{i\int{p dr_*}}\right). 
\end{equation} We see from the above expression that, in regions where the WKB approximation holds, the only change to the amplitudes of the modes comes from the variation of the function $p$. It is also clear that the WKB approximation breaks down at those points which satisfy $V(r_*) = 0$, since it implies that $p = 0$, leading to an infinite amplitude of the WKB modes. The points satisfying $V(r_*) = 0$ are called turning points, and one can write a WKB solution on both sides of the turning points, with different coefficients $(a^{in},a^{out})$ on each side. One then relates the WKB modal coefficients on each side of the turning points by applying connection formulae (see~\cite{Berry_1972}). Between turning points, the WKB modes do not mix and their propagation between two points $r_{*i}$ and $r_{*j}$ is captured by the following propagation matrices: \begin{equation} \binom{a^{out}_i}{a^{in}_i} = P_{ij}\binom{a^{out}_j}{a^{in}_j}, \end{equation} where \begin{equation} P_{ij} = \begin{pmatrix} e^{-iS_{ij}} & 0 \\ 0 & e^{iS_{ij}} \end{pmatrix} \quad \text{when}\quad p^2(r_{*i}<r_*<r_{*j}) > 0, \end{equation} or \begin{equation} P_{ij} = \begin{pmatrix} 0 & e^{-S_{ij}} \\ e^{S_{ij}} & 0 \end{pmatrix} \quad \text{when}\quad p^2(r_{*i}<r_*<r_{*j}) < 0. \end{equation} In the above formula, $S_{ij}$ is the WKB action between the points $r_{*i}$ and $r_{*j}$ and is given by \begin{equation} S_{ij} = \int_{r_{*i}}^{r_{*j}} |p(r_*)|dr_*. \end{equation} Combining the propagation matrix for modes below the potential barrier and the connection matrices across isolated turning points, we define the tunneling matrix, $T_{12}$, which connects WKB modes on the outside of two turning points $r_{*_1}$ and $r_{*_2}$: \begin{equation} T_{12} = \begin{pmatrix} 1/\mathcal{T} & -\mathcal{R}/\mathcal{T} \\ \mathcal{R}^*/\mathcal{T} & 1/\mathcal{T}^*
\end{pmatrix}, \end{equation} where $\mathcal{R}$ and $\mathcal{T}$ are the local reflection and transmission coefficients across the potential barrier and are given by: \begin{eqnarray} \mathcal{R} &=& -i\frac{1-e^{-2S_{12}}/4}{1+e^{-2S_{12}}/4}, \\ \mathcal{T} &=& \frac{e^{-2S_{12}}}{1+e^{-2S_{12}}/4}. \end{eqnarray} \subsubsection{WKB modes across the shell} In order to construct the full WKB solution across the entire radial range, we also need to connect WKB modes on both sides of the shell. Since we are only interested in the qualitative behaviour of the WKB modes in order to interpret the broad resonances, we assume here that the jump discontinuity in the potential is much smaller than the frequency of the WKB modes, $\omega$, and that this frequency does not vary significantly near the shell. Within this framework, we write the WKB solution for $r\sim R_s$ as: \begin{equation} \phi_{\omega\ell}(r_*) = \begin{cases} a^{out}_{L}e^{i\omega r_*} + a^{in}_L e^{-i\omega r_*} \ \text{for} \ r_* \leq R_s^*, \\ a^{out}_{R}e^{i\omega r_*} + a^{in}_R e^{-i\omega r_*} \ \text{for} \ r_* \geq R_s^*. \end{cases} \end{equation} From the junction condition at the shell given in \eqref{jump_condition}, we can define the connection matrix, $\mathcal{C}$, which connects the WKB modal coefficients on the left and right sides of the shell: \begin{equation} \mathcal{C} = \begin{pmatrix} 1 & - e^{-2i\omega R_s} \frac{\Delta}{2i\omega} \\ e^{2i\omega R_s} \frac{\Delta}{2i\omega} & 1 \end{pmatrix}, \end{equation} where $\Delta = \frac{\sqrt{AB}_+ - \sqrt{AB}_{-}}{R_s}$. Note that $\Delta$ is not the discontinuity of the potential but rather the coefficient entering the discontinuity of the field's derivative. From the connection matrix, we can also define a reflection coefficient across the shell: \begin{eqnarray} \mathcal{R}_s = \frac{\Delta}{2i\omega}. \end{eqnarray} Note that the reflection across the discontinuity is of order $\mathcal{O}(\omega^{-1})$.
This is because the metric itself is discontinuous at the shell, in contrast with the case of a compact object, which usually has a continuous metric and for which the reflection coefficient at the surface of the object is at least of order $\mathcal{O}(\omega^{-2})$~\cite{Berry_1982,Zhang:2011pq}. \subsubsection{WKB estimate of the broad resonances} We now have all the necessary quantities to relate the modal coefficients at $r_{*}\rightarrow -\infty$ to those at $r_* \rightarrow + \infty$. This is done by combining the propagation, tunneling and connection matrices as follows (see Fig.\ \ref{fig:WKB} for an illustration): \begin{figure} \includegraphics[trim=1cm 0 0 0, scale=0.7]{WKB_broadres.pdf} \caption{Illustration of the connection formula \eqref{eq:connection_WKB}, obtained by combining the propagation, tunneling and connection matrices to relate the WKB modal coefficients across the turning points and the shell. The oscillating (growing/decaying) lines represent the propagation of WKB modes with real (imaginary) momenta $p$. } \label{fig:WKB} \end{figure} \begin{equation}\label{eq:connection_WKB} \binom{a^{out}_{-\infty}}{a^{in}_{-\infty}} = P_{-\infty 1}\times T_{12} \times P_{2R_s} \times \mathcal{C} \times P_{R_s\infty} \binom{a^{out}_{\infty}}{a^{in}_\infty}. \end{equation} Solving the above system for outgoing boundary conditions, that is $a^{out}_{-\infty} = a^{in}_{\infty}=0$, leads to the following condition: \begin{equation} e^{-2iS_{2R_s} - 2i\omega R_s} = \mathcal{R}\mathcal{R}_s. \end{equation} We now anticipate that the broad resonances will be slowly damped, so that the real part of their RPs is much greater than their imaginary part. We write $\lambda = \lambda_R + i\Gamma$, with $\Gamma \ll \lambda_R$.
The real and imaginary parts of the above condition give \begin{equation} \sin\left( S_{2R_s}(\lambda_R) + \omega R_s - \frac{\pi}{4} \right) = 0 \ \ \text{and} \ \ \Gamma = \frac{\ln(|\mathcal{R}\mathcal{R}_s|)}{2\partial_\lambda S_{2R_s} \big|_{\lambda_R}}. \end{equation} Hence the location of the shell sets the real part of the broad resonances in the RP spectrum, while its matter content sets their imaginary part. Contrary to the other branches of the RP spectrum, the above system admits a finite number of solutions. In particular, the turning points must be located between the outer light-ring and the shell, which limits the range of $\lambda_R$. Solving the above condition numerically for the shell parameters of Fig.\ \ref{fig:rp1}, we find that there are only two solutions, $\lambda_{\mathrm{WKB}}^{(1)} = 35.6946 + 1.79675 i $ and $\lambda_{\mathrm{WKB}}^{(2)} = 38.2285 + 2.8487 i$, which differ from the values found numerically by $2.2\%$ and $3.7\%$, respectively. \section{Wave Scattering by a dirty black hole}\label{sec:scattering} In this section, we compute the differential scattering cross sections $d\sigma/d\Omega$ for plane monochromatic scalar waves impinging upon a DBH using the partial wave expansion, and we compare them with the results constructed from CAM representations of these cross sections by means of the Sommerfeld-Watson transform and the Cauchy theorem~\cite{Newton:1982qc,Watson18,Sommerfeld49}. \subsection{The differential scattering cross section: Partial wave expansion} The differential scattering cross section for a scalar field is given by \cite{Futterman:1988ni} \begin{equation}\label{Scalar_Scattering_diff} \frac{d\sigma}{d\Omega} = |f(\omega,\theta)|^2, \end{equation} where \begin{equation}\label{Scalar_Scattering_amp} f(\omega,\theta) = \frac{1}{2 i \omega} \sum_{\ell = 0}^{\infty} (2\ell+1) [S_{\ell}(\omega)-1]P_{\ell}(\cos\theta) \end{equation} denotes the scattering amplitude.
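As a sketch of how the partial wave sum \eqref{Scalar_Scattering_amp} is evaluated in practice, the snippet below truncates the sum for a set of purely hypothetical, unitary $S$-matrix elements $S_\ell = e^{2i\delta_\ell}$ (the toy phase shifts are not those of the DBH; computing the actual $S_\ell(\omega)$ requires solving the radial equation):

```python
import numpy as np
from scipy.special import eval_legendre

def scattering_amplitude(omega, theta, delta):
    """Truncated partial wave sum f = (1/2iw) sum_l (2l+1)(S_l - 1) P_l(cos theta)."""
    f = 0.0 + 0.0j
    for ell, d in enumerate(delta):
        S_ell = np.exp(2j * d)  # unitary S-matrix element built from a phase shift
        f += (2 * ell + 1) * (S_ell - 1.0) * eval_legendre(ell, np.cos(theta))
    return f / (2j * omega)

omega = 1.0
delta = [0.8 * np.exp(-0.5 * ell) for ell in range(20)]  # hypothetical phase shifts
f = scattering_amplitude(omega, np.pi / 3, delta)
dsigma = abs(f) ** 2  # differential cross section |f|^2
```

For unitary $S_\ell$, the forward amplitude satisfies $\mathrm{Im}\, f(\omega,0) = \omega^{-1}\sum_\ell (2\ell+1)\sin^2\delta_\ell \geq 0$, which provides a simple consistency check on the implementation.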
In \eqref{Scalar_Scattering_amp}, the functions $P_{\ell}(\cos\theta)$ are the Legendre polynomials~\cite{AS65} and the $S$-matrix elements $S_{\ell}(\omega)$ were given by \eqref{Matrix_S}. \subsection{CAM representation of the scattering amplitude} Following the steps in \cite{Folacci:2019cmc}, we construct the CAM representation of $f(\omega,\theta)$ using a Sommerfeld-Watson transformation \cite{Watson18,Sommerfeld49,Newton:1982qc} \begin{equation} \label{SWT_gen} \sum_{\ell=0}^{+\infty} (-1)^\ell F(\ell)= \frac{i}{2} \int_{\cal C} d\lambda \, \frac{F(\lambda -1/2)}{\cos (\pi \lambda)} , \end{equation} where $F(\cdot)$ is any function without singularities on the real $\lambda$ axis. By means of \eqref{SWT_gen}, we replace the discrete sum over the ordinary angular momentum $\ell$ in \eqref{Scalar_Scattering_amp} with a contour integral in the complex $\lambda$ plane (i.e., in the complex $\ell$-plane with $\lambda = \ell +1/2$). By noting that $P_\ell (\cos \theta) =(-1)^\ell P_\ell (-\cos \theta)$, we obtain \begin{eqnarray} \label{SW_Scalar_Scattering_amp} & & f(\omega,\theta) = \frac{1}{2 \omega} \int_{\cal C} d\lambda \, \frac{\lambda} {\cos (\pi \lambda)} \nonumber \\ && \qquad\qquad \times \left[ S_{\lambda -1/2} (\omega) -1 \right]P_{\lambda -1/2} (-\cos \theta). \end{eqnarray} It should be noted that, in \eqref{SWT_gen} and \eqref{SW_Scalar_Scattering_amp}, the integration contour encircles counterclockwise the positive real axis of the complex $\lambda$-plane, and $P_{\lambda -1/2} (z)$ denotes the analytic extension of the Legendre polynomials $P_\ell (z)$, which is defined in terms of hypergeometric functions by~\cite{AS65} \begin{equation}\label{Def_ext_LegendreP} P_{\lambda -1/2} (z) = F[1/2-\lambda,1/2+\lambda;1;(1-z)/2].
\end{equation} Here, $S_{\lambda -1/2} (\omega)$ is given by [see \eqref{Matrix_S}] \begin{equation}\label{Matrix_S_CAM} S_{\lambda -1/2}(\omega) = e^{i(\lambda + 1/2)\pi} \, \frac{A_{\lambda -1/2}^{(+)}(\omega)}{A_{\lambda -1/2}^{(-)}(\omega)} \end{equation} and denotes ``the'' analytic extension of $S_\ell (\omega)$, where the complex amplitudes $A^{(\pm)}_{\lambda -1/2} (\omega)$ are defined from the analytic extension of the modes $\phi_{\omega \ell}$, i.e., from the function $\phi_{\omega ,\lambda -1/2}$. It is also important to recall that the Regge poles $\lambda_n(\omega)$ of $S_{\lambda-1/2}(\omega)$ lie in the first and third quadrants, symmetrically distributed with respect to the origin $O$, and are defined as the zeros of the coefficient $A^{(-)}_{\lambda-1/2} (\omega)$ [see \eqref{Matrix_S_CAM}] \begin{equation}\label{PR_def_Am} A^{(-)}_{\lambda_n(\omega)-1/2} (\omega)=0, \end{equation} with $n=1,2,3,\ldots$, and the associated residues at the poles $\lambda=\lambda_n(\omega)$ are defined by [see \eqref{Matrix_S_CAM}] \begin{equation}\label{residues_RP} r_n(\omega)=e^{i\pi [\lambda_n(\omega)+1/2]} \left[ \frac{A_{\lambda -1/2}^{(+)} (\omega)}{\frac{d}{d \lambda}A_{\lambda -1/2}^{(-)}(\omega)} \right]_{\lambda=\lambda_n(\omega)}.
\end{equation} In order to collect the Regge pole contributions, we deform the contour ${\cal C}$ in \eqref{SW_Scalar_Scattering_amp} while using the Cauchy theorem to obtain \begin{equation}\label{CAM_Scalar_Scattering_amp_tot} f (\omega, \theta) = f^\text{\tiny{B}} (\omega, \theta) + f^\text{\tiny{RP}} (\omega, \theta) \end{equation} where \begin{subequations}\label{CAM_Scalar_Scattering_amp_decomp} \begin{equation}\label{CAM_Scalar_Scattering_amp_decomp_Background} f^\text{\tiny{B}} (\omega, \theta) = f^\text{\tiny{B},\tiny{Re}} (\omega, \theta) + f^\text{\tiny{B},\tiny{Im}} (\omega, \theta) \end{equation} is a background integral contribution with \begin{equation}\label{CAM_Scalar_Scattering_amp_decomp_Background_a} f^\text{\tiny{B},\tiny{Re}} (\omega, \theta) = \frac{1}{\pi \omega} \int_{{\cal C}_{-}} d\lambda \, \lambda S_{\lambda -1/2}(\omega) Q_{\lambda -1/2}(\cos \theta +i0) \end{equation} and \begin{eqnarray}\label{CAM_Scalar_Scattering_amp_decomp_Background_b} f && ^\text{\tiny{B},\tiny{Im}} (\omega, \theta) = \frac{1}{2 \omega}\left(\int_{+i\infty}^{0} d\lambda \, \left[S_{\lambda -1/2}(\omega) P_{\lambda-1/2} (-\cos \theta) \right. \right. \nonumber \\ - && \left. \left. S_{-\lambda -1/2}(\omega) e^{i \pi \left(\lambda+1/2\right)}P_{\lambda-1/2} (\cos \theta) \right] \frac{\lambda}{\cos (\pi \lambda) } \right). \end{eqnarray} \end{subequations} The second term in \eqref{CAM_Scalar_Scattering_amp_tot} \begin{eqnarray} \label{CAM_Scalar_Scattering_amp_decomp_RP} & & f^\text{\tiny{RP}} (\omega, \theta) = -\frac{i \pi}{\omega} \sum_{n=1}^{+\infty} \frac{ \lambda_n(\omega) r_n(\omega)}{\cos[\pi \lambda_n(\omega)]} \nonumber \\ && \qquad\qquad \qquad\qquad \times P_{\lambda_n(\omega) -1/2} (-\cos \theta), \end{eqnarray} is a sum over the Regge poles lying in the first quadrant of the CAM plane. 
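As a numerical sanity check on the analytic extension \eqref{Def_ext_LegendreP} entering this Regge pole sum, one can verify that the hypergeometric expression reduces to the ordinary Legendre polynomials at half-integer $\lambda = \ell + 1/2$ (a minimal sketch with real arguments; for genuinely complex $\lambda_n(\omega)$ an arbitrary-precision routine such as mpmath's \texttt{hyp2f1} would be used instead):

```python
from scipy.special import hyp2f1, eval_legendre

def legendre_ext(lam, z):
    """P_{lam-1/2}(z) = 2F1(1/2 - lam, 1/2 + lam; 1; (1-z)/2)."""
    return hyp2f1(0.5 - lam, 0.5 + lam, 1.0, (1.0 - z) / 2.0)

# At lambda = ell + 1/2 the extension reduces to the Legendre polynomial P_ell.
for ell in range(6):
    assert abs(legendre_ext(ell + 0.5, 0.3) - eval_legendre(ell, 0.3)) < 1e-10
```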
Of course, the CAM representation of the scattering amplitude $f (\omega, \theta)$ for the scalar field given by \eqref{CAM_Scalar_Scattering_amp_tot} and \eqref{CAM_Scalar_Scattering_amp_decomp} is equivalent to the initial partial wave expansion \eqref{Scalar_Scattering_amp}. From this CAM representation, we extract the contribution $f^\text{\tiny{RP}}(\omega, \theta)$ given by \eqref{CAM_Scalar_Scattering_amp_decomp_RP}, which is only an approximation of $f(\omega, \theta)$, and which provides us with a corresponding approximation of the differential scattering cross section via (\ref{Scalar_Scattering_diff}). \subsection{Computational methods} In order to construct the scattering amplitude \eqref{Scalar_Scattering_amp} and the Regge pole contribution \eqref{CAM_Scalar_Scattering_amp_decomp_RP}, it is necessary first to obtain the function $\phi_{\omega\ell}^{\text{\tiny in}}(r)$, the coefficients $A_\ell^{(\pm)}(\omega)$, and the $S$-matrix elements $S_\ell(\omega)$ by solving \eqref{H_Radial_equation} with conditions \eqref{bc_in}, and second to compute the Regge poles $\lambda_n(\omega)$ of \eqref{PR_def_Am} and the associated residues \eqref{residues_RP}. To do this, we use the numerical methods of \cite{Folacci:2019cmc,Folacci:2019vtt} (see Secs.\ III B and IV A of these papers). It is important to note that the scattering amplitude \eqref{Scalar_Scattering_amp} suffers from a lack of convergence due to the long-range nature of the field propagating on the Schwarzschild spacetime (outside the thin shell); to accelerate the convergence of this sum, we have used the method described in the Appendix of \cite{Folacci:2019cmc}. All numerical calculations were performed using {\it Mathematica}. \subsection{Numerical results and comments: Scattering cross sections} \begin{figure*}[htp!]
\includegraphics[scale=0.50]{CAM_Sec_2Mw_6_PRs_Rs_4} \caption{\label{fig:CAM_Sec_2Mw_6_PRs_Rs_4} The scalar cross section of a DBH for $2M_\infty\omega=6$, $R_s=4M_{\text{\tiny BH}}$ and $M_\infty=1.5M_{\text{\tiny BH}}$ and its Regge pole approximation. The plots show the effect of including successively more Regge poles.} \end{figure*} \begin{figure*}[htp!] \includegraphics[scale=0.50]{CAM_Sec_2Mw_16_PRs_Rs_4} \caption{\label{fig:CAM_Sec_2Mw_16_PRs_Rs_4} The scalar cross section of a DBH for $2M_\infty\omega=16$, $R_s=4M_{\text{\tiny BH}}$ and $M_\infty=1.5M_{\text{\tiny BH}}$ and its Regge pole approximation. The plots show the effect of including successively more Regge poles.} \end{figure*} \begin{figure*}[htp!] \includegraphics[scale=0.50]{CAM_Sec_2Mw_16_Rs_5_PRs} \caption{\label{fig:CAM_Sec_2Mw_16_Rs_5_PRs} The scalar cross section of a DBH for $2M_\infty\omega=16$, $R_s=5M_{\text{\tiny BH}}$ and $M_\infty=1.5M_{\text{\tiny BH}}$ and its Regge pole approximation. The plots show the effect of including successively more Regge poles.} \end{figure*} \begin{figure}[htp!] \includegraphics[scale=0.50]{Sactt_Cross_section_Rs-4_Vs_Rs-5_2_5dot7_2Mw_16} \caption{\label{fig:Sactt_Cross_section_Rs-4_Vs_Rs-5_2_5dot7_2Mw_16} The scalar cross section of a DBH for $2M_\infty\omega=16$ and $M_\infty=1.5M_{\text{\tiny BH}}$. a) We compare $R_s=5M_{\text{\tiny BH}}$ to $R_s=4M_{\text{\tiny BH}}$. We can see that the scattering amplitude is enhanced by the rainbow effect for $R_s=5M_{\text{\tiny BH}}$ at the rainbow angle $\theta_r\approx 157.8^{\circ}$ (blue dot-dashed line). b) We compare $R_s=5M_{\text{\tiny BH}}$ to $R_s=5.7M_{\text{\tiny BH}}$. We can see that the scattering amplitude is enhanced at $\theta_r\approx 115.3^{\circ}$ (red line) for $R_s=5M_{\text{\tiny BH}}$ and at $\theta_r\approx 157.8^{\circ}$ (blue dot-dashed line).
}\label{fig:comparison_critical} \end{figure} We present in Figs.~\ref{fig:CAM_Sec_2Mw_6_PRs_Rs_4}, \ref{fig:CAM_Sec_2Mw_16_PRs_Rs_4} and \ref{fig:CAM_Sec_2Mw_16_Rs_5_PRs} various scattering cross sections constructed from the CAM approach, and compare them with results obtained from the partial wave expansion method. As in Sec.\ \ref{sec:resonances}, we focus on two configurations of the DBH: (i) a DBH with $3M_{\text{\tiny BH}} < R_s < 3M_\infty$, and (ii) a DBH with $R_s >3M_\infty$. Fig.\ \ref{fig:CAM_Sec_2Mw_6_PRs_Rs_4} shows the scattering cross section constructed from the CAM approach at $2M_\infty\omega =6$ for a shell configuration with $M_\infty=1.5 M_{\text{\tiny BH}}$ and $R_s=4M_{\text{\tiny BH}}$, i.e.\ for a DBH configuration with $3 M_{\text{\tiny BH}} < R_s< 3M_\infty$. We can see that it gets progressively closer to the scattering cross section constructed from the partial wave expansion as more and more Regge poles are included in the sum \eqref{CAM_Scalar_Scattering_amp_decomp_RP}. Indeed, the sum over only the first two Regge poles in equation~\eqref{CAM_Scalar_Scattering_amp_decomp_RP} does not reproduce the cross section; however, summing over $54$ Regge poles, even without including the background integral, gives perfect agreement with the cross section constructed from the partial wave expansion for $\theta \gtrsim 20^\circ $. Fig.\ \ref{fig:CAM_Sec_2Mw_16_PRs_Rs_4} illustrates, for the same DBH configuration, the scattering cross section at the higher frequency $2M_\infty\omega =16$. In this case, with just $18$ Regge poles the glory and the orbiting oscillations are very well reproduced. With $59$ Regge poles, and without adding the background integral, the result obtained from the CAM approach is again indistinguishable from the partial wave expansion for $\theta \gtrsim 30^\circ $ on the plot. Fig.\ \ref{fig:CAM_Sec_2Mw_16_Rs_5_PRs} shows the scattering cross section for a DBH configuration with $R_s> 3M_\infty$ at $2M_\infty\omega = 16$.
Here, we have the shell configuration with $M_\infty=1.5 M_{\text{\tiny BH}}$ and $R_s=5M_{\text{\tiny BH}}$. In this case, to construct the result from the CAM approach, we sum over three different branches; with $17$ Regge poles the glory as well as the orbiting oscillations are captured. By including $48$ Regge poles, but no background integral, we obtain excellent agreement with the partial wave expansion result for $\theta \gtrsim 30^\circ $. Through Figs.~\ref{fig:CAM_Sec_2Mw_6_PRs_Rs_4}, \ref{fig:CAM_Sec_2Mw_16_PRs_Rs_4} and \ref{fig:CAM_Sec_2Mw_16_Rs_5_PRs}, we have shown that the scattering cross section can be described and understood in terms of the Regge poles, i.e., in terms of various contributions from different types of resonances. In particular, we have identified different branches associated with the properties of the light-rings and the shell position and/or its matter content. To conclude our description of the differential scattering cross section, we return to the discussion of critical effects associated with geodesic motion described in Sec.\ \ref{sec:geodesics}. We can indeed understand qualitatively the observed scattering cross sections from the various critical effects. As in the case of the isolated black hole, the orbiting effect and the glory (associated with a single light-ring) will result in oscillations in the scattering cross section as well as an increase for $\theta \sim 180^\circ$. In DBH spacetimes, we have shown that we may expect modulations due to the rainbow effect and to grazing (creeping modes) or secondary orbiting. The rainbow effect will lead to a local amplification of the scattering cross section around the rainbow angle. Grazing, i.e., the edge rays (creeping modes), or secondary orbiting will lead to further oscillations and global modulations of the scattering cross section. This is illustrated in Fig.\ \ref{fig:comparison_critical}.
Note the qualitative difference: for configurations where the shell is outside the outer light-ring, the scattering cross section is mainly modulated around the rainbow angle, while for the case where the shell lies between the two light-rings the scattering cross section exhibits noticeable modulations at all angles. Isolating and quantitatively identifying the different contributions from each critical effect would require asymptotic formulae for the RPs and their residues, which is beyond the scope of this work. \section{Conclusion and discussion}\label{sec:conclusion} In this paper we have investigated the scattering of planar waves incident on a dirty black hole spacetime, that is, a black hole surrounded by a thin shell of matter. We have considered cases where the shell contributes substantially to the scattering, thereby complementing the analyses presented in \cite{Macedo:2015ikq,Leite:2019uql}. We have focused our attention on the spectrum of resonances of DBHs and calculated the Regge pole spectrum of different DBH configurations. This was done by extending the continued fraction method originally introduced by Leaver, and following the approach of Ould El Hadj et al.\ \cite{OuldElHadj:2019kji}. We have identified two key properties of the RP spectrum of DBHs: i) the existence of two branches and ii) the possible existence of a third branch with a nearly constant imaginary part. The first two branches are associated with the inner light-ring and with either the outer light-ring or the shell, depending on the DBH configuration. It appears that the two branches emerge from the original isolated black hole spectrum. The separation of the original branch into the two distinct ones appears first for large overtone numbers, while the two branches may overlap at low overtones. The separation is also more prominent for high frequencies and for DBH configurations where the shell contributes significantly.
The existence of the two branches results in modulations of the scattering cross section. Such modulations were not seen in previous studies due to the specific DBH configurations considered by the authors. In addition to these two branches, we have identified a third branch corresponding to broad resonances trapped between the inner light-ring and the thin shell. This interpretation was supported by a WKB analysis of the RP spectrum and is reminiscent of the resonances obtained for compact objects~\cite{Zhang:2011pq,OuldElHadj:2019kji,Volkel_2019}. Building on the identification of the RP spectrum, we have constructed the complex angular momentum representation of the scattering. We have shown that the RP spectrum and the associated residues accurately describe the scattering cross section, including the modulations and deviations from the case of an isolated black hole. The accurate reconstruction of the scattering cross section also confirms that the resonances have been correctly identified and further validates our results. Our study fits into the general goal of modelling and understanding black holes within their environments. We have considered here a toy model of a black hole surrounded by a static and spherically symmetric thin shell of matter. In order to account for even more realistic scenarios, one would need to account for asymmetric distributions of matter and take into account the dynamical evolution of the environment. The thin shell model provides a practical toy model for the latter, as the dynamics of the shell can be found exactly. It would be an interesting extension of our work to investigate the resonances of dynamical dirty black holes. Indeed, the literature on resonances of time-dependent black holes is scarce, with only numerical results which still lack a complete theoretical description~\cite{Abdalla:2006vb,Chirenti:2010iu}. We note the interesting recent developments in this direction~\cite{Bamber:2021knr,Lin:2021fth}.
Another interesting avenue for exploring the impact of the environment and its dynamics on the resonance spectrum is to mimic gravitational effects using condensed matter platforms. This field of research, known as analogue gravity~\cite{Unruh:1980cg}, has allowed us to observe and better understand several effects predicted by field theory in curved spacetimes. Recently, analogue simulators have investigated experimentally the ringdown phase of analogue black holes~\cite{Torres:2020tzs}, and more experiments are being developed within the Quantum Simulators for Fundamental Physics research programme (https://www.qsimfp.org/). In analogue experiments, environmental and dynamical effects will inherently be present, and accurate modelling of their impact will be necessary. These simulators will provide valuable insights to build the necessary tools to describe dynamical dirty black holes. \section*{Acknowledgments} For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising. There is no additional data associated with this article. This work was supported in part by the STFC Quantum Technology Grants ST/T005858/1 (RG \& TT), STFC Consolidated Grant ST/P000371/1 (RG). SH is supported by King's College London through a KCSC Scholarship. RG also acknowledges support from the Perimeter Institute. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Research, Innovation and Science.
\chapter{Introduction} \epigraph{$\mlq$\textit{Qui est-ce ? Ah, tr\`{e}s bien, faites entrer l'infini.}$\mrq$}{Aragon} \section{Motivation} \subsection{Periods} A \textbf{\textit{period}} \footnote{For an enlightening survey, see the reference article $\cite{KZ}$.} denotes a complex number that can be expressed as an integral of an algebraic function over an algebraic domain.\footnote{We can equivalently restrict to integrals of rational functions over a domain in $\mathbb{R}^{n}$ given by polynomial inequalities with rational coefficients, by introducing more variables.} They form the algebra of periods $\mathcal{P}$\nomenclature{$\mathcal{P}$}{the algebra of periods, resp. $\widehat{\mathcal{P}}$ of extended periods}, a fundamental class of numbers lying between the algebraic numbers $\overline{\mathbb{Q}}$ and the complex numbers.\\ The study of these integrals underlies a large part of algebraic geometry and its connection with number theory, notably via L-functions \footnote{One can associate an L-function to many arithmetic objects such as a number field, a modular form, an elliptic curve, or a Galois representation.
It encodes its properties, and has wonderful (often conjectural) meromorphic continuation, functional equations, special values, and non-trivial zeros (Riemann hypothesis).}; and many of the constants which arise in mathematics, transcendental number theory or physics turn out to be periods, which motivates the study of these particular numbers.\\ \\ \texttt{Examples:} \begin{itemize} \item[$\cdot$] The following numbers are periods: $$\sqrt{2}= \int_{2x^{2} \leq 1} dx \text{ , } \quad \pi= \int_{x^{2}+y^{2} \leq 1} dx dy \quad\text{ and } \quad \log(z)=\int_{1}^{z} \frac{dx}{x}, z>1, z\in \overline{\mathbb{Q}}.$$ \item[$\cdot$] Famous (conjecturally transcendental) numbers which are expected not to be periods: $$e =\lim_{n \rightarrow \infty} \left( 1+ \frac{1}{n} \right)^{n} \text{ , } \quad \gamma=\lim_{n \rightarrow \infty} \left( -\ln(n) + \sum_{k=1}^{n} \frac{1}{k} \right) \text{ or } \quad \frac{1}{\pi}.$$ It can be more useful to consider the ring of extended periods, obtained by inverting $\pi$: $$\widehat{\mathcal{P}}\mathrel{\mathop:}=\mathcal{P} \left[ \frac{1}{\pi} \right].$$ \item[$\cdot$] Multiple polylogarithms at algebraic arguments (in particular cyclotomic multiple zeta values), by their representation as iterated integrals given below, are periods. Similarly, special values of the Dedekind zeta function $\zeta_{F}(s)$ of a number field, of L-functions, of hypergeometric series, of modular forms, etc. are (conjecturally at least) periods or extended periods. \item[$\cdot$] Periods also appear as Feynman integrals: Feynman amplitudes $I(D)$ can be written as products of Gamma functions and meromorphic functions whose Laurent series coefficients at any integer $D$ are periods (cf. $\cite{BB}$), where $D$ is the dimension of spacetime. \end{itemize} Although most periods are transcendental, they are constructible; hence, the algebra $\mathcal{P}$ is countable, and any period contains only a finite amount of information.
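The first examples above can be checked directly by numerical quadrature; the sketch below evaluates each defining integral with a generic routine and compares with the expected value:

```python
import numpy as np
from scipy.integrate import quad

# sqrt(2) as the length of the domain {2x^2 <= 1} = [-1/sqrt(2), 1/sqrt(2)].
sqrt2, _ = quad(lambda x: 1.0, -1.0 / np.sqrt(2.0), 1.0 / np.sqrt(2.0))

# pi as the area of the unit disc {x^2 + y^2 <= 1}, written as a 1-D integral.
pi_val, _ = quad(lambda x: 2.0 * np.sqrt(1.0 - x * x), -1.0, 1.0)

# log(2) as the integral of dx/x from 1 to 2.
log2, _ = quad(lambda x: 1.0 / x, 1.0, 2.0)
```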
Conjecturally (by Grothendieck's conjecture), the only relations between periods come from the following rules of elementary integral calculus\footnote{However, finding an algorithm to determine if a real number is a period, or if two periods are equal, seems currently out of reach; whereas checking if a number is algebraic, or if two algebraic numbers are equal, is rather \say{easy} (with \say{LLL}-type reduction algorithms, resp. by calculating the g.c.d.\ of two vanishing polynomials associated to each).}: \begin{itemize} \item[$(i)$] Additivity (of the integrand and of the integration domain) \item[$(ii)$] Invertible changes of variables \item[$(iii)$] Stokes's formula.\\ \end{itemize} Another way of viewing a period $ \boldsymbol{\int_{\gamma} \omega }$ is via a comparison between two cohomology theories: the algebraic \textit{De Rham} cohomology, and the singular (\textit{Betti}) cohomology. More precisely, let $X$ be a smooth algebraic variety defined over $\mathbb{Q}$ and $Y$ a closed subvariety over $\mathbb{Q}$. \begin{itemize} \item[$\cdot$] On the one hand, the \textit{algebraic} De Rham cohomology $H^{\bullet}_{dR}(X)$\nomenclature{$H^{\bullet}_{dR}$}{the \textit{algebraic} De Rham cohomology} is the hypercohomology of the sheaf of algebraic (K{\"a}hler) differentials on $X$. If $X$ is affine, it is defined from the de Rham complex $\Omega^{\bullet}(X)$, which is the cochain complex of global algebraic (K{\"a}hler) differential forms on $X$, with the exterior derivative as differential. Recall that the \textit{classical} $k^{\text{th}}$ de Rham cohomology group is the quotient of the smooth closed $k$-forms on the manifold X$_{\diagup \mathbb{C}}$ modulo the exact $k$-forms on $X$.\\ Given $\omega$ a closed algebraic $n$-form on $X$ whose restriction to $Y$ is zero, it defines an equivalence class $[\omega]$ in the relative de Rham cohomology group $H^{n}_{dR}(X,Y)$, which is a finite-dimensional $\mathbb{Q}$-vector space.
\item[$\cdot$] On the other hand, the Betti homology $H_{\bullet}^{B}(X)$\nomenclature{$H_{\bullet}^{B}(X)$}{the Betti homology} is the homology of the chain complex induced by the boundary operation of singular chains on the manifold $X(\mathbb{C})$; Betti cohomology groups $H_{B}^{n}(X,Y)= H^{B}_{n}(X,Y)^{\vee}$ are the dual $\mathbb{Q}$-vector spaces (taking here coefficients in $\mathbb{Q}$, not $\mathbb{Z}$).\\ Given $\gamma$ a singular $n$-chain on $X(\mathbb{C}) $ with boundary in $ Y(\mathbb{C})$, it defines an equivalence class $[\gamma]$ in the relative Betti homology groups $ H_{n}^{B}(X,Y)=H^{n}_{B}(X,Y)^{\vee}$.\footnote{Relative homology can be calculated using the following long exact sequence: $$ \cdots \rightarrow H_{n}(Y) \rightarrow H_{n}(X) \rightarrow H_{n}(X,Y) \rightarrow H_{n-1} (Y) \rightarrow \cdots.$$ } \end{itemize} Furthermore, there is a comparison isomorphism\nomenclature{$\text{comp}_{B,dR}$}{the comparison isomorphism between de Rham and Betti cohomology} between relative de Rham and relative Betti cohomology (due to Grothendieck, coming from the integration of algebraic differential forms on singular chains): $$\text{comp}_{B,dR}: H^{\bullet}_{dR} (X,Y) \otimes_{\mathbb{Q}} \mathbb{C} \rightarrow H^{\bullet}_{B} (X,Y) \otimes_{\mathbb{Q}} \mathbb{C}.$$ By pairing a basis of Betti homology with a basis of de Rham cohomology, we obtain the \textit{matrix of periods}, which is a square matrix with entries in $\mathcal{P}$ and determinant in $\sqrt{\mathbb{Q}^{\ast}}(2i\pi)^{\mathbb{N}^{\ast}}$\nomenclature{$\mathbb{N}^{\ast}$}{the set of positive integers, $\mathbb{N}:=\mathbb{N}^{\ast}\cup\lbrace 0\rbrace$ }; i.e. its inverse matrix has its coefficients in $\widehat{\mathcal{P}}$. Then, up to the choice of these two bases: \begin{framed} The period $ \int_{\gamma} \omega $ is the coefficient of this pairing $\langle [\gamma], \text{comp}_{B,dR}([\omega]) \rangle$.
\end{framed} \noindent \texttt{Example}: Let $X=\mathbb{P}^{1}\diagdown \lbrace 0, \infty \rbrace$, $Y=\emptyset$ and $\gamma_{0}$ the counterclockwise loop around $0$: $$H^{B}_{i} (X)= \left\lbrace \begin{array}{ll} \mathbb{Q} & \text{ if } i=0 \\ \mathbb{Q}\left[\gamma_{0}\right] & \text{ if } i=1 \\ 0 & \text{ else }. \end{array} \right. \quad \text{ and } \quad H^{i}_{dR} (X)= \left\lbrace \begin{array}{ll} \mathbb{Q} & \text{ if } i=0 \\ \mathbb{Q}\left[ \frac{dx}{x}\right] & \text{ if } i=1 \\ 0 & \text{ else }. \end{array} \right. $$ Since $\int_{\gamma_{0}} \frac{dx}{x}= 2i\pi$, $2i\pi$ is a period; as we will see below, it is a period of the Lefschetz motive $\mathbb{L}\mathrel{\mathop:}=\mathbb{Q}(-1)$.\\ \\ Viewing periods from this cohomological point of view naturally leads to the definition of \textbf{\textit{motivic periods}} given below \footnote{The definition of a motivic period is given in $\S 2.4$ in the context of a category of Mixed Tate Motives. In general, one can do with Hodge theory to define $ \mathcal{P}^{\mathfrak{m}}$, which is not strictly speaking \textit{motivic}, once we specify that the mixed Hodge structures considered come from the cohomology of algebraic varieties.}, which form an algebra $\mathcal{P}^{\mathfrak{m}}$, equipped with a period homomorphism: $$\text{per}: \mathcal{P}^{\mathfrak{m}} \rightarrow \mathcal{P}.$$ A variant of Grothendieck's conjecture, which is a presently inaccessible conjecture in transcendental number theory, predicts that it is an isomorphism.\\ There is an action of a so-called motivic Galois group $\mathcal{G}$ on these motivic periods, as we will see below in $\S 2.1$. If Grothendieck's period conjecture holds, this would hence extend the usual Galois theory for algebraic numbers to periods (cf. $\cite{An2}$).
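The period $2i\pi$ of this example can be recovered numerically by integrating $\frac{dx}{x}$ along the loop $\gamma_{0}$, parametrized by $x = e^{it}$, $t \in [0, 2\pi]$ (a minimal sketch):

```python
import numpy as np

# Integrate dx/x along the counterclockwise unit circle: x = exp(it), dx = i x dt,
# so the integrand dx/x reduces to the constant i along the loop.
t = np.linspace(0.0, 2.0 * np.pi, 1001)
x = np.exp(1j * t)
integrand = (1.0 / x) * (1j * x)
period = np.sum(integrand[:-1] * np.diff(t))  # left Riemann sum, exact here
```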
\\ In this thesis, we will focus on motivic (cyclotomic) multiple zeta values, defined in $\S 2.3$, which are motivic periods of the motivic (cyclotomic) fundamental group, defined in $\S 2.2$. Their images under this period morphism are the (cyclotomic) multiple zeta values; these are fascinating examples of periods, which are introduced in the next section (see also $\cite{An3}$). $$ \quad $$ \subsection{Multiple zeta values} The Zeta function has been known at least since Euler, and finds itself nowadays, in its various generalized forms (multiple zeta values, polylogarithms, the Dedekind zeta function, L-functions, etc.), at the crossroads of many different fields, such as algebraic geometry (with periods and motives), number theory (notably with modular forms), topology, perturbative quantum field theory (with Feynman diagrams, cf. $\cite{Kr}$), string theory, etc. Zeta values at even integers have been known since Euler to be rational multiples of even powers of $\pi$: \begin{lemme} $$\text{For } n \geq 1, \quad \zeta(2n)=\frac{\mid B_{2n}\mid (2\pi)^{2n}}{2(2n)!}, \text{ where } B_{2n} \text{ is the } 2n^{\text{th}} \text { Bernoulli number.} $$ \end{lemme} However, the zeta values at odd integers already turn out to be quite interesting periods: \begin{conje} $\pi, \zeta(3), \zeta(5), \zeta(7),\cdots $ are algebraically independent. \end{conje} This conjecture raises difficult transcendence questions, rather out of reach; currently we only know that $\zeta(3)\notin \mathbb{Q}$ (Ap\'{e}ry), that infinitely many odd zeta values are irrational (Rivoal), and other quite partial results (Zudilin, Rivoal, etc.); recently, F.
Brown paved the way for a pursuit of these results in $\cite{Br5}$.\\ \\ \textbf{Multiple zeta values relative to the $\boldsymbol{N^{\text{th}}}$ roots of unity} $\mu_{N}$, \nomenclature{$\mu_{N}$}{$N^{\text{th}}$ roots of unity}which we shall denote by \textbf{MZV}$\boldsymbol{_{\mu_{N}}}$\nomenclature{\textbf{MZV}$\boldsymbol{_{\mu_{N}}}$}{Multiple zeta values relative to the $\boldsymbol{N^{\text{th}}}$ roots of unity, denoted $\zeta()$} are defined by: \footnote{Beware, there is no consensus on the order for the arguments of these MZV: sometimes the summation order is reversed.} \begin{framed} \begin{equation}\label{eq:mzv} \text{ } \zeta\left(n_{1}, \ldots , n_{p} \atop \epsilon_{1} , \ldots ,\epsilon_{p} \right)\mathrel{\mathop:}= \sum_{0<k_{1}<k_{2}< \cdots <k_{p}} \frac{\epsilon_{1}^{k_{1}} \cdots \epsilon_{p}^{k_{p}}}{k_{1}^{n_{1}} \cdots k_{p}^{n_{p}}} \text{, } \epsilon_{i}\in \mu_{N} \text{, } n_{i}\in\mathbb{N}^{\ast}\text{, } (n_{p},\epsilon_{p})\neq (1,1). \end{equation} \end{framed} The \textit{weight}, often denoted $w$ below, is defined as $\sum n_{i}$, the \textit{depth} is the length $p$, whereas the \textit{height}, usually denoted $h$, is the number of $n_{i}$ greater than $1$. The weight is conjecturally a grading, whereas the depth is only a filtration. Denote also by $\boldsymbol{\mathcal{Z}^{N}}$\nomenclature{$\mathcal{Z}^{N}$}{the $\mathbb{Q}$-vector space spanned by the multiple zeta values relative to $\mu_{N}$} the $\mathbb{Q}$-vector space spanned by these multiple zeta values relative to $\mu_{N}$.\\ These MZV$_{\mu_{N}}$ satisfy both the \textit{shuffle} relation $\shuffle$ (coming from the integral representation below) and the \textit{stuffle} relation $\ast$ (coming from this sum expression), which turn $\mathcal{Z}^{N}$ into an algebra. These relations, for $N=1$, are conjectured to generate all the relations between MZV if we add the so-called Hoffman (regularized double shuffle) relation; cf.
\cite{Ca}, \cite{Wa} for a good introduction to this aspect. However, the literature is full of other relations among these (cyclotomic) multiple zeta values: cf. $\cite{AO},\cite{EF}, \cite{OW}, \cite{OZa}, \cite{O1}, \cite{Zh2}, \cite{Zh3}$. Among these, we shall require the so-called \textit{pentagon} resp. \textit{hexagon} relations (for $N=1$, cf. $\cite{Fu}$), coming from the geometry of the moduli space of genus $0$ curves with $5$ ordered marked points $X=\mathcal{M}_{0,5}$ resp. with $4$ marked points $X=\mathcal{M}_{0,4}=\mathbb{P}^{1}\diagdown\lbrace 0,1, \infty\rbrace$, and corresponding to a contractible path in $X$; the hexagon relation (cf. Figure $\ref{fig:hexagon}$) turns into an \textit{octagon} relation (cf. Figure $\ref{fig:octagon}$) for $N>1$ (cf. $\cite{EF}$) and is used below in $\S 4.2$.\\ \\ One crucial point about multiple zeta values is their \textit{integral representation}\footnote{Obtained by differentiating, considering their arguments as variables $z_{i}\in\mathbb{C}$, since: $$\frac{d}{dz_{p}} \zeta \left( n_{1}, \ldots, n_{p} \atop z_{1}, \ldots, z_{p-1}, z_{p}\right) = \left\lbrace \begin{array}{ll} \frac{1}{z_{p}} \zeta \left( n_{1}, \ldots, n_{p}-1 \atop z_{1}, \ldots, z_{p-1}, z_{p}\right) & \text{ if } n_{p}\neq 1\\ \frac{1}{1-z_{p}} \zeta \left( n_{1}, \ldots, n_{p-1} \atop z_{1}, \ldots, z_{p-2}, z_{p-1} z_{p}\right) & \text{ if } n_{p}=1. \end{array} \right. $$}, which makes them clearly \textit{periods} in the sense of Kontsevich-Zagier.
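As a numerical aside (not from the thesis), both Euler's evaluation of $\zeta(2n)$ and the stuffle product can be checked on truncated sums, using the summation convention of $(\ref{eq:mzv})$ (inner index first); note that the stuffle identity $\zeta(2)\zeta(3)=\zeta(2,3)+\zeta(3,2)+\zeta(5)$ holds exactly for sums truncated at a common bound $K$, since it is just a partition of the index set:

```python
import math

def zeta(n, K=100_000):
    """Truncated single zeta value: sum_{0 < k <= K} 1/k^n."""
    return sum(1.0 / k ** n for k in range(1, K + 1))

def zeta2(n1, n2, K=2000):
    """Truncated double zeta value, convention 0 < k1 < k2 <= K:
    zeta(n1, n2) = sum 1/(k1^n1 * k2^n2)."""
    return sum(1.0 / (k1 ** n1 * k2 ** n2)
               for k2 in range(2, K + 1) for k1 in range(1, k2))

# Euler: zeta(2) = pi^2/6 and zeta(4) = pi^4/90 (B_2 = 1/6, B_4 = -1/30).
assert abs(zeta(2) - math.pi ** 2 / 6) < 1e-4
assert abs(zeta(4) - math.pi ** 4 / 90) < 1e-8

# Stuffle: splitting {(m, n)} into {m < n}, {n < m}, {m = n}.
K = 2000
lhs = zeta(2, K) * zeta(3, K)
rhs = zeta2(2, 3, K) + zeta2(3, 2, K) + zeta(5, K)
assert abs(lhs - rhs) < 1e-9
```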
Let us define first the following iterated integrals and differential forms, with $a_{i}\in \mu_{N} \cup \lbrace 0 \rbrace$:\nomenclature{$I(0; a_{1}, \ldots , a_{n} ;1)$}{particular iterated integrals, with $a_{i}\in \mu_{N} \cup \lbrace 0 \rbrace$} \begin{equation}\label{eq:iterinteg} \boldsymbol{I(0; a_{1}, \ldots , a_{n} ;1)}\mathrel{\mathop:}= \int_{0< t_{1} < \cdots < t_{n} < 1} \frac{dt_{1} \cdots dt_{n}}{ (t_{1}-a_{1}) \cdots (t_{n}-a_{n}) }=\int_{0}^{1} \omega_{a_{1}} \ldots \omega_{a_{n}} \text{, with } \omega_{a}\mathrel{\mathop:}=\frac{dt}{t-a}. \end{equation}\nomenclature{$\omega_{a}$}{$\mathrel{\mathop:}=\frac{dt}{t-a}$, differential form.} In this setting, with $\eta_{i}\mathrel{\mathop:}= (\epsilon_{i}\cdots \epsilon_{p})^{-1}\in\mu_{N}$, $n_{i}\in\mathbb{N}^{\ast}$\footnote{The use of bold in the iterated integral indicates a repetition of the corresponding number, here $0$.}: \begin{framed} \begin{equation}\label{eq:reprinteg} \zeta \left({ n_{1}, \ldots , n_{p} \atop \epsilon_{1} , \ldots ,\epsilon_{p} }\right) = (-1)^{p} I(0; \eta_{1}, \boldsymbol{0}^{n_{1}-1}, \eta_{2},\boldsymbol{0}^{n_{2}-1}, \ldots, \eta_{p}, \boldsymbol{0}^{n_{p}-1} ;1). \end{equation} \end{framed} \nomenclature{$\epsilon_{i}$, $\eta_{i}$}{\textit{(Usually)} The roots of unity corresponding to the MZV resp. to the iterated integral, i.e. $\eta_{i}\mathrel{\mathop:}= (\epsilon_{i}\cdots \epsilon_{p})^{-1}$.} \textsc{Remarks}: \begin{itemize} \item[$\cdot$] Multiple zeta values can be seen as special values of generalized multiple polylogarithms, when the $\epsilon_{i}$ are considered in $\mathbb{C}$\footnote{The series is absolutely convergent for $ \mid \epsilon_{i}\mid <1$, and converges also for $\mid \epsilon_{i}\mid =1$ if $n_{p} >1$. Cf. \cite{Os} for an introduction.}. First, notice that in weight $1$, $Li_{1}(z)\mathrel{\mathop:}=\zeta\left( 1\atop z\right) $ is the logarithm $-\log (1-z)$.
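As a sanity check (a numerical aside, under the conventions of $(\ref{eq:reprinteg})$), for $p=1$, $n_{1}=2$, $\epsilon_{1}=1$ one gets $\zeta(2) = -I(0;1,0;1)=\int_{0<t_{1}<t_{2}<1} \frac{dt_{1}}{1-t_{1}} \frac{dt_{2}}{t_{2}}$; performing the inner integral in closed form and evaluating the outer one numerically:

```python
import math

def zeta2_from_integral(n=200_000):
    """Evaluates -I(0; 1, 0; 1), the integral over 0 < t1 < t2 < 1 of
    dt1 dt2 / ((1 - t1) t2).  The inner dt1 integral equals
    -log(1 - t2), so the midpoint rule is applied to the outer one."""
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h            # midpoint of the k-th cell
        total += -math.log(1.0 - t) / t * h
    return total

print(zeta2_from_integral())  # approximately pi^2 / 6 = zeta(2)
```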
Already the dilogarithm, in weight $2$, $Li_{2}(z)\mathrel{\mathop:}=\zeta \left( 2\atop z\right) =\sum_{k>0} \frac{z^{k}}{k^{2}}$, satisfies nice functional equations\footnote{Such as the functional equations relating it to $Li_{2}\left( \frac{1}{z}\right) $ or $Li_{2}\left( 1-z \right) $, or the famous five-term relation for its sibling, the Bloch-Wigner function $D(z)\mathrel{\mathop:}= Im\left( Li_{2}(z) +\log(\mid z\mid) \log(1-z) \right)$: $$D(x)+D(y)+ D\left( \frac{1-x}{1-xy}\right) + D\left( 1-xy\right)+ D\left( \frac{1-y}{1-xy}\right)=0. $$} and arises in many places, such as in the Dedekind zeta value $\zeta_{F}(2)$ for $F$ an imaginary quadratic field, in the Borel regulator in algebraic K-theory, in the volume of hyperbolic manifolds, etc.; cf. $\cite{GZ}$; some of these connections can be generalized to higher weights. \item[$\cdot$] Recall that an iterated integral of closed (real or complex) differential $1$-forms $\omega_{i}$ along a path $\gamma$ on a $1$-dimensional (real or complex) differential manifold $M$ is homotopy invariant, cf. $\cite{Ch}$. If $M=\mathbb{C}\diagdown \lbrace a_{1}, \ldots , a_{N}\rbrace$ \footnote{As for cyclotomic MZV, with $a_{i}\in \mu_{N}\cup\lbrace 0\rbrace$; such an $I=\int_{\gamma}\omega_{1}\ldots \omega_{n}$ is a multivalued function on $M$.} and the $\omega_{i}$ are meromorphic closed $1$-forms, with at most simple poles at the $a_{i}$, and $\gamma(0)=a_{1}$, the iterated integral $I=\int_{\gamma}\omega_{1}\cdots \omega_{n}$ is divergent. Since the divergence is polynomial in $\log \epsilon$ ($\epsilon \ll 1$) \footnote{More precisely, we can prove that $\int_{\epsilon}^{1} \gamma^{\ast} (\omega_{1}) \cdots \gamma^{\ast} (\omega_{n}) = \sum_{i=0} \alpha_{i} (\epsilon) \log^{i} (\epsilon)$, with $\alpha_{i} (\epsilon)$ holomorphic in $\epsilon=0$; $\alpha_{0} (\epsilon) $ depends only on $\gamma'(0)$.}, we define the iterated integral $I$ as the constant term, which only depends on $\gamma'(0)$.
This process is called \textit{regularization}; we need to choose tangential base points to properly define the integral. Later, we will consider the straight path $dch$ from $0$ to $1$, with tangential base point $\overrightarrow{1}$ at $0$ and $\overrightarrow{-1}$ at $1$, denoted also $\overrightarrow{1}_{0}, \overrightarrow{-1}_{1}$ or simply $\overrightarrow{01}$ for both. \end{itemize} \texttt{Notations}: In the case of \textit{multiple zeta values} (i.e. $N=1$) resp. of \textit{Euler sums} (i.e. $N=2$), since $\epsilon_{i}\in \left\{\pm 1\right\}$, the notation is simplified, using $ z_{i}\in \mathbb{Z}^{\ast}$:\nomenclature{ES}{Euler sums, i.e. multiple zeta values associated to $\mu_{2}=\lbrace\pm 1\rbrace$} \begin{equation}\label{eq:notation2} \zeta\left(z_{1}, \ldots, z_{p} \right) \mathrel{\mathop:}= \zeta\left(n_{1}, \ldots , n_{p} \atop \epsilon_{1} , \ldots ,\epsilon_{p} \right)\text{ with } \left( n_{i} \atop \epsilon_{i} \right)\mathrel{\mathop:}=\left( \mid z_{i} \mid \atop sign(z_{i} ) \right) . \end{equation} Another common notation in the literature is the use of \textit{overlines} instead of negative arguments, i.e.: $ z_{i}\mathrel{\mathop:}=\left\lbrace \begin{array}{ll} n_{i} &\text{ if } \epsilon_{i}=1\\ \overline{n_{i}} &\text{ if } \epsilon_{i}=-1 \end{array} \right. .$\nomenclature{$\overline{n}$}{another notation to denote a negative argument in the Euler sums: when the corresponding root of unity is $\epsilon=-1$.} \section{Contents} In this thesis, we mainly consider the \textit{\textbf{motivic}} versions of these multiple zeta values, denoted $\boldsymbol{\zeta^{\mathfrak{m}}}(\cdot)$, shortened \textbf{MMZV}$\boldsymbol{_{\mu_{N}}}$, and defined in $\S 2.3$\nomenclature{\textbf{MMZV}$\boldsymbol{_{\mu_{N}}}$}{the motivic multiple zeta values relative to $\mu_{N}$, denoted $\zeta^{\mathfrak{m}}(\ldots)$, defined in $\S 2.3$}.
They span a $\mathbb{Q}$-vector space $\boldsymbol{\mathcal{H}^{N}}$ of motivic multiple zetas relative to $\mu_{N}$\nomenclature{$\mathcal{H}^{N}$}{the $\mathbb{Q}$-vector space $\boldsymbol{\mathcal{H}^{N}}$ of motivic multiple zetas relative to $\mu_{N}$}. There is a surjective homomorphism, called the \textit{period map}, which is conjectured to be an isomorphism (this is a special case of the period conjecture)\nomenclature{per}{the surjective period map, conjectured to be an isomorphism.}: \begin{equation}\label{eq:per} \textbf{per} : \quad \mathcal{H}^{N} \rightarrow \mathcal{Z}^{N} \text{ , } \quad \zeta^{\mathfrak{m}} (\cdot) \mapsto \zeta ( \cdot ). \end{equation} Working on the motivic side, besides being conjecturally identical to working with the complex numbers, turns out to be somewhat simpler, since motivic theory provides a Hopf algebra structure, as we will see throughout this thesis. Notably, each identity between motivic MZV$_{\mu_{N}}$ implies an identity for their periods; a motivic basis for MMZV$_{\mu_{N}}$ is hence a generating family (conjecturally a basis) for MZV$_{\mu_{N}}$. Indeed, on the side of motivic multiple zeta values, there is an action of a \textit{motivic Galois group} $\boldsymbol{\mathcal{G}}$\footnote{Later, we will define a category of Mixed Tate Motives, which will be a tannakian category: consequently equivalent to a category of representations of a group $\mathcal{G}$; cf. $\S 2.1$. }, which, passing to the dual, factorizes through a \textit{coaction} $\boldsymbol{\Delta}$ as we will see in $\S 2.4$. This coaction, which is given by an explicit combinatorial formula (Theorem $(\ref{eq:coaction})$, [Goncharov, Brown]), is the keystone of this PhD. In particular, it enables us to prove linear independence of MMZV, as in the theorem stated below (instead of adding yet another identity to the existing zoo of relations between MZV), and to study Galois descents. From this, we deduce results about numbers by applying the period map.
\\ \\ This thesis is structured as follows: \begin{description} \item[Chapter $2$] sketches the background necessary to understand this work, from Mixed Tate Motives to the Hopf algebra of motivic multiple zeta values at $\mu_{N}$, with some specifics according to the value of $N$, and results used throughout the rest of this work. The combinatorial expression of the coaction (or of the weight graded \textit{derivation operators} $D_{r}$ extracted from it, $(\ref{eq:Der})$) is the cornerstone of this work. We shall also bear in mind Theorem $2.4.4$ (stating which elements are in the kernel of these derivations), which sometimes allows us to lift identities from MZV to motivic MZV, up to rational coefficients, as we will see throughout this work.\\ \texttt{Nota Bene}: A \textit{motivic relation} is indeed stronger; it may hence require several relations between MZV in order to lift an identity to motivic MZV. An example of such a behaviour occurs with some Hoffman $\star$ elements, in Lemma $\ref{lemmcoeff}$.\\ \item[Chapter $3$] explains the main results of this PhD, ending with a wider perspective and possible future paths. \item[Chapter $4$] focuses on the cases $N=1$, i.e. multiple zeta values, and $N=2$, i.e. Euler sums, providing some new bases: \begin{itemize} \item[$(i)$] First, we introduce \textit{Euler $\sharp$ sums}, variants of Euler sums, defined in $\S 2.3$ as in $(\ref{eq:reprinteg})$, replacing each $\omega_{\pm 1}$ by $\omega_{\pm \sharp}\mathrel{\mathop:}=2 \omega_{\pm 1}-\omega_{0}$, except for the first one, and prove: \begin{theom} Motivic Euler $\sharp$ sums with only positive \textit{odd} and negative \textit{even} integers as arguments are unramified, i.e. motivic multiple zeta values. \end{theom} By application of the period map above: \begin{corol} Each Euler $\sharp$ sum with only positive \textit{odd} and negative \textit{even} integers as arguments is unramified, i.e. a $\mathbb{Q}$-linear combination of multiple zeta values.
\end{corol} Moreover, we can extract a basis from this family: \begin{theom} $ \lbrace \zeta^{\sharp, \mathfrak{m}} \left( 2a_{0}+1, 2a_{1}+3, \ldots, 2a_{p-1}+3 , -(2 a_{p}+2)\right) , a_{i} \geq 0 \rbrace$ is a graded basis of the space of motivic multiple zeta values. \end{theom} By application of the period map: \begin{corol} Each multiple zeta value is a $\mathbb{Q}$-linear combination of elements of the same weight in $\lbrace \zeta^{ \sharp} \left( 2a_{0}+1, 2a_{1}+3, \ldots, 2a_{p-1}+3 , -(2 a_{p}+2)\right) , a_{i} \geq 0 \rbrace$. \end{corol} \item[$(ii)$] We also prove the following, where Euler $\star$ sums are defined (cf. $\S 2.3$) as in $(\ref{eq:reprinteg})$, replacing each $\omega_{\pm 1}$ by $\omega_{\pm \star}\mathrel{\mathop:}=\omega_{\pm 1}-\omega_{0}$, except the first one: \begin{theom} If the analytic conjecture ($\ref{conjcoeff}$) holds, then the motivic \textit{Hoffman} $\star$ family $\lbrace \zeta^{\star,\mathfrak{m}} (\lbrace 2,3 \rbrace^{\times})\rbrace$ is a basis of $\mathcal{H}^{1}$, the space of MMZV. \end{theom} \item[$(iii)$] Conjecturally, the two previous bases, namely the Hoffman$^{\star}$ family and the Euler$^{\sharp}$ family, are the same. Indeed, we conjecture a generalized motivic Linebarger-Zhao equality (Conjecture $\ref{lzg}$) which expresses each motivic multiple zeta $\star$ value as a motivic Euler $\sharp$ sum. It extends the Two-One formula [Ohno-Zudilin], the Three-One formula [Zagier], and the Linebarger-Zhao formula, and applies to motivic MZV. If this conjecture holds, then $(i)$ implies that the Hoffman$^{\star}$ family is a basis.
\end{itemize} Such results on the linear independence of a family of motivic MZV are proved recursively, once we have found the \textit{appropriate level} filtration on the elements; ideally, the family considered is stable under the derivations \footnote{If the family is not \textit{a priori} \textit{stable} under the coaction, we need to incorporate in the recursion a hypothesis on the coefficients which appear when we express the right-hand side in terms of the elements of the family.}; the filtration, as we will see below, should correspond to the \textit{motivic depth} defined in $\S 2.4.3$, and decrease under the derivations \footnote{In the case of the Hoffman basis ($\cite{Br2}$) or the Hoffman $\star$ basis (Theorem $4.4.1$), it is the number of $3$'s, whereas in the case of the Euler $\sharp$ sums basis (Theorem $4.3.2$), it is the depth minus one; for the \textit{Deligne} basis given in Chapter $5$ for $N=2,3,4, \mlq 6 \mrq ,8$, it is the usual depth. The filtration by the level has to be stable under the coaction, and more precisely, the derivations $D_{r}$ decrease the level on the elements of the conjectured basis, which allows a recursion.}; if the derivations, modulo some spaces, act as a deconcatenation on these elements, linear independence follows naturally from this recursion. Nevertheless, to start this procedure, we need an analytic identity\footnote{For the Hoffman basis, F. Brown in $\cite{Br2}$ used the analytic identity proved by Zagier in $\cite{Za}$ (see also $\cite{Li}$).}, which is left here as a conjecture in the case of the Hoffman $\star $ basis. This conjecture is of an entirely different nature from the techniques developed in this thesis. We expect that it could be proved using analytic methods along the lines of $\cite{Za}, \cite{Li}$.\\ \item[Chapter $5$] applies ideas of Galois descent on the motivic side. Originally, the notion of Galois descent was inspired by the question: which linear combinations of Euler sums are \textit{unramified}, i.e.
multiple zeta values?\footnote{This was already an established question, studied by Broadhurst (who uses the terminology \textit{honorary}) among others. Notice that this issue also surfaces for motivic Euler sums in some results in Chapters $3$ and $5$.} More generally, looking at the motivic side, one can ask which linear combinations of MMZV$_{\mu_{N}}$ lie in MMZV$_{\mu_{N'}}$ for $N'$ dividing $N$. This is what we call a \textit{descent} (the first level of a descent), and the question can be answered by exploiting the motivic Galois group action. General descent criteria are given; in the particular case of $N=2,3,4,\mlq 6 \mrq,8$\footnote{\texttt{Nota Bene}: $N=\mlq 6 \mrq$ is a special case; the quotation marks indicate here that we restrict to \textit{unramified} MMZV, cf. $\S 2.1.1$.}, Galois descents are made explicit and our results lead to new bases of MMZV relative to $\mu_{N'}$ in terms of a basis of MMZV relative to $\mu_{N}$, and in particular, a new proof of P. Deligne's results $\cite{De}$.\\ Going further, we define ramification spaces which constitute a tower of intermediate spaces between the elements in MMZV$_{\mu_{N}}$ and the whole space of MMZV$_{\mu_{N'}}$. This is summed up in $\S 3.2$ and studied in detail in Chapter $5$ and in the article $\cite{Gl1}$.\\ Moreover, as we will see below, these methods enable us to construct the motivic periods of categories of mixed Tate motives which cannot be reached by standard methods, i.e. which are not simply generated by a motivic fundamental group. \item[Chapter $6$] gathers some applications of the coaction, from maximal depth terms to motivic identities, via unramified motivic Euler sums; other potential applications of these Galois ideas to the study of these periods still remain to be investigated.
\\ \\ \\ \end{description} \texttt{\textbf{Consistency:}}\\ Chapter $2$ is fundamental for understanding the tools and the proofs of Chapters $4$, $5$ and $6$ (which are independent of one another), but could be skimmed through before reading the main results in Chapter $3$. The proofs of Chapter $4$ are based on the results of Appendix $A.1$, but could be read independently. \chapter{Background} \section{Motives and Periods} Here we sketch the motivic background where the motivic iterated integrals (and hence this work) mainly take place, although most of it can be taken as a black box. Nevertheless, some of the results coming from this rich theory are fundamental to our proofs. \subsection{Mixed Tate Motives} \paragraph{Motives in a nutshell.} Motives are supposed to play the role of a universal (and algebraic) cohomology theory (see $\cite{An}$). This hope is partly nourished by the fact that, between all the classical cohomology theories (de Rham, Betti, $l$-adic, crystalline), we have comparison isomorphisms in characteristic $0$ \footnote{Even in positive characteristic, $\dim H^{i}(X)$ does not depend on the cohomology chosen among these.}. More precisely, the hope is that there should exist a tannakian (in particular abelian, tensor) category of motives $\mathcal{M}(k)$, and a functor $\text{Var}_{k} \xrightarrow{h} \mathcal{M}(k) $ such that:\\ \textit{For each Weil cohomology}\footnote{This functor should verify some properties, such as the K\"{u}nneth formula, Poincar\'{e} duality, etc., as for the classical cohomology theories. \\ If we restrict to smooth projective varieties, $\text{SmProj}_{k}$, we can construct such a category, the category of pure motives $ \mathcal{M}^{pure}(k)$, starting from the category of correspondences of degree $0$. For more details, cf.
$\cite{Ka}$.}: $\text{Var}_{k} \xrightarrow{H} \text{Vec}_{k}$, \textit{there exists a realization map $w_{H}$ such that the following commutes}:\nomenclature{$\text{Var}_{k}$}{the category of varieties over k}\nomenclature{$\text{Vec}_{k}$}{the category of $k$-vector space of finite dimension}\nomenclature{$\text{SmProj}_{k}$}{the category of smooth projective varieties over k.} $$\xymatrix{ \text{Var}_{k} \ar[d]^{\forall H} \ar[r]^{ h} & \mathcal{M}(k) \ar[dl]^{\exists w_{H}}\\ \text{Vec}_{k} & },$$ \textit{where $h$ satisfies properties such as }$h(X\times Y)=h(X)\otimes h(Y)$, $h(X \coprod Y)= h(X)\oplus h(Y)$. The realization functors are conjectured to be full and faithful (Grothendieck's period conjecture, the Hodge conjecture, the Tate conjecture, etc.)\footnote{In the case of Mixed Tate Motives over number fields as seen below, Goncharov proved it for the Hodge and $l$-adic Tate realizations, from results of Borel and Soul\'{e}.}.\\ To this end, Voevodsky (cf. $\cite{Vo}$) constructed a triangulated category of Mixed Motives $DM^{\text{eff}}(k)_{\mathbb{Q}}$, with rational coefficients, equipped with tensor product and a functor: \begin{center} $M_{gm}: \text{Sch}_{\diagup k} \rightarrow DM^{\text{eff}}$ satisfying some properties such as: \end{center} \begin{description} \item[K\"{u}nneth] $M_{gm}(X \times Y)=M_{gm}(X)\otimes M_{gm}(Y)$. \item[$\mathbb{A}^{1}$-invariance] $M_{gm}(X \times \mathbb{A}^{1})= M_{gm}(X)$. \item[Mayer-Vietoris] $M_{gm}(U\cap V)\rightarrow M_{gm}(U) \oplus M_{gm}(V) \rightarrow M_{gm}(U\cup V)\rightarrow M_{gm}(U\cap V)[1] $, $U,V$ open, is a distinguished triangle.\footnote{Distinguished triangles in $DMT^{\text{eff}}(k)$, i.e. of type Tate, become exact sequences in $\mathcal{MT}(k)$.} \item[Gysin] $M_{gm}(X\diagdown Z)\rightarrow M_{gm}(X) \rightarrow M_{gm}(Z)(c)[2c]\rightarrow M_{gm}(X\diagdown Z)[1] $, $X$ smooth, $Z$ smooth, closed, of codimension $c$, is a distinguished triangle.
\end{description} We would like to extract from the triangulated category $DM^{\text{eff}}(k)_{\mathbb{Q}}$ an abelian category of Mixed Motives over $k$\footnote{A way would be to define a $t$-structure on this category; by the theorem of Beilinson, Bernstein and Deligne, the heart of a $t$-structure is a full admissible abelian subcategory.}. However, we are still not able to do this in the general case, but it is possible for a triangulated tensor subcategory of Tate type, generated by $\mathbb{Q}(n)$, with some properties.\\ \\ \textsc{Remark}: $\mathbb{L}\mathrel{\mathop:}=\mathbb{Q}(-1)= H^{1}(\mathbb{G}_{\mathfrak{m}})=H^{1}(\mathbb{P}^{1} \diagdown \lbrace 0, \infty\rbrace)$\nomenclature{$\mathbb{G}_{\mathfrak{m}}$}{the multiplicative group}, which is referred to as the \textit{Lefschetz motive}, is a pure motive, and has period $2i\pi$. Its dual is the so-called \textit{Tate motive} $\mathbb{T}\mathrel{\mathop:}=\mathbb{Q}(1)=\mathbb{L}^{\vee}$. More generally, let us define $\mathbb{Q}(-n)\mathrel{\mathop:}= \mathbb{Q}(-1)^{\otimes n}$ resp. $\mathbb{Q}(n)\mathrel{\mathop:}= \mathbb{Q}(1)^{\otimes n}$, whose periods are in $(2i\pi)^{n} \mathbb{Q}$ resp. $(\frac{1}{2i\pi})^{n} \mathbb{Q}$, hence extended periods in $\widehat{P}$; we have the decomposition of the motive of projective space: $h(\mathbb{P}^{n})= \oplus _{k=0}^{n}\mathbb{Q}(-k)$. \paragraph{Mixed Tate Motives over a number field.} Let us first define, for $k$ a number field, the category $\mathcal{DM}(k)_{\mathbb{Q}}$ from $\mathcal{DM}^{\text{eff}}(k)_{\mathbb{Q}}$ by formally \say{inverting} the Tate motive $\mathbb{Q}(1)$, and then $\mathcal{DMT}(k)_{\mathbb{Q}}$ as the smallest triangulated full subcategory of $\mathcal{DM}(k)_{\mathbb{Q}}$ containing $\mathbb{Q}(n), n\in\mathbb{Z}$, and stable under extensions.\\ By the vanishing theorem of Beilinson-Soul\'{e}, and results from Levine (cf.
$\cite{Le}$), there exists:\footnote{A \textit{tannakian} category is abelian, $k$-linear, tensor rigid (autoduality), has an exact faithful fiber functor, compatible with $\otimes$ structures, etc. Cf. $\cite{DM}$ about tannakian categories.} \begin{framed} A tannakian \textit{category of Mixed Tate motives} over $k$ with rational coefficients, $\mathcal{MT}(k)_{\mathbb{Q}}$\nomenclature{$\mathcal{MT}(k)$}{category of Mixed Tate Motives over $k$}, equipped with a weight filtration $W_{r}$ indexed by even integers such that $gr_{-2r}^{W}(M)$ is a sum of copies of $\mathbb{Q}(r)$ for $M\in \mathcal{MT}(k)$, i.e.,\\ Every object $M\in \mathcal{MT}(k)_{\mathbb{Q}}$ is an iterated extension of Tate motives $\mathbb{Q}(n), n\in\mathbb{Z}$. \end{framed} such that (by the works of Voevodsky, Levine $\cite{Le}$, Bloch, Borel (and K-theory), cf. $\cite{DG}$): $$\begin{array}{ll} \text{Ext}^{1}_{\mathcal{MT}(k)}(\mathbb{Q}(0),\mathbb{Q}(n) )\cong K_{2n-1}(k) \otimes_{\mathbb{Z}} \mathbb{Q} \cong & \left\lbrace \begin{array}{ll} k^{\ast}\otimes_{\mathbb{Z}} \mathbb{Q} & \text{ if } n=1 .\\ \mathbb{Q}^{r_{1}+r_{2}} & \text{ if } n>1 \text{ odd }\\ \mathbb{Q}^{r_{2}} & \text{ if } n>1 \text{ even } \end{array} \right. . \\ \text{Ext}^{i}_{\mathcal{MT}(k)}(\mathbb{Q}(0),\mathbb{Q}(n) )\cong 0 & \quad \text{ if } i>1 \text{ or } n\leq 0.\\ \end{array}$$ Here, $r_{1}$ resp. $r_{2}$ stands for the number of real resp. complex (non-real, up to conjugation) embeddings from $k$ to $\mathbb{C}$.\\ In particular, the weight defines a canonical fiber functor\nomenclature{$\omega$}{the canonical fiber functor}: $$\begin{array}{lll} \omega: & \mathcal{MT}(k) \rightarrow \text{Vec}_{\mathbb{Q}} & \\ &M \mapsto \oplus \omega_{r}(M) & \quad \quad \text{ with } \left\lbrace \begin{array}{l} \omega_{r}(M)\mathrel{\mathop:}= \text{Hom}_{\mathcal{MT}(k)}(\mathbb{Q}(r), gr_{-2r}^{W}(M))\\ \text{ i.e. } \quad gr_{-2r}^{W}(M)= \mathbb{Q}(r)\otimes \omega_{r}(M). \end{array} \right.
\end{array} $$ \\ The category of Mixed Tate Motives over $k$, being tannakian, is equivalent to the category of representations of the so-called \textit{\textbf{motivic Galois group}} $\boldsymbol{\mathcal{G}^{\mathcal{MT}}}$\nomenclature{$\mathcal{G}^{\mathcal{M}}$}{the motivic Galois group of the category of Mixed Tate Motives $\mathcal{M}$ } of $\mathcal{MT}(k)$ \footnote{By the equivalence of categories between $A$-comodules and representations of the affine group scheme $\text{Spec}(A)$, for $A$ a Hopf algebra. Note that $\text{Rep}(\mathbb{G}_{m})$ is the category of finite-dimensional $\mathbb{Z}$-graded $k$-vector spaces.}:\nomenclature{$\text{Rep}(\cdot)$}{the category of finite representations} \begin{framed} \begin{equation}\label{eq:catrep} \mathcal{MT}(k)_{\mathbb{Q}}\cong \text{Rep}_{k} \mathcal{G}^{\mathcal{MT}} \cong \text{Comod } (\mathcal{O}(\mathcal{G}^{\mathcal{MT}})) \quad \text{ where } \mathcal{G}^{\mathcal{MT}}\mathrel{\mathop:}=\text{Aut}^{\otimes } \omega . \end{equation} \end{framed} Since $\omega$ is graded, the motivic Galois group $\mathcal{G}^{\mathcal{MT}}$ decomposes as: \begin{center} $\mathcal{G}^{\mathcal{MT}}= \mathbb{G}_{m} \ltimes \mathcal{U}^{\mathcal{MT}}$, $\quad \text{i.e. } \quad 1 \rightarrow \mathcal{U}^{\mathcal{MT}} \rightarrow \mathcal{G}^{\mathcal{MT}} \leftrightarrows \mathbb{G}_{m} \rightarrow 1 \quad \textit{ is an exact sequence, }$ \end{center} \begin{center} \textit{where} $\mathcal{U}^{\mathcal{MT}}$\nomenclature{$\mathcal{U}^{\mathcal{M}}$}{the pro-unipotent part of the motivic Galois group $\mathcal{G}^{\mathcal{M}}$.} \textit{is a pro-unipotent group scheme defined over }$\mathbb{Q}$.
\end{center} The action of $\mathbb{G}_{m}$ is a grading, and $\mathcal{U}^{\mathcal{MT}}$ acts trivially on the graded pieces $\omega(\mathbb{Q}(n))$.\\ \\ Let $\mathfrak{u}$ denote the completion of the pro-nilpotent graded Lie algebra of $\mathcal{U}^{\mathcal{MT}}$ (defined by a limit); $\mathfrak{u}$ is free and graded in negative degrees from the $\mathbb{G}_{m}$-action. Furthermore\footnote{Since $\text{Ext}^{2}_{\mathcal{MT}} (\mathbb{Q}(0), \mathbb{Q}(n))=0$, which implies that $H^{2}(\mathfrak{u},M)=0$ for all $M$, hence $\mathfrak{u}$ is free. Moreover, $\mathfrak{u}^{ab}= \mathfrak{u} \diagup [\mathfrak{u}, \mathfrak{u}]= H_{1}(\mathfrak{u}; \mathbb{Q})$, and, for $\mathcal{U}$ unipotent: $$\left( \mathfrak{u}^{ab}\right)^{\vee}_{m-n} \cong \text{Ext}^{1}_{\text{Rep} _{\mathbb{Q}}} (\mathbb{Q}(n), \mathbb{Q}(m)).$$}: \begin{equation} \label{eq:uab} \mathfrak{u}^{ab} \cong \bigoplus \text{Ext}^{1}_{\mathcal{MT}} (\mathbb{Q}(0), \mathbb{Q}(n))^{\vee} \text{ in degree } n. \end{equation} Hence the \textit{fundamental Hopf algebra} is\nomenclature{$\mathcal{A}^{\mathcal{M}}$}{the fundamental Hopf algebra of $\mathcal{M}$} \footnote{Recall the anti-equivalence of categories between Hopf algebras and affine group schemes: $$\xymatrix@R-1pc{ k-Alg^{op} \ar[r]^{\sim} & k-\text{AffSch } \\ k-\text{HopfAlg}^{op} \ar@{^{(}->}[u] \ar[r]^{\sim} & k-\text{ AffGpSch } \ar@{^{(}->}[u]\\ A \ar@{|->}[r] & \text{ Spec } A \\ \mathcal{O}(G) & G: R \mapsto \text{Hom}_{k}(\mathcal{O}(G),R) \ar@{|->}[l] }. $$ It comes from the fully faithful Yoneda functor $C^{op} \rightarrow \text{Fonct}(C, \text{Set})$, leading to an equivalence of categories if we restrict to representable functors: $k-\text{AffGpSch } \cong \text{RepFonct } (\text{Alg }^{op},Gp)$.
Properties of Hopf algebras are obtained from properties of affine group schemes by reversing the arrows in each diagram.\\ Remark that $G$ is unipotent if and only if $A$ is commutative, of finite type, connected and filtered.}: \begin{equation} \label{eq:Amt} \mathcal{A}^{\mathcal{MT}}\mathrel{\mathop:}=\mathcal{O}(\mathcal{U}^{\mathcal{MT}})\cong (U^{\wedge} (\mathfrak{u}))^{\vee} \cong T(\oplus_{n\geq 1} \text{Ext}^{1}_{\mathcal{MT}} (\mathbb{Q}(0), \mathbb{Q}(n))^{\vee} ). \end{equation} Hence, by the tannakian dictionary $(\ref{eq:catrep})$: $\mathcal{MT}(k)_{\mathbb{Q}}\cong \text{Rep}_{k}^{gr} \mathcal{U}^{\mathcal{MT}} \cong \text{Comod}^{gr} \mathcal{A}^{\mathcal{MT}} .$ \\ \\ Once an embedding $\sigma: k \hookrightarrow \mathbb{C}$ is fixed, Betti cohomology leads to a \textit{Betti realization} functor:\nomenclature{$\omega_{B_{\sigma}}$}{Betti realization functor} $$\omega_{B_{\sigma}}: \mathcal{MT}(k) \rightarrow \text{Vec}_{\mathbb{Q}} ,\quad M \mapsto M_{\sigma}.$$ De Rham cohomology leads similarly to the \textit{de Rham realization} functor\nomenclature{$\omega_{dR}$}{de Rham realization functor}: $$\omega_{dR}: \mathcal{MT}(k) \rightarrow \text{Vec}_{k} , \quad M \mapsto M_{dR} \text{ , } \quad M_{dR} \text{ weight graded}.$$ Beware, the de Rham functor $\omega_{dR}$ here is not defined over $\mathbb{Q}$ but over $k$, and $\omega_{dR}=\omega \otimes_{\mathbb{Q}} k$, so the de Rham realization of an object $M$ is $M_{dR}=\omega(M)\otimes_{\mathbb{Q}} k$.\\ Between all these realizations, we have comparison isomorphisms, such as: $$ M_{\sigma}\otimes_{\mathbb{Q}} \mathbb{C} \xrightarrow[\sim]{\text{comp}_{dR, \sigma}} M_{dR} \otimes_{k,\sigma} \mathbb{C} \text{ with its inverse } \text{comp}_{\sigma,dR}.$$ $$ M_{\sigma}\otimes_{\mathbb{Q}} \mathbb{C} \xrightarrow[\sim]{\text{comp}_{\omega, B_{\sigma}}} M_{\omega} \otimes_{\mathbb{Q}} \mathbb{C} \text{ with its inverse } \text{comp}_{B_{\sigma}, \omega}.$$ Define also, looking at tensor-preserving
isomorphisms: $$\begin{array}{lll} \mathcal{G}_{B}\mathrel{\mathop:}=\text{Aut}^{\otimes}(\omega_{B}), & \text{resp. } \mathcal{G}_{dR}\mathrel{\mathop:}=\text{Aut}^{\otimes}(\omega_{dR}) & \\ P_{\omega, B}\mathrel{\mathop:}=\text{Isom}^{\otimes}(\omega_{B},\omega), & \text{resp. } P_{B,\omega}\mathrel{\mathop:}=\text{Isom}^{\otimes}(\omega,\omega_{B}), & (\mathcal{G}^{\mathcal{MT}}, \mathcal{G}_{B}) \text{ resp. } (\mathcal{G}_{B}, \mathcal{G}^{\mathcal{MT}}) \text{ bitorsors }. \end{array}$$ The comparison isomorphisms above define $\mathbb{C}$-points of these schemes: $\text{comp}_{\omega,B}\in P_{\omega,B} (\mathbb{C})$.\\ \\ \textsc{Remarks}: By $(\ref{eq:catrep})$:\footnote{The different cohomologies should be viewed as interchangeable realizations. \'{E}tale cohomology, with the action of the absolute Galois group $\text{Gal}(\overline{\mathbb{Q}}\diagup\mathbb{Q})$ (cf. $\cite{An3}$), is related to the number $N_{p}$ of points of the reduction modulo $p$. For Mixed Tate Motives (and conjecturally only for those), the $N_{p}$ are polynomial in $p$, which is quite restrictive.} \begin{framed} A Mixed Tate motive over a number field is uniquely defined by its de Rham realization, a vector space $M_{dR}$, with an action of the motivic Galois group $\mathcal{G}^{\mathcal{MT}}$. \end{framed} \texttt{Example:} For instance $\mathbb{Q}(n)$, as a Tate motive, can be seen as the vector space $\mathbb{Q}$ with the action $\lambda\cdot x\mathrel{\mathop:}= \lambda^{n} x$, for $\lambda\in\mathbb{Q}^{\ast}=\text{Aut}(\mathbb{Q})=\mathbb{G}_{m}(\mathbb{Q})$.
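To make the example concrete, here is an illustrative toy model in Python (not notation from the thesis): the Tate object $\mathbb{Q}(n)$ is modelled as the one-dimensional space $\mathbb{Q}$ with $\lambda\in\mathbb{G}_{m}(\mathbb{Q})$ acting by $\lambda^{n}$, so that twists add under tensor product:

```python
from fractions import Fraction

class TateTwist:
    """Toy model of the Tate object Q(n): the vector space Q on which
    lambda in Q* = G_m(Q) acts by lambda^n.  (Illustrative only.)"""
    def __init__(self, n):
        self.n = n

    def act(self, lam, x):
        # The G_m-action: lambda . x = lambda^n * x, exactly over Q.
        return Fraction(lam) ** self.n * Fraction(x)

    def tensor(self, other):
        # Q(m) (x) Q(n) = Q(m + n): the twists add under tensor product.
        return TateTwist(self.n + other.n)

L = TateTwist(-1)      # the Lefschetz motive Q(-1)
T = TateTwist(1)       # the Tate motive Q(1) = L dual
print(L.act(2, 1))     # lambda = 2 acts on Q(-1) by 2^{-1}
print(L.tensor(T).n)   # Q(-1) (x) Q(1) = Q(0), the unit object
```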
\paragraph{Mixed Tate Motives over $\mathcal{O}_{S}$.} First, let us recall that for $k$ a number field and $\mathcal{O}$\nomenclature{$\mathcal{O}_{k}$}{ring of integers of $k$} its ring of integers, the \textit{archimedean values} of $k$ are associated to an embedding $k \xhookrightarrow{\sigma} \mathbb{C}$, such that: $$\mid x \mid \mathrel{\mathop:}= \mid\sigma(x)\mid_{\infty}\text{ , where } \mid\cdot \mid_{\infty} \text{ is the usual absolute value},$$ and the \textit{non-archimedean values} are associated to non-zero prime ideals of $\mathcal{O}$\footnote{$ \mathcal{O}$ is a Dedekind domain and $ \mathcal{O}_{\mathfrak{p}}$ a discrete valuation ring, whose prime ideals are the prime ideals of $\mathcal{O}$ which are included in $(\mathfrak{p})\mathcal{O}_{\mathfrak{p}}$.}: $$v_{\mathfrak{p}}: k^{\times} \rightarrow \mathbb{Z}, \quad v_{\mathfrak{p}}(x) \text{ is the integer such that } x \mathcal{O}_{\mathfrak{p}} = \mathfrak{p}^{v_{\mathfrak{p}}(x)} \mathcal{O}_{\mathfrak{p}} \text{ for } x\in k^{\times}.$$ For $S$ a finite set of absolute values of $k$ containing all archimedean values, the \textit{ring of S-integers}\nomenclature{$\mathcal{O}_{S}$}{ring of $S$-integers} is: $$\mathcal{O}_{S}\mathrel{\mathop:}= \left\lbrace x\in k \mid v(x)\geq 0 \text{ for all valuations } v\notin S \right\rbrace. $$ Dirichlet's unit theorem generalizes to $\mathcal{O}^{\times}_{S}$, a finitely generated abelian group:\nomenclature{$\mu(K)$}{the finite cyclic group of roots of unity in $K$} \footnote{ It will be used below, for dimensions, in $\ref{dimensionk}$. Here, $\text{ card } (S)= r_{1}+r_{2}+\text{ card }(\text{non-archimedean places}) $; as usual, $r_{1}, r_{2}$ standing for the number of real resp.
complex (non-real, up to conjugation) embeddings from $k$ to $\mathbb{C}$; $\mu(K) $ is the finite cyclic group of roots of unity in $K$.} $$\mathcal{O}_{S}^{\times} \cong \mu(K) \times \mathbb{Z}^{\text{ card }(S)-1}.$$ \texttt{Examples}: \begin{itemize} \item[$\cdot$] Taking $S$ as the set of the archimedean values leads to the usual ring of integers $\mathcal{O}$, and would lead to the unramified category of motives $\mathcal{MT}(\mathcal{O})$ below. \item[$\cdot$] For $k=\mathbb{Q}$, $p$ prime, with $S=\lbrace v_{p}, \mid \cdot \mid_{\infty}\rbrace$, we obtain $\mathbb{Z}\left[ \frac{1}{p} \right] $. Note that the definition does not allow one to choose the infinite set $S= \lbrace v_{q} \mid q \text{ prime}, \, q\neq p \rbrace \cup \lbrace \mid \cdot \mid_{\infty}\rbrace$, which would lead to the localization $\mathbb{Z}_{(p)}\mathrel{\mathop:}=\lbrace x\in\mathbb{Q} \mid v_{p}(x) \geq 0\rbrace$. \end{itemize} Now, let us define the categories of Mixed Tate Motives which interest us here:\nomenclature{$\mathcal{MT}_{\Gamma}$}{a tannakian category associated to $\Gamma$ a sub-vector space of $\text{Ext}^{1}_{\mathcal{MT}(k)}( \mathbb{Q}(0), \mathbb{Q}(1))$} \begin{defin}\label{defimtcat} \begin{description} \item[$\boldsymbol{\mathcal{MT}_{\Gamma}}$:] For $\Gamma$ a sub-vector space of $\text{Ext}^{1}_{\mathcal{MT}(k)}( \mathbb{Q}(0), \mathbb{Q}(1)) \cong k^{\ast}\otimes \mathbb{Q}$: \begin{center} $\mathcal{MT}_{\Gamma}:$ the tannakian subcategory formed by the objects $M$ such that every subquotient $E$ of $M$ satisfies: $$0 \rightarrow \mathbb{Q}(n+1) \rightarrow E \rightarrow \mathbb{Q}(n) \rightarrow 0 \quad \Rightarrow \quad [E]\in \Gamma \subset \text{Ext}^{1}_{\mathcal{MT}(k)}( \mathbb{Q}(0), \mathbb{Q}(1)) \footnote{$\text{Ext}^{1}_{\mathcal{MT}(k)}( \mathbb{Q}(0), \mathbb{Q}(1)) \cong \text{Ext}^{1}_{\mathcal{MT}(k)}( \mathbb{Q}(n), \mathbb{Q}(n+1)).$}.$$ \end{center} \item[$\boldsymbol{\mathcal{MT}(\mathcal{O}_{S})}$:] The category of mixed Tate motives unramified at each finite place $v\notin S$:
\begin{center} $\mathcal{MT}(\mathcal{O}_{S})\mathrel{\mathop:}=\mathcal{MT}_{\Gamma}, \quad $ for $\Gamma=\mathcal{O}_{S}^{\ast}\otimes \mathbb{Q}$. \end{center} \end{description} \end{defin} Extension groups for these categories are then identical to those of $\mathcal{MT}(k)$ except: \begin{equation} \label{eq:extension} \text{Ext}^{1}_{\mathcal{MT}_{\Gamma}}( \mathbb{Q}(0), \mathbb{Q}(1))= \Gamma, \quad \text{ resp. } \quad \text{Ext}^{1}_{\mathcal{MT}(\mathcal{O}_{S})}( \mathbb{Q}(0), \mathbb{Q}(1))= K_{1}(\mathcal{O}_{S})\otimes \mathbb{Q}=\mathcal{O}^{\ast}_{S}\otimes \mathbb{Q}. \end{equation} \paragraph{Cyclotomic Mixed Tate Motives.} In this thesis, we focus on the cyclotomic case and consider the following categories, and sub-categories, for $k_{N}$\nomenclature{$k_{N}$}{the $N^{\text{th}}$ cyclotomic field} the $N^{\text{th}}$ cyclotomic field, $\mathcal{O}_{N}\mathrel{\mathop:}= \mathbb{Z}[\xi_{N}]$\nomenclature{$\mathcal{O}_{N}$}{the ring of integers of $k_{N}$ i.e. $ \mathbb{Z}[\xi_{N}]$ } its ring of integers, with $\xi_{N}$\nomenclature{$\xi_{N}$}{a primitive $N^{\text{th}}$ root of unity} a primitive $N^{\text{th}}$ root of unity:\nomenclature{$\mathcal{MT}_{N,M}$}{the tannakian Mixed Tate subcategory of $\mathcal{MT}(k_{N})$ ramified in $M$} \begin{framed} $$\begin{array}{ll} \boldsymbol{\mathcal{MT}_{N,M}} & \mathrel{\mathop:}= \mathcal{MT} \left( \mathcal{O}_{N} \left[ \frac{1}{M}\right] \right).\\ \boldsymbol{\mathcal{MT}_{\Gamma_{N}}}, & \text{ with $\Gamma_{N}$ the $\mathbb{Q}$-sub vector space of } \left( \mathcal{O}\left[ \frac{1}{N} \right] \right) ^{\ast} \otimes \mathbb{Q}\\ & \text{ generated by $\lbrace 1-\xi^{a}_{N}\rbrace_{0< a < N}$ (modulo torsion). 
} \end{array}$$ \end{framed} \noindent Hence: $$ \mathcal{MT} \left( \mathcal{O}_{N} \right) \subsetneq \mathcal{MT}_{\Gamma_{N}} \subset \mathcal{MT} \left( \mathcal{O}_{N} \left[ \frac{1}{N}\right] \right)$$ The second inclusion is an equality if and only if $N$ has all its prime factors inert\footnote{I.e. each prime $p$ dividing $N$ generates $(\mathbb{Z} /m \mathbb{Z})^{\ast}$, for $m$ such that $N=p^{v_{p}(N)} m$.\nomenclature{$v_{p}(\cdot)$}{$p$-adic valuation} This can occur only in the following cases: $N= p^{s}, 2p^{s}, 4p^{s}, p^{s}q^{k}$, with extra conditions in most of these cases, such as: $2$ is a primitive root modulo $p^{s}$, etc.}, since: \begin{equation}\label{eq:gamma} \Gamma_{N}= \left\lbrace \begin{array}{ll} \left( \mathcal{O}\left[ \frac{1}{p} \right] \right) ^{\ast} \otimes \mathbb{Q} & \text{ if } N = p^{r} \\ (\mathcal{O} ^{\ast} \otimes \mathbb{Q} )\oplus \left( \oplus_{ p \text{ prime } \atop p\mid N} \langle p \rangle\otimes \mathbb{Q} \right) &\text{ otherwise.} \end{array} \right. \end{equation} The motivic cyclotomic MZVs lie in the subcategory $\mathcal{MT}_{\Gamma_{N}}$, as we will see more precisely in $\S 2.3$.\\ \\ \texttt{Notations:} We may sometimes drop the $M$ (or even $N$) to lighten the notations:\footnote{For instance, $\mathcal{MT}_{3}$ is the category $\mathcal{MT} \left( \mathcal{O}_{3} \left[ \frac{1}{3} \right] \right) $.}\nomenclature{$\mathcal{MT}_{N}$}{a tannakian Mixed Tate subcategory of $\mathcal{MT}(k_{N})$}\\ $$\mathcal{MT}_{N}\mathrel{\mathop:}= \left\lbrace \begin{array}{ll} \mathcal{MT}_{N,N} & \text{ if } N=2,3,4,8\\ \mathcal{MT}_{6,1} & \text{ if } N=\mlq 6 \mrq. \\ \end{array} \right. $$ \subsection{Motivic periods} Let $\mathcal{M}$ be a tannakian category of mixed Tate motives. Its \textit{algebra of motivic periods} is defined as (cf.
$\cite{D1}$, $\cite{Br6}$, and $\cite{Br4}$, $\S 2$):\nomenclature{$\mathcal{P}_{\mathcal{M}}^{\mathfrak{m}}$}{the algebra of motivic period of a tannakian category of MTM $\mathcal{M}$} $$\boldsymbol{\mathcal{P}_{\mathcal{M}}^{\mathfrak{m}}}\mathrel{\mathop:}=\mathcal{O}(\text{Isom}^{\otimes}_{\mathcal{M}}(\omega, \omega_{B}))=\mathcal{O}(P_{B,\omega}).$$ \begin{framed} A \textbf{\textit{motivic period}} denoted as a triplet $\boldsymbol{\left[M,v,\sigma \right]^{\mathfrak{m}}}$,\nomenclature{$\left[M,v,\sigma \right]^{\mathfrak{m}}$}{a motivic period} element of $\mathcal{P}_{\mathcal{M}}^{\mathfrak{m}}$, is constructed from a motive $M\in \text{ Ind } (\mathcal{M})$, and classes $v\in\omega(M)$, $\sigma\in\omega_{B}(M)^{\vee}$. It is a function $P_{B,\omega} \rightarrow \mathbb{A}^{1}$, which, on its rational points, is given by: \begin{equation}\label{eq:mper} \quad P_{B,\omega} (\mathbb{Q}) \rightarrow \mathbb{Q}\text{ , } \quad \alpha \mapsto \langle \alpha(v), \sigma\rangle . \end{equation} Its \textit{period} is obtained by the evaluation on the complex point $\text{comp}_{B, dR}$: \begin{equation}\label{eq:perm} \begin{array}{lll} \mathcal{P}_{\mathcal{M}}^{\mathfrak{m}} & \rightarrow & \mathbb{C} \\ \left[M,v,\sigma \right]^{\mathfrak{m}} & \mapsto & \langle \text{comp}_{B,dR} (v\otimes 1), \sigma \rangle . 
\end{array} \end{equation} \end{framed} \noindent \texttt{Example}: The first example is the \textit{Lefschetz motivic period}: $\mathbb{L}^{\mathfrak{m}}\mathrel{\mathop:}=[H^{1}(\mathbb{G}_{m}), [\frac{dx}{x}], [\gamma_{0}]]^{\mathfrak{m}}$, period of the Lefschetz motive $\mathbb{L}$; it can be seen as the \textit{motivic} $(2 i\pi)^{\mathfrak{m}}$; this notation appears below.\nomenclature{$\mathbb{L}$}{Lefschetz motive, $\mathbb{L}^{\mathfrak{m}}$ the Lefschetz motivic period}\\ \\ This construction can be generalized for any pair of fiber functors $\omega_{1}$, $\omega_{2}$ leading to: \begin{center} \textit{Motivic periods of type} $(\omega_{1},\omega_{2})$, which are in the following algebra of motivic periods: $$\boldsymbol{\mathcal{P}_{\mathcal{M}}^{\omega_{1},\omega_{2}}}\mathrel{\mathop:}= \mathcal{O}\left( P_{\omega_{1},\omega_{2}}\right) = \mathcal{O}\left( \text{Isom}^{\otimes}(\omega_{2}, \omega_{1})\right).$$ \end{center} \textsc{Remarks}: \begin{itemize} \item[$\cdot$] The groupoid structure (composition) on the isomorphisms of fiber functors on $\mathcal{M}$, by dualizing, leads to a coalgebroid structure on the spaces of motivic periods: $$ \mathcal{P}_{\mathcal{M}}^{\omega_{1},\omega_{3}} \rightarrow \mathcal{P}_{\mathcal{M}}^{\omega_{1},\omega_{2}} \otimes \mathcal{P}_{\mathcal{M}}^{\omega_{2},\omega_{3}}.$$ \item[$\cdot$] Any structure carried by these fiber functors (weight grading on $\omega_{dR}$, complex conjugation on $\omega_{B}$, etc.) is transmitted to the corresponding ring of periods. \end{itemize} \texttt{Examples}: \begin{itemize} \item[$\cdot$ ] For $(\omega,\omega_{B})$, it comes down to (our main interest) $\mathcal{P}_{\mathcal{M}}^{\mathfrak{m}}$ as defined in $(\ref{eq:mper})$. By the last remark, $\mathcal{P}_{\mathcal{M}}^{\mathfrak{m}}$ inherits a weight grading and we can define (cf. 
$\cite{Br4}$, $\S 2.6$):\nomenclature{$\mathcal{P}_{\mathcal{M}}^{\mathfrak{m},+}$}{the ring of geometric motivic periods of $\mathcal{M}$} \begin{framed} \begin{center} $\boldsymbol{\mathcal{P}_{\mathcal{M}}^{\mathfrak{m},+}} \subset \mathcal{P}_{\mathcal{M}}^{\mathfrak{m}}$, the ring of \textit{geometric periods}, is generated by periods of motives with non-negative weights: $\left\lbrace \left[M,v,\sigma \right]^{\mathfrak{m}}\in \mathcal{P}_{\mathcal{M}}^{\mathfrak{m}} \mid W_{-1} M=0 \right\rbrace $. \end{center} \end{framed} \item[$\cdot$] The ring of periods of type $(\omega,\omega)$ is $\mathcal{P}_{\mathcal{M}}^{\omega}\mathrel{\mathop:}= \mathcal{O} \left( \text{Aut}^{\otimes}(\omega)\right)= \mathcal{O} \left(\mathcal{G}^{\mathcal{MT}}\right)$.\footnote{ In the case of a mixed Tate category over $\mathbb{Q}$, as $\mathcal{MT}(\mathbb{Z})$, this is equivalent to the \textit{De Rham periods} in $\mathcal{P}_{\mathcal{M}}^{\mathfrak{dR}}\mathrel{\mathop:}= \mathcal{O} \left( \text{Aut}^{\otimes}(\omega_{dR})\right)$, defined in $\cite{Br4}$; however, for other cyclotomic fields $k$ considered later ($N>2$), we have to consider the canonical fiber functor, since it is defined over $k$.} \\ \textit{Unipotent variants} of these periods are defined when restricting to the unipotent part $\mathcal{U}^{\mathcal{MT}}$ of $\mathcal{G}^{\mathcal{MT}}$, and appear below (in $\ref{eq:intitdr}$):\nomenclature{$\mathcal{P}_{\mathcal{M}}^{\mathfrak{a}}$}{the ring of unipotent periods} $$\boldsymbol{\mathcal{P}_{\mathcal{M}}^{\mathfrak{a}}}\mathrel{\mathop:}=\mathcal{O} \left( \mathcal{U}^{\mathcal{MT}}\right)= \mathcal{A}^{\mathcal{MT}}, \quad \text{ the fundamental Hopf algebra}.$$ They correspond to the notion of framed objects in mixed Tate categories, cf. $\cite{Go1}$. 
By restriction, there is a map: $$ \mathcal{P}_{\mathcal{M}}^{\omega} \rightarrow \mathcal{P}_{\mathcal{M}}^{\mathfrak{a}}.$$ \end{itemize} By the remark above, there is a coaction: $$\boldsymbol{\Delta^{\mathfrak{m}, \omega}}:\mathcal{P}_{\mathcal{M}}^{\mathfrak{m}} \rightarrow \mathcal{P}_{\mathcal{M}}^{\omega} \otimes \mathcal{P}_{\mathcal{M}}^{\mathfrak{m}}.$$ Moreover, composing this coaction by the augmentation map $\epsilon: \mathcal{P}_{\mathcal{M}}^{\mathfrak{m},+} \rightarrow (\mathcal{P}_{\mathcal{M}}^{\mathfrak{m},+})_{0} \cong \mathbb{Q}$, leads to the morphism (details in $\cite{Br4}$, $\S 2.6$): \begin{equation}\label{eq:projpiam} \boldsymbol{\pi_{\mathfrak{a},\mathfrak{m}}}: \quad \mathcal{P}_{\mathcal{M}}^{\mathfrak{m},+} \rightarrow \mathcal{P}_{\mathcal{M}}^{\mathfrak{a}}, \end{equation} which is, on periods of a motive $M$ such that $W_{-1} M=0$: $\quad \left[M,v,\sigma \right]^{\mathfrak{m}} \rightarrow \left[M,v,^{t}c(\sigma) \right]^{\mathfrak{a}}, $ $$ \text{ where } c \text{ is defined as the composition}: \quad M_{\omega} \twoheadrightarrow gr^{W}_{0}M_{\omega}= W_{0} M_{\omega} \xrightarrow{\text{comp}_{B,\omega}} W_{0} M_{B} \hookrightarrow M_{B} .$$ Bear in mind also the non-canonical isomorphisms, compatible with weight and coaction ($\cite{Br4}$, Corollary $2.11$) between those $\mathbb{Q}$ algebras: \begin{equation} \label{eq:periodgeom} \mathcal{P}_{\mathcal{M}}^{\mathfrak{m}} \cong \mathcal{P}_{\mathcal{M}}^{\mathfrak{a}} \otimes_{\mathbb{Q}} \mathbb{Q} \left[ (\mathbb{L}^{\mathfrak{m}} )^{-1} ,\mathbb{L}^{\mathfrak{m}} \right], \quad \text{and} \quad \mathcal{P}_{\mathcal{M}}^{\mathfrak{m},+} \cong \mathcal{P}_{\mathcal{M}}^{\mathfrak{a}} \otimes_{\mathbb{Q}} \mathbb{Q} \left[ \mathbb{L}^{\mathfrak{m}} \right]. 
\end{equation} In particular, $\pi_{\mathfrak{a},\mathfrak{m}}$ is obtained by sending $\mathbb{L}^{\mathfrak{m}}$ to $0$.\\ \\ In the case of a category of mixed Tate motives $\mathcal{M}$ defined over $\mathbb{Q}$, \footnote{As, in our concerns, $\mathcal{MT}_{N}$ above with $N=1,2$; in these exceptional (real) cases, we want to keep track of only even Tate twists.} the complex conjugation defines the \textit{real Frobenius} $\mathcal{F}_{\infty}: M_{B} \rightarrow M_{B}$, and induces an involution on motivic periods $\mathcal{F}_{\infty}: \mathcal{P}_{\mathcal{M}}^{\mathfrak{m}} \rightarrow\mathcal{P}_{\mathcal{M}}^{\mathfrak{m}}$. Furthermore, $\mathbb{L}^{\mathfrak{m}}$ is anti-invariant under $\mathcal{F}_{\infty}$ (i.e. $\mathcal{F}_{\infty} (\mathbb{L}^{\mathfrak{m}})=-\mathbb{L}^{\mathfrak{m}}$). Then, let us define:\nomenclature{$\mathcal{F}_{\infty}$}{real Frobenius} \begin{framed} \begin{center} $\boldsymbol{\mathcal{P}_{\mathcal{M}, \mathbb{R} }^{\mathfrak{m},+}}$ the subset of $\mathcal{P}_{\mathcal{M}}^{\mathfrak{m},+}$ invariant under the real Frobenius $\mathcal{F}_{\infty}$, \text{ which, by } $(\ref{eq:periodgeom})$ \text{ satisfies }:\nomenclature{$\mathcal{P}_{\mathcal{M}, \mathbb{R} }^{\mathfrak{m},+}$}{the ring of the Frobenius-invariant geometric periods} \end{center} \begin{equation}\label{eq:periodgeomr} \mathcal{P}_{\mathcal{M}}^{\mathfrak{m},+}\cong \mathcal{P}_{\mathcal{M}, \mathbb{R}}^{\mathfrak{m},+} \oplus \mathcal{P}_{\mathcal{M}, \mathbb{R}}^{\mathfrak{m},+}\cdot \mathbb{L}^{\mathfrak{m}} \quad \text{and} \quad \mathcal{P}_{\mathcal{M}, \mathbb{R}}^{\mathfrak{m},+}\cong \mathcal{P}_{\mathcal{M}}^{\mathfrak{a}} \otimes_{\mathbb{Q}} \mathbb{Q}\left[ (\mathbb{L}^{\mathfrak{m}})^{2} \right] . \end{equation} \end{framed} \paragraph{Motivic Galois theory.} The ring of motivic periods $\mathcal{P}_{\mathcal{M}}^{\mathfrak{m}} $ is the ring of functions on a bitorsor under the Tannaka groups $(\mathcal{G}^{\mathcal{MT}}, \mathcal{G}_{B})$.
If Grothendieck's conjecture holds, then, via the period isomorphism, there is a (left) action of the motivic Galois group $\mathcal{G}^{\mathcal{MT}}$ on periods. \\ More precisely, for each period $p$ there would exist: \begin{itemize} \item[$(i)$] well-defined conjugates: the elements in the orbit under $\mathcal{G}^{\mathcal{MT}}(\mathbb{Q})$. \item[$(ii)$] an algebraic group over $\mathbb{Q}$, $\mathcal{G}_{p}= \mathcal{G}^{\mathcal{MT}} \diagup \text{Stab}(p)$, where $\text{Stab}(p)$ is the stabilizer of $p$; $\mathcal{G}_{p}$, the Galois group of $p$, transitively permutes the conjugates. \end{itemize} \texttt{Examples}: \begin{itemize} \item[$\cdot$] For $\pi$, for instance, the Galois group corresponds to $\mathbb{G}_{m}$. Conjugates of $\pi$ are in fact $\mathbb{Q}^{\ast} \pi$, and the associated motive would be the Lefschetz motive $\mathbb{L}$, motive of $\mathbb{G}_{m}=\mathbb{P}^{1}\diagdown \lbrace0,\infty\rbrace$, as seen above. \item[$\cdot$] For $\log t$, $t>0$, $t\in\mathbb{Q} \diagdown \lbrace -1, 0, 1\rbrace$, this is a period of the Kummer motive in degree $1$:\footnote{Note the short exact sequence: $ 0 \rightarrow \mathbb{Q}(1) \rightarrow H_{1}(X, \lbrace 1,t \rbrace) \rightarrow \mathbb{Q}(0) \rightarrow 0 .$} $$K_{t}\mathrel{\mathop:}=M_{gm}(X, \lbrace 1,t \rbrace)\in \text{Ext}^{1}_{\mathcal{MT}(\mathbb{Q})}(\mathbb{Q}(0),\mathbb{Q}(1)) \text{ , where } X\mathrel{\mathop:}=\mathbb{P}^{1}\diagdown \lbrace 0, \infty \rbrace.$$ Since a basis of $H^{B}_{1}(X, \lbrace 1,t \rbrace)$ is $[\gamma_{0}]$, $[\gamma_{1,t}]$, with $\gamma_{1,t}$ the straight path from $1$ to $t$, and a basis of $H^{1}_{dR}(X, \lbrace 1,t \rbrace) $ is $[dx], \left[ \frac{dx}{x} \right] $, the period matrix is: $$ \left( \begin{array}{ll} \mathbb{Q} & 0\\ \mathbb{Q} \log(t) & 2i\pi \mathbb{Q} \\ \end{array} \right). $$ The conjugates of $\log t$ are $\mathbb{Q}^{\ast}\log t+\mathbb{Q}$, and its Galois group is $\mathbb{Q}^{\ast} \ltimes \mathbb{Q}$.
\item[$\cdot$] Similarly for zeta values $\zeta(n)$, $n$ odd in $\mathbb{N}^{\ast}\diagdown\lbrace 1 \rbrace$, which are periods of a mixed Tate motive over $\mathbb{Z}$ (cf. below): their conjugates are $\mathbb{Q}^{\ast}\zeta(n)+\mathbb{Q}$, and their Galois group is $\mathbb{Q}^{\ast} \ltimes \mathbb{Q}$. Grothendieck's conjecture implies that $\pi,\zeta(3), \zeta(5), \ldots$ are algebraically independent.\\ More precisely, $\zeta(n)$ is a period of $E_{n}\in \mathcal{MT}(\mathbb{Q})$, where: $$ 0\rightarrow \mathbb{Q}(n) \rightarrow E_{n} \rightarrow \mathbb{Q}(0) \rightarrow 0.$$ Notice that for even $n$, by Borel's result, $\text{Ext}_{\mathcal{MT}(\mathbb{Q})}^{1}(\mathbb{Q}(0),\mathbb{Q}(n))=0$, which implies $E_{n}= \mathbb{Q}(0)\oplus \mathbb{Q}(n)$, and hence $\zeta(n)\in (2i\pi)^{n}\mathbb{Q}$. \item[$\cdot$] More generally, multiple zeta values at roots of unity $\mu_{N}$ occur as periods of mixed Tate motives over $\mathbb{Z}[\xi_{N}]\left[ \frac{1}{N}\right] $, $\xi_{N}$ a primitive $N^{\text{th}}$ root of unity. The motivic Galois group associated to the algebra $\mathcal{H}^{N}$ generated by MMZV$_{\mu_{N}}$ is conjectured to be a quotient of the motivic Galois group $\mathcal{G}^{\mathcal{MT}_{N}}$, with equality for some values of $N$, for instance $N=1,2,3,4,8$, as seen below. We expect MZVs to be simple examples in the conjectural Galois theory of transcendental numbers. \end{itemize} \textsc{Remark}: By the K-theory results above, the non-zero Ext groups for $\mathcal{MT}(\mathbb{Q})$ are: $$\text{Ext}^{1}_{\mathcal{MT}(\mathbb{Q})} (\mathbb{Q}(0), \mathbb{Q}(n))\cong \left\lbrace \begin{array}{ll} \mathbb{Q}^{\ast}\otimes_{\mathbb{Z}} \mathbb{Q} \cong \oplus_{p \text{ prime} } \mathbb{Q} & \text{ if } n=1\\ \mathbb{Q} & \text{ if } n \text{ odd} >1.\\ \end{array}\right. $$ Generators of these extension groups correspond exactly to the periods $\log(p)$, $p$ prime, in degree $1$, and $\zeta(\text{odd})$ in odd degree $>1$, which are periods of $\mathcal{MT}(\mathbb{Q})$.
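The statement $\zeta(n)\in (2i\pi)^{n}\mathbb{Q}$ for even $n$ is easy to check numerically: from $\zeta(2)=\pi^{2}/6$ and $\zeta(4)=\pi^{4}/90$ one gets $\zeta(2)/(2i\pi)^{2}=-1/24$ and $\zeta(4)/(2i\pi)^{4}=1/1440$. A minimal sketch (standard library only; truncated sums, so a numerical illustration rather than a proof):

```python
import math
from fractions import Fraction

def zeta(n, terms=200_000):
    """Truncated Dirichlet series for the zeta value zeta(n)."""
    return sum(1.0 / k**n for k in range(1, terms + 1))

def tate_ratio(n):
    """zeta(n) / (2*i*pi)**n for even n; (2*i*pi)**n = (-1)**(n//2) * (2*pi)**n is real."""
    return zeta(n) / ((-1) ** (n // 2) * (2 * math.pi) ** n)

# Recover the rational numbers -1/24 and 1/1440 from the floating-point ratios.
assert Fraction(tate_ratio(2)).limit_denominator(10_000) == Fraction(-1, 24)
assert Fraction(tate_ratio(4)).limit_denominator(10_000) == Fraction(1, 1440)
```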
\section{Motivic fundamental group} \paragraph{Prounipotent completion.} Let $\Pi$ be the group freely generated by $\gamma_{0}, \ldots, \gamma_{N}$. The completed Hopf algebra $\widehat{\Pi}$ is defined by: $$\widehat{\Pi}\mathrel{\mathop:}= \varprojlim \mathbb{Q}[\Pi] \diagup I^{n} , \quad \text{ where }I\mathrel{\mathop:}= \langle \gamma-1 , \gamma\in \Pi \rangle \text{ is the augmentation ideal} .$$ Equipped with the completed coproduct $\Delta$ for which the elements of $\Pi$ are group-like, it is isomorphic to the Hopf algebra of non commutative formal series:\footnote{The inverse is well defined since the logarithm converges in $\widehat{\Pi}$; the $\exp(e_{i})$ are then group-like for $\Delta$. Notice that the Lie algebra of the group of group-like elements is formed by the primitive elements, and conversely; besides, the universal enveloping algebra of the primitive elements is the whole Hopf algebra.} $$\widehat{\Pi} \xrightarrow[\gamma_{i}\mapsto \exp(e_{i}) ]{\sim} \mathbb{Q} \langle\langle e_{0}, \ldots, e_{N}\rangle\rangle. $$ \\ The \textit{prounipotent completion} of $\Pi$ is an affine group scheme $\Pi^{un}$:\nomenclature{$\Pi^{un}$}{prounipotent completion of $\Pi$} \begin{equation}\label{eq:prounipcompletion} \boldsymbol{\Pi^{un}}(R)=\lbrace x\in \widehat{\Pi} \widehat{\otimes} R \mid \Delta x=x\otimes x\rbrace \cong \lbrace S\in R\langle\langle e_{0}, \ldots, e_{N} \rangle\rangle^{\times}\mid \Delta S=S\otimes S, \epsilon(S)=1\rbrace , \end{equation} i.e. the set of non-commutative formal series in $N+1$ generators which are group-like for the completed coproduct, for which the $e_{i}$ are primitive. \\ The group-like condition is dual to the shuffle $\shuffle$ relation between the coefficients of the series $S$\footnote{It is a straightforward verification that the relation $\Delta S= S\otimes S$ implies the shuffle $\shuffle$ relation between the coefficients of $S$.}.
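This duality is easy to test on truncations: the exponential $\exp(e_{0}+e_{1})$ is group-like (the $e_{i}$ being primitive), and its coefficient on any word $w$ is $1/|w|!$. A minimal sketch (words encoded as character strings, an ad hoc convention of ours) checking the shuffle relation $c(u)\,c(v)=\sum_{w\in u\,\shuffle\, v} c(w)$:

```python
from fractions import Fraction
from math import factorial

def shuffle(u, v):
    """All shuffles of the words u and v, with multiplicity."""
    if not u:
        return [v]
    if not v:
        return [u]
    return [u[0] + w for w in shuffle(u[1:], v)] + \
           [v[0] + w for w in shuffle(u, v[1:])]

# Coefficients of the group-like series S = exp(e_0 + e_1): c(w) = 1/|w|!
def c(w):
    return Fraction(1, factorial(len(w)))

# group-like (Delta S = S tensor S)  <=>  shuffle relations on coefficients
for u, v in [("0", "1"), ("01", "0"), ("01", "10")]:
    assert c(u) * c(v) == sum(c(w) for w in shuffle(u, v))
```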
Its affine ring of regular functions is the (filtered, connected) Hopf algebra for the shuffle product and the deconcatenation coproduct: \begin{equation} \boldsymbol{\mathcal{O}(\Pi^{un})}= \varinjlim \left( \mathbb{Q}[\Pi] \diagup I^{n+1} \right) ^{\vee} \cong \mathbb{Q} \left\langle e^{0}, \ldots, e^{N} \right\rangle . \end{equation} $$\boldsymbol{\mathcal{O}(\Pi^{\mathfrak{m}}(X_{N},x,y))}\in\mathcal{MT}(k_{N}).$$ \paragraph{Motivic Fundamental pro-unipotent groupoid.}\footnote{ \say{\textit{Esquisse d'un programme}}$\cite{Gr}$, by Grothendieck, vaguely suggests studying the action of the absolute Galois group of the rational numbers $ \text{Gal}(\overline{\mathbb{Q}} \diagup \mathbb{Q} ) $ on the \'{e}tale fundamental group $\pi_{1}^{et}(\mathcal{M}_{g,n})$, where $\mathcal{M}_{g,n}$ is the moduli space of curves of genus $g$ with $n$ ordered marked points. In the case of $\mathcal{M}_{0,4}= \mathbb{P}^{1}\diagdown \lbrace 0, 1, \infty \rbrace$, Deligne proposed to look instead at the (analogous) pro-unipotent fundamental group $\pi_{1}^{un}(\mathbb{P}^{1}\diagdown \lbrace 0, 1, \infty \rbrace)$. This also motivates the study of multiple zeta values, which arose as periods of this fundamental group. } The previous construction can be applied to the fundamental group $\pi_{1}(X,x)$, resp. the fundamental groupoid $\pi_{1}(X,x,y)$, if assumed free, of $X$ an algebraic variety over $\mathbb{Q}$, with base point $x$, resp. base points $x,y$, rational points of $X$; the groupoid $\pi_{1}(X,x,y)$ is a bitorsor formed by the homotopy classes of paths from $x$ to $y$. \\ From now on, let us turn to the case $X_{N}\mathrel{\mathop:}=\mathbb{P}^{1}\diagdown \lbrace 0, \infty, \mu_{N} \rbrace$. There, the group $\pi_{1}(X_{N}, x)$ is freely generated by $\gamma_{0}$ and $(\gamma_{\eta})_{\eta\in\mu_{N}}$, the loops around $0$ resp.
$\eta\in\mu_{N}$.\footnote{Beware, since $\pi_{1}(X,x,y)$ is not a group, we have to pass first to the dual in the previous construction: $$ \pi^{un}_{1}(X,x,y)\mathrel{\mathop:}= \text{Spec} \left( \varinjlim \left( \mathbb{Q}[\pi_{1}] \diagup I^{n+1} \right) ^{\vee} \right) .$$}\\ Chen's theorem implies here that we have a perfect pairing: \begin{equation}\label{eq:chenpairing} \mathbb{C}[\pi_{1} (X_{N},x,y)] \diagup I^{n+1} \otimes \mathbb{C}\langle \omega_{0}, (\omega_{\eta})_{\eta\in\mu_{N}} \rangle_{\leq n} \rightarrow \mathbb{C} . \end{equation} In order to define the motivic $\pi^{un}_{1}$, let us introduce (cf. $\cite{Go2}$, Theorem $4.1$): \begin{equation}\label{eq:y(n)} Y^{(n)}\mathrel{\mathop:}=\cup_{i} Y_{i}, \text{ where } \quad \begin{array}{ll} Y_{0}\mathrel{\mathop:}= \lbrace x\rbrace \times X^{n-1}& \\ Y_{i}\mathrel{\mathop:}= X^{i-1}\times \Delta \times X^{n-i-1}, & \Delta \subset X \times X \text{ the diagonal} \\ Y_{n}\mathrel{\mathop:}= X^{n-1} \times \lbrace y\rbrace & \end{array}. \end{equation} Then, by Beilinson theorem ($\cite{Go2}$, Theorem $4.1$), coming from $\gamma \mapsto [\gamma(\Delta_{n})]$: $$H_{k}(X^{n},Y^{(n)}) \cong \left\lbrace \begin{array}{ll} \mathbb{Q}[\pi_{1}(X,x,y)] \diagup I^{n+1}& \text{ for } k=n \\ 0 & \text{ for } k<n \end{array} \right. .$$ The left side defines a mixed Tate motive and: \begin{equation} \label{eq:opiunvarinjlim} \mathcal{O}(\pi_{1}^{un}(X,x,y)) \xrightarrow{\sim} \varinjlim_{n} H^{n}(X^{n}, Y^{(n)}). 
\end{equation} By $(\ref{eq:opiunvarinjlim})$, $\mathcal{O}\left( \pi_{1}^{un}(X,x,y)\right)$ defines an Ind object \footnote{Ind objects of a category $\mathcal{C}$ are filtered inductive limits of objects in $\mathcal{C}$.} in the category of Mixed Tate Motives over $k$, since $Y_{I}^{(n)}\mathrel{\mathop:}=\cap Y_{i}^{(n)}$ is a complement of hyperplanes, hence of Tate type: \begin{framed} \begin{equation}\label{eq:pi1unTate0} \mathcal{O}\left( \pi_{1}^{un}(\mathbb{P}^{1}\diagdown \lbrace 0, \infty, \mu_{N} \rbrace,x,y)\right) \in \text{Ind } \mathcal{MT}(k). \end{equation} \end{framed} We denote it $\boldsymbol{\mathcal{O}\left( \pi_{1}^{\mathfrak{m}}(X,x,y)\right) }$, and $\mathcal{O}\left( \pi_{1}^{\omega}(X,x,y)\right)$, $\mathcal{O}\left( \pi_{1}^{dR}(X,x,y)\right)$, $\mathcal{O}\left( \pi_{1}^{B}(X,x,y)\right)$ its realizations, resp. $\boldsymbol{\pi_{1}^{\mathfrak{m}}(X)}$ for the corresponding $\mathcal{MT}(k)$-groupoid scheme, called the \textit{\textbf{motivic fundamental groupoid}}, with the composition of paths. \\ \\ \textsc{Remark: } The pairing $(\ref{eq:chenpairing})$ can be thought of in terms of a perfect pairing between homology and de Rham cohomology, since (Wojtkowiak $\cite{Wo2}$): $$H_{dR}^{n}(X^{n},Y^{(n)}) \cong k_{N}\langle \omega_{0}, \ldots, \omega_{N} \rangle_{\leq n}.$$ \\ The construction of the prounipotent completion, and then of the motivic fundamental groupoid, still works in the case of \textit{tangential base points}, cf. $\cite{DG}$, $\S 3$\footnote{I.e. here non-zero tangent vectors at a point of $\lbrace 0, \mu_{N}, \infty\rbrace $ are seen as \say{base points at infinity}. Deligne explained how to replace ordinary base points with tangential base points.}. Let us denote by $\lambda_{N}$ the straight path between $0$ and $\xi_{N}$, a primitive root of unity.
In the following, we will particularly consider the tangential base points $\overrightarrow{0\xi_{N}}\mathrel{\mathop:}=(\overrightarrow{1}_{0}, \overrightarrow{-1}_{\xi_{N}})$, defined as $(\lambda_{N}'(0), -\lambda_{N}'(1))$; similarly, for each $x,y\in \mu_{N}\cup \lbrace 0, \infty\rbrace $, with $_{x}\lambda_{y}$ the straight path between $x$ and $y$ in $\mathbb{P}^{1} (\mathbb{C}) \diagdown \lbrace 0, \mu_{N}, \infty \rbrace$, we associate the tangential base points $\overrightarrow{xy}\mathrel{\mathop:}= (_{x}\lambda_{y}'(0), - _{x}\lambda_{y}'(1))$\footnote{In order that the path does not pass through $0$, we have to exclude the case where $x=-y$ if $N$ is even.}. Since the motivic torsor of paths associated to such tangential base points depends only on $x,y$ (cf. $\cite{DG}$, $\S 5$), we will denote it $_{x}\Pi^{\mathfrak{m}}_{y}$. This leads to a groupoid structure via $_{x}\Pi^{\mathfrak{m}}_{y} \times _{y}\Pi^{\mathfrak{m}}_{z} \rightarrow _{x}\Pi^{\mathfrak{m}}_{z}$: cf. Figure $\ref{fig:Pi}$ and $\cite{DG}$.\\ In fact, by Goncharov's theorem, in the case of these tangential base points, the corresponding motivic torsor of paths has good reduction outside $N$, and (cf. $\cite{DG}, \S 4.11$): \begin{equation}\label{eq:pi1unTate} \mathcal{O}\left( _{x}\Pi^{\mathfrak{m}}_{y} \right) \in \text{ Ind } \mathcal{MT}_{\Gamma_{N}} \subset \text{ Ind } \mathcal{MT}\left( \mathcal{O}_{N}\left[ \frac{1}{N} \right] \right) . \end{equation} The case of ordinary base points, lying in $\text{ Ind } \mathcal{MT}(k)$, has no such good reduction.\\ In summary, from now on, we consider, for $x,y\in \mu_{N}\cup\lbrace 0\rbrace$\footnote{$_{x}\Pi^{\mathfrak{m}}_{y}$ is a bitorsor under $(_{x}\Pi^{\mathfrak{m}}_{x}, _{y}\Pi^{\mathfrak{m}}_{y})$.}:\nomenclature{$_{x}\Pi^{\mathfrak{m}}_{y}$}{motivic bitorsor of paths, and $_{x}\Pi_{y}$, $_{x}\Pi_{y}^{dR}$, $_{x}\Pi_{y}^{B}$, its $\omega$, resp. de Rham resp.
Betti realizations} \begin{framed} \textit{The motivic bitorsors of path} $_{x}\Pi^{\mathfrak{m}}_{y}\mathrel{\mathop:}=\pi_{1}^{\mathfrak{m}} (X_{N}, \overrightarrow{xy})$ on $X_{N}\mathrel{\mathop:}=\mathbb{P}^{1} -\left\{0,\mu_{N},\infty\right\}$ with tangential basepoints given by $\overrightarrow{xy}\mathrel{\mathop:}= (\lambda'(0), -\lambda'(1))$ where $\lambda$ is the straight path from $x$ to $y$, $x\neq -y$.\\ \end{framed} Let us denote $_{x}\Pi_{y}\mathrel{\mathop:}=_{x}\Pi^{\omega}_{y}$, resp. $_{x}\Pi_{y}^{dR}$, $_{x}\Pi_{y}^{B}$ its $\omega$, resp. de Rham resp. Betti realizations. In particular, Chen's theorem implies that we have an isomorphism: $$_{0}\Pi_{1}^{B}\otimes\mathbb{C}\xrightarrow{\sim} {} _{0}\Pi_{1}\otimes \mathbb{C}.$$\\ Therefore, the motivic fundamental group above boils down to: \begin{itemize} \item[$(i)$] The affine group schemes $_{x}\Pi_{y}^{B}$, $x,y\in\mu_{N}\cup \lbrace0, \infty\rbrace $, with a groupoid structure. The Betti fundamental groupoid is the pro-unipotent completion of the ordinary topological fundamental groupoid, i.e. corresponds to $\pi_{1}^{un}(X,x,y)$ above. \item[$(ii)$] $\Pi(X)=\pi^{\omega}_{1}(X)$, the affine group scheme over $\mathbb{Q}$. It does not depend on $x,y$ since the existence of a canonical de Rham path between x and y implies a canonical isomorphism $\Pi(X)\cong _{x}\Pi(X)_{y}$; however, the action of the motivic Galois group $\mathcal{G}$ is sensitive to the tangential base points $x,y$. \item[$(iii)$] a canonical comparison isomorphism of schemes over $\mathbb{C}$, $\text{comp}_{B,\omega}$. 
\end{itemize} \begin{figure}[H] \centering \includegraphics[]{groupfond.pdf} \caption{Part of the Fundamental groupoid $\Pi$.\\ This picture, however, does not accurately represent the tangential base points.} \label{fig:Pi} \end{figure} Moreover, the dihedral group\footnote{Symmetry group of a regular polygon with $N$ sides.} $Di_{N}= \mathbb{Z}\diagup 2 \mathbb{Z} \ltimes \mu_{N}$\nomenclature{$Di_{N}$}{dihedral group of order $2N$} acts on $X_{N}=\mathbb{P}^{1}\diagdown \lbrace 0, \mu_{N},\infty\rbrace$\nomenclature{$X_{N}$}{defined as $\mathbb{P}^{1}\diagdown \lbrace 0, \mu_{N},\infty\rbrace$}: the group with two elements corresponding to the action $x \mapsto x^{-1}$ and the cyclic group $\mu_{N}$ acting by $x\mapsto \eta x$. Notice that for $N=1,2,4$, the group of projective transformations $X_{N}\rightarrow X_{N}$ is larger than $Di_{N}$, because of special symmetries, detailed in $A.3$. \footnote{Each homography $\phi$ defines isomorphisms: $$\begin{array}{lll} _{a}\Pi_{b} & \xrightarrow[\sim]{\phi}& _{\phi(a)}\Pi_{\phi(b)} \\ f(e_{0}, e_{1}, \ldots, e_{n}) &\mapsto &f(e_{\phi(0)}, e_{\phi(1)}, \ldots, e_{\phi(n)}) \end{array} \text{ and, passing to the dual, } \mathcal{O}(_{\phi(a)}\Pi_{\phi(b)}) \xrightarrow[\sim]{\phi^{\vee}} \mathcal{O}( _{a}\Pi_{b}) .$$} \\ The dihedral group $Di_{N}$ then acts on the motivic fundamental groupoid $\pi^{\mathfrak{m}}_{1}(X,x,y)$, $x,y \in \lbrace 0 \rbrace \cup \mu_{N}$, by permuting the tangential base points (and its action is respected by the motivic Galois group): $$\text{For } \quad \sigma\in Di_{N}, \quad _{x}\Pi_{y} \rightarrow _{\sigma.x}\Pi_{\sigma.y} $$ The group scheme $\mathcal{V}$ of automorphisms of these groupoids $_{x}\Pi_{y}$, respecting their structure, i.e.:
the compositions $_{x}\Pi_{y}\times _{y}\Pi_{z} \rightarrow _{x}\Pi_{z}$, \item[$\cdot$] $\mu_{N}$-equivariance as above, \item[$\cdot$] inertia: the action fixes $\exp(e_{x})\in _{x}\Pi_{x}(\mathbb{Q})$, \end{itemize} is isomorphic to (cf. $\cite{DG}$, $\S 5$ for the detailed version): \begin{equation}\label{eq:gpaut} \begin{array}{ll} \mathcal{V}\cong {}_{0}\Pi_{1} \\ a\mapsto a\cdot {}_{0}1_{1} \end{array}. \end{equation} In particular, the \textit{Ihara action} defined in $(\ref{eq:iharaaction})$ corresponds via this identification to the composition law for these automorphisms, and can then be computed explicitly. Its dual is the combinatorial coaction $\Delta$ used throughout this work.\\ \\ As a consequence of these equivariances, we can restrict our attention to: \begin{framed} $$_{0}\Pi^{\mathfrak{m}}_{ \xi_{N}}\mathrel{\mathop:}=\pi_{1}^{\mathfrak{m}}(X_{N}, \overrightarrow{0\xi_{N}} ) \text{ or, equivalently, to } {}_{0}\Pi^{\mathfrak{m}}_{1}.$$ \end{framed} \noindent Keep in mind, for the following, that $_{0}\Pi_{1}$ is the functor:\nomenclature{$R\langle X \rangle$ resp. $R\langle\langle X \rangle\rangle$}{the ring of non commutative polynomials, resp. of non commutative formal series in elements of X} \begin{framed} \begin{equation}\label{eq:pi}_{0}\Pi_{1}: R \text{ a } \mathbb{Q}-\text{algebra } \mapsto \left\{S\in R\langle\langle e_{0}, (e_{\eta})_{\eta\in\mu_{N}}\rangle\rangle^{\times} | \Delta S= S\otimes S \text{ and } \epsilon(S)= 1 \right\} ,\end{equation} whose affine ring of regular functions is the graded (Hopf) algebra for the shuffle product: \begin{equation}\label{eq:opi} \mathcal{O}(_{0}\Pi_{1})\cong \mathbb{Q} \left\langle e^{0}, (e^{\eta})_{\eta\in\mu_{N}} \right\rangle. \end{equation} \end{framed} \noindent The Lie algebra of $_{0}\Pi_{1}(R)$ is the set of primitive series ($\Delta S= 1 \otimes S+ S\otimes 1$).
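The grouplike condition in $(\ref{eq:pi})$ is dual to the shuffle product on $(\ref{eq:opi})$: the coefficients $\langle S,w\rangle$ of a grouplike series $S$ satisfy the shuffle relations. A minimal sketch of this standard dictionary:

```latex
% Grouplike series <=> shuffle relations among the coefficients <S,w>:
$$ \Delta S= S\otimes S
   \quad\Longleftrightarrow\quad
   \langle S,u\rangle\,\langle S,v\rangle
   =\sum_{w\in u\,\shuffle\, v}\langle S,w\rangle
   \quad\text{for all words } u,v. $$
% Smallest nontrivial instance, with the letters e_0 and e_eta:
$$ \langle S,e_{0}\rangle\,\langle S,e_{\eta}\rangle
   =\langle S,e_{0}e_{\eta}\rangle+\langle S,e_{\eta}e_{0}\rangle . $$
```

Applied to the series $\Phi_{KZ_{N}}$ below, this is precisely the shuffle product formula for the coefficients $\zeta_{\shuffle}(w)$.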
\\ \\ Let us denote $dch_{0,1}^{B}={}_{0}1^{B}_{1}$,\nomenclature{$dch_{0,1}^{B}$}{the image of the straight path} the image of the straight path (\textit{droit chemin}) in $_{0}\Pi_{1}^{B}(\mathbb{Q})$, and $dch_{0,1}^{dR}$ or $\Phi_{KZ_{N}}$ the corresponding element in $_{0}\Pi_{1}(\mathbb{C})$ via the Betti$-$de Rham comparison isomorphism: \begin{equation}\label{eq:kz} \boldsymbol{\Phi_{KZ_{N}}}\mathrel{\mathop:}= dch_{0,1}^{dR}\mathrel{\mathop:}= \text{comp}_{dR,B}(_{0}1^{B}_{1})= \sum_{w\in \lbrace e_{0}, (e_{\eta})_{\eta\in\mu_{N}} \rbrace^{\times}} \zeta_{\shuffle}(w) w \quad \in \mathbb{C} \langle\langle e_{0}, (e_{\eta})_{\eta\in\mu_{N}} \rangle\rangle , \end{equation} where the correspondence between MZV and words in $e_{0},e_{\eta}$ is similar to the iterated integral representation $(\ref{eq:reprinteg})$, with the roots of unity $\eta_{i}$. It is known as the \textit{Drinfeld associator} and also arises from the monodromy of the famous Knizhnik$-$Zamolodchikov differential equation.\footnote{Indeed, for $N=1$, the Drinfeld associator is equal to $G_{1}^{-1}G_{0}$, where $G_{0},G_{1}$ are the solutions, with prescribed asymptotic behavior at $0$ and $1$, of the Knizhnik$-$Zamolodchikov differential equation:$$ \frac{d}{dz}G(z)= \left(\frac{e_{0}}{z}+ \frac{e_{1}}{1-z} \right) G(z) .$$} \paragraph{Category generated by $\boldsymbol{\pi_{1}^{\mathfrak{m}}}$. } Denote by:\nomenclature{$\mathcal{MT}'_{N}$}{the full Tannakian subcategory of $\mathcal{MT}_{N}$ generated by the fundamental groupoid} \begin{framed} $\boldsymbol{\mathcal{MT}'_{N}}$ the full Tannakian subcategory of $\mathcal{MT}_{N}$ generated by the fundamental groupoid, \end{framed} (i.e.
generated by $\mathcal{O}(\pi_{1}^{\mathfrak{m}} (X_{N},\overrightarrow{01}))$ by sub-objects, quotients, $\otimes$, $\oplus$, duals) and let:\nomenclature{$\mathcal{G}^{N}$}{the motivic Galois group of $\boldsymbol{\mathcal{MT}'_{N}}$ }\nomenclature{$\mathcal{A}^{N}$}{the fundamental Hopf algebra of $\boldsymbol{\mathcal{MT}'_{N}}$ }\nomenclature{$\mathcal{L}^{N}$}{the motivic coalgebra associated to $\boldsymbol{\mathcal{MT}'_{N}}$ } \begin{itemize} \item[$\cdot$] $\mathcal{G}^{N}=\mathbb{G}_{m} \ltimes \mathcal{U}^{N} $ its motivic \textit{Galois group} defined over $\mathbb{Q}$, \item[$\cdot$] $\mathcal{A}^{N}=\mathcal{O}(\mathcal{U}^{N})$ its \textit{fundamental Hopf algebra}, \item[$\cdot$] $\mathcal{L}^{N}\mathrel{\mathop:}= \mathcal{A}^{N}_{>0} / \mathcal{A}^{N}_{>0} \cdot\mathcal{A}^{N}_{>0}$ the Lie \textit{coalgebra of indecomposable elements}. \end{itemize} \texttt{Nota Bene}: $\mathcal{U}^{N}$ is the quotient of $\mathcal{U}^{\mathcal{MT}} $ by the kernel of the action on $_{0}\Pi_{1}$: i.e. $\mathcal{U}^{N}$ acts faithfully on $_{0}\Pi_{1}$.\nomenclature{$\mathcal{U}^{N}$}{the motivic prounipotent group associated to $\boldsymbol{\mathcal{MT}'_{N}}$ }\\ \\ \textsc{Remark:} In the case of $N=1$ (by F. Brown in \cite{Br2}), or $N=2,3,4,\mlq 6 \mrq,8 $ (by P. Deligne, in \cite{De}, proven from a dual point of view in Chapter $5$), these categories $\mathcal{MT}'_{N}$ and $\mathcal{MT}(\mathcal{O}_{N}\left[ \frac{1}{N} \right] )$ are equal. More precisely, for $\xi_{N}\in\mu_{N}$ a fixed primitive root, the following motivic torsors of paths are sufficient to generate the category: \begin{description} \item[$\boldsymbol{N=2,3,4}$:] $\Pi^{\mathfrak{m}} (\mathbb{P}^{1} \diagdown \lbrace 0, 1, \infty \rbrace, \overrightarrow{0 \xi_{N}})$ generates $\mathcal{MT}(\mathcal{O}_{N}\left[ \frac{1}{N} \right] )$.
\item[$\boldsymbol{N=\mlq 6\mrq}$:]\footnote{The quotation marks around $6$ underline that we consider the unramified category in this case.} $\Pi^{\mathfrak{m}} (\mathbb{P}^{1} \diagdown \lbrace 0, 1, \infty \rbrace, \overrightarrow{0 \xi_{6}})$ generates $\mathcal{MT}(\mathcal{O}_{6})$. \item[$\boldsymbol{N=8}$:] $\Pi^{\mathfrak{m}} (\mathbb{P}^{1} \diagdown \lbrace 0, \pm 1, \infty \rbrace, \overrightarrow{0 \xi_{8}})$ generates $\mathcal{MT}(\mathcal{O}_{8}\left[ \frac{1}{2}\right] )$.\\ \end{description} However, if $N$ has a prime factor which is not inert, the motivic fundamental group is in the proper subcategory $\mathcal{MT}_{\Gamma_{N}}$ and hence cannot generate $\mathcal{MT}(\mathcal{O}_{N}\left[ \frac{1}{N}\right] )$. \section{Motivic Iterated Integrals} Taking from now on $\mathcal{M}=\mathcal{MT}'_{N}$, $M=\mathcal{O}(\pi^{\mathfrak{m}}_{1}(\mathbb{P}^{1}-\lbrace 0,\mu_{N},\infty\rbrace ,\overrightarrow{xy} ))$, the definition of motivic periods $(\ref{eq:mper})$ leads to motivic iterated integrals relative to $\mu_{N}$. Indeed:\nomenclature{$I^{\mathfrak{m}}(x;w;y)$}{motivic iterated integral} \begin{framed}\label{mii} A \textit{\textbf{motivic iterated integral}} is the triplet $I^{\mathfrak{m}}(x;w;y)\mathrel{\mathop:}= \left[\mathcal{O} \left( \Pi^{\mathfrak{m}} \left( X_{N}, \overrightarrow{xy}\right) \right) ,w,_{x}dch_{y}^{B}\right]^{\mathfrak{m}}$ where $w\in \omega(M)$, $_{x}dch_{y}^{B}$ is the image of the straight path from $x$ to $y$ in $\omega_{B}(M)^{\vee}$ and whose period is: \begin{equation}\label{eq:peri} \text{per}(I^{\mathfrak{m}}(x;w;y))= I(x;w;y) =\int_{x}^{y}w= \langle \text{comp}_{B,dR}(w\otimes 1),_{x}dch_{y}^{B} \rangle \in\mathbb{C}.
\end{equation} \end{framed} \noindent \\ \\ \textsc{Remarks: } \begin{itemize} \item[$\cdot$] There, $w\in \omega(\mathcal{O}(_{x}\Pi^{\mathfrak{m}}_{y}))\cong \mathbb{Q} \left\langle \omega_{0}, (\omega_{\eta})_{\eta\in\mu_{N}} \right\rangle $ where $\omega_{\eta}\mathrel{\mathop:}= \frac{dt}{t-\eta}$. Similarly to $(\ref{eq:iterinteg})$, let: \begin{equation}\label{eq:iterintegw} I^{\mathfrak{m}} (a_{0}; a_{1}, \ldots, a_{n}; a_{n+1})\mathrel{\mathop:}= I^{\mathfrak{m}} (a_{0}; \omega_{\boldsymbol{a}}; a_{n+1}), \quad \text{ where } \omega_{\boldsymbol{a}}\mathrel{\mathop:}=\omega_{a_{1}} \cdots \omega_{a_{n}}, \text{ for } a_{i}\in \lbrace 0\rbrace \cup \mu_{N} \end{equation} \item[$\cdot$] The Betti realization functor $\omega_{B}$ depends on the embedding $\sigma: k \hookrightarrow \mathbb{C}$. Here, by choosing a root of unity, we fixed the embedding $\sigma$. \end{itemize} For $\mathcal{M}$ a category of Mixed Tate Motives among $\mathcal{MT}_{N}, \mathcal{MT}_{\Gamma_{N}}$ resp. $\mathcal{MT}'_{N}$, let us introduce the graded $\mathcal{A}^{\mathcal{M}}$-comodule, with trivial coaction on $\mathbb{L}^{\mathfrak{m}}$ (degree $1$): \begin{equation}\label{eq:hn} \boldsymbol{\mathcal{H}^{\mathcal{M}}} \mathrel{\mathop:}= \mathcal{A}^{\mathcal{M}} \otimes \left\{ \begin{array}{ll} \mathbb{Q}\left[ (\mathbb{L}^{\mathfrak{m}})^{2} \right] & \text{ for } N=1,2 \\ \mathbb{Q}\left[ \mathbb{L}^{\mathfrak{m}} \right] & \text{ for } N>2 . \end{array} \right. \subset \mathcal{O}(\mathcal{G}^{\mathcal{M}})= \mathcal{A}^{\mathcal{M}}\otimes \mathbb{Q}[\mathbb{L}^{\mathfrak{m}}, (\mathbb{L}^{\mathfrak{m}})^{-1}]. \end{equation} \texttt{Nota Bene}: For $N>2$, it corresponds to the geometric motivic periods, $\mathcal{P}_{\mathcal{M}}^{\mathfrak{m},+}$ whereas for $N=1,2$, it is the subset $\mathcal{P}_{\mathcal{M},\mathbb{R}}^{\mathfrak{m},+}$ invariant by the real Frobenius; cf.
$(\ref{eq:periodgeom}), (\ref{eq:periodgeomr})$.\\ For $\mathcal{M}=\mathcal{MT}'_{N}$, we will simply denote it $\mathcal{H}^{N}\mathrel{\mathop:}=\mathcal{H}^{\mathcal{MT}'_{N}}$. Moreover: $$\mathcal{H}^{N}\subset \mathcal{H}^{\mathcal{MT}_{\Gamma_{N}}} \subset \mathcal{H}^{\mathcal{MT}_{N}} .$$ \\ Cyclotomic iterated integrals of weight $n$ are periods of $\pi^{un}_{1}$ (of $X^{n}$ relative to $Y^{(n)}$): \footnote{Notations of $(\ref{eq:y(n)})$. Cf. also $(\ref{eq:pi1unTate})$. The case of tangential base points requires blowing up to get rid of singularities. Most interesting periods are often those whose integration domain meets the singularities of the differential form.} \begin{framed} Any motivic iterated integral $I^{\mathfrak{m}}$ relative to $\mu_{N}$ is an element of $\mathcal{H}^{N}$, which is the graded $\mathcal{A}^{N}$-comodule generated by these motivic iterated integrals relative to $\mu_{N}$. \end{framed} In a similar vein, define:\nomenclature{$ I^{\mathfrak{a}}$ resp. $I^{\mathfrak{l}}$}{versions of motivic iterated integrals in $\mathcal{A}$ resp. in $\mathcal{L}$} \begin{itemize} \item[$\cdot \boldsymbol{I^{\omega}}$: ] A motivic period of type $(\omega,\omega)$, in $\mathcal{O}(\mathcal{G})$: \begin{equation} \label{eq:intitdr} I^{\omega}(x;w;y)=\left[\mathcal{O} \left( _{x}\Pi^{\mathfrak{m}}_{y}\right) ,w,_{x}1^{\omega}_{y} \right]^{\omega}, \quad \text{ where } \left\lbrace \begin{array}{l} w\in\omega(\mathcal{O} \left( _{x}\Pi^{\mathfrak{m}}_{y}\right) )\\ _{x}1^{\omega}_{y}\in \omega(M)^{\vee}=\mathcal{O}\left( _{x}\Pi_{y}\right)^{\vee} \end{array}\right. . \end{equation} where $_{x}1^{\omega}_{y}\in \mathcal{O}\left( _{x}\Pi_{y}\right)^{\vee}$ is defined by the augmentation map $\epsilon:\mathbb{Q}\langle e^{0}, (e^{\eta})_{\eta\in\mu_{N}}\rangle \rightarrow \mathbb{Q}$, corresponding to the unit element in $_{x}\Pi_{y}$.
This defines a function on $\mathcal{G}=\text{Aut}^{\otimes}(\omega)$, given on the rational points by $g\in\mathcal{G}(\mathbb{Q}) \mapsto \langle g\omega, \epsilon\rangle \in \mathbb{Q}$. \item[$\cdot \boldsymbol{I^{\mathfrak{a}}}$: ] the image of $I^{\omega}$ in $\mathcal{A}= \mathcal{O}(\mathcal{U})$, by the projection $\mathcal{O}(\mathcal{G})\twoheadrightarrow \mathcal{O}(\mathcal{U})$. These \textit{unipotent} motivic periods are the objects studied by Goncharov, which he called motivic iterated integrals; for instance, $\zeta^{\mathfrak{a}}(2)=0$. \item[$\cdot \boldsymbol{I^{\mathfrak{l}}}$: ] the image of $I^{\mathfrak{a}}$ in the coalgebra of indecomposables $\mathcal{L}\mathrel{\mathop:}=\mathcal{A}_{>0} \diagup \mathcal{A}_{>0}\cdot \mathcal{A}_{>0}$.\footnote{Well-defined since $\mathcal{A}= \mathcal{O} (\mathcal{U})$ is graded with positive degrees.} \end{itemize} \textsc{Remark:} Equivalently (cf. $\cite{Br2}$), one can define $\mathcal{H}^{N}$ as $\mathcal{O}(_{0}\Pi_{1}) \diagup J$, with: \begin{itemize} \item[$\cdot$] $J\subset \mathcal{O}(_{0}\Pi_{1})$ the largest graded ideal $\subset \ker \text{per}$ stable under the coaction $\Delta^{c}$, corresponding to the ideal of motivic relations, i.e.: $$\Delta^{c}(J)\subset \mathcal{A}\otimes J + J\mathcal{A} \otimes \mathcal{O}(_{0}\Pi_{1}).$$ \item[$\cdot$] the $\shuffle$-homomorphism: $\text{per}: \mathcal{O}(_{0}\Pi_{1}) \rightarrow \mathbb{C} \text{ , } e^{a_{1}} \cdots e^{a_{n}} \mapsto I(0; a_{1}, \ldots, a_{n} ; 1)\mathrel{\mathop:}= \int_{dch} \omega.$ \\ \end{itemize} Once the motivic iterated integrals are defined, motivic cyclotomic multiple zeta values follow, as usual (cf.
$\ref{eq:iterinteg}$): \begin{center} \textit{Motivic multiple zeta values} relative to $\mu_{N}$ are defined by, for $\epsilon_{i}\in\mu_{N}, k\geq 0, n_{i}>0$ \begin{equation}\label{mmzv} \boldsymbol{\zeta_{k}^{\mathfrak{m}} \left({ n_{1}, \ldots , n_{p} \atop \epsilon_{1} , \ldots ,\epsilon_{p} }\right) }\mathrel{\mathop:}= (-1)^{p} I^{\mathfrak{m}} \left(0;\boldsymbol{0}^{k}, (\epsilon_{1}\cdots \epsilon_{p})^{-1}, \boldsymbol{0}^{n_{1}-1} ,\cdots, (\epsilon_{i}\cdots \epsilon_{p})^{-1}, \boldsymbol{0}^{n_{i}-1} ,\cdots, \epsilon_{p}^{-1}, \boldsymbol{0}^{n_{p}-1} ;1 \right) \end{equation} \end{center} An \textit{admissible} (motivic) MZV is such that $\left( n_{p}, \epsilon_{p}\right) \neq \left( 1, 1 \right) $; otherwise, they are defined by shuffle regularization, cf. ($\ref{eq:shufflereg}$) below; the versions $\boldsymbol{\zeta_{k}^{\mathfrak{a}}} (\cdots)$, or $\boldsymbol{\zeta_{k}^{\mathfrak{l}}} (\cdots)$ are defined similarly, from $I^{\mathfrak{a}}$ resp. $I^{\mathfrak{l}}$ above. The roots of unity in the iterated integral will often be denoted by $\eta_{i}\mathrel{\mathop:}= (\epsilon_{i}\cdots \epsilon_{p})^{-1}$.\\ \\ From $(\ref{eq:projpiam})$, there is a surjective homomorphism called the \textbf{\textit{period map}}, conjectured to be an isomorphism: \begin{equation}\label{eq:period}\text{per}:\mathcal{H} \rightarrow \mathcal{Z} \text{ , } \zeta^{\mathfrak{m}} \left(n_{1}, \ldots , n_{p} \atop \epsilon_{1} , \ldots ,\epsilon_{p} \right)\mapsto \zeta\left(n_{1}, \ldots , n_{p} \atop \epsilon_{1} , \ldots ,\epsilon_{p} \right).
\end{equation} \texttt{Nota Bene:} Each identity between motivic cyclotomic multiple zeta values is then true for cyclotomic multiple zeta values and in particular each result about a basis of motivic MZV implies the corresponding result about a generating family of MZV by application of the period map.\\ Conversely, we can sometimes \textit{lift} an identity between MZV to an identity between motivic MZV, via the coaction (as in $\cite{Br2}$, Theorem $3.3$); this is discussed below, and illustrated throughout this work in different examples or counterexamples, as in Lemma $\ref{lemmcoeff}$. The same holds in the case of motivic Euler sums ($N=2$). We will see (Theorem $2.4.4$) that for other roots of unity there are several rational coefficients which appear at each step (of the coaction calculus) and prevent us from concluding by identification.\\ \paragraph{Properties.}\label{propii} Motivic iterated integrals satisfy the following properties:\nomenclature{$\mathfrak{S}_{p}$}{set of permutations of $\lbrace 1, \ldots, p\rbrace$.} \begin{itemize} \item[(i)] $I^{\mathfrak{m}}(a_{0}; a_{1})=1$. \item[(ii)] $I^{\mathfrak{m}}(a_{0}; a_{1}, \ldots, a_{n}; a_{n+1})=0$ if $n\geq 1$ and $a_{0}=a_{n+1}$. \item[(iii)] Shuffle product:\footnote{The product rule for iterated integrals in general is: $$ \int_{\gamma} \phi_{1} \cdots \phi_{r} \cdot \int_{\gamma} \phi_{r+1} \cdots \phi_{r+s} = \sum_{\sigma\in Sh_{r,s}} \int_{\gamma} \phi_{\sigma^{-1}(1)} \cdots \phi_{\sigma^{-1}(r+s)} , $$ where $Sh_{r,s}\subset \mathfrak{S}_{r+s}$ is the subset of permutations which respect the order of $\lbrace 1 , \ldots, r\rbrace $ and $\lbrace r+1 , \ldots, r+s\rbrace$.
Here, to define the non-convergent case, $(iii)$ is sufficient, paired with the other rules.} \begin{multline}\label{eq:shufflereg} \zeta_{k}^{\mathfrak{m}} \left( {n_{1}, \ldots , n_{p} \atop \epsilon_{1}, \ldots ,\epsilon_{p} }\right)= \\ (-1)^{k}\sum_{i_{1}+ \cdots + i_{p}=k} \binom {n_{1}+i_{1}-1} {i_{1}} \cdots \binom {n_{p}+i_{p}-1} {i_{p}} \zeta^{\mathfrak{m}} \left( {n_{1}+i_{1}, \ldots , n_{p}+i_{p} \atop \epsilon_{1}, \ldots ,\epsilon_{p} }\right). \end{multline} \item[(iv)] Path composition: $$ \forall x\in \mu_{N} \cup \left\{0\right\} , I^{\mathfrak{m}}(a_{0}; a_{1}, \ldots, a_{n}; a_{n+1})=\sum_{i=0}^{n} I^{\mathfrak{m}}(a_{0}; a_{1}, \ldots, a_{i}; x) I^{\mathfrak{m}}(x; a_{i+1}, \ldots, a_{n}; a_{n+1}) .$$ \item[(v)] Path reversal: $I^{\mathfrak{m}}(a_{0}; a_{1}, \ldots, a_{n}; a_{n+1})= (-1)^n I^{\mathfrak{m}}(a_{n+1}; a_{n}, \ldots, a_{1}; a_{0}).$ \item[(vi)] Homothety: $\forall \alpha \in \mu_{N}, I^{\mathfrak{m}}(0; \alpha a_{1}, \ldots, \alpha a_{n}; \alpha a_{n+1}) = I^{\mathfrak{m}}(0; a_{1}, \ldots, a_{n}; a_{n+1})$. \end{itemize} \vspace{0.5cm} \textsc{Remark}: These relations, for the multiple zeta values relative to $\mu_{N}$, and for the iterated integrals $I(a_{0}; a_{1}, \cdots ,a_{n}; a_{n+1})$ ($\ref{eq:reprinteg}$), are all easily checked.\\ \\ It has been proven that motivic iterated integrals satisfy the stuffle ($\ast$) relations, but also the pentagon and hexagon (resp. octagon for $N>1$) relations, as iterated integrals at $\mu_{N}$. In depth $1$, by Deligne and Goncharov, the only relations satisfied by the motivic iterated integrals are the distribution and conjugation relations, stated in $\S 2.4.3$. \paragraph{Motivic Euler $\star$, $\boldsymbol{\sharp}$ sums.} Here, assume that $N=1$ or $2$.\footnote{Detailed definitions of these $\star$ and $\sharp$ versions are given in $\S 4.1$.} In the motivic iterated integrals above, $I^{\mathfrak{m}}(\cdots, a_{i}, \cdots)$, the $a_{i}$ were in $\lbrace 0, \pm 1 \rbrace$.
We can extend by linearity to $a_{i}\in \lbrace \pm \star, \pm \sharp\rbrace$, which corresponds to an $\omega_{\pm \star}$, resp. $\omega_{\pm\sharp}$ in the iterated integral, with the differential forms:\nomenclature{$\omega_{\pm\star}$, $\omega_{\pm\sharp}$}{specific differential forms} $$\boldsymbol{\omega_{\pm\star}}\mathrel{\mathop:}= \omega_{\pm 1}- \omega_{0}=\frac{dt}{t(\pm t -1)} \quad \text{ and } \quad \boldsymbol{\omega_{\pm\sharp}}\mathrel{\mathop:}=2 \omega_{\pm 1}-\omega_{0}=\frac{(t \pm 1)dt}{t(t\mp 1)}.$$ It means that, by linearity, for $A,B$ sequences in $\lbrace 0, \pm 1, \pm \star, \pm \sharp \rbrace$: \begin{equation} \label{eq:miistarsharp} I^{\mathfrak{m}}(A, \pm \star, B)= I^{\mathfrak{m}}(A, \pm 1, B) - I^{\mathfrak{m}}(A, 0, B), \text{ and } I^{\mathfrak{m}}(A, \pm \sharp, B)= 2 I^{\mathfrak{m}}(A, \pm 1, B) - I^{\mathfrak{m}}(A, 0, B). \end{equation} \nomenclature{$\zeta^{\star, \mathfrak{m}}$, resp. $\zeta^{\sharp, \mathfrak{m}}$}{Motivic Euler $\star$ Sums, resp. Motivic Euler $\sharp$ Sums} \begin{itemize} \item[$\boldsymbol{\zeta^{\star, \mathfrak{m}}}$: ]\textit{Motivic Euler $\star$ Sums} are defined by an integral representation similar to that of MES ($\ref{eq:reprinteg}$), with $\omega_{\pm \star}$ replacing the $\omega_{\pm 1}$, except the first one, which stays an $\omega_{\pm 1}$. \\ Their periods, Euler $\star$ sums, which are already common in the literature, can be written as a summation similar to that for Euler sums, replacing strict inequalities by weak ones: $$ \zeta^{\star}\left(n_{1}, \ldots , n_{p} \right) = \sum_{0 < k_{1}\leq k_{2} \leq \cdots \leq k_{p}} \frac{\epsilon_{1}^{k_{1}} \cdots \epsilon_{p}^{k_{p}}}{k_{1}^{\mid n_{1}\mid} \cdots k_{p}^{\mid n_{p}\mid}}, \quad \epsilon_{i}\mathrel{\mathop:}=sign(n_{i}), \quad n_{i}\in\mathbb{Z}^{\ast}, n_{p}\neq 1.
$$ \item[$\boldsymbol{\zeta^{\sharp, \mathfrak{m}}}$: ] \textit{Motivic Euler $\sharp$ Sums} are defined by an integral representation similar to that of MES ($\ref{eq:reprinteg}$), with $\omega_{\pm \sharp}$ replacing the $\omega_{\pm 1}$, except the first one, which stays an $\omega_{\pm 1}$. \end{itemize} They are both $\mathbb{Q}$-linear combinations of multiple Euler sums, and appear in Chapter $4$, via new bases for motivic MZV (Hoffman $\star$, or with Euler $\sharp$ sums) and in Conjecture $\ref{lzg}$.\\ \paragraph{Dimensions.} Algebraic $K$-theory provides an \textit{upper bound} for the dimensions of motivic cyclotomic iterated integrals, since: \begin{equation} \begin{array}{ll} \text{Ext}_{\mathcal{MT}_{N,M}}^{1} (\mathbb{Q}(0), \mathbb{Q}(1)) = (\mathcal{O}_{k_{N}}[\frac{1}{M}])^{\ast} \otimes \mathbb{Q}& \\ \text{Ext}_{\mathcal{MT}_{\Gamma_{N}}}^{1} (\mathbb{Q}(0), \mathbb{Q}(1)) = \Gamma_{N} & \\ \text{Ext}_{\mathcal{MT}_{N,M}}^{1} (\mathbb{Q}(0), \mathbb{Q}(n)) = \text{Ext}_{\mathcal{MT}_{\Gamma}}^{1} (\mathbb{Q}(0), \mathbb{Q}(n)) = K_{2n-1}(k_{N}) \otimes \mathbb{Q} & \text{ for } n >1 .\\ \text{Ext}_{\mathcal{MT}_{N,M}}^{i} (\mathbb{Q}(0), \mathbb{Q}(n))= \text{Ext}_{\mathcal{MT}_{\Gamma}}^{i} (\mathbb{Q}(0), \mathbb{Q}(n)) =0 & \text{ for } i>1 \text{ or } n\leq 0 . \end{array} \end{equation} Let $ n_{\mathfrak{p}_{M}}$\nomenclature{$ n_{\mathfrak{p}_{M}}$}{ the number of different prime ideals above the primes dividing $M$} denote the number of different prime ideals above the primes dividing $M$, $\nu_{N}$\nomenclature{$\nu_{N}$}{the number of primes dividing $N$} the number of primes dividing $N$ and $\varphi$ Euler's totient function\nomenclature{$\varphi$}{Euler's totient function}. For $M|N$ (cf.
$\cite{Bo}$), using Dirichlet's $S$-unit theorem when $n=1$: \begin{equation}\label{dimensionk}\dim K_{2n-1}(\mathcal{O}_{k_{N}} [1/M]) \otimes \mathbb{Q} = \left\{ \begin{array}{ll} 1 & \text{ if } N =1 \text{ or } 2 , \text{ and } n \text{ odd }, (n,N) \neq (1,1) .\\ 0 & \text{ if } N =1 \text{ or } 2 , \text{ and } n \text{ even } .\\ \frac{\varphi(N)}{2}+ n_{\mathfrak{p}_{M}}-1& \text{ if } N >2, n=1 . \\ \frac{\varphi(N)}{2} & \text{ if } N >2 , n>1 . \end{array} \right. \end{equation} The numbers of generators in each degree, corresponding to the categories $\mathcal{MT}_{N,M}$ resp. $\mathcal{MT}_{\Gamma_{N}}$, differ only in degree $1$: \begin{equation}\label{eq:agamma} \begin{array}{ll} \text{In degree } >1 : & b_{N}\mathrel{\mathop:}=b_{N,M}= b_{\Gamma_{N}}= \frac{\varphi(N)}{2} \\ \text{In degree } 1 : & a_{N,M}\mathrel{\mathop:}=\frac{\varphi(N)}{2}+ n_{\mathfrak{p}_{M}}-1 \quad \text{ whereas } \quad a_{\Gamma_{N}}\mathrel{\mathop:}= \frac{\varphi(N)}{2}+\nu_{N}-1. \end{array} \end{equation} \texttt{Nota Bene}: The following formulas in this paragraph can be applied to the categories $\mathcal{MT}_{N,M}$ resp. $\mathcal{MT}_{\Gamma_{N}}$, replacing $a_{N}$ by $a_{N,M}$ resp. $a_{\Gamma_{N}}$.\\ \\ In degree $1$, for $\mathcal{MT}_{N,M}$, only the units modulo torsion matter whereas for the category $\mathcal{MT}_{\Gamma_{N}}$, only the cyclotomic units modulo torsion matter in degree $1$, cf. $\S 2.4.3$. Recall that cyclotomic units form a subgroup of finite index in the group of units, and generating families for cyclotomic units modulo torsion are (cf.
$\cite{Ba}$)\footnote{If we consider cyclotomic units in $\mathbb{Z}[\xi_{N}]\left[ \frac{1}{M}\right] $, with $M=\prod r_{i}$, $r_{i}$ prime power, we have to add $\lbrace 1- \xi_{r_{i}}\rbrace$.}:\nomenclature{ $a\wedge b$}{$gcd(a,b)$} $$\begin{array}{ll} \text{ For } N=p^{s} : &\left\lbrace \frac{1-\xi_{N}^{a}}{1-\xi_{N}} , a\wedge p=1 \right\rbrace, \quad \text{ where } a\wedge b\mathrel{\mathop:}= gcd(a,b).\\ \text{ For } N=\prod_{i} p_{i}^{s_{i}}= \prod q_{i} : &\left\lbrace \frac{1-\xi_{q_{i}}^{a}}{1-\xi_{q_{i}}} , a\wedge p_{i}=1 \right\rbrace \cup \left\lbrace 1-\xi_{d}^{a}, \quad a\wedge d=1, d\mid N, d\neq q_{i} \right\rbrace \\ \end{array}. $$ Results on cyclotomic units determine depth $1$ weight $1$ results for MMZV$_{\mu_{N}}$ (cf. $\S 2.4.3$).\\ \\ \\ Knowing the dimensions, we lift $(\ref{eq:uab})$ to a non-canonical isomorphism with the free Lie algebra: \begin{equation} \label{eq:LieAlg} \mathfrak{u}^{\mathcal{MT} } \underrel{n.c}{\cong} L\mathrel{\mathop:}= \mathbb{L}_{\mathbb{Q}} \left\langle \left( \sigma^{j}_{1}\right)_{1 \leq j \leq a_{N}}, \left( \sigma^{j}_{i}\right)_{1 \leq j \leq b_{N}}, i>1 \right\rangle \quad \sigma^{j}_{i} \text{ in degree } -i. \end{equation} The generators $\sigma^{j}_{i}$\nomenclature{$\sigma^{j}_{i}$}{generators of the graded Lie algebra $\mathfrak{u}$} of the graded Lie algebra $\mathfrak{u}$ are indeed non-canonical, only their classes in the abelianization are.\footnote{In other words, this means: \begin{equation} H_{1}(\mathfrak{u}^{\mathcal{MT}}; \mathbb{Q}) \cong \bigoplus_{i,j \text{ as above }} [\sigma^{j}_{i}]\mathbb{Q} , \quad H_{i}(\mathfrak{u}^{\mathcal{MT} }; \mathbb{Q}) =0 \text{ for } i>1.
\end{equation}} For the fundamental Hopf algebra, with $f^{j}_{i}=(\sigma^{j}_{i})^{\vee}$\nomenclature{$f^{j}_{i}$}{are defined as $(\sigma^{j}_{i})^{\vee}$} in degree $i$: \begin{equation} \label{HopfAlg} \mathcal{A}^{\mathcal{MT}} \underrel{n.c}{\cong} A\mathrel{\mathop:}= \mathbb{Q} \left\langle \left( f^{j}_{1}\right)_{1 \leq j \leq a_{N}}, \left( f^{j}_{i}\right)_{1 \leq j \leq b_{N}}, i>1 \right\rangle . \end{equation} \begin{framed} $\mathcal{A}^{\mathcal{MT}}$ is a cofree commutative graded Hopf algebra cogenerated by $a_{N}$ elements $f^{\bullet}_{1}$ in degree 1, and $b_{N}$ elements $f^{\bullet}_{r}$ in degree $r>1$. \end{framed} The comodule $\mathcal{H}^{N}\subseteq \mathcal{O}(_{0}\Pi_{1})$ embeds, non-canonically\footnote{We can fix the image of algebraically independent elements with trivial coaction.\\ For instance, for $N=3$, we can choose to send: $ \zeta^{\mathfrak{m}}\left( r \atop j \right) \xmapsto{\phi} f_{r} , \quad \text{ and } \quad \left( 2i \pi \right)^{\mathfrak{m}} \xmapsto{\phi} g_{1}$.}, into $\mathcal{H}^{\mathcal{MT}_{N}}$ and hence:\nomenclature{$\phi^{N}$}{the Hopf algebra morphism $\mathcal{H}^{N} \hookrightarrow H^{N}$}\nomenclature{$H^{N}$}{the Hopf algebra $\mathbb{Q} \left\langle \left(f^{j}_{1}\right) _{1\leq j \leq a_{N}}, \left( f^{j}_{r}\right)_{r>1\atop 1\leq j \leq b_{N}} \right\rangle \otimes \mathbb{Q}\left[ g_{1} \right]$} \begin{framed} \begin{equation}\label{eq:phih} \mathcal{H}^{N} \xhookrightarrow[n.c.]{\quad\phi^{N}\quad} H^{N}\mathrel{\mathop:}= \mathbb{Q} \left\langle \left(f^{j}_{1}\right) _{1\leq j \leq a_{N}}, \left( f^{j}_{r}\right)_{r>1\atop 1\leq j \leq b_{N}} \right\rangle \otimes \mathbb{Q}\left[ g_{1} \right]. \end{equation} \end{framed} \noindent \texttt{Nota Bene:} This comodule embedding is an isomorphism for $N=1,2,3,4,\mlq 6\mrq,8$ (by F.
Brown $\cite{Br2}$ for $N=1$, by Deligne $\cite{De}$ for the other cases; new proof in Chapter $5$), since the categories $\mathcal{MT}'_{N}$, $\mathcal{MT}(\mathcal{O}\left[ \frac{1}{N}\right] )$ and $\mathcal{MT}_{\Gamma_{N}}$ are equivalent. However, for some other $N$, such as $N$ a prime greater than $5$, it is not an isomorphism.\\ \noindent Looking at the dimensions $d^{N}_{n}\mathrel{\mathop:}= \dim \mathcal{H}^{\mathcal{MT}_{N}}_{n}$:\nomenclature{$d^{N}_{n}$}{the dimension of the $\mathbb{Q}$-vector space $\mathcal{H}^{\mathcal{MT}_{N}}_{n}$} \begin{lemm} For $N>2$, $d^{N}_{n}$ satisfies two (equivalent) recursive formulas\footnote{These two recursive formulas, although equivalent, lead to two different perspectives on counting dimensions.}: $$\begin{array}{lll} d^{N}_{n} = & 1 + a_{N} d_{n-1}+ b_{N}\sum_{i=2}^{n} d_{n -i} & \\ d^{N}_{n} = & (a_{N}+1)d_{n-1}+ (b_{N}-a_{N})d_{n -2} & \text{ with } \left\lbrace \begin{array}{l} d_{0}=1\\ d_{1}=a_{N}+1 \end{array}\right. \end{array} .$$ Hence the Hilbert series for the dimensions of $\mathcal{H}^{\mathcal{MT}}$ is: $$h_{N}(t)\mathrel{\mathop:}=\sum_{k} d_{k}^{N} t^{k}=\frac{1}{1-(a_{N}+1)t+ (a_{N}-b_{N})t^{2}}. $$ \end{lemm} In particular, these dimensions (for $\mathcal{H}^{\mathcal{MT}_{\Gamma_{N}}}$) are an upper bound for the dimensions of motivic MZV$_{\mu_{N}}$ (i.e. of $\mathcal{H}^{N}$), and hence of MZV$_{\mu_{N}}$ by the period map. In the case $N=p^{r}$, $p\geq 5$ prime, this upper bound is conjectured not to be reached; for other $N$ however, this bound is still conjectured to be sharp (cf. $\S 3.4$). \\ \\ \texttt{Examples:} \begin{itemize} \item[$\cdot$] For the unramified category $\mathcal{MT}(\mathcal{O}_{N})$: $$d_{n}= \frac{\varphi(N)}{2}d_{n-1}+ d_{n-2}.$$ \item[$\cdot$] For $M \mid N$ such that all primes dividing $M$ are inert, $ n_{\mathfrak{p}_{M}}=\nu_{N}$.
In particular, it is the case if $N=p^{r}$: $$\text{ For } \mathcal{MT}\left( \mathcal{O}_{p^{r}}\left[ \frac{1}{p} \right] \right) \text{ , } \quad d_{n}= \left( \frac{\varphi(N)}{2}+1\right) ^{n}.$$ \end{itemize} \noindent Let us detail the case $N=1$ and the cases $N=2,3,4,\mlq 6\mrq,8$ considered in Chapter $5$:\\ \begin{tabular}{|c|c|c|c|} \hline & & & \\ $N \backslash$ $d_{n}^{N}$& Generators of $\mathcal{A}$ & Dimension relation $d_{n}^{N}$ & Hilbert series \\ \hline $N=1$\footnotemark[2] & \twolines{$1$ generator in each odd degree $>1$\\ $\mathbb{Q} \langle f_{3}, f_{5}, f_{7}, \cdots \rangle$ } &\twolines{$d_{n}=d_{n-3} +d_{n-2}$,\\ $d_{2}=1$, $d_{1}=0$} & $ \frac{1}{1-t^{2}-t^{3}}$ \\ \hline $N=2$\footnotemark[3] & \twolines{$1$ generator in each odd degree $\geq 1$\\ $\mathbb{Q} \langle f_{1}, f_{3}, f_{5}, \cdots \rangle$ } & \twolines{$d_{n}=d_{n-1} +d_{n-2}$\\ $d_{0}=d_{1}=1$} & $ \frac{1}{1-t-t^{2}}$ \\ \hline $N=3,4$ & \twolines{$1$ generator in each degree $\geq 1$\\ $\mathbb{Q} \langle f_{1}, f_{2}, f_{3}, \cdots \rangle$ } & $d_{k}=2d_{k-1} = 2^{k}$ & $\frac{1}{1-2t}$ \\ \hline $N=8$ & \twolines{$2$ generators in each degree $\geq 1$ \\ $\mathbb{Q} \langle f^{1}_{1}, f^{2}_{1}, f^{1}_{2}, f^{2}_{2}, \cdots \rangle$ } & $d_{k}= 3 d_{k -1}=3^{k}$ & $\frac{1}{1-3t}$ \\ \hline \twolines{$N=6$ \\ $\mathcal{MT}(\mathcal{O}_{6}\left[\frac{1}{6}\right])$} & \twolines{$1$ in each degree $> 1$, $2$ in degree $1$\\ $\mathbb{Q} \langle f^{1}_{1}, f^{2}_{1}, f_{2}, f_{3}, \cdots \rangle$ } & \twolines{$d_{k}= 3 d_{k -1} -d_{k-2}$, \\$d_{1}=3$} & $\frac{1}{1-3t+t^{2}}$ \\ \hline \twolines{$N=6$ \\ $\mathcal{MT}(\mathcal{O}_{6})$} & \twolines{$1$ generator in each degree $>1$\\ $\mathbb{Q} \langle f_{2}, f_{3}, f_{4}, \cdots \rangle$} & \twolines{$d_{k}= 1+ \sum_{i\geq 2} d_{k-i}$\\$=d_{k -1} +d_{k-2}$} & $ \frac{1}{1-t-t^{2}}$ \\ \hline \end{tabular} \footnotetext[2]{For $N=1$, Broadhurst and Kreimer made a more precise conjecture for dimensions of multiple zeta values graded by the depth, which transposes to
motivic ones: \begin{equation}\label{eq:bkdepth} \sum \dim (gr^{\mathfrak{D}}_{d} \mathcal{H}^{1}_{n})s^{n}t^{d} = \frac{1+\mathbb{E}(s)t}{1- \mathbb{O}(s)t+\mathbb{S}(s)t^{2}-\mathbb{S}(s)^{2} t^{4}} , \quad \text{ where } \begin{array}{l} \mathbb{E}(s)\mathrel{\mathop:}= \frac{s^{2}}{1-s^{2}}\\ \mathbb{O}(s)\mathrel{\mathop:}= \frac{s^{3}}{1-s^{2}}\\ \mathbb{S}(s)\mathrel{\mathop:}= \frac{s^{12}}{(1-s^{4})(1-s^{6})} \end{array} \end{equation} where $ \mathbb{E}(s)$, resp. $ \mathbb{O}(s)$, resp. $ \mathbb{S}(s)$ are the generating series of even resp. odd simple zeta values resp. of the space of cusp forms for the full modular group $PSL_{2}(\mathbb{Z})$. The coefficient $\mathbb{S}(s)$ of $t^{2}$ can be understood via the relation between double zetas and cusp forms in $\cite{GKZ}$; the coefficient $\mathbb{S}(s)^{2}$ of $t^{4}$, underlying exceptional generators in depth $4$, is now also understood by the recent work of F. Brown $\cite{Br3}$, who gave an interpretation of this conjecture via the homology of an explicit Lie algebra.} \footnotetext[3]{For $N=2$, the dimensions are Fibonacci numbers.} \section{Motivic Hopf algebra} \subsection{Motivic Lie algebra.} Let $\mathfrak{g}$\nomenclature{$\mathfrak{g}$}{ the free graded Lie algebra generated by $e_{0},(e_{\eta})_{\eta\in\mu_{N}}$ in degree $-1$} be the free graded Lie algebra generated by $e_{0},(e_{\eta})_{\eta\in\mu_{N}}$ in degree $-1$. Then, the completed Lie algebra $\mathfrak{g}^{\wedge}$ is the Lie algebra of $_{0}\Pi_{1}(\mathbb{Q})$ and the universal enveloping algebra $ U\mathfrak{g}$ is the cocommutative Hopf algebra which is the graded dual of $\mathcal{O}(_{0}\Pi_{1})$: \begin{equation} \label{eq:ug} (U\mathfrak{g})_{n}=\left( \mathbb{Q}e_{0} \oplus \left( \oplus_{\eta\in\mu_{N}} \mathbb{Q}e_{\eta}\right) \right) ^{\otimes n}= (\mathcal{O}(_{0}\Pi_{1})^{\vee})_{n}.
\end{equation} The product is the concatenation, and the coproduct is such that $e_{0},e_{\eta}$ are primitive.\\ \\ Considering the motivic version of the Drinfeld associator:\nomenclature{$\Phi^{\mathfrak{m}}$}{the motivic Drinfeld associator} \begin{equation} \label{eq:associator} \Phi^{\mathfrak{m}}\mathrel{\mathop:}= \sum_{w} \zeta^{\mathfrak{m}} (w) w \in \mathcal{H}\left\langle \left\langle e_{0},e_{\eta} \right\rangle \right\rangle \text{, where :} \end{equation} $$ \zeta^{\mathfrak{m}} (e_{0}^{n}e_{\eta_{1}}e_{0}^{n_{1}-1}\cdots e_{\eta_{p}}e_{0}^{n_{p}-1}) =\zeta^{\mathfrak{m}}_{n}\left( n_{1}, \ldots, n_{p} \atop \epsilon_{1} , \ldots, \epsilon_{p}\right) \text{ with } \begin{array}{l} \epsilon_{p}\mathrel{\mathop:}=\eta_{p}^{-1}\\ \epsilon_{i}\mathrel{\mathop:}=\eta_{i}^{-1}\eta_{i+1} \end{array}.$$ \texttt{Nota Bene:} This motivic Drinfeld associator satisfies the double shuffle relations, and, for $N=1$, the associator equations defined by Drinfeld (pentagon and hexagon), replacing $2\pi i$ by the Lefschetz motivic period $\mathbb{L}^{\mathfrak{m}}$; for $N>1$, an octagon relation generalizes this hexagon relation, as we will see in $\S 4.2.2$.\\ Moreover, it defines a map: $$\oplus \mathcal{H}_{n}^{\vee} \rightarrow U \mathfrak{g} \quad \text{ which induces a map: } \oplus \mathcal{L}_{n}^{\vee} \rightarrow U \mathfrak{g}.$$ Define $\boldsymbol{\mathfrak{g}^{\mathfrak{m}}}$, the \textit{Lie algebra of motivic elements} as the image of $\oplus \mathcal{L}_{n}^{\vee}$ in $U \mathfrak{g}$:\footnote{The action of the Galois group $\mathcal{U}^{\mathcal{MT}}$ turns $\mathcal{L}$ into a coalgebra, and hence $\mathfrak{g}^{\mathfrak{m}}$ into a Lie algebra.} \begin{equation} \label{eq:motivicliealgebra} \oplus \mathcal{L}_{n}^{\vee} \xrightarrow{\sim} \mathfrak{g}^{\mathfrak{m}} \hookrightarrow U \mathfrak{g}. \end{equation} The Lie algebra $\mathfrak{g}^{\mathfrak{m}}$ is equipped with the Ihara bracket given precisely below. 
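As a numerical sanity check of the dimension table above, the recursions $d_{n}$ can be compared with the Taylor coefficients of the corresponding Hilbert series. A minimal sketch in Python (the helper `series_coeffs` and the dictionary encoding of the denominator polynomials are purely illustrative, not part of the text):

```python
from fractions import Fraction

def series_coeffs(denom, n_terms):
    """Taylor coefficients of 1/denom(t), with denom encoded as a dict
    {power: coefficient}, e.g. {0: 1, 1: -1, 2: -1} for 1 - t - t^2."""
    c = [Fraction(0)] * n_terms
    c[0] = Fraction(1, denom[0])
    for n in range(1, n_terms):
        c[n] = -sum(denom[k] * c[n - k] for k in denom if 0 < k <= n) / denom[0]
    return [int(x) for x in c]

# N = 2: 1/(1 - t - t^2) gives the Fibonacci numbers, d_n = d_{n-1} + d_{n-2}
d2 = series_coeffs({0: 1, 1: -1, 2: -1}, 10)

# N = 3, 4: 1/(1 - 2t) gives d_k = 2^k
d34 = series_coeffs({0: 1, 1: -2}, 10)

# N = 8: 1/(1 - 3t) gives d_k = 3^k
d8 = series_coeffs({0: 1, 1: -3}, 10)

# N = 6, version MT(O_6[1/6]): 1/(1 - 3t + t^2), i.e. d_k = 3 d_{k-1} - d_{k-2}
d6 = series_coeffs({0: 1, 1: -3, 2: 1}, 10)
```

Each list of coefficients reproduces the recursion stated in the corresponding row of the table.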
Notice that for the cases $N=1,2,3,4,\mlq 6\mrq,8$, $\mathfrak{g}^{\mathfrak{m}}$ is non-canonically isomorphic to the free Lie algebra $L$ defined in $(\ref{eq:LieAlg})$, generated by the $\sigma_{i}$'s. \paragraph{Ihara action.} As stated above, the group scheme $\mathcal{V}$ of automorphisms of $_{x}\Pi_{y}, x,y\in\lbrace 0, \mu_{N} \rbrace$ is isomorphic to $_{0}\Pi_{1}$ ($\ref{eq:gpaut}$), and the group law of automorphisms leads to the Ihara action. More precisely, for $a\in _{0}\Pi_{1}$ (cf. $\cite{DG}$): \begin{equation} \label{eq:actionpi01} \begin{array}{lllll} \text{ The action on } _{0}\Pi_{0} : \quad \quad &\langle a\rangle_{0} : & _{0}\Pi_{0} & \rightarrow &_{0} \Pi_{0} \\ && \exp(e_{0}) &\mapsto & \exp(e_{0}) \\ &&\exp(e_{\eta}) &\mapsto &([\eta]\cdot a) \exp(e_{\eta}) ([\eta]\cdot a)^{-1} \\ \text{ Then, the action on } _{0}\Pi_{1} :\quad \quad & \langle a\rangle : & _{0}\Pi_{1} &\rightarrow &_{0}\Pi_{1} \\ & & b &\mapsto & \langle a\rangle _{0} (b)\cdot a \end{array} \end{equation} This action is called the \textbf{\textit{Ihara action}}:\nomenclature{$\circ$}{Ihara action} \begin{equation} \label{eq:iharaaction} \begin{array}{llll} \circ : & _{0}\Pi_{1} \times _{0}\Pi_{1} & \rightarrow & _{0}\Pi_{1} \\ & (a,b) & \mapsto & a\circ b \mathrel{\mathop:}= \langle a\rangle _{0} (b)\cdot a. \end{array} \end{equation} At the Lie algebra level, it defines the \textit{Ihara bracket} on $Lie(_{0}\Pi_{1})$: \begin{equation} \lbrace a, b\rbrace \mathrel{\mathop:}= a \circ b - b\circ a. \end{equation} \texttt{Nota Bene:} The dual point of view leads to a combinatorial coaction $\Delta^{c}$, which is the keystone of this work. \subsection{Coaction} The motivic Galois group $\mathcal{G}^{\mathcal{MT}_{N}}$ and hence $\mathcal{U}^{\mathcal{MT}}$ acts on the de Rham realization $_{0}\Pi_{1}$ of the motivic fundamental groupoid (cf. $\cite{DG}, \S 4.12$).
Since the action of $\mathcal{U}^{\mathcal{MT}}$ is compatible with the structure of $_{x}\Pi_{y}$ (groupoid structure, $\mu_{N}$-equivariance and inertia), it is a fundamental fact that this action factorizes through the Ihara action, using the isomorphism $\mathcal{V}\cong _{0}\Pi_{1}$ ($\ref{eq:gpaut}$): $$ \xymatrix{ \mathcal{U}^{\mathcal{MT}}\times _{0}\Pi_{1} \ar[r] \ar[d] &_{0}\Pi_{1} \ar[d]^{\sim}\\ _{0}\Pi_{1} \times _{0}\Pi_{1} \ar[r]^{\circ} & _{0}\Pi_{1} \\ }$$ Since $\mathcal{A}^{\mathcal{MT}}= \mathcal{O}(\mathcal{U}^{\mathcal{MT}})$, this action gives rise by duality to a coaction $\Delta^{\mathcal{MT}}$, compatible with the grading, represented below. By the previous diagram, the combinatorial coaction $\Delta^{c}$ (on words in $0, \eta\in\mu_{N}$), which is explicit (the formula is given below), factorizes through $\Delta^{\mathcal{MT}}$. Note that $\Delta^{\mathcal{MT}}$ factorizes through $\mathcal{A}$, since $\mathcal{U}$ is the quotient of $\mathcal{U}^{\mathcal{MT}}$ by the kernel of its action on $_{0}\Pi_{1}$. By passing to the quotient, it induces a coaction $\Delta$ on $\mathcal{H}$: $$ \label{Coaction} \xymatrix{ \mathcal{O}(_{0}\Pi_{1}) \ar[r]^{\Delta^{c}} \ar[d]^{\sim} & \mathcal{A} \otimes_{\mathbb{Q}} \mathcal{O} (_{0}\Pi_{1}) \ar[d] \\ \mathcal{O}(_{0}\Pi_{1}) \ar[d]\ar[r]^{\Delta^{\mathcal{MT}}} & \mathcal{A}^{\mathcal{MT}} \otimes_{\mathbb{Q}} \mathcal{O} (_{0}\Pi_{1}) \ar[d]\\ \mathcal{H} \ar[r]^{\Delta} & \mathcal{A} \otimes \mathcal{H}. \\ }$$ \\ The coaction for motivic iterated integrals is given by the following formula, due to A. B. Goncharov (cf. $\cite{Go1}$) for $\mathcal{A}$ and extended by F. Brown to $\mathcal{H}$ (cf.
$\cite{Br2}$):\nomenclature{$\Delta$}{Goncharov coaction} \begin{theom} \label{eq:coaction} The coaction $\Delta: \mathcal{H} \rightarrow \mathcal{A} \otimes_{\mathbb{Q}} \mathcal{H}$ is given by the combinatorial coaction $\Delta^{c}$: $$\Delta^{c} I^{\mathfrak{m}}(a_{0}; a_{1}, \cdots a_{n}; a_{n+1}) =$$ $$\sum_{k ;i_{0}= 0<i_{1}< \cdots < i_{k}<i_{k+1}=n+1} \left( \prod_{p=0}^{k} I^{\mathfrak{a}}(a_{i_{p}}; a_{i_{p}+1}, \cdots a_{i_{p+1}-1}; a_{i_{p+1}}) \right) \otimes I^{\mathfrak{m}}(a_{0}; a_{i_{1}}, \cdots a_{i_{k}}; a_{n+1}) .$$ \end{theom} \noindent \textsc{Remark:} It has a nice geometric formulation, considering the $a_{i}$ as vertices on a half-circle: $$\Delta^{c} I^{\mathfrak{m}}(a_{0}; a_{1}, \cdots a_{n}; a_{n+1})=\sum_{\text{ polygons on circle } \atop \text{ with vertices } (a_{i_{p}})} \prod_{p} I^{\mathfrak{a}}\left( \text{ arc between consecutive vertices } \atop \text{ from } a_{i_{p}} \text{ to } a_{i_{p+1}} \right) \otimes I^{\mathfrak{m}}(\text{ vertices } ).$$ \texttt{Example}: In the reduced coaction\footnote{$\Delta'(x):=\Delta(x)-1\otimes x-x\otimes 1$} of $\zeta^{\mathfrak{m}}(-1,3)=I^{\mathfrak{m}}(0; -1,1,0,0;1)$, there are $3$ non zero cuts: \includegraphics[]{dep1.pdf}. 
Hence: \begin{multline}\nonumber \Delta'(I^{\mathfrak{m}}(0; -1,1,0,0;1))\\ = I^{\mathfrak{a}}(0; -1;1) \otimes I^{\mathfrak{m}}(0; 1,0,0;1)+ I^{\mathfrak{a}}(-1; 1;0) \otimes I^{\mathfrak{m}}(0; -1,0,0;1)+ I^{\mathfrak{a}}(-1; 1,0,0;1) \otimes I^{\mathfrak{m}}(0; -1;1) \end{multline} I.e., in terms of motivic Euler sums, using the properties of motivic iterated integrals ($\S \ref{propii}$): $$\Delta'(\zeta^{\mathfrak{m}}(-1,3))= \zeta^{\mathfrak{a}}(-1)\otimes \zeta^{\mathfrak{m}}(3)-\zeta^{\mathfrak{a}}(-1)\otimes \zeta^{\mathfrak{m}}(-3)+ (\zeta^{\mathfrak{a}}(3)-\zeta^{\mathfrak{a}}(-3) )\otimes \zeta^{\mathfrak{m}}(-1).$$ \\ Define for $r\geq 1$, the \textit{derivation operators}: \begin{equation}\label{eq:dr} \boldsymbol{D_{r}}: \mathcal{H} \rightarrow \mathcal{L}_{r} \otimes_{\mathbb{Q}} \mathcal{H}, \end{equation} the composite of $\Delta'= \Delta^{c}- 1\otimes id$ with $\pi_{r} \otimes id$, where $\pi_{r}$ is the projection $\mathcal{A} \rightarrow \mathcal{L} \rightarrow \mathcal{L}_{r}$.\\ \\ \texttt{Nota Bene:} It is sufficient to consider these weight-graded derivation operators to keep track of all the information of the coaction.\\ \\ According to the previous theorem, the action of $D_{r}$ on $I^{\mathfrak{m}}(a_{0}; a_{1}, \cdots, a_{n}; a_{n+1})$ is:\nomenclature{$D_{r}$}{the $r$-weight-graded part of the coaction $\Delta$ } \begin{framed} \begin{equation} \label{eq:Der} D_{r}I^{\mathfrak{m}}(a_{0}; a_{1}, \cdots, a_{n}; a_{n+1})= \end{equation} $$\sum_{p=0}^{n-r} I^{\mathfrak{l}}(a_{p}; a_{p+1}, \cdots, a_{p+r}; a_{p+r+1}) \otimes I^{\mathfrak{m}}(a_{0}; a_{1}, \cdots, a_{p}, a_{p+r+1}, \cdots, a_{n}; a_{n+1}) .$$ \end{framed} \textsc{Remarks} \begin{itemize} \item[$\cdot$] Geometrically, it is equivalent to keep in the previous coaction only the polygons corresponding to a unique cut of (interior) length $r$ between two elements of the iterated integral.
\item[$\cdot$] These maps $D_{r}$ are derivations: $$D_{r} (XY)= (1\otimes X) D_{r}(Y) + (1\otimes Y) D_{r}(X).$$ \item[$\cdot$] This formula is linked with the differential equation satisfied by the iterated integral $I(a_{0}; \cdots; a_{n+1})$ when the $a_{i}$'s vary (cf. \cite{Go1})\footnote{Since $I(a_{i-1};a_{i};a_{i+1})= \log(a_{i+1}-a_{i})-\log(a_{i-1}-a_{i})$.}: $$dI(a_{0}; \cdots; a_{n+1})= \sum dI(a_{i-1};a_{i};a_{i+1}) I(a_{0}; \cdots, \widehat{a_{i}}, \cdots; a_{n+1}).$$ \end{itemize} \texttt{Example}: By the previous example:\\ $$D_{3}(\zeta^{\mathfrak{m}}(-1,3))=(\zeta^{\mathfrak{a}}(3)-\zeta^{\mathfrak{a}}(-3) )\otimes \zeta^{\mathfrak{m}}(-1)$$ $$D_{1}(\zeta^{\mathfrak{m}}(-1,3))= \zeta^{\mathfrak{a}}(-1)\otimes ( \zeta^{\mathfrak{m}}(3)- \zeta^{\mathfrak{m}}(-3)) $$ \subsection{Depth filtration} The inclusion $\mathbb{P}^{1}\diagdown \lbrace 0, \mu_{N},\infty\rbrace \subset \mathbb{P}^{1}\diagdown \lbrace 0,\infty\rbrace$ induces a surjection between the de Rham realizations of the fundamental groupoids: \begin{equation} \label{eq:drsurj} _{0}\Pi_{1} \rightarrow \pi_{1}^{dR}(\mathbb{G}_{m}, \overrightarrow{01}). \end{equation} Dually, it corresponds to the inclusion: \begin{equation} \label{eq:drsurjdual} \mathcal{O} \left( \pi_{1}^{dR}(\mathbb{G}_{m}, \overrightarrow{01} ) \right) \cong \mathbb{Q} \left\langle e^{0} \right\rangle \xhookrightarrow[\quad \quad]{} \mathcal{O} \left( _{0}\Pi_{1} \right) \cong \mathbb{Q} \left\langle e^{0}, (e^{\eta})_{\eta} \right\rangle .
\end{equation} This leads to the definition of an increasing \textit{depth filtration} $\mathcal{F}^{\mathfrak{D}}$ on $\mathcal{O}(_{0}\Pi_{1})$\footnote{It is the filtration dual to the filtration given by the descending central series of the kernel of the map $\ref{eq:drsurj}$; it can also be defined from the cokernel of $\ref{eq:drsurjdual}$, via the deconcatenation coproduct.} such that:\nomenclature{$\mathcal{F}_{\bullet}^{\mathfrak{D}}$}{the depth filtration} \begin{equation}\label{eq:filtprofw} \boldsymbol{\mathcal{F}_{p}^{\mathfrak{D}}\mathcal{O}(_{0}\Pi_{1})} \mathrel{\mathop:}= \left\langle \text{ words } w \text{ in }e^{0},e^{\eta}, \eta\in\mu_{N} \text{ such that } \sum_{\eta\in\mu_{N}} deg _{e^{\eta}}w \leq p \right\rangle _{\mathbb{Q}}. \end{equation} This filtration is preserved by the coaction and thus descends to $\mathcal{H}$ (cf. $\cite{Br3}$), on which: \begin{equation}\label{eq:filtprofh} \mathcal{F}_{p}^{\mathfrak{D}}\mathcal{H}\mathrel{\mathop:}= \left\langle \zeta^{\mathfrak{m}}\left( n_{1}, \ldots, n_{r} \atop \epsilon_{1}, \ldots, \epsilon_{r} \right) , r\leq p \right\rangle _{\mathbb{Q}}. \end{equation} In the same way, we define $ \mathcal{F}_{p}^{\mathfrak{D}}\mathcal{A}$ and $\mathcal{F}_{p}^{\mathfrak{D}}\mathcal{L}$. Beware: the corresponding grading on $\mathcal{O}(_{0}\Pi_{1})$ is not motivic, and the depth is not a grading on $\mathcal{H}$\footnote{ For instance: $\zeta^{\mathfrak{m}}(3)=\zeta^{\mathfrak{m}}(1,2)$. }.
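In this combinatorial encoding, computing the depth of a word simply amounts to counting its letters $e^{\eta}$. A minimal sketch in Python (the encoding of letters as strings is purely illustrative):

```python
def depth(word, mu_N):
    """Depth of a word in the letters e^0, e^eta (eta in mu_N):
    the total degree in the letters e^eta, as in the filtration F_p^D."""
    return sum(1 for letter in word if letter in mu_N)

# Letters: '0' stands for e^0, roots of unity stand for themselves.
mu_2 = {'1', '-1'}
word = ('0', '-1', '1', '0', '0')   # encodes I(0; -1, 1, 0, 0; 1), i.e. zeta(-1, 3)
print(depth(word, mu_2))            # two letters from mu_2, hence depth 2
```

This matches the example $\zeta^{\mathfrak{m}}(-1,3)$ above, which has depth $2$.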
The graded spaces $gr^{\mathfrak{D}}_{p}$ are defined as the quotients $\mathcal{F}_{p}^{\mathfrak{D}}/\mathcal{F}_{p-1}^{\mathfrak{D}}$.\\ Similarly, there is an increasing depth filtration on $U\mathfrak{g}$, considering the degree in $\lbrace e_{\eta}\rbrace_{\eta\in\mu_{N}}$, which passes to the motivic Lie algebra $\mathfrak{g}^{\mathfrak{m}}$ ($\ref{eq:motivicliealgebra}$) such that the graded pieces $gr^{r}_{\mathfrak{D}} \mathfrak{g}^{\mathfrak{m}}$ are dual to $gr^{\mathfrak{D}}_{r} \mathcal{L}$.\\ In depth $1$, there are canonical elements:\footnote{For $N=1$, there are only the $\overline{\sigma}_{2i+1}\mathrel{\mathop:}=(\text{ad} e_{0})^{2i} (e_{1}) \in gr^{1}_{\mathfrak{D}} \mathfrak{g}^{\mathfrak{m}}$, $i>0$, and the sub-Lie algebra generated by them is not free, which also means that there are other \say{exceptional} generators in higher depth, cf. \cite{Br2}.\\ For $N=2,3,4,\mlq 6\mrq,8$, when keeping $\eta_{i}$ as in Lemma $5.2.1$, the $(\overline{\sigma}^{(\eta_{i})}_{i})$ generate a free Lie algebra in $gr_{\mathfrak{D}} \mathfrak{g}$.} \begin{equation}\label{eq:oversigma} \overline{\sigma}^{(\eta)}_{i}\mathrel{\mathop:}=(\text{ad } e_{0})^{i-1} (e_{\eta}) \in gr^{1}_{\mathfrak{D}} \mathfrak{g}^{\mathfrak{m}}. \end{equation} They satisfy the distribution and conjugation relations stated below.\\ \paragraph{Depth $\boldsymbol{1}$.} In depth $1$, the following is known for $\mathcal{A}$ (cf.
$\cite{DG}$ Theorem $6.8$): \begin{lemm}[Deligne, Goncharov] The elements $\zeta^{\mathfrak{a}} \left( r; \eta \right)$ are subject only to the following relations in $\mathcal{A}$: \begin{description} \item[Distribution] $$\forall d|N \text{ , } \forall \eta\in\mu_{\frac{N}{d}} \text{ , } (\eta,r)\neq(1,1)\text{ , } \zeta^{\mathfrak{a}} \left({r \atop \eta}\right)= d^{r-1} \sum_{\epsilon^{d}=\eta} \zeta^{\mathfrak{a}} \left({r \atop \epsilon}\right).$$ \item[Conjugation] $$\zeta^{\mathfrak{a}} \left({r \atop \eta}\right)= (-1)^{r-1} \zeta^{\mathfrak{a}} \left({r \atop \eta^{-1}}\right).$$ \end{description} \end{lemm} \textsc{Remark}: More generally, the distribution relations for MZV relative to $\mu_{N}$ are: $$\forall d| N, \quad \forall \epsilon_{i}\in\mu_{\frac{N}{d}} \text{ , } \quad \zeta\left( { n_{1}, \ldots , n_{p} \atop \epsilon_{1} , \ldots ,\epsilon_{p} } \right) = d^{\sum n_{i} - p} \sum_{\eta_{1}^{d}=\epsilon_{1}} \cdots \sum_{\eta_{p}^{d}=\epsilon_{p}} \zeta \left( {n_{1}, \ldots , n_{p} \atop \eta_{1} , \ldots ,\eta_{p} } \right) .$$ They are deduced from the following identity: $$\text{ For } d|N , \epsilon\in\mu_{\frac{N}{d}} \text { , } \sum_{\eta^{d}=\epsilon} \eta^{n}= \left\{ \begin{array}{ll} d \epsilon ^{\frac{n}{d}}& \text{ if } d|n \\ 0 & \text{ else }.\\ \end{array} \right. $$ These relations are analogous to those satisfied by the cyclotomic units modulo torsion. \\ \\ In weight $r>1$, a basis for $gr_{1}^{\mathfrak{D}} \mathcal{A}$ is formed by depth $1$ MMZV at primitive roots up to conjugation. However, MMZV$_{\mu_{N}}$ of weight $1$, $\zeta^{\mathfrak{m}} \left( 1 \atop \xi^{a}_{N}\right) = -\log(1-\xi^{a}_{N})$, are more subtle. For instance (already in \cite{CZ}): \begin{lemme} A $\mathbb{Z}$-basis for $\mathcal{A}_{1}$ is hence: \begin{description} \item[$\boldsymbol{N=p^{r}}$:] $ \quad \quad \left\lbrace \zeta^{\mathfrak{a}}\left( 1 \atop \xi^{a}\right) \quad a\wedge p=1 \quad 1 \leq a \leq \frac{p-1}{2} \right\rbrace$.
\item[$\boldsymbol{N=pq}$:] With $p<q$ primes: $$ \quad\left\lbrace \left\lbrace \zeta^{\mathfrak{a}}\left( 1 \atop \xi^{a}\right) \quad a\wedge p=1 \quad 1 \leq a \leq \frac{p-1}{2} \right\rbrace \bigcup_{a\in (\mathbb{Z}/q\mathbb{Z})^{\ast}\diagup \langle -1, p\rangle } \left\lbrace \zeta^{\mathfrak{a}}\left( 1 \atop \xi^{ap}\right)\right\rbrace \diagdown \left\lbrace\zeta^{\mathfrak{a}}\left( 1 \atop \xi^{a}\right) \right\rbrace \right. $$ $$\left. \bigcup_{a\in (\mathbb{Z}/p\mathbb{Z})^{\ast}\diagup \langle -1, q\rangle}\left\lbrace \zeta^{\mathfrak{a}}\left( 1 \atop \xi^{aq}\right)\right\rbrace \diagdown \left\lbrace\zeta^{\mathfrak{a}}\left( 1 \atop \xi^{a}\right) \right\rbrace \right\rbrace $$ \end{description} \end{lemme} \textsc{Remarks}: \begin{itemize} \item[$\cdot$] Indeed, for $N=pq$, a phenomenon of loops occurs: orbits via the action of $p$ and $-1$ on $(\mathbb{Z}/q \mathbb{Z})^{\ast}$, resp. of $q$ and $-1$ on $(\mathbb{Z}/p \mathbb{Z})^{\ast}$. Consequently, for each loop we have to remove a primitive root $\zeta\left( 1 \atop \xi^{a}\right) $ and add the non-primitive $\zeta\left( 1 \atop \xi^{ap}\right) $ to the basis.\footnote{The cardinality of an orbit $ \lbrace \pm ap^{i} \mod N\rbrace$ is either the order of $p$ modulo $q$, if odd, or half of the order of $p$ modulo $q$, if even.} The situation for $N$ a product of several primes would be analogous, considering the different orbits associated to each prime; we just have to pay more attention to the choice of the representatives $a$ when orbits intersect: avoid withdrawing or adding an element already chosen for a previous orbit. \item[$\cdot$] The depth $1$ results also highlight a nice behavior in the cases $N=2,3,4,\mlq 6\mrq,8$: primitive roots of unity modulo conjugation form a basis (as in the case of prime powers) and if we restrict (for dimension reasons) the non-primitive roots to $1$ (or $\pm 1$ for $N=8$), it is annihilated in weight $1$ and in weight $>1$ modulo $p$.
\item[$\cdot$] In weight $1$, there always exists a $\mathbb{Z}$-basis.\footnote{Conrad and Zhao conjectured in \cite{CZ} that there exists a basis of MZV$_{\mu_{N}}$ for the $\mathbb{Z}$-module spanned by MZV$_{\mu_{N}}$ for each $N$ and fixed weight $w$, except $N=1$, $w=6,7$.} \end{itemize} \texttt{Example}: For $N=34$, the relations in depth $1$, weight $1$ lead to two orbits, with $(a)\mathrel{\mathop:}=\zeta^{\mathfrak{a}} \left( 1 \atop \xi^{a}_{N}\right) $: $$\begin{array}{ll} (2)= (16) +(1) & \quad \quad (6)=(3)+(14) \\ (16)=(8) +(9) & \quad \quad (14)=(7)+(10) \\ (8)= (4) +(13) & \quad \quad (10)=(5)+(12) \\ (4)= (2) +(15) & \quad \quad (12)=(11)+(6) \\ \end{array}$$ Hence a basis could be chosen as: $$\left\lbrace \zeta^{\mathfrak{a}}\left( 1 \atop \xi_{34}^{k}\right), k\in \lbrace 5,7,9,11,13,15,\boldsymbol{2,6} \rbrace \right\rbrace .$$ \paragraph{Motivic depth.} The \textit{motivic depth} of an element in $\mathcal{H}^{\mathcal{MT}_{N}}$ is defined, via the correspondence ($\ref{eq:phih}$), as the degree of the polynomial in the $(f_{i})$. \footnote{Beware, $\phi$ is non-canonical, but the degree is well defined.} It can also be defined recursively: for $\mathfrak{Z}\in \mathcal{H}^{N}$ of weight $n$, $$\begin{array}{lll} \mathfrak{Z} \text{ of motivic depth } 1 & \text{ if and only if } & \mathfrak{Z}\in \mathcal{F}_{1}^{\mathfrak{D}}\mathcal{H}^{N},\\ \mathfrak{Z} \text{ of motivic depth } \leq p & \text{ if and only if } & \left( \forall r< n, D_{r}(\mathfrak{Z}) \text{ of motivic depth } \leq p-1 \right). \end{array}$$ For $\mathfrak{Z}= \zeta^{\mathfrak{m}} \left( n_{1}, \ldots, n_{p} \atop \epsilon_{1}, \ldots, \epsilon_{p} \right) \in \mathcal{H}^{N}$ of motivic depth $p_{\mathfrak{m}}$, we clearly have the inequalities: $$ \text {depth } p \geq p_{c} \geq p_{\mathfrak{m}} \text{ motivic depth}, \quad \text{ where $ p_{c}$ is the smallest $i$ such that $\mathfrak{Z}\in \mathcal{F}_{i}^{\mathfrak{D}}\mathcal{H}^{N}$}.
$$ \texttt{Nota Bene:} For $N=2,3,4, \mlq 6\mrq, 8$, $p_{\mathfrak{m}}$ always coincides with $p_{c}$, whereas for $N=1$, they may differ. \subsection{Derivation space} Translating ($\ref{eq:dr}$) for cyclotomic MZV: \begin{lemm} \label{drz} $$D_{r}: \mathcal{H}_{n} \rightarrow \mathcal{L}_{r} \otimes \mathcal{H}_{n-r} $$ \begin{multline} D_{r} \left(\zeta^{\mathfrak{m}} \left({n_{1}, \ldots , n_{p} \atop \epsilon_{1}, \ldots ,\epsilon_{p}} \right)\right) = \delta_{r=n_{1}+ \cdots +n_{i}} \zeta^{\mathfrak{l}} \left({n_{1}, \cdots, n_{i} \atop \epsilon_{1}, \ldots, \epsilon_{i}}\right) \otimes \zeta^{\mathfrak{m}} \left( { n_{i+1},\cdots, n_{p} \atop \epsilon_{i+1}, \ldots, \epsilon_{p} }\right) \\ + \sum_{1 \leq i<j\leq p \atop \lbrace r \leq \sum_{k=i}^{j} n_{k} -1\rbrace} \left[ \delta_{ \sum_{k=i+1}^{j} n_{k} \leq r } \zeta^{\mathfrak{l}}_{r- \sum_{k=i+1}^{j}n_{k}} \left({ n_{i+1}, \ldots , n_{j} \atop \epsilon_{i+1}, \ldots, \epsilon_{j}}\right) +(-1)^{r} \delta_{ \sum_{k=i}^{j-1} n_{k} \leq r} \zeta^{\mathfrak{l}}_{r- \sum_{k=i}^{j-1}n_{k}} \left({ n_{j-1}, \cdots, n_{i} \atop \epsilon_{j-1}^{-1}, \ldots, \epsilon_{i}^{-1}}\right) \right] \\ \otimes \zeta^{\mathfrak{m}} \left( {\cdots, \sum_{k=i}^{j} n_{k}-r,\cdots \atop \cdots , \prod_{k=i}^{j}\epsilon_{k}, \cdots}\right) \end{multline} \end{lemm} \begin{proof} Straightforward from $(\ref{eq:dr})$, passing to MZV$_{\mu_{N}}$ notation. 
\end{proof} A key point is that the Galois action, and hence the coaction, respects the weight grading and the depth filtration\footnote{Notice that $\mathcal{F}_{0}^{\mathfrak{D}} \mathcal{L}=0$.}: $$D_{r} (\mathcal{H}_{n}) \subset \mathcal{L}_{r} \otimes_{\mathbb{Q}} \mathcal{H}_{n-r}.$$ $$ D_{r} (\mathcal{F}_{p}^{\mathfrak{D}} \mathcal{H}_{n}) \subset \mathcal{L}_{r} \otimes_{\mathbb{Q}} \mathcal{F}_{p-1}^{\mathfrak{D}} \mathcal{H}_{n-r}.$$ Indeed, the depth filtration is motivic, i.e.: $$\Delta (\mathcal{F}^{\mathfrak{D}}_{n}\mathcal{H}) \subset \sum_{p+q=n} \mathcal{F}^{\mathfrak{D}}_{p}\mathcal{A} \otimes \mathcal{F}^{\mathfrak{D}}_{q}\mathcal{H}.$$ Furthermore, $\mathcal{F}^{\mathfrak{D}}_{0}\mathcal{A}=\mathcal{F}^{\mathfrak{D}}_{0}\mathcal{L}=0$. Therefore, the right-hand factors of $\Delta(\bullet)$ lie in $\mathcal{F}^{\mathfrak{D}}_{q}\mathcal{H}$, with $q<n$. This feature of the derivations $D_{r}$ (decreasing the depth) will enable us to do some recursion on the depth throughout this work.\\ \\ Passing to the depth-graded, define: $$gr^{\mathfrak{D}}_{p} D_{r}: gr_{p}^{\mathfrak{D}} \mathcal{H} \rightarrow \mathcal{L}_{r} \otimes gr_{p-1}^{\mathfrak{D}} \mathcal{H} \text{, as the composition } (id\otimes gr_{p-1}^{\mathfrak{D}}) \circ D_{r |gr_{p}^{\mathfrak{D}}\mathcal{H}}.$$ By Lemma $\ref{drz}$, all the terms appearing in the left-hand side of the tensor product in $gr^{\mathfrak{D}}_{p} D_{r}$ have depth $1$.
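The combinatorial content of the derivations $D_{r}$, a single cut of interior length $r$, is easy to prototype. A small Python sketch, where an iterated integral $I(a_{0}; a_{1}, \ldots, a_{n}; a_{n+1})$ is encoded as the tuple of all its arguments (the function name is illustrative, and vanishing terms such as $I(a;b;b)$ are not filtered out):

```python
def D_r_cuts(word, r):
    """All cuts of interior length r in I(a_0; a_1, ..., a_n; a_{n+1}).
    Returns (left, right) pairs: left ~ I^l(a_p; a_{p+1}, ..., a_{p+r}; a_{p+r+1}),
    right ~ I^m(a_0; a_1, ..., a_p, a_{p+r+1}, ..., a_n; a_{n+1})."""
    n = len(word) - 2
    return [(word[p : p + r + 2], word[: p + 1] + word[p + r + 1 :])
            for p in range(0, n - r + 1)]

# Cuts of length 3 in I(0; -1, 1, 0, 0; 1), cf. the zeta(-1, 3) example:
cuts = D_r_cuts((0, -1, 1, 0, 0, 1), 3)
# Among them, the non-vanishing one: I(-1; 1, 0, 0; 1) (x) I(0; -1; 1)
assert ((-1, 1, 0, 0, 1), (0, -1, 1)) in cuts
```

Enumerating the length-$1$ and length-$3$ cuts of $(0,-1,1,0,0,1)$ and discarding the vanishing integrals reproduces the formulas for $D_{1}$ and $D_{3}$ of $\zeta^{\mathfrak{m}}(-1,3)$ given earlier.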
Hence, let's consider from now the derivations $D_{r,p}$:\nomenclature{$D_{r,p}$}{depth graded derivations} \begin{lemm} \label{Drp} $$\boldsymbol{D_{r,p}}: gr_{p}^{\mathfrak{D}} \mathcal{H} \rightarrow gr_{1}^{\mathfrak{D}} \mathcal{L}_{r} \otimes gr_{p-1}^{\mathfrak{D}} \mathcal{H} $$ $$ D_{r,p} \left(\zeta^{\mathfrak{m}} \left({n_{1}, \ldots , n_{p} \atop \epsilon_{1}, \ldots ,\epsilon_{p}} \right)\right) = \textsc{(a0) } \delta_{r=n_{1}}\ \zeta^{\mathfrak{l}} \left({r \atop \epsilon_{1}}\right) \otimes \zeta^{\mathfrak{m}} \left( { n_{2},\cdots \atop \epsilon_{2}, \cdots }\right) $$ $$\textsc{(a) } + \sum_{i=2}^{p-1} \delta_{n_{i}\leq r < n_{i}+ n_{i-1}-1} (-1)^{r-n_{i}} \binom {r-1}{r-n_{i}} \zeta^{\mathfrak{l}} \left({ r \atop \epsilon_{i}}\right) \otimes \zeta^{\mathfrak{m}} \left( {\cdots, n_{i}+n_{i-1}-r,\cdots \atop \cdots , \epsilon_{i-1}\epsilon_{i}, \cdots}\right) $$ $$ \textsc{(b) } -\sum_{i=1}^{p-1} \delta_{n_{i}\leq r < n_{i}+ n_{i+1}-1} (-1)^{n_{i}} \binom{r-1}{r-n_{i}} \zeta^{\mathfrak{l}} \left( {r \atop \epsilon_{i}^{-1}}\right) \otimes \zeta^{\mathfrak{m}} \left( {\cdots, n_{i}+n_{i+1}-r, \cdots \atop \cdots , \epsilon_{i+1}\epsilon_{i}, \cdots}\right) $$ $$\textsc{(c) } +\sum_{i=2}^{p-1} \delta_{ r = n_{i}+ n_{i-1}-1 \atop \epsilon_{i-1}\epsilon_{i}\neq 1} \left( (-1)^{n_{i}} \binom{r-1}{n_{i}-1} \zeta^{\mathfrak{l}} \left( {r \atop \epsilon_{i-1}^{-1}} \right) + (-1)^{n_{i-1}-1} \binom{r-1}{n_{i-1}-1} \zeta^{\mathfrak{l}} \left( {r \atop \epsilon_{i}} \right) \right)$$ $$\otimes \zeta^{\mathfrak{m}} \left( {\cdots, 1, \cdots \atop \cdots, \epsilon_{i-1} \epsilon_{i}, \cdots}\right) $$ $$ \textsc{(d) } +\delta_{ n_{p} \leq r < n_{p}+ n_{p-1}-1} (-1)^{r-n_{p}} \binom{r-1}{r-n_{p}} \zeta^{\mathfrak{l}} \left({r \atop \epsilon_{p}} \right) \otimes \zeta^{\mathfrak{m}} \left( {\cdots, n_{p-1}+n_{p}-r\atop \cdots, \epsilon_{p-1}\epsilon_{p}}\right) $$ $$\textsc{(d') } +\delta_{ r = n_{p}+ n_{p-1}-1 \atop \epsilon_{p-1}\epsilon_{p}\neq 1} 
(-1)^{n_{p-1}}\left( \binom{r-1}{n_{p}-1} \zeta^{\mathfrak{l}} \left( {r \atop \epsilon_{p-1}^{-1}} \right) - \binom{r-1}{n_{p-1}-1} \zeta^{\mathfrak{l}} \left( {r \atop \epsilon_{p}} \right) \right) \otimes \zeta^{\mathfrak{m}} \left( { \cdots, 1 \atop \cdots \epsilon_{p-1}\epsilon_{p}}\right) .$$ \end{lemm} \textsc{Remarks}: \begin{itemize} \item[$\cdot$] The terms of type \textsc{(d, d')}, corresponding to a \textit{deconcatenation}, play a particular role since modulo some congruences (using depth $1$ result for the left side of the coaction), we will get rid of the other terms in the cases $N=2,3,4,\mlq 6\mrq,8$ for the elements in the basis. In the dual point of view of Lie algebra, like in Deligne's article \cite{De} or Wojtkowiak \cite{Wo}, this corresponds to showing that the Ihara bracket $\lbrace,\rbrace$ on these elements modulo some vector space reduces to the usual bracket $[,]$. More generally, for other bases, like Hoffman's one for $N=1$, the idea is still to find an appropriate filtration on the conjectural basis, such that the coaction in the graded space acts on this family, modulo some space, as the deconcatenation, as for the $f_{i}$ alphabet. Indeed, on $H$ ($\ref{eq:phih}$), the weight graded part of the coaction, $D_{r}$ is defined by: \begin{equation}\label{eq:derf} D_{r} : \quad H_{n} \quad \longrightarrow \quad L_{r} \otimes H_{n-r} \quad \quad\text{ such that :} \end{equation} $$ f^{j_{1}}_{i_{1}} \cdots f^{j_{k}}_{i_{k}}\longmapsto\left\{ \begin{array}{ll} f^{j_{1}}_{i_{1}} \otimes f^{j_{2}}_{i_{2}} \ldots f^{j_{k}}_{i_{k}} & \text{ if } i_{1}=r .\\ 0 & \text{ else }.\\ \end{array} \right.$$ \item[$\cdot$] One fundamental feature for a family of motivic multiple zeta values (which makes it \say{natural} and simple) is the \textit{stability} under the coaction. 
For instance, if we look at the following family which appears in Chapter $5$: $$\zeta^{\mathfrak{m}}\left(n_{1}, \cdots, n_{p-1}, n_{p} \atop \epsilon_{1}, \ldots , \epsilon_{p-1},\epsilon_{p}\right) \quad \text{ with } \epsilon_{p}\in\mu_{N} \quad \text{primitive} \quad \text{ and } (\epsilon_{i})_{i<p} \quad \text{non primitive}.$$ If $N$ is a power of a prime, this family is stable under the coaction. \footnote{Since in this case, $(\text{non primitive}) \cdot (\text{ non primitive})=$ non primitive and non primitive $\cdot$ primitive $=$ primitive root. Note also, for dimension reasons, if we are looking for a basis in this form, we should have $N-\varphi(N)\geq \frac{\varphi(N)}{2}$, which comes down here to the case where $N$ is a power of $2$ or $3$.} It is also stable under the Galois action if we only need to take $1$ as a non primitive ($1$-dimensional case), as for $\mathcal{MT} (\mathcal{O}_{6})$.\\ \end{itemize} \begin{proof} Straightforward from $\ref{drz}$, using the properties of motivic iterated integrals previously listed ($\S \ref{propii}$). Terms of type \textsc{(a)} correspond to cuts from a $0$ (possibly the very first one) to a root of unity, $\textsc{(b)}$ terms from a root of unity to a $0$, $\textsc{(c)}$ terms between two roots of unity and $\textsc{(d,d')}$ terms are the cuts ending in the last $1$, called \textit{deconcatenation terms}.
\end{proof} \paragraph{Derivation space.} By Lemma $2.4.1$ (depth $1$ results), once we have chosen a basis for $gr_{1}^{\mathfrak{D}} \mathcal{L}_{r}$, composed by some $\zeta^{\mathfrak{a}}(r_{i};\eta_{i})$, we can well define: \footnote{Without passing to the depth-graded, we could also define $D^{\eta}_{r}$ as $D_{r}: \mathcal{H}\rightarrow gr^{\mathfrak{D}}_{1}\mathcal{L}_{r} \otimes \mathcal{H}$ followed by $\pi^{\eta}_{r}\otimes id$ where $\pi^{\eta}:gr^{\mathfrak{D}}_{1}\mathcal{L}_{r} \rightarrow \mathbb{Q}$ is the projection on $\zeta^{\mathfrak{m}}\left( r \atop \eta \right) $, once we have fixed a basis for $gr^{\mathfrak{D}}_{1}\mathcal{L}_{r}$; and define as above $\mathscr{D}_{r}$ as the set of the $D^{\eta}_{r,p}$, for $\zeta^{\mathfrak{m}}(r,\eta)$ in the basis of $gr_{1}^{\mathfrak{D}} \mathcal{A}_{r}$.} \begin{itemize} \item[$(i)$] For each $(r_{i}, \eta_{i})$:\nomenclature{$D^{\eta}_{r,p}$}{defined from $D_{r,p}$ followed by a projection} \begin{equation} \label{eq:derivnp} \boldsymbol{D^{\eta_{i}}_{r_{i},p}}: gr_{p}^{\mathfrak{D}}\mathcal{H} \rightarrow gr_{p-1}^{\mathfrak{D}} \mathcal{H}, \end{equation} as the composition of $D_{r_{i},p}$ followed by the projection: $$\pi^{\eta}: gr_{1}^{\mathfrak{D}} \mathcal{L}_{r}\otimes gr_{p-1}^{\mathfrak{D}} \mathcal{H}\rightarrow gr_{p-1}^{\mathfrak{D}} \mathcal{H}, \quad \quad\zeta^{\mathfrak{m}}(r; \epsilon) \otimes X \mapsto c_{\eta, \epsilon, r} X , $$ with $c_{\eta, \epsilon, r}\in \mathbb{Q}$ the coefficient of $\zeta^{\mathfrak{m}}(r; \eta)$ in the decomposition of $\zeta^{\mathfrak{m}}(r; \epsilon)$ in the basis. \item[$(ii)$] \begin{equation}\label{eq:setdrp} \boldsymbol{\mathscr{D}_{r,p}} \text{ as the set of } D^{\eta_{i}}_{r_{i},p} \text{ for } \zeta^{\mathfrak{m}}(r_{i},\eta_{i}) \text{ in the chosen basis of } gr_{1}^{\mathfrak{D}} \mathcal{A}_{r}. 
\end{equation} \item[$(iii)$] The \textit{derivation set} $\boldsymbol{\mathscr{D}}$ as the (disjoint) union: $\boldsymbol{\mathscr{D}} \mathrel{\mathop:}= \sqcup_{r>0} \left\lbrace \mathscr{D}_{r} \right\rbrace $. \end{itemize}\nomenclature{$\mathscr{D}$}{the derivation set} \textsc{Remarks}: \begin{itemize} \item[$\cdot$] In the cases $N=2,3,4,\mlq 6\mrq$, the cardinality of $\mathscr{D}_{r,p}$ is one (or $0$ if $r$ is even and $N=2$, or if $(r,N)=(1,6)$), whereas for $N=8$ the space generated by these derivations is $2$-dimensional, generated by $D^{\xi_{8}}_{r}$ and $D^{-\xi_{8}}_{r}$ for instance. \item[$\cdot$] Following the same procedure for the non-canonical Hopf comodule $H$ defined in $(\ref{eq:phih})$, isomorphic to $\mathcal{H}^{\mathcal{MT}_{N}}$, since the coproduct on $H$ is the deconcatenation $(\ref{eq:derf})$, leads to the following derivation operators: $$\begin{array}{llll} D^{j}_{r} : & H_{n} & \rightarrow & H_{n-r} \\ & f^{j_{1}}_{i_{1}} \cdots f^{j_{k}}_{i_{k}} & \mapsto & \left\{ \begin{array}{ll} f^{j_{2}}_{i_{2}} \ldots f^{j_{k}}_{i_{k}} & \text{ if } j_{1}=j \text{ and } i_{1}=r .\\ 0 & \text{ else }.\\ \end{array} \right. \end{array}.$$ \end{itemize} Now, consider the following map, a depth-graded version of the derivations above, which is fundamental for several linear independence results in $\S 4.3$ and Chapter $5$:\nomenclature{$\partial _{n,p}$}{a depth graded version of the infinitesimal coactions} \begin{equation} \label{eq:pderivnp} \boldsymbol{\partial _{n,p}} \mathrel{\mathop:}=\oplus_{r<n\atop D\in \mathscr{D}_{r,p}} D : gr_{p}^{\mathfrak{D}}\mathcal{H}_{n} \rightarrow \oplus_{r<n } \left( gr_{p-1}^{\mathfrak{D}}\mathcal{H}_{n-r}\right) ^{\oplus \text{ card } \mathscr{D}_{r,p}} \end{equation} \paragraph{Kernel of $\boldsymbol{D_{<n}}$. } A key point for the use of these derivations is the ability to prove some relations (and possibly lift some from MZV to motivic MZV) up to rational coefficients.
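Coming back to the weight-one discussion for $N=pq$ above (e.g. $N=34$): the orbits, the "loops", under the action of $p$ and $-1$ can be computed mechanically. A minimal Python sketch (the function name is illustrative):

```python
def orbits(q, p):
    """Orbits of the subgroup generated by p and -1,
    acting multiplicatively on (Z/qZ)*."""
    remaining = set(range(1, q))
    result = []
    while remaining:
        orbit, frontier = set(), {min(remaining)}
        while frontier:
            x = frontier.pop()
            if x not in orbit:
                orbit.add(x)
                # close the orbit under x -> -x and x -> p*x (mod q)
                frontier |= {(-x) % q, (x * p) % q}
        result.append(sorted(orbit))
        remaining -= orbit
    return result

# N = 34 = 2 * 17: the action of <2, -1> on (Z/17Z)* has two orbits,
# matching the two loops in the N = 34 example.
assert len(orbits(17, 2)) == 2
```

The same routine handles any $N=pq$; each orbit found dictates one primitive root to remove and one non-primitive root to add to the basis, as in the remark above.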
This comes from the following theorem, looking at primitive elements:\nomenclature{$D_{<n}$ }{is defined as $\oplus_{r<n} D_{r}$} \begin{theo} Let $D_{<n}\mathrel{\mathop:}= \oplus_{r<n} D_{r}$, and fix a basis $\lbrace \zeta^{\mathfrak{a}}\left( n \atop \eta_{j} \right) \rbrace$ of $gr_{1}^{\mathfrak{D}} \mathcal{A}_{n}$. Then: $$\ker D_{<n}\cap \mathcal{H}^{N}_{n} = \left\lbrace \begin{array}{ll} \mathbb{Q}\zeta^{\mathfrak{m}}\left( n \atop 1 \right) & \text{ for } N=1,2 \text{ and } n\neq 1.\\ \oplus \mathbb{Q} \pi^{\mathfrak{m}} \bigoplus_{1 \leq j \leq a_{N}} \mathbb{Q} \zeta^{\mathfrak{m}}\left( 1 \atop \eta_{j} \right). & \text{ for } N>2, n=1.\\ \oplus \mathbb{Q} (\pi^{\mathfrak{m}})^{n} \bigoplus_{1 \leq j \leq b_{N}} \mathbb{Q} \zeta^{\mathfrak{m}}\left( n \atop \eta_{j} \right). & \text{ for } N>2, n>1. \end{array}\right. .$$ \end{theo} \begin{proof} It comes from the injective morphism of graded Hopf comodules $(\ref{eq:phih})$, which is an isomorphism for $N=1,2,3,4,\mlq 6\mrq,8$: $$\phi: \mathcal{H}^{N} \xrightarrow[\sim]{n.c} H^{N} \mathrel{\mathop:}= \mathbb{Q}\left\langle \left( f^{j}_{i}\right) \right\rangle \otimes_{\mathbb{Q}} \mathbb{Q} \left[ g_{1}\right] .$$ Indeed, for $H^{N}$, the analogue statement is obviously true, for $\Delta'=1\otimes \Delta+ \Delta\otimes 1$: $$\ker \Delta' \cap H_{n} = \oplus_{j} f^{j}_{n} \oplus g_{1}^{n} .$$ \end{proof} \begin{coro}\label{kerdn} Let $D_{<n}\mathrel{\mathop:}= \oplus_{r<n} D_{r}$.\footnote{For $N=1$, we restrict to $r$ odd $>1$; for $N=2$ we restrict to r odd; for $N=\mlq 6\mrq$ we restrict to $r>1$.} Then: $$\ker D_{<n}\cap \mathcal{H}^{N}_{n} = \left\lbrace \begin{array}{ll} \mathbb{Q}\zeta^{\mathfrak{m}}\left( n \atop 1 \right) & \text{ for } N=1,2.\\ \mathbb{Q} (\pi^{\mathfrak{m}})^{n} \oplus \mathbb{Q} \zeta^{\mathfrak{m}}\left( n \atop \xi_{N} \right) & \text{ for } N=3,4,\mlq 6\mrq.\\ \mathbb{Q} (\pi^{\mathfrak{m}})^{n} \oplus \mathbb{Q} \zeta^{\mathfrak{m}}\left( n \atop \xi_{8} \right) 
\oplus \mathbb{Q} \zeta^{\mathfrak{m}}\left( n \atop -\xi_{8} \right) & \text{ for } N=8.\\ \end{array}\right. .$$ \end{coro} In particular, by this result (for $N=1,2$), proving an identity between motivic MZV (resp. motivic Euler sums) amounts to the following two steps: \begin{enumerate} \item Prove that the coaction agrees on both sides, by computing $D_{r}$ for each $r>0$ smaller than the weight. If the families involved are not stable under the coaction, this step may require further identities. \item Use the corresponding analytic result for MZV (resp. Euler sums) to deduce the remaining rational coefficient; if the analytic counterpart is unknown, we can at least evaluate this rational coefficient numerically. \end{enumerate} Some examples are given in $\S 6.3 $ and $\S 4.4.3$.\\ Another important use of this corollary is the decomposition of (motivic) multiple zeta values into a conjectured basis, which has been explained by F. Brown in \cite{Br1}.\footnote{He gave an exact numerical algorithm for this decomposition, in which a rational coefficient has to be evaluated at each step; hence, for other roots of unity, the generalization, albeit easily stated, is harder to explore numerically.}\\ However, for larger $N$, several rational coefficients appear at each step, and we would need linear independence results before concluding.\\ \chapter{Results} \section{Euler $\star,\sharp$ sums \textit{[Chapter 4]}} In Chapter $4$, we focus on motivic Euler sums ($N=2$), abbreviated as ES, and motivic multiple zeta values ($N=1$), with in particular some new bases for the vector space of MMZV: one with Euler $\sharp$ sums and, under an analytic conjecture, the so-called \textit{Hoffman $\star$ family}. These two variants of Euler sums are (cf. Definition $4.1.1$): \begin{description} \item[Euler $\star$ sums] correspond to the analogous multiple sums of ES, with $\leq$ instead of strict inequalities.
They satisfy: \begin{equation} \label{eq:esstar}\zeta ^{\star}(n_{1}, \ldots, n_{p})= \sum_{\circ=\mlq + \mrq \text{ or } ,} \zeta (n_{1}\circ \cdots \circ n_{p}). \end{equation} \texttt{Notation:} This $\mlq + \mrq$ operation on $n_{i}\in\mathbb{Z}$ sums the absolute values, while the signs are multiplied.\\ For instance, in depth $2$ with $n_{1},n_{2}>0$: $\zeta^{\star}(n_{1},n_{2})=\zeta(n_{1},n_{2})+\zeta(n_{1}+n_{2})$.\\ These sums have already been studied in many papers: $\cite{BBB}, \cite{IKOO}, \cite{KST}, \cite{LZ}, \cite{OZ}, \cite{Zh3}$. \item[Euler $\sharp$ sums] are, similarly, linear combinations of MZV, but with $2$-power coefficients: \begin{equation} \label{eq:essharp} \zeta^{\sharp}(n_{1}, \ldots, n_{p})= \sum_{\circ=\mlq + \mrq \text{ or } ,} 2^{p-n_{+}} \zeta(n_{1}\circ \cdots \circ n_{p}), \quad \text{ with } n_{+} \text{ the number of } +. \end{equation} In depth $2$ with $n_{1},n_{2}>0$, this reads $\zeta^{\sharp}(n_{1},n_{2})=4\zeta(n_{1},n_{2})+2\zeta(n_{1}+n_{2})$. \end{description} We also pave the way for a motivic version of a generalization of Linebarger and Zhao's equality (Conjecture $\ref{lzg}$), which expresses each motivic multiple zeta $\star$ value as a motivic Euler $\sharp$ sum; under this conjecture, the Hoffman $\star$ family is a basis, identical to the one presented with Euler $\sharp$ sums.\\ \\ The first (naive) idea, when looking for a basis of the space of multiple zeta values, is to choose: $$\lbrace \zeta\left( 2n_{1}+1,2n_{2}+1, \ldots, 2n_{p}+1 \right) (2 i \pi)^{2s}, n_{i}\in\mathbb{N}^{\ast}, s\in \mathbb{N} \rbrace .$$ However, considering the Broadhurst-Kreimer conjecture $(\ref{eq:bkdepth})$, the depth filtration clearly does \textit{not} behave so nicely in the case of MZV \footnote{Remark that, as we will see in Chapter $5$, or as can be seen in $\cite{De}$, for $N=2,3,4,\mlq 6\mrq,8$ the depth filtration is dual to the descending central series of $\mathcal{U}$ and, in that sense, does \textit{behave well}.
For instance, the following family is indeed a basis of motivic Euler sums: $$\lbrace \zeta^{\mathfrak{m}}\left( 2n_{1}+1,2n_{2}+1, \ldots, 2n_{p-1}+1,-(2n_{p}+1) \right) (\mathbb{L}^{\mathfrak{m}})^{2s}, n_{i}\in\mathbb{N}, s\in \mathbb{N} \rbrace .$$ } and already in weight $12$, these elements are not linearly independent: $$28\zeta(9,3)+150\zeta(7,5)+168\zeta(5,7) = \frac{5197}{691}\zeta(12).$$ Consequently, in order to find a basis of motivic MZV, we have to: \begin{itemize} \item[\texttt{Either}: ] Allow \textit{higher} depths, as for the Hoffman basis (proved by F. Brown in $\cite{Br2}$), or its $\star$ analogue: $$\texttt{ Hoffman } \star \quad : \left\lbrace \zeta^{\star, \mathfrak{m}} \left( \boldsymbol{2}^{a_{0}},3, \boldsymbol{2}^{a_{1}}, \ldots, 3, \boldsymbol{2}^{a_{p}} \right), a_{i}\geq 0 \right\rbrace .$$ The analogous real Hoffman $\star$ family was also conjectured (in $\cite{IKOO}$, Conjecture $1$) to be a basis of the space of MZV. Up to an analytic conjecture ($\ref{conjcoeff}$), we prove (in $\S 4.4$) that the motivic Hoffman $\star$ family is a basis of $\mathcal{H}^{1}$, the space of motivic MZV\footnote{Up to this analytic statement, $\ref{conjcoeff}$, the Hoffman $\star$ family is then a generating family for MZV.}. In this case, the notion of \textit{motivic depth} (explained in $\S 2.4.3$) is the number of $3$'s, and is here in general much smaller than the depth. \item[\texttt{Or}: ] Pass via motivic Euler sums, as for the Euler $\sharp$ basis given below; this is also another illustration of the descent idea of Chapter $5$: roughly, it enables one to reach motivic periods in $\mathcal{H}^{N'}$ coming from above, i.e. via motivic periods in $\mathcal{H}^{N}$, for $N' \mid N$.
\end{itemize} More precisely, let us look at the following motivic Euler $\sharp$ sums: \begin{theom} The motivic Euler sums $\zeta^{\sharp, \mathfrak{m}} \left( \lbrace \overline{\text{even }}, \text{odd } \rbrace^{\times} \right) $ are motivic geometric periods of $\mathcal{MT}(\mathbb{Z})$. Hence, they are $\mathbb{Q}$-linear combinations of motivic multiple zeta values.\footnote{Since, by $\cite{Br2}$, we know that the Frobenius invariant geometric motivic periods of $\mathcal{MT}(\mathbb{Z})$ are $\mathbb{Q}$-linear combinations of motivic multiple zeta values.} \end{theom} \texttt{Notations}: Recall that an overline $\overline{x}$ corresponds to a negative sign, i.e. to $-x$ in the argument. Here, the family considered is a family of Euler $\sharp$ sums with only positive odd and negative even integers as arguments.\\ This motivic family is even a generating family of motivic MZV, from which we are able to extract a basis: \begin{theom} A basis of $\boldsymbol{\mathcal{P}_{\mathcal{MT}(\mathbb{Z}), \mathbb{R}}^{\mathfrak{m},+}}=\mathcal{H}^{1}$, the space of motivic multiple zeta values, is: $$\lbrace\zeta^{\sharp,\mathfrak{m}} \left( 2a_{0}+1,2a_{1}+3,\cdots, 2 a_{p-1}+3, \overline{2a_{p}+2}\right) \text{ , } a_{i}\geq 0 \rbrace .$$ \end{theom} The proof is based on the good behaviour of this family with respect to the coaction and the depth filtration; the suitable filtration, corresponding to the \textit{motivic depth} for this family, is the usual depth minus $1$.\\ Combining these results and applying the period map: \begin{corol} Each Euler sum $\zeta^{\sharp} \left( \lbrace \overline{\text{even }}, \text{odd } \rbrace^{\times} \right) $ (i.e.
with positive odd and negative even integers as arguments) is a $\mathbb{Q}$-linear combination of multiple zeta values of the same weight.\\ Conversely, each multiple zeta value of depth $<d$ is a $\mathbb{Q}$-linear combination of elements $\zeta^{\sharp} \left( 2a_{0}+1,2a_{1}+3,\cdots, 2 a_{p-1}+3, \overline{2a_{p}+2}\right) $ of the same weight, with $a_{i}\geq 0$, $p\leq d$. \end{corol} \textsc{Remarks}: \begin{itemize} \item[$\cdot$] Finding a \textit{good} basis for the space of motivic multiple zeta values is a fundamental question. The Hoffman basis may be unsatisfactory for various reasons, while this basis of Euler sums (linear combinations with $2$-power coefficients) may appear slightly more natural, in particular since the motivic depth is here the depth minus $1$. However, neither of these two bases is a basis of the $\mathbb{Z}$-module, and the primes appearing in the determinant of the \textit{passage matrix}\footnote{The inverse of the matrix expressing the considered basis in terms of a $\mathbb{Z}$-basis.} grow rather fast.\footnote{Don Zagier has checked this for small weights with high precision; he suggested that the primes involved in the case of this basis could have some predictable features, such as being divisors of $2^{n}-1$.} \item[$\cdot$] Looking at how periods of $\mathcal{MT}(\mathbb{Z})$ embed into periods of $\mathcal{MT}(\mathbb{Z}[\frac{1}{2}])$ is a fragment of the Galois descent ideas of Chapter $5$.\\ Euler sums which belong to the $\mathbb{Q}$-vector space of multiple zeta values, sometimes called \textit{honorary}, have been studied notably by D. Broadhurst (cf. $\cite{BBB1}$), among others. We then define \textit{unramified} motivic Euler sums as motivic ES which are $\mathbb{Q}$-linear combinations of motivic MZVs, i.e. which lie in $\mathcal{H}^{1}$. Being unramified for a motivic period implies that its period is unramified, i.e.
honorary; some examples of unramified motivic ES are given in $\S 6.2$, as well as by the family above. In Chapter 5, we give a criterion ($\ref{criterehonoraire}$) for motivic Euler sums to be unramified, which generalizes to some other roots of unity; by the period map, this criterion also applies to Euler sums. \item[$\cdot$] For these two theorems, in order to simplify the coaction, we crucially need a motivic identity in the coalgebra $\mathcal{L}$, proved in $\S 4.2$, coming from the octagon relation pictured in Figure $\ref{fig:octagon2}$. More precisely, we need to consider the linearized version of the part of this relation which is anti-invariant under the Frobenius at infinity, in order to prove the following hybrid relation (Theorem $\ref{hybrid}$), for $n_{i}\in\mathbb{N}^{\ast}$, $\epsilon_{i}\in\pm 1$: $$\zeta^{\mathfrak{l}}_{k}\left(n_{0},\cdots, n_{p} \atop \epsilon_{0} , \ldots, \epsilon_{p} \right) + \zeta^{\mathfrak{l}}_{n_{0}+k}\left( n_{1}, \ldots, n_{p} \atop \epsilon_{1} , \ldots, \epsilon_{p} \right) \equiv (-1)^{w+1}\left( \zeta^{\mathfrak{l}}_{k}\left( n_{p}, \ldots, n_{0} \atop \epsilon_{p} , \ldots, \epsilon_{0}\right) + \zeta^{\mathfrak{l}}_{k+n_{p}}\left( n_{p-1}, \ldots,n_{0} \atop \epsilon_{p-1}, \ldots, \epsilon_{0}\right) \right).$$ Thanks to this hybrid relation and the antipodal relations presented in $\S 4.2.1$, the expression of the coaction is considerably simplified in Appendix $A.1$. \end{itemize} \begin{theom} If the analytic conjecture ($\ref{conjcoeff}$) holds, then the motivic \textit{Hoffman} $\star$ family $\lbrace \zeta^{\star,\mathfrak{m}} (\lbrace 2,3 \rbrace^{\times})\rbrace$ is a basis of $\mathcal{H}^{1}$, the space of MMZV. \end{theom} \texttt{Nota Bene:} A MMZV $\star$, in the depth graded, is obviously equal to the corresponding MMZV. However, the motivic Hoffman (i.e. with only $2$'s and $3$'s) multiple zeta $(\star)$ values are almost all zero in the depth graded (the \textit{motivic depth} there being the number of $3$'s).
Hence, the analogous result for the non-$\star$ case\footnote{I.e. that the motivic Hoffman family is a basis of the space of MMZV, cf. $\cite{Br1}$.}, proved by F. Brown, does not make the result in the $\star$ case any simpler.\\ \\ Denote by $\mathcal{H}^{2,3}$\nomenclature{$\mathcal{H}^{2,3}$}{the $\mathbb{Q}$-vector space spanned by the motivic Hoffman $\star$ family} the $\mathbb{Q}$-vector space spanned by the motivic Hoffman $\star$ family. The idea of the proof is similar to that of the non-star case treated by Francis Brown. We define an increasing filtration $\mathcal{F}^{L}_{\bullet}$ on $\mathcal{H}^{2,3}$, called the \textit{level}, such that:\footnote{Beware, this notion of level is different from the level associated to a descent in Chapter $5$. It is similar to the level notion for the Hoffman basis in F. Brown's paper $\cite{Br2}$. It corresponds to the motivic depth, as we will see in the course of the proof.}\nomenclature{$\mathcal{F}^{L}_{l}$}{level filtration on $\mathcal{H}^{2,3}$} \begin{center} $\mathcal{F}^{L}_{l}\mathcal{H}^{2,3}$ is spanned by the $\zeta^{\star,\mathfrak{m}} (\boldsymbol{2}^{a_{0}},3,\cdots,3, \boldsymbol{2}^{a_{l}}) $ with at most $l$ $3$'s. \end{center} One key feature is that the vector space $\mathcal{F}^{L}_{l}\mathcal{H}^{2,3}$ is stable under the action of $\mathcal{G}$.\\ The linear independence is then proved by a recursion on the level and on the weight, using the injectivity of a map $\partial$, where $\partial$ comes out of the level- and weight-graded part of the coaction $\Delta$ (cf. $\S 4.4.1$). The injectivity is proved via $2$-adic properties of some coefficients, using Conjecture $\ref{conjcoeff}$.\\ One noteworthy difference is that, when computing the coaction on the motivic MZV$^{\star}$, some motivic MZV$^{\star\star}$ arise, which are a non-convergent analogue of MZV$^{\star}$ and have to be renormalized. Therefore, where F.
Brown in the non-star case needed an analytic formula proved by Don Zagier ($\cite{Za}$), we need some slightly more complicated identities (in Lemma $\ref{lemmcoeff}$), because the elements involved, such as $\zeta^{\star \star,\mathfrak{m}} (\boldsymbol{2}^{a},3, \boldsymbol{2}^{b}) $ for instance, are not of depth $1$ but are linear combinations of products of depth-$1$ motivic MZV times a power of $\pi$.\\ \\ \\ These two bases for motivic multiple zeta values turn out to be identical, in view of the following, more general, conjectural motivic identity: \begin{conje} For $a_{i},c_{i} \in \mathbb{N}^{\ast}$, $c_{i}\neq 2$, \begin{equation} \zeta^{\star, \mathfrak{m}} \left( \boldsymbol{2}^{a_{0}},c_{1},\cdots,c_{p}, \boldsymbol{2}^{a_{p}}\right) = \end{equation} $$(-1)^{1+\delta_{c_{1}}} \zeta^{\sharp, \mathfrak{m}} \left( \pm (2a_{0}+1-\delta_{c_{1}}),\boldsymbol{1}^{ c_{1}-3},\cdots,\boldsymbol{1}^{ c_{i}-3 },\pm(2a_{i}+3-\delta_{c_{i}}-\delta_{c_{i+1}}), \ldots, \pm ( 2 a_{p}+2-\delta_{c_{p}}) \right) . $$ where the sign $\pm$ is always $-$ for an even argument and $+$ for an odd one, $\delta_{c}\mathrel{\mathop:}=\delta_{c=1}$ is a Kronecker symbol, and $\boldsymbol{1}^{n}:=\boldsymbol{1}^{\max(0,n)}$ is a sequence of $n$ $1$'s if $n\in\mathbb{N}$, and an empty sequence otherwise. \end{conje} This conjecture expresses each motivic MZV$^{\star}$ as a linear combination of motivic Euler sums, which gives another illustration of the Galois descent between the Hopf algebra of motivic MZV and the Hopf algebra of motivic Euler sums.\\ \\ \texttt{Nota Bene}: Such a \textit{motivic relation} between MMZV$_{\mu_{N}}$ is stronger than its analogue between MZV$_{\mu_{N}}$, since it contains more information; it implies many other relations because of its Galois conjugates. This explains why it is not always simple to lift an identity from MZV to MMZV via Corollary $\ref{kerdn}$.
If the family concerned is not stable under the coaction, as for $(iv)$ in Lemma $\ref{lemmcoeff}$, we may need other analytic equalities before concluding.\\ \\ This conjecture implies in particular the following motivic identities, whose analogues for real Euler sums are proved as indicated in brackets\footnote{Beware, only the identity for real Euler sums is proved; the motivic analogue remains a conjecture.}: \begin{description} \item[Two-One] [For $c_{i}=1$, Ohno-Zudilin: $\cite{OZ}$]: \begin{equation} \zeta^{\star, \mathfrak{m}} (\boldsymbol{2}^{a_{0}},1,\cdots,1, \boldsymbol{2}^{a_{p}})= - \zeta^{\sharp, \mathfrak{m}} \left( \overline{2a_{0}}, 2a_{1}+1, \ldots, 2a_{p-1}+1, 2 a_{p}+1\right) . \end{equation} \item[Three-One] [For $c_{i}$ alternately $1$ and $3$, Zagier's conjecture, proved in $\cite{BBB}$] \begin{equation} \zeta^{\star, \mathfrak{m}} (\boldsymbol{2}^{a_{0}},1,\boldsymbol{2}^{a_{1}},3 \cdots,1, \boldsymbol{2}^{a_{p-1}}, 3, \boldsymbol{2}^{a_{p}}) = -\zeta^{\sharp, \mathfrak{m}} \left( \overline{2a_{0}}, \overline{2a_{1}+2}, \ldots, \overline{2a_{p-1}+2}, \overline{2 a_{p}+2} \right) . \end{equation} \item[Linebarger-Zhao $\star$] [With $c_{i}\geq 3$, Linebarger-Zhao in $\cite{LZ}$]: \begin{equation} \zeta^{\star, \mathfrak{m}} \left( \boldsymbol{2}^{a_{0}},c_{1},\cdots,c_{p}, \boldsymbol{2}^{a_{p}}\right) = -\zeta^{\sharp, \mathfrak{m}} \left( 2a_{0}+1,\boldsymbol{1}^{ c_{1}-3 },\cdots,\boldsymbol{1}^{ c_{i}-3 },2a_{i}+3, \ldots, \overline{ 2 a_{p}+2} \right) \end{equation} In particular, restricting to all $c_{i}=3$: \begin{equation}\label{eq:LZhoffman} \zeta^{\star, \mathfrak{m}} \left( \boldsymbol{2}^{a_{0}},3,\cdots,3,\boldsymbol{ 2}^{a_{p}}\right) = - \zeta^{\sharp, \mathfrak{m}} \left( 2a_{0}+1, 2a_{1}+3, \ldots, 2a_{p-1}+3, \overline{2 a_{p}+2}\right) .
\end{equation} \end{description} \texttt{Nota Bene}: Hence the previous conjecture, through its special case $(\ref{eq:LZhoffman})$, implies that the motivic Hoffman $\star$ family is a basis, since we proved that the right-hand side of $(\ref{eq:LZhoffman})$ forms a basis: $$ \text{ Conjecture } \ref{lzg} \quad \Longrightarrow \quad \text{ Hoffman } \star \text{ is a basis of MMZV} . $$ \\ \texttt{Examples:} The previous conjecture would give relations such as:\\ $$\begin{array}{ll} \zeta^{\star, \mathfrak{m}} (2,2,3,3,2) =-\zeta^{\sharp, \mathfrak{m}} (5,3,-4) &\zeta^{\star, \mathfrak{m}} (5,6,2) =-\zeta^{\sharp, \mathfrak{m}} (1,1,1,3,1,1,1,-4) \\ \zeta^{\star, \mathfrak{m}} (1,6) =\zeta^{\sharp, \mathfrak{m}} (-2,1,1,1,-2) &\zeta^{\star, \mathfrak{m}} (2,4, 1, 2,2,3) =-\zeta^{\sharp, \mathfrak{m}} (3,1, -2, -6,-2) . \end{array}$$ For the first one, write $\zeta^{\star, \mathfrak{m}} (2,2,3,3,2)=\zeta^{\star, \mathfrak{m}} (\boldsymbol{2}^{2},3,\boldsymbol{2}^{0},3,\boldsymbol{2}^{1})$, so that $a_{0}=2$, $a_{1}=0$, $a_{2}=1$ in $(\ref{eq:LZhoffman})$, which yields $-\zeta^{\sharp, \mathfrak{m}} (5,3,\overline{4})=-\zeta^{\sharp, \mathfrak{m}} (5,3,-4)$. \section{Galois Descents \textit{[Chapter 5]}} There, we study Galois descents for the categories of mixed Tate motives $\mathcal{MT}_{\Gamma_{N}}$, and how periods of $\pi_{1}^{un}(X_{N'})$ embed into periods of $\pi_{1}^{un}(X_{N})$ for $N'\mid N$.
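As a guiding special case, given here only as a restatement of the invariance property recalled below, for the pair $N=2$, $N'=1$:
$$\left( \mathcal{H}^{\mathcal{MT}_{2}}\right) ^{\mathcal{G}^{2/1}}=\mathcal{H}^{\mathcal{MT}_{1}},$$
i.e. the motivic Euler sums invariant under the Galois subgroup $\mathcal{G}^{2/1}$ are exactly the motivic multiple zeta values; this is the descent made explicit in the example at the end of this section.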
Indeed, for each $N, N'$ with $N'|N$, there are the motivic Galois group $\mathcal{G}^{\mathcal{MT}_{N}}$ acting on $\mathcal{H}^{\mathcal{MT}_{N}} $ and a Galois descent between $\mathcal{H}^{\mathcal{MT}_{N'}}$ and $\mathcal{H}^{\mathcal{MT}_{N}}$, such that: $$(\mathcal{H}^{\mathcal{MT}_{N}})^{\mathcal{G}^{N/N'}}=\mathcal{H}^{\mathcal{MT}_{N'}}.$$ Since for $N=2,3,4,\mlq 6\mrq,8$ the categories $\mathcal{MT}_{N}$ and $\mathcal{MT}'_{N}$ are equal, this Galois descent has a parallel on the motivic fundamental group side; we will mostly neglect the difference in this chapter: \begin{figure}[H] $$\xymatrixcolsep{5pc}\xymatrix{ \mathcal{H}^{N} \ar@{^{(}->}[r] ^{\sim}_{n.c} & \mathcal{H}^{\mathcal{MT}_{N}} \\ \mathcal{H}^{N'}\ar[u]_{\mathcal{G}^{N/N'}} \ar@{^{(}->}[r] _{n.c}^{\sim} &\mathcal{H}^{\mathcal{MT}_{N'}} \ar[u]^{\mathcal{G}^{\mathcal{MT}}_{N/N'}} \\ \mathbb{Q}[i\pi^{\mathfrak{m}}] \ar[u]_{\mathcal{U}^{N'}} \ar@{^{(}->}[r]^{\sim} & \mathbb{Q}[i\pi^{\mathfrak{m}}] \ar[u]^{\mathcal{U}^{\mathcal{MT}_{N'}}} \\ \mathbb{Q} \ar[u]_{\mathbb{G}_{m}} \ar@/^2pc/[uuu]^{\mathcal{G}^{N}} & \mathbb{Q} \ar[u]^{\mathbb{G}_{m}} \ar@/_2pc/[uuu]_{\mathcal{G}^{\mathcal{MT}_{N}}} }$$ \caption{Galois descents, $N=2,3,4,\mlq 6\mrq,8$ (level $0$).\protect\footnotemark }\label{fig:paralleldescent} \end{figure} \footnotetext{The (non-canonical) horizontal isomorphisms have to be chosen in a compatible way.} \texttt{Nota Bene:} For $N'=1$ or $2$, $i\pi^{\mathfrak{m}}$ has to be replaced by $\zeta^{\mathfrak{m}}(2)$ or $(\pi^{\mathfrak{m}})^{2}$, since we consider, in $\mathcal{H}^{N'}$, only periods invariant under the Frobenius $\mathcal{F}_{\infty}$.
In the descent between $\mathcal{H}^{N}$ and $\mathcal{H}^{N'}$, we hence require invariance under the Frobenius in order to keep only those periods; this condition gets rid of odd powers of $i\pi^{\mathfrak{m}}$.\\ The first section of Chapter $5$ gives an overview of the Galois descents, valid for any $N$: a criterion for the descent between MMZV$_{\mu_{N'}}$ and MMZV$_{\mu_{N}}$ (Theorem $5.1.1$), a criterion for being unramified (Theorem $5.1.2$), and their corollaries. The conditions are expressed in terms of the derivations $D_{r}$, since they reflect the Galois action. Indeed, looking at the descent between $\mathcal{MT}_{N,M}$ and $\mathcal{MT}_{N',M'}$, sometimes denoted $(\mathcal{d})=(k_{N}/k_{N'}, M/M')$, it possibly has two components:\nomenclature{$\mathcal{d}$}{a specific Galois descent between $\mathcal{MT}_{N,M}$ and $\mathcal{MT}_{N',M'}$} \begin{itemize} \item[$\cdot$] The change of cyclotomic fields $k_{N}/k_{N'}$; there, the criterion has to be formulated in the depth graded. \item[$\cdot$] The change of ramification $M/M'$, which is measured by the degree-$1$ graded part of the coaction, i.e. $D_{1}$ with the notations of $\S 2.4$.\\ \end{itemize} The second section specifies the descents for $N\in \left\{2, 3, 4, \mlq 6\mrq, 8\right\}$ \footnote{As above, the quotation marks underline that we consider the unramified category for $N=\mlq 6\mrq$.}, represented in Figures $\ref{fig:d248}$ and $\ref{fig:d36}$. In particular, this gives a basis of motivic multiple zeta values relative to $\mu_{N'}$ via motivic multiple zeta values relative to $\mu_{N}$, for the descents considered, with $N'\mid N$.
It also gives a new proof of Deligne's results ($\cite{De}$): the category of mixed Tate motives over $\mathcal{O}_{k_{N}}[1/N]$, for $N\in \left\{2, 3, 4,\mlq 6\mrq, 8\right\}$, is spanned by the motivic fundamental groupoid of $\mathbb{P}^{1}\setminus\left\{0,\mu_{N},\infty \right\}$, with an explicit basis; as claimed in $\S 2.2$, we can even restrict to a smaller fundamental groupoid.\\ Let us present our results further and fix a descent $(\mathcal{d})=(k_{N}/k_{N'}, M/M')$ among those considered (in Figures $\ref{fig:d248}$, $\ref{fig:d36}$), between the categories of mixed Tate motives over $\mathcal{O}_{N}[1/M]$ and over $\mathcal{O}_{N'}[1/M']$.\footnote{Usually, the indication of the descent (in the exponent) is omitted when we look at a specific descent.} Each descent $(\mathcal{d})$ is associated to a subset $\boldsymbol{\mathscr{D}^{\mathcal{d}}} \subset \mathscr{D}$ of derivations, which represents the action of the Galois group $\mathcal{G}^{N/N'}$. It defines, recursively on $i$, an increasing motivic filtration $\mathcal{F}^{\mathcal{d}}_{i}$ on $\mathcal{H}^{N}$, called the \textit{motivic level}, stable under the action of $\mathcal{G}^{\mathcal{MT}_{N}}$:\nomenclature{$\mathcal{F}^{\mathcal{d}}_{\bullet}$}{the increasing filtration by the motivic level associated to $\mathcal{d}$} $$\texttt{Motivic level:} \left\lbrace \begin{array}{l} \mathcal{F}^{\mathcal{d}} _{-1} \mathcal{H}^{N} \mathrel{\mathop:}=0\\ \boldsymbol{\mathcal{F}^{\mathcal{d}}_{i}} \text{ the largest submodule of } \mathcal{H}^{N} \text{ such that } \mathcal{F}^{\mathcal{d}}_{i}\mathcal{H}^{N}/\mathcal{F}^{\mathcal{d}} _{i-1}\mathcal{H}^{N} \text{ is killed by } \mathscr{D}^{\mathcal{d}}. \end{array} \right. .
$$ The $0^{\text{th}}$ level $\mathcal{F}^{\mathcal{d}}_{0}\mathcal{H}^{N}$ corresponds to the invariants under the group $\mathcal{G}^{N/N'}$, while the $i^{\text{th}}$ level $\mathcal{F}^{\mathcal{d}}_{i}$ can be seen as the $i^{\text{th}}$ \textit{ramification space} in generalized Galois descents. Indeed, these levels correspond to a decreasing filtration by $i^{\text{th}}$ ramification Galois groups $\mathcal{G}_{i}$, which are the subgroups of $\mathcal{G}^{N/N'}$ which act trivially on $\mathcal{F}_{i}\mathcal{H}^{N}$.\footnote{\textit{On \textbf{ramification groups} in usual Galois theory}: let $L/K$ be a Galois extension of local fields. By Hensel's lemma, $\mathcal{O}_{L}=\mathcal{O}_{K}[\alpha]$, and the i$^{\text{th}}$ ramification group is defined as: \begin{equation}\label{eq:ramifgroupi} G_{i}\mathrel{\mathop:}=\left\lbrace g\in \text{Gal}(L/K) \mid v(g(\alpha)-\alpha) >i \right\rbrace , \quad \text{ where } \left\lbrace \begin{array}{l} v \text{ is the valuation on } L \\ \mathfrak{p}= \lbrace x\in L \mid v(x) >0 \rbrace \text{ the maximal ideal of } \mathcal{O}_{L} \end{array}\right. . \end{equation} Equivalently, this condition means that $g$ acts trivially on $\mathcal{O}_{L}\diagup \mathfrak{p}^{i+1}$, i.e. $g(x)\equiv x \pmod{\mathfrak{p}^{i+1}}$.
This decreasing filtration of normal subgroups corresponds, by the fundamental theorem of Galois theory, to an increasing filtration of Galois extensions: $$G_{0}=\text{ Gal}(L/K) \supset G_{1} \supset G_{2} \supset \cdots \supset G_{i} \cdots$$ $$K=K_{0} \subset K_{1} \subset K_{2} \subset \cdots \subset K_{i} \cdots$$ $G_{1}$, the inertia subgroup, corresponds to the subextension of minimal ramification.} \begin{figure}[H] \centering \begin{equation}\label{eq:descent} \xymatrix{ \mathcal{H}^{N} \\ \mathcal{F}_{i}\mathcal{H}^{N} \ar[u]^{\mathcal{G}_{i}}\\ \mathcal{F}_{0}\mathcal{H}^{N} =\mathcal{H}^{N'} \ar@/^2pc/[uu]^{\mathcal{G}_{0}=\mathcal{G}^{N/N'}} \ar[u] \\ \mathbb{Q} \ar[u]^{\mathcal{G}^{N'}} \ar@/_2pc/[uuu]_{\mathcal{G}^{N}} } \quad \quad \begin{array}{l} (\mathcal{H}^{N})^{\mathcal{G}_{i}}=\mathcal{F}_{i}\mathcal{H}^{N} \\ \\ \begin{array}{llll} \mathcal{G}^{N/N'}=\mathcal{G}_{0} & \supset \mathcal{G}_{1} & \supset \cdots &\supset \mathcal{G}_{i} \cdots\\ & & & \\ \mathcal{H}^{N'}= \mathcal{F}_{0}\mathcal{H}^{N} & \subset \mathcal{F}_{1 }\mathcal{H}^{N} & \subset \cdots & \subset \mathcal{F}_{i}\mathcal{H}^{N} \cdots.
\end{array} \end{array} \end{equation} \caption{Representation of a Galois descent.}\label{fig:descent} \end{figure} \noindent Those ramification spaces constitute a tower of intermediate spaces between MMZV$_{\mu_{N'}}$ and the whole space of MMZV$_{\mu_{N}}$.\\ \\ Let us define the quotients associated to the motivic level: $$\boldsymbol{\mathcal{H}^{\geq i}} \mathrel{\mathop:}= \mathcal{H}/ \mathcal{F}_{i-1}\mathcal{H}\text{ , } \quad\mathcal{H}^{\geq 0}=\mathcal{H}.$$ \newpage \noindent The descents considered are illustrated by the following diagrams:\\ \begin{figure}[H] \centering $$\xymatrixcolsep{5pc}\xymatrix{ \mathcal{H}^{\mathcal{MT}(\mathcal{O}_{8}\left[ \frac{1}{2}\right] )} & \\ \mathcal{H}^{\mathcal{MT}(\mathcal{O}_{4}\left[ \frac{1}{2}\right] )} \ar[u]^{\mathcal{F}^{k_{8}/k_{4},2/2}_{0}} & \text{\framebox[1.1\width]{$\mathcal{H}^{\mathcal{MT}(\mathcal{O}_{4})}$}} \ar[l]_{\mathcal{F}^{k_{4}/k_{4},2/1}_{0}} \\ \mathcal{H}^{\mathcal{MT}\left( \mathbb{Z}\left[ \frac{1}{2}\right] \right) } \ar[u]^{\mathcal{F}^{k_{4}/\mathbb{Q},2/2}_{0}} & \mathcal{H}^{\mathcal{MT}(\mathbb{Z})} \ar[l]^{\mathcal{F}^{\mathbb{Q}/\mathbb{Q},2/1}_{0}} \ar[lu]_{\mathcal{F}^{k_{4}/\mathbb{Q},2/1}_{0}} \ar@{.>}@/_1pc/[u] \ar@/_7pc/[uul] ^{\mathcal{F}^{k_{8}/\mathbb{Q},2/1}_{0}} } $$ \caption{\textsc{The cases $N=1,2,4,8$}.
}\label{fig:d248} \end{figure} \begin{figure}[H] $$\xymatrixcolsep{5pc}\xymatrix{ & \mathcal{H}^{\mathcal{MT}(\mathcal{O}_{6})} \\ \mathcal{H}^{\mathcal{MT}\left( \mathcal{O}_{3}\left[ \frac{1}{3}\right] \right) } & \text{\framebox[1.1\width]{$\mathcal{H}^{\mathcal{MT}(\mathcal{O}_{3})}$}} \ar[l]_{\mathcal{F}^{k_{3}/k_{3},3/1}_{0}} \\ \text{\framebox[1.1\width]{$\mathcal{H}^{\mathcal{MT}\left( \mathbb{Z}\left[ \frac{1}{3}\right] \right) }$}} \ar[u]^{\mathcal{F}^{k_{3}/\mathbb{Q},3/3}_{0}} & \mathcal{H}^{\mathcal{MT}(\mathbb{Z})} \ar[lu]_{\mathcal{F}^{k_{3}/\mathbb{Q},3/1}_{0}} \ar@{.>}@/_1pc/[u] \ar@/_3pc/[uu]_{\mathcal{F}^{k_{6}/\mathbb{Q},1/1}_{0}} } $$ \caption{\textsc{The cases $N=1,3,\mlq 6\mrq$}. }\label{fig:d36} \end{figure} \textsc{Remarks:} \begin{enumerate} \item[$\cdot$] The vertical arrows represent the change of field, and the horizontal arrows the change of ramification. The solid arrows are the descents made explicit in Chapter $5$.\\ More precisely, for each arrow $A \stackrel{\mathcal{F}_{0}}{\leftarrow}B$ in the above diagrams, we give a basis $\mathcal{B}^{A}_{n}$ of $\mathcal{H}_{n}^{A}$, and a basis of $\mathcal{H}_{n}^{B}= \mathcal{F}_{0} \mathcal{H}_{n}^{A}$ in terms of the elements of $\mathcal{B}_{n}^{A}$; similarly for the higher levels of these filtrations. \item[$\cdot$] The framed spaces $\mathcal{H}^{\cdots}$ appearing in these diagrams are not known to be associated to a fundamental group, and there is presently no other known way to reach these (motivic) periods. For instance, we obtain, by descent, a basis for $\mathcal{H}_{n}^{\mathcal{MT}(\mathbb{Z}\left[ \frac{1}{3}\right] )}$ in terms of the basis of $\mathcal{H}_{n}^{\mathcal{MT}\left( \mathcal{O}_{3}\left[ \frac{1}{3}\right] \right) }$.\\ \end{enumerate} \texttt{Example: Descent between Euler sums and MZV.} The comodule $\mathcal{H}^{1}$ embeds, non-canonically, into $\mathcal{H}^{2}$.
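A classical toy example, mentioned here only as an illustration (it follows from the distribution relations, which hold motivically): already in weight $2$,
$$\zeta^{\mathfrak{m}}(\overline{2}) = -\tfrac{1}{2}\, \zeta^{\mathfrak{m}}(2) \in \mathcal{H}^{1},$$
so this particular motivic Euler sum is unramified; the criterion below detects such situations systematically via the derivations $D_{r}$.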
Let us first point out that $D_{1}(\mathcal{H}^{1})=0$;\footnote{Since all motivic iterated integrals of length $1$ with only $0,1$ are zero, by the properties stated in $\S \ref{propii}$, the left-hand side of $D_{1}$, defined in $(\ref{eq:Der})$, always vanishes.} the Galois descent between $\mathcal{H}^{2}$ and $\mathcal{H}^{1}$ is precisely measured by $D_{1}$: \begin{theom} Let $\mathfrak{Z}\in\mathcal{H}^{2}$ be a motivic Euler sum. Then: $$\mathfrak{Z}\in\mathcal{H}^{1}, \text{ i.e. is a motivic MZV } \Longleftrightarrow D_{1}(\mathfrak{Z})=0 \textrm{ and } D_{2r+1}(\mathfrak{Z})\in\mathcal{H}^{1}.$$ \end{theom} This is a useful recursive criterion to determine whether a (motivic) Euler sum is in fact a (motivic) multiple zeta value. It can be generalized to other roots of unity, as we state more precisely in $\S 5.1$. These unramified motivic Euler sums form the $0^{\text{th}}$ level of the filtration by the motivic level, here defined as: \begin{center} $\mathcal{F}_{i}\mathcal{H}^{2}$ is the largest sub-module such that $\mathcal{F}_{i}/ \mathcal{F}_{i-1}$ is killed by $D_{1}$.
\end{center} \paragraph{Results.} More precisely, for $N\in \left\{2, 3, 4, \mlq 6\mrq, 8\right\}$, we define a particular family $\mathcal{B}^{N}$ of motivic multiple zeta values relative to $\mu_{N}$ with different notions of \textbf{\textit{level}} on the basis elements, one for each Galois descent considered above:\nomenclature{$\mathcal{B}^{N}$}{basis of $\mathcal{H}^{N}$} \begin{equation}\label{eq:base} \mathcal{B}^{N}\mathrel{\mathop:}=\left\{ \zeta^{\mathfrak{m}}\left(x_{1}, \cdots x_{p-1}, x_{p} \atop \epsilon_{1}, \ldots, \epsilon_{p-1}, \epsilon_{p}\xi_{N}\right) (2\pi i)^{s ,\mathfrak{m}} \text{ , } x_{i}\in\mathbb{N}^{\ast} , s\in\mathbb{N}^{\ast} , \left\{ \begin{array}{ll} x_{i} \text{ odd, } s \text{ even}, \epsilon_{i}=1 &\text{if } N=2 \\ \epsilon_{i}=1 &\text{if } N=3,4\\ x_{i} >1 \text{, } \epsilon_{i}=1 &\text{if } N=6\\ \epsilon_{i}\in\lbrace\pm 1\rbrace &\text{if } N=8 \end{array} \right. \right\} \end{equation} Denote by $\mathcal{B}_{n,p,i}$ the subset of elements with weight $n$, depth $p$ and level $i$. \\ \\ \texttt{Examples}: \begin{description} \item[$\cdot N=2$: ] The basis for motivic Euler sums: $\mathcal{B}^{2}\mathrel{\mathop:}=\left\{ \zeta^{\mathfrak{m}}\left(2y_{1}+1, \ldots , 2 y_{p}+1 \atop 1, 1, \ldots, 1, -1\right) \zeta^{\mathfrak{m}} (2)^{s}, y_{i} \geq 0, s\geq 0 \right\}$ . The level for the descent from $\mathcal{H}^{2}$ to $\mathcal{H}^{1}$ is defined as the number of $y_{i}'s$ equal to $0$. \item[$\cdot N=4$: ] The basis is: $\mathcal{B}^{4}\mathrel{\mathop:}= \left\{ \zeta^{\mathfrak{m}}\left(x_{1}, \ldots , x_{p} \atop 1,1, \ldots, 1, \sqrt{-1}\right) (2\pi i)^{s ,\mathfrak{m}}, s\geq 0, x_{i} >0 \right\} $. 
$$\text{The level is:} \begin{array}{l} \cdot \text{ the number of even $x_{i}'s$ for the descent from $\mathcal{H}^{4}$ to $\mathcal{H}^{2}$ }\\ \cdot \text{ the number of even $x_{i}'s$ $+$ the number of $x_{i}'s$ equal to 1 for the descent from $\mathcal{H}^{4}$ to $\mathcal{H}^{1}$ } \end{array}$$ \item[$\cdot N=8$: ] the level includes the number of $\epsilon_{i}'s$ equal to $-1$, etc.\\ \end{description} The quotients $\mathcal{H}^{\geq i}$, respectively the filtration spaces $\mathcal{F}_{i}$, associated to the descent $\mathcal{d}$ will match the level-restricted subfamilies $\mathcal{B}_{n,p, \geq i}$, respectively $\mathcal{B}_{n,p, \leq i}$. Indeed, we prove: \footnote{Cf. Theorem $5.2.4$, which is slightly more precise.}\nomenclature{$\mathbb{Z}_{1[P]}$}{subring of $\mathbb{Z}$} \begin{theom} With $ \mathbb{Z}_{1[P]} \mathrel{\mathop:}=\left\{ \frac{a}{1+b P}, a,b\in\mathbb{Z} \right\}$ where $ P \mathrel{\mathop:}= \left\lbrace \begin{array}{ll} 2 & \text{ for } N=2,4,8 \\ 3 & \text{ for } N=3,\mlq 6\mrq \end{array} \right. $. \begin{enumerate} \item[$\cdot$] $\mathcal{B}_{n,\leq p, \geq i}$ is a basis of $\mathcal{F}_{p}^{\mathfrak{D}} \mathcal{H}_{n}^{\geq i}$ and $\mathcal{B}_{n,\cdot, \geq i} $ a basis of $\mathcal{H}^{\geq i}_{n}$. \item[$\cdot$] $\mathcal{B}_{n, p, \geq i}$ is a basis of $gr_{p}^{\mathfrak{D}} \mathcal{H}_{n}^{\geq i}$, on which it defines a $\mathbb{Z}_{1[P]}$-structure: \begin{center} Each $\zeta^{\mathfrak{m}}\left( z_{1}, \ldots , z_{p} \atop \epsilon_{1}, \ldots, \epsilon_{p}\right)$ decomposes in $gr_{p}^{\mathfrak{D}} \mathcal{H}_{n}^{\geq i}$ as a $\mathbb{Z}_{1[P]}$-linear combination of elements of $\mathcal{B}_{n, p, \geq i}$.
\end{center} \item[$\cdot$] We have the two split exact sequences in bijection: $$ 0\longrightarrow \mathcal{F}_{i}\mathcal{H}_{n} \longrightarrow \mathcal{H}_{n} \stackrel{\pi_{0,i+1}} {\rightarrow}\mathcal{H}_{n}^{\geq i+1} \longrightarrow 0$$ $$ 0 \rightarrow \langle \mathcal{B}_{n, \cdot, \leq i} \rangle_{\mathbb{Q}} \rightarrow \langle\mathcal{B}_{n} \rangle_{\mathbb{Q}} \rightarrow \langle \mathcal{B}_{n, \cdot, \geq i+1} \rangle_{\mathbb{Q}} \rightarrow 0 .$$ \item[$\cdot$] A basis for the filtration spaces $\mathcal{F}_{i} \mathcal{H}_{n}$ is: $$\cup_{p} \left\{ \mathfrak{Z}+ cl_{n, \leq p, \geq i+1}(\mathfrak{Z}), \mathfrak{Z}\in \mathcal{B}_{n, p, \leq i} \right\},$$ $$\text{ where } cl_{n,\leq p,\geq i}: \langle\mathcal{B}_{n, p, \leq i-1}\rangle_{\mathbb{Q}} \rightarrow \langle\mathcal{B}_{n, \leq p, \geq i}\rangle_{\mathbb{Q}} \text{ such that } \mathfrak{Z}+cl_{n,\leq p,\geq i}(\mathfrak{Z})\in \mathcal{F}_{i-1}\mathcal{H}_{n}.$$ \item[$\cdot$] A basis for the graded space $gr_{i} \mathcal{H}_{n}$: $$\cup_{p} \left\{ \mathfrak{Z}+ cl_{n, \leq p, \geq i+1}(\mathfrak{Z}), \mathfrak{Z}\in \mathcal{B}_{n, p, i} \right\}.$$ \end{enumerate} \end{theom} \noindent \texttt{Nota Bene}: The morphism $cl_{n, \leq p, \geq i+1}$ satisfying those conditions is unique.\\ \\ The linear independence is obtained first in the depth graded; the proof relies on the bijectivity of the following map $\partial^{i, \mathcal{d}}_{n,p}$, by an argument involving $2$-adic or $3$-adic properties:\footnote{The first $ c ^{\mathcal{d}}_{r}$ components of $\partial^{i, \mathcal{d}}_{n,p}$ correspond to the derivations in $\mathscr{D}^{\mathcal{d}}$ associated to the descent, which hence decrease the motivic level.} \begin{equation}\label{eq:derivintro} \partial^{i, \mathcal{d}}_{n,p}: gr_{p}^{\mathfrak{D}} \mathcal{H}_{n}^{\geq i} \rightarrow \oplus_{r<n} \left( gr_{p-1}^{\mathfrak{D}} \mathcal{H}_{n-r}^{\geq i-1}\right) ^{\oplus c ^{\mathcal{d}}_{r}} \oplus_{r<n} \left(
gr_{p-1}^{\mathfrak{D}} \mathcal{H}_{n-r}^{\geq i}\right) ^{\oplus c^{\backslash\mathcal{d}}_{r}} \text{, } c ^{\mathcal{d}}_{r}, c^{\backslash\mathcal{d}}_{r}\in\mathbb{N}, \end{equation} which is obtained from the depth and weight graded part of the coaction, followed by a projection on the left side (by depth $1$ results), and by passing to the level quotients ($(\ref{eq:pderivinp})$). Once freeness is obtained, the generating property follows from a dimension count, since K-theory gives an upper bound on the dimensions.\\ \\ This main theorem generalizes in particular a result of P. Deligne ($\cite{De}$), which can be formulated in several ways: \footnote{The basis $\mathcal{B}$, in the cases where $N\in \left\{3, 4,8\right\}$, is identical to P. Deligne's in $\cite{De}$. For $N=2$ (resp. $N=\mlq 6\mrq$ unramified) it is a linear basis analogous to his algebraic basis, which is formed by Lyndon words in the odd (resp. $\geq 2$) positive integers (with $\ldots 5 \leq 3 \leq 1$); a Lyndon word being strictly smaller in lexicographic order than all of the words formed by permutation of its letters. Deligne's method is roughly dual to this point of view, working in Lie algebras, showing that the action is faithful and that the descending central series of $\mathcal{U}$ is dual to the depth filtration.} \begin{corol} \begin{itemize} \item[$\cdot$] The map $\mathcal{G}^{\mathcal{MT}} \rightarrow \mathcal{G}^{\mathcal{MT}'}$ is an isomorphism. \item[$\cdot$] The motivic fundamental group $\pi_{1}^{\mathfrak{m}} \left( \mathbb{P}^{1}\diagdown \lbrace 0, \mu_{N}, \infty \rbrace, \overline{0 \xi_{N}} \right)$ generates the category of mixed Tate motives $\mathcal{MT}_{N}$. \item[$\cdot$] $\mathcal{B}_{n}$ is a basis of $ \mathcal{H}^{N}_{n}$, the space of motivic MZV relative to $\mu_{N}$.
\item[$\cdot$] The geometric (and Frobenius invariant if $N=2$) motivic periods of $\mathcal{MT}_{N}$ are $\mathbb{Q}$-linear combinations of motivic MZV relative to $\mu_{N}$ (unramified for $N=\mlq 6\mrq$). \end{itemize} \end{corol} \textsc{Remarks: } \begin{itemize} \item[$\cdot$] For $N=\mlq 6\mrq$ the result remains true if we restrict to iterated integrals relative not to all $6^{\text{th}}$ roots of unity but only to those relative to primitive roots. \item[$\cdot$] We could even restrict to: $\begin{array}{ll} \pi_{1}^{\mathfrak{m}} \left( \mathbb{P}^{1}\diagdown \lbrace 0, 1, \infty \rbrace, \overline{0 \xi_{N}} \right) & \text{ for $N=2,3,4,\mlq 6\mrq$} \\ \pi_{1}^{\mathfrak{m}} \left( \mathbb{P}^{1}\diagdown \lbrace 0, \pm 1, \infty \rbrace, \overline{0 \xi_{N}} \right) & \text{ for } N=8 \end{array}$ . \end{itemize} The previous theorem also provides the Galois descent from $\mathcal{H}^{\mathcal{MT}_{N}}$ to $\mathcal{H}^{\mathcal{MT}_{N'}}$: \begin{corol} A basis for MMZV$_{\boldsymbol{\mu_{N'}}}$ is formed by the MMZV$_{\boldsymbol{\mu_{N}}}$ $\in \mathcal{B}^{N}$ of level $0$, each corrected by a $\mathbb{Q}$-linear combination of MMZV $_{\boldsymbol{\mu_{N}}}$ of level greater than or equal to $1$: $$\text{ Basis of } \mathcal{H}^{N'}_{n} : \quad \left\{ \mathfrak{Z}+ cl_{n,\cdot, \geq 1}(\mathfrak{Z}), \mathfrak{Z}\in \mathcal{B}^{N}_{n, \cdot, 0} \right\}.$$ \end{corol} \textsc{Remark}: The descent can be calculated explicitly in small depth, less than or equal to $3$, as we explain in Appendix $A.2$.
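To fix ideas on the level, here is an illustrative reading for $N=2$ (a sketch based only on the definition of $\mathcal{B}^{2}$ above; the particular elements are chosen purely for illustration):

```latex
% Level for N=2: the number of y_i equal to 0, i.e. the number of
% arguments equal to 1 in \zeta^m(2y_1+1,...,\overline{2y_p+1}) \zeta^m(2)^s.
\zeta^{\mathfrak{m}}(3,5,\overline{7}) \text{ has level } 0, \qquad
\zeta^{\mathfrak{m}}(1,5,\overline{7}) \text{ has level } 1, \qquad
\zeta^{\mathfrak{m}}(1,1,\overline{7}) \text{ has level } 2.
```

By the corollary above, the level $0$ elements, each corrected by a $\mathbb{Q}$-linear combination of elements of level $\geq 1$, then yield a basis of $\mathcal{H}^{1}$.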
In the general case, we could make the part of maximal depth of $cl(\mathfrak{Z})$ explicit (by inverting a matrix of binomial coefficients), but motivic methods do not enable us to describe the other coefficients, for terms of lower depth.\\ \\ \texttt{Example, $N=2$:} A basis for motivic multiple zeta values is formed by: $$ \left\lbrace \zeta^{\mathfrak{m}}\left( 2x_{1}+1, \ldots, \overline{2x_{p}+1}\right) \zeta^{\mathfrak{m}}(2)^{s} + \sum_{\exists i, y_{i}=0 \atop q\leq p} \alpha_{\textbf{y}}^{\textbf{x}} \zeta^{\mathfrak{m}}(2y_{1}+1, \ldots, \overline{2y_{q}+1})\zeta^{\mathfrak{m}}(2)^{s} \text{ , } x_{i}>0, \alpha^{\textbf{x}}_{\textbf{y}}\in\mathbb{Q} \right\rbrace .$$ Starting from a motivic Euler sum with odd arguments greater than $1$, we add some \textit{correction terms} in order to get an element of $\mathcal{H}^{1}$, the space of MMZV. At this level, the correction terms are motivic Euler sums with odd arguments, at least one of which equals $1$; i.e. they are of level $\geq 1$ in the previous terminology. For instance, the following linear combination is a motivic MZV: $$\zeta^{\mathfrak{m}}(3,3,\overline{3})+ \frac{774}{191} \zeta^{\mathfrak{m}}(1,5, \overline{3}) - \frac{804}{191} \zeta^{\mathfrak{m}}(1,3, \overline{5}) -6 \zeta^{\mathfrak{m}}(3,1,\overline{5}) +\frac{450}{191}\zeta^{\mathfrak{m}}(1,1, \overline{7}).$$ \section{Miscellaneous Results \textit{[Chapter 6]}} Chapter $6$ is devoted to the Hopf algebra structure of motivic multiple zeta values relative to $\mu_{N}$, particularly for $N=1,2$, presenting various uses of the coaction, and is divided into sections as follows:\\ \begin{enumerate} \item An important use of the coaction is the decomposition of (motivic) multiple zeta values into a conjectured basis, as explained in $\cite{Br1}$. It is noteworthy that the coaction always enables us to determine the coefficients of the maximal depth terms.
We consider in $\S 6.1$ two simple cases, in which the space $gr^{\mathfrak{D}}_{max}\mathcal{H}_{n}$ is $1$-dimensional: \begin{itemize} \item[$(i)$] For $N=1$, when the weight is a multiple of $3$ ($w=3d$), such that the depth $p>d$:\footnote{This was a question asked by D. Broadhurst: an algorithm, or a formula, for the coefficient of $\zeta(3)^{d}$ of such a MZV, when decomposed in the Deligne basis.} $$gr^{\mathfrak{D}}_{p}\mathcal{H}_{3d} =\mathbb{Q} \zeta^{\mathfrak{m}}(3)^{d}.$$ \item[$(ii)$] For $N=2,3,4$, when the weight equals the depth: $$gr^{\mathfrak{D}}_{p}\mathcal{H}_{p} =\mathbb{Q} \zeta^{\mathfrak{m}}\left( 1 \atop \xi_{N}\right) ^{p}.$$ The corresponding Lie algebra, called the \textit{diagonal Lie algebra}, has been studied by Goncharov in $ \cite{Go2}, \cite{Go3} $. \end{itemize} In these cases, we are able to determine the projection: $$\vartheta : gr_{max}^{\mathfrak{D}} \mathcal{H}_{n}^{N} \rightarrow \mathbb{Q},$$ either via the linearized Ihara action $\underline{\circ}$, or via the dual point of view of infinitesimal derivations $D_{r}$. For instance, for $(i)$ ($N=1$, $w=3d$), it boils down to looking at: $$\frac{D_{3}^{\circ d }}{d! } \quad \text{ or } \quad \exp_{\circ} ( \overline{\sigma}_{3}), \text{ where } \overline{\sigma}_{2i+1}= (-1)^{i}(\text{ad} e_{0} )^{2i}(e_{1}) \footnote{ These $\overline{\sigma}_{2i+1}$ are the generators of $gr_{1}^{\mathfrak{D}} \mathfrak{g}^{\mathfrak{m}}$, the depth $1$ graded part of the motivic Lie algebra; cf. $(\ref{eq:oversigma})$.}.$$ In general, the space $gr^{\mathfrak{D}}_{max}\mathcal{H}^{N}_{n}$ is more than $1$-dimensional; nevertheless, these methods could be generalized. \item Using criterion $\ref{criterehonoraire}$, we provide in the second section infinite families of honorary motivic multiple zeta values up to depth 5, with specified alternating odd or even integers. This was inspired by some isolated examples of honorary multiple zeta values found by D.
Broadhurst\footnote{ Those emerged when looking at the \textit{depth drop phenomena}, cf. $\cite{Bro2}$.}, such as $\zeta( \overline{8}, 5, \overline{8}), \zeta( \overline{8}, 3, \overline{10}), \zeta(3, \overline{6}, 3, \overline{6}, 3)$, where some patterns of even and odd arguments could already be observed. Investigating this trail in a systematic way, looking for general families of unramified (motivic) Euler sums (without linear combinations first), we arrive at the families presented in $\S 6.2$, which unfortunately stop in depth $5$. However, this investigation does not cover the unramified $\mathbb{Q}$-linear combinations of motivic Euler sums, such as those presented in Chapter $4$, Theorem $\ref{ESsharphonorary}$ (motivic Euler $\sharp$ sums with positive odd and negative even integers). \item By Corollary $\ref{kerdn}$, we can lift some identities between MZV to motivic MZV (as in $\cite{Br2}$, Theorem $4.4$), and similarly in the case of Euler sums. Remark that, as we will see for the depth $1$ Hoffman $\star$ elements (Lemma $\ref{lemmcoeff}$), the lifting may not be straightforward if the family is not stable under the coaction. In this section $\S 6.3$, we list some identities that we are able to lift to motivic versions, in particular some \textit{Galois trivial} elements\footnote{Galois trivial here means that the unipotent part of the Galois group acts trivially, not $\mathbb{G}_{m}$; hence not Galois trivial strictly speaking.} or products of simple zetas, and sum identities.
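As an illustration of the kind of identity in question (a standard example, recalled here for context rather than drawn from $\S 6.3$), the classical evaluation of $\zeta(2,\ldots,2)$ is known to admit a motivic lift, producing elements on which the unipotent part of the Galois group acts trivially:

```latex
% Classical evaluation (n repeated 2's), which holds motivically:
\zeta^{\mathfrak{m}}(\underbrace{2,\ldots,2}_{n}) \;=\; \frac{6^{n}}{(2n+1)!}\, \zeta^{\mathfrak{m}}(2)^{n},
\qquad \text{e.g. } \zeta^{\mathfrak{m}}(2,2)=\tfrac{3}{10}\,\zeta^{\mathfrak{m}}(2)^{2}.
```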
\end{enumerate} \textsc{Remark}: The stability of a family under the coaction is a valuable feature that allows one to prove easily (by recursion) properties such as linear independence\footnote{If we find an appropriate filtration respected by the coaction, and such that the level $0$ elements are Galois-trivial, it then corresponds to the motivic depth filtration; for the Hoffman ($\star$) basis it is the number of $3$'s; for the Euler $\sharp$ sums basis, it is the number of odds, also equal to the depth minus $1$; for the Deligne basis relative to $\mu_{N}$, $N=2,3,4,\mlq 6\mrq,8$, it is the usual depth.}, Galois descent features (unramified for instance), identities ($\S 6.3$), etc. $$\quad $$ \section{And Beyond?} For most values of $N$, the situation concerning the periods of $\mathcal{MT}_{\Gamma_{N}} \subset \mathcal{MT} ( \mathcal{O}_{N}[\frac{1}{N}])$ is still hazy, although it has been studied in several articles, notably by Goncharov (\cite{Go2},\cite{Go3}, \cite{Go4}\footnote{Goncharov studied the structure of the fundamental group of $\mathbb{G}_{m} \diagdown \mu_{N}$ and drew some parallels with the topology of certain modular varieties for $GL_{m, \diagup\mathbb{Q}}$, $m>1$ notably. He also proved, for $N=p\geq 5$, that the following morphism, given by the Ihara bracket, is not injective: $$\beta: \bigwedge^{2} gr^{\mathfrak{D}}_{1} \mathfrak{g}^{\mathfrak{m}}_{1} \rightarrow gr^{\mathfrak{D}}_{2} \mathfrak{g}^{\mathfrak{m}}_{2} \quad \text{ and } \quad \begin{array}{ll} \dim \bigwedge^{2} gr^{\mathfrak{D}}_{1} \mathfrak{g}^{\mathfrak{m}}_{1} & = \frac{(p-1)(p-3)}{8}\\ \dim \ker \beta & = \frac{p^{2}-1}{24}\\ \dim Im \beta & = \dim gr^{\mathfrak{D}} \mathfrak{g}_{2}^{\mathfrak{m}}= \frac{(p-1)(p-5)}{12} \\ \dim gr^{\mathfrak{D}} \mathfrak{g}_{3}^{\mathfrak{m}}& \geq \frac{(p-5)(p^{2}-2p-11)}{48}.
\end{array}$$ Note that $gr^{\mathfrak{D}}_{2} \mathfrak{g}^{\mathfrak{m}}$ corresponds to the space generated by $\zeta^{\mathfrak{m}}\left(1,1 \atop \epsilon_{1}, \epsilon_{2}\right)$ quotiented by the dilogarithms $\zeta^{\mathfrak{m}}\left(2 \atop \epsilon \right)$, modulo torsion.}) and Zhao: some bounds on dimensions, tables in small weight, and other results and thoughts on cyclotomic MZV can be found in $\cite{Zh2}$, $\cite{Zh1}$, $\cite{CZ}$. \\ \\ \texttt{Nota Bene}: As already pointed out, as soon as $N$ has a non-inert prime factor $p$\footnote{In particular, as soon as $N\neq p^{s}, 2p^{s}, 4p^{s}, p^{s}q^{k}$ for $p,q$ odd primes, since $(\mathbb{Z} \diagup m\mathbb{Z})^{\ast}$ is cyclic $\Leftrightarrow m=2,4,p^{k}, 2p^{k}$.}, $ \mathcal{MT}_{\Gamma_{N}} \subsetneq \mathcal{MT}(\mathcal{O}_{N}\left[ \frac{1}{N} \right] )$. Hence, some motivic periods of $\mathcal{MT}(\mathcal{O}_{N}\left[ \frac{1}{N}\right] )$ are not motivic iterated integrals on $\mathbb{P}^{1}\diagdown \lbrace 0, \mu_{N}, \infty\rbrace$ as considered above; already in weight $1$, there are more generators than the logarithms of cyclotomic units $\log^{\mathfrak{m}} (1-\xi_{N}^{a})$.\\ \\ Nevertheless, we can \textit{a priori} split the situation (of $\mathcal{MT}_{\Gamma_{N}}$) into two main cases: \begin{itemize} \item[$(i)$] As soon as $N$ has two distinct prime factors, or $N$ is a power of $2$ or $3$, it is commonly believed that the motivic fundamental group $\pi_{1}^{\mathfrak{m}}(\mathbb{P}^{1}\diagdown \lbrace 0, \infty, \mu_{N}\rbrace, \overrightarrow{01})$ generates $\mathcal{MT}_{\Gamma_{N}}$, even though no suitable basis has been found. Also, in these cases, Zhao conjectured that there are non-standard relations\footnote{Non-standard relations are those which do not come from the distribution, conjugation, and regularised double shuffle relations, cf. $\cite{Zh1}$}.
Nevertheless, in the case of $N$ a power of $2$ or a power of $3$, there seems to be a candidate for a basis ($\ref{eq:firstidea}$), and some linearly independent families have been exhibited: \begin{equation}\label{eq:firstidea} \zeta^{\mathfrak{m}}\left(n_{1}, \cdots n_{p-1}, n_{p} \atop \epsilon_{1}, \ldots , \epsilon_{p-1},\epsilon_{p}\right) \quad \text{ with } \epsilon_{p}\in\mu_{N} \quad \text{primitive} \quad \text{ and } (\epsilon_{i})_{i<p} \quad \text{non primitive}. \end{equation} Indeed, when $N$ is a power of $2$ or $3$, linearly independent subfamilies of $(\ref{eq:firstidea})$, keeping $\frac{3}{4}$ resp. $\frac{2}{3}$ of the generators in degree $1$, and all generators in degree $r>1$, are presented in $\cite{Wo}$ (from a point of view dual to the one developed here).\\ \\ \texttt{Nota Bene}: Some subfamilies of $(\ref{eq:firstidea})$, restricting to $\lbrace \epsilon_{i}=1, x_{i} \geq 2\rbrace$ (here $\epsilon_{p}$ still as above), can easily be proven (via the coaction, by recursion on depth) to be linearly independent for any $N$; if $N$ is a prime power, we can widen to $x_{i}\geq 1$, and for $N$ even to $\epsilon_{i}\in \lbrace \pm 1 \rbrace$; nevertheless, these families remain rather \textit{small}.\\ \item[$(ii)$] For $N=p^{s}$, $p$ prime greater than $5$, there are \textit{missing periods}: i.e. it is conjectured that the motivic fundamental group $\pi_{1}^{\mathfrak{m}}(\mathbb{P}^{1}\diagdown \lbrace 0, \infty, \mu_{N}\rbrace, \overrightarrow{01})$ does not generate $\mathcal{MT}_{\Gamma_{N}}$. For $N=p \geq 5$, this can already be seen in weight $2$, depth $2$.
More precisely (taking the dual point of view of Goncharov in $\cite{Go3}$), the following map is not surjective: \begin{equation}\label{eq:d1prof2} \begin{array}{llll} D_{1}: & gr^{\mathfrak{D}}_{2} \mathcal{A}_{2} & \rightarrow & \mathcal{A}_{1} \otimes \mathcal{A}_{1}\\ &\zeta^{\mathfrak{a}} \left( 1,1 \atop \xi^{a}, \xi^{b}\right)& \mapsto &(a) \otimes (b) + \delta_{a+b \neq 0} ((b)-(a))\otimes (a+b) \end{array}, \text{ where } \boldsymbol{(a)}\mathrel{\mathop:}= \zeta^{\mathfrak{a}} \left( 1 \atop \xi^{a}\right). \end{equation} These missing periods were a motivation, for instance, to introduce Aomoto polylogarithms (in \cite{Du})\footnote{Aomoto polylogarithms generalize the previous iterated integrals, with notably differential forms such as $\frac{dt_{i}}{t_{i}-t_{i+1}-a_{i}}$; there is also a coaction acting on them.}.\\ Another idea, in order to reach these missing periods, would be to use Galois descents: coming from a category above in order to arrive at the category underneath, in the manner of Chapter $5$. For instance, missing periods for $N=p$ prime $>5$ could be reached via a Galois descent from the category $\mathcal{MT}_{\Gamma_{2p}}$ \footnote{This category is equal to $\mathcal{MT}(\mathcal{O}_{2p} [\frac{1}{2p}])$ iff $2$ is a primitive root modulo $p$. Some necessary or sufficient conditions on $p$ are known: this implies that $p\equiv 3,5 \pmod 8$; besides, if $p\equiv 3,11 \pmod{16}$, it is true, etc.}. First, let us point out that this category has the same dimensions as $\mathcal{MT}_{p}$ in degree $>1$, and one more generator in degree $1$, corresponding to $\zeta^{\mathfrak{a}} \left( 1 \atop \xi^{p} \right) $.
Furthermore, for $p$ prime, the descent between $\mathcal{H}^{2p}$ and $\mathcal{H}^{p}$ is measured by $D_{1}^{p}$, the component of $D_{1}$ associated to $\zeta^{\mathfrak{a}} \left( 1 \atop \xi^{p}\right) $:\\ $$\text{Let } \mathfrak{Z} \in \mathcal{H}^{2p}, \text{ then } \mathfrak{Z} \in \mathcal{H}^{p} \Leftrightarrow \left\lbrace \begin{array}{l} D^{p}_{1}(\mathfrak{Z})=0\\ D_{r} (\mathfrak{Z}) \in \mathcal{H}^{p} \end{array}\right.$$ The situation is pictured by: \begin{equation}\label{eq:descent2p} \xymatrix{ \mathcal{H}^{2p}:=\mathcal{H}^{\Gamma_{2p}} \ar@{^{(}->}[r] & \mathcal{H}^{\mathcal{MT}\left( \mathcal{O}_{2p}\left[ \frac{1}{2p}\right]\right) }\\ \mathcal{H}^{p}:= \mathcal{H}^{\Gamma_{p}}=\mathcal{H}^{\mathcal{MT}\left( \mathcal{O}_{p}\left[ \frac{1}{p}\right]\right)} \ar[u]^{D^{p}_{1}} \ar@{=}[r] & \mathcal{H}^{\mathcal{MT}\left( \mathcal{O}_{2p}\left[ \frac{1}{p}\right]\right) } \\ \mathcal{H}^{\mathcal{MT}\left( \mathcal{O}_{p}\right) }\ar[u]^{\lbrace D^{2a}_{1}-D^{a}_{1}\rbrace_{2 \leq a \leq \frac{p-1}{2}}} \ar@{=}[r] & \mathcal{H}^{\mathcal{MT}\left( \mathcal{O}_{2p}\right) } }. \end{equation} \texttt{Example, for N=5}:\nomenclature{$\text{Vec}_{k} \left\langle X \right\rangle$}{the $k$ vector space generated by elements in $X$ } A basis of $gr^{\mathfrak{D}}_{p}\mathcal{A}_{1}$ corresponds to the logarithms of the roots of unity $\xi^{1}, \xi^{2}$; here, $\xi=\xi_{5}$ is a primitive fifth root of unity. Moreover, the image of $D_{1}: \mathcal{A}_{2} \rightarrow \mathcal{A}_{1} \otimes \mathcal{A}_{1}$ on $\zeta^{\mathfrak{a}} \left( 1,1 \atop \xi_{5}^{a}, \xi_{5}^{b}\right) $ is (cf. $\ref{eq:d1prof2}$): $$\text{Vec}_{\mathbb{Q}} \left\langle (1)\otimes (1), (2)\otimes (2),(1)\otimes (2)+ (2)\otimes (1) \right\rangle .$$ We notice that one dimension is missing ($3$ instead of $4$). 
Allowing the use of tenth roots of unity, adding for instance here in depth $2$, $\zeta^{\mathfrak{a}} \left( 1,1 \atop \xi_{10}^{1}, \xi_{10}^{2}\right)$, recovers the surjectivity of $D_{1}$. Since we have at our disposal a criterion to determine if a MMZV$_{\mu_{10}}$ is in $\mathcal{H}^{5}$, we could imagine constructing a basis of $\mathcal{H}^{5}$ from tenth roots of unity. \\ \texttt{Nota Bene}: More precisely, we have the following spaces, descents and dimensions: \begin{equation}\label{eq:descent10} \xymatrix{ \mathcal{H}^{\mathcal{MT}\left( \mathcal{O}_{10}\left[ \frac{1}{10}\right]\right) }= \mathcal{H}^{\Gamma_{10}} & \\ \mathcal{H}^{\mathcal{MT}\left( \mathcal{O}_{10}\left[ \frac{1}{5}\right]\right) }=\mathcal{H}^{\mathcal{MT}\left( \mathcal{O}_{5}\left[ \frac{1}{5}\right] \right) }= \mathcal{H}^{\Gamma_{5}} \ar[u]^{D^{5}_{1}} & d_{n}= 2d_{n-1}+3d_{n-2}= 3d_{n-1}\\ \mathcal{H}^{\mathcal{MT}\left( \mathcal{O}_{5}\right) }=\mathcal{H}^{\mathcal{MT}\left( \mathcal{O}_{10}\right) } \ar[u]^{D^{4}_{1}+ D^{2}_{1}} & d'_{n}= 2d'_{n-1}+d'_{n-2} }\\ \end{equation} \end{itemize} \noindent \textsc{Remarks}: \begin{itemize} \item[$\cdot$] Recently (in $\cite{Bro3}$), Broadhurst made some conjectures about \textit{multiple Landen values}, i.e. periods associated to the ring of integers of the real subfield of $\mathbb{Q}(\xi_{5})$, i.e. $\mathbb{Z} \left[\rho \right]$, with $\rho\mathrel{\mathop:}= \frac{1+\sqrt{5}}{2} $ the golden ratio\footnote{He also looked at the case of the real subfield of $\mathbb{Q}(\xi_{7})$ in his latest article: $\cite{Bro4}$}.
Methods presented throughout this thesis could be transposed to such a context.\\ \item[$\cdot$] It is also worth noticing that, for $N=p>5$, modular forms obstruct the freeness of the Lie algebra $gr_{\mathfrak{D}} \mathfrak{g}^{\mathfrak{m}}$\footnote{Goncharov proved that the subspace of cuspidal forms of weight 2 on the modular curve $X_{1}(p)$ (associated to $\Gamma_{1}(p)$), of dimension $\frac{(p-5)(p-7)}{24}$, embeds into $\ker \beta$ for $N=p \geq 11$, which leaves another part of dimension $\frac{p-3}{2}$.}, as in the case of $N=1$ (cf. $\cite{Br3}$). Indeed, for $N=1$, one can associate to each cuspidal form of weight $n$ a relation between weight $n$ double and single zeta values, cf. $\cite{GKZ}$. Notice that, on the contrary, for $N=2,3,4,8$, $gr_{\mathfrak{D}} \mathfrak{g}^{\mathfrak{m}}$ is free. This fascinating connection with modular forms remains to be explored for cyclotomic MZV. \footnote{We could also hope for an interpretation, in these cyclotomic cases, of exceptional generators and relations in the Lie algebra, in the way of $\cite{Br3}$ for $N=1$.}\\ \item[$\cdot$] In these cases where $gr_{\mathfrak{D}} \mathfrak{g}^{\mathfrak{m}}$ is not free, since we have to turn towards other bases (than $(\ref{eq:firstidea})$), we may recall the Hoffman basis (of $\mathcal{H}^{1}_{n}$, cf $\cite{Br2}$): $\left\lbrace \zeta^{\mathfrak{m}} \left( \lbrace 2, 3\rbrace^{\times}\right)\right\rbrace_{\text{ weight } n}$, whose dimensions satisfy $d_{n}=d_{n-2}+d_{n-3}$. Looking at the dimensions in Lemma $2.3.1$, two cases bring to mind a basis in the \textit{\textbf{Hoffman's way}}: \begin{itemize} \item[$(i)$] For $\mathcal{MT}(\mathcal{O}_{N})$, since $d_{n}= \frac{\varphi(N)}{2}d_{n-1}+ d_{n-2}$, this suggests looking for a basis with $\boldsymbol{1}$ (with $\frac{\varphi(N)}{2}$ choices of $N^{\text{th}}$ roots of unity) and $\boldsymbol{2}$ (1 choice of $N^{\text{th}}$ roots of unity).
\item[$(ii)$] For $\mathcal{MT}\left( \mathcal{O}_{p^{r}}\left[ \frac{1}{p} \right] \right)$, where $p \mid N$ and $p$ inert, since $ d_{n}= \left( \frac{\varphi(N)}{2}+1\right)^{n},$ this suggests a basis with only $1$ above, and $( \frac{\varphi(N)}{2}+1)$ choices of $N^{\text{th}}$ roots of unity; in particular if $N=p^{k}$.\\ \end{itemize} \texttt{Example}: For $N=2$, the recursion relation for dimensions $d_{n}=d_{n-1}+d_{n-2}$ of $\mathcal{H}_{n}^{2}$ suggests, \textit{in the Hoffman's way}, a basis composed of motivic Euler sums with only $1$ and $2$. For instance, the following are candidates conjectured to be a basis, supported by numerical computations: $$\left\lbrace \zeta^{\mathfrak{m}} \left( n_{1}, \ldots, n_{p-1}, n_{p} \atop 1, \ldots, 1, -1 \right) , n_{i}\in \lbrace 2, 1\rbrace \right\rbrace, \textsc{ or } \left\lbrace \zeta^{\mathfrak{m}} \left( 1, \cdots 1, \atop \boldsymbol{s}, -1 \right)\zeta^{\mathfrak{m}} (2)^{\bullet} ,\boldsymbol{s}\in \left\lbrace \lbrace 1\rbrace, \lbrace -1,-1\rbrace\right\rbrace ^{\ast }\right\rbrace.$$ However, there is not a nice \textit{suitable} filtration\footnote{In the second case, it appears that we could proceed as follows to show the linear independence of these elements, where $p$ equals $1+$ the number of $1$ in the $E_{n}$ element: Prove that, for $x\in E_{n,p}$ there exists a linear combination $cl(x)\in E_{n,>p}$ such that $x+cl(x)\in\mathcal{F}^{\mathfrak{D}}_{p} \mathcal{H}_{n}$, and then that $\lbrace x+cl(x), x\in E_{n,p} \rbrace$ is precisely a basis for $gr^{\mathfrak{D}}_{p} \mathcal{H}_{n}$, considering, for $2r \leq n-p$: $$D_{2r+1}: gr^{\mathfrak{D}}_{p} \mathcal{H}_{n} \rightarrow gr^{\mathfrak{D}}_{p-1} \mathcal{H}_{n-2r-1}.$$} corresponding to the motivic depth which would allow a recursive proof \footnote{A suitable filtration, whose level $0$ would be the power of $\pi$, level $1$ would be linear combinations of $\zeta(odd)\cdot\zeta(2)^{\bullet}$, etc.; as in proofs in $\S 4.5.1$.}. 
\end{itemize} \chapter{MZV $\star$ and Euler $\sharp$ sums} \paragraph{\texttt{Contents}:} After introducing motivic Euler $\star$ and $\sharp$ sums, together with some useful motivic relations (antipodal and hybrid), the third section focuses on some specific Euler $\sharp$ sums, starting with a broad subfamily of \textit{unramified} elements (i.e. which are motivic MZV) and extracting from it a new basis for $\mathcal{H}^{1}$. The fourth section deals with the Hoffman star family, proving that it is a basis of $\mathcal{H}^{1}$, up to an analytic conjecture ($\ref{conjcoeff}$). In Appendix $\S 4.7$, some missing coefficients in Lemma $\ref{lemmcoeff}$, although not needed for the proof of the Hoffman $\star$ Theorem $\ref{Hoffstar}$, are discussed. The last section presents a conjectured motivic equality ($\ref{lzg}$) which turns each motivic MZV $\star$ into a motivic Euler $\sharp$ sum of the previous honorary family; in particular, under this conjecture, the two previous bases coincide. The proofs here are partly based on results of Appendix $\S A.1$, which themselves use relations presented in $\S 4.2$. \section{Star, Sharp versions} Here are the different variants of motivic Euler sums (MES) used in this chapter, where a $ \pm \star$ resp. $\pm \sharp$ in the notation below $I(\cdots)$ stands for a $\omega_{\pm \star} $ resp. $\omega_{\pm\sharp}$ in the iterated integral:\footnote{Possibly regularized with $(\ref{eq:shufflereg})$.} \begin{defi} Using the expression in terms of motivic iterated integrals ($\ref{eq:reprinteg}$), motivic Euler sums are, with $n_{i}\in\mathbb{Z}^{\ast}$, $\epsilon_{i}\mathrel{\mathop:}=sign(n_{i})$: \begin{equation}\label{eq:mes} \zeta^{\mathfrak{m}}_{k} \left(n_{1}, \ldots , n_{p} \right) \mathrel{\mathop:}= (-1)^{p}I^{\mathfrak{m}} \left(0; 0^{k}, \epsilon_{1}\cdots \epsilon_{p}, 0^{\mid n_{1}\mid -1} ,\ldots, \epsilon_{i}\cdots \epsilon_{p}, 0^{\mid n_{i}\mid -1} ,\ldots, \epsilon_{p}, 0^{\mid n_{p}\mid-1} ;1 \right).
\end{equation} $$\text{ With the differentials: } \omega_{\pm\star}\mathrel{\mathop:}= \omega_{\pm 1}- \omega_{0}=\frac{dt}{t(\pm t-1)}, \quad \quad \omega_{\pm\sharp}\mathrel{\mathop:}=2 \omega_{\pm 1}-\omega_{0}=\frac{(t \pm 1)dt}{t(t\mp 1)},$$ \begin{description} \item[MES ${\star}$] are defined similarly than $(\ref{eq:mes})$ with $\omega_{\pm \star}$ (instead of $\omega_{\pm 1}$), $\omega_{0}$ and a $\omega_{\pm 1}$ at the beginning: $$\zeta_{k}^{\star,\mathfrak{m}} \left(n_{1}, \ldots , n_{p} \right) \mathrel{\mathop:}= (-1)^{p} I^{\mathfrak{m}} \left(0; 0^{k}, \epsilon_{1}\cdots \epsilon_{p}, 0^{\mid n_{1}\mid-1}, \epsilon_{2}\cdots \epsilon_{p}\star, 0^{\mid n_{2}\mid -1}, \ldots, \epsilon_{p}\star, 0^{\mid n_{p}\mid-1} ;1 \right).$$ \item[MES ${\star\star}$] similarly with only $\omega_{\pm \star}, \omega_{0}$ (including the first):\nomenclature{MES ${\star\star}$, $\zeta^{\star\star,\mathfrak{m}}$}{Motivic Euler sums $\star\star$} $$\zeta_{k}^{\star\star,\mathfrak{m}} \left(n_{1}, \ldots , n_{p} \right) \mathrel{\mathop:}= (-1)^{p} I^{\mathfrak{m}} \left(0; 0^{k}, \epsilon_{1}\cdots \epsilon_{p}\star, 0^{\mid n_{1}\mid-1}, \epsilon_{2}\cdots \epsilon_{p}\star, 0^{\mid n_{2}\mid-1}, \ldots, \epsilon_{p}\star, 0^{\mid n_{p}\mid-1} ;1 \right).$$ \item[MES ${\sharp}$] with $\omega_{\pm \sharp},\omega_{0} $ and a $\omega_{\pm 1}$ at the beginning: $$\zeta_{k}^{\sharp,\mathfrak{m}} \left(n_{1}, \ldots , n_{p} \right) \mathrel{\mathop:}= 2 (-1)^{p} I^{\mathfrak{m}} \left(0; 0^{k}, \epsilon_{1}\cdots \epsilon_{p}, 0^{\mid n_{1}\mid-1}, \epsilon_{2}\cdots \epsilon_{p}\sharp, 0^{\mid n_{2}\mid -1}, \ldots, \epsilon_{p}\sharp, 0^{\mid n_{p}\mid-1} ;1 \right).$$ \item[MES $\sharp\sharp$] similarly with only $\omega_{\pm \sharp}, \omega_{0}$ (including the first):\nomenclature{MES ${\sharp\sharp}$, $\zeta^{\sharp\sharp,\mathfrak{m}}$}{Motivic Euler sums $\sharp\sharp$} $$\zeta_{k}^{\sharp\sharp,\mathfrak{m}} \left(n_{1}, \ldots , n_{p} \right) \mathrel{\mathop:}= 
(-1)^{p} I^{\mathfrak{m}} \left(0; 0^{k}, \epsilon_{1}\cdots \epsilon_{p}\sharp, 0^{\mid n_{1}\mid-1}, \epsilon_{2}\cdots \epsilon_{p}\sharp, 0^{\mid n_{2}\mid-1}, \ldots, \epsilon_{p}\sharp, 0^{\mid n_{p}\mid-1} ;1 \right).$$ \end{description} \end{defi} \textsc{Remarks}: \begin{itemize} \item[$\cdot$] The Lie algebra of the fundamental group $\pi_{1}^{dR}(\mathbb{P}^{1}\diagdown \lbrace 0, 1, \infty\rbrace)=\pi_{1}^{dR}(\mathcal{M}_{0,4})$ is generated by $e_{0}, e_{1},e_{\infty}$ with the only condition that $e_{0}+e_{1}+e_{\infty}=0$\footnote{ For the case of motivic Euler sums, it is the Lie algebra generated by $e_{0}, e_{1}, e_{-1}, e_{\infty}$ with the only condition that $e_{0}+e_{1}+e_{-1}+e_{\infty}=0$; similarly for other roots of unity with $e_{\eta}$. Note that $e_{i}$ corresponds to the class of the residue around $i$ in $H_{dR}^{1}(\mathbb{P}^{1} \diagdown \lbrace 0, \mu_{N}, \infty \rbrace)^{\vee}$. }. If we keep $e_{0}$ and $e_{\infty}$ as generators, instead of the usual $e_{0},e_{1}$, it leads to MMZV $^{\star\star}$ up to a sign, instead of MMZV, since $-\omega_{0}+\omega_{1}- \omega_{\star}=0$. We could also choose $e_{1}$ and $e_{\infty}$ as generators, which leads to another version of MMZV that has not been much studied yet. These versions are equivalent, since each one can be expressed as a $\mathbb{Q}$-linear combination of another one. \item[$\cdot$] By linearity and $\shuffle$-regularisation $(\ref{eq:shufflereg})$, all these versions ($\star$, $\star\star$, $\sharp$ or $\sharp\sharp$) are $\mathbb{Q}$-linear combinations of motivic Euler sums.
Indeed, with $n_{+}$ the number of $+$ among $\circ$: $$\begin{array}{llll} \zeta ^{\star,\mathfrak{m}}(n_{1}, \ldots, n_{p}) &=& \sum_{\circ=\mlq + \mrq \text{ or } ,} & \zeta ^{\mathfrak{m}}(n_{1}\circ \cdots \circ n_{p}) \\ \\ \zeta ^{\mathfrak{m}}(n_{1}, \ldots, n_{p}) &=& \sum_{\circ=\mlq + \mrq \text{ or } ,} (-1)^{n_{+}} & \zeta ^{\star,\mathfrak{m}}(n_{1}\circ \cdots \circ n_{p}) \\ \\ \zeta ^{ \sharp,\mathfrak{m}}(n_{1}, \ldots, n_{p}) &=& \sum_{\circ=\mlq + \mrq \text{ or } ,} 2^{p-n_{+}} & \zeta ^{\mathfrak{m}}(n_{1}\circ \cdots \circ n_{p}) \\ \\ \zeta ^{\mathfrak{m}}(n_{1}, \ldots, n_{p}) &=& \sum_{\circ=\mlq + \mrq \text{ or } ,} (-1)^{n_{+}} 2^{-p} & \zeta ^{ \sharp,\mathfrak{m}}(n_{1}\circ \cdots \circ n_{p}) \\ \\ \zeta ^{\star\star,\mathfrak{m}}(n_{1}, \ldots, n_{p}) &=& \sum_{i=0}^{p-1} & \zeta ^{\star,\mathfrak{m}}_{\mid n_{1}\mid+\cdots+\mid n_{i}\mid}(n_{i+1}, \cdots , n_{p}) \\ & = & \sum_{\circ=\mlq + \mrq \text{ or } ,\atop i=0}^{p-1} & \zeta^{\mathfrak{m}}_{\mid n_{1}\mid+\cdots+\mid n_{i}\mid}(n_{i+1}\circ \cdots \circ n_{p})\\ \\ \zeta ^{ \sharp\sharp,\mathfrak{m}}(n_{1}, \ldots, n_{p}) &=& \sum_{ i=0}^{p-1} & \zeta ^{\sharp,\mathfrak{m}}_{\mid n_{1}\mid+\cdots+\mid n_{i}\mid}(n_{i+1}, \cdots , n_{p}) \\ & = & \sum_{\circ=\mlq + \mrq \text{ or } ,\atop i=0}^{p-1} 2^{p-i-n_{+}} & \zeta^{\mathfrak{m}}_{\mid n_{1}\mid+\cdots+\mid n_{i}\mid}(n_{i+1}\circ \cdots \circ n_{p}) \\ \\ \zeta ^{\star,\mathfrak{m}}(n_{1}, \ldots, n_{p}) &=& \zeta ^{\star\star,\mathfrak{m}}(n_{1}, \ldots, n_{p})& -\zeta ^{\star\star,\mathfrak{m}}_{\mid n_{1}\mid}(n_{2}, \ldots, n_{p}) \\ \\ \zeta ^{\sharp,\mathfrak{m}}(n_{1}, \ldots, n_{p}) &=& \zeta ^{\sharp\sharp,\mathfrak{m}}(n_{1}, \ldots, n_{p}) &-\zeta ^{\sharp\sharp,\mathfrak{m}}_{\mid n_{1}\mid}(n_{2}, \ldots, n_{p}) \\ \end{array}$$ \texttt{Notation:} Beware: the $\mlq + \mrq$ here, acting on $n_{i}\in\mathbb{Z}^{\ast}$, sums the absolute values while the signs are multiplied: $$n_{1} \mlq + \mrq \cdots \mlq + \mrq n_{i}
\rightarrow sign(n_{1}\cdots n_{i})( \vert n_{1}\vert +\cdots + \vert n_{i} \vert).$$ \end{itemize} \texttt{Examples:} Expressing them as $\mathbb{Q}$-linear combinations of motivic Euler sums\footnote{To get rid of the $0$ in front of the MZV, as in the last example, we use the shuffle regularisation $\ref{eq:shufflereg}$.}: $$\begin{array}{lll} \zeta^{\star,\mathfrak{m}}(2,\overline{1},3) & = & -I^{\mathfrak{m}}(0;-1,0,-\star, \star,0,0; 1) \\ & = & \zeta^{\mathfrak{m}}(2,\overline{1},3)+ \zeta^{\mathfrak{m}}(\overline{3},3)+ \zeta^{\mathfrak{m}}(2,\overline{4})+\zeta^{\mathfrak{m}}(\overline{6}) \\ \zeta^{\sharp,\mathfrak{m}}(2,\overline{1},3) &=& - 2 I^{\mathfrak{m}}(0;-1,0,-\sharp, \sharp,0,0; 1) \\ &=& 8\zeta^{\mathfrak{m}}(2,\overline{1},3)+ 4\zeta^{\mathfrak{m}}(\overline{3},3)+ 4\zeta^{\mathfrak{m}}(2,\overline{4})+2\zeta^{\mathfrak{m}}(\overline{6})\\ \zeta^{\star\star,\mathfrak{m}}(2,\overline{1},3) &=& -I^{\mathfrak{m}}(0;-\star,0,-\star, \star,0,0; 1) \\ &=& \zeta^{\star,\mathfrak{m}}(2,\overline{1},3)+ \zeta^{\star,\mathfrak{m}}_{2}(\overline{1},3)+\zeta^{\star,\mathfrak{m}}_{3}(3) \\ &=& \zeta^{\star,\mathfrak{m}}(2, \overline{1}, 3)+\zeta^{\star,\mathfrak{m}}(\overline{3},3)+3 \zeta^{\star,\mathfrak{m}}(\overline{2},4)+6\zeta^{\star,\mathfrak{m}}(1, 5)-10\zeta^{\star,\mathfrak{m}}(6)\\ &=& 11\zeta^{\mathfrak{m}}(\overline{6})+2\zeta^{\mathfrak{m}}(\overline{3}, 3)+\zeta^{\mathfrak{m}}(2, \overline{4})+\zeta^{\mathfrak{m}}(2,\overline{1}, 3)+3\zeta^{\mathfrak{m}}(\overline{2}, 4)+6\zeta^{\mathfrak{m}}(\overline{1}, 5)-10\zeta^{\mathfrak{m}}(6)\\ \end{array}$$ \paragraph{Stuffle.} One of the most famous relations between cyclotomic MZV, the \textit{stuffle} relation, coming from the multiplication of series, has been proven to be \textit{motivic}, i.e. true for cyclotomic MMZV, which was a priori not obvious.
\footnote{The stuffle for these motivic iterated integrals can be deduced from works by Goncharov on mixed Hodge structures, but was also proved in a direct way by G. Racinet, in his thesis, or I. Souderes in $\cite{So}$ via blow-ups. Remark that the shuffle relation, coming from the iterated integral representation, is clearly \textit{motivic}.} In particular: \begin{lemm} $$\zeta^{\mathfrak{m}}\left( a_{1}, \ldots, a_{r} \atop \alpha_{1}, \ldots, \alpha_{r}\right) \zeta^{\mathfrak{m}}\left( b_{1}, \ldots, b_{s} \atop \beta_{1}, \ldots, \beta_{s}\right)=\sum_{ \left( c_{j} \atop \gamma_{j}\right) = \left( a_{i} \atop \alpha_{i} \right) ,\left( b_{i'} \atop \beta_{i'}\right) \text{ or }\left( a_{i}+b_{i'} \atop \alpha_{i}\beta_{i'}\right) \atop \text{order } (a_{i}), (b_{i}) \text{ preserved} }\zeta^{\mathfrak{m}}\left( c_{1}, \ldots, c_{m} \atop \gamma_{1}, \ldots, \gamma_{m} \right) .$$ $$\zeta^{\star,\mathfrak{m}}\left( a_{1}, \ldots, a_{r} \atop \alpha_{1}, \ldots, \alpha_{r}\right) \zeta^{\star, \mathfrak{m}}\left( b_{1}, \ldots, b_{s} \atop \beta_{1}, \ldots, \beta_{s}\right)=\sum_{ \left( c_{j} \atop \gamma_{j}\right) = \left( a_{i} \atop \alpha_{i} \right) ,\left( b_{i'} \atop \beta_{i'}\right) \text{ or }\left( a_{i}+b_{i'} \atop \alpha_{i}\beta_{i'}\right) \atop \text{order } (a_{i}), (b_{i}) \text{ preserved} }(-1)^{r+s+m}\zeta^{\star, \mathfrak{m}}\left( c_{1}, \ldots, c_{m} \atop \gamma_{1}, \ldots, \gamma_{m} \right) .$$ $$\zeta^{\sharp , \mathfrak{m}}\left( \textbf{a} \atop \boldsymbol{\alpha} \right) \zeta^{\sharp, \mathfrak{m}}\left( \textbf{ b } \atop \boldsymbol{\beta} \right)=\sum_{ \left( c_{j} \atop \gamma_{j}\right) = \left( a_{i}+\sum_{l=1}^{k} a_{i+l} +b_{i'+l} \atop \alpha_{i}\prod_{l=1}^{k}\alpha_{i+l}\beta_{i'+l}\right) \text{ or } \left( b_{i'}+\sum_{l=1}^{k} a_{i+l} +b_{i'+l} \atop \beta_{i'}\prod_{l=1}^{k}\alpha_{i+l}\beta_{i'+l}\right) \atop k\geq 0, \text{ order } (a_{i}), (b_{i}) \text{ preserved}}(-1)^{\frac{r+s-m}{2}}\zeta^{\sharp,
\mathfrak{m}}\left( c_{1}, \ldots, c_{m} \atop \gamma_{1}, \ldots, \gamma_{m} \right) .$$ \end{lemm} \noindent\textsc{Remarks:} \begin{itemize} \item[$\cdot$] In the depth graded, the stuffle corresponds to shuffling the sequences $\left( \boldsymbol{a} \atop \boldsymbol{\alpha} \right) $ and $\left( \boldsymbol{b} \atop \boldsymbol{\beta} \right) $. \item[$\cdot$] Other identities mixing the two versions could also be stated, such as $$\zeta^{\star, \mathfrak{m}}\left( a_{1}, \ldots, a_{r} \atop \alpha_{1}, \ldots, \alpha_{r}\right) \zeta^{\mathfrak{m}}\left( b_{1}, \ldots, b_{s} \atop \beta_{1}, \ldots, \beta_{s}\right)=\sum_{ \left( c_{j} \atop \gamma_{j}\right) = \left( a_{i} \atop \alpha_{i} \right) ,\left( b_{i'} \atop \beta_{i'}\right) \text{ or }\left( (\sum_{l=1}^{k} a_{i+l})+b_{i'} \atop (\prod_{l=1}^{k}\alpha_{i+l})\beta_{i'}\right) \atop k \geq 1, \text{order } (a_{i}), (b_{i}) \text{ preserved} }\zeta^{\mathfrak{m}}\left( c_{1}, \ldots, c_{m} \atop \gamma_{1}, \ldots, \gamma_{m} \right) .$$ \end{itemize} \section{Relations in $\mathcal{L}$} \subsection{Antipode relation} In this part, we are interested in some antipodal relations for motivic Euler sums in the coalgebra $\mathcal{L}$, i.e. modulo products. To explain quickly where they come from, let us go back to two combinatorial Hopf algebra structures.\\ \\ First recall that if $A$ is a graded connected bialgebra, there exists a unique antipode $S$ (leading to a Hopf algebra structure)\footnote{It comes from the usual required relation for the antipode in a Hopf algebra, but because it is graded and connected, we can apply the formula recursively to construct it, in a unique way.
}, which is the graded map defined by: \begin{equation} \label{eq:Antipode} S(x)= -x-\sum S(x_{(1)}) \cdot x_{(2)}, \end{equation} where $\cdot$ is the product and using Sweedler notations for the coaction: $$\Delta (x)= 1\otimes x+ x\otimes 1+ \sum x_{(1)}\otimes x_{(2)}= \Delta'(x)+ 1\otimes x+ x\otimes 1 .$$ Hence, in the quotient $A/ A_{>0}\cdot A_{>0} $: $$S(x) \equiv -x . $$ \subsubsection{The $\shuffle$ Hopf algebra} Let $X=\lbrace a_{1},\cdots, a_{n} \rbrace$ be an alphabet and $A_{\shuffle}\mathrel{\mathop:}=\mathbb{Q} \langle X^{\times} \rangle$ the $\mathbb{Q}$-vector space generated by words on $X$, i.e. non-commutative polynomials in the $a_{i}$. It is easy to see that $A_{\shuffle}$ is a Hopf algebra with the shuffle product $\shuffle$, the deconcatenation coproduct $\Delta_{D}$ and antipode $S_{\shuffle}$:\nomenclature{$\Delta_{D}$}{the deconcatenation coproduct} \begin{equation} \label{eq:shufflecoproduct} \Delta_{D}(a_{i_{1}}\cdots a_{i_{n}})= \sum_{k=0}^{n} a_{i_{1}}\cdots a_{i_{k}} \otimes a_{i_{k+1}} \cdots a_{i_{n}}. \end{equation} \begin{equation} \label{eq:shuffleantipode} S_{\shuffle} (a_{i_{1}} \cdots a_{i_{n}})= (-1)^{n} a_{i_{n}} \cdots a_{i_{1}}. \end{equation} $A_{\shuffle}$ is even a connected graded Hopf algebra, called the \textit{shuffle Hopf algebra}, the grading coming from the degree of the polynomials.
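As a quick sanity check of these definitions (a two-letter verification added here for illustration), the recursion $(\ref{eq:Antipode})$ recovers $(\ref{eq:shuffleantipode})$: for a word $a_{i_{1}}a_{i_{2}}$, the reduced coproduct is $\Delta'(a_{i_{1}}a_{i_{2}})=a_{i_{1}}\otimes a_{i_{2}}$ and $S_{\shuffle}(a_{i_{1}})=-a_{i_{1}}$, hence
$$S_{\shuffle}(a_{i_{1}}a_{i_{2}})= -a_{i_{1}}a_{i_{2}}- S_{\shuffle}(a_{i_{1}})\shuffle a_{i_{2}}= -a_{i_{1}}a_{i_{2}}+\left( a_{i_{1}}a_{i_{2}}+a_{i_{2}}a_{i_{1}}\right) = (-1)^{2}\, a_{i_{2}}a_{i_{1}}.$$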
By the equivalence of categories between $\mathbb{Q}$-Hopf algebras and affine group schemes over $\mathbb{Q}$, it corresponds to: \begin{equation} \label{eq:gpschshuffle} G=\text{Spec} A_{\shuffle} : R \rightarrow \text{Hom}(\mathbb{Q} \langle X \rangle, R)=\lbrace S\in R\langle\langle a_{i} \rangle\rangle \mid \Delta_{\shuffle} S= S\widehat{\otimes} S, \epsilon(S)=1 \rbrace, \end{equation} where $\Delta_{\shuffle}$ is the coproduct dual to the product $\shuffle$:\nomenclature{$\Delta_{\shuffle}$}{the $\shuffle$ coproduct} $$\Delta_{\shuffle}(a_{i_{1}}\cdots a_{i_{n}})= \left( 1\otimes a_{i_{1}}+ a_{i_{1}}\otimes 1\right) \cdots \left( 1\otimes a_{i_{n}}+ a_{i_{n}}\otimes 1\right) .$$ Let us now restrict to $X=\lbrace 0,\mu_{N}\rbrace$; our main interest in this chapter is $N=2$, but it can be extended to other roots of unity. The shuffle relation for motivic iterated integrals relative to $\mu_{N}$ states: \begin{equation}\label{eq:shuffleim} I^{\mathfrak{m}}(0; \cdot ; 1) \text{ is a morphism of Hopf algebras from } A_{\shuffle} \text{ to } (\mathbb{R},\times): \end{equation} $$I^{\mathfrak{m}}(0; w ; 1) I^{\mathfrak{m}}(0; w' ; 1)= I^{\mathfrak{m}}(0; w\shuffle w' ;1) \text{ with } w,w' \text{ words in } X.$$ \begin{lemm}[\textbf{Antipode $\shuffle$}] In the coalgebra $\mathcal{L}$, with $w$ the weight, $\bullet$ standing for MMZV$_{\mu_{N}}$, or $\star\star$ ($N=2$) resp.
$\sharp\sharp$-version ($N=2$): $$\zeta^{\bullet,\mathfrak{l}}_{n-1}\left( n_{1}, \ldots, n_{p} \atop \epsilon_{1}, \ldots, \epsilon_{p} \right) \equiv (-1)^{w+1}\zeta^{\bullet,\mathfrak{l}}_{n_{p}-1}\left( n_{p-1}, \ldots, n_{1},n \atop \epsilon_{p-1}^{-1}, \ldots, \epsilon_{1}^{-1}, \epsilon \right) \text{ where } \epsilon\mathrel{\mathop:}=\epsilon_{1}\cdot\ldots\cdot\epsilon_{p}.$$ \end{lemm} \noindent This formula, stated for any $N$, is slightly simpler in the case $N=1,2$, since $n_{i}\in\mathbb{Z}^{\ast}$: \begin{framed} \begin{equation}\label{eq:antipodeshuffle2} \textsc{Antipode } \shuffle \quad : \begin{array}{l} \zeta^{\bullet,\mathfrak{l}}_{n-1}\left( n_{1}, \ldots, n_{p} \right) \equiv(-1)^{w+1}\zeta^{\bullet,\mathfrak{l}}_{\mid n_{p}\mid -1}\left( n_{p-1}, \ldots, n_{1},sign(n_{1}\cdots n_{p}) n \right)\\ \text{ } \\ I^{\mathfrak{l}}(0;X;\epsilon)\equiv (-1)^{w} I^{\mathfrak{l}}(\epsilon;\widetilde{X};0) \equiv (-1)^{w+1} I^{\mathfrak{l}}(0; \widetilde{X}; \epsilon) \end{array}. \end{equation} \end{framed} Here $X$ is any word in $0,\pm 1$ or $0, \pm \star$ or $0, \pm\sharp$, and $\widetilde{X}$ denotes the \textit{reversed} word.
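For instance (a direct substitution into the framed identity, added for illustration), take $p=1$, $n=1$ and $n_{1}=2$, so $w=2$; the first line of $(\ref{eq:antipodeshuffle2})$ gives
$$\zeta^{\bullet,\mathfrak{l}}(2)\equiv (-1)^{3}\, \zeta^{\bullet,\mathfrak{l}}_{1}(1)= -\zeta^{\bullet,\mathfrak{l}}_{1}(1),$$
the right-hand side being understood via the shuffle regularization $(\ref{eq:shufflereg})$.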
\begin{proof} For motivic iterated integrals, as said above: $$ S_{\shuffle} (I^{\mathfrak{m}}(0; a_{1}, \ldots, a_{n}; 1))= (-1)^{n}I^{\mathfrak{m}}(0; a_{n}, \ldots, a_{1}; 1),$$ which, in terms of the MMZV$_{\mu_{N}}$ notation, is: $$S_{\shuffle}\left( \zeta^{\bullet,\mathfrak{l}}_{n-1}\left( n_{1}, \ldots, n_{p} \atop \epsilon_{1}, \ldots, \epsilon_{p} \right) \right) \equiv (-1)^{w}\zeta^{\bullet,\mathfrak{l}}_{n_{p}-1}\left( n_{p-1}, \ldots, n_{1},n \atop \epsilon_{p-1}^{-1}, \ldots, \epsilon_{1}^{-1}, \epsilon \right) \text{ where } \epsilon\mathrel{\mathop:}=\epsilon_{1}\cdot\ldots\cdot\epsilon_{p}.$$ Then, if we look at the antipode recursive formula $\eqref{eq:Antipode}$ in the coalgebra $\mathcal{L}$, for $a_{i}\in \lbrace 0, \mu_{N} \rbrace$: $$ S_{\shuffle} (I^{\mathfrak{l}}(0; a_{1}, \ldots, a_{n}; 1))\equiv - I^{\mathfrak{l}}(0; a_{1}, \ldots, a_{n}; 1).$$ This leads to the lemma above. The $\shuffle$-antipode relation can also be seen at the level of iterated integrals as the path composition modulo products followed by a reversal of the path. \end{proof} \subsubsection{The $\ast$ Hopf algebra} Let $Y=\lbrace \cdots, y_{-n}, \ldots, y_{-1}, y_{1},\cdots, y_{n}, \cdots \rbrace$ be an infinite alphabet and $A_{\ast}\mathrel{\mathop:}=\mathbb{Q} \langle Y^{\times} \rangle$ the non-commutative polynomials in the $y_{i}$ with rational coefficients, with $y_{0}=1$ the empty word.
Similarly, it is a graded connected Hopf algebra called the \textit{stuffle Hopf algebra}, with the stuffle product $\ast$ and the following coproduct:\footnote{For the $\shuffle$ algebra, we had to use the notation in terms of iterated integrals, with $0,\pm 1$, but for the $\ast$ stuffle relation, it is more natural to use the Euler sums notation, which corresponds to $y_{n_{i}}, n_{i}\in\mathbb{Z}$.} \begin{equation} \label{eq:stufflecoproduct} \Delta_{D\ast}(y_{n_{1}} \cdots y_{n_{p}})= \sum_{i=0}^{p} y_{n_{1}} \cdots y_{n_{i}}\otimes y_{n_{i+1}} \cdots y_{n_{p}}, \quad n_{i}\in \mathbb{Z}^{\ast}. \end{equation} \texttt{Nota Bene}: Remark that here we restricted to Euler sums, $N=2$, but it could be extended to other roots of unity, for which the stuffle relation has been stated in $\S 4.1$.\\ The completed dual is the Hopf algebra of series $\mathbb{Q}\left\langle \left\langle Y \right\rangle \right\rangle $ with the coproduct: $$\Delta_{\ast}(y_{n})= \sum_{ k =0 \atop sgn(n)=\epsilon_{1}\epsilon_{2}}^{\mid n \mid} y_{\epsilon_{1} k} \otimes y_{\epsilon_{2}( n-k)}.$$ Now, let us introduce the notations:\footnote{Here $\star$, resp. $\sharp$, refers naturally to the Euler $\star$, resp. $\sharp$, sums, as we see in the next lemma.
Beware, it is not a $\ast$ homomorphism.} $$(y_{n_{1}} \cdots y_{n_{p}})^{\star} \mathrel{\mathop:}= \sum_{1=i_{0}< i_{1} < \cdots < i_{k}\leq i_{k+1}=p \atop k\geq 0} y_{n_{i_{0}}\mlq + \mrq\cdots \mlq + \mrq n_{i_{1}-1}} \cdots y_{n_{i_{j}}\mlq + \mrq\cdots \mlq + \mrq n_{i_{j+1}-1}} \cdots y_{n_{i_{k}}\mlq + \mrq \cdots \mlq + \mrq n_{i_{k+1}}}.$$ $$(y_{n_{1}} \cdots y_{n_{p}})^{\sharp} \mathrel{\mathop:}= \sum_{1=i_{0}< i_{1} < \cdots < i_{k}\leq i_{k+1}=p \atop k\geq 0} 2^{k+1} y_{n_{i_{0}}\mlq + \mrq\cdots \mlq + \mrq n_{i_{1}-1}} \cdots y_{n_{i_{j}} \mlq + \mrq \cdots \mlq + \mrq n_{i_{j+1}-1}} \cdots y_{n_{i_{k}}\mlq + \mrq \cdots \mlq + \mrq n_{i_{k+1}}},$$ where $n_{i}\in\mathbb{Z}^{\ast}$ and the operation $\mlq + \mrq$ indicates that signs are multiplied whereas absolute values are summed. It is straightforward to check that: \begin{equation} \Delta_{D\ast}(w^{\star})=(\Delta_{D\ast}(w))^{\star} , \quad \text{ and } \quad \Delta_{D\ast}(w^{\sharp})=(\Delta_{D\ast}(w))^{\sharp}. \end{equation} As said above, the stuffle relation is motivic: \begin{center} $\zeta^{\mathfrak{m}}(\cdot)$ is a morphism of Hopf algebras from $A_{\ast}$ to $(\mathbb{R},\times)$.
\end{center} \begin{lemm}[\textbf{Antipode $\ast$}] In the coalgebra $\mathcal{L}$, with $n_{i}\in\mathbb{Z}^{\ast}$: $$\zeta^{\mathfrak{l}}_{n-1}(n_{1}, \ldots, n_{p}) \equiv (-1)^{p+1}\zeta^{\star,\mathfrak{l}}_{n-1}(n_{p}, \ldots, n_{1}).$$ $$\zeta^{\sharp,\mathfrak{l}}_{n-1}(n_{1}, \ldots, n_{p})\equiv (-1)^{p+1}\zeta^{\sharp,\mathfrak{l}}_{n-1}(n_{p}, \ldots, n_{1}).$$ \end{lemm} \begin{proof} By recursion, using the formula $\eqref{eq:Antipode}$, and the following identity (left to the reader): $$\sum_{i=0}^{p-1} (-1)^{i}(y_{n_{i}} \cdots y_{n_{1}})^{\star} \ast (y_{n_{i+1}} \cdots y_{n_{p}})= -(-1)^{p} (y_{n_{p}} \cdots y_{n_{1}})^{\star}, $$ we deduce the antipode $S_{\ast}$: $$S_{\ast} (y_{n_{1}} \cdots y_{n_{p}})= (-1)^{p} (y_{n_{p}} \cdots y_{n_{1}})^{\star} .$$ Similarly: $$S_{\ast}((y_{n_{1}} \cdots y_{n_{p}})^{\sharp})=-\sum_{i=0}^{p-1} S_{\ast}((y_{n_{1}} \cdots y_{n_{i}})^{\sharp}) \ast (y_{n_{i+1}} \cdots y_{n_{p}})^{\sharp}$$ $$=-\sum_{i=0}^{p-1} (-1)^{i}(y_{n_{i}} \cdots y_{n_{1}})^{\sharp} \ast (y_{n_{i+1}} \cdots y_{n_{p}})^{\sharp}=(-1)^{p}(y_{n_{p}} \cdots y_{n_{1}})^{\sharp}.$$ Then, we deduce the lemma, since $\zeta^{\mathfrak{m}}(\cdot)$ is a morphism of Hopf algebras. Moreover, the formula $\eqref{eq:Antipode}$ in the coalgebra $\mathcal{L}$ gives that: $$S(\zeta^{\mathfrak{l}}(\textbf{s}))\equiv -\zeta^{\mathfrak{l}}(\textbf{s}).$$ \end{proof} \subsection{Hybrid relation in $\mathcal{L}$} In this part, we look at a new relation, called the \textit{hybrid relation}, between motivic Euler sums in the coalgebra $\mathcal{L}$, i.e. modulo products, which comes from the motivic version of the octagon relation (for $N>1$, cf.
$\cite{EF}$) \footnote{\begin{figure}[H] \centering \includegraphics[]{hexagon.pdf} \caption{For $N=1$, Hexagon relation: $e^{i\pi e_{0}} \Phi(e_{\infty}, e_{0}) e^{i\pi e_{\infty}} \Phi(e_{1},e_{\infty}) e^{i\pi e_{1}}\Phi( e_{0},e_{1})=1.$} \label{fig:hexagon} \end{figure}} \begin{figure}[H] \centering \includegraphics[]{octagon.pdf} \caption{Octagon relation, $N>1$:\\ $ \Phi(e_{0}, e_{1}, \ldots, e_{n}) e^{\frac{2 i\pi e_{1}}{N}} \Phi(e_{\infty}, e_{1}, e_{n}, \ldots, e_{2})^{-1} e^{\frac{2 i\pi e_{\infty}}{N}}\Phi(e_{\infty}, e_{n}, \ldots, e_{1}) e^{\frac{2i\pi e_{n}}{N}}\Phi( e_{0},e_{n}, e_{1}, \ldots, e_{n-1})^{-1}e^{\frac{2 i\pi e_{0}}{N}}$\\ $=1$} \label{fig:octagon} \end{figure} \noindent This relation is motivic, and hence valid for the motivic Drinfeld associator $\Phi^{\mathfrak{m}}$ ($\ref{eq:associator}$), replacing $2 i \pi$ by the Lefschetz motivic period $\mathbb{L}^{\mathfrak{m}}$. \\ Let us focus on the case $N=2$ and recall that the space of motivic periods of $\mathcal{MT}\left( \mathbb{Z}[\frac{1}{2}]\right)$ decomposes as (cf. $\ref{eq:periodgeomr}$): \begin{equation}\label{eq:perioddecomp2} \mathcal{P}_{\mathcal{MT}\left( \mathbb{Z}[\frac{1}{2}]\right)}^{\mathfrak{m}}= \mathcal{H}^{2} \oplus \mathcal{H}^{2}. \mathbb{L}^{\mathfrak{m}}, \quad \text{ where } \begin{array}{l} \mathcal{H}^{2} \text{ is } \mathcal{F}_{\infty} \text{ invariant} \\ \mathcal{H}^{2}. \mathbb{L}^{\mathfrak{m}} \text{ is } \mathcal{F}_{\infty} \text{ anti-invariant} \end{array}.
\end{equation} For the motivic Drinfeld associator, seeing the path in the Riemann sphere, it becomes: \begin{figure}[H] \centering \includegraphics[]{octagon2.pdf} \caption{Octagon relation, $N=2$ with $e_{0}+e_{1}+e_{-1}+e_{\infty}=0$:\\ $e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{-1}}{2}} \Phi^{\mathfrak{m}}(e_{0}, e_{-1},e_{1})^{-1} e^{ \frac{\mathbb{L}^{\mathfrak{m}}e_{0}}{2}} \Phi^{\mathfrak{m}}(e_{0},e_{1},e_{-1}) e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{1}}{2}}\Phi^{\mathfrak{m}}( e_{\infty},e_{1},e_{-1})^{-1} e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{\infty}}{2}}\Phi^{\mathfrak{m}}( e_{\infty},e_{-1},e_{1}) =1.$} \label{fig:octagon2} \end{figure} Let $X= \mathbb{P}^{1}\diagdown \left\lbrace 0, \pm 1, \infty \right\rbrace $. The action of the \textit{real Frobenius} $\boldsymbol{\mathcal{F}_{\infty}}$ on $X(\mathbb{C})$ is induced by complex conjugation. The real Frobenius acts on the Betti realization $\pi^{B}(X (\mathbb{C}))$\footnote{ It is compatible with the groupoid structure of $\pi^{B}$, and the local monodromy. }, and induces an involution on motivic periods, compatible with the Galois action: $$\mathcal{F}_{\infty}: \mathcal{P}_{\mathcal{MT}(\mathbb{Z}[\frac{1}{2}])}^{\mathfrak{m}} \rightarrow\mathcal{P}_{\mathcal{MT}(\mathbb{Z}[\frac{1}{2}])}^{\mathfrak{m}}.$$ The Lefschetz motivic period $\mathbb{L}^{\mathfrak{m}}$ is anti-invariant by $\mathcal{F}_{\infty}$: $$\mathcal{F}_{\infty} \mathbb{L}^{\mathfrak{m}}= -\mathbb{L}^{\mathfrak{m}},$$ whereas terms corresponding to real paths in Figure $\ref{fig:octagon2}$, such as Drinfeld associator terms, are obviously invariant by $\mathcal{F}_{\infty}$.\\ \\ The linearized $\mathcal{F}_{\infty}$-anti-invariant part of this octagon relation leads to the following hybrid relation. 
\begin{theo}\label{hybrid} In the coalgebra $\mathcal{L}^{2}$, with $n_{i}\in \mathbb{Z}^{\ast}$, $w$ the weight: $$\zeta^{\mathfrak{l}}_{k}\left( n_{0}, n_{1},\ldots, n_{p} \right) + \zeta^{\mathfrak{l}}_{\mid n_{0} \mid +k}\left( n_{1}, \ldots, n_{p} \right) \equiv (-1)^{w+1}\left( \zeta^{\mathfrak{l}}_{k}\left( n_{p}, \ldots, n_{1}, n_{0} \right) + \zeta^{\mathfrak{l}}_{k+\mid n_{p}\mid}\left( n_{p-1}, \ldots, n_{1},n_{0} \right)\right).$$ Equivalently, in terms of motivic iterated integrals, for $X$ any word in $\lbrace 0, \pm 1 \rbrace$, with $\widetilde{X}$ the reversed word, we obtain both: $$I^{\mathfrak{l}} (0; 0^{k}, \star, X; 1)\equiv I^{\mathfrak{l}} (0; X, \star, 0^{k}; 1)\equiv (-1)^{w+1} I^{\mathfrak{l}} (0; 0^{k}, \star, \widetilde{X}; 1), $$ $$I^{\mathfrak{l}} (0; 0^{k}, -\star, X; 1)\equiv I^{\mathfrak{l}} (0;- X, -\star, 0^{k}; 1)\equiv (-1)^{w+1} I^{\mathfrak{l}} (0; 0^{k}, -\star, -\widetilde{X}; 1). $$ \end{theo} The proof is given below, first for $k=0$, using the octagon relation (Figure $\ref{fig:octagon2}$). The generalization to any $k >0$ is deduced directly from the shuffle regularization $(\ref{eq:shufflereg})$.\\ \\ \textsc{Remarks}: \begin{itemize} \item[$\cdot$] This theorem implies notably the famous \textit{depth-drop phenomenon} when the weight and the depth do not have the same parity (cf. Corollary $\ref{hybridc}$). \item[$\cdot$] Equivalently, this statement is true for $X$ any word in $\lbrace 0, \pm \star \rbrace$. Recall that ($\ref{eq:miistarsharp}$), by linearity: $$ I^{\mathfrak{m}}(\ldots, \pm \star, \ldots)\mathrel{\mathop:}= I^{\mathfrak{m}}(\ldots, \pm 1, \ldots) - I^{\mathfrak{m}}(\ldots, 0, \ldots).$$ \item[$\cdot$] The point of view adopted by Francis Brown in $\cite{Br3}$, and his use of commutative polynomials (also seen in Ecalle's work), can be applied in the coalgebra $\mathcal{L}$ and leads to a new proof of Theorem $\ref{hybrid}$ in the case of MMZV, i.e.
$N=1$, sketched in Appendix $A.4$; it uses the stuffle relation and the antipode shuffle. Unfortunately, the generalization of this proof to motivic Euler sums is not clear, because of the commutative polynomial setting. \end{itemize} Since the Antipode $\ast$ relation expresses $\zeta^{\mathfrak{l}}_{n-1}(n_{1}, \ldots, n_{p})+(-1)^{p} \zeta^{\mathfrak{l}}_{n-1}(n_{p}, \ldots, n_{1})$ in terms of smaller depth (cf. Lemma $4.2.2$), when the weight and the depth do not have the same parity, it turns out that a (motivic) Euler sum can be expressed in terms of smaller depth:\footnote{Erik Panzer recently found a new proof of this depth-drop result for MZV at roots of unity, which appears as a special case of some functional equations of polylogarithms in several variables. } \begin{coro}\label{hybridc} If $w+p$ is odd, a motivic Euler sum in $\mathcal{L}$ is reducible to smaller depth: $$2\zeta^{\mathfrak{l}}_{n-1}(n_{1}, \ldots, n_{p}) \equiv$$ $$-\zeta^{\mathfrak{l}}_{n+\mid n_{1}\mid -1}(n_{2}, \ldots, n_{p})+(-1)^{p} \zeta^{\mathfrak{l}}_{n+\mid n_{p}\mid -1}(n_{p-1}, \ldots, n_{1})+\sum_{\circ=+ \text{ or } ,\atop \text{at least one } +} (-1)^{p+1} \zeta^{\mathfrak{l}}_{n-1}(n_{p}\circ \cdots \circ n_{1}).$$ \end{coro} \paragraph{Proof of Theorem $\ref{hybrid}$} First, the octagon relation (Figure $\ref{fig:octagon2}$) is equivalent to: \begin{lemm} In $\mathcal{P}_{\mathcal{MT}\left( \mathbb{Z}[\frac{1}{2}]\right)}^{\mathfrak{m}}\left\langle \left\langle e_{0}, e_{1}, e_{-1}\right\rangle \right\rangle $, with $e_{0} + e_{1} + e_{-1} +e_{\infty} =0$: \begin{equation}\label{eq:octagon21} \Phi^{\mathfrak{m}}(e_{0}, e_{1},e_{-1}) e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{0}}{2}} \Phi^{\mathfrak{m}}(e_{-1}, e_{0},e_{\infty}) e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{-1}}{2}} \Phi^{\mathfrak{m}}(e_{\infty}, e_{-1},e_{1}) e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{\infty}}{2}} \Phi^{\mathfrak{m}}(e_{1}, e_{\infty},e_{0}) e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{1}}{2}} =1. \end{equation} Hence, the linearized
octagon relation is: \begin{multline}\label{eq:octagonlin} - e_{0} \Phi^{\mathfrak{l}}(e_{-1}, e_{0},e_{\infty})+ \Phi^{\mathfrak{l}}(e_{-1}, e_{0},e_{\infty})e_{0} +(e_{0}+e_{-1}) \Phi^{\mathfrak{l}}(e_{\infty}, e_{-1},e_{1}) - \Phi^{\mathfrak{l}}(e_{\infty}, e_{-1},e_{1}) (e_{0}+e_{-1})\\ - e_{1} \Phi^{\mathfrak{l}}(e_{1}, e_{\infty},e_{0}) + \Phi^{\mathfrak{l}}(e_{1}, e_{\infty},e_{0}) e_{1} \equiv 0. \end{multline} \end{lemm} \begin{proof} \begin{itemize} \item[$\cdot$] Let us first remark that: $$ \Phi^{\mathfrak{m}}(e_{0}, e_{1},e_{-1})= \Phi^{\mathfrak{m}}(e_{1}, e_{0},e_{\infty})^{-1} .$$ Indeed, the coefficient in the series $\Phi^{\mathfrak{m}}(e_{1}, e_{0},e_{\infty})$ of a word $e_{0}^{a_{0}} e_{\eta_{1}} e_{0}^{a_{1}} \cdots e_{\eta_{r}} e_{0}^{a_{r}}$, where $\eta_{i}\in \lbrace\pm 1 \rbrace$, is (cf. $\S 4.6$): $$ I^{\mathfrak{m}} \left(0; (\omega_{1}-\omega_{-1})^{a_{0}} (-\omega_{\mu_{1}}) (\omega_{1}-\omega_{-1})^{a_{1}} \cdots (-\omega_{\mu_{r}})(\omega_{1}-\omega_{-1})^{a_{r}} ;1 \right) \texttt{ with } \mu_{i}\mathrel{\mathop:}= \left\lbrace \begin{array}{ll} -\star& \texttt{if } \eta_{i}=1\\ -1 & \texttt{if } \eta_{i}=-1 \end{array} \right. .$$ Let us introduce the following homography $\phi_{\tau\sigma}$ (cf. Annexe $(\ref{homography2})$): $$\phi_{\tau\sigma}= \phi_{\tau\sigma}^{-1}: t \mapsto \frac{1-t}{1+t} :\left\lbrace \begin{array}{l} -\omega_{\star}\mapsto \omega_{\star} \\ -\omega_{1}\mapsto \omega_{-\star}\\ \omega_{-1}-\omega_{1} \mapsto -\omega_{0}\\ \omega_{-1} \mapsto -\omega_{-1}\\ \omega_{-\star} \mapsto -\omega_{1} \end{array} \right..$$ If we apply $\phi_{\tau\sigma}$ to the motivic iterated integral above, it gives: $ I^{\mathfrak{m}} \left(1; \omega_{0}^{a_{0}} \omega_{\eta_{1}} \omega_{0}^{a_{1}} \cdots \omega_{\eta_{r}} \omega_{0}^{a_{r}} ;0 \right)$.
Hence, summing over words $w$ in $e_{0},e_{1},e_{-1}$: $$ \Phi^{\mathfrak{m}}(e_{1}, e_{0},e_{\infty})= \sum I^{\mathfrak{m}}(1; w; 0) w.$$ Therefore: $$\Phi^{\mathfrak{m}}(e_{0}, e_{1},e_{-1})\Phi^{\mathfrak{m}}(e_{1}, e_{0},e_{\infty})= \sum_{w, w=uv} I^{\mathfrak{m}}(0; u; 1) I^{\mathfrak{m}}(1; v; 0) w= 1.$$ We used the composition formula for iterated integrals to conclude, since for $w$ non-empty, $\sum_{w=uv} I^{\mathfrak{m}}(0; u; 1) I^{\mathfrak{m}}(1; v; 0)= I^{\mathfrak{m}}(0; w; 0) =0$.\\ Similarly: $$\Phi^{\mathfrak{m}}(e_{0}, e_{-1},e_{1})= \Phi^{\mathfrak{m}}(e_{-1}, e_{0},e_{\infty})^{-1} , \quad \text{ and } \quad \Phi^{\mathfrak{m}}(e_{\infty}, e_{1},e_{-1})= \Phi^{\mathfrak{m}}(e_{1}, e_{\infty},e_{0})^{-1}.$$ The identity $\ref{eq:octagon21}$ follows from Figure $\ref{fig:octagon2}$.\\ \item[$\cdot$] Let us consider the two paths on the Riemann sphere, $\gamma$ and its conjugate $\overline{\gamma}$: \footnote{Path $\gamma$ corresponds to the cycle $\sigma$, $1 \mapsto \infty \mapsto -1 \mapsto 0 \mapsto 1$ (cf. Annexe $\ref{homography2}$).
Beware: in the figure, the positions of the two paths are not completely accurate, in order to distinguish them.} \\ \\ \includegraphics[]{octagon3.pdf}\\ Applying $(id-\mathcal{F}_{\infty})$ to the octagon identity $\ref{eq:octagon21}$ \footnote{The identity $\ref{eq:octagon21}$ corresponds to the path $\gamma$ whereas applying $\mathcal{F}_{\infty}$ to the path $\gamma$ corresponds to the path $\overline{\gamma}$ represented.} leads to: \begin{small} \begin{multline}\label{eq:octagon22} \Phi^{\mathfrak{m}}(e_{0}, e_{1},e_{-1}) e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{0}}{2}} \Phi^{\mathfrak{m}}(e_{-1}, e_{0},e_{\infty}) e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{-1}}{2}} \Phi^{\mathfrak{m}}(e_{\infty}, e_{-1},e_{1}) e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{\infty}}{2}} \Phi^{\mathfrak{m}}(e_{1}, e_{\infty},e_{0}) e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{1}}{2}}\\ -\Phi^{\mathfrak{m}}(e_{0}, e_{1},e_{-1}) e^{-\frac{\mathbb{L}^{\mathfrak{m}} e_{0}}{2}} \Phi^{\mathfrak{m}}(e_{-1}, e_{0},e_{\infty}) e^{-\frac{\mathbb{L}^{\mathfrak{m}} e_{-1}}{2}} \Phi^{\mathfrak{m}}(e_{\infty}, e_{-1},e_{1}) e^{-\frac{\mathbb{L}^{\mathfrak{m}} e_{\infty}}{2}} \Phi^{\mathfrak{m}}(e_{1}, e_{\infty},e_{0}) e^{-\frac{\mathbb{L}^{\mathfrak{m}} e_{1}}{2}}=0. \end{multline} \end{small} By $(\ref{eq:perioddecomp2})$, the left side of $(\ref{eq:octagon22})$, being anti-invariant by $\mathcal{F}_{\infty}$, lies in $ \mathcal{H}^{2}\cdot \mathbb{L}^{\mathfrak{m}} \left\langle \left\langle e_{0}, e_{1}, e_{-1} \right\rangle \right\rangle $.
Consequently, we can divide it by $\mathbb{L}^{\mathfrak{m}}$ and consider its projection $\pi^{\mathcal{L}}$ in the coalgebra $\mathcal{L} \left\langle \left\langle e_{0}, e_{1}, e_{-1} \right\rangle \right\rangle $, which gives firstly: \begin{small} \begin{multline}\label{eq:octagon23}\hspace*{-0.5cm} 0=\Phi^{\mathfrak{l}}(e_{0}, e_{1},e_{-1}) \pi^{\mathcal{L}} \left( (\mathbb{L}^{\mathfrak{m}})^{-1} \left[ e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{0}}{2}} e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{-1}}{2}}e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{\infty}}{2}} e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{1}}{2}} - e^{-\frac{\mathbb{L}^{\mathfrak{m}} e_{0}}{2}} e^{-\frac{\mathbb{L}^{\mathfrak{m}} e_{-1}}{2}}e^{-\frac{\mathbb{L}^{\mathfrak{m}} e_{\infty}}{2}} e^{-\frac{\mathbb{L}^{\mathfrak{m}} e_{1}}{2}} \right] \right) \\ \hspace*{-0.5cm} +\pi^{\mathcal{L}} \left( (\mathbb{L}^{\mathfrak{m}})^{-1} \left[ e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{0}}{2}} \Phi^{\mathfrak{l}}(e_{-1}, e_{0},e_{\infty}) e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{-1}}{2}} e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{\infty}}{2}} e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{1}}{2}}- e^{-\frac{\mathbb{L}^{\mathfrak{m}} e_{0}}{2}} \Phi^{\mathfrak{l}}(e_{-1}, e_{0},e_{\infty}) e^{-\frac{\mathbb{L}^{\mathfrak{m}} e_{-1}}{2}} e^{-\frac{\mathbb{L}^{\mathfrak{m}} e_{\infty}}{2}} e^{-\frac{\mathbb{L}^{\mathfrak{m}} e_{1}}{2}} \right] \right) \\ \hspace*{-0.5cm} + \pi^{\mathcal{L}} \left( (\mathbb{L}^{\mathfrak{m}})^{-1} \left[ e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{0}}{2}} e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{-1}}{2}} \Phi^{\mathfrak{l}}(e_{\infty}, e_{-1},e_{1}) e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{\infty}}{2}} e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{1}}{2}} - e^{-\frac{\mathbb{L}^{\mathfrak{m}} e_{0}}{2}} e^{-\frac{\mathbb{L}^{\mathfrak{m}} e_{-1}}{2}} \Phi^{\mathfrak{l}}(e_{\infty}, e_{-1},e_{1}) e^{-\frac{\mathbb{L}^{\mathfrak{m}} e_{\infty}}{2}} e^{-\frac{\mathbb{L}^{\mathfrak{m}} e_{1}}{2}} \right] \right) \\ \hspace*{-0.5cm} 
+\pi^{\mathcal{L}} \left( (\mathbb{L}^{\mathfrak{m}})^{-1} \left[ e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{0}}{2}} e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{-1}}{2}} e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{\infty}}{2}} \Phi^{\mathfrak{l}}(e_{1}, e_{\infty},e_{0}) e^{\frac{\mathbb{L}^{\mathfrak{m}} e_{1}}{2}} - e^{-\frac{\mathbb{L}^{\mathfrak{m}} e_{0}}{2}} e^{-\frac{\mathbb{L}^{\mathfrak{m}} e_{-1}}{2}} e^{-\frac{\mathbb{L}^{\mathfrak{m}} e_{\infty}}{2}} \Phi^{\mathfrak{l}}(e_{1}, e_{\infty},e_{0}) e^{-\frac{\mathbb{L}^{\mathfrak{m}} e_{1}}{2}} \right] \right) \end{multline} \end{small} The first line is zero (since $e_{0}+e_{1}+ e_{-1}+e_{\infty}=0$) whereas each of the other lines contributes two terms, in order to give $(\ref{eq:octagonlin})$. Indeed, the projection $\pi^{\mathcal{L}}(x)$, when seeing $x$ as a polynomial (with only even powers) in $\mathbb{L}^{\mathfrak{m}}$, only keeps the constant term; hence, for each term, only one of the exponentials $e^{x}$ above contributes by its linear term, i.e. $x$, while the others contribute simply by $1$. For instance, if we examine carefully the second line of $(\ref{eq:octagon23})$, we get: $$\begin{array}{ll} = & e_{0} \Phi^{\mathfrak{l}}(e_{-1}, e_{0},e_{\infty}) + \Phi^{\mathfrak{l}}(e_{-1}, e_{0},e_{\infty}) (e_{-1}+e_{\infty}+e_{1})\\ & - (-e_{0}) \Phi^{\mathfrak{l}}(e_{-1}, e_{0},e_{\infty}) - \Phi^{\mathfrak{l}}(e_{-1}, e_{0},e_{\infty}) ( - e_{-1}- e_{\infty}- e_{1}) \\ = & 2 \left[ e_{0} \Phi^{\mathfrak{l}}(e_{-1}, e_{0},e_{\infty}) - \Phi^{\mathfrak{l}}(e_{-1}, e_{0},e_{\infty}) e_{0}\right] \end{array}.$$ Similarly, the third line of $(\ref{eq:octagon23})$ is equal to $(e_{0}+e_{-1}) \Phi^{\mathfrak{l}}(e_{\infty}, e_{-1},e_{1}) - \Phi^{\mathfrak{l}}(e_{\infty}, e_{-1},e_{1}) (e_{0}+e_{-1})$ and the last line is equal to $ -e_{1} \Phi^{\mathfrak{l}}(e_{1}, e_{\infty},e_{0}) + \Phi^{\mathfrak{l}}(e_{1}, e_{\infty},e_{0}) e_{1}$. Therefore, $(\ref{eq:octagon23})$ is equivalent to $(\ref{eq:octagonlin})$, as claimed.
\end{itemize} \end{proof} When looking at the coefficient of a specific word in $\lbrace e_{0},e_{1}, e_{-1}\rbrace$, this linearized octagon relation $(\ref{eq:octagonlin})$ provides an identity between some $ \zeta^{\star\star,\mathfrak{l}} (\bullet)$ and $\zeta^{\mathfrak{l}} (\bullet)$ in the coalgebra $\mathcal{L}$. The different identities obtained in this way are detailed in $\S 4.6$. In the following proof of Theorem $\ref{hybrid}$, two of those identities are used. \begin{proof}[Proof of Theorem $\ref{hybrid}$] The identity with MMZV$_{\mu_{2}}$ is equivalent to, in terms of motivic iterated integrals:\footnote{Indeed, if $\prod_{i=0}^{p} \epsilon_{i}=1$, it corresponds to the first case, whereas if $\prod_{i=0}^{p} \epsilon_{i}=-1$, we need the second case.} $$I^{\mathfrak{l}} (0; 0^{k}, \star, X; 1)\equiv I^{\mathfrak{l}} (0; X, \star, 0^{k}; 1) \text{ and } I^{\mathfrak{l}} (0; 0^{k}, -\star, X; 1)\equiv I^{\mathfrak{l}} (0;- X, -\star, 0^{k}; 1).$$ Furthermore, by the shuffle regularization formula ($\ref{eq:shufflereg}$), spreading the first $0$ further inside the iterated integrals, the identity $I^{\mathfrak{l}} (0;\boldsymbol{0}^{k}, \star, X; 1)\equiv (-1)^{w+1} I^{\mathfrak{l}} (0;\boldsymbol{0}^{k}, \star, \widetilde{X}; 1)$ boils down to the case $k=0$. \\ The notations are as usual: $\epsilon_{i}=\text{sign} (n_{i})$, $\epsilon_{i}=\eta_{i}\eta_{i+1}$, $\epsilon_{p}= \eta_{p}$, $n_{i}=\epsilon_{i}(a_{i}+1)$.
\begin{itemize} \item[$(i)$] In $(\ref{eq:octagonlin})$, if we look at the coefficient of a specific word in $\lbrace e_{0},e_{1}, e_{-1}\rbrace$ beginning and ending with $e_{-1}$ (as in $\S 4.6$), only two terms contribute, i.e.: \begin{equation}\label{eq:octagonlinpart1} e_{-1}\Phi^{\mathfrak{l}}(e_{\infty}, e_{-1},e_{1})- \Phi^{\mathfrak{l}}(e_{\infty}, e_{-1},e_{1})e_{-1} \end{equation} The coefficient of $e_{0}^{a_{0}}e_{\eta_{1}} e_{0}^{a_{1}} \cdots e_{\eta_{p}} e_{0}^{a_{p}}$ in $\Phi^{\mathfrak{m}}(e_{\infty}, e_{-1},e_{1})$ is $(-1)^{n+p}\zeta^{\star\star,\mathfrak{m}}_{n_{0}-1} \left( n_{1}, \cdots, n_{p-1}, -n_{p}\right)$.\footnote{The expressions of those associators are detailed further in the proof of Lemma $\ref{lemmlor}$.} Hence, the coefficient in $(\ref{eq:octagonlinpart1})$ (as in $(\ref{eq:octagonlin})$) of the word $e_{-1} e_{0}^{a_{0}} e_{\eta_{1}} \cdots e_{\eta_{p}} e_{0}^{a_{p}} e_{-1}$ is: $$ \zeta^{\star\star, \mathfrak{l}}_{\mid n_{0}\mid -1}(n_{1}, \cdots, - n_{p}, 1) - \zeta^{\star\star, \mathfrak{l}}(n_{0}, n_{1}, \cdots, n_{p-1}, -n_{p})=0, \quad \text{ with } \prod_{i=0}^{p} \epsilon_{i}=1.$$ In terms of iterated integrals, reversing the first one with Antipode $\shuffle$, it is: $$ I^{\mathfrak{l}} \left(0;-X , \star ;1 \right)\equiv I^{\mathfrak{l}} \left(0; \star, -X ;1 \right), \text{ with } X\mathrel{\mathop:}=0^{n_{0}-1} \eta_{1} 0^{n_{1}-1} \cdots \eta_{p} 0^{n_{p}-1}.$$ Therefore, since $X$ can be any word in $\lbrace 0, \pm \star \rbrace$, by linearity this is also true for any word $X$ in $\lbrace 0, \pm 1 \rbrace$: $ I^{\mathfrak{l}} \left(0;X, \star ;1 \right)\equiv I^{\mathfrak{l}} \left(0; \star, X ;1 \right)$. \item[$(ii)$] Now, let us look at the coefficient of a specific word in $\lbrace e_{0},e_{1}, e_{-1}\rbrace$ beginning with $e_{1}$ and ending with $e_{-1}$.
Only two terms on the left side of $(\ref{eq:octagonlin})$ contribute, i.e.: \begin{equation}\label{eq:octagonlinpart2} -e_{1}\Phi^{\mathfrak{l}}(e_{1}, e_{\infty},e_{0})- \Phi^{\mathfrak{l}}(e_{\infty}, e_{-1},e_{1})e_{-1} \end{equation} The coefficient in this expression of the word $e_{1} e_{0}^{a_{0}} e_{\eta_{1}} \cdots e_{\eta_{p}} e_{0}^{a_{p}} e_{-1}$ is: $$ \zeta^{\star\star, \mathfrak{l}}_{\mid n_{0}\mid -1}(n_{1}, \cdots, n_{p}, -1) - \zeta^{\star\star, \mathfrak{l}}(n_{0}, n_{1}, \cdots, n_{p})=0, \quad \text{ with } \prod_{i=0}^{p} \epsilon_{i}=-1.$$ In terms of iterated integrals, reversing the first one with Antipode $\shuffle$, it is: $$ I^{\mathfrak{l}} \left(0; - X , -\star ;1 \right)\equiv I^{\mathfrak{l}} \left(0; -\star, X ;1 \right).$$ Therefore, since $X$ can be any word in $\lbrace 0, \pm \star \rbrace$, by linearity this is also true for any word $X$ in $\lbrace 0, \pm 1 \rbrace$. \end{itemize} \end{proof} \paragraph{For Euler $\boldsymbol{\star\star}$ sums. } \begin{coro} In the coalgebra $\mathcal{L}^{2}$, with $n_{i}\in\mathbb{Z}^{\ast}$, $n\geq 1$: \begin{equation}\label{eq:antipodestaresss} \zeta^{\star\star,\mathfrak{l}}_{n-1}(n_{1}, \ldots, n_{p})\equiv (-1)^{w+1}\zeta^{\star\star,\mathfrak{l}}_{n-1}(n_{p}, \ldots, n_{1}). \end{equation} Motivic Euler $\star\star$ sums of depth $p$ in $\mathcal{L}$ carry a dihedral group structure of order $p+1$: $$\textsc{(Shift) } \quad \zeta^{\star\star,\mathfrak{l}}_{\mid n\mid -1}(n_{1}, \ldots, n_{p})\equiv \zeta^{\star\star,\mathfrak{l}}_{\mid n_{1}\mid -1}(n_{2}, \ldots, n_{p},n) \quad \text{ where } sgn(n)\mathrel{\mathop:}= \prod_{i} sgn(n_{i}).$$ \end{coro} \noindent Indeed, these two identities lead to a dihedral group structure of order $p+1$: $(\ref{eq:antipodestaresss})$, respectively $\textsc{Shift}$, correspond to the action of a reflection resp. of a cycle of order $p$ on motivic Euler $\star\star$ sums of depth $p$ in $\mathcal{L}$.
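\noindent For instance, in depth $p=2$ and with all arguments positive (an illustration; positivity makes the sign condition in \textsc{Shift} automatic), applying \textsc{Shift} twice and then $(\ref{eq:antipodestaresss})$ gives, for $n, n_{1}, n_{2}\geq 1$: $$\zeta^{\star\star,\mathfrak{l}}_{n-1}(n_{1}, n_{2})\equiv \zeta^{\star\star,\mathfrak{l}}_{n_{1}-1}(n_{2}, n)\equiv \zeta^{\star\star,\mathfrak{l}}_{n_{2}-1}(n, n_{1})\equiv (-1)^{w+1}\zeta^{\star\star,\mathfrak{l}}_{n-1}(n_{2}, n_{1}), \quad w=n+n_{1}+n_{2}-1,$$ i.e. the orbit of the triple $(n, n_{1}, n_{2})$ under the cycle and the reflection.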
\begin{proof} Writing $\zeta^{\star\star,\mathfrak{m}}$ as a sum of Euler sums: \begin{small} $$\zeta^{\star\star,\mathfrak{m}}_{n-1}(n_{1}, \ldots, n_{p})=\sum_{i=1}^{p} \zeta^{\mathfrak{m}}_{n-1+\mid n_{1}\mid+\cdots+ \mid n_{i-1}\mid}(n_{i} \circ \cdots \circ n_{p})=\sum_{r \atop A_{i}} \left( \zeta^{\mathfrak{m}}_{n-1}(A_{1}, \ldots, A_{r})+ \zeta^{\mathfrak{m}}_{n-1+\mid A_{1}\mid}(A_{2}, \ldots, A_{r})\right),$$ \end{small} where the last sum is over $(A_{i})_{i}$ such that each $A_{i}$ is a non-empty \say{sum} of consecutive $n_{j}$'s, preserving the order; the absolute values are summed whereas the signs of the $n_{i}$ involved are multiplied; moreover, $\mid A_{1}\mid \geq \mid n_{1}\mid $ resp. $\mid A_{r} \mid \geq \mid n_{p}\mid $.\\ Using Theorem $\ref{hybrid}$ in the coalgebra $\mathcal{L}$, the previous equality turns into: $$(-1)^{w+1}\sum_{r \atop A_{i}} \left( \zeta^{\mathfrak{l}}_{n-1}(A_{r}, \ldots, A_{1})+ \zeta^{\mathfrak{l}}_{n-1+\mid A_{r}\mid}(A_{r-1}, \ldots, A_{1})\right) \equiv (-1)^{w+1}\zeta^{\star\star,\mathfrak{l}}_{n-1}(n_{p}, \ldots, n_{1}). $$ The identity $\textsc{Shift}$ is obtained as the composition of Antipode $\shuffle$ $(\ref{eq:antipodeshuffle2})$ and the first identity of the corollary. \end{proof} \paragraph{For Euler $\boldsymbol{\sharp\sharp}$ sums.} \begin{coro} In the coalgebra $\mathcal{L}$, for $n\in\mathbb{N}$, $n_{i}\in\mathbb{Z}^{\ast}$, $\epsilon_{i}\mathrel{\mathop:}=\text{sign}(n_{i})$:\footnote{Here, $\mlq - \mrq$ denotes the operation where absolute values are subtracted whereas signs are multiplied.}\\ \begin{tabular}{lll} \textsc{Reverse} & $\zeta^{\sharp\sharp,\mathfrak{l}}_{n}(n_{1}, \ldots, n_{p})+ (-1)^{w}\zeta^{\sharp\sharp,\mathfrak{l}}_{n}(n_{p}, \ldots, n_{1}) \equiv \left\{ \begin{array}{l} 0 \\ \zeta^{\sharp,\mathfrak{l}}_{n}(n_{1}, \ldots, n_{p}) \end{array}\right.$ & $ \begin{array}{l} \textrm{ if } w+p \textrm{ even }
\\ \textrm{ if } w+p \textrm{ odd } \end{array}$\\ &&\\ \textsc{Shift} & $\zeta^{\sharp\sharp,\mathfrak{l}}_{ n -1}(n_{1}, \ldots, n_{p})\equiv \zeta^{\sharp\sharp,\mathfrak{l}}_{\mid n_{1}\mid-1}(n_{2}, \ldots, n_{p},\epsilon_{1}\cdots \epsilon_{p} \cdot n)$ & for $w+p$ even.\\ &&\\ \textsc{Cut} & $\zeta^{\sharp\sharp,\mathfrak{l}}_{n}(n_{1},\cdots, n_{p}) \equiv \zeta^{\sharp\sharp,\mathfrak{l}}_{n+\mid n_{p}\mid}(n_{1},\cdots, n_{p-1}),$ & for $w+p$ odd.\\ &&\\ \textsc{Minus} & $\zeta^{\sharp\sharp,\mathfrak{l}}_{n-i}(n_{1},\cdots, n_{p}) \equiv \zeta^{\sharp\sharp,\mathfrak{l}}_{n}\left( n_{1},\cdots, n_{p-1}, n_{p} \mlq - \mrq i\right)$, & for $\begin{array}{l} w+p \text{ odd }\\ i \leq \min(n,\mid n_{p}\mid) \end{array}$.\\ &&\\ \textsc{Sign} & $\zeta^{\sharp\sharp,\mathfrak{l}}_{n}(n_{1},\cdots, n_{p-1}, n_{p}) \equiv \zeta^{\sharp\sharp,\mathfrak{l}}_{n}(n_{1},\cdots, n_{p-1},-n_{p})$,& for $w+p$ odd. \end{tabular} $$\quad\Rightarrow \forall W \in \lbrace 0, \pm \sharp\rbrace^{\times} \text{ with an odd number of } 0, \quad I^{\mathfrak{l}}(-1; W ; 1) \equiv 0 .$$ \end{coro} \noindent \textsc{Remark}: In the coaction of Euler sums, terms with $\overline{1}$ can appear\footnote{More precisely, using the notations of Lemma $\ref{lemmt}$, a $\overline{1}$ can appear in terms of the type $T_{\epsilon, -\epsilon}$ for a cut between $\epsilon$ and $-\epsilon$.}, which are clearly not motivic multiple zeta values. The left side corresponding to such a term in the coaction part $D_{2r+1}(\cdot)$ is $I^{ \mathfrak{l}}(1;X;-1)$, with $X$ of odd weight in $\lbrace 0, \pm \sharp \rbrace$. It is worth underlining that, for the $\sharp$ family with $\lbrace \overline{even}, odd\rbrace$, these terms disappear by $\textsc{Sign}$, since, by the parity constraint, $X$ will always be of even depth for such a cut. This $\sharp$ family is then more suitable for an unramified criterion, cf. $\S 4.3$. \begin{proof} These are consequences of the hybrid relation in Theorem $\ref{hybrid}$.
\begin{itemize} \item[$\cdot$] \textsc{Reverse:} Writing $\zeta^{\sharp\sharp,\mathfrak{m}}$ as a sum of Euler sums: \begin{flushleft} $\hspace*{-0.5cm}\zeta^{\sharp\sharp,\mathfrak{m}}_{k}(n_{1}, \ldots, n_{p}) +(-1)^{w} \zeta^{\sharp\sharp,\mathfrak{m}}_{k}(n_{p}, \ldots, n_{1})$ \end{flushleft} \begin{small} $$\hspace*{-0.5cm}\begin{array}{l} =\sum_{i=1}^{p} 2^{p-i+1-n_{+}} \zeta^{\mathfrak{m}}_{k+n_{1}+\cdots+ n_{i-1}}(n_{i} \circ \cdots \circ n_{p}) + (-1)^{w}2^{i-n_{+}} \zeta^{\mathfrak{m}}_{k+n_{p}+\cdots+ n_{i+1}}(n_{i} \circ \cdots \circ n_{1})\\ \\ =\sum_{r \atop A_{i}}2^{r-1} \left(\right. 2 \zeta^{\mathfrak{m}}_{k}(A_{1}, \ldots, A_{r}) +2 (-1)^{w} \zeta^{\mathfrak{m}}_{k}(A_{r}, \ldots, A_{1}) + \zeta^{\mathfrak{m}}_{k+A_{1}}(A_{2}, \ldots, A_{r}) +(-1)^{w} \zeta^{\mathfrak{m}}_{k+A_{r}}(A_{r-1}, \ldots, A_{1}) \left. \right) \end{array}$$ \end{small} where the sum is over $(A_{i})$ such that each $A_{i}$ is a non-empty \say{sum} of consecutive $n_{j}$'s, preserving the order; i.e. absolute values of the $n_{i}$ are summed whereas signs are multiplied; moreover, $\mid A_{1}\mid$ resp. $\mid A_{r}\mid$ are at least $\mid n_{1}\mid$ resp. $\mid n_{p}\mid$.\\ By Theorem $\ref{hybrid}$, the previous equality turns into, in $\mathcal{L}$: $$\sum_{r \atop A_{i}}2^{r-1} \left( \zeta^{\mathfrak{l}}_{k}(A_{1}, \ldots, A_{r})+ (-1)^{w} \zeta^{\mathfrak{l}}_{k}(A_{r}, \ldots, A_{1})\right)$$ $$ \equiv 2^{-1} \left( \zeta^{\sharp,\mathfrak{l}}_{k}(n_{1}, \ldots, n_{p})+ (-1)^{w} \zeta^{\sharp,\mathfrak{l}}_{k}(n_{p}, \ldots, n_{1})\right)\equiv 2^{-1} \zeta^{\sharp,\mathfrak{l}}_{k}(n_{1}, \ldots, n_{p}) \left( 1+ (-1)^{w+p+1} \right).$$ By the Antipode $\star$ relation applied to $\zeta^{\sharp,\mathfrak{l}}$, this implies the stated result, splitting into the cases $w+p$ even and $w+p$ odd. \item[$\cdot$] \textsc{Shift:} Obtained by combining \textsc{Reverse} and \textsc{Antipode} $\shuffle$, for $w+p$ even.
\item[$\cdot$] \textsc{Cut:} \textsc{Reverse} in the case $w+p$ odd implies: $$\zeta^{\sharp\sharp,\mathfrak{l}}_{n+\mid n_{1}\mid }(n_{2}, \ldots, n_{p})+ (-1)^{w}\zeta^{\sharp\sharp,\mathfrak{l}}_{n}(n_{p}, \ldots, n_{1}) \equiv 0,$$ which, reversing the variables, gives the \textsc{Cut} rule. \item[$\cdot$] \textsc{Minus} follows from \textsc{Cut} since, by \textsc{Cut}, both sides are equal to $\zeta^{\sharp\sharp,\mathfrak{l}}_{n-i+ \mid n_{p}\mid}(n_{1},\cdots, n_{p-1})$. \item[$\cdot$] In \textsc{Cut}, the sign of $n_{p}$ does not matter; hence, using \textsc{Cut} in both directions, with different signs, leads to \textsc{Sign}: $$\zeta^{\sharp\sharp,\mathfrak{l}}_{n} (n_{1},\ldots,n_{p})\equiv \zeta^{\sharp\sharp,\mathfrak{l}}_{n+ \mid n_p\mid } (n_{1}, \ldots,n_{p-1})\equiv \zeta^{\sharp\sharp,\mathfrak{l}}_{n} ( n_{1},\ldots,-n_{p}).$$ Note that, translated in terms of iterated integrals, this leads to, for $X$ any sequence of $0, \pm \sharp$, with $w+p$ odd: $$ I^{\mathfrak{l}}(0; X ; 1) \equiv I^{\mathfrak{l}}(0; -X; 1), $$ where $-X$ is obtained from $X$ by exchanging $\sharp$ and $-\sharp$. Moreover, $I^{\mathfrak{l}}(0; -X; 1)\equiv I^{\mathfrak{l}}(0; X; -1) \equiv - I^{\mathfrak{l}}(-1; X; 0)$. Hence, using the composition rule of iterated integrals modulo products, we obtain: $$I^{\mathfrak{l}}(0; X ; 1) + I^{\mathfrak{l}}(-1; X; 0)\equiv I^{\mathfrak{l}}(-1; X ; 1) \equiv 0.$$ \end{itemize} \end{proof} \section{Euler $\sharp$ sums} Let us consider more precisely the following family, appearing in Conjecture $\ref{lzg}$, with only positive odd and negative even integers as arguments: $$\zeta^{\sharp, \mathfrak{m}} \left( \lbrace \overline{\text{even }} , \text{odd } \rbrace^{\times} \right) .$$ In the iterated integral, this condition means that only the following sequences occur: \begin{center} $\epsilon 0^{2a} \epsilon$, $\quad$ or $\quad\epsilon 0^{2a+1} -\epsilon$, $\quad$ with $\quad\epsilon\in \lbrace \pm\sharp \rbrace$.
\end{center} \begin{theo}\label{ESsharphonorary} The motivic Euler sums $\zeta^{\sharp, \mathfrak{m}} (\lbrace \overline{\text{even }}, \text{odd } \rbrace^{\times} )$ are motivic geometric$^{+}$ periods of $\mathcal{MT}(\mathbb{Z})$.\\ Hence, they are $\mathbb{Q}$-linear combinations of motivic multiple zeta values. \end{theo} The proof, in $\S 4.3.2$, relies mainly upon the stability of this family under the coaction.\\ This motivic family is even a generating family of motivic MZV:\nomenclature{$\mathcal{B}^{\sharp}$}{is a family of (unramified) motivic Euler $\sharp$ sums, basis of MMZV} \begin{theo}\label{ESsharpbasis} The following family is a basis of $\mathcal{H}^{1}$: $$\mathcal{B}^{\sharp}\mathrel{\mathop:}= \left\lbrace \zeta^{ \sharp,\mathfrak{m}} (2a_{0}+1,2a_{1}+3,\cdots, 2 a_{p-1}+3, \overline{2a_{p}+2})\text{ , } a_{i}\geq 0\right\rbrace .$$ \end{theo} First, it is worth noticing that this subfamily is also stable under the coaction.\\ \\ \textsc{Remark}: It is conjecturally the same family as the Hoffman star family $\zeta^{\star} (\boldsymbol{2}^{a_{0}},3,\cdots, 3, \boldsymbol{2}^{a_{p}})$, by Conjecture $\ref{lzg}$.\\ \\ For that purpose, we use the increasing \textit{depth filtration} $\mathcal{F}^{\mathfrak{D}}$ on $\mathcal{H}^{2}$ such that (cf. $\S 2.4.3$): \begin{center} $\mathcal{F}_{p}^{\mathfrak{D}} \mathcal{H}^{2}$ is generated by Euler sums of depth at most $p$. \end{center} Note that it is not a grading, but we define the associated graded as the quotient $gr_{p}^{\mathfrak{D}}\mathrel{\mathop:}=\mathcal{F}_{p}^{\mathfrak{D}} \diagup \mathcal{F}_{p-1}^{\mathfrak{D}}$. The vector space $\mathcal{F}_{p}^{\mathfrak{D}}\mathcal{H}$ is stable under the action of $\mathcal{G}$. The linear independence of this $\sharp$ family is proved below thanks to a recursion on the depth and on the weight, using the injectivity of a map $\partial$ which comes from the depth- and weight-graded part of the coaction $\Delta$.
\subsection{Depth graded Coaction} In Chapter $2$, we defined the depth-graded derivations $D_{r,p}$ (cf. $\ref{Drp}$), and $D^{-1}_{r,p}$ ($\ref{eq:derivnp}$) after the projection on the right side, using depth $1$ results: $$gr^{\mathcal{D}}_{1} \mathcal{L}_{2r+1}=\mathbb{Q}\zeta^{\mathfrak{l}}(2r+1).$$ Let us look at the following maps, whose injectivity is fundamental to Theorem $\ref{ESsharpbasis}$: $$ D^{-1}_{2r+1,p} : gr^{\mathfrak{D}}_{p}\mathcal{H}_{n}\rightarrow gr^{\mathfrak{D}}_{p-1}\mathcal{H}_{n-2r-1} .$$ $$\partial_{<n,p} \mathrel{\mathop:}=\oplus_{2r+1<n} D^{-1}_{2r+1,p} .$$ Their explicit expression is: \begin{lemm}\footnotemark[2]\footnotetext[2]{To be accurate, the term $i=0$ in the first sum has to be understood as: $$ \frac{2^{2r+1}}{1-2^{2r}}\binom{2r}{2a_{1}+2} \zeta^{\sharp,\mathfrak{m}} (2\alpha+3, 2 a_{2}+3,\cdots, \overline{2a_{p}+2}) . $$ Meanwhile the terms $i=1$, resp. $i=p$ in the second sum have to be understood as: $$ \frac{2^{2r+1}}{1-2^{2r}}\binom{2r}{2a_{0}+2} \zeta^{\sharp,\mathfrak{m}} (2\alpha+3, 2 a_{2}+3,\cdots, \overline{2a_{p}+2}) \quad \text{ resp.
} \quad \frac{2^{2r+1}}{1-2^{2r}}\binom{2r}{2a_{p-1}+2} \zeta^{\sharp,\mathfrak{m}} (\cdots, 2 a_{p-2}+3, \overline{2\alpha+2}).$$} \begin{multline} \label{eq:dgrderiv} D^{-1}_{2r+1,p} \left( \zeta^{\sharp,\mathfrak{m}} (2a_{0}+1,2a_{1}+3,\cdots, 2 a_{p-1}+3, \overline{2a_{p}+2}) \right) = \\ \delta_{r=a_{0}} \frac{2^{2r+1}}{1-2^{2r}}\binom{2r}{2r+2} \zeta^{\sharp,\mathfrak{m}} (2 a_{1}+3,\cdots, \overline{2a_{p}+2})\\ + \sum_{0 \leq i \leq p-2, \quad \alpha \leq a_{i}\atop r=a_{i+1}+a_{i}+1-\alpha} \frac{2^{2r+1}}{1-2^{2r}}\binom{2r}{2a_{i+1}+2} \zeta^{\sharp,\mathfrak{m}} (\cdots, 2 a_{i-1}+3,2\alpha+3, 2 a_{i+2}+3,\cdots, \overline{2a_{p}+2})\\ + \sum_{1 \leq i \leq p-1, \quad \alpha \leq a_{i} \atop r=a_{i-1}+a_{i}+1-\alpha} \frac{2^{2r+1}}{1-2^{2r}}\binom{2r}{2a_{i-1}+2} \zeta^{\sharp,\mathfrak{m}} (\cdots, 2 a_{i-2}+3,2\alpha+3, 2 a_{i+1}+3,\cdots, \overline{2a_{p}+2})\\ + \textsc{(Deconcatenation)} \sum_{\alpha \leq a_{p} \atop r=a_{p-1}+a_{p}+1-\alpha} 2 \binom{2r}{2a_{p}+1}\zeta^{\sharp,\mathfrak{m}} (\cdots, 2 a_{p-1}+3,\overline{2\alpha+2}). 
\end{multline} \end{lemm} \begin{proof} Looking at the expression for $D_{2r+1}$ in Annexe $A.1$, we obtain $D_{2r+1,p}$ by keeping only the cuts of depth one (those removing exactly one non-zero element): \begin{multline} \nonumber D_{2r+1,p} \zeta^{\sharp,\mathfrak{m}} (2a_{0}+1,2a_{1}+3,\cdots, 2 a_{p-1}+3, \overline{2a_{p}+2})=\\ \sum_{i, \alpha \leq a_{i}\atop r=a_{i+1}+a_{i}+1-\alpha} 2 \zeta^{\mathfrak{l}} _{2a_{i}-2\alpha}(2a_{i+1}+3) \otimes \zeta^{\sharp,\mathfrak{m}} (\cdots, 2 a_{i-1}+3,2\alpha+3, 2 a_{i+2}+3,\cdots, \overline{2a_{p}+2})\\ +\sum_{i, \alpha \leq a_{i} \atop r=a_{i-1}+a_{i}+1-\alpha} 2 \zeta^{\mathfrak{l}} _{2a_{i}-2\alpha}(2a_{i-1}+3) \otimes \zeta^{\sharp,\mathfrak{m}} (\cdots, 2 a_{i-2}+3,2\alpha+3, 2 a_{i+1}+3,\cdots, \overline{2a_{p}+2})\\ +\sum_{\alpha \leq a_{p} \atop r=a_{p-1}+a_{p}+1-\alpha} 2 \zeta^{\mathfrak{l}} _{2a_{p-1}-2\alpha+1}(\overline{2a_{p}+2}) \otimes \zeta^{\sharp,\mathfrak{m}} (\cdots, 2 a_{p-1}+3,\overline{2\alpha+2}). \end{multline} To lighten the result, some cases at the borders ($i=0$ or $i=p$) have been included in the sums, being fundamentally similar (up to some index shifts); these are clarified in the footnote above\footnotemark[2].\\ In particular, with the notations of Lemma $\ref{lemmt}$, the $T_{0,0}$ terms can be neglected as they decrease the depth by at least $2$; the same holds for the $T_{0,\epsilon}$ and $T_{\epsilon,0}$ terms for cuts between $\epsilon$ and $\pm \epsilon$.
To obtain the lemma, it remains to check the coefficient of $\zeta^{\mathfrak{l}}(\overline{2r+1})$ for each term on the left side, using the known identities: $$ \zeta^{\mathfrak{l}}(2r+1)= \frac{-2^{2r}}{2^{2r}-1} \zeta^{\mathfrak{l}}(\overline{2r+1})\quad \text{ and } \quad \zeta^{\mathfrak{l}}_{2r+1-a}(a)=(-1)^{a+1}\binom{2r}{a-1} \zeta^{\mathfrak{l}}(2r+1).$$ \end{proof} \subsection{Proofs of Theorems $4.3.1$ and $4.3.2$} \begin{proof}[\textbf{Proof of Theorem $4.3.1$}] By Corollary $5.1.2$, we can prove it in two steps: \begin{itemize} \item[$\cdot$] First, checking that $D_{1}(\cdot)=0$ for this family, which is rather obvious by Lemma $\ref{condd1}$ since there is no sequence of the type $\lbrace 0, \epsilon, -\epsilon \rbrace$ or $\lbrace \epsilon, -\epsilon, 0 \rbrace$ in the iterated integral. \item[$\cdot$] Secondly, we use a recursion on the weight to prove that the $D_{2r+1}(\cdot)$, for $r> 0$, are unramified; by this recursion, it follows from the following statement: \begin{center} The family $\zeta^{\sharp, \mathfrak{m}} \left( \lbrace \overline{\text{even }}, + \text{odd } \rbrace^{\times} \right) $ is stable under $D_{2r+1}$. \end{center} This is proved in Lemma $A.1.3$, using the relations of $\S 4.2$ in order to simplify the \textit{unstable cuts}, i.e. the cuts where a sequence of type $\epsilon, 0^{2a+1}, \epsilon$ or $\epsilon, 0^{2a}, -\epsilon$ appears; indeed, these cuts would give rise to an $\text{even}$ or to an $\overline{\text{odd}}$ argument in the $\sharp$ Euler sum.
\end{itemize} One fundamental observation about this family, used in Lemma $A.1.3$, is the following: for a subsequence of odd length from the iterated integral, because of these patterns of $\epsilon, \boldsymbol{0}^{2a}, \epsilon$, or $\epsilon, \boldsymbol{0}^{2a+1}, -\epsilon$, we can relate the depth $p$, the weight $w$ and the number $s$ of sign changes among the $\pm\sharp$: $$w\equiv p-s \pmod{2}.$$ This means that if we have a cut $\epsilon_{0},\cdots, \epsilon_{p+1}$ of odd weight, then: \begin{center} \textsc{Either:} Depth $p$ is odd, $s$ even, $\epsilon_{0}=\epsilon_{p+1}$, \textsc{Or:} Depth $p$ is even, $s$ odd, $\epsilon_{0}=-\epsilon_{p+1}$. \end{center} \end{proof} \begin{proof}[\textbf{Proof of Theorem $4.3.2$}] By a cardinality argument, it is sufficient to prove the linear independence of the family, which is based on the injectivity of $\partial_{<n,p}$. Let us define: \footnote{A sub-$\mathbb{Q}$-vector space of $\mathcal{H}^{1}$ by the previous theorem.}\nomenclature{$\mathcal{H}^{odd\sharp}$}{$\mathbb{Q}$-vector space generated by $\zeta^{ \sharp,\mathfrak{m}} (2a_{0}+1,2a_{1}+3,\cdots, 2 a_{p-1}+3, \overline{2a_{p}+2})$} \begin{center} $\mathcal{H}^{odd\sharp}$: $\mathbb{Q}$-vector space generated by $\zeta^{ \sharp,\mathfrak{m}} (2a_{0}+1,2a_{1}+3,\cdots, 2 a_{p-1}+3, \overline{2a_{p}+2})$. \end{center} The first thing to note is that $\mathcal{H}^{odd\sharp}$ is stable under these derivations, by the expression obtained in Lemma $A.1.4$: $$D_{2r+1} (\mathcal{H}_{n}^{odd\sharp}) \subset \mathcal{L}_{2r+1} \otimes \mathcal{H}_{n-2r-1}^{odd\sharp}.$$ Now, let us consider the restriction of $\partial_{<n,p}$ to $\mathcal{H}^{odd\sharp}$ and prove: $$\partial_{<n,p}: gr^{\mathfrak{D}}_{p} \mathcal{H}_{n}^{odd\sharp} \rightarrow \oplus_{2r+1<n} gr^{\mathfrak{D}}_{p-1}\mathcal{H}_{n-2r-1}^{odd\sharp} \text{ is bijective. }$$ The formula $\eqref{eq:dgrderiv}$ gives the explicit expression of this map.
Let us prove more precisely: \begin{center} $M^{\mathfrak{D}}_{n,p}$, the matrix of $\partial_{<n,p}$ on $\left\lbrace \zeta^{ \sharp ,\mathfrak{m}} (2a_{0}+1,2a_{1}+3,\cdots, 2 a_{p-1}+3, \overline{2a_{p}+2})\right\rbrace $ in terms of $\left\lbrace \zeta^{ \sharp ,\mathfrak{m}} (2b_{0}+1,2b_{1}+3,\cdots, 2 b_{p-2}+3, \overline{2b_{p-1}+2})\right\rbrace $, is invertible. \end{center} \texttt{Nota Bene}: The matrix $M^{\mathfrak{D}}_{n,p}$ is well (uniquely) defined provided that the $\zeta^{ \sharp ,\mathfrak{m}}$ of the second family are linearly independent. So first, we have to consider the associated formal matrix $\mathbb{M}^{\mathfrak{D}}_{n,p}$, defined explicitly (combinatorially) by the formula given for the derivations, and prove that $\mathbb{M}^{\mathfrak{D}}_{n,p}$ is invertible. Afterwards, we can state that $M^{\mathfrak{D}}_{n,p}$ is well defined and invertible too, since it is equal to $\mathbb{M}^{\mathfrak{D}}_{n,p}$. \begin{proof} The invertibility comes from the fact that the (strictly) smallest terms $2$-adically in $\eqref{eq:dgrderiv}$ are the deconcatenation ones, and deconcatenation is an injective operation. More precisely, let $\widetilde{M}^{\mathfrak{D}}_{n,p}$ be the matrix $\mathbb{M}^{\mathfrak{D}}_{n,p}$ where each row corresponding to $D_{2r+1}$ has been multiplied by $2^{-2r}$. Then, order the elements on both sides by lexicographical order on $(a_{p}, \ldots, a_{0})$, resp. $(r,b_{p-1}, \ldots, b_{0})$, such that the diagonal corresponds to $r=a_{p}+1$ and $b_{i}=a_{i}$ for $i<p$. The $2$-adic valuation of all the terms in $(\ref{eq:dgrderiv})$ (once divided by $2^{2r}$) is at least $1$, except for the deconcatenation terms since: $$v_{2}\left( 2^{-2r+1} \binom{2r}{2a_{p}+1} \right) \leq 0 \Longleftrightarrow v_{2}\left( \binom{2r}{2a_{p}+1} \right) \leq 2r-1.$$ Then, modulo $2$, only the deconcatenation terms remain, so the matrix $\widetilde{M}^{\mathfrak{D}}_{n,p}$ is triangular with $1$ on the diagonal.
This implies that $\det (\widetilde{M}^{\mathfrak{D}}_{n,p})\equiv 1 \pmod{2}$, and in particular is non-zero: the matrix $\widetilde{M}^{\mathfrak{D}}_{n,p}$ is invertible, and so is $\mathbb{M}^{\mathfrak{D}}_{n,p}$. \end{proof} This allows us to complete the proof since it implies: \begin{center} The elements of $\mathcal{B}^{\sharp}$ are linearly independent. \end{center} \begin{proof} First, let us prove the linear independence of the elements of this family of the same depth and weight, by recursion on $p$. For depth $0$, this is obvious since $\zeta^{\mathfrak{m}}(\overline{2n})$ is a rational multiple of $\pi^{2n}$.\\ Assuming by recursion on the depth that the elements of weight $n$ and depth $p-1$ are linearly independent, since $M^{\mathfrak{D}}_{n,p}$ is invertible, this means both that the $\zeta^{ \sharp,\mathfrak{m}} (2a_{0}+1,2a_{1}+3,\cdots, 2 a_{p-1}+3, \overline{2a_{p}+2})$ of weight $n$ are linearly independent and that $\partial_{<n,p}$ is bijective, as announced above.\\ The last step is just to realize that the bijectivity of $\partial_{<n,l}$ also implies that elements of different depths are linearly independent. The proof can be done by contradiction: applying $\partial_{<n,p}$ to a linear combination in which $p$ is the maximal depth appearing, we arrive at an equality between elements of the same depth. \end{proof} \end{proof} \section{Hoffman $\star$} \begin{theo}\label{Hoffstar} If the analytic conjecture ($\ref{conjcoeff}$) holds, then the motivic \textit{Hoffman} $\star$ family $\lbrace \zeta^{\star,\mathfrak{m}} (\lbrace 2,3 \rbrace^{\times})\rbrace$ is a basis of $\mathcal{H}^{1}$, the space of MMZV.
\end{theo} For that purpose, we define an increasing filtration $\mathcal{F}^{L}_{\bullet}$ on $\mathcal{H}^{2,3}$, called \textbf{level}, such that: \begin{equation}\label{eq:levelf} \mathcal{F}^{L}_{l}\mathcal{H}^{2,3} \text{ is spanned by } \zeta^{\star,\mathfrak{m}} (\boldsymbol{2}^{a_{0}},3,\cdots,3, \boldsymbol{2}^{a_{p}}) \text{, with at most } l \text{ occurrences of } 3. \end{equation} It corresponds to the motivic depth for this family, as we will see through the proof below and the coaction computations.\\ \paragraph{Sketch. } The vector space $\mathcal{F}^{L}_{l}\mathcal{H}^{2,3}$ is stable under the action of $\mathcal{G}$ ($\ref{eq:levelfiltstrable}$). The linear independence of the Hoffman $\star$ family is proved below ($ § 4.4.2$) thanks to a recursion on the level and on the weight, using the injectivity of a map $\partial^{L}$ which comes from the level- and weight-graded part of the coaction $\Delta$ (cf. $4.4.2$). The injectivity is proved via $2$-adic properties of some coefficients conjectured in $\ref{conjcoeff}$.\\ Indeed, when computing the level-graded coaction (cf. Lemma $4.4.2$) on the Hoffman $\star$ elements and looking at the left side, some elements appear, such as $\zeta^{\star\star,\mathfrak{m}}(\boldsymbol{2}^{a},3,\boldsymbol{2}^{b})$.
These are not always of depth $1$ as we could expect,\footnote{As in the Hoffman non-$\star$ case treated by Francis Brown, using a result of Don Zagier for level $1$.} but are at least abelian: products of simple motivic zeta values, as proved in Lemma $\ref{lemmcoeff}$.\\ To prove the linear independence of the Hoffman $\star$ elements, we will then need to know, for each of these terms, the coefficient of $\zeta(\text{weight})$ appearing in Lemma $\ref{lemmcoeff}$ (or at least its $2$-adic valuation); these coefficients are conjectured in $\ref{conjcoeff}$, which is the only missing part of the proof, and can be settled at the analytic level.\\ \subsection{Level graded coaction} Let us use the following form for an MMZV$^{\star}$, gathering the $2$'s: $$\zeta^{\star, \mathfrak{m}} (\boldsymbol{2}^{a_{0}},c_{1},\cdots,c_{p}, \boldsymbol{2}^{a_{p}}), \quad c_{i}\in\mathbb{N}^{\ast}, c_{i}\neq 2.$$ This form is convenient for computing the Galois action (and coaction), since, by the antipode relations ($\S 4.2$), many of the cuts from a $2$ to a $2$ get simplified (cf. Annexe $\S A.1$).\\ For the Hoffman family, with only $2$ and $3$, the expression obtained is:\footnote{Cf.
Lemma $A.1.2$; where $\delta_{2r+1}$ means here that the left side has to be of weight $2r+1$.}\\ \begin{flushleft} \hspace*{-0.7cm}$D_{2r+1} \zeta^{\star, \mathfrak{m}} (\boldsymbol{2}^{a_{0}},3,\cdots,3, \boldsymbol{2}^{a_{p}})$ \end{flushleft} \begin{multline} \label{eq:dr3} \hspace*{-1.3cm}= \delta_{2r+1}\sum_{i<j} \left[ \begin{array}{lll} + \quad \zeta^{\star\star, \mathfrak{l}}_{1} (\boldsymbol{2}^{a_{i+1}},3,\cdots,3, \boldsymbol{2}^{\leq a_{j}}) & \otimes & \zeta^{\star, \mathfrak{m}} (\cdots,3, \boldsymbol{2}^{1+a_{i}+ \leq a_{j}},3, \cdots)\\ - \quad \zeta^{\star\star, \mathfrak{l}}_{1} (\boldsymbol{2}^{\leq a_{i}},3,\cdots,3, \boldsymbol{2}^{ a_{j-1}}) & \otimes & \zeta^{\star, \mathfrak{m}} (\cdots,3, \boldsymbol{2}^{1+a_{j}+ \leq a_{i}},3, \cdots)\\ + \left( \zeta^{\star\star, \mathfrak{l}}_{2} (\boldsymbol{2}^{a_{i+1}},3,\cdots, \boldsymbol{2}^{a_{j}},3) + \zeta^{\star\star, \mathfrak{l}}_{1} (\boldsymbol{2}^{<a_{i}},3,\cdots, \boldsymbol{2}^{a_{j}},3) \right) & \otimes& \zeta^{\star, \mathfrak{m}} (\cdots,3, \boldsymbol{2}^{<a_{i}},3,\boldsymbol{2}^{a_{j+1}},3, \cdots)\\ - \left(\zeta^{\star\star, \mathfrak{l}}_{2} (\boldsymbol{2}^{a_{j+1}},3,\cdots,3) + \zeta^{\star\star, \mathfrak{l}}_{1}(\boldsymbol{2}^{<a_{j}},3,\cdots,3) \right)& \otimes & \zeta^{\star, \mathfrak{m}} (\cdots,3, \boldsymbol{2}^{a_{i-1}},3,\boldsymbol{2}^{< a_{j}},3, \cdots) \\ \end{array} \right] \\ \quad \quad \begin{array}{lll} \quad \quad+ \quad\delta_{2r+1} \quad \left( \zeta^{\star, \mathfrak{l}} (\boldsymbol{2}^{a_{0}},3,\cdots,3, \boldsymbol{2}^{\leq a_{i}})- \zeta^{\star\star, \mathfrak{l}} (\boldsymbol{2}^{\leq a_{i}},3,\cdots,3, \boldsymbol{2}^{a_{0}}) \right) & \otimes & \zeta^{\star, \mathfrak{m}} (\boldsymbol{2}^{\leq a_{i}},3, \cdots)\\ \quad\quad +\quad \delta_{2r+1} \quad\zeta^{\star\star, \mathfrak{l}} (\boldsymbol{2}^{\leq a_{j}},3,\cdots,3, \boldsymbol{2}^{ a_{p}}) & \otimes & \zeta^{\star, \mathfrak{m}} (\cdots,3, \boldsymbol{2}^{\leq a_{j}}).
\end{array} \end{multline} In particular, the family of Hoffman $\star$ elements is stable under the coaction. \\ By the previous expression $(\ref{eq:dr3})$, we see that each cut (of odd length) removes at least one $3$. This means that the level filtration is stable under the action of $\mathcal{G}$ and: \begin{equation} \label{eq:levelfiltstrable} D_{2r+1}(\mathcal{F}^{L}_{l}\mathcal{H}^{2,3}) \subset \mathcal{L}_{2r+1} \otimes \mathcal{F}^{L}_{l-1}\mathcal{H}_{n-2r-1}^{2,3} . \end{equation} Then, let us consider the level-graded derivation: \begin{equation} gr^{L}_{l} D_{2r+1}: gr^{L}_{l}\mathcal{H}_{n}^{2,3} \rightarrow \mathcal{L}_{2r+1} \otimes gr^{L}_{l-1}\mathcal{H}_{n-2r-1}^{2,3}. \end{equation} If we restrict ourselves to the cuts in the coaction that remove exactly one $3$ on the right side, the formula $(\ref{eq:dr3})$ leads to: \begin{flushleft} \hspace*{-0.5cm}$gr^{L}_{l} D_{2r+1} \zeta^{\star, \mathfrak{m}} (\boldsymbol{2}^{a_{0}},3,\cdots,3, \boldsymbol{2}^{a_{p}}) =$ \end{flushleft} \begin{multline}\label{eq:gdr3} \hspace*{-1.5cm}\begin{array}{lll} \quad - \delta_{a_{0} < r \leq a_{0}+a_{1}+2} \quad \zeta^{\star\star, \mathfrak{l}}_{2} (\boldsymbol{2}^{a_{0}}, 3, \boldsymbol{2}^{r-a_{0}-2}) &\otimes & \zeta^{\star, \mathfrak{m}} (\boldsymbol{2}^{ a_{0}+a_{1}+1-r},3, \cdots) \end{array}\\ \hspace*{-1.3cm}\sum_{i<j} \left[ \begin{array}{l} \delta_{r\leq a_{i}} \quad \zeta^{\star\star, \mathfrak{l}}_{1} (\boldsymbol{2}^{r}) \quad \quad \quad \quad \quad \otimes \left( \zeta^{\star, \mathfrak{m}} (\cdots,3, \boldsymbol{2}^{a_{i-1}+ a_{i}-r+1},3, \cdots) - \zeta^{\star, \mathfrak{m}} (\cdots,3, \boldsymbol{2}^{a_{i+1}+ a_{i}-r+1},3, \cdots) \right) \\ + \left( \delta_{r=a_{i}+2} \zeta^{\star\star, \mathfrak{l}}_{2} (\boldsymbol{2}^{a_{i}},3) + \delta_{r< a_{i}+a_{i-1}+3} \zeta^{\star\star, \mathfrak{l}}_{1} (\boldsymbol{2}^{r-a_{i}-3}, 3, \boldsymbol{2}^{a_{i}},3) \right) \otimes \zeta^{\star, \mathfrak{m}} (\cdots,3,
\boldsymbol{2}^{a_{i}+a_{i-1}-r+1},3,\boldsymbol{2}^{a_{i+1}},3, \cdots)\\ - \left( \delta_{r=a_{i}+2} \zeta^{\star\star, \mathfrak{l}}_{2} (\boldsymbol{2}^{a_{i}},3) + \delta_{r< a_{i}+a_{i+1}+3} \zeta^{\star\star, \mathfrak{l}}_{1}(\boldsymbol{2}^{r-a_{i}-3},3, \boldsymbol{2}^{a_{i}}, 3) \right) \otimes \zeta^{\star, \mathfrak{m}} (\cdots,3, \boldsymbol{2}^{a_{i-1}},3,\boldsymbol{2}^{a_{i}+a_{i+1}-r+1},3, \cdots) \end{array} \right] \\ \hspace*{-2cm} \textsc{(D)} \begin{array}{lll} +\delta_{a_{p}+1 \leq r \leq a_{p}+a_{p-1}+1} \quad \zeta^{\star\star, \mathfrak{l}} (\boldsymbol{2}^{r- a_{p}-1},3, \boldsymbol{2}^{ a_{p}}) &\otimes & \zeta^{\star, \mathfrak{m}} (\cdots,3, \boldsymbol{2}^{a_{p}+ a_{p-1}-r+1}). \end{array} \end{multline} By the antipode $\shuffle$ relation (cf. $\ref{eq:antipodeshuffle2}$): $$\zeta^{\star\star, \mathfrak{l}}_{1} (\boldsymbol{2}^{a},3, \boldsymbol{2}^{b},3)= \zeta^{\star\star, \mathfrak{l}}_{2} (\boldsymbol{2}^{b},3, \boldsymbol{2}^{a+1})=\zeta^{\star\star, \mathfrak{l}}(\boldsymbol{2}^{b+1},3, \boldsymbol{2}^{a+1})- \zeta^{\star, \mathfrak{l}}(\boldsymbol{2}^{b+1},3, \boldsymbol{2}^{a+1}).$$ Then, by Lemma $\ref{lemmcoeff}$, all the terms appearing on the left side of $gr^{L}_{l} D_{2r+1}$ are products of simple MZVs, which turn into, in the coalgebra $\mathcal{L}$, rational multiples of $\zeta^{\mathfrak{l}}(2r+1)$: $$gr^{L}_{l} D_{2r+1} (gr^{L}_{l}\mathcal{H}_{n}^{2,3}) \subset \mathbb{Q}\zeta^{\mathfrak{l}}(2r+1)\otimes gr^{L}_{l-1}\mathcal{H}_{n-2r-1}^{2,3}.$$ \\ Sending $\zeta^{\mathfrak{l}}(2r+1)$ to $1$ with the projection $\pi:\mathbb{Q} \zeta^{\mathfrak{l}}(2r+1)\rightarrow\mathbb{Q}$, we can then consider:\nomenclature{$\partial^{L}_{r,l}$ and $\partial^{L}_{<n,l}$}{defined as composition from derivations} \begin{description} \item[$\boldsymbol{\cdot\quad \partial^{L}_{r,l}}$] $ : gr^{L}_{l}\mathcal{H}_{n}^{2,3}\rightarrow gr^{L}_{l-1}\mathcal{H}_{n-2r-1}^{2,3}, \quad \text{ defined as the composition }$
$$\partial^{L}_{r,l}\mathrel{\mathop:}=gr_{l}^{L}\partial_{2r+1}\mathrel{\mathop:}=m\circ(\pi\otimes id)(gr^{L}_{l} D_{2r+1}): \quad gr^{L}_{l}\mathcal{H}_{n}^{2,3} \rightarrow \mathbb{Q}\otimes_{\mathbb{Q}} gr^{L}_{l-1}\mathcal{H}_{n-2r-1}^{2,3} \rightarrow gr^{L}_{l-1}\mathcal{H}_{n-2r-1}^{2,3} .$$ \item[$\boldsymbol{\cdot\quad \partial^{L}_{<n,l}}$] $\mathrel{\mathop:}=\oplus_{2r+1<n}\partial^{L}_{r,l} .$ \\ \end{description} The injectivity of this map is the keystone of the Hoffman$^{\star}$ proof. Its explicit expression is: \begin{lemm} \begin{flushleft} $\partial^{L}_{r,l} (\zeta^{\star, \mathfrak{m}} (\boldsymbol{2}^{a_{0}},3,\cdots,3, \boldsymbol{2}^{a_{p}}))=$ \end{flushleft} $$\begin{array}{l} \quad - \delta_{a_{0} < r \leq a_{0}+a_{1}+2} \widetilde{B}^{a_{0}+1,r-a_{0}-2} \zeta^{\star, \mathfrak{m}} (\boldsymbol{2}^{ a_{0}+a_{1}+1-r},3, \cdots) \\ \\ + \sum_{i<j} \left[ \begin{array}{l} \delta_{r\leq a_{i}}C_{r} \left( \zeta^{\star, \mathfrak{m}} (\cdots,3, \boldsymbol{2}^{a_{i-1}+ a_{i}-r+1},3, \cdots) - \zeta^{\star, \mathfrak{m}} (\cdots,3, \boldsymbol{2}^{a_{i+1}+ a_{i}-r+1},3, \cdots) \right) \\ \\ +\delta_{a_{i}+2\leq r \leq a_{i}+a_{i-1}+2} \widetilde{B}^{a_{i}+1,r-a_{i}-2} \zeta^{\star, \mathfrak{m}} (\cdots,3, \boldsymbol{2}^{a_{i}+a_{i-1}-r+1},3,\boldsymbol{2}^{a_{i+1}},3, \cdots) \\ \\ - \delta_{a_{i}+2 \leq r\leq a_{i}+a_{i+1}+2} \widetilde{B}^{a_{i}+1,r-a_{i}-2} \zeta^{\star, \mathfrak{m}} (\cdots,3, \boldsymbol{2}^{a_{i-1}},3,\boldsymbol{2}^{a_{i}+a_{i+1}-r+1},3, \cdots) \\ \end{array} \right] \\ \\ \textsc{(D)} + \delta_{a_{p}+1 \leq r \leq a_{p}+a_{p-1}+1} B^{r-a_{p}-1,a_{p}} \zeta^{\star, \mathfrak{m}} (\cdots,3, \boldsymbol{2}^{a_{p}+ a_{p-1}-r+1}) , \\ \\ \quad \quad \quad \text{ with } \widetilde{B}^{a,b}\mathrel{\mathop:}=B^{a,b}C_{a+b+1}-A^{a,b}.
\end{array}$$ \end{lemm} \begin{proof} Using Lemma $\ref{lemmcoeff}$ for the left side of $gr^{L}_{p} D_{2r+1}$, and keeping just the coefficients of $\zeta^{\mathfrak{l}}(2r+1)$, we easily obtain this formula. In particular: \begin{flushleft} $\zeta^{\star\star, \mathfrak{l}}_{2} (\boldsymbol{2}^{a},3, \boldsymbol{2}^{b})=\zeta^{\star\star, \mathfrak{l}}(\boldsymbol{2}^{a+1},3, \boldsymbol{2}^{b})- \zeta^{\star, \mathfrak{l}}(\boldsymbol{2}^{a+1},3, \boldsymbol{2}^{b}) = \widetilde{B}^{a+1,b} \zeta^{\mathfrak{l}}(\overline{2a+2b+5}).$\\ $\zeta^{\star\star, \mathfrak{l}}_{1} (\boldsymbol{2}^{a},3, \boldsymbol{2}^{b},3)= \zeta^{\star\star, \mathfrak{l}}_{2} (\boldsymbol{2}^{b},3, \boldsymbol{2}^{a+1})= \widetilde{B}^{b+1,a+1}\zeta^{\mathfrak{l}}(\overline{2a+2b+7}).$ \end{flushleft} \end{proof} \subsection{Proof of Theorem $4.4.1$} Since the cardinality of the Hoffman $\star$ family in weight $n$ is equal to the dimension of $\mathcal{H}_{n}^{1}$,\footnote{Both obviously satisfy the same recursive relation: $d_{n}=d_{n-2}+d_{n-3}$.} it remains to prove that they are linearly independent: \begin{center} \texttt{Claim 1}: The Hoffman $\star$ elements are linearly independent. \end{center} The proof fundamentally uses the injectivity of the map $\partial^{L}_{<n,l}$ defined above, via a recursion on the level. Indeed, let us first prove the following statement: \begin{equation} \label{eq:bijective} \texttt{Claim 2}: \quad \partial^{L}_{<n,l}: gr^{L}_{l}\mathcal{H}_{n}^{2,3}\rightarrow \oplus_{2r+1<n} gr^{L}_{l-1}\mathcal{H}_{n-2r-1}^{2,3} \text{ is bijective}.
\end{equation} Using Conjecture $\ref{conjcoeff}$ (assumed for this theorem), we obtain the following $2$-adic valuations of these coefficients, with $r=a+b+1$:\footnote{The last inequality comes from the fact that $v_{2} (\binom{2r}{2b+1} )<2r $.} \begin{equation}\label{eq:valuations} \hspace*{-0.7cm}\left\lbrace \begin{array}{ll} C_{r}=\frac{2^{2r+1}}{2r+1} &\Rightarrow v_{2}(C_{r})=2r+1 .\\ \widetilde{B}^{a,b}\mathrel{\mathop:}= B^{a,b}C_{r}-A^{a,b}=2^{2r+1}\left( \frac{1}{2r+1}-\frac{\binom{2r}{2a}}{2^{2r}-1} \right) &\Rightarrow v_{2}(\widetilde{B}^{a,b}) \geq 2r+1.\\ B^{a,b}C_{r}=C_{r}-2\binom{2r}{2b+1} &\Rightarrow v_{2}(B^{0,r-1}C_{r})= 2+ v_{2}(r) \leq v_{2}(B^{a,b}C_{r}) < 2r+1 . \end{array} \right. \end{equation} The deconcatenation terms in $\partial^{L}_{<n,l}$, which correspond to the terms with $B^{a,b}C_{r}$, are then the smallest $2$-adically; this is crucial for the injectivity.\\ \\ Now, define a matrix $M_{n,l}$ as the matrix of $\partial^{L}_{<n,l}$ on $\zeta^{\star, \mathfrak{m}} (\boldsymbol{2}^{a_{0}},3,\cdots,3, \boldsymbol{2}^{a_{l}})$ in terms of $\zeta^{\star, \mathfrak{m}} (\boldsymbol{2}^{b_{0}},3,\cdots,3, \boldsymbol{2}^{b_{l-1}})$; even though, up to now, we do not know that these families are linearly independent. We order the elements on both sides by lexicographical order on ($a_{l}, \ldots, a_{0}$), resp. ($r,b_{l-1}, \ldots, b_{0}$), such that the diagonal corresponds to $r=a_{l}$ and $b_{i}=a_{i}$ for $i<l$, and claim: \begin{center} \texttt{Claim 3}: The matrix $M_{n,l}$ of $\partial^{L}_{<n,l}$ on the Hoffman $\star$ elements is invertible. \end{center} \begin{proof}[\texttt{Proof of Claim 3}] Indeed, let $\widetilde{M}_{n,l}$ be the matrix $M_{n,l}$ where we have multiplied each row corresponding to $D_{2r+1}$ by $2^{-v_{2}(r)-2}$. Then, modulo $2$, because of the previous computations on the $2$-adic valuations of the coefficients, only the deconcatenation terms remain.
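As an aside, the three valuation statements of $(\ref{eq:valuations})$ can be double-checked by exact rational arithmetic. A minimal sketch in Python (an independent sanity check, not part of the argument; the helper \texttt{v2} is ours):

```python
from fractions import Fraction
from math import comb

def v2(q):
    """2-adic valuation of a nonzero rational number."""
    num, den = q.numerator, q.denominator
    v = 0
    while num % 2 == 0:
        num //= 2
        v += 1
    while den % 2 == 0:
        den //= 2
        v -= 1
    return v

for r in range(1, 12):
    C = Fraction(2**(2*r + 1), 2*r + 1)
    assert v2(C) == 2*r + 1                        # v_2(C_r) = 2r+1
    for a in range(r):
        b = r - a - 1                              # r = a + b + 1
        Btilde = 2**(2*r + 1) * (Fraction(1, 2*r + 1)
                                 - Fraction(comb(2*r, 2*a), 2**(2*r) - 1))
        if Btilde != 0:                            # e.g. r=1, a=0 gives 0
            assert v2(Btilde) >= 2*r + 1           # v_2(B~^{a,b}) >= 2r+1
        BC = C - 2*comb(2*r, 2*b + 1)              # B^{a,b} C_r
        assert 2 + v2(Fraction(r)) <= v2(BC) < 2*r + 1
```

For instance, $r=2$ gives $C_{2}=\frac{32}{5}$ with $v_{2}=5$, and $B^{0,1}C_{2}=\frac{32}{5}-8=-\frac{8}{5}$ with $v_{2}=3=2+v_{2}(2)$.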
Hence, with the previous order, the matrix is, modulo $2$, triangular with $1$ on the diagonal; the diagonal being the case where $B^{0,r-1}C_{r}$ appears. This implies that $\det (\widetilde{M}_{n,l})\equiv 1 \pmod{2}$, and in particular it is nonzero. Consequently, the matrix $\widetilde{M}_{n,l}$ is invertible, and so is $M_{n,l}$. \end{proof} Obviously, $\texttt{Claim 3} \Rightarrow \texttt{Claim 2} $, but it also enables us to complete the proof: \begin{proof}[\texttt{Proof of Claim 1}] Let us first prove it for the Hoffman $\star$ elements of the same level and weight, by recursion on the level. Level $0$ is obvious: $\zeta^{\star,\mathfrak{m}}(2)^{n}$ is a rational multiple of $(\pi^{\mathfrak{m}})^{2n}$. Assuming by recursion on the level that the Hoffman $\star$ elements of weight $\leq n$ and level $l-1$ are linearly independent, since $M_{n,l}$ is invertible, this implies that the Hoffman $\star$ elements of weight $n$ and level $l$ are linearly independent.\\ The last step is to realize that the bijectivity of $\partial^{L}_{<n,l}$ also implies that Hoffman $\star$ elements of different levels are linearly independent. Indeed, the proof can be done by contradiction: applying $\partial^{L}_{<n,l}$ to a linear combination of Hoffman $\star$ elements, $l$ being the maximal number of $3$'s, we arrive at an equality between elements of the same level, and hence at a contradiction. \end{proof} \subsection{Analytic conjecture} Here are the equalities needed for Theorem $4.4.1$, known up to some rational coefficients: \begin{lemm} \label{lemmcoeff} With $w$, $d$ resp. $ht$ denoting the weight, the depth, resp. the height: \begin{itemize} \item[$(o)$] $\begin{array}{llll} \zeta^{\mathfrak{m}}(\overline{r}) & = & (2^{1-r}-1) &\zeta^{\mathfrak{m}}(r).\\ \zeta^{\mathfrak{m}}(2n) & = & \frac{\mid B_{2n}\mid 2^{3n-1}3^{n}}{(2n)!} &\zeta^{\mathfrak{m}}(2)^{n}.
\end{array}$ \item[$(i)$] $\zeta^{\star,\mathfrak{m}}(\boldsymbol{2}^{n})= -2 \zeta^{\mathfrak{m}}(\overline{2n}) =\frac{(2^{2n}-2)6^{n}}{(2n)!}\vert B_{2n}\vert\zeta^{\mathfrak{m}}(2)^{n}.$ \item[$(ii)$] $\zeta^{\star,\mathfrak{m}}_{1}(\boldsymbol{2}^{n})= -2 \sum_{r=1}^{n} \zeta^{\mathfrak{m}}(2r+1)\zeta^{\star,\mathfrak{m}}(\boldsymbol{2}^{n-r}).$ \item[$(iii)$] \begin{align} \zeta^{\star\star,\mathfrak{m}}(\boldsymbol{2}^{n}) & = \sum_{d \leq n} \sum_{w(\textbf{m})=2n \atop ht(\textbf{m})=d(\textbf{m})=d} 2^{2n-2d}\zeta^{\mathfrak{m}}(\textbf{m}) \\ & =\sum_{2n=\sum s_{k}(2i_{k}+1)+2S \atop i_{k}\neq i_{j}} \left( \prod_{k=1}^{p} \frac{C_{i_{k}}^{s_{k}}} {s_{k}!} \zeta^{\mathfrak{m}}(\overline{2i_{k}+1})^{s_{k}} \right) D_{S} \zeta^{\mathfrak{m}}(2)^{S}. \nonumber\\ \zeta^{\star\star,\mathfrak{m}}_{1}(\boldsymbol{2}^{n}) & =-\sum_{d \leq n} \sum_{w(\textbf{m})=2n+1 \atop ht(\textbf{m})=d(\textbf{m})=d} 2^{2n+1-2d}\zeta^{\mathfrak{m}}(\textbf{m}) \\ &=\sum_{2n+1=\sum s_{k}(2i_{k}+1)+2S \atop i_{k}\neq i_{j}} \left( \prod_{k=1}^{p} \frac{C_{i_{k}}^{s_{k}}} {s_{k}!} \zeta^{\mathfrak{m}}(\overline{2i_{k}+1})^{s_{k}}\right) D_{S} \zeta^{\mathfrak{m}}(2)^{S}\nonumber \end{align} \item[$(iv)$] $\zeta^{\star,\mathfrak{m}}(\boldsymbol{2}^{a},3,\boldsymbol{2}^{b})= \sum A^{a,b}_{r} \zeta^{\mathfrak{m}}(\overline{2r+1})\zeta^{\star,\mathfrak{m}}(\boldsymbol{2}^{n-r}).$ \item[$(v)$] \begin{align} \zeta^{\star\star,\mathfrak{m}}(\boldsymbol{2}^{a},3,\boldsymbol{2}^{b}) &= \sum_{w=\sum s_{k}(2i_{k}+1)+2S \atop i_{k}\neq i_{j}} B^{a,b}_{i_{1},\cdots, i_{p}\atop s_{1}\cdots s_{p}} \left( \prod_{k=1}^{p} \frac{C_{i_{k}}^{s_{k}}} {s_{k}!} \zeta^{\mathfrak{m}}(\overline{2i_{k}+1})^{s_{k}}\right) D_{S} \zeta^{\mathfrak{m}}(2)^{S}.\\ \zeta^{\star\star,\mathfrak{m}}_{1}(\boldsymbol{2}^{a},3,\boldsymbol{2}^{b}) &=D^{a,b} \zeta^{\mathfrak{m}}(2)^{\frac{w}{2}}+ \sum_{w=\sum s_{k}(2i_{k}+1)+2S \atop i_{k}\neq i_{j}} B^{a,b}_{i_{1},\cdots, i_{p}\atop s_{1}\cdots s_{p}} \left( 
\prod_{k=1}^{p} \frac{C_{i_{k}}^{s_{k}}} {s_{k}!} \zeta^{\mathfrak{m}}(\overline{2i_{k}+1})^{s_{k}}\right) D_{S}\zeta^{\mathfrak{m}}(2)^{S}. \end{align} \end{itemize} Where: \begin{itemize} \item[$\cdot$] $C_{r}=\frac{2^{2r+1}}{2r+1}$, $D_{S}$ explicit\footnote{Cf. Proof.} and with the following constraint: \begin{equation} \label{eq:constrainta} A^{a,b}_{r}=A_{r}^{a,r-a-1}+C_{r} \left( B^{r-b-1,b}- B^{r-a-1,a} +\delta_{r\leq b}-\delta_{r\leq a} \right). \end{equation} \item[$\cdot$] The recursive formula for $B$-coefficients, where $B^{x,y}\mathrel{\mathop:}=B^{x,y}_{x+y+1 \atop 1}$ and $r<a+b+1$: \begin{equation} \label{eq:constraintb} \begin{array}{lll } B^{a,b}_{r \atop 1} & = & \delta_{r\leq b} - \delta_{r< a}+ B^{r-b-1,b}+\frac{D^{a-r-1,b}}{a+b-r+1}+\delta_{r=a} \frac{2(2^{2b+1}-1)6^{b+1} \mid B_{2b+2} \mid}{(2b+2)! D_{b+1}}.\\ B^{a,b}_{i_{1},\cdots, i_{p}\atop s_{1}\cdots s_{p}} &=& \left\{ \begin{array}{l} \delta_{i_{1}\leq b } - \delta_{i_{1}< a } + B^{i_{1}-b-1,b} + B^{a-i_{1}-1,b}_{i_{1}, \ldots, i_{p}\atop s_{1}-1, \ldots, s_{p}} \quad \text{ for } \sum s_{k} \text{ odd } \\ \delta_{i_{1}\leq b } - \delta_{i_{1}\leq a } + B^{i_{1}-b-1,b} +B^{a-i_{1},b}_{i_{1}, \ldots, i_{p}\atop s_{1}-1, \ldots, s_{p}} \quad \text{ else }. \end{array} \right. \end{array} \end{equation} \end{itemize} \end{lemm} \noindent Before giving the proof, here is the remaining (analytic) conjecture on some of these coefficients, which is sufficient to complete the Hoffman $\star$ basis proof (cf. Theorem $\ref{Hoffstar}$): \begin{conj}\label{conjcoeff} The equalities $(v)$ are satisfied for real MZV, with: $$B^{a,b}=1-\frac{2}{C_{a+b+1}}\binom{2a+2b+2}{2b+1}.$$ \end{conj} \textsc{Remarks:} \begin{itemize} \item[$\cdot$] This conjecture is of an entirely different nature from the techniques developed in this thesis. We can expect that it can be proved using analytic methods, such as the usual techniques of identifying hypergeometric series, as in $\cite{Za}$ or $\cite{Li}$.
\item[$\cdot$] The equality $(iv)$ is already proven in the analytic case by Ohno-Zagier (cf. $\cite{IKOO}$, $\cite{Za}$), with the values of the coefficients $A_{r}^{a,b}$ given below. Nevertheless, as we will see through the proofs below, to make the coefficients for the (stronger) motivic identity $(iv)$ explicit, we need to prove the other identities in $(v)$. \item[$\cdot$] We will use below a result of Ohno and Zagier on sums of MZV of fixed weight, depth and height to determine the coefficients in $(iii)$. \end{itemize} \begin{theo} If the analytic conjecture ($\ref{conjcoeff}$) holds, the equalities $(iv)$, $(v)$ are true in the motivic case, with the same values of the coefficients. In particular: $$A_{r}^{a,b}= 2\left( -\delta_{r=a}+ \binom{2r}{2a} \right) \frac{2^{2r}}{2^{2r}-1}-2\binom{2r}{2b+1}.$$ \end{theo} \begin{proof} Recall that if we know a motivic equality up to one unknown coefficient (of $\zeta(\text{weight})$), the analogous analytic result enables us to determine its value by Corollary $\ref{kerdn}$.\\ Let us now assume, by recursion on $n$, that we know $\lbrace B^{a,b}, D^{a,b}, B_{i_{1} \cdots i_{p} \atop s_{1} \cdots s_{p} }^{a,b} \rbrace_{a+b+1<n}$ and consider $(a,b)$ such that $a+b+1=n$. Then, by $(\ref{eq:constraintb})$, we are able to compute the $B_{\textbf{i}\atop \textbf{s}}^{a,b}$ with $(s,i)\neq (1,n)$. Using the analytic $(v)$ equality, and Corollary $\ref{kerdn}$, we deduce the only remaining unknown coefficient $B^{a,b}$ resp. $D^{a,b}$ in $(v)$.\\ Lastly, by recursion on $n$ we deduce the $A_{r}^{a,b}$ coefficients: let us assume they are known for $a+b+1<n$, and take $(a,b)$ with $a+b+1=n$. By the constraint $(\ref{eq:constrainta})$, since we already know the $B$ and $C$ coefficients, we deduce $A_{r}^{a,b}$ for $r<n$. The remaining coefficient, $A_{n}^{a,b}$, is obtained using the analytic $(iv)$ equality and Corollary $\ref{kerdn}$.
\end{proof} \paragraph{\texttt{Proof of} Lemma $\ref{lemmcoeff}$.}: \begin{proof} Computing the coaction on these elements, by a recursive procedure, we are able to prove these identities up to some rational coefficients, with Corollary $\ref{kerdn}$. When the analytic analogue of the equality is known for MZV, we \textit{may} conclude on the value of the remaining rational coefficient of $\zeta^{\mathfrak{m}}(w)$ by identification (as for $(i),(ii),(iii)$). However, if the family is not stable under the coaction (as for $(iv)$), knowing the analytic case is not enough.\\ \texttt{Nota Bene:} This proof refers to the expression of $D_{2r+1}$ in Lemma $\ref{lemmt}$: we look at cuts of length $2r+1$ among the sequence of $0, 1, $ or $\star$ (in the iterated-integral writing); there are different kinds of cuts (according to their extremities), and each cut may bring out two terms ($T_{0,0}$ and $T_{0,\star}$ for instance). The simplifications are illustrated by the diagrams, where some arrows (terms of a cut) get simplified by rules specified in Annexe $A$.\\ \begin{itemize} \item[$(i)$] The corresponding iterated integral is: $$I^{\mathfrak{m}}(0; 1, 0, \star, 0 \cdots, \star, 0; 1).$$ The only possible cuts of odd length are between two $\star$ ($T_{0,\star}$ and $T_{\star,0}$), or $T_{1,0}$ from the first $1$ to a $\star$, or $T_{0,1}$ from a $\star$ to the last $1$. By \textsc{ Shift }(\ref{eq:shift}), these cuts get simplified two by two. Since the $D_{2r+1}$, for $2r+1<2n$, are all zero, it belongs to $\mathbb{Q}\zeta^{\mathfrak{m}}(2n)$, by Corollary $\ref{kerdn}$. Using the (known) analytic equality, we can conclude.
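As an independent numerical sanity check of $(i)$ (not part of the proof), one can compare a truncated nested sum for $\zeta^{\star}(\boldsymbol{2}^{n})$ with the stated rational multiple of $\zeta(2)^{n}$. A minimal sketch in Python (function names are ours):

```python
from fractions import Fraction
from math import pi, factorial, comb

def bernoulli(m):
    """Bernoulli numbers B_0..B_m (convention B_1 = -1/2), via the usual recurrence."""
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for n in range(1, m + 1):
        B[n] = -sum(Fraction(comb(n + 1, j)) * B[j] for j in range(n)) / (n + 1)
    return B

def zeta_star_2n(n, K=100_000):
    """zeta^*({2}^n): sum over k_1 <= ... <= k_n <= K of 1/(k_1...k_n)^2, truncated at K."""
    T = [1.0] * (K + 1)                        # T_0(k) = 1
    for _ in range(n):
        S = [0.0] * (K + 1)
        for k in range(1, K + 1):
            S[k] = S[k - 1] + T[k] / k**2      # T_j(k) = sum_{m<=k} T_{j-1}(m)/m^2
        T = S
    return T[K]

B = bernoulli(6)
for n in (1, 2, 3):
    closed = (2**(2*n) - 2) * 6**n / factorial(2*n) * abs(B[2*n]) * (pi**2 / 6)**n
    assert abs(zeta_star_2n(n) - closed) < 1e-4
```

For $n=2$ this recovers the classical value $\zeta^{\star}(2,2)=\frac{7\pi^{4}}{360}$.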
\item[$(ii)$] It is quite similar to $(i)$: using $\textsc{ Shift }$ $(\ref{eq:shift})$, only the following cut remains:\\ \includegraphics[]{dep2.pdf}\\ $$\text{i.e.}: \quad D_{2r+1} (\zeta^{\star,\mathfrak{m}}_{1}(\boldsymbol{2}^{n}))= \zeta^{\star,\mathfrak{l}}_{1}(\boldsymbol{2}^{r})\otimes \zeta^{\star,\mathfrak{m}}(\boldsymbol{2}^{n-r})=-2 \zeta^{ \mathfrak{l}}(\overline{2r+1})\otimes \zeta^{ \star,\mathfrak{m}}(\boldsymbol{2}^{n-r}).$$ The last equality is deduced from the recursive hypothesis (smaller weight). The analytic equality (coming from the Ohno-Zagier formula and the $\shuffle$ regularisation) enables us to conclude on the value of the remaining coefficient of $\zeta^{\mathfrak{m}}(2n+1)$. \item[$(iii)$] Expressing these ES$\star\star$ as a linear combination of ES by $\shuffle$ regularisation: $$\hspace*{-0.5cm}\zeta^{\star\star,\mathfrak{m}}(\boldsymbol{2}^{n})= \sum_{k_{i} \text{ even}} \zeta^{\mathfrak{m}}_{2n-\sum k_{i}}(k_{1},\cdots, k_{p})=\sum_{n_{i}\geq 2} \left( \sum_{k_{i} \text{ even} \atop k_{i} \leq n_{i}} \binom{n_{1}-1}{k_{1}-1} \cdots \binom{n_{d}-1}{k_{d}-1} \right) \zeta^{\mathfrak{m}}(n_{1},\cdots, n_{d}) .$$ Using the multi-binomial formula: $$2^{\sum m_{i}}=\sum_{l_{i} \leq m_{i}} \binom{m_{1}}{l_{1}}(1-(-1)^{l_{1}}) \cdots \binom{m_{d}}{l_{d}}(1-(-1)^{l_{d}})= 2^{d} \sum_{l_{i} \leq m_{i}\atop l_{i} \text{ odd }} \binom{m_{1}}{l_{1}} \cdots \binom{m_{d}}{l_{d}} .$$ Thus: $$\zeta^{\star\star,\mathfrak{m}}(\boldsymbol{2}^{n})=\sum_{d \leq n} \sum_{w(\textbf{m})=2n \atop ht(\textbf{m})=d(\textbf{m})=d} 2^{2n-2d}\zeta^{\mathfrak{m}}(\textbf{m}).$$ Similarly for $(4.27)$, since: $$\zeta^{\star\star,\mathfrak{m}}_{1}(\boldsymbol{2}^{n})=\sum_{k_{i} \text{ even}} \zeta^{\mathfrak{m}}_{2n+1-\sum k_{i}}(k_{1},\cdots, k_{p})=\sum_{d \leq n} \sum_{w(\textbf{m})=2n+1 \atop ht(\textbf{m})=d(\textbf{m})=d} 2^{2n+1-2d}\zeta^{\mathfrak{m}}(\textbf{m}).$$ Now, still using only $\textsc{ Shift }$ $(\ref{eq:shift})$, the following cuts remain:\\
\includegraphics[]{dep3.pdf}\\ With a recursion on $n$ for both $(4.26)$, $(4.27)$, we deduce: $$D_{2r+1}(\zeta^{\star\star,\mathfrak{m}}(\boldsymbol{2}^{n}))=\zeta^{\star\star,\mathfrak{l}}_{1}(\boldsymbol{2}^{r})\otimes \zeta^{\star\star,\mathfrak{m}}_{1}(\boldsymbol{2}^{n-r-1})=C_{r} \zeta^{ \mathfrak{l}}(\overline{2r+1})\otimes \zeta^{\star\star,\mathfrak{m}}_{1}(\boldsymbol{2}^{n-r-1}).$$ $$D_{2r+1}(\zeta^{\star\star,\mathfrak{m}}_{1}(\boldsymbol{2}^{n}))=\zeta^{\star\star,\mathfrak{l}}_{1}(\boldsymbol{2}^{r})\otimes \zeta^{\star\star,\mathfrak{m}}(\boldsymbol{2}^{n-r})=C_{r} \zeta^{ \mathfrak{l}}(\overline{2r+1})\otimes \zeta^{\star\star,\mathfrak{m}}(\boldsymbol{2}^{n-r}).$$ To find the remaining coefficients, we need the corresponding analytic result, which is a consequence of the sum relation for MZV of fixed weight, depth and height, by Ohno and Zagier ($\cite{OZa}$, Theorem $1$), via the hypergeometric functions.\\ Using $\cite{OZa}$, the generating series of these sums is, with $\alpha,\beta=\frac{x+y \pm \sqrt{(x+y)^{2}-4z}}{2}$: $$\begin{array}{lll} \phi_{0}(x,y,z)\mathrel{\mathop:} & = & \sum_{s\leq d \atop w\geq d+s} \left( \sum \zeta(\textbf{k}) \right) x^{w-d-s}y^{d-s}z^{s-1} \\ & = & \frac{1}{xy-z} \left( 1- \exp \left( \sum_{m=2}^{\infty} \frac{\zeta(m)}{m}(x^{m}+y^{m}-\alpha^{m}-\beta^{m}) \right) \right) . \end{array}$$ From this, let us express the generating series of both $\zeta^{\star\star}(\boldsymbol{2}^{n})$ and $\zeta^{\star\star}_{1}(\boldsymbol{2}^{n})$: $$\phi(x)\mathrel{\mathop:}= \sum_{w} \left( \sum_{ht(\textbf{k})=d(\textbf{k})=d\atop w\geq 2d} 2^{w-2d} \zeta(\textbf{k}) \right) x^{w-2}= \phi_{0}(2x, 0, x^{2}).$$ Using this result of Ohno and Zagier: $$\phi(x)= \frac{1}{x^{2}} \left(\exp \left( \sum_{m=2}^{\infty} \frac{2^{m}-2}{m} \zeta(m) x^{m} \right) -1\right).$$ Consequently, both $\zeta^{\star\star}(\boldsymbol{2}^{n})$ and $\zeta^{\star\star}_{1}(\boldsymbol{2}^{n})$ can be written explicitly as polynomials in simple zetas.
For $\zeta^{\star\star}(\boldsymbol{2}^{n})$, by taking the coefficient of $x^{2n-2}$ in $\phi(x)$: $$\zeta^{\star\star}(\boldsymbol{2}^{n})= \sum_{\sum m_{i} s_{i}=2n \atop m_{i}\neq m_{j}} \prod_{i=1}^{k} \left( \frac{1}{s_{i} !}\left( \zeta(m_{i}) \frac{2^{m_{i}}-2}{m_{i}}\right)^{s_{i}} \right) .$$ Gathering the zetas at even arguments, this becomes: $$\zeta^{\star\star}(\boldsymbol{2}^{n})= \sum_{\sum (2i_{k}+1) s_{k}+2S=2n \atop i_{k}\neq i_{j}} \prod_{k=1}^{p} \left( \frac{1}{s_{k} !}\left( \zeta(2 i_{k}+1) \frac{2^{2 i_{k}+1}-2}{2i_{k}+1}\right)^{s_{k}} \right) d_{S} \zeta(2)^{S}, $$ \begin{equation}\label{eq:coeffds} \text{ where } d_{S}\mathrel{\mathop:}=3^{S}\cdot 2^{3S}\sum_{\sum m_{i} s_{i}=S \atop m_{i}\neq m_{j}} \prod_{i=1}^{k} \left( \frac{1}{s_{i}!} \left( \frac{\mid B_{2m_{i}}\mid (2^{2m_{i}-1}-1) } {2m_{i} (2m_{i})!}\right)^{s_{i}} \right). \end{equation} It remains to turn $\zeta(odd)$ into $\zeta(\overline{odd})$ by $(o)$ to match the expression of the Lemma: $$\zeta^{\star\star}(\boldsymbol{2}^{n})= \sum_{\sum (2i_{k}+1) s_{k}+2S=2n \atop i_{k}\neq i_{j}} \prod_{k=1}^{p} \left( \frac{1}{s_{k} !}\left( c_{i_{k}}\zeta(\overline{2 i_{k}+1}) \right)^{s_{k}} \right) d_{S} \zeta(2)^{S}, \text{ where } c_{r}=\frac{2^{2r+1}}{2r+1}.$$ It is completely similar for $\zeta^{\star\star}_{1}(\boldsymbol{2}^{n})$: by taking the coefficient of $x^{2n-3}$ in $\phi(x)$, we obtain the analytic analogue of $(4.25)$, with the same coefficients $d_{S}$ and $c_{r}$.\\ Now, using these analytic results for $(4.26)$, $(4.27)$, by recursion on the weight, we can identify the coefficients $D_{S}$ and $C_{r}$ with $d_{S}$ and $c_{r}$ respectively, since there is one unknown coefficient at each step of the recursion.
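As a cross-check of $(\ref{eq:coeffds})$ (an independent verification, not part of the proof): writing the even part of the exponent of $\phi$ in the variable $y=\zeta(2)x^{2}$ via $(o)$, the coefficients $d_{S}=[y^{S}]\exp(\sum_{m\geq 1}a_{m}y^{m})$ can be computed by the standard exponential recurrence; one finds $d_{1}=1$ and $d_{2}=\frac{19}{10}$, consistent with $4\zeta(4)+\zeta(2,2)=d_{2}\,\zeta(2)^{2}$. A minimal sketch in Python (the name $a_{m}$ is ours):

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli(m):
    """Bernoulli numbers B_0..B_m (convention B_1 = -1/2)."""
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for n in range(1, m + 1):
        B[n] = -sum(Fraction(comb(n + 1, j)) * B[j] for j in range(n)) / (n + 1)
    return B

S_max = 6
B = bernoulli(2 * S_max)
# a_m = 3^m 2^{3m} |B_{2m}| (2^{2m-1} - 1) / (2m (2m)!): m-th coefficient of the even
# part of the exponent of phi, rewritten in the variable y = zeta(2) x^2 via (o)
a = [Fraction(0)] + [
    Fraction(3**m * 2**(3*m)) * abs(B[2*m]) * (2**(2*m - 1) - 1) / (2*m * factorial(2*m))
    for m in range(1, S_max + 1)
]
# d_S = [y^S] exp(sum_m a_m y^m), via the recurrence S d_S = sum_m m a_m d_{S-m}
d = [Fraction(1)] + [Fraction(0)] * S_max
for S in range(1, S_max + 1):
    d[S] = sum(m * a[m] * d[S - m] for m in range(1, S + 1)) / S
assert d[1] == 1 and d[2] == Fraction(19, 10)
# cross-check: 4 zeta(4) + zeta(2,2) = d_2 zeta(2)^2, as rational multiples of pi^4,
# using zeta(4) = pi^4/90, zeta(2)^2 = pi^4/36, zeta(2,2) = (zeta(2)^2 - zeta(4))/2
assert 4 * Fraction(1, 90) + (Fraction(1, 36) - Fraction(1, 90)) / 2 == d[2] * Fraction(1, 36)
```

The recurrence route and the partition sum $(\ref{eq:coeffds})$ are two different computations of the same exponential, so agreement of the low-order values is a genuine transcription check.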
\item[$(iv)$] After some simplifications by Antipode rules ($\S A.1$), only the following cuts remain:\\ \includegraphics[]{dep4.pdf}\\ This leads to the formula:\\ $$D_{2r+1} (\zeta^{\star,\mathfrak{m}}(\boldsymbol{2}^{a},3,\boldsymbol{2}^{b}))= \left(\zeta^{\star,\mathfrak{m}}(\boldsymbol{2}^{a},3,\boldsymbol{2}^{r-a-1})+\right.$$ $$ \left.\left( \delta_{r \leq b}-\delta_{r \leq a}\right) \zeta^{\star\star,\mathfrak{m}}_{1}(\boldsymbol{2}^{r}) + \zeta^{\star\star , \mathfrak{m}}(\boldsymbol{2}^{r-b-1},3,\boldsymbol{2}^{b}) -\zeta^{\star\star ,\mathfrak{m}}(\boldsymbol{2}^{r-a-1},3,\boldsymbol{2}^{a})\right) \otimes \zeta^{\star,\mathfrak{m}}(\boldsymbol{2}^{n-r}).$$ In particular, the Hoffman $\star$ family is not stable under the coaction, so we first need to prove $(v)$, and then: $$\hspace*{-0.7cm}D_{2r+1} (\zeta^{\star ,\mathfrak{m}}(\boldsymbol{2}^{a},3,\boldsymbol{2}^{b}))= \left( A_{r}^{a,r-a-1}+C_{r} \left( B^{r-b-1,b}- B^{r-a-1,a} +\delta_{r\leq b}-\delta_{r\leq a} \right)\right) \zeta^{ \mathfrak{l}}(\overline{2r+1})\otimes \zeta^{\star ,\mathfrak{m}}(\boldsymbol{2}^{n-r}). $$ This leads to the constraint $(\ref{eq:constrainta})$ above for the coefficients $A$. To make these coefficients explicit, apart from the known analytic Ohno-Zagier formula, we need the analytic analogue of the $(v)$ identities, as stated in Conjecture $\ref{conjcoeff}$. \item[$(v)$] By the rules of Annexe $A$, the following cuts get simplified (by colours, above with below):\footnote{The vertical arrows indicate a cut from the $\star$ to a $\star$ of the same group.}\\ \includegraphics[]{dep5.pdf}\\ Indeed, cyan arrows get simplified by \textsc{Antipode} $\shuffle$, $T_{0,0}$ resp. $T_{0, \star}$ above with $T_{0,0}$ resp. $T_{\star,0}$ below; magenta ones by $\textsc{ Shift }$ $(\ref{eq:shift})$, the term above with the term below shifted by two to the left.
For $(4.28)$, the following cuts remain:\\ \includegraphics[]{dep6.pdf}\\ In a very similar way, the simplifications lead to the following remaining terms:\\ \includegraphics[]{dep7.pdf}\\ Then, the derivations reduce to: $$\hspace*{-0.7cm}D_{2r+1} (\zeta^{\star\star ,\mathfrak{m}}(\boldsymbol{2}^{a},3,\boldsymbol{2}^{b}))= \left( \left( \delta_{r\leq b}-\delta_{r \leq a}\right) \zeta^{\star\star ,\mathfrak{l}}_{1}(\boldsymbol{2}^{r}) +\delta_{r> b}\zeta^{\star\star, \mathfrak{l}}(\boldsymbol{2}^{r-b-1},3,\boldsymbol{2}^{b})\right) \otimes \zeta^{\star\star ,\mathfrak{m}}(\boldsymbol{2}^{n-r}) +$$ $$\hspace*{+1cm} +\delta_{r\leq a-1} \zeta^{\star\star ,\mathfrak{l}}_{1}(\boldsymbol{2}^{r}) \otimes \zeta^{\star\star ,\mathfrak{m}}_{1}(\boldsymbol{2}^{a-r-1},3,\boldsymbol{2}^{b})+ \delta_{r=a} \zeta^{\star\star ,\mathfrak{l}}_{1}(\boldsymbol{2}^{a})\otimes \zeta^{\star\star ,\mathfrak{m}}_{2}(\boldsymbol{2}^{b}) .$$ $$\hspace*{-1.4cm}D_{2r+1} (\zeta^{\star\star ,\mathfrak{m}}_{1}(\boldsymbol{2}^{a},3,\boldsymbol{2}^{b}))= \left( \left( \delta_{r\leq b}-\delta_{r \leq a}\right) \zeta^{\star\star ,\mathfrak{l}}_{1}(\boldsymbol{2}^{r}) + \zeta^{\star\star ,\mathfrak{l}}(\boldsymbol{2}^{r-b-1},3,\boldsymbol{2}^{b})\right)\otimes \zeta^{\star\star,\mathfrak{m}}_{1}(\boldsymbol{2}^{n-r}) +$$ $$ + \zeta^{\star\star ,\mathfrak{l}}_{1}(\boldsymbol{2}^{r})\otimes \zeta^{\star\star,\mathfrak{m}}(\boldsymbol{2}^{a-r},3,\boldsymbol{2}^{b}) .$$ \hspace*{-0.5cm}By recursion on $w$ for both: $$\hspace*{-1.4cm}\begin{array}{ll} D_{2r+1} (\zeta^{\star\star ,\mathfrak{m}}(\boldsymbol{2}^{a},3,\boldsymbol{2}^{b})) & = C_{r} \zeta^{ \mathfrak{l}}(\overline{2r+1})\otimes\\ & \left( \left( \delta_{r\leq b}-\delta_{r < a} + B_{r}^{r-b-1,b}\right) \zeta^{\star\star ,\mathfrak{m}}(\boldsymbol{2}^{n-r}) + \zeta^{\star\star ,\mathfrak{m}}_{1}(\boldsymbol{2}^{a-r-1},3,\boldsymbol{2}^{b})+ \delta_{r=a}\zeta^{\star ,\mathfrak{m}}(\boldsymbol{2}^{b+1}) \right) .\\ & \\ D_{2r+1}
(\zeta^{\star\star ,\mathfrak{m}}_{1}(\boldsymbol{2}^{a},3,\boldsymbol{2}^{b}))& = C_{r} \zeta^{ \mathfrak{l}}(\overline{2r+1})\otimes\left( \left( \delta_{r\leq b}-\delta_{r \leq a} + B_{r}^{r-b-1,b}\right) \zeta^{ \star\star ,\mathfrak{m}}_{1}(\boldsymbol{2}^{n-r}) + \zeta^{\star\star ,\mathfrak{m}}(\boldsymbol{2}^{a-r},3,\boldsymbol{2}^{b}) \right). \end{array}$$ This leads to the recursive formula $(\ref{eq:constraintb})$ for $B$. \end{itemize} \end{proof} \section{Motivic generalized Linebarger-Zhao conjecture} We conjecture the following motivic identities, which express each motivic MZV $\star$ as a motivic Euler $\sharp$ sum: \begin{conj}\label{lzg} For $a_{i},c_{i} \in \mathbb{N}^{\ast}$, $c_{i}\neq 2$, $$\zeta^{\star, \mathfrak{m}} \left( \boldsymbol{2}^{a_{0}},c_{1},\cdots,c_{p}, \boldsymbol{2}^{a_{p}}\right) =(-1)^{1+\delta_{c_{1}}}\zeta^{\sharp, \mathfrak{m}} \left(B_{0},\boldsymbol{1}^{c_{1}-3 },\cdots,\boldsymbol{1}^{ c_{i}-3 },B_{i}, \ldots, B_{p}\right), $$ where $\left\lbrace \begin{array}{l} B_{0}\mathrel{\mathop:}= \pm (2a_{0}+1-\delta_{c_{1}})\\ B_{i}\mathrel{\mathop:}= \pm(2a_{i}+3-\delta_{c_{i}}-\delta_{c_{i+1}})\\ B_{p}\mathrel{\mathop:}=\pm ( 2 a_{p}+2-\delta_{c_{p}}) \end{array}\right.$, with $\pm\mathrel{\mathop:}=\left\lbrace \begin{array}{l} - \text{ if } \mid B_{i}\mid \text{ even} \\ + \text{ if } \mid B_{i}\mid \text{ odd} \end{array} \right.$, $\begin{array}{l} \delta_{c}\mathrel{\mathop:}=\delta_{c=1},\\ \text{the Kronecker symbol}. \end{array}$ and $\boldsymbol{1}^{n}:=\boldsymbol{1}^{\max(0,n)}$ is a sequence of $n$ $1$'s if $n\in\mathbb{N}$, and the empty sequence otherwise. \end{conj} \textsc{Remarks}: \begin{itemize} \item[$\cdot$] Motivic Euler $\sharp$ sums appearing on the right side have already been proven to be unramified in $\S 4.3$, i.e. they are MMZV.
\item[$\cdot$] This conjecture implies that the motivic Hoffman $\star$ family is a basis, since it corresponds here to the motivic Euler $\sharp$ sum family proved to be a basis in Theorem $\ref{ESsharpbasis}$: cf. ($\ref{eq:LZhoffman}$). \item[$\cdot$] The number $n_{1}$ of sequences of consecutive $1$'s in $\zeta^{\star}$ is linked with the number $n_{e}$ of even arguments in $\zeta^{\sharp}$ by the following formula:\\ $$n_{e}=1+2n_{1}-2\delta_{c_{p}} -\delta_{c_{1}}.$$ In particular, when there is no $1$ in the MMZV $\star$, there is only one even argument (at the end) in the Euler $\sharp$ sum. There is always at least one even argument in these Euler sums. \end{itemize} Here are special cases of this conjecture which are already proven for real Euler sums (references indicated in brackets), but remain conjectural in the motivic case: \begin{description} \item[Two-One] [Ohno-Zudilin, $\cite{OZ}$] \begin{equation}\label{eq:OZ21} \zeta^{\star, \mathfrak{m}} (\boldsymbol{2}^{a_{0}},1,\cdots,1, \boldsymbol{2}^{a_{p}})= - \zeta^{\sharp, \mathfrak{m}} \left( \overline{2a_{0}}, 2a_{1}+1, \ldots, 2a_{p-1}+1, 2 a_{p}+1\right) . \end{equation} \item [Three-One] [Broadhurst et al., $\cite{BBB}$] \footnote{The Three-One formula was conjectured for real Euler sums by Zagier, and proved by Broadhurst et al. in $\cite{BBB}$.} \begin{equation}\label{eq:Z31} \zeta^{\star, \mathfrak{m}} (\boldsymbol{2}^{a_{0}},1,\boldsymbol{2}^{a_{1}},3 \cdots,1, \boldsymbol{2}^{a_{p-1}}, 3, \boldsymbol{2}^{a_{p}}) = -\zeta^{\sharp, \mathfrak{m}} \left( \overline{2a_{0}}, \overline{2a_{1}+2}, \ldots, \overline{2a_{p-1}+2}, \overline{2 a_{p}+2} \right) .
\end{equation} \item[Linebarger-Zhao$\star$] [Linebarger-Zhao, $\cite{LZ}$] With $c_{i}\geq 3$: \begin{equation}\label{eq:LZ} \zeta^{\star, \mathfrak{m}} \left( \boldsymbol{2}^{a_{0}},c_{1},\cdots,c_{p}, \boldsymbol{2}^{a_{p}}\right) = -\zeta^{\sharp, \mathfrak{m}} \left( 2a_{0}+1,\boldsymbol{1}^{ c_{1}-3 },\cdots,\boldsymbol{1}^{ c_{i}-3 },2a_{i}+3, \ldots, \overline{ 2 a_{p}+2} \right) . \end{equation} In particular, when all $c_{i}=3$: \begin{equation}\label{eq:LZhoffman} \zeta^{\star, \mathfrak{m}} \left( \boldsymbol{2}^{a_{0}},3,\cdots,3, \boldsymbol{2}^{a_{p}}\right) = - \zeta^{\sharp, \mathfrak{m}} \left( 2a_{0}+1, 2a_{1}+3, \ldots, 2a_{p-1}+3, \overline{2 a_{p}+2}\right) . \end{equation} \end{description} \texttt{Examples}: Here are particular identities implied by the previous conjecture, sometimes known for MZV, which could then be proven for motivic Euler sums directly with the coaction: \begin{itemize} \item[$\cdot$] $ \zeta^{\star, \mathfrak{m}}(1, \left\lbrace 2 \right\rbrace^{n} )=2 \zeta^{ \mathfrak{m}}(2n+1).$ \item[$\cdot$] $ \zeta^{\star, \mathfrak{m}}(1, \left\lbrace 2 \right\rbrace^{a}, 1, \left\lbrace 2 \right\rbrace^{b} )= \zeta^{\sharp, \mathfrak{m} }(2a+1,2b+1)= 4 \zeta^{ \mathfrak{m} }(2a+1,2b+1)+ 2 \zeta^{ \mathfrak{m} }(2a+2b+2). $ \item[$\cdot$] $ \zeta^{ \mathfrak{m}} (n)= - \zeta^{\sharp, \mathfrak{m}} (\lbrace 1\rbrace^{n-2}, -2)= -\sum_{ w(\boldsymbol{k})=n \atop \boldsymbol{k} \text{ admissible}} 2^{p} \zeta^{\mathfrak{m}}(k_{1}, \ldots, k_{p-1}, -k_{p}).$ \item[$\cdot$] $ \zeta^{ \star, \mathfrak{m}} (\lbrace 2 \rbrace ^{n})= \sum_{\boldsymbol{k} \in \lbrace \text{ even }\rbrace^{\times} \atop w(\boldsymbol{k})= 2n} \zeta^{\mathfrak{m}} (\boldsymbol{k})=- 2 \zeta^{\mathfrak{m}} (-2n) .$ \end{itemize} We pave the way for the proof of Conjecture $\ref{lzg}$ by reducing it to an identity in $\mathcal{L}$: \begin{theo} Let us assume: \begin{itemize} \item[$(i)$] The analytic version of Conjecture $\ref{lzg}$ is true.
\item[$(ii)$] In the coalgebra $\mathcal{L}$, i.e. modulo products, for odd weights: \begin{equation}\label{eq:conjid} \zeta^{\sharp, \mathfrak{l}} _{B_{0}-1}(\boldsymbol{1}^{ \gamma_{1}},\cdots, \boldsymbol{1}^{\gamma_{p} },B_{p})\equiv \zeta^{\star\star, \mathfrak{l}}_{2} (\boldsymbol{2}^{a_{0}-1},c_{1},\cdots,\boldsymbol{2}^{a_{p}})-\zeta^{\star\star, \mathfrak{l}}_{1} (\boldsymbol{2}^{a_{0}}, c_{1}-1, \ldots, \boldsymbol{2}^{a_{p}}) , \end{equation} \begin{flushright} with $c_{1}\geq 3$, $a_{0}>0$, $\gamma_{i}=c_{i}-3 + 2\delta_{c_{i}}$ and $\left\lbrace \begin{array}{l} B_{0}= 2a_{0}+1-\delta_{c_{1}}\\ B_{i}=2a_{i}+3-\delta_{c_{i}}-\delta_{c_{i+1}}\\ B_{p}=2a_{p}+3-\delta_{c_{p}} \end{array} \right. $. \end{flushright} \end{itemize} Then: \begin{enumerate}[I.] \item Conjecture $\ref{lzg}$ is true, for motivic Euler sums. \item In the coalgebra $\mathcal{L}$, for odd weights, with $c_{1}\geq 3$ and the previous notations: \begin{equation}\label{eq:toolid} \zeta^{\sharp, \mathfrak{l}} (\boldsymbol{1}^{ \gamma_{1}},\cdots, \boldsymbol{1}^{ \gamma_{p} },B_{p})\equiv - \zeta^{\star, \mathfrak{l}}_{1} (c_{1}-1, \boldsymbol{2}^{a_{1}},c_{2},\cdots,c_{p}, \boldsymbol{2}^{a_{p}}). \end{equation} \end{enumerate} \end{theo} \texttt{ADDENDUM:} The hypothesis $(i)$ is proved: J. Zhao deduced it from his Theorem 1.4 in $\cite{Zh3}$.\\ \\ \textsc{Remark:} The $(ii)$ hypothesis should be proven either directly via the various relations in $\mathcal{L}$ proven in $\S 4.2$ (as for $\ref{eq:toolid}$), or using the coaction, which would require the corresponding analytic identity. Beware: $(ii)$ would only be true in $\mathcal{L}^{2}$, not in $\mathcal{H}^{2}$. \begin{proof} To prove equality $I$ at the motivic level by recursion, we need to prove that the coactions of both sides are equal, and use the conjectured analytic version of the same equality. We prove $I$ and $II$ successively, in a single recursion on the weight: \begin{enumerate}[I.]
\item Using the formulas of the coactions $D_{r}$ for these families (Lemmas $A.1.2$ and $A.1.4$), we can gather terms on both sides according to the right-hand factor, which leads to three types: $$ \begin{array}{llll} (a) & \zeta^{\star, \mathfrak{m}} (\cdots,\boldsymbol{2}^{a_{i}}, \alpha, \boldsymbol{2}^{\beta}, c_{j+1}, \cdots) & \longleftrightarrow & \zeta^{\sharp ,\mathfrak{m}}(B_{0} \cdots, B_{i}, \textcolor{magenta}{1^{\gamma}, B}, 1^{\gamma_{j+1}}, \ldots, B_{p}) \\ (b) & \zeta^{\star, \mathfrak{m}} (\cdots,\boldsymbol{2}^{a_{i-1}}, c_{i}, \boldsymbol{2}^{\beta}, c_{j+1}, \cdots) & \longleftrightarrow & \zeta^{\sharp ,\mathfrak{m}}(B_{0} \cdots, B_{i-1}, 1^{\gamma_{i}}, \textcolor{green}{B}, 1^{\gamma_{j+1}}, \ldots, B_{p}) \\ (c) & \zeta^{\star, \mathfrak{m}} (\cdots, c_{i}, \boldsymbol{2}^{\beta}, \alpha, \boldsymbol{2}^{a_{j}}, \cdots) & \longleftrightarrow & \zeta^{\sharp, \mathfrak{m}}(B_{0} \cdots, 1^{\gamma_{i+1}},\textcolor{cyan}{ B, 1^{\gamma}}, B_{j+1}, \ldots, B_{p}) \end{array},$$ with $ \gamma=\alpha-3$ and $B=2\beta +3-\delta_{c_{j+1}}$, or $B=2\beta+3 - \delta_{c_{i}}- \delta_{c_{j+1}}$ for $(b)$.\\ The third case, being antisymmetric to the first, may be omitted below.
By recursive hypothesis, these right sides are equal, and it remains to compare the associated left sides: \begin{enumerate} \item On the one hand, by Lemma $A.1.2$, the corresponding left side is: $$ \delta_{3\leq \alpha \leq c_{i+1}-1 \atop 0\leq \beta \leq a_{j}} \zeta^{\star, \mathfrak{l}}_{c_{i+1}-\alpha} (\boldsymbol{2}^{ a_{j}-\beta}, \ldots, \boldsymbol{2}^{a_{i+1}}).$$ On the other hand (Lemma $A.1.4$), the left side is: $$-\delta_{2 \leq B \leq B_{j} \atop 0\leq\gamma\leq\gamma_{i+1}-1}\zeta^{\sharp,\mathfrak{l}}(B_{j}-B+1, 1^{\gamma_{j}}, \ldots, 1^{\gamma_{i+1}-\gamma-1}).$$ They are both equal by $\ref{eq:toolid}$, where $c_{i+1}-\alpha+2$ corresponds to $c_{1}$ and is $\geq 3$.\\ \item By Lemma $A.1.2$, the corresponding left side for $\zeta^{\star}$ is: $$\hspace*{-1.2cm}\begin{array}{llll} -& \delta_{c_{i}>3} \zeta^{\star\star, \mathfrak{l}}_{2} (\boldsymbol{2}^{a_{i}}, \ldots, \boldsymbol{2}^{ a_{j}-\beta-1}) & + & \delta_{c_{j}>3} \zeta^{\star\star, \mathfrak{l}}_{2} (\boldsymbol{2}^{a_{j}}, \ldots, \boldsymbol{2}^{ a_{i}-\beta-1}) \\ - & \delta_{c_{i}=1} \zeta^{\star\star, \mathfrak{l}} (\boldsymbol{2}^{a_{j}-\beta}, \ldots, \boldsymbol{2}^{ a_{i}}) & +& \delta_{c_{j+1}=1} \zeta^{\star\star, \mathfrak{l}} (\boldsymbol{2}^{a_{i}-\beta}, \ldots, \boldsymbol{2}^{ a_{j}})\\ + & \delta_{c_{i+1}=1 \atop \beta> a_{i}} \zeta^{\star\star, \mathfrak{l}}_{1} (\boldsymbol{2}^{a_{i}+a_{j}-\beta}, \ldots, \boldsymbol{2}^{ a_{i+1}}) & - & \delta_{c_{j}=1 \atop \beta> a_{j}} \zeta^{\star\star, \mathfrak{l}}_{1} (\boldsymbol{2}^{a_{i}+a_{j}-\beta}, \ldots, \boldsymbol{2}^{ a_{j-1}})\\ - & \delta_{a_{j}< \beta \leq a_{i}+ a_{j}+1} \zeta^{\star\star, \mathfrak{l}}_{c_{j}-2} (\boldsymbol{2}^{a_{j-1}}, \ldots, \boldsymbol{2}^{a_{i}+ a_{j}-\beta+1}) & + & \delta_{a_{i}< \beta \leq a_{j}+a_{i}+1} \zeta^{\star\star, \mathfrak{l}}_{c_{i+1}-2} (\boldsymbol{2}^{a_{i+1}}, \ldots, \boldsymbol{2}^{ a_{i}+ a_{j} -\beta+1}) .
\end{array}$$ It should correspond to (still using Lemma $A.1.4$), with $B_{k}=2a_{k}+3-\delta_{c_{k}}-\delta_{c_{k+1}}$, $\gamma_{k}=c_{k}-3+2\delta_{c_{k}}$ and $B=2\beta+3 - \delta_{c_{i}}- \delta_{c_{j+1}}$: $$\left( \delta_{B_{i}< B}\zeta^{\sharp\sharp,\mathfrak{l}}_{B_{i}+B_{j}-B}(1^{\gamma_{j}}, \ldots, 1^{\gamma_{i+1}}) - \delta_{B_{j}< B}\zeta^{\sharp\sharp,\mathfrak{l}}_{B_{i}+B_{j}-B}(1^{\gamma_{i+1}}, \ldots, 1^{\gamma_{j}}) \right. $$ $$\left. + \zeta^{\sharp\sharp,\mathfrak{l}}_{B_{i}-B}(1^{\gamma_{i+1}}, \ldots, B_{j}) - \zeta^{\sharp\sharp,\mathfrak{l}}_{B_{j}-B}(1^{\gamma_{j}}, \ldots, B_{i})\right) .$$ The first line has even depth, while the second line has odd depth, as noticed in Lemma $A.1.4$. Let us distinguish three cases, and assume $a_{i}<a_{j}$:\footnote{The case $a_{j}<a_{i}$ is anti-symmetric, hence analogous.} \begin{itemize} \item[$(i)$] When $\beta< a_{i}<a_{j}$, we should have: \begin{equation}\label{eq:ci} \zeta^{\sharp\sharp,\mathfrak{l}}_{B_{i}-B}(1^{\gamma_{i+1}}, \ldots, B_{j}) - \zeta^{\sharp\sharp,\mathfrak{l}}_{B_{j}-B}(1^{\gamma_{j}}, \ldots, B_{i}) \text{ equal to:} \end{equation} $$\begin{array}{llll} - \delta_{c_{i} >3} & \zeta^{\star\star, \mathfrak{l}}_{2} (\boldsymbol{2}^{a_{j}-\beta-1}, \ldots, \boldsymbol{2}^{ a_{i}}) & - \delta_{c_{i}=1} & \zeta^{\star\star, \mathfrak{l}} (\boldsymbol{2}^{a_{j}-\beta}, \ldots, \boldsymbol{2}^{ a_{i}}) \\ +\delta_{c_{j+1}>3} & \zeta^{\star\star, \mathfrak{l}}_{2} (\boldsymbol{2}^{a_{i}-\beta-1}, \ldots, \boldsymbol{2}^{ a_{j}}) & + \delta_{c_{j+1}=1} & \zeta^{\star\star, \mathfrak{l}} (\boldsymbol{2}^{a_{i}-\beta}, \ldots, \boldsymbol{2}^{ a_{j}}) \end{array}$$ \begin{itemize} \item[$\cdot$] Let us first look at the case where $c_{i}>3$, $c_{j+1}>3$.
Renumbering the indices, and using $\textsc{Shift}$ on the second line, which has odd depth, it is equivalent to the following, with $\alpha=\beta +1$, $B_{p}=2a_{p}+3, B_{0}=2a_{0}+3$: $$\begin{array}{llll} &\zeta^{\star\star, \mathfrak{l}}_{2} (\boldsymbol{2}^{a_{0}-\alpha},c_{1},\cdots,c_{p},\boldsymbol{2}^{a_{p}}) & -& \zeta^{\star\star, \mathfrak{l}}_{2} (\boldsymbol{2}^{a_{0}},c_{1},\cdots,c_{p},\boldsymbol{2}^{a_{p}-\alpha}) \\ \equiv & \zeta^{\sharp\sharp, \mathfrak{l}} _{B_{0}-B}(1^{\gamma_{1}},\cdots, 1^{\gamma_{p}},B_{p}) & - & \zeta^{\sharp\sharp, \mathfrak{l}} _{B_{p}-B}(B_{0}, 1^{\gamma_{1}},\cdots, 1^{\gamma_{p}})\\ \equiv & \zeta^{\sharp\sharp, \mathfrak{l}} _{B_{p}-1}(B_{0}-B+1,1^{\gamma_{1}},\cdots, 1^{\gamma_{p}}) & -& \zeta^{\sharp\sharp,\mathfrak{l}} _{B_{p}-B}(B_{0}, 1^{ \gamma_{1}},\cdots, 1^{\gamma_{p}})\\ \equiv & \zeta^{\sharp, \mathfrak{l}} _{B_{p}-1}(B_{0}-B+1,1^{\gamma_{1}},\cdots, 1^{\gamma_{p}}) & -& \zeta^{\sharp,\mathfrak{l}} _{B_{p}-B}(B_{0}, 1^{ \gamma_{1}},\cdots, 1^{\gamma_{p}}). \end{array}$$ This boils down to $(\ref{eq:conjid})$ applied to each $\zeta^{\star\star}_{2}$, since by \textsc{Shift} $(\ref{eq:shift})$ the two terms of the type $\zeta^{\star\star}_{1}$ cancel.\\ \item[$\cdot$] Let us now look at the case where $c_{i}=1$, $c_{j+1}>3$ \footnote{The case $c_{j+1}=1$, $c_{i}>3$ is analogous, by symmetry.}; hence $B_{i}=2a_{i}+2-\delta_{c_{i+1}}$, $B=2\beta+2$.
On the one hand, we have to consider: $$ \zeta^{\star\star, \mathfrak{l}}_{2} (\boldsymbol{2}^{a_{i}-\beta-1},c_{i+1},\cdots,c_{j},\boldsymbol{2}^{a_{j}}) - \zeta^{\star\star, \mathfrak{l}} (\boldsymbol{2}^{a_{j}-\beta},c_{j},\cdots,c_{i+1},\boldsymbol{2}^{a_{i}}).$$ By renumbering the indices in $(\ref{eq:ci})$, the correspondence boils down here to the following identity $(\boldsymbol{\diamond}) = (\boldsymbol{\Join})$, where $B_{0}=2a_{0}+3-\delta_{c_{1}}$, $B_{i}=2a_{i}+3-\delta_{c_{i}}-\delta_{c_{i+1}}$, $B=2\beta +2$: $$ (\boldsymbol{\diamond}) \quad \zeta^{\star\star, \mathfrak{l}}_{2} (\boldsymbol{2}^{a_{0}-\beta},c_{1},\cdots,c_{p},\boldsymbol{2}^{a_{p}}) - \zeta^{\star\star, \mathfrak{l}} (\boldsymbol{2}^{a_{0}+1},c_{1},\cdots,c_{p},\boldsymbol{2}^{a_{p}-\beta})$$ $$(\boldsymbol{\Join}) \quad \zeta^{\sharp\sharp,\mathfrak{l}}_{B_{0}-B+1}(1^{\gamma_{1}}, \ldots,1^{\gamma_{p}}, B_{p}) - \zeta^{\sharp\sharp,\mathfrak{l}}_{B_{p}-B}(1^{\gamma_{p}}, \ldots,1^{\gamma_{1}}, B_{0}+1).$$ Turning the second term of $(\boldsymbol{\diamond})$ into a $\zeta^{\star, \mathfrak{l}}(2, \cdots)+ \zeta^{\star\star, \mathfrak{l}}_{2} (\cdots)$, and applying the identity $(\ref{eq:conjid})$ to both terms $\zeta^{\star\star, \mathfrak{l}}_{2}(\cdots)$, leads to: $$\hspace*{-0.7cm}(\boldsymbol{\diamond}) \left\lbrace \begin{array}{lll} + \zeta^{\star\star, \mathfrak{l}}_{1} (\boldsymbol{2}^{a_{0}-\beta+1},c_{1}-1,\cdots,c_{p},\boldsymbol{2}^{a_{p}}) & - \zeta^{\star\star, \mathfrak{l}}_{1} (\boldsymbol{2}^{a_{0}+1},c_{1}-1,\cdots,c_{p},\boldsymbol{2}^{a_{p}-\beta})& \quad (\boldsymbol{\diamond_{1}}) \\ + \zeta^{\sharp, \mathfrak{l}}_{B_{0}-B+1} (\boldsymbol{1}^{\gamma_{1}},\cdots,\boldsymbol{1}^{\gamma_{p}},B_{p}) &- \zeta^{\sharp, \mathfrak{l}}_{B_{0}-1} (\boldsymbol{1}^{\gamma_{1}},\cdots,\boldsymbol{1}^{\gamma_{p}},B_{p}-B+2)& \quad(\boldsymbol{\diamond_{2}}) \\ - \zeta^{\star, \mathfrak{l}} (\boldsymbol{2}^{a_{0}+1},c_{1},\cdots,c_{p},\boldsymbol{2}^{a_{p}-\beta}) & & \quad
(\boldsymbol{\diamond_{3}}) \\ \end{array} \right. $$ The first line $(\boldsymbol{\diamond}_{1})$ is zero by $\textsc{Shift}$. We apply $\textsc{Antipode}$ $\ast$ to the terms of the second line, then turn each into a difference $\zeta^{\sharp\sharp}_{n}(m, \cdots)- \zeta^{\sharp\sharp}_{n+m}(\cdots)$; the terms of the type $\zeta^{\sharp\sharp}_{n+m}(\cdots)$ are identical and cancel: $$(\boldsymbol{\diamond_{2}}) \quad \begin{array}{lll} \equiv & \zeta^{\sharp\sharp, \mathfrak{l}}_{B_{0}-B+1} (B_{p},\boldsymbol{1}^{\gamma_{p}},\cdots,\boldsymbol{1}^{\gamma_{1}}) & - \zeta^{\sharp\sharp, \mathfrak{l}}_{B_{0}-B+1+ B_{p}} (\boldsymbol{1}^{\gamma_{p}},\cdots,\boldsymbol{1}^{\gamma_{1}}) \\ & -\zeta^{\sharp\sharp, \mathfrak{l}}_{B_{0}-1} (B_{p}-B+2,\boldsymbol{1}^{\gamma_{p}},\cdots,\boldsymbol{1}^{\gamma_{1}}) & + \zeta^{\sharp\sharp, \mathfrak{l}}_{B_{0}-B+1+ B_{p}} (\boldsymbol{1}^{\gamma_{p}},\cdots,\boldsymbol{1}^{\gamma_{1}}) \\ \equiv & \zeta^{\sharp\sharp, \mathfrak{l}}_{B_{0}-B+1} (B_{p},\boldsymbol{1}^{\gamma_{p}},\cdots,\boldsymbol{1}^{\gamma_{1}}) & - \zeta^{\sharp\sharp, \mathfrak{l}}_{B_{0}-1} (B_{p}-B+2,\boldsymbol{1}^{\gamma_{p}},\cdots,\boldsymbol{1}^{\gamma_{1}}). \end{array}$$ Furthermore, applying the recursion hypothesis (I.), i.e.
Conjecture $\ref{lzg}$, to $(\boldsymbol{\diamond}_{3})$, and turning it into a difference of $\zeta^{\sharp\sharp}$: $$(\boldsymbol{\diamond_{3}})\quad \begin{array}{ll} & - \zeta^{\star, \mathfrak{l}} (\boldsymbol{2}^{a_{0}+1},c_{1},\cdots,c_{p},\boldsymbol{2}^{a_{p}-\beta})\\ \equiv & - \zeta^{\sharp, \mathfrak{l}} (B_{p}-B+1,\boldsymbol{1}^{\gamma_{p}},\cdots,\boldsymbol{1}^{\gamma_{1}},B_{0})\\ \equiv & - \zeta^{\sharp\sharp, \mathfrak{l}} (B_{p}-B+1,\boldsymbol{1}^{\gamma_{p}},\cdots,\boldsymbol{1}^{\gamma_{1}},B_{0}) + \zeta^{\sharp\sharp, \mathfrak{l}}_{B_{p}-B+1} (\boldsymbol{1}^{\gamma_{p}},\cdots,\boldsymbol{1}^{\gamma_{1}},B_{0}) \end{array}$$ When adding $(\boldsymbol{\diamond_{2}})$ and $(\boldsymbol{\diamond_{3}})$ to get $(\boldsymbol{\diamond})$, the last two terms (of odd depth) cancel by $\textsc{Shift}$, and it remains: $$(\boldsymbol{\diamond}) \quad \zeta^{\sharp\sharp, \mathfrak{l}}_{B_{0}-B+1} (B_{p},\boldsymbol{1}^{\gamma_{p}},\cdots,\boldsymbol{1}^{\gamma_{1}}) - \zeta^{\sharp\sharp, \mathfrak{l}} (B_{p}-B+1,\boldsymbol{1}^{\gamma_{p}},\cdots,\boldsymbol{1}^{\gamma_{1}},B_{0}). $$ This, applying \textsc{Antipode} $\ast$ to the first term, $\textsc{Cut}$ and $\textsc{Shift}$ to the second, corresponds to $(\boldsymbol{\Join})$.\\ \end{itemize} \item[$(ii)$] When $\beta >a_{j}>a_{i}$, we should have: $$\begin{array}{lll} & - \zeta^{\star\star, \mathfrak{l}}_{c_{j}-2} (\boldsymbol{2}^{a_{j-1}}, \ldots, \boldsymbol{2}^{a_{i}+ a_{j}-\beta+1}) & + \zeta^{\star\star, \mathfrak{l}}_{c_{i+1}-2} (\boldsymbol{2}^{a_{i+1}}, \ldots, \boldsymbol{2}^{ a_{i}+ a_{j} -\beta+1})\\ \equiv & + \zeta^{\sharp\sharp,\mathfrak{l}}_{B_{i}+B_{j}-B}(1^{\gamma_{j}}, \ldots, 1^{\gamma_{i+1}}) & -\zeta^{\sharp\sharp,\mathfrak{l}}_{B_{i}+B_{j}-B}(1^{\gamma_{i+1}}, \ldots, 1^{\gamma_{j}}).
\end{array}$$ Using \textsc{Shift} $(\ref{eq:shift})$ for the first line, and renumbering the indices, it is equivalent to the following, with $c_{1},c_{p} \geq 3$ and $a_{0}>0$: \begin{equation} \label{eq:corresp3} \zeta^{\star\star, \mathfrak{l}}_{1} (\boldsymbol{2}^{a_{0}},c_{1}-1,\cdots,c_{p})- \zeta^{\star\star, \mathfrak{l}}_{1} (\boldsymbol{2}^{a_{0}},c_{1},\cdots,c_{p}-1) \end{equation} $$ \equiv \zeta^{\sharp\sharp, \mathfrak{l}} _{B_{0}+2}(1^{\gamma_{1}},\cdots, 1^{\gamma_{p}})-\zeta^{\sharp\sharp, \mathfrak{l}} _{B_{0}+2}(1^{\gamma_{p}},\cdots, 1^{\gamma_{1}}) \equiv \zeta^{\sharp, \mathfrak{l}} _{B_{0}+2}(1^{\gamma_{1}}, \ldots, 1^{\gamma_{p}}).$$ The last equality comes from Corollary $4.2.7$, since the depth is even. By $(\ref{eq:corresp})$ applied to each term of the first line $$\zeta^{\star\star, \mathfrak{l}}_{1} (\boldsymbol{2}^{a_{0}},c_{1}-1,\cdots,c_{p})- \zeta^{\star\star, \mathfrak{l}}_{1} (\boldsymbol{2}^{a_{0}},c_{1},\cdots,c_{p}-1)$$ $$\hspace*{-1.5cm} \equiv \zeta^{\star\star, \mathfrak{l}}_{2} (\boldsymbol{2}^{a_{0}-1},c_{1},\cdots,c_{p})+ \zeta^{\sharp, \mathfrak{l}} _{2a_{0}}(3,1^{\gamma_{p}},\cdots, 1^{\gamma_{1}}) - \zeta^{\star\star, \mathfrak{l}}_{2} (c_{p},\cdots,c_{1},\boldsymbol{2}^{a_{0}-1}) - \zeta^{\sharp\sharp, \mathfrak{l}}_{2}(2a_{0}+1,1^{\gamma_{p}},\cdots, 1^{\gamma_{1}}).$$ By \textsc{Antipode} $\shuffle$, the $\zeta^{\star\star}$ terms cancel, and by the definition of $\zeta^{\sharp\sharp}$, the previous expression equals: $$\equiv- \zeta^{\sharp, \mathfrak{l}} _{2a_{0}+3}(1^{\gamma_{p}},\cdots, 1^{\gamma_{1}}) + \zeta^{\sharp\sharp, \mathfrak{l}} _{2a_{0}}(3,1^{\gamma_{p}},\cdots, 1^{\gamma_{1}}) + \zeta^{\sharp\sharp, \mathfrak{l}} _{2a_{0}+4}(1^{\gamma_{p}-1},\cdots, 1^{\gamma_{1}}) $$ $$ - \zeta^{\sharp\sharp, \mathfrak{l}} _{2}(2a_{0}+1, 1^{\gamma_{p}},\cdots, 1^{\gamma_{1}}) + \zeta^{\sharp\sharp, \mathfrak{l}} _{2a_{0}+3}(1^{\gamma_{p}},\cdots, 1^{\gamma_{1}}).$$ Then, by \textsc{Shift} $(\ref{eq:shift})$, the second and fourth terms get
simplified, while the third and fifth terms are simplified by \textsc{Cut} $(\ref{eq:cut})$. It remains: $$- \zeta^{\sharp , \mathfrak{l}} _{2a_{0}+3}(1^{\gamma_{p}},\cdots, 1^{\gamma_{1}}), \quad \text{ which leads straight to } (\ref{eq:corresp3}).$$ \item[$(iii)$] When $a_{i}< \beta <a_{j}$, we should have: \begin{multline}\nonumber - \zeta^{\star\star, \mathfrak{l}}_{2} (\boldsymbol{2}^{a_{i}}, \ldots, \boldsymbol{2}^{a_{j}-\beta-1}) + \zeta^{\star\star, \mathfrak{l}}_{c_{i+1}-2} (\boldsymbol{2}^{a_{i+1}}, \ldots, \boldsymbol{2}^{a_{i}+ a_{j} -\beta+1}) \\ \equiv \zeta^{\sharp\sharp,\mathfrak{l}}_{B_{i}+B_{j}-B}(1^{\gamma_{j}}, \ldots, 1^{\gamma_{i+1}})-\zeta^{\sharp\sharp,\mathfrak{l}}_{B_{j}-B}(1^{\gamma_{j}}, \ldots, B_{i}). \end{multline} Using respectively \textsc{Antipode} and \textsc{Shift} $(\ref{eq:shift})$ for the first line, and reordering the indices, it is equivalent to the following, with $c_{1}\geq 3$ and here $B_{0}=2a_{0}+1-\delta_{c_{1}}$: \begin{equation} \label{eq:corresp} \zeta^{\star\star, \mathfrak{l}}_{2} (\boldsymbol{2}^{a_{0}-1},c_{1},\cdots,c_{p},\boldsymbol{2}^{a_{p}})- \zeta^{\star\star, \mathfrak{l}}_{1} (\boldsymbol{2}^{a_{0}},c_{1}-1,\cdots,c_{p},\boldsymbol{2}^{a_{p}}) \end{equation} $$\equiv \zeta^{\sharp\sharp, \mathfrak{l}} _{B_{p}-1}(B_{0}, 1^{\gamma_{1}},\cdots, 1^{\gamma_{p}}) - \zeta^{\sharp\sharp, \mathfrak{l}} _{B_{0}+B_{p}-1}(1^{ \gamma_{p}},\cdots, 1^{\gamma_{1}}) \equiv \zeta^{\sharp, \mathfrak{l}} _{B_{0}-1}(1^{\gamma_{1}},\cdots, 1^{\gamma_{p}},B_{p}).$$ This matches the identity $(\ref{eq:conjid})$; the last equality comes from $\textsc{Shift}$, since the depth is odd.
\end{itemize} \item Antisymmetric to the first case.\\ \end{enumerate} \item Let us denote the sequences $\textbf{X}=\boldsymbol{2}^{a_{1}}, \ldots , \boldsymbol{2}^{a_{p}}$ and $\textbf{Y}= \boldsymbol{1}^{\gamma_{1}-1}, B_{1},\cdots, \boldsymbol{1}^{\gamma_{p}} $.\\ We want to prove that: \begin{equation} \label{eq:1234567} \zeta^{\sharp,\mathfrak{l}} (1,\textbf{Y},B_{p})\equiv -\zeta^{\star,\mathfrak{l}}_{1} (c_{1}-1,\textbf{X}) \end{equation} The relations used are mostly those stated in $\S 4.2$. Using the definition of $\zeta^{\star\star}$: \begin{equation}\label{eq:12345} \begin{array}{ll} -\zeta^{\star,\mathfrak{l}}_{1} (c_{1}-1,\textbf{X}) & \equiv -\zeta^{\star\star,\mathfrak{l}}_{1} (c_{1}-1,\textbf{X})+ \zeta^{\star\star,\mathfrak{l}}_{c_{1}} (\textbf{X})\\ & \equiv - \zeta^{\star\star,\mathfrak{l}} (1,c_{1}-1,\textbf{X})+ \zeta^{\star,\mathfrak{l}} (1,c_{1}-1,\textbf{X})+ \zeta^{\star\star,\mathfrak{l}}(c_{1},\textbf{X})- \zeta^{\star,\mathfrak{l}} (c_{1},\textbf{X}) \\ & \equiv \zeta^{\star,\mathfrak{l}} (1,c_{1}-1,\textbf{X})- \zeta^{\star,\mathfrak{l}} (c_{1},\textbf{X})- \zeta^{\star,\mathfrak{l}}(c_{1}-1,\textbf{X},1).
\end{array} \end{equation} Here, after applying \textsc{Shift}, the first and third terms in the second line have given the last $\zeta^{\star}$ in the last line.\\ Then, using Conjecture $\ref{lzg}$, in terms of MMZV$^{\sharp}$, then MMZV$^{\sharp\sharp}$, it gives: \begin{multline} \zeta^{\sharp,\mathfrak{l}} (2,\textbf{Y},B_{p}-1)+ \zeta^{\sharp,\mathfrak{l}} (1,1,\textbf{Y},B_{p}-1)+ \zeta^{\sharp,\mathfrak{l}} (1,\textbf{Y},B_{p}-1,1)\\ \equiv \zeta^{\sharp\sharp,\mathfrak{l}} (2,\textbf{Y},B_{p}-1)- \zeta^{\sharp\sharp,\mathfrak{l}} _{2}(\textbf{Y},B_{p}-1)+ \zeta^{\sharp\sharp,\mathfrak{l}} (1,1,\textbf{Y},B_{p}-1)\\ -\zeta^{\sharp\sharp,\mathfrak{l}}_{1} (1,\textbf{Y},B_{p}-1)+ \zeta^{\sharp\sharp,\mathfrak{l}} (1,\textbf{Y},B_{p}-1,1)-\zeta^{\sharp\sharp,\mathfrak{l}}_{1} (\textbf{Y},B_{p}-1,1) \end{multline} The first term (odd depth)\footnote{Since the weight is odd, we also know the depth parity of these terms.} cancels with the last one by $\textsc{Shift}$. The fifth term (even depth) cancels with the fourth term by \textsc{Cut}. Hence two terms of even depth remain: $$\equiv - \zeta^{\sharp\sharp,\mathfrak{l}} _{2}(\textbf{Y},B_{p}-1)+ \zeta^{\sharp\sharp,\mathfrak{l}} (1,1,\textbf{Y},B_{p}-1) \equiv - \zeta^{\sharp\sharp,\mathfrak{l}} _{1}(\textbf{Y},B_{p})+ \zeta^{\sharp\sharp,\mathfrak{l}}_{B_{p}-1} (1,1,\textbf{Y}) , $$ where \textsc{Minus}, resp. \textsc{Cut}, has been applied. This matches $(\ref{eq:1234567})$ since, by $\textsc{Shift}$: $$\equiv - \zeta^{\sharp\sharp,\mathfrak{l}} _{1}(\textbf{Y},B_{p})+ \zeta^{\sharp\sharp,\mathfrak{l}} (1,\textbf{Y},B_{p})\equiv \zeta^{\sharp,\mathfrak{l}}(1,\textbf{Y},B_{p}).
$$ The case $c_{1}=3$ slightly differs, since $(\ref{eq:12345})$ gives, by the recursion hypothesis I. $(\ref{lzg})$: $$ -\zeta^{\star,\mathfrak{l}}_{1} (2,\textbf{X})\equiv \zeta^{\sharp,\mathfrak{l}} (B_{1}+1,\textbf{Y}',B_{p}-1)+ \zeta^{\sharp,\mathfrak{l}} (1,B_{1},\textbf{Y}',B_{p}-1)+ \zeta^{\sharp,\mathfrak{l}} (B_{1},\textbf{Y}',B_{p}-1,1),$$ where $\textbf{Y}'= \boldsymbol{1}^{\gamma_{2}},\cdots, \boldsymbol{1}^{\gamma_{p}} $, of odd depth. Turning into MES$^{\sharp\sharp}$, and using the identities of $\S 4.2$ in the same way as above, leads to the result. Indeed, from: $$\equiv \zeta^{\sharp\sharp,\mathfrak{l}} (B_{1}+1,\textbf{Y}',B_{p}-1)+ \zeta^{\sharp\sharp,\mathfrak{l}} (1,B_{1},\textbf{Y}',B_{p}-1)+ \zeta^{\sharp\sharp,\mathfrak{l}} (B_{1},\textbf{Y}',B_{p}-1,1)$$ $$-\zeta^{\sharp\sharp,\mathfrak{l}}_{B_{1}+1} (\textbf{Y}',B_{p}-1)- \zeta^{\sharp\sharp,\mathfrak{l}}_{1} (B_{1},\textbf{Y}',B_{p}-1)-\zeta^{\sharp\sharp,\mathfrak{l}}_{B_{1}} (\textbf{Y}',B_{p}-1,1)$$ The first and last terms cancel via $\textsc{Shift}$, while the third and fifth terms cancel by $\textsc{Cut}$; besides, we apply \textsc{Minus} to the second term and to the fourth term, which are both of even depth. This leads to $(\ref{eq:toolid})$, using again $\textsc{Shift}$ for the first term: $$\begin{array}{l} \equiv\zeta^{\sharp\sharp,\mathfrak{l}}_{B_{p}-1} (1,B_{1},\textbf{Y}')-\zeta^{\sharp\sharp,\mathfrak{l}}_{B_{1}} (\textbf{Y}',B_{p}) \\ \equiv \zeta^{\sharp\sharp,\mathfrak{l}} (B_{1},\textbf{Y}',B_{p})-\zeta^{\sharp\sharp,\mathfrak{l}}_{B_{1}} (\textbf{Y}',B_{p}) \\ \equiv \zeta^{\sharp,\mathfrak{l}} (B_{1},\textbf{Y}',B_{p}).
\end{array}$$ \end{enumerate} \end{proof} \section{Appendix $1$: From the linearized octagon relation} Here are the identities in the coalgebra $\mathcal{L}$ obtained from the linearized octagon relation $(\ref{eq:octagonlin})$: \begin{lemm}\label{lemmlor} In the coalgebra $\mathcal{L}$, $n_{i}\in\mathbb{Z}^{\ast}$:\footnote{Here, $\mlq + \mrq$ still denotes the operation where absolute values are summed and signs multiplied.} \begin{itemize} \item[$(i)$] $\zeta^{\star\star, \mathfrak{l}}(n_{0},\cdots, n_{p})= (-1)^{w+1} \zeta^{\star\star,\mathfrak{l}}(n_{p},\cdots, n_{0})$. \item[$(ii)$] $\zeta^{\mathfrak{l}}(n_{0},\cdots, n_{p})+(-1)^{w+p} \zeta^{\star\star,\mathfrak{l}}(n_{0},\cdots, n_{p})+(-1)^{p} \zeta^{\star\star,\mathfrak{l}}_{n_{p}}(n_{p-1},\cdots,n_{1},n_{0})=0$. \item[$(iii)$] $$\hspace*{-1cm}\zeta^{\mathfrak{l}}_{n_{0}-1}(n_{1},\cdots, n_{p})- \zeta^{\mathfrak{l}}_{n_{0}}(n_{1},\cdots,n_{p-1}, n_{p}\mlq + \mrq 1 )=(-1)^{w} \left[ \zeta^{\star\star,\mathfrak{l}}_{n_{0}-1}(n_{1},\cdots, n_{p})- \zeta^{\star\star,\mathfrak{l}}_{n_{0}}(n_{1},\cdots,n_{p-1}, n_{p}\mlq + \mrq 1)\right].$$ \end{itemize} \end{lemm} \begin{proof} The sign of $n_{i}$ is denoted $\epsilon_{i}$ as usual. First, we remark that, with $\eta_{i}=\pm 1$, $n_{i}=\epsilon_{i} (a_{i}+1)$, and $\epsilon_{i}= \eta_{i}\eta_{i+1}$: $$\hspace*{-0.5cm}\begin{array}{ll} \Phi^{\mathfrak{m}}(e_{\infty}, e_{-1},e_{1}) & = \sum I^{\mathfrak{m}} \left(0; (-\omega_{0})^{a_{0}} (-\omega_{-\eta_{1}\star}) (-\omega_{0})^{a_{1}} \cdots (-\omega_{-\eta_{p}\star}) (-\omega_{0})^{a_{p}} ;1 \right) e_{0}^{a_{0}}e_{\eta_{1}} e_{0}^{a_{1}} \cdots e_{\eta_{p}} e_{0}^{a_{p}}\\ & \\ & = \sum (-1)^{n+p}\zeta^{\star\star,\mathfrak{m}}_{n_{0}-1} \left( n_{1}, \cdots, n_{p-1}, -n_{p}\right) e_{0}^{a_{0}}e_{\eta_{1}} e_{0}^{a_{1}} \cdots e_{\eta_{p}} e_{0}^{a_{p}}.
\\ \end{array}$$ Similarly, with $ \mu_{i}\mathrel{\mathop:}= \left\lbrace \begin{array}{ll} \star & \texttt{if } \eta_{i}=1\\ 1 & \texttt{if } \eta_{i}=-1 \end{array} \right. $, applying the homography $\phi_{\tau\sigma}$ to get the second line: $$\hspace*{-0.5cm}\begin{array}{ll} \Phi^{\mathfrak{m}}(e_{-1}, e_{0},e_{\infty}) & = \sum I^{\mathfrak{m}} \left(0; (\omega_{1}-\omega_{-1})^{a_{0}} \omega_{\mu_{1}} (\omega_{1}-\omega_{-1})^{a_{1}} \cdots \omega_{\mu_{p}} (\omega_{1}-\omega_{-1})^{a_{p}} ;1 \right) e_{0}^{a_{0}}e_{\eta_{1}} e_{0}^{a_{1}} \cdots e_{\eta_{p}} e_{0}^{a_{p}}\\ & \\ \Phi^{\mathfrak{l}}(e_{-1}, e_{0},e_{\infty}) & = \sum (-1)^{p} I^{\mathfrak{m}} \left(0; 0^{a_{0}} \omega_{-\eta_{1}} 0^{a_{1}} \cdots \omega_{-\eta_{p}} 0^{a_{p}} ;1 \right) e_{0}^{a_{0}}e_{\eta_{1}} e_{0}^{a_{1}} \cdots e_{\eta_{p}} e_{0}^{a_{p}}\\ & \\ & = \sum \zeta^{\mathfrak{m}}_{n_{0}-1} \left( n_{1}, \cdots, n_{p-1}, -n_{p}\right) e_{0}^{a_{0}}e_{\eta_{1}} e_{0}^{a_{1}} \cdots e_{\eta_{p}} e_{0}^{a_{p}}. \\ \end{array}$$ Lastly, still using $\phi_{\tau\sigma}$, with here $\mu_{i}\mathrel{\mathop:}= \left\lbrace \begin{array}{ll} \star & \texttt{if } \eta_{i}=1\\ 1 & \texttt{if } \eta_{i}=-1 \end{array} \right. 
$: $$\hspace*{-0.5cm}\begin{array}{ll} \Phi^{\mathfrak{m}}(e_{1}, e_{\infty},e_{0}) & = \sum I^{\mathfrak{m}} \left(0; (\omega_{-1}-\omega_{1})^{a_{0}} \omega_{\mu_{1}} (\omega_{-1}-\omega_{1})^{a_{1}} \cdots \omega_{\mu_{p}} (\omega_{-1}-\omega_{1})^{a_{p}} ;1 \right) e_{0}^{a_{0}}e_{\eta_{1}} e_{0}^{a_{1}} \cdots e_{\eta_{p}} e_{0}^{a_{p}}\\ & \\ \Phi^{\mathfrak{l}}(e_{1}, e_{\infty},e_{0}) & = \sum (-1)^{w+1} I^{\mathfrak{m}} \left(0; 0^{a_{0}} \omega_{\eta_{1}\star} 0^{a_{1}} \cdots \omega_{\eta_{p}\star} 0^{a_{p}} ;1 \right) e_{0}^{a_{0}}e_{\eta_{1}} e_{0}^{a_{1}} \cdots e_{\eta_{p}} e_{0}^{a_{p}}\\ & \\ & = \sum (-1)^{n+p+1}\zeta^{\star\star,\mathfrak{m}}_{n_{0}-1} \left( n_{1}, \cdots, n_{p-1}, n_{p}\right) e_{0}^{a_{0}}e_{\eta_{1}} e_{0}^{a_{1}} \cdots e_{\eta_{p}} e_{0}^{a_{p}}. \\ \end{array}$$ \begin{itemize} \item[$(i)$] This case is the one used in Theorem $\ref{hybrid}$. This identity is equivalent to the following, in terms of iterated integrals, for $X$ any sequence of $\left\lbrace 0, \pm 1 \right\rbrace $ or of $\left\lbrace 0, \pm \star \right\rbrace $: $$\left\lbrace \begin{array}{llll} I^{\mathfrak{l}}(0;0^{k}, \star, X ;1) & = & I^{\mathfrak{l}}(0; X, \star, 0^{k}; 1) & \text{ if } \prod_{i=0}^{p} \epsilon_{i}=1 \Leftrightarrow \eta_{0}=1\\ I^{\mathfrak{l}}(0;0^{k}, -\star, X ;1) & = & I^{\mathfrak{l}}(0; -X, -\star, 0^{k}; 1) & \text{ if } \prod_{i=0}^{p} \epsilon_{i}=-1 \Leftrightarrow \eta_{0}=-1\\ \end{array} \right. $$ The first case is deduced from $(\ref{eq:octagonlin})$ when looking at the coefficient of a word beginning and ending with $e_{-1}$ (or beginning and ending with $e_{1}$), whereas the second case is obtained from the coefficient of a word beginning with $e_{-1}$ and ending with $e_{1}$, or beginning with $e_{1}$ and ending with $e_{-1}$.
\item[$(ii)$] Let us split into two cases, according to the sign of $\prod \epsilon_{i}$: \begin{itemize} \item[$\cdot$] In $(\ref{eq:octagonlin})$, when looking at the coefficient of a word beginning with $e_{1}$ and ending with $e_{0}$, only these three terms contribute: $$ \Phi^{\mathfrak{l}}(e_{-1}, e_{0},e_{\infty})e_{0}- \Phi^{\mathfrak{l}}(e_{\infty}, e_{-1},e_{1})e_{0}- e_{1} \Phi^{\mathfrak{l}}(e_{1}, e_{\infty},e_{0}) .$$ Moreover, the coefficient of $e_{1} e_{0}^{a_{0}} e_{\eta_{1}} \cdots e_{\eta_{p}} e_{0}^{a_{p}+1}$ is, using the expressions above for $\Phi^{\mathfrak{l}}(\cdot)$: \begin{multline}\nonumber (-1)^{p} I^{\mathfrak{l}}(0; -1, -X; 1)+ (-1)^{w+1} I^{\mathfrak{l}}(0; -\star, -X_{\star}; 1)+ (-1)^{w}I^{\mathfrak{l}}(0; X_{\star}, 0; 1)=0, \\ \text{where }\begin{array}{l} X:= \omega_{0}^{a_{0}} \omega_{\eta_{1}} \cdots \omega_{\eta_{p}} \omega_{0}^{a_{p}}\\ X_{\star}:= \omega_{0}^{a_{0}} \omega_{\eta_{1}\star} \cdots \omega_{\eta_{p}\star} \omega_{0}^{a_{p}} \end{array}. \end{multline} In terms of motivic Euler sums, it is, with $\prod \epsilon_{i}=1$: $$ \zeta^{\mathfrak{l}} (n_{0},\cdots, -n_{p}) +(-1)^{w+p} \zeta^{\star\star,\mathfrak{l}}(n_{0},\cdots, -n_{p})+(-1)^{w+p} \zeta^{\star\star,\mathfrak{l}}_{n_{0}-1}(n_{1},\cdots,n_{p-1}, n_{p}\mlq + \mrq 1)=0.$$ Changing $n_{p}$ into $-n_{p}$, and applying \textsc{Antipode} $\shuffle$ to the last term, it gives, with now $\prod \epsilon_{i}=-1$: $$ \zeta^{\mathfrak{l}} (n_{0},\cdots, n_{p}) +(-1)^{w+p} \zeta^{\star\star,\mathfrak{l}}(n_{0},\cdots, n_{p})+(-1)^{p} \zeta^{\star\star,\mathfrak{l}}_{n_{p}}(n_{p-1},\cdots,n_{1},n_{0})=0.$$ \item[$\cdot$] Similarly, for the coefficient of a word beginning with $e_{-1}$ and ending with $e_{0}$, only these three terms contribute: $$ \Phi^{\mathfrak{l}}(e_{-1}, e_{0},e_{\infty})e_{0}- \Phi^{\mathfrak{l}}(e_{\infty}, e_{-1},e_{1})e_{0}+ e_{-1} \Phi^{\mathfrak{l}}(e_{\infty}, e_{-1},e_{1}) .$$ Similarly as above, it leads to the identity, with $\prod \epsilon_{i}=-1$: $$
\zeta^{\mathfrak{l}} (n_{0},\cdots, -n_{p}) +(-1)^{w+p} \zeta^{\star\star,\mathfrak{l}}(n_{0},\cdots, -n_{p})+(-1)^{w+p} \zeta^{\star\star,\mathfrak{l}}_{n_{0}-1}(n_{1},\cdots,n_{p-1},-(n_{p}\mlq + \mrq 1))=0.$$ Changing $n_{p}$ into $-n_{p}$, and applying \textsc{Antipode} $\shuffle$ to the last term, it gives, with now $\prod \epsilon_{i}=1$: $$ \zeta^{\mathfrak{l}} (n_{0},\cdots, n_{p}) +(-1)^{w+p+1} \zeta^{\star\star,\mathfrak{l}}(n_{0},\cdots, n_{p})+(-1)^{p} \zeta^{\star\star,\mathfrak{l}}_{n_{p}}(n_{p-1},\cdots,n_{1},n_{0})=0.$$ \end{itemize} \item[$(iii)$] When looking at the coefficient of a word beginning with $e_{0}$ and ending with $e_{0}$ in $(\ref{eq:octagonlin})$, only these four terms contribute: $$ -e_{0} \Phi^{\mathfrak{l}}(e_{-1}, e_{0},e_{\infty})+ \Phi^{\mathfrak{l}}(e_{-1}, e_{0},e_{\infty})e_{0}+ e_{0} \Phi^{\mathfrak{l}}(e_{\infty}, e_{-1},e_{1})- \Phi^{\mathfrak{l}}(e_{\infty}, e_{-1},e_{1})e_{0}.$$ If we identify the coefficient of the word $ e_{0}^{a_{0}+1} e_{-\eta_{1}} \cdots e_{-\eta_{p}} e_{0}^{a_{p}+1}$, it leads straight to the identity $(iii)$. \end{itemize} \textsc{Remark}: Looking at the coefficient of words beginning with $e_{0}$ and ending with $e_{1}$ or $e_{-1}$ in $(\ref{eq:octagonlin})$ would lead to the same identity as the second case. \end{proof} \section{Appendix $2$: Missing coefficients} In Lemma $\ref{lemmcoeff}$, the coefficients $D^{a,b}$ appearing (in $(v)$) are the only ones which are not conjectured. Although these values are not required for the proof of Theorem $4.4.1$, we provide here a table of values in small weights. Let us examine the coefficient corresponding to $\zeta^{\star}(\boldsymbol{2}^{n})$ instead of $\zeta^{\star}(2)^{n}$, which is (by $(i)$ in Lemma $\ref{lemmcoeff}$), with $n=a+b+1$: \begin{equation} \widetilde{D}^{a,b}\mathrel{\mathop:}= \frac{(2n)!}{6^{n}\mid B_{2n}\mid (2^{2n}-2)} D^{a,b} \quad \text{ and } \quad \widetilde{D}_{n}\mathrel{\mathop:}= \frac{(2n)!}{6^{n}\mid B_{2n}\mid (2^{2n}-2)}D_{n} .
\end{equation} We have an expression $(\ref{eq:coeffds})$ for $D_{n}$, albeit not a very elegant one, which gives: \begin{equation} \label{eq:coeffdstilde} \widetilde{D}_{n}= \frac{2^{2n} (2n)!}{(2^{2n}-2)\mid B_{2n}\mid }\sum_{\sum m_{i} s_{i}=n \atop m_{i}\neq m_{j}} \prod_{i=1}^{k} \left( \frac{1}{s_{i}!} \left( \frac{\mid B_{2m_{i}}\mid (2^{2m_{i}-1}-1) } {2m_{i} (2m_{i})!}\right)^{s_{i}} \right). \end{equation} Here is a table of values for $\widetilde{D}_{n}$ and $\widetilde{D}^{k,n-k-1}$ in small weights:\\ \\ \begin{tabular}{| l || c | c |c | c |} \hline $\cdot \quad \quad \diagdown n$ & $2$ & $3$ & $4$ & $5$ \\ \hline $\widetilde{D}_{n}$ & $\frac{19}{2^{3}-1}$ & $\frac{275}{2^{5}-1}$& $\frac{11813}{3(2^{7}-1)}$ & $\frac{783}{7}$\\ & & & & \\ \hline $\widetilde{D}_{k,n-1-k}$ &$\frac{-12}{7}$ & $\frac{-84}{31}, \frac{-160}{31}$& $\frac{1064}{127}, \frac{-1680}{127}, \frac{-9584}{381}$ & $\frac{189624}{2555}$,$\frac{-137104}{2555}$,$\frac{-49488}{511}$,$\frac{-17664}{511}$ \\ & & & &\\ \hline \hline $\cdot \quad \quad \diagdown n$ & $6$ & $7$ & $8$ & $9$ \\ \hline $\widetilde{D}_{n} \quad \quad $ & $\frac{581444793}{691(2^{11}-1)}$& $\frac{263101079}{21(2^{13}-1)}$& $\frac{6807311830555}{3617(2^{15}-1)}$& $\frac{124889801445461}{43867(2^{17}-1)}$\\ & & & & \\ \hline \end{tabular} \\ \\ \\ The denominators of $\widetilde{D}_{n},\widetilde{D}_{k,n-1-k}$ can be written as $(2^{2n-1}-1)$ times the numerator of the Bernoulli number $B_{2n}$. No formula has yet been found for their numerators, which should involve binomial coefficients.
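The table above can be cross-checked in exact rational arithmetic. The following Python sketch (all function names are ours; Bernoulli numbers are taken in the convention $B_{2}=\frac{1}{6}$, $B_{4}=-\frac{1}{30}$) recomputes $\widetilde{D}_{n}$ from $(\ref{eq:coeffdstilde})$ for $n\leq 5$, by summing over partitions of $n$ grouped by distinct part values $m_{i}$ with multiplicities $s_{i}$, and also verifies the relation $1-\widetilde{D}_{n}=\sum_{k}\widetilde{D}_{k,n-1-k}$ stated below against the table rows for $n=2,4,5$.

```python
# Exact-arithmetic sanity check of formula (eq:coeffdstilde) and of the
# relation 1 - D~_n = sum_k D~_{k,n-1-k}, against the table values.
from fractions import Fraction
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def bernoulli(m):
    # B_0 = 1; for m >= 1: B_m = -1/(m+1) * sum_{k<m} C(m+1, k) B_k
    if m == 0:
        return Fraction(1)
    return Fraction(-1, m + 1) * sum(comb(m + 1, k) * bernoulli(k) for k in range(m))

def partitions(n, max_part=None):
    # all partitions of n into parts <= max_part, as lists of parts
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for p in range(min(n, max_part), 0, -1):
        for rest in partitions(n - p, p):
            yield [p] + rest

def D_tilde(n):
    # sum over {sum m_i s_i = n, m_i distinct}: each partition of n,
    # grouped by distinct part value m with multiplicity s
    total = Fraction(0)
    for part in partitions(n):
        term = Fraction(1)
        for m in set(part):
            s = part.count(m)
            f = abs(bernoulli(2 * m)) * (2 ** (2 * m - 1) - 1) / Fraction(2 * m * factorial(2 * m))
            term *= f ** s * Fraction(1, factorial(s))
        total += term
    return Fraction(2 ** (2 * n) * factorial(2 * n), 2 ** (2 * n) - 2) / abs(bernoulli(2 * n)) * total

# values of the first table row
assert D_tilde(2) == Fraction(19, 2**3 - 1)
assert D_tilde(3) == Fraction(275, 2**5 - 1)
assert D_tilde(4) == Fraction(11813, 3 * (2**7 - 1))
assert D_tilde(5) == Fraction(783, 7)

# relation 1 - D~_n = sum_k D~_{k,n-1-k}, checked against the table rows
table = {2: [Fraction(-12, 7)],
         4: [Fraction(1064, 127), Fraction(-1680, 127), Fraction(-9584, 381)],
         5: [Fraction(189624, 2555), Fraction(-137104, 2555),
             Fraction(-49488, 511), Fraction(-17664, 511)]}
for n, row in table.items():
    assert 1 - D_tilde(n) == sum(row)
```

The check uses only exact fractions, so any mismatch with the table would be detected unambiguously; it also confirms that the denominators for $n\leq 5$ are as displayed.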
These coefficients are related since, by shuffle: $$\begin{array}{lll} & \zeta^{\star\star, \mathfrak{m}}_{2} (\boldsymbol{2}^{n})+ \sum_{k=0}^{n-1}\zeta^{\star\star, \mathfrak{m}}_{1} (\boldsymbol{2}^{k},3,\boldsymbol{2}^{n-k-1}) & =0\\ & \zeta^{\star\star, \mathfrak{m}}(\boldsymbol{2}^{n+1})-\zeta^{\star, \mathfrak{m}}(\boldsymbol{2}^{n+1}) + \sum_{k=0}^{n-1}\zeta^{\star\star, \mathfrak{m}}_{1} (\boldsymbol{2}^{k},3,\boldsymbol{2}^{n-k-1}) & =0. \end{array}$$ Identifying the coefficients of $\zeta^{\star}(\boldsymbol{2}^{n})$ in formulas $(iii),(v)$ in Lemma $\ref{lemmcoeff}$ leads to: \begin{equation}\label{eq:coeffdrel} 1-\widetilde{D_{n}}= \sum_{k=0}^{n-1} \widetilde{D}_{k,n-1-k}. \end{equation} \chapter{Galois Descents} \paragraph{\texttt{Contents}: } The first section gives the general picture (for any $N$), sketching the Galois descent ideas. The second section focuses on the cases $N=2,3,4,\mlq 6\mrq, 8$, defines the filtrations by the motivic level associated to each descent, and displays both results and proofs. Some examples in small depth are given in the Annexe $\S A.2$.\\ \\ \texttt{Notations}: For a fixed $N$, let $k_{N}\mathrel{\mathop:}=\mathbb{Q}(\xi_{N})$, where $\xi_{N}\in\mu_{N}$ is a primitive $N^{\text{th}}$ root of unity, and $\mathcal{O}_{N}$ is the ring of integers of $k_{N}$. The subscript or exponent $N$ will be omitted when it is not ambiguous. For the general case, the decomposition of $N$ is denoted $N=\prod q_{i}= \prod p_{i}^{\alpha_{i}}$.\\ \section{Overview} \paragraph{Change of field. } As said in Chapter $3$, for each $N, N'$ with $N' | N$, the Galois action on $\mathcal{H}_{N}$ and $\mathcal{H}_{N'}$ is determined by the coaction $\Delta$.
More precisely, let us consider the following descent\footnote{More generally, there are Galois descents $(\mathcal{d})=(k_{N}/k_{N'}, M/M')$ from $\mathcal{H}^{\mathcal{MT}\left( \mathcal{O}_{k_{N}} \left[ \frac{1}{M}\right] \right) }$, to $\mathcal{H}^{\mathcal{MT}\left( \mathcal{O}_{k_{N'}} \left[ \frac{1}{M'}\right]\right) }$, with $N'\mid N$, $M'\mid M$, with a set of derivations $\mathscr{D}^{\mathcal{d}} \subset \mathscr{D}^{N}$ associated.}, \texttt{assuming $\phi_{N'}$ is an isomorphism of graded Hopf comodules}: \footnote{Conjecturally as soon as $N'\neq p^{r}$, $p\geq 5$. Proven for $N'=1,2,3,4,\mlq 6\mrq,8$.} $$\xymatrixcolsep{5pc}\xymatrix{ \mathcal{H}^{N} \ar@{^{(}->}[r] ^{\phi_{N}}_{n.c} & \mathcal{H}^{\mathcal{MT}_{\Gamma_{N}}} \\ \mathcal{H}^{N'}\ar[u]_{\mathcal{G}^{N/N'}} \ar@{^{(}->}[r] _{n.c}^{\phi_{N'}\atop \sim} &\mathcal{H}^{\mathcal{MT}_{\Gamma_{N'}}} \ar[u]^{\mathcal{G}^{\mathcal{MT}}_{N/N'}} }$$ Let us choose a basis for $gr_{1}\mathcal{L}_{r}^{\mathcal{MT}_{N'}}$, and extend it into a basis of $gr_{1}\mathcal{L}_{r}^{\mathcal{MT}_{N}}$: $$ \left\lbrace \zeta^{\mathfrak{m}}(r; \eta'_{i,r}) \right\rbrace_{i} \subset \left\lbrace \zeta^{\mathfrak{m}}(r; \eta'_{i,r}) \right\rbrace \cup \left\lbrace \zeta^{\mathfrak{m}}(r; \eta_{i}) \right\rbrace_{1\leq i \leq c_{r}}, $$ $$\text{where } \quad c_{r} =\left\{ \begin{array}{ll} a_{N}-a_{N'}= \frac{\varphi(N)-\varphi(N')}{2}+p(N)-p(N') & \text{ if } r=1\\ b_{N}-b_{N'}= \frac{\varphi(N)-\varphi(N')}{2} & \text{ if } r>1\\ \end{array} \right. .$$ Then, once this basis is fixed, let us split the set of derivations $\mathscr{D}^{N}$ into two parts (cf.
$\S 2.4.4$), one corresponding to $\mathcal{H}^{N'}$:\nomenclature{$\mathscr{D}^{\backslash \mathcal{d}}$ and $\mathscr{D}^{\mathcal{d}} $}{sets of derivations associated to a descent $\mathcal{d}$} \begin{equation} \label{eq:derivdescent} \mathscr{D}^{N} = \mathscr{D}^{\backslash \mathcal{d}} \uplus \mathscr{D}^{\mathcal{d}} \quad \text{ where }\quad \left\lbrace \begin{array}{l} \mathscr{D}^{\backslash \mathcal{d}} =\mathscr{D}^{N'}\mathrel{\mathop:}= \cup_{r} \left\lbrace D_{r}^{\eta'_{i,r}} \right\rbrace_{1\leq i \leq c_{r}} \\ \mathscr{D}^{\mathcal{d}}\mathrel{\mathop:}= \cup_{r} \left\lbrace D^{\eta_{i,r}}_{r} \right\rbrace_{1\leq i \leq c_{r}} \end{array} \right. . \end{equation} \texttt{Examples:} \begin{itemize} \item[$\cdot$] For the descent from $\mathcal{MT}_{3}$ to $\mathcal{MT}_{1}$: $\mathscr{D}^{(k_{3}/\mathbb{Q}, 3/1)}=\left\lbrace D^{\xi_{3}}_{1}, D^{\xi_{3}}_{2r}, r>0 \right\rbrace $. \item[$\cdot$] For the descent from $\mathcal{MT}_{8}$ to $\mathcal{MT}_{4}$: $\mathscr{D}^{(k_{8}/k_{4}, 2/2)}=\left\lbrace D^{\xi_{8}}_{r}-D^{-\xi_{8}}_{r}, r>0 \right\rbrace $. \item[$\cdot$] For the descent from $\mathcal{MT}_{9}$ to $\mathcal{MT}_{3}$: $\mathscr{D}^{(k_{9}/k_{3}, 3/3)}=\left\lbrace D^{\xi_{9}}_{r}-D^{-\xi^{4}_{9}}_{r}, D^{\xi_{9}}_{r}-D^{-\xi^{7}_{9}}_{r}, r>0 \right\rbrace $.\footnote{By the relations in depth $1$, since: $$\zeta^{\mathfrak{a}} \left( r\atop \xi^{3}_{9}\right)= 3^{r-1} \left( \zeta^{\mathfrak{a}} \left( r\atop \xi^{1}_{9}\right) + \zeta^{\mathfrak{a}} \left( r\atop \xi^{4}_{9}\right)+ \zeta^{\mathfrak{a}} \left( r\atop \xi^{7}_{9}\right) \right) \quad \text{etc.}$$} \end{itemize} \begin{theo} Let $N'\mid N$ such that $\mathcal{H}^{N'}\cong \mathcal{H}^{\mathcal{MT}_{\Gamma_{N'}}}$.\\ Let $\mathfrak{Z}\in gr^{\mathfrak{D}}_{p}\mathcal{H}_{n}^{N}$ be a depth graded MMZV relative to $\mu_{N}$.\\ Then $\mathfrak{Z}\in gr^{\mathfrak{D}}_{p}\mathcal{H}^{N'}$, i.e.
$\mathfrak{Z}$ is a depth graded MMZV relative to $\mu_{N'}$, modulo smaller depth, if and only if: $$ \left( \forall r<n, \forall D_{r,p}\in\mathscr{D}_{r}^{\mathcal{d}},\quad D_{r,p}(\mathfrak{Z})=0\right) \quad \textrm{ and } \quad \left( \forall r<n, \forall D_{r,p} \in\mathscr{D}^{\diagdown\mathcal{d}}, \quad D_{r,p}(\mathfrak{Z})\in gr^{\mathfrak{D}}_{p-1}\mathcal{H}^{N'}\right) .$$ \end{theo} \begin{proof} On the $(f_{i})$ side, the analogue of this theorem is straightforward, and the result can be transported via $\phi$, and back, since $\phi_{N'}$ is an isomorphism by assumption. \end{proof} This is a very useful recursive criterion (the derivations strictly decrease weight and depth) to determine whether a (motivic) multiple zeta value at $\mu_{N}$ is in fact a (motivic) multiple zeta value at $\mu_{N'}$, modulo terms of smaller depth; applied recursively, it can also take care of the smaller depth terms. This criterion applies to motivic MZV$_{\mu_{N}}$, and, by the period morphism, is deduced for MZV$_{\mu_{N}}$.\\ \paragraph{Change of Ramification.} If the descent has only a ramified part, the criterion can be stated in a non-depth-graded version. Indeed, since only weight $1$ matters there, to define the derivation set $\mathscr{D}^{\mathcal{d}}$ as above $(\ref{eq:derivdescent})$, we need to choose a basis for $\mathcal{O}_{N}^{\ast}\otimes \mathbb{Q}$, which we complete with $\left\lbrace \xi^{\frac{N}{q_{i}}}_{N}\right\rbrace_{i\in I}$ into a basis for $\Gamma_{N}$.
Then, with $N=\prod p_{i}^{\alpha_{i}}=\prod q_{i}$: \begin{theo}\label{ramificationchange} Let $\mathfrak{Z}\in \mathcal{H}_{n}^{N}\subset \mathcal{H}^{\mathcal{MT}_{\Gamma_{N}}}$ be an MMZV relative to $\mu_{N}$.\\ Then $\mathfrak{Z}$ is unramified, i.e. $\mathfrak{Z}\in \mathcal{H}^{\mathcal{MT}(\mathcal{O}_{N})}$, if and only if: $$ \left( \forall i\in I, D^{\xi^{\frac{N}{q_{i}}}}_{1}(\mathfrak{Z})=0\right) \quad \textrm{ and } \quad \left( \forall r<n, \forall D_{r}\in\mathscr{D}^{\diagdown\mathcal{d}}, \quad D_{r}(\mathfrak{Z})\in \mathcal{H}^{\mathcal{MT}(\mathcal{O}_{N})}\right) .$$ \end{theo} \texttt{Nota Bene}: Intermediate descents and changes of ramification, keeping only some of the weight $1$ elements $\left\lbrace \xi^{\frac{N}{q_{i}}}_{N}\right\rbrace$, could also be stated.\\ \\ \texttt{Examples}: \begin{description} \item[$\boldsymbol{N=2}$:] As claimed in the introduction, the descent between $\mathcal{H}^{2}$ and $\mathcal{H}^{1}$ is precisely measured by $D_{1}$:\footnote{$\mathscr{D}^{(\mathbb{Q}/\mathbb{Q}, 2/1)}=\left\lbrace D^{-1}_{1} \right\rbrace $ with the above notations; and $D^{-1}_{1}$ is here simply denoted $D_{1}$ .} \begin{coro}\label{criterehonoraire} Let $\mathfrak{Z}\in\mathcal{H}^{2}=\mathcal{H}^{\mathcal{MT}_{2}}$ be a motivic Euler sum.\\ Then $\mathfrak{Z}\in\mathcal{H}^{1}=\mathcal{H}^{\mathcal{MT}_{1}}$, i.e.
$\mathfrak{Z}$ is a motivic multiple zeta value if and only if: $$D_{1}(\mathfrak{Z})=0 \quad \textrm{ and } \quad \forall r>0, \quad D_{2r+1}(\mathfrak{Z})\in\mathcal{H}^{1}.$$ \end{coro} \item[$\boldsymbol{N=3,4,6}$:] \begin{coro}\label{ramif346} Let $N\in \lbrace 3,4,6\rbrace$ and let $\mathfrak{Z}\in\mathcal{H}^{\mathcal{MT}(\mathcal{O}_{N} \left[ \frac{1}{N}\right] )}$ be a motivic MZV$_{\mu_{N}}$.\\ Then $\mathfrak{Z}$ is unramified, $\mathfrak{Z}\in\mathcal{H}^{\mathcal{MT} (\mathcal{O}_{N})}$, if and only if: $$D_{1}(\mathfrak{Z})=0 \quad \textrm{ and } \quad \forall r>1, \quad D_{r}(\mathfrak{Z})\in\mathcal{H}^{\mathcal{MT} (\mathcal{O}_{N})}.$$ \end{coro} \item[$\boldsymbol{N=p^{r}}$:] A basis for $\mathcal{O}_{N}^{\ast}\otimes \mathbb{Q}$ is formed by: $\left\lbrace \frac{1-\xi^{k}}{1-\xi} \right\rbrace_{k\wedge p=1 \atop 0<k\leq\frac{N}{2}} $, which corresponds to $$\text{ a basis of } \mathcal{A}_{1}^{\mathcal{MT}(\mathcal{O}_{N})} \quad : \left\lbrace \zeta^{\mathfrak{m}}\left( 1 \atop \xi^{k} \right)- \zeta^{\mathfrak{m}}\left( 1 \atop \xi \right) \right\rbrace _{ k\wedge p=1 \atop 0<k\leq\frac{N}{2}} .$$ It can be completed into a basis of $\mathcal{A}_{1}^{N}$ with $\zeta^{\mathfrak{m}}\left( 1 \atop \xi^{1} \right)$. \footnote{With the notations of the previous theorem, $\mathcal{D}^{\mathcal{d}}=\lbrace D^{\xi}_{1}\rbrace$ whereas $\mathcal{D}^{\diagdown \mathcal{d}}= \lbrace D^{\xi^{k}}_{1}-D^{\xi}_{1} \rbrace_{k\wedge p=1 \atop 1<k\leq\frac{N}{2} } \cup_{r>1} \lbrace D^{\xi^{k}}_{r}\rbrace_{k\wedge p=1 \atop 0<k\leq\frac{N}{2}}$; where $D^{\xi}_{1}$ has to be understood as the projection of the left side onto $\zeta^{\mathfrak{a}}\left( 1 \atop \xi \right)$ with respect to the basis above of $\mathcal{H}_{1}^{\mathcal{MT}(\mathcal{O}_{N})}$ together with $\zeta^{\mathfrak{a}}\left( 1 \atop \xi \right)$.
This leads to a criterion equivalent to $(\ref{ramifpr})$.} However, if we consider the basis of $\mathcal{A}_{1}^{N}$ formed by primitive roots of unity up to conjugates, the criterion for the descent could also be stated as follows: \begin{coro}\label{ramifpr} Let $N=p^{r}$ and $\mathfrak{Z}\in\mathcal{H}^{\mathcal{MT}_{\Gamma_{N}}}=\mathcal{H}^{\mathcal{MT}(\mathcal{O}_{N} \left[ \frac{1}{p}\right] )}$, relative to $\mu_{N}$\footnote{For instance an MMZV relative to $\mu_{N}$. Beware, for $p>5$, there could be other periods.}.\\ Then $\mathfrak{Z}$ is unramified, $\mathfrak{Z}\in\mathcal{H}^{\mathcal{MT} (\mathcal{O}_{N})}$, if and only if: $$\sum_{k\wedge p=1 \atop 0<k\leq\frac{N}{2}} D^{\xi^{k}_{N}}_{1}(\mathfrak{Z})=0 \quad \textrm{ and } \quad \forall \left\lbrace \begin{array}{l} r>1 \\ 1<k\leq\frac{N}{2}\\ k\wedge p=1 \\ \end{array} \right. , \quad D^{\xi^{k}_{N}}_{r}(\mathfrak{Z})\in\mathcal{H}^{\mathcal{MT} (\mathcal{O}_{N})}.$$ \end{coro} \end{description} \section{Descents for $\boldsymbol{N=2,3,4,\mlq 6\mrq,8}$.} \subsection{Depth $\boldsymbol{1}$} Let us start with the depth $1$ results, deduced from Lemma $2.4.1$ (from $\cite{De}$), which are fundamental to initiate the recursion later. \begin{lemm} The basis for $gr^{\mathfrak{D}}_{1} \mathcal{A}$ is: $$\left\{ \zeta^{\mathfrak{a}}\left(r; \xi \right) \text{ such that } \left\{ \begin{array}{ll} r>1 \text{ odd } & \text{ if }N=1 \\ r \text{ odd } & \text{ if }N=2 \\ r>0 & \text{ if } N=3,4 \\ r>1 & \text{ if } N=6 \\ \end{array} \right.
\right\rbrace $$ For $N=8$, the basis for $gr^{\mathfrak{D}}_{1} \mathcal{A}_{r}$ is two dimensional, for all $r>0$: $$\left\{ \zeta^{\mathfrak{a}}\left(r; \xi \right), \zeta^{\mathfrak{a}}\left(r; -\xi \right)\right\rbrace.$$ \end{lemm} Let us make these relations explicit in depth $1$ for $N=2,3,4,\mlq 6\mrq,8$, since we will use some $p$-adic properties of the basis elements in our proof: \begin{description} \item[\textsc{For $N=2$:}] The distribution relation in depth 1 is: $$\zeta^{\mathfrak{a}}\left( {2r+1 \atop -1}\right) = (2^{-2r}-1)\zeta^{\mathfrak{a}}\left( {2 r + 1 \atop 1}\right) .$$ \item[\textsc{For $N=3$:}] $$ \zeta^{\mathfrak{l}} \left( {2r+1 \atop 1} \right)\left(1-3^{2r}\right)= 2\cdot 3^{2r}\zeta^{\mathfrak{l}}\left({2r+1 \atop \xi}\right) \quad \quad \zeta^{\mathfrak{l}}\left({2r \atop 1}\right)=0 \quad \quad \zeta^{\mathfrak{l}}\left({r \atop \xi}\right) =\left(-1\right)^{r-1} \zeta^{\mathfrak{l}}\left({r \atop \xi^{-1}}\right). $$ \item[\textsc{For $N=4$:}] $$\begin{array}{lllllll} \zeta^{\mathfrak{l}}\left({ r \atop 1} \right) (1-2^{r-1}) & = & 2^{r-1}\cdot \zeta^{\mathfrak{l}}\left( {r\atop -1} \right) \text{ for } r\neq 1 & \quad & \zeta^{\mathfrak{l}}\left({1\atop 1}\right) & = & \zeta^{\mathfrak{l}}\left( {2r\atop -1} \right)=0 \\ \zeta^{\mathfrak{l}}\left({2r+1\atop -1}\right) & = & 2^{2r+1} \zeta^{\mathfrak{l}}\left( {2r+1\atop \xi} \right) & \quad & \zeta^{\mathfrak{l}}\left( {r \atop \xi} \right) & = & \left(-1\right)^{r-1} \zeta^{\mathfrak{l}}\left({r\atop \xi^{-1}}\right).
\end{array}$$ \item[\textsc{For $N=6$:}] $$\begin{array}{lllllll} \zeta^{\mathfrak{l}}\left({r\atop 1}\right)\left(1-2^{r-1}\right) & = & 2^{r-1}\zeta^{\mathfrak{l}}\left({ r \atop -1} \right) \text{ for } r\neq 1 & \quad & \zeta^{\mathfrak{l}}\left( {1 \atop 1} \right) & = & \zeta^{\mathfrak{l}}\left({2r\atop -1}\right)=0\\ \zeta^{\mathfrak{l}}\left( {2r+1 \atop -1} \right) & = & \frac{2\cdot 3^{2r}}{1-3^{2r}} \zeta^{\mathfrak{l}}\left( {2r+1 \atop \xi} \right)& \quad & \zeta^{\mathfrak{l}}\left( {r \atop \xi^{2}} \right) & = & \frac{2^{r-1}}{1-(-2)^{r-1}} \zeta^{\mathfrak{l}}\left( {r \atop \xi} \right).\\ \zeta^{\mathfrak{l}}\left({r\atop \xi} \right) &=&\left(-1\right)^{r-1} \zeta^{\mathfrak{l}}\left( {r \atop \xi^{-1}} \right) &\quad & \zeta^{\mathfrak{l}}\left({r \atop -\xi} \right) & = & \left(-1\right)^{r-1} \zeta^{\mathfrak{l}}\left( {r \atop -\xi^{-1}} \right). \end{array}$$ \item[\textsc{For $N=8$:}] $$\begin{array}{lllllll} \zeta^{\mathfrak{l}}\left({ r \atop 1} \right)& =& \frac{ 2^{r-1}}{\left(1-2^{r-1}\right)}\zeta^{\mathfrak{l}}\left({r\atop -1}\right) \text{ for } r\neq 1 & \quad & \zeta^{\mathfrak{l}}\left( {1 \atop 1} \right) &=&\zeta^{\mathfrak{l}}\left({2r\atop -1}\right)=0 \\ \zeta^{\mathfrak{l}}\left({ r \atop -i }\right) &=& 2^{r-1} \left( \zeta^{\mathfrak{l}}\left({r\atop \xi}\right) + \zeta^{\mathfrak{l}}\left({r\atop -\xi}\right) \right) & \quad & \zeta^{\mathfrak{l}}\left( {2r+1 \atop -1} \right) &=& 2^{2r+1} \zeta^{\mathfrak{l}}\left({2r+1\atop i}\right) \\ \zeta^{\mathfrak{l}}\left({ r\atop \pm \xi} \right) &=&\left(-1\right)^{r-1} \zeta^{\mathfrak{l}}\left( {r \atop \pm \xi^{-1} }\right) & \quad & \zeta^{\mathfrak{l}}\left( {r \atop i} \right) &=&\left(-1\right)^{r-1} \zeta^{\mathfrak{l}}\left( {r \atop -i}\right) \\ \end{array}$$ \end{description} \subsection{Motivic Level filtration} Let us fix a descent $(\mathcal{d})=(k_{N}/k_{N'}, M/M')$ from $\mathcal{H}^{\mathcal{MT}\left( \mathcal{O}_{k_{N}} \left[ \frac{1}{M}\right] \right) }$,
to $\mathcal{H}^{\mathcal{MT}\left( \mathcal{O}_{k_{N'}} \left[ \frac{1}{M'}\right]\right) }$, with $N'\mid N$, $M'\mid M$, among those considered in this section, represented in Figures $\ref{fig:d248}, \ref{fig:d36}$.\\ Let us define an increasing filtration $\mathcal{F}^{\mathcal{d}}$ by the motivic level associated to this descent, built from the set of derivations $\mathscr{D}^{\mathcal{d}} \subset \mathscr{D}^{N}$ associated to it, defined in $(\ref{eq:derivdescent})$. \begin{defi} The filtration by the \textbf{motivic level} associated to a descent $(\mathcal{d})$ is defined recursively on $\mathcal{H}^{N}$ by: \begin{itemize} \item[$\cdot$] $\mathcal{F}^{\mathcal{d}} _{-1} \mathcal{H}^{N}=0$. \item[$\cdot$] $\mathcal{F}^{\mathcal{d}} _{i} \mathcal{H}^{N}$ is the largest submodule of $\mathcal{H}^{N}$ such that $\mathcal{F}^{\mathcal{d}}_{i}\mathcal{H}^{N}/\mathcal{F}^{\mathcal{d}} _{i-1}\mathcal{H}^{N}$ is killed by $\mathscr{D}^{\mathcal{d}}$, $\quad$ i.e. is in the kernel of $\oplus_{D\in \mathscr{D}^{\mathcal{d}}} D$. \end{itemize} \end{defi} It is a filtration of the graded Hopf algebra: $$\mathcal{F} _{i}\mathcal{H}. \mathcal{F}_{j}\mathcal{H} \subset \mathcal{F}_{i+j}\mathcal{H} \text{ , } \quad \Delta (\mathcal{F}_{n}\mathcal{H})\subset \sum_{i+j=n} \mathcal{F}_{i}\mathcal{A} \otimes \mathcal{F}_{j}\mathcal{H}.$$ The associated graded is denoted $gr^{\mathcal{d}} _{i}$, and the quotients are coalgebras compatible with $\Delta$: \begin{equation} \label{eq:quotienth} \mathcal{H}^{\geq 0} \mathrel{\mathop:}= \mathcal{H} \text{ , } \boldsymbol{\mathcal{H}^{\geq i}}\mathrel{\mathop:}= \mathcal{H}/ \mathcal{F}_{i-1}\mathcal{H} \text{ with the projections :}\quad \quad \forall j\geq i \text{ , } \pi_{i,j}: \mathcal{H}^{\geq i} \rightarrow \mathcal{H}^{\geq j}.
\end{equation} Note that, via the isomorphism $\phi$, the motivic filtration on $\mathcal{H}^{\mathcal{MT}_{N}}$ corresponds to\footnote{In particular, remark that $\dim \mathcal{F}^{\mathcal{d}} _{i} \mathcal{H}_{n}^{\mathcal{MT}_{N}}$ are known.}: \begin{equation} \label{eq:isomfiltration}\mathcal{F}^{\mathcal{d}} _{i} \mathcal{H}^{\mathcal{MT}_{N}} \longleftrightarrow \left\langle x\in H^{N} \mid Deg^{\mathcal{d}} (x) \leq i \right\rangle _{\mathbb{Q}} , \end{equation} where $Deg^{\mathcal{d}}$ is the degree in $\left\lbrace \lbrace f^{j}_{r} \rbrace_{b_{N'}<j\leq b_{N} \atop r>1} , \lbrace f^{j}_{1} \rbrace_{a_{N'}<j\leq a_{N}} \right\rbrace $, which are the images of the complementary part of $ gr_{1}\mathcal{L}^{\mathcal{MT}_{N'}}$ in the basis of $gr_{1}\mathcal{L}^{\mathcal{MT}_{N}}$.\\ \\ \texttt{Example}: For the descent between $\mathcal{H}^{\mathcal{MT}_{2}}$ and $\mathcal{H}^{\mathcal{MT}_{1}}$, since $gr_{1}\mathcal{L}^{\mathcal{MT}_{2}}= \left\langle \zeta^{\mathfrak{m}}(-1), \left\lbrace \zeta^{\mathfrak{m}}(2r+1)\right\rbrace _{r>0}\right\rangle$: $$\mathcal{F} _{i} \mathcal{H}^{\mathcal{MT}_{2}} \quad \xrightarrow[\sim]{\quad \phi}\quad \left\langle x\in \mathbb{Q}\langle f_{1}, f_{3}, \cdots \rangle\otimes \mathbb{Q}[f_{2}] \mid Deg_{f_{1}} (x) \leq i \right\rangle _{\mathbb{Q}} \text{ , where } Deg_{f_{1}}= \text{ degree in } f_{1}.$$ \\ By definition of these filtrations: \begin{equation}D_{r,p}^{\eta} \left( \mathcal{F}_{i}\mathcal{H}_{n} \right) \subset \left\lbrace \begin{array}{ll} \mathcal{F}_{i-1}\mathcal{H}_{n-r} & \text{ if }D_{r,p}^{\eta}\in\mathscr{D}^{\mathcal{d}}_{r} \\ \mathcal{F}_{i}\mathcal{H}_{n-r} & \text{ if } D_{r,p}^{\eta}\in\mathscr{D}^{\backslash\mathcal{d}}_{r} \end{array} \right. . \end{equation} Similarly, looking at $\partial_{n,p}$ (cf. 
$\ref{eq:pderivnp}$): \begin{equation} \partial_{n,p}(\mathcal{F}_{i-1}\mathcal{H}_{n}) \subset \oplus_{r<n} \left( gr_{p-1}^{\mathfrak{D}} \mathcal{F}_{i-2}\mathcal{H}_{n-r}\right)^{\text{ card } \mathscr{D}^{\mathcal{d}}_{r}} \oplus_{r<n} \left( gr_{p-1}^{\mathfrak{D}} \mathcal{F}_{i-1}\mathcal{H}_{n-r}\right) ^{\text{ card } \mathscr{D}^{\backslash\mathcal{d}}_{r}}. \end{equation} This allows us to pass to quotients, and define $D^{\eta,i,\mathcal{d}}_{n,p}$ and $\partial^{i,\mathcal{d}}_{n,p}$:\nomenclature{$D^{\eta,i,\mathcal{d}}_{n,p}$ and $\partial^{i,\mathcal{d}}_{n,p}$}{quotient maps} \begin{equation} \label{eq:derivinp} \boldsymbol{D^{\eta,i,\mathcal{d}}_{n,p}}: gr_{p}^{\mathfrak{D}} \mathcal{H}_{n}^{\geq i} \rightarrow \left\lbrace \begin{array}{ll} gr_{p-1}^{\mathfrak{D}} \mathcal{H}_{n-r}^{\geq i-1} & \text{ if }D_{r,p}^{\eta}\in\mathscr{D}^{\mathcal{d}}_{r} \\ gr_{p-1}^{\mathfrak{D}} \mathcal{H}_{n-r}^{\geq i} & \text{ if } D_{r,p}^{\eta}\in\mathscr{D}^{\backslash\mathcal{d}}_{r} \end{array} \right. \end{equation} \begin{framed} \begin{equation} \label{eq:pderivinp} \boldsymbol{\partial^{i,\mathcal{d}}_{n,p}}: gr_{p}^{\mathfrak{D}} \mathcal{H}_{n}^{\geq i} \rightarrow \oplus_{r<n} \left( gr_{p-1}^{\mathfrak{D}} \mathcal{H}_{n-r}^{\geq i-1}\right)^{\text{ card } \mathscr{D}^{\mathcal{d}}_{r}} \oplus_{r<n} \left( gr_{p-1}^{\mathfrak{D}} \mathcal{H}_{n-r}^{\geq i}\right)^{\text{ card } \mathscr{D}^{\backslash\mathcal{d}}_{r}} . \end{equation} \end{framed} The bijectivity of this map is essential to the results stated below. 
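To make the correspondence $(\ref{eq:isomfiltration})$ concrete in the simplest case, the descent from $\mathcal{H}^{\mathcal{MT}_{2}}$ to $\mathcal{H}^{\mathcal{MT}_{1}}$, here is a small combinatorial sketch (in Python, purely illustrative; the function names are ours, and the polynomial factor $\mathbb{Q}[f_{2}]$ is ignored) which filters the words in the odd generators $f_{1}, f_{3}, f_{5}, \ldots$ by their degree in $f_{1}$:

```python
from itertools import product

# Model of the level filtration for the descent H^{MT_2} -> H^{MT_1}:
# via phi, F_i corresponds to the words of degree <= i in f_1 among the
# words in the odd generators f_1, f_3, f_5, ...  (Q[f_2] ignored here).

def f1_degree(word):
    """Degree in f_1 of a word, encoded as a tuple of odd weights."""
    return sum(1 for letter in word if letter == 1)

def dim_filtration(n, i):
    """Number of words of total weight n with f_1-degree at most i."""
    odd = range(1, n + 1, 2)
    return sum(1 for p in range(n + 1)
               for w in product(odd, repeat=p)
               if sum(w) == n and f1_degree(w) <= i)
```

For instance, in weight $5$ the words are $f_{5}$, the three words with two $f_{1}$ and one $f_{3}$, and $f_{1}^{\otimes 5}$; the level $0$ part is one dimensional, in accordance with the single generator $f_{5}$ coming from $gr_{1}\mathcal{L}^{\mathcal{MT}_{1}}$ in that weight.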
\subsection{General Results} In the following results, the filtration $\mathcal{F}_{i}$ considered is the filtration by the motivic level associated to the (fixed) descent $\mathcal{d}$, while the index $i$ in $\mathcal{B}_{n, p, i}$ refers to the notion of level for elements of $\mathcal{B}$ associated to the descent $\mathcal{d}$.\footnote{Precisely defined, for each descent, in $\S 5.2.5 $.}\\ We first obtain the following result on the depth graded quotients, for all $i\geq 0$, with: $$\mathbb{Z}_{1[P]} \mathrel{\mathop:}= \frac{\mathbb{Z}}{1+ P\mathbb{Z}}=\left\{ \frac{a}{1+b P}, a,b\in\mathbb{Z} \right\} \text{ with } \begin{array}{ll} P=2 & \text{ for } N=2,4,8 \\ P=3 & \text{ for } N=3,6 \end{array} .$$ \begin{lemm} \begin{itemize} \item[$\cdot$] $$\mathcal{B}_{n, p, \geq i} \text{ is a linearly free family of } gr_{p}^{\mathfrak{D}} \mathcal{H}_{n}^{\geq i} \text{ and defines a } \mathbb{Z}_{1[P]}\text{-structure:}$$ Each element $\mathfrak{Z}= \zeta^{\mathfrak{m}}\left( z_{1}, \ldots , z_{p} \atop \epsilon_{1}, \ldots, \epsilon_{p}\right)\in \mathcal{B}_{n,p} $ decomposes as a $\mathbb{Z}_{1[P]}$-linear combination of $\mathcal{B}_{n, p, \geq i}$ elements, denoted $cl_{n,p,\geq i}(\mathfrak{Z})$ in $gr_{p}^{\mathfrak{D}} \mathcal{H}_{n}^{\geq i}$, which defines, in a unique way: $$cl_{n,p,\geq i}: \langle\mathcal{B}_{n, p, \leq i-1}\rangle_{\mathbb{Q}} \rightarrow \langle\mathcal{B}_{n, p, \geq i}\rangle_{\mathbb{Q}}.$$ \item[$\cdot$] The following map $\partial^{i,\mathcal{d}}_{n,p}$ is bijective: $$\partial^{i,\mathcal{d}}_{n,p}: gr_{p}^{\mathfrak{D}} \langle \mathcal{B}_{n, \geq i} \rangle_{\mathbb{Q}} \rightarrow \oplus_{r<n} \left( gr_{p-1}^{\mathfrak{D}} \langle \mathcal{B}_{n-1, \geq i-1} \rangle_{\mathbb{Q}} \right) ^{\oplus \text{ card } \mathcal{D}^{\mathcal{d}}_{r}} \oplus_{r<n} \left( gr_{p-1}^{\mathfrak{D}} \langle \mathcal{B}_{n-2r-1, \geq i} \rangle_{\mathbb{Q}} \right) ^{\oplus \text{ card } \mathcal{D}^{\backslash\mathcal{d}}_{r}}.$$
\end{itemize} \end{lemm} \nomenclature{$cl_{n,p,\geq i}$, or $cl_{n,\leq p,\geq i}$}{maps whose existence is proved in $\S 5.2$}Before giving the proof in the next section, let us present its consequences, such as bases for the quotients, the filtration and the graded spaces, for each descent considered: \begin{theo} \begin{itemize} \item[$(i)$] $\mathcal{B}_{n,\leq p, \geq i}$ is a basis of $\mathcal{F}_{p}^{\mathfrak{D}} \mathcal{H}_{n}^{\geq i}=\mathcal{F}_{p}^{\mathfrak{D}} \mathcal{H}_{n}^{\geq i, \mathcal{MT}}$. \item[$(ii)$] \begin{itemize} \item[$\cdot$] $\mathcal{B}_{n, p, \geq i}$ is a basis of $gr_{p}^{\mathfrak{D}} \mathcal{H}_{n}^{\geq i}=gr_{p}^{\mathfrak{D}} \mathcal{H}_{n}^{\geq i, \mathcal{MT}}$ on which it defines a $\mathbb{Z}_{1[P]}$-structure:\\ Each element $\mathfrak{Z}= \zeta^{\mathfrak{m}}\left( z_{1}, \ldots , z_{p} \atop \epsilon_{1}, \ldots, \epsilon_{p}\right)$ decomposes as a $\mathbb{Z}_{1[P]}$-linear combination of $\mathcal{B}_{n, p, \geq i}$ elements, denoted $cl_{n,p,\geq i}(\mathfrak{Z})$ in $gr_{p}^{\mathfrak{D}} \mathcal{H}_{n}^{\geq i}$, which defines in a unique way: $$cl_{n,p,\geq i}: \langle\mathcal{B}_{n, p, \leq i-1}\rangle_{\mathbb{Q}} \rightarrow \langle\mathcal{B}_{n, p, \geq i}\rangle_{\mathbb{Q}} \text{ such that } \mathfrak{Z}+cl_{n,p,\geq i}(\mathfrak{Z})\in \mathcal{F}_{i-1}\mathcal{H}_{n}+ \mathcal{F}^{\mathfrak{D}}_{p-1}\mathcal{H}_{n}.$$ \item[$\cdot$] The following map is bijective: $$\partial^{i, \mathcal{d}}_{n,p}: gr_{p}^{\mathfrak{D}} \mathcal{H}_{n}^{\geq i} \rightarrow \oplus_{r<n} \left( gr_{p-1}^{\mathfrak{D}} \mathcal{H}_{n-1}^{\geq i-1}\right) ^{\oplus \text{ card } \mathcal{D}^{\mathcal{d}}_{r}} \oplus_{r<n} \left( gr_{p-1}^{\mathfrak{D}} \mathcal{H}_{n-r}^{\geq i}\right) ^{\oplus \text{ card } \mathcal{D}^{\backslash\mathcal{d}}_{r}}. $$ \item[$\cdot$] $\mathcal{B}_{n,\cdot, \geq i} $ is a basis of $\mathcal{H}^{\geq i}_{n} =\mathcal{H}^{\geq i, \mathcal{MT}}_{n}$.
\end{itemize} \item[$(iii)$] We have the two split exact sequences in bijection: $$ 0\longrightarrow \mathcal{F}_{i}\mathcal{H}_{n} \longrightarrow \mathcal{H}_{n} \stackrel{\pi_{0,i+1}} {\rightarrow}\mathcal{H}_{n}^{\geq i+1} \longrightarrow 0$$ $$ 0 \rightarrow \langle \mathcal{B}_{n, \cdot, \leq i} \rangle_{\mathbb{Q}} \rightarrow \langle\mathcal{B}_{n} \rangle_{\mathbb{Q}} \rightarrow \langle \mathcal{B}_{n, \cdot, \geq i+1} \rangle_{\mathbb{Q}} \rightarrow 0 .$$ The following map is defined in a unique way: $$cl_{n,\leq p,\geq i}: \langle\mathcal{B}_{n, p, \leq i-1}\rangle_{\mathbb{Q}} \rightarrow \langle\mathcal{B}_{n, \leq p, \geq i}\rangle_{\mathbb{Q}} \text{ such that } \mathfrak{Z}+cl_{n,\leq p,\geq i}(\mathfrak{Z})\in \mathcal{F}_{i-1}\mathcal{H}_{n}.$$ \item[$(iv)$] A basis for the filtration spaces $\mathcal{F}_{i} \mathcal{H}^{\mathcal{MT}}_{n}=\mathcal{F}_{i} \mathcal{H}_{n}$: $$\cup_{p} \left\{ \mathfrak{Z}+ cl_{n, \leq p, \geq i+1}(\mathfrak{Z}), \mathfrak{Z}\in \mathcal{B}_{n, p, \leq i} \right\}.$$ \item[$(v)$] A basis for the graded space $gr_{i} \mathcal{H}^{\mathcal{MT}}_{n}=gr_{i} \mathcal{H}_{n}$: $$\cup_{p} \left\{ \mathfrak{Z}+ cl_{n, \leq p, \geq i+1}(\mathfrak{Z}), \mathfrak{Z}\in \mathcal{B}_{n, p, i} \right\}.$$ \end{itemize} \end{theo} The proof is given in $\S 5.2.4$; the notions of level, resp. motivic level, some consequences, and specifications for $N=2,3,4,\mlq 6\mrq,8$ individually are provided in $\S 5.2.5$. Some examples in small depth are displayed in Appendix $A.2$.\\ \\ \\ \texttt{{\large Consequences, level $i=0$:}} \begin{itemize} \item[$\cdot$] The level $0$ of the basis elements $\mathcal{B}^{N}$ forms a basis of $\mathcal{H}^{N} = \mathcal{H}^{\mathcal{MT}_{N}}$, for $N=2,3,4,\mlq 6 \mrq, 8$.
This gives a new (dual) proof of Deligne's result (in $\cite{De}$).\\ The level $0$ of this filtration is hence isomorphic to the following algebras:\footnote{The equalities of the kind $\mathcal{H}^{\mathcal{MT}_{N}}= \mathcal{H}^{N}$ are consequences of the previous theorem for $N=2,3,4,\mlq 6 \mrq,8$, and by F. Brown for $N=1$ (cf. $\cite{Br2}$). Moreover, we have inclusions of the kind $\mathcal{H}^{\mathcal{MT}_{N'}} \subseteq \mathcal{F}_{0}^{k_{N}/k_{N'},M/M'}\mathcal{H}^{\mathcal{MT}_{N}}$ and we deduce the equality from dimensions at fixed weight.} $$ \mathcal{F}_{0}^{k_{N}/k_{N'},M/M'}\mathcal{H}^{\mathcal{MT}_{N}}=\mathcal{F}_{0}^{k_{N}/k_{N'},M/M'}\mathcal{H}^{N}=\mathcal{H}^{\mathcal{MT}_{N',M'}}="\mathcal{H}^{N',M'}" .$$ Hence the inclusions in the following diagram are here isomorphisms: $$\xymatrix{ \mathcal{F}_{0}^{k_{N}/k_{N'},M/M'}\mathcal{H}^{\mathcal{MT}_{N}} & \mathcal{H}^{\mathcal{MT}_{N'}} \ar@{^{(}->}[l]\\ \mathcal{F}_{0}^{k_{N}/k_{N'},M/M'}\mathcal{H}^{N} \ar@{^{(}->}[u] & \mathcal{H}^{N'} \ar@{^{(}->}[l] \ar@{^{(}->}[u]}.$$ \item[$\cdot$] It gives, considering such a descent $(k_{N}/k_{N'},M/M')$, a basis for $\mathcal{F}_{0}\mathcal{H}^{N}= \mathcal{H}^{\mathcal{MT}_{N',M'}}$ in terms of the basis of $\mathcal{H}^{N}$. For instance, it leads to a new basis for motivic multiple zeta values in terms of motivic Euler sums, or motivic MZV$_{\mu_{3}}$.\\ Some other $0$-levels, such as $\mathcal{F}_{0}^{k_{N}/k_{N},P/1}$, $N=3,4$, which should reflect the descent from $\mathcal{MT}(\mathcal{O}_{N}\left[ \frac{1}{P}\right] )$ to $\mathcal{MT}(\mathcal{O}_{N})$, are not known to be associated to a fundamental group, but the previous result enables us to reach them. We obtain a basis for: \begin{itemize} \item[$\bullet$] $\boldsymbol{\mathcal{H}^{\mathcal{MT}(\mathbb{Z}\left[\frac{1}{3}\right])}}$ in terms of the basis of $\mathcal{H}^{\mathcal{MT}(\mathcal{O}_{3}[\frac{1}{3}])}$.
\item[$\bullet$] $\boldsymbol{\mathcal{H}^{\mathcal{MT}(\mathcal{O}_{3})}}$ in terms of the basis of $\mathcal{H}^{\mathcal{MT}(\mathcal{O}_{3}[\frac{1}{3}])}$. \item[$\bullet$] $\boldsymbol{\mathcal{H}^{\mathcal{MT}(\mathcal{O}_{4})}}$ in terms of the basis of $\mathcal{H}^{\mathcal{MT}(\mathcal{O}_{4}[\frac{1}{4}])}$. \\ \end{itemize} \end{itemize} \subsection{Proofs} As proved below, Theorem $5.2.4$ boils down to Lemma $5.2.3$. Recall the map $\partial^{i,\mathcal{d}}_{n,p}$: $$\partial^{i,\mathcal{d}}_{n,p}: gr_{p}^{\mathfrak{D}} \mathcal{H}_{n}^{\geq i} \rightarrow \oplus_{r<n} \left( gr_{p-1}^{\mathfrak{D}} \mathcal{H}_{n-r}^{\geq i-1}\right)^{\text{ card } \mathscr{D}^{\mathcal{d}}_{r}} \oplus_{r<n} \left( gr_{p-1}^{\mathfrak{D}} \mathcal{H}_{n-r}^{\geq i}\right)^{\text{ card } \mathscr{D}^{\backslash\mathcal{d}}_{r}}.$$ We will look at its image on $\mathcal{B}_{n,p,\geq i} $ and prove both the injectivity of $\partial^{i,\mathcal{d}}_{n,p}$ as considered in Lemma $5.2.3$, and the linear independence of the elements of $\mathcal{B}_{n,p,\geq i}$. \paragraph{\Large { Proof of Lemma $\boldsymbol{5.2.3}$ for $\boldsymbol{N=2}$:}} The formula $(\ref{Drp})$ for $D^{-1}_{2r+1,p}$ on $\mathcal{B}$ elements is:\footnote{Using the identity: $\zeta^{\mathfrak{a}}(\overline{2 r + 1 })= (2^{-2r}-1)\zeta^{\mathfrak{a}}(2r+1 )$. Projection on $\zeta^{\mathfrak{l}}(\overline{2r+1})$ for the left side.} \begin{multline} \label{Deriv2} D^{-1}_{2r+1,p} \left(\zeta^{\mathfrak{m}} (2x_{1}+1, \ldots , \overline{2x_{p}+1}) \right) = \\ \frac{2^{2r}}{1-2^{2r}}\delta_{r =x_{1}} \cdot \zeta^{\mathfrak{m}} (2 x_{2}+1, \ldots, \overline{2x_{p}+1}) \\ + \frac{2^{2r}}{1-2^{2r}} \left\lbrace \begin{array}{l} \sum_{i=1}^{p-2} \delta_{x_{i+1}\leq r < x_{i}+ x_{i+1} } \binom{2r}{2x_{i+1}} \\ -\sum_{i=1}^{p-1} \delta_{x_{i}\leq r < x_{i}+ x_{i+1}} \binom{2r}{2x_{i}} \end{array} \right.
\cdot \zeta^{\mathfrak{m}} \left( \cdots ,2x_{i-1}+1, 2 (x_{i}+x_{i+1}-r) +1, 2 x_{i+2}+1, \cdots \right) \\ \textrm{\textsc{(d) }} +\delta_{x_{p} \leq r \leq x_{p}+ x_{p-1}} \binom{2r}{2x_{p}} \cdot\zeta^{\mathfrak{m}} \left( \cdots ,2x_{p-2}+1, \overline{2 (x_{p-1}+x_{p}-r) +1}\right) \end{multline} Terms of type \textsc{(d)} play a particular role since they correspond to deconcatenation for the coaction, and will be the terms of minimal $2$-adic valuation.\\ $D^{-1}_{1,p}$ acts as a deconcatenation on this family: \begin{equation} \label{Deriv21} D^{-1}_{1,p} \left(\zeta^{\mathfrak{m}} (2x_{1}+1, \ldots , \overline{2x_{p}+1}) \right) = \left\{ \begin{array}{ll} 0 & \text{ if } x_{p}\neq 0 \\ \zeta^{\mathfrak{m}} (2x_{1}+1, \ldots , \overline{2x_{p-1}+1}) & \text{ if } x_{p}=0 .\\ \end{array} \right. \end{equation} For $N=2$, $\partial^{i}_{n,p}$ ($\ref{eq:pderivinp}$) is simply: \begin{equation}\label{eq:pderivinp2} \partial^{i}_{n,p}: gr_{p}^{\mathfrak{D}} \mathcal{H}_{n}^{\geq i} \rightarrow gr_{p-1}^{\mathfrak{D}} \mathcal{H}_{n-1}^{\geq i-1} \oplus_{1<2r+1\leq n-p+1} gr_{p-1}^{\mathfrak{D}} \mathcal{H}_{n-2r-1}^{\geq i}. \end{equation} Let us prove all the statements of Lemma $5.2.3$, by recursion on the weight, and then on the depth and on the level, starting from $i=0$. \begin{proof} By the recursion hypothesis, the weight being strictly smaller, we assume that: $$\mathcal{B}_{n-1,p-1,\geq i-1} \oplus_{1<2r+1\leq n-p+1} \mathcal{B}_{n-2r-1,p-1,\geq i} \text{ is a basis of } $$ $$gr_{p-1}^{\mathfrak{D}} \mathcal{H}_{n-1}^{\geq i-1,\mathcal{B}} \oplus_{1<2r+1\leq n-p+1} gr_{p-1}^{\mathfrak{D}} \mathcal{H}_{n-2r-1}^{\geq i,\mathcal{B}}. $$ \begin{center} \textsc{Claim:} The matrix $M^{i}_{n,p}$ of $\left(\partial^{i,\mathcal{d}}_{n,p} (z) \right)_{z\in \mathcal{B}_{n, p, \geq i}}$ on these spaces is invertible. \end{center} \texttt{Nota Bene:} Here $D^{-1}_{1}(z)$, resp. $D^{-1}_{2r+1,p}(z)$, are expressed in terms of $\mathcal{B}_{n-1,p-1,\geq i-1}$, resp.
$\mathcal{B}_{n-2r-1,p-1,\geq i}$.\\ This will prove both the bijectivity of $\partial^{i,\mathcal{d}}_{n,p}$ as considered in the lemma and the linear independence of $\mathcal{B}_{n, p, \geq i}$. Let us divide $M^{i}_{n,p}$ into four blocks, the first column corresponding to the elements of $\mathcal{B}_{n, p, \geq i}$ ending in $1$: \begin{center} \begin{tabular}{ l || c | c ||} & $x_{p}=0$ & $x_{p}>0$ \\ \hline $D_{1,p}$ & M$1$ & M$2$ \\ $D_{>1,p}$ & M$3$ & M$4$ \\ \hline \end{tabular} \end{center} According to ($\ref{Deriv21}$), $D^{-1}_{1,p}$ is zero on the elements not ending in $1$, and acts as a deconcatenation on the others. Therefore, M$2=0$, so $M^{i}_{n,p}$ is lower triangular by blocks, and the upper-left block M$1$ is diagonal and invertible. It remains to prove the invertibility of the lower-right block $\widetilde{M}\mathrel{\mathop:}=M4$, corresponding to $D^{-1}_{>1,p}$ and to the elements of $\mathcal{B}_{n, p, \geq i}$ not ending in $1$.\\ \\ Notice that in the formula $(\ref{Deriv2})$ of $D_{2r+1,p}$, applied to an element of $\mathcal{B}_{n, p, \geq i}$, most of the terms appearing have a number of $1$'s greater than $i$, but there are also terms in $\mathcal{B}_{n-2r-1,p-1,i-1}$, with exactly $(i-1)$ \say{$1$}'s, of type $\textsc{a,b,c}$ only. We will make the latter disappear modulo $2$, since they are $2$-adically larger. \\ More precisely, using the recursion hypothesis (the weight being strictly smaller), we can replace them in $gr_{p-1} \mathcal{H}^{\geq i}_{n-2r-1}$ by a $\mathbb{Z}_{\text{odd}}$-linear combination of elements in $\mathcal{B}_{n-2r-1, p-1, \geq i}$, which does not lower the $2$-adic valuation.
It is worth noticing that the type \textsc{d} elements considered are now always in $\mathcal{B}_{n-2r-1,p-1,\geq i}$, since we removed the case $x_{p}= 0$.\\ Once done, we can construct the matrix $\widetilde{M}$ and examine its entries.\\ Order the elements of $\mathcal{B}$ on both sides by the lexicographic order on their \say{reversed} tuples: \begin{center} $(x_{p},x_{p-1},\cdots, x_{1})$ for the columns, $(r,y_{p-1},\cdots, y_{1})$ for the rows. \end{center} Remark that, with such an order, the diagonal corresponds to the deconcatenation terms: $r=x_{p}$ and $x_{i}=y_{i}$.\\ Referring to $(\ref{Deriv2})$, and by the previous remark, we see that all the entries of $\widetilde{M}$ have nonnegative $2$-adic valuation, since the coefficients in $(\ref{Deriv2})$ are in $2^{2r}\mathbb{Z}_{\text{odd}}$ (for types \textsc{a,b,c}) or of the form $\mathbb{Z}_{\text{odd}}$ for types \textsc{d,d'}. If we look only at the terms with $2$-adic valuation zero (which amounts to considering $\widetilde{M}$ modulo $2$), only the terms of type \textsc{(d,d')} in $(\ref{Deriv2})$ remain, that is: \begin{multline}\nonumber D_{2r+1,p} (\zeta^{\mathfrak{m}}(2x_{1}+1, \ldots, \overline{2x_{p}+1})) \equiv \delta_{ r = x_{p}+ x_{p-1}} \binom{2r}{2x_{p}} \zeta^{\mathfrak{m}} (2x_{1}+1, \ldots ,2x_{p-2}+1, \overline{1}) \\ + \delta_{x_{p} \leq r < x_{p}+ x_{p-1}} \binom{2r}{2x_{p}} \zeta^{\mathfrak{m}} (2x_{1}+1, \ldots ,2x_{p-2}+1, \overline{2 (x_{p-1}+x_{p}-r) +1}) \pmod{ 2}. \end{multline} Therefore, modulo $2$, with the order previously defined, only a triangular matrix remains ($\delta_{x_{p}\leq r}$), with $1$ on the diagonal ($\delta_{x_{p}= r}$, deconcatenation terms).
Thus, $\det\widetilde{M}$ has $2$-adic valuation zero; in particular it cannot be zero, and hence $\widetilde{M}$ is invertible.\\ The $\mathbb{Z}_{\text{odd}}$-structure is easily deduced from the fact that the determinant of $\widetilde{M}$ is odd, and the observation that if we consider $D_{2r+1,p} (\zeta^{\mathfrak{m}} (z_{1}, \ldots, z_{p}))$, all the coefficients are integers. \end{proof} \paragraph{{\Large Proof of Lemma $\boldsymbol{5.2.3}$ for other $\boldsymbol{N}$.}} These cases can be handled in a way quite similar to the case $N=2$, except that the number of generators is different and that several descents are possible; hence there will be several notions of level and filtrations by the motivic level, one for each descent. Let us fix a descent $\mathcal{d}$ and highlight the differences in the proof: \begin{proof} In the same way, we prove by recursion on weight, depth and level, that the following map is bijective: $$\partial^{i,\mathcal{d}}_{n,p}: gr_{p}^{\mathfrak{D}} \langle \mathcal{B}_{n, \geq i} \rangle_{\mathbb{Q}} \rightarrow \oplus_{r<n} \left( gr_{p-1}^{\mathfrak{D}} \langle \mathcal{B}_{n-1, \geq i-1} \rangle_{\mathbb{Q}} \right) ^{\oplus \text{ card } \mathscr{D}^{\mathcal{d}}_{r}} \oplus_{r<n} \left( gr_{p-1}^{\mathfrak{D}} \langle \mathcal{B}_{n-2r-1, \geq i} \rangle_{\mathbb{Q}} \right) ^{\oplus \text{ card } \mathscr{D}^{\backslash\mathcal{d}}_{r}}.$$ \begin{center} I.e. the matrix $M^{i}_{n,p}$ of $\left(\partial^{i}_{n,p} (z) \right)_{z\in \mathcal{B}_{n, p, \geq i}}$ on $\oplus_{r<n} \mathcal{B}_{n-r,p-1,\geq i-1}^{\text{ card } \mathscr{D}^{\mathcal{d}}_{r}} \oplus_{r<n} \mathcal{B}_{n-r,p-1,\geq i}^{\text{ card } \mathscr{D}^{\backslash\mathcal{d}}_{r}}$ \footnote{Elements of the arrival space are linearly independent by the recursion hypothesis.} is invertible.
\end{center} As before, by the recursion hypothesis, we replace the elements of level $\leq i$ appearing in $D^{i}_{r,p}$, $r\geq 1$, by $\mathbb{Z}_{1[P]}$-linear combinations of elements of level $\geq i$ in the quotient $gr_{p-1}^{\mathfrak{D}} \mathcal{H}_{n-r}^{\geq i}$, which does not decrease the $P$-adic valuation.\\ Now looking at the expression for $D_{r,p}$ in Lemma $2.4.3$, we see that on the elements considered, \footnote{i.e. of the form $\zeta^{\mathfrak{m}} \left({x_{1}, \ldots , x_{p} \atop \epsilon_{1}, \ldots ,\epsilon_{p-1}, \epsilon_{p}\xi_{N} }\right)$, with $\epsilon_{i}\in \pm 1$ for $N=8$, $\epsilon_{i}=1$ else.} the left side is: \begin{center} Either $\zeta^{\mathfrak{l}}\left( r\atop 1 \right) $ for type $\textsc{a,b,c} \qquad $ Or $\zeta^{\mathfrak{l}}\left( r\atop \xi \right) $ for deconcatenation terms. \end{center} Using the results in depth $1$ of Deligne and Goncharov (cf. $\S 2.4.3$), the deconcatenation terms are $P$-adically smaller. \\ \texttt{For instance}, for $N=\mlq 6 \mrq$, $r$ odd: $$\zeta^{\mathfrak{l}}\left( r; 1\right) =\frac{2\cdot 6^{r-1}}{(1-2^{r-1})(1-3^{r-1})} \zeta^{\mathfrak{l}}(r; \xi) , \quad \text{ and } \quad v_{3} \left( \frac{2\cdot 6^{r-1}}{(1-2^{r-1})(1-3^{r-1})}\right) >0 .$$ \texttt{Nota Bene:} For $N=8$, $D_{r}$ has two independent components, $D_{r}^{\xi}$ and $D_{r}^{-\xi}$.
We have to distinguish them, but the statement remains similar since the terms appearing in the left side are either $\zeta^{\mathfrak{l}}\left( r\atop \pm 1 \right)$, or deconcatenation terms, $\zeta^{\mathfrak{l}}\left( r\atop \pm \xi \right)$, $2$-adically smaller by $\S 4.1$.\\ Thanks to congruences modulo $P$, only the deconcatenation terms remain:\\ $$D_{r,p} \left(\zeta^{\mathfrak{m}} \left({x_{1}, \ldots , x_{p} \atop \epsilon_{1}, \ldots ,\epsilon_{p-1},\epsilon_{p} \xi }\right)\right) = $$ $$ \delta_{ x_{p} \leq r \leq x_{p}+ x_{p-1}-1} (-1)^{r-x_{p}} \binom{r-1}{x_{p}-1} \zeta ^{\mathfrak{l}} \left( r\atop \epsilon_{p}\xi \right) \otimes \zeta^{\mathfrak{m}} \left({ x_{1}, \ldots, x_{p-2}, x_{p-1}+x_{p}-r\atop \epsilon_{1}, \cdots, \epsilon_{p-2}, \epsilon_{p-1}\epsilon_{p}\xi} \right) \pmod{P}.$$ As in the previous case, the matrix, being triangular modulo $P$ with $1$'s on the diagonal, has determinant congruent to $1$ modulo $P$, and is thus, in particular, invertible. \\ \end{proof} \paragraph{{\Large \texttt{EXAMPLE for} $\boldsymbol{N=2}$}:} Let us illustrate the previous proof by an example, for weight $n=9$, depth $p=3$, level $i=0$, with the previous notations.\\ Instead of $\mathcal{B}_{9, 3, \geq 0}$, we will restrict to the subfamily (corresponding to $\mathcal{A}$): $$\mathcal{B}_{9, 3, \geq 0}^{0}\mathrel{\mathop:}= \left\{ \zeta^{\mathfrak{m}}(2a+1,2b+1,\overline{2c+1}) \text{ of weight } 9 \right\} \subset$$ $$ \mathcal{B}_{9, 3, \geq 0}\mathrel{\mathop:}= \left\{ \zeta^{\mathfrak{m}}(2a+1,2b+1,\overline{2c+1})\zeta^{\mathfrak{m}}(2)^{s}\text{ of weight } 9 \right\}$$ Note that, $\zeta^{\mathfrak{m}}(2)$ being trivial under the coaction, the matrix $M_{9,3}$ is block diagonal following the different values of $s$, and we can prove the invertibility of each block separately; here we restrict to the block $s=0$.
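This finite check can even be automated. The following sketch (in Python, purely illustrative; the helper names are ours, and only the type \textsc{d,d'} terms are kept, since only these survive modulo $2$) rebuilds the $10\times 10$ matrix $\widetilde{M}_{9,3}$ modulo $2$, with the orders described below, and verifies that it is triangular with $1$'s on the diagonal, hence invertible:

```python
from itertools import product
from math import comb

# Columns: elements zeta(2a+1, 2b+1, bar(2c+1)) of weight 9, depth 3,
# ordered by the lexicographic order on the reversed tuple (c, b, a).
cols = sorted(((a, b, c) for a, b, c in product(range(4), repeat=3)
               if a + b + c == 3),
              key=lambda t: (t[2], t[1], t[0]))

# Rows: pairs (D_{2r+1}, zeta(2x+1, bar(2y+1))), ordered by (r, y, x).
rows = sorted(((r, x, y) for r, x, y in product(range(4), repeat=3)
               if r + x + y == 3),
              key=lambda t: (t[0], t[2], t[1]))

def coeff_mod2(r, x, y, a, b, c):
    """Coefficient mod 2 of zeta(2x+1, bar(2y+1)) in
    D_{2r+1,3}(zeta(2a+1, 2b+1, bar(2c+1))): only the
    deconcatenation-type terms (d, d') survive modulo 2."""
    if x == a and y == b + c - r and c <= r <= b + c:
        return comb(2 * r, 2 * c) % 2
    return 0

# The 10 x 10 matrix tilde{M}_{9,3} modulo 2.
M = [[coeff_mod2(r, x, y, a, b, c) for (a, b, c) in cols]
     for (r, x, y) in rows]
```

With this ordering the diagonal entries are exactly the deconcatenation terms $r=c$, $x=a$, $y=b$, each equal to $\binom{2r}{2r}=1$, and every nonzero off-diagonal entry has $c<r$, hence lies strictly below the diagonal; the determinant is therefore odd, as claimed.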
The matrix $\widetilde{M}$ considered represents the coefficients of: $$\zeta^{\mathfrak{m}}(\overline{2r+1})\otimes \zeta^{\mathfrak{m}}(2x+1,\overline{2y+1})\quad \text{ in }\quad D_{2r+1,3}(\zeta^{\mathfrak{m}}(2a+1,2b+1,\overline{2c+1})).$$ The chosen order for the columns, resp. for the rows, \footnote{I.e. for $\zeta^{\mathfrak{m}}(2a+1,2b+1,\overline{2c+1})$ resp. for $(D_{2r+1,3}, \zeta^{\mathfrak{m}}(2x+1,\overline{2y+1}))$.} is the lexicographic order applied to $(c,b,a)$ resp. to $(r,y,x)$. Modulo $2$, only the terms of type \textsc{d,d'} remain, that is: $$ D_{2r+1,3} (\zeta^{\mathfrak{m}}(2a+1, 2b+1, \overline{2c+1})) \equiv \delta_{c \leq r \leq b+c} \binom{2r}{2c} \zeta^{\mathfrak{m}} (2a+1, \overline{2 (b+c-r) +1}) \text{ } \pmod{ 2}.$$ With the previous order, $\widetilde{M}_{9,3}$ is then, modulo $2$:\footnote{Notice that the first four rows are exact: no congruences modulo $2$ are needed for $D_{1}$, because it acts as a deconcatenation on the basis.}\\ \\ \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} $D_{r}, \zeta\backslash$ $\zeta$& $7,1,\overline{1}$ & $5,3,\overline{1}$ & $3,5,\overline{1}$& $1,7,\overline{1}$& $5,1,\overline{3}$&$3,3,\overline{3}$&$1,5,\overline{3}$&$3,1,\overline{5}$&$1,3,\overline{5}$ & $1,1,\overline{7}$ \\ \hline $D_{1},\zeta^{\mathfrak{m}}(7,\overline{1})$ & $1$ & $0$ &$0$ &$0$ &$0$ &$0$ &$0$ &$0$ &$0$ &$0$ \\ $D_{1},\zeta^{\mathfrak{m}}(5,\overline{3})$ & $0$ & $1$ &$0$ &$0$ &$0$ &$0$ &$0$ &$0$ &$0$ &$0$ \\ $D_{1},\zeta^{\mathfrak{m}}(3,\overline{5})$ & $0$ & $0$ &$1$ &$0$ &$0$ &$0$ &$0$ &$0$ &$0$ &$0$ \\ $D_{1},\zeta^{\mathfrak{m}}(1,\overline{7})$ & $0$ & $0$ &$0$ &$1$ &$0$ &$0$ &$0$ &$0$ &$0$ &$0$ \\ $D_{3},\zeta^{\mathfrak{m}}(5,\overline{1})$ & $0$ & $0$ &$0$ &$0$ &$1$ &$0$ &$0$ &$0$ &$0$ &$0$ \\ $D_{3},\zeta^{\mathfrak{m}}(3,\overline{3})$ & $0$ & $0$ &$0$ &$0$ &$0$ &$1$ &$0$ &$0$ &$0$ &$0$ \\ $D_{3},\zeta^{\mathfrak{m}}(1,\overline{5})$ & $0$ & $0$ &$0$ &$0$ &$0$ &$0$ &$1$ &$0$ &$0$ &$0$ \\
$D_{5},\zeta^{\mathfrak{m}}(3,\overline{1})$ & $0$ & $0$ &$0$ &$0$ &$0$ &$\binom{4}{2}$ &$0$ &$1$ &$0$ &$0$ \\ $D_{5},\zeta^{\mathfrak{m}}(1,\overline{3})$ & $0$ & $0$ &$0$ &$0$ &$0$ &$0$ &$\binom{4}{2}$ &$0$ &$1$ &$0$ \\ $D_{7},\zeta^{\mathfrak{m}}(1,\overline{1})$ & $0$ & $0$ &$0$ &$0$ &$0$ &$0$ &$\binom{6}{2}$ &$0$ &$\binom{6}{4}$ &$1$ \\ \\ \end{tabular}. As announced, $\widetilde{M}$ modulo $2$ is triangular with $1$ on the diagonal, and thus invertible. \paragraph{ {\Large Proof of the Theorem $\boldsymbol{5.2.4}$}.} \begin{proof} This theorem comes down to Lemma $5.2.3$, which proves the freeness of $\mathcal{B}_{n, p, \geq i}$ in $gr_{p}^{\mathfrak{D}} \mathcal{H}_{n}^{\geq i}$ and defines a $\mathbb{Z}_{odd}$-structure: \begin{itemize} \item[$(i)$] By this lemma, $\mathcal{B}_{n, p, \geq i}$ is linearly free in the depth graded, and $\partial^{i,\mathcal{d}}_{n,p}$, which strictly decreases the depth, is bijective on $\mathcal{B}_{n, p, \geq i}$. The family $\mathcal{B}_{n, \leq p, \geq i}$, with mixed depths, is then linearly independent in $\mathcal{F}_{p}^{\mathfrak{D}} \mathcal{H}_{n}^{\geq i}\subset \mathcal{F}_{p}^{\mathfrak{D}} \mathcal{H}_{n}^{\geq i, \mathcal{MT}}$: this is easily proved by applying $\partial^{i,\mathcal{d}}_{n,p}$.\\ By a dimension argument, since $\dim \mathcal{F}_{p}^{\mathfrak{D}} \mathcal{H}_{n}^{\geq i, \mathcal{MT}}= \text{ card } \mathcal{B}_{n, \leq p, \geq i}$, we deduce the generating property. \item[$(ii)$] By the lemma, this family is linearly independent, and by $(i)$ applied to depth $p-1$, $$gr_{p}^{\mathfrak{D}} \mathcal{H}_{n}^{\geq i}\subset gr_{p}^{\mathfrak{D}} \mathcal{H}_{n}^{\geq i, \mathcal{MT}}.$$ Then, by a dimension argument, since $\dim gr_{p}^{\mathfrak{D}} \mathcal{H}_{n}^{\geq i, \mathcal{MT}} = \text{ card } \mathcal{B}_{n, p, \geq i}$, we conclude that it also generates.
The $\mathbb{Z}_{odd}$ structure has been proven in the previous lemma.\\ By the bijectivity of $\partial_{n,p}^{i,\mathcal{d}}$ (again by the previous lemma), which decreases the depth, and using the freeness of the elements of the same depth in the depth graded, there is no linear relation between elements of $\mathcal{B}_{n,\cdot, \geq i}$ of different depths in $\mathcal{H}_{n}^{\geq i} \subset \mathcal{H}^{\geq i, \mathcal{MT}}_{n}$. The family considered is then linearly independent in $\mathcal{H}_{n}^{\geq i}$. Since $\text{card } \mathcal{B}_{n,\cdot, \geq i} =\dim \mathcal{H}^{\geq i, \mathcal{MT}}_{n}$, we conclude that the previous inclusions are equalities. \item[$(iii)$] The second exact sequence is obviously split, since $ \mathcal{B}_{n, \cdot,\geq i+1}$ is a subset of $\mathcal{B}_{n}$. We already know that $\mathcal{B}_{n}$ is a basis of $\mathcal{H}_{n}$ and $\mathcal{B}_{n, \cdot, \geq i+1}$ is a basis of $\mathcal{H}_{n}^{\geq i+1}$. Therefore, it gives a map $\mathcal{H}_{n} \leftarrow\mathcal{H}_{n}^{\geq i+1}$ and splits the first exact sequence. \\ The construction of $cl_{n,\leq p, \geq i}(x)$, obtained from $cl_{n,p, \geq i}(x)$ applied repeatedly, is the following: \begin{center} $x\in\mathcal{B}_{n, \cdot, \leq i-1} $ is sent to $\bar{x}\in \mathcal{H}_{n}^{\geq i} \cong \langle\mathcal{B}_{n, \leq p, \geq i}\rangle_{\mathbb{Q}} $ by the projection $\pi_{0,i}$, and so $x -\bar{x} \in \mathcal{F}_{i-1}\mathcal{H}$. \end{center} Notice that the problem of making $cl(x)$ explicit boils down to the problem of describing the map $\pi_{0,i}$ in the bases $\mathcal{B}$. \item[$(iv)$] By the previous statements, these elements are linearly independent in $\mathcal{F}_{i} \mathcal{H}^{MT}_{n}$. Moreover, their cardinality equals the dimension of $\mathcal{F}_{i} \mathcal{H}^{MT}_{n}$.
It gives the basis announced, composed of elements $x\in \mathcal{B}_{n, \cdot, \leq i}$, each corrected by an element denoted $cl(x)$ of $ \langle\mathcal{B}_{n, \cdot, \geq i+1}\rangle_{\mathbb{Q}}$. \item[$(v)$] By the previous statements, these elements are linearly independent in $gr_{i} \mathcal{H}_{n}$, and by a dimension argument, we can conclude. \end{itemize} \end{proof} \subsection{Specified Results} \subsubsection{\textsc{The case } $N=2$.} Here, since there is only one Galois descent, from $\mathcal{H}^{2}$ to $\mathcal{H}^{1}$, the previous exponents for the level filtrations can be omitted, as can the exponent $2$ on $\mathcal{H}$, the space of motivic Euler sums. Set $\mathbb{Z}_{\text{odd}}= \left\{ \frac{a}{b} \text{ , } a\in\mathbb{Z}, b\in 2 \mathbb{Z}+1 \right\}$, the rationals whose $2$-adic valuation is nonnegative or infinite. Let us define particular families of motivic Euler sums, and a notion of level and of motivic level. \begin{defi} \begin{itemize} \item[$\cdot$] $\mathcal{B}^{2}\mathrel{\mathop:}=\left\{\zeta^{\mathfrak{m}}(2x_{1}+1, \ldots, 2 x_{p-1}+1,\overline{2 x_{p}+1}) \zeta(2)^{\mathfrak{m},k}, x_{i} \geq 0, k \in \mathbb{N} \right\}.$\\ Here, the level is defined as the number of $x_{i}$ equal to zero. \item[$\cdot$] The filtration by the motivic ($\mathbb{Q}/\mathbb{Q},2/1$)-level, $$\mathcal{F}_{i}\mathcal{H}\mathrel{\mathop:}=\left\{ \mathfrak{Z} \in \mathcal{H}, \textrm{ such that } D^{-1}_{1}\mathfrak{Z} \in \mathcal{F}_{i-1} \mathcal{H} \text{ , } \forall r>0, D^{1}_{2r+1}\mathfrak{Z} \in \mathcal{F}_{i}\mathcal{H} \right\}.$$ \begin{center} I.e. $\mathcal{F}_{i}$ is the largest submodule such that $\mathcal{F}_{i} / \mathcal{F}_{i-1}$ is killed by $D_{1}$.
\end{center} \end{itemize} \end{defi} This level filtration commutes with the increasing depth filtration.\\ \\ \textsc{Remarks}: The increasing or decreasing filtration defined by the number of $1$'s appearing in the motivic multiple zeta values is not preserved by the coproduct, since the number of $1$'s can either decrease or increase (by at most $1$), and is therefore not \textit{motivic}.\\ \\ Let us list some consequences of the results in $\S 5.2.3$, which in particular generalize a result of P. Deligne (cf. $\cite{De}$): \begin{coro} The map $\mathcal{G}^{\mathcal{MT}} \rightarrow \mathcal{G}^{\mathcal{MT}'}$ is an isomorphism.\\ The elements of $\mathcal{B}_{n}$, $\zeta^{\mathfrak{m}}(2x_{1}+1, \ldots, \overline{2 x_{p}+1}) \zeta(2)^{k}$ of weight $n$, form a basis of the motivic Euler sums of weight $n$, $\mathcal{H}^{2}_{n}=\mathcal{H}^{\mathcal{MT}_{2}}_{n}$, and define a $\mathbb{Z}_{odd}$-structure on the motivic Euler sums. \end{coro} \noindent The period map, $\text{per}: \mathcal{H} \rightarrow \mathbb{C}$, induces the following result for the Euler sums: \begin{center} Each Euler sum is a $\mathbb{Z}_{odd}$-linear combination of Euler sums \\ $\zeta(2x_{1}+1, \ldots, \overline{2 x_{p}+1}) \zeta(2)^{k}, k\geq 0, x_{i} \geq 0$ of the same weight.
\end{center} \newpage \noindent Here is the result on the $0^{\text{th}}$ level of the Galois descent from $\mathcal{H}^{1}$ to $\mathcal{H}^{2}$: \begin{coro} $$\mathcal{F}_{0}\mathcal{H}^{\mathcal{MT}_{2}}=\mathcal{F}_{0}\mathcal{H}^{2}=\mathcal{H}^{\mathcal{MT}_{1}}=\mathcal{H}^{1} .$$ A basis of the motivic multiple zeta values in weight $n$ is formed by the terms of $\mathcal{B}_{n}$ of level $0$, each corrected by a linear combination of elements of $\mathcal{B}_{n}$ of level $\geq 1$: \begin{multline}\nonumber \mathcal{B}_{n}^{1}\mathrel{\mathop:}=\left\{ \zeta^{\mathfrak{m}}(2x_{1}+1, \ldots, \overline{2x_{p}+1})\zeta^{\mathfrak{m}}(2)^{s} + \sum_{y_{i} \geq 0 \atop \text{at least one } y_{i} =0} \alpha_{\textbf{x} , \textbf{y}} \zeta^{\mathfrak{m}}(2y_{1}+1, \ldots, \overline{2y_{p}+1})\zeta^{\mathfrak{m}}(2)^{s} + \right.\\ \left. \sum_{\text{lower depth } q<p, z_{i}\geq 0 \atop \text{ at least one } z_{i} =0} \beta_{\textbf{x}, \textbf{z}} \zeta^{\mathfrak{m}}(2 z_{1}+1, \ldots, \overline{2 z_{q}+1})\zeta^{\mathfrak{m}}(2)^{s}, x_{i}>0 , \alpha_{\textbf{x} , \textbf{y}} , \beta_{\textbf{x} , \textbf{z}} \in\mathbb{Q}\right\}_{\sum x_{i}= \sum y_{i}=\sum z_{i}= \frac{n-p}{2} -s}. \end{multline} \end{coro} \paragraph{Honorary.} Concerning the first condition in $\ref{criterehonoraire}$ for being honorary: \begin{lemm}\label{condd1} Let $\zeta^{\mathfrak{m}}(n_{1},\cdots,n_{p}) \in\mathcal{H}^{2}$ be a motivic Euler sum, with $n_{i}\in\mathbb{Z}^{\ast}$, $ n_{p}\neq 1$.
Then: $$\forall i \text{ , } n_{i}\neq -1 \Rightarrow D_{1}(\zeta^{\mathfrak{m}}(n_{1},\cdots,n_{p}))=0 $$ \end{lemm} \begin{proof} Looking at all iterated integrals of length $1$ in $\mathcal{L}$, $I^{\mathfrak{l}}(a;b;c)$, $a,b,c\in \lbrace 0,\pm 1\rbrace$: the only nonzero ones are those with a consecutive $\lbrace 1,-1\rbrace$ or $\lbrace -1,1\rbrace$ sequence in the iterated integral, and whose extremities differ, that is: $$I(0;1;-1), I(0;-1;1), I(1;-1;0), I(-1;+1;0), I(-1;\pm 1;1), I(1;\pm 1;-1).$$ Moreover, they are all equal to $\pm \log^{\mathfrak{a}} (2)$ in the Hopf algebra $\mathcal{A}$. Consequently, if no $-1$ appears in the Euler sum notation, then $D_{1}$ vanishes. \end{proof} \paragraph{Comparison with Hoffman's basis. } Let us compare: \begin{itemize} \item[$(i)$] The Hoffman basis of $\mathcal{H}^{1}$, formed by motivic MZV with only $2$'s and $3$'s ($\cite{Br2}$): $$\mathcal{B}^{H}\mathrel{\mathop:}= \left\{\zeta^{\mathfrak{m}} (x_{1}, \ldots, x_{k}), \text{ where } x_{i}\in\left\{2,3\right\} \right\}.$$ \item[$(ii)$] $\mathcal{B}^{1}$, the basis of $\mathcal{H}^{1}$ previously obtained (Corollary $5.2.7$). \end{itemize} Beware, the index $p$ for $\mathcal{B}^{H}$ indicates the number of $3$'s among the $x_{i}$, whereas for $\mathcal{B}^{1}$, it still indicates the depth; in both cases, it can be seen as the \textit{motivic depth} (cf. $\S 2.4.3$): \begin{coro} $\mathcal{B}^{1}_{n,p}$ is a basis of $gr_{p}^{\mathfrak{D}} \langle\mathcal{B}^{H}_{n,p}\rangle_{\mathbb{Q}}$ and defines a $\mathbb{Z}_{\text{odd}}$-structure.\\ I.e. each element of the Hoffman basis of weight $n$ and with $p$ threes, $p>0$, decomposes into a $\mathbb{Z}_{\text{odd}}$-linear combination of elements of $\mathcal{B}^{1}_{n,p}$, plus terms of depth strictly less than $p$. \end{coro} \begin{proof} Deduced from the previous results, with the $\mathbb{Z}_{odd}$ structure of the basis for Euler sums.
\end{proof} \subsubsection{\textsc{The cases } $N=3,4$.} For $N=3,4$, there is a generator in each degree $\geq 1$ and two Galois descents. \\ \begin{defi} \begin{itemize} \item[$\cdot$] \textbf{Family:} $\mathcal{B}\mathrel{\mathop:}=\left\{\zeta^{\mathfrak{m}}\left({x_{1}, \ldots,x_{p}\atop 1, \ldots , 1, \xi }\right) (2i \pi)^{s,\mathfrak{m}}, x_{i} \geq 1, s \geq 0 \right\}$. \item[$\cdot$] \textbf{Level:} $$\begin{array}{lll} \text{ The $(k_{N}/k_{N},P/1)$-level } & \text{ is defined as } & \text{ the number of $x_{i}$ equal to 1 }\\ \text{ The $(k_{N}/\mathbb{Q},P/P)$-level } & \text{ } & \text{ the number of $x_{i}$ even }\\ \text{ The $(k_{N}/\mathbb{Q},P/1)$-level } & \text{ } & \text{ the number of even $x_{i}$ or equal to $1$ } \end{array} $$ \item[$\cdot$] \textbf{Filtrations by the motivic level:} $\mathcal{F}^{\mathcal{d}} _{-1} \mathcal{H}^{N}=0$ and $\mathcal{F}^{\mathcal{d}} _{i} \mathcal{H}^{N}$ is the largest submodule of $\mathcal{H}^{N}$ such that $\mathcal{F}^{\mathcal{d}}_{i}\mathcal{H}^{N}/\mathcal{F}^{\mathcal{d}} _{i-1}\mathcal{H}^{N}$ is killed by $\mathscr{D}^{\mathcal{d}}$, where $$\mathscr{D}^{\mathcal{d}} = \begin{array}{ll} \lbrace D^{\xi}_{1} \rbrace & \text{ for } \mathcal{d}=(k_{N}/k_{N},P/1)\\ \lbrace(D^{\xi}_{2r})_{r>0} \rbrace & \text{ for } \mathcal{d}=(k_{N}/\mathbb{Q},P/P)\\ \lbrace D^{\xi}_{1},(D^{\xi}_{2r})_{r>0} \rbrace & \text{ for } \mathcal{d}=(k_{N}/\mathbb{Q},P/1) \\ \end{array}. $$ \end{itemize} \end{defi} \textsc{Remarks}: \begin{itemize} \item[$\cdot$] As before, the increasing, or decreasing, filtration that we could define by the number of $1$'s (resp. the number of even $x_{i}$) appearing in the motivic multiple zeta values is not preserved by the coproduct, since the number of $1$'s can either decrease or increase (by at most $1$), and so is not motivic.
\item[$\cdot$] An effective way of seeing these motivic level filtrations, giving a recursive criterion: $$\hspace*{-0.5cm}\mathcal{F}_{i}^{k_{N}/\mathbb{Q},P/P }\mathcal{H}= \left\{ \mathfrak{Z} \in \mathcal{H}, \textrm{ s. t. } \forall r > 0 \text{ , } D^{\xi}_{2r}(\mathfrak{Z}) \in \mathcal{F}_{i-1}^{k_{N}/\mathbb{Q},P/P}\mathcal{H} \text{ , } \forall r \geq 0 \text{ , } D^{\xi}_{2r+1}(\mathfrak{Z}) \in \mathcal{F}_{i}^{ k_{N}/\mathbb{Q},P/P}\mathcal{H} \right\}.$$ \end{itemize} \noindent We deduce from the results in $\S 5.2.3$ a result of P. Deligne ($i=0$, cf. $\cite{De}$): \begin{coro} The elements of $\mathcal{B}^{N}_{n,p, \geq i}$ form a basis of $gr_{p}^{\mathfrak{D}} \mathcal{H}_{n}/ \mathcal{F}_{i-1} \mathcal{H}_{n}$.\\ In particular, the map $\mathcal{G}^{\mathcal{MT}_{N}} \rightarrow \mathcal{G}^{\mathcal{MT}_{N}'}$ is an isomorphism. The elements of $\mathcal{B}_{n}^{N}$ form a basis of the motivic multiple zeta values relative to $\mu_{N}$, $\mathcal{H}_{n}^{N}$. \end{coro} The level $0$ of the filtrations considered for $N' \vert N\in \left\lbrace 3,4 \right\rbrace $ gives the Galois descents: \begin{coro} A basis of $\mathcal{H}_{n}^{N'} $ is formed by elements of $\mathcal{B}_{n}^{N}$ of level $0$, each corrected by a linear combination of elements of $\mathcal{B}_{n}^{N}$ of level $ \geq 1$. In particular, with $\xi$ primitive: \begin{itemize} \item[$\cdot$] \textbf{Galois descent} from $N'=1$ to $N=3,4$: A basis of motivic multiple zeta values: $$\hspace*{-0.5cm}\mathcal{B}^{1 ; N} \mathrel{\mathop:}= \left\{ \zeta^{\mathfrak{m}}\left({2x_{1}+1, \ldots, 2x_{p}+1\atop 1, \ldots, 1, \xi} \right) \zeta^{\mathfrak{m}}(2)^{s} + \sum_{y_{i} \geq 0 \atop \text{ at least one $y_{i}$ even or } = 1} \alpha_{\textbf{x},\textbf{y}} \zeta^{\mathfrak{m}} \left({y_{1}, \ldots, y_{p}\atop 1, \ldots, 1, \xi } \right)\zeta^{\mathfrak{m}}(2)^{s} \right.$$ $$ \left.
+ \sum_{\text{ lower depth } q<p, \atop \text{ at least one even or } = 1} \beta_{\textbf{x},\textbf{z}} \zeta^{\mathfrak{m}}\left({z_{1}, \ldots, z_{q}\atop 1, \ldots, 1, \xi } \right)\zeta^{\mathfrak{m}}(2)^{s} \text{ , } x_{i}>0 , \alpha_{\textbf{x},\textbf{y}} , \beta_{\textbf{x},\textbf{z}} \in\mathbb{Q} \right\}. $$ \item[$\cdot$] \textbf{Galois descent} from $N'=2$ to $N=4$: A basis of motivic Euler sums: $$\hspace*{-0.5cm}\mathcal{B}^{2; 4}\mathrel{\mathop:}= \left\{ \zeta^{\mathfrak{m}} \left({2x_{1}+1, \ldots, 2x_{p}+1\atop 1, \ldots, 1, \xi_{4}} \right)\zeta^{\mathfrak{m}}(2)^{s} + \sum_{y_{i}>0 \atop \text{at least one even}} \alpha_{\textbf{x},\textbf{y}} \zeta^{\mathfrak{m}}\left({y_{1}, \ldots, y_{p}\atop 1, \ldots, 1, \xi_{4} } \right)\zeta^{\mathfrak{m}}(2)^{s} \right.$$ $$ \left. +\sum_{\text{lower depth } q<p \atop z_{i}>0, \text{at least one even}} \beta_{\textbf{x},\textbf{z}} \zeta^{\mathfrak{m}}\left( z_{1}, \ldots, z_{q} \atop 1, \ldots, 1, \xi_{4} \right) \zeta^{\mathfrak{m}}(2)^{s} \text{ , } x_{i}\geq 0 , \alpha_{\textbf{x},\textbf{y}}, \beta_{\textbf{x},\textbf{z}}\in\mathbb{Q} \right\} .$$ \item[$\cdot$] Similarly, replacing $\xi_{4}$ by $\xi_{3}$ in $\mathcal{B}^{2; 4}$, this gives a basis of: $$\mathcal{F}^{k_{3}/\mathbb{Q},3/3}_{0} \mathcal{H}_{n}^{3}=\boldsymbol{\mathcal{H}_{n}^{\mathcal{MT}(\mathbb{Z}[\frac{1}{3}])}}.$$ \item[$\cdot$] A basis of $\mathcal{F}^{k_{N}/k_{N},P/1}_{0} \mathcal{H}_{n}^{N}=\boldsymbol{\mathcal{H}_{n}^{\mathcal{MT}(\mathcal{O}_{N})}}$, with $N= 3,4$: $$\hspace*{-0.5cm}\mathcal{B}^{N \text{ unram}}\mathrel{\mathop:}= \left\{ \zeta^{\mathfrak{m}} \left({x_{1}, \ldots, x_{p} \atop 1, \ldots, 1, \xi} \right)\zeta^{\mathfrak{m}}(2)^{s} + \sum_{y_{i}>0 \atop \text{at least one } 1} \alpha_{\textbf{x},\textbf{y}} \zeta^{\mathfrak{m}}\left({y_{1}, \ldots, y_{p}\atop 1, \ldots, 1, \xi} \right)\zeta^{\mathfrak{m}}(2)^{s} \right.$$ $$ \left. 
+\sum_{\text{lower depth } q<p \atop z_{i}>0, \text{at least one } 1} \beta_{\textbf{x},\textbf{z}} \zeta^{\mathfrak{m}}\left( z_{1}, \ldots, z_{q} \atop 1, \ldots, 1, \xi \right) \zeta^{\mathfrak{m}}(2)^{s} \text{ , } x_{i} > 0 , \alpha_{\textbf{x},\textbf{y}}, \beta_{\textbf{x},\textbf{z}}\in\mathbb{Q} \right\} .$$ \end{itemize} \end{coro} \noindent \texttt{Nota Bene:} Notice that for the last two level $0$ spaces, $\mathcal{H}_{n}^{\mathcal{MT}(\mathcal{O}_{N})}$, $N=3,4$ and $\mathcal{H}_{n}^{\mathcal{MT}(\mathbb{Z}[\frac{1}{3}])}$, we still do not have another way to reach them, since those categories of mixed Tate motives are not simply generated by a motivic fundamental group. \subsubsection{\textsc{The case } $N=8$.} For $N=8$ there are two generators in each degree $\geq 1$ and three possible Galois descents: with $\mathcal{H}^{4}$, $\mathcal{H}^{2}$ or $\mathcal{H}^{1}$.\\ \begin{defi} \begin{itemize} \item[$\cdot$] \textbf{Family:} $\mathcal{B}\mathrel{\mathop:}=\left\{\zeta^{\mathfrak{m}}\left( {x_{1}, \ldots,x_{p}\atop \epsilon_{1}, \ldots , \epsilon_{p-1},\epsilon_{p} \xi }\right)(2i \pi)^{s,\mathfrak{m}}, x_{i} \geq 1, \epsilon_{i}\in \left\{\pm 1\right\}, s \geq 0 \right\}$. \item[$\cdot$] \textbf{Level}, denoted $i$: $$\begin{array}{lll} \text{ The $(k_{8}/k_{4},2/2)$-level } & \text{ is the number of } & \text{ $\epsilon_{j}$ equal to $-1$ } \\ \text{ The $(k_{8}/\mathbb{Q},2/2)$-level } & \text{ } & \text{ $\epsilon_{j}$ equal to $-1$ $+$ even $x_{j}$ } \\ \text{ The $(k_{8}/\mathbb{Q},2/1)$-level } & \text{ } & \text{ $\epsilon_{j}$ equal to $-1$, $+$ even $x_{j}$ $+$ $x_{j}$ equal to $1$.
} \end{array}$$ \item[$\cdot$] \textbf{Filtrations by the motivic level:} $\mathcal{F}^{\mathcal{d}} _{-1} \mathcal{H}^{8}=0$ and $\mathcal{F}^{\mathcal{d}} _{i} \mathcal{H}^{8}$ is the largest submodule of $\mathcal{H}^{8}$ such that $\mathcal{F}^{\mathcal{d}}_{i}\mathcal{H}^{8}/\mathcal{F}^{\mathcal{d}} _{i-1}\mathcal{H}^{8}$ is killed by $\mathscr{D}^{\mathcal{d}}$, where $$\mathscr{D}^{\mathcal{d}} = \begin{array}{ll} \left\lbrace (D^{\xi}_{r}- D^{-\xi}_{r})_{r>0} \right\rbrace & \text{ for } \mathcal{d}=(k_{8}/k_{4},2/2)\\ \left\lbrace (D^{\xi}_{2r+1}- D^{-\xi}_{2r+1})_{r\geq 0}, (D^{\xi}_{2r})_{r>0},( D^{-\xi}_{2r})_{r>0} \right\rbrace & \text{ for } \mathcal{d}=(k_{8}/\mathbb{Q},2/2)\\ \left\lbrace (D^{\xi}_{2r+1}- D^{-\xi}_{2r+1})_{r> 0}, D^{\xi}_{1}, D^{-\xi}_{1}, (D^{\xi}_{2r})_{r>0},( D^{-\xi}_{2r})_{r>0} \right\rbrace & \text{ for } \mathcal{d}=(k_{8}/\mathbb{Q},2/1) \\ \end{array}. $$ \end{itemize} \end{defi} \begin{coro} A basis of $\mathcal{H}_{n}^{N'} $ is formed by elements of $\mathcal{B}_{n}^{N}$ of level $0$, each corrected by a linear combination of elements of $\mathcal{B}_{n}^{N}$ of level $ \geq 1$. In particular, with $\xi$ primitive: \begin{description} \item[$\boldsymbol{8 \rightarrow 1} $:] A basis of MMZV: $$\hspace*{-0.5cm}\mathcal{B}^{1;8}\mathrel{\mathop:}= \left\{ \zeta^{\mathfrak{m}}\left( 2x_{1}+1, \ldots, 2x_{p}+1 \atop 1, \ldots, 1, \xi \right)\zeta^{\mathfrak{m}}(2)^{s} + \sum_{y_{i}, \text{ at least one even or } =1 \atop \text{ or one } \epsilon_{i}=-1 } \alpha_{\textbf{x},\textbf{y}} \zeta^{\mathfrak{m}}\left( y_{1}, \ldots, y_{p} \atop \epsilon_{1}, \ldots, \epsilon_{p-1}, \epsilon_{p}\xi \right)\zeta^{\mathfrak{m}}(2)^{s} \right.$$ $$\left.
+ \sum_{q<p \text{ lower depth, level } \geq 1} \beta_{\textbf{x},\textbf{z}} \zeta^{\mathfrak{m}}\left(z_{1}, \ldots, z_{q} \atop \widetilde{\epsilon}_{1}, \ldots, \widetilde{\epsilon}_{q}\xi \right)\zeta^{\mathfrak{m}}(2)^{s} \text{ , }x_{i}>0 , \alpha_{\textbf{x},\textbf{y}}, \beta_{\textbf{x},\textbf{z}}\in\mathbb{Q}\right\}.$$ \item[$\boldsymbol{8 \rightarrow 2 } $:] A basis of motivic Euler sums: $$\hspace*{-0.5cm}\mathcal{B}^{2;8} \mathrel{\mathop:}= \left\{ \zeta^{\mathfrak{m}} \left(2x_{1}+1, \ldots, 2x_{p}+1 \atop 1, \ldots, 1, \xi\right)\zeta^{\mathfrak{m}}(2)^{s} +\sum_{y_{i}, \text{ at least one even} \atop \text{or one }\epsilon_{i}=-1} \alpha_{\textbf{ x},\textbf{y}} \zeta^{\mathfrak{m}}\left( y_{1}, \ldots, y_{p} \atop \epsilon_{1}, \ldots, \epsilon_{p-1}, \epsilon_{p}\xi \right)\zeta^{\mathfrak{m}}(2)^{s} \right.$$ $$\left. + \sum_{\text{lower depth } q<p \atop \text{with level } \geq 1 } \beta_{\textbf{x},\textbf{z}} \zeta^{\mathfrak{m}}\left(z_{1}, \ldots, z_{q} \atop \widetilde{\epsilon}_{1}, \ldots, \widetilde{\epsilon}_{q}\xi\right)\zeta^{\mathfrak{m}}(2)^{s} \text{ , }x_{i}\geq 0,\alpha_{\textbf{x},\textbf{y}}, \beta_{\textbf{x},\textbf{z}} \in\mathbb{Q} \right\}.$$ \item[$\boldsymbol{ 8 \rightarrow 4 } $:] A basis of MMZV relative to $\mu_{4}$: $$\hspace*{-0.5cm}\mathcal{B}^{4;8}\mathrel{\mathop:}= \left\{ \zeta^{\mathfrak{m}}\left( x_{1}, \ldots, x_{p} \atop 1, \ldots, 1, \xi \right)(2i\pi)^{s} + \sum_{\text{ at least one }\epsilon_{i}=-1} \alpha_{\textbf{x}, \textbf{y}} \zeta^{\mathfrak{m}}\left(y_{1}, \ldots, y_{p} \atop \epsilon_{1}, \ldots, \epsilon_{p-1},\epsilon_{p} \xi \right)(2 i \pi)^{s} \right.$$ $$ \left.
+ \sum_{\text{lower depth, level } \geq 1} \beta_{\textbf{x},\textbf{z}} \zeta^{\mathfrak{m}}\left( z_{1}, \ldots,z_{q} \atop \widetilde{\epsilon}_{1}, \ldots, \widetilde{\epsilon}_{q}\xi \right)(2i \pi)^{s} \text{ , } \alpha_{\textbf{x},\textbf{y}}, \beta_{\textbf{x},\textbf{z}}\in\mathbb{Q} \right\}.$$ \end{description} \end{coro} \subsubsection{\textsc{The case } $N=\mlq 6 \mrq$.} For the unramified category $\mathcal{MT}(\mathcal{O}_{6})$, there is one generator in each degree $>1$ and one Galois descent with $\mathcal{H}^{1}$.\\ \\ First, let us point out this sufficient condition for a MMZV$_{\mu_{6}}$ to be unramified: \begin{lemm} $$\text{Let } \quad \zeta^{\mathfrak{m}} \left( n_{1},\cdots,n_{p} \atop \epsilon_{1}, \ldots, \epsilon_{p} \right) \in\mathcal{H}^{\mathcal{MT}(\mathcal{O}_{6} \left[ \frac{1}{6}\right] )} \text{ a motivic MZV} _{\mu_{6}}, \quad \text{ such that: \footnote{In the iterated integral notation, the associated roots of unity are $\eta_{i}\mathrel{\mathop:}= (\epsilon_{i}\cdots \epsilon_{p})^{-1}$.}}$$ $$\begin{array}{ll} & \text{ Each } \eta_{i} \in \lbrace 1, \xi_{6} \rbrace \\ \textsc{ or }& \text{ Each } \eta_{i} \in \lbrace 1, \xi^{-1}_{6} \rbrace \end{array} \quad \quad \text{ Then, } \quad \zeta^{\mathfrak{m}} \left( n_{1},\cdots,n_{p} \atop \epsilon_{1}, \ldots, \epsilon_{p} \right) \in \mathcal{H}^{\mathcal{MT}(\mathcal{O}_{6})}$$ \end{lemm} \begin{proof} Immediate, by Corollary $\ref{ramif346}$, and with the expression of the derivations $(\ref{drz})$, since these families are stable under the coaction. \end{proof} \begin{defi} \begin{itemize} \item[$\cdot$] \textbf{Family}: $\mathcal{B}\mathrel{\mathop:}=\left\{\zeta^{\mathfrak{m}}\left( {x_{1}, \ldots,x_{p}\atop 1, \ldots , 1,\xi } \right)(2i \pi)^{s,\mathfrak{m}}, x_{i} > 1, s \geq 0 \right\}$. \item[$\cdot$] \textbf{Level:} The $(k_{6}/\mathbb{Q},1/1)$-level, denoted $i$, is defined as the number of even $x_{j}$.
\item[$\cdot$] \textbf{Filtration by the motivic } $(k_{6}/\mathbb{Q},1/1)$-\textbf{level}: \begin{center} $\mathcal{F}^{(k_{6}/\mathbb{Q},1/1)} _{-1} \mathcal{H}^{6}=0$ and $\mathcal{F}^{(k_{6}/\mathbb{Q},1/1)} _{i} \mathcal{H}^{6}$ is the largest submodule of $\mathcal{H}^{6}$ such that $\mathcal{F}^{(k_{6}/\mathbb{Q},1/1)}_{i}\mathcal{H}^{6}/\mathcal{F}^{(k_{6}/\mathbb{Q},1/1)} _{i-1}\mathcal{H}^{6}$ is killed by $\mathscr{D}^{(k_{6}/\mathbb{Q},1/1)}=\left\lbrace D^{\xi}_{2r} , r>0 \right\rbrace $. \end{center} \end{itemize} \end{defi} \begin{coro} Galois descent from $N'=1$ to $N=\mlq 6 \mrq$ unramified. A basis of MMZV: $$\mathcal{B}^{1;6} \mathrel{\mathop:}= \left\{ \zeta^{\mathfrak{m}}\left( 2x_{1}+1, \ldots, 2x_{p}+1 \atop 1, \ldots, 1, \xi \right)\zeta^{\mathfrak{m}}(2)^{s} + \sum_{y_{i} \text{ at least one even}} \alpha_{\textbf{x},\textbf{y}}\zeta^{\mathfrak{m}}\left( y_{1}, \ldots, y_{p} \atop 1, \ldots, 1, \xi \right)\zeta^{\mathfrak{m}}(2)^{s} \right.$$ $$\left. +\sum_{\text{lower depth, level } \geq 1}\beta_{\textbf{x},\textbf{z}} \zeta^{\mathfrak{m}} \left( z_{1}, \ldots, z_{q} \atop 1, \ldots, 1, \xi \right)\zeta^{\mathfrak{m}}(2)^{s} \text{ , } \alpha_{\textbf{x},\textbf{y}}, \beta_{\textbf{x},\textbf{z}}\in \mathbb{Q}, x_{i}>0 \right\}.$$ \end{coro} \chapter{Miscellaneous uses of the coaction} \section{Maximal depth terms, $\boldsymbol{gr^{\mathfrak{D}}_{\max}\mathcal{H}_{n}}$} The coaction enables us to compute, by a recursive procedure, the coefficients of the terms of \textit{maximal depth}, i.e. the projection on the graded $\boldsymbol{gr^{\mathfrak{D}}_{\max}\mathcal{H}_{n}}$. 
In particular, let us look at: \begin{itemize} \item[$\cdot$] For $N=1$, when the weight is a multiple of $3$ ($w=3d$), the maximal depth is $p=d$, and: $$gr^{\mathfrak{D}}_{d}\mathcal{H}_{3d} =\mathbb{Q} \zeta^{\mathfrak{m}}(3)^{d}.$$ \item[$\cdot$] Another simple case is for $N=2,3,4$, when the weight equals the depth, which is referred to as the \textit{diagonal comodule}: $$gr^{\mathfrak{D}}_{p}\mathcal{H}_{p} =\mathbb{Q} \zeta^{\mathfrak{m}}\left( 1 \atop \xi_{N}\right) ^{p}.$$ \end{itemize} The space $gr^{\mathfrak{D}}_{\max}\mathcal{H}^{N}_{n}$ is usually more than one-dimensional, but the methods presented below should generalize. \subsection{MMZV, weight $\boldsymbol{3d}$.} \paragraph{Preliminaries: Linearized Ihara action.} The linearization of the map $\circ: U\mathfrak{g} \otimes U\mathfrak{g} \rightarrow U\mathfrak{g}$ induced by the Ihara action (cf. $\S 2.4$) can be defined recursively on words, with $\eta\in\mu_{N}$, by:\nomenclature{$\underline{\circ}$}{linearized Ihara action} \begin{equation}\label{eq:circlinear} \begin{array}{lll} \underline{\circ}: \quad U\mathfrak{g} \otimes U\mathfrak{g} \rightarrow U\mathfrak{g}: & a \underline{\circ} e_{0}^{n} & = e_{0}^{n} a \\ & a \underline{\circ} e_{0}^{n}e_{\eta} w & = e_{0}^{n} ([\eta].a) e_{\eta}w + e_{0}^{n} e_{\eta } ([\eta].a)^{\ast} w + e_{0}^{n} e_{\eta} (a\underline{\circ} w), \\ \end{array} \end{equation} where ${\ast}$ stands for the following involution: $$(a_{1} \cdots a_{n})^{\ast}\mathrel{\mathop:}=(-1)^{n}a_{n} \cdots a_{1}.$$ From now on in this paragraph, let $N=1$ and let us use the \textit{commutative polynomial setting}, introducing the isomorphism:\nomenclature{ $\rho$}{isomorphism used to pass to a commutative polynomial setting} \begin{align}\label{eq:rho} \rho: U \mathfrak{g} & \longrightarrow \mathbb{Q} \langle Y\rangle\mathrel{\mathop:}=\mathbb{Q} \langle y_{0}, y_{1}, \ldots, y_{n},\cdots \rangle \\ e_{0}^{n_{0}}e_{1} e_{0}^{n_{1}} \cdots e_{1} e_{0}^{n_{p}} & \longmapsto y_{0}^{n_{0}} y_{1}^{n_{1}} \cdots
y_{p}^{n_{p}} \nonumber \end{align} Recall that if $\Phi\in U \mathfrak{g}$ satisfies the linearized $\shuffle$ relation, then $\Phi$ is primitive for $\Delta_{\shuffle}$, or equivalently $\phi_{u \shuffle v}=0$, where $\phi_{w}$ denotes the coefficient of $w$ in $\Phi$. In particular, this is verified for $\Phi$ in the motivic Lie algebra $\mathfrak{g}^{\mathfrak{m}}$.\\ This property implies for $f=\rho(\Phi)$ a translation invariance (cf. $6.2$ in $\cite{Br3}$): \begin{equation} \label{eq:translationinv} f(y_{0},y_{1},\cdots, y_{p})= f(0,y_{1}-y_{0}, \ldots, y_{p}-y_{0}). \end{equation} Let us consider the map: \begin{align} \label{eq:fbar} \mathbb{Q} \langle Y\rangle & \longrightarrow\mathbb{Q} \langle X\rangle = \mathbb{Q} \langle x_{1}, \ldots, x_{n},\cdots\rangle , & \\ \quad \quad f & \longmapsto \overline{f} & \text{ where } \overline{f}(x_{1},\cdots, x_{p})\mathrel{\mathop:}=f(0,x_{1},\cdots, x_{p}).\nonumber \end{align} If $f$ is translation invariant, then $f(y_{0}, y_{1}, \ldots, y_{p})=\overline{f}(y_{1}-y_{0},\cdots, y_{p}-y_{0})$.\\ The image of $\mathfrak{g}^{\mathfrak{m}}$ under $\rho$ is contained in the subspace of polynomials in the $y_{i}$ that are invariant under translation.
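For instance, consider the element $(\operatorname{ad} e_{0})^{2}(e_{1})=e_{0}^{2}e_{1}-2e_{0}e_{1}e_{0}+e_{1}e_{0}^{2}$, which coincides, up to sign conventions, with the depth $1$ generator $\overline{\sigma}_{3}$ used below. Its image under $\rho$ is $$\rho\left( e_{0}^{2}e_{1}-2e_{0}e_{1}e_{0}+e_{1}e_{0}^{2}\right) = y_{0}^{2}-2y_{0}y_{1}+y_{1}^{2}=(y_{1}-y_{0})^{2},$$ which is indeed invariant under translation, with $\overline{f}(x_{1})=f(0,x_{1})=x_{1}^{2}$, in accordance with $\overline{\rho}(\overline{\sigma}_{2n+1})=x_{1}^{2n}$ below.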
Hence, in the following, we can equivalently consider $\phi\in\mathfrak{g}^{\mathfrak{m}}$, $f=\rho(\phi)$, or $\overline{f}$.\\ \\ Since the linearized action $\underline{\circ}$ respects the $\mathcal{D}$-grading, it defines, via the isomorphism $\rho: gr^{r}_{\mathfrak{D}} U \mathfrak{g} \rightarrow \mathbb{Q}[y_{0}, \ldots, y_{r}]$, the graded version of $(\ref{eq:rho})$, a map: $$\underline{\circ}: \mathbb{Q}[y_{0}, \ldots, y_{r}]\otimes \mathbb{Q}[y_{0}, \ldots, y_{s}] \rightarrow \mathbb{Q}[y_{0}, \ldots, y_{r+s}] \text{ , which in the polynomial representation is:}$$ \begin{multline}\label{eq:circpolynom} f\underline{\circ} g (y_{0}, \ldots, y_{r+s})=\sum_{i=0}^{s} f(y_{i}, \ldots, y_{i+r})g(y_{0}, \ldots, y_{i}, y_{i+r+1}, \ldots, y_{r+s}) \\ + (-1)^{\deg f+r}\sum_{i=1}^{s} f(y_{i+r}, \ldots, y_{i})g(y_{0}, \ldots, y_{i-1}, y_{i+r}, \ldots, y_{r+s}). \end{multline} Or via the isomorphism $\overline{\rho}: gr^{r}_{\mathfrak{D}} U \mathfrak{g} \rightarrow \mathbb{Q}[x_{1}, \ldots, x_{r}]$, the graded version of $(\ref{eq:fbar})\circ (\ref{eq:rho}) $: \begin{multline}\label{eq:circpolynomx} f\underline{\circ} g (x_{1}, \ldots, x_{r+s})=\sum_{i=0}^{s} f(x_{i+1}-x_{i}, \ldots, x_{i+r}-x_{i})g(x_{1}, \ldots, x_{i}, x_{i+r+1}, \ldots, x_{r+s}) \\ + (-1)^{\deg f+r}\sum_{i=1}^{s} f(x_{i+r-1}-x_{i+r}, \ldots, x_{i}-x_{i+r})g(x_{1}, \ldots, x_{i-1}, x_{i+r}, \ldots, x_{r+s}). \end{multline} \paragraph{Coefficient of $\boldsymbol{\zeta(3)^{d}}$.} If the weight $w$ is divisible by $3$, for motivic multiple zeta values it boils down to computing the coefficient of $\zeta^{\mathfrak{m}}(3)^{\frac{w}{3}}$, and a recursive procedure is given in Lemma $6.1.1$.\\ \\ Since $gr_{d}^{\mathfrak{D}} \mathcal{H}_{3d}^{1} $ is one-dimensional, generated by $\zeta^{\mathfrak{m}}(3)^{d}$, we can consider the projection: \begin{equation} \vartheta : gr_{d}^{\mathfrak{D}} \mathcal{H}_{3d}^{1} \rightarrow \mathbb{Q}.
\end{equation} Given a motivic multiple zeta value $\zeta^{\mathfrak{m}}(n_{1}, \ldots, n_{d})$ of depth $d$ and weight $w=3d$, there exists a rational $\alpha_{\underline{\textbf{n}}}= \vartheta(\zeta^{\mathfrak{m}}(n_{1}, \ldots, n_{d}))$ such that: \begin{framed} \begin{equation} \zeta^{\mathfrak{m}}(n_{1}, \ldots, n_{d}) = \frac{\alpha_{\underline{\textbf{n}}}} {d!} \zeta^{\mathfrak{m}}(3)^{d} + \text{ terms of depth strictly smaller than } d. \footnote{The terms of depth strictly smaller than $d$ can be expressed in terms of the Deligne basis, for instance.} \end{equation} \end{framed} \noindent In the depth graded, in depth $1$, $\partial \mathfrak{g}^{\mathfrak{m}}_{1}$, the generators are: $$\overline{\sigma}_{2i+1}= (-1)^{i} (\text{ad} e_{0})^{2i} (e_{1}) .$$ We are looking at, in the depth graded: \begin{equation} \label{eq:expcirc3} \exp_{\circ}(\overline{\sigma_{3}})\mathrel{\mathop:}=\sum_{n=0}^{\infty}\frac{1}{n!} \overline{\sigma_{3}} \underline{\circ} \cdots \underline{\circ} \overline{\sigma_{3}}= \sum_{n=0}^{\infty}\frac{1}{n!} (\text{ad}(e_{0})^{2} (e_{1}))^{\underline{\circ} n}.
\end{equation} In the commutative polynomial representation, via $\overline{\rho}$, since $ \overline{\rho}(\overline{\sigma}_{2n+1})= x_{1}^{2n}$, it becomes: $$\sum_{n=0}^{\infty}\frac{1}{n!} x_{1}^{2} \underline{\circ} (x_{1}^{2} \underline{\circ}( \cdots (x_{1}^{2} \underline{\circ}x_{1}^{2} ) \cdots )).$$ \begin{lemm} The coefficient of $\zeta^{\mathfrak{m}}(3)^{p}$ in $\zeta^{\mathfrak{m}}(n_{1}, \ldots, n_{p})$ of weight $3p$ is given recursively by: \begin{multline} \label{coeffzeta3} \alpha_{n_{1}, \ldots, n_{p}}= \delta_{n_{p}=3} \alpha_{n_{1}, \ldots, n_{p-1}}\\ +\sum_{k=1 \atop n_{k}=1 }^{p} \left( \delta_{n_{k-1}\geq 3} \alpha_{n_{1}, \ldots, n_{k-1}-2,n_{k+1}, \ldots, n_{p}} -\delta_{n_{k+1}\geq 3} \alpha_{n_{1}, \ldots, n_{k-1},n_{k+1}-2, \ldots, n_{p}} \right) \\ +2 \sum_{k=1 \atop n_{k}=2 }^{p} \left(-\delta_{n_{k-1}\geq 3} \alpha_{n_{1}, \ldots, n_{k-1}-2,n_{k+1}, \ldots, n_{p}} + \delta_{n_{k+1}\geq 3} \alpha_{n_{1}, \ldots, n_{k-1},n_{k+1}-2, \ldots, n_{p}} \right) . \end{multline} \end{lemm} \noindent \textsc{Remarks}: \begin{itemize} \item[$\cdot$] This is proved for motivic multiple zeta values; by the period map, it also applies to multiple zeta values. \item[$\cdot$] This lemma (like the next, more precise one) could be generalized to unramified motivic Euler sums. \item[$\cdot$] The coefficients $\alpha$ are all integers. \end{itemize} \begin{proof} Let us define recursively: \begin{equation} P_{n+1} (x_{1},\cdots, x_{n+1})\mathrel{\mathop:}=x_{1}^{2} \underline{\circ}P_{n} (x_{1},\cdots, x_{n}).
\end{equation} By the definition of the linearized Ihara action $(\ref{eq:circpolynomx})$: \begin{multline} \nonumber P_{n+1} (x_{1},\cdots, x_{n+1}) =\sum_{i=0}^{n} (x_{i+1}-x_{i})^{2} P_{n} (x_{1},\cdots, x_{i}, x_{i+2}, \ldots, x_{n+1}) \\ - \sum_{i=1}^{n} (x_{i+1}-x_{i})^{2} P_{n} (x_{1},\cdots, x_{i-1}, x_{i+1}, \ldots, x_{n+1})\\ = (x_{n+1}-x_{n})^{2} P_{n}(x_{1}, \ldots, x_{n})+ \sum_{i=0}^{n-1} (x_{i}-x_{i+2})(x_{i}+x_{i+2}-2x_{i+1}) P_{n} (x_{1},\cdots, x_{i}, x_{i+2}, \ldots, x_{n+1}). \end{multline} Here, by convention, $x_{0}=0$. Turning now to the coefficients $c^{\textbf{i}}$ defined by: $$ P_{p} (x_{1},\cdots, x_{p})= \sum c^{\textbf{i}} x_{1}^{i_{1}}\cdots x_{p}^{i_{p}}, \quad \text{ we deduce: } $$ \begin{multline} \nonumber c^{i_{1}, \ldots, i_{p}}= -\delta_{i_{1}=0 \atop i_{2} \geq 2} c^{i_{2}-2,i_{3}, \ldots, i_{p}} + \delta_{i_{p}=2 } c^{i_{1},\cdots, i_{p-1}} + \delta_{i_{p}=0 \atop i_{p-1} \geq 2} c^{i_{1}, \ldots, i_{p-2},i_{p-1}-2} - 2 \delta_{i_{p}=1 \atop i_{p-1} \geq 2} c^{i_{1}, \ldots, i_{p-2}, i_{p-1}-1} \\ + \sum_{k=2 \atop i_{k}=0 }^{p-1} \left( \delta_{i_{k-1}\geq 2} c^{i_{1}, \ldots, i_{k-1}-2,i_{k+1}, \ldots, i_{p}} -\delta_{i_{k+1}\geq 2} c^{i_{1}, \ldots, i_{k-1},i_{k+1}-2, \ldots, i_{p}} \right)\\ + 2 \sum_{k=2 \atop i_{k}=1 }^{p-1} \left( -\delta_{i_{k-1}\geq 1} c^{i_{1}, \ldots, i_{k-1}-2,i_{k+1}, \ldots, i_{p}} + \delta_{i_{k+1}\geq 1} c^{i_{1}, \ldots, i_{k-1},i_{k+1}-2, \ldots, i_{p}} \right) , \end{multline} which gives the recursive formula of the lemma. \end{proof} \paragraph{Generalization.
} Another proof of the previous lemma is possible using the dual point of view with the depth-graded derivations $D_{3,p}$, looking at cuts of length $3$ and depth $1$.\footnote{The coefficient $\alpha$ indeed emerges when computing $(D_{3,p})^{\circ p}$.}\\ A motivic multiple zeta value of weight $3d$ and of depth $p>d$ can also be expressed as: \begin{equation}\label{eq:zeta3d} \zeta^{\mathfrak{m}}(n_{1}, \ldots, n_{p}) = \frac{\alpha_{\underline{\textbf{n}}}} {d!} \zeta^{\mathfrak{m}}(3)^{d} + \text{ terms of depth strictly smaller than } d. \end{equation} However, to compute this coefficient $\alpha_{\underline{\textbf{n}}}$, we can no longer work in the depth graded as before; this time, we have to consider all the possible cuts of length $3$. The coefficient then emerges when computing $\boldsymbol{(D_{3})^{\circ d}}$. \begin{lemm} The coefficient of $\zeta^{\mathfrak{m}}(3)^{d}$ in $\zeta^{\mathfrak{m}}(n_{1}, \ldots, n_{p})$ of weight $3d$, with $p>d$, is given recursively by: \begin{multline} \label{coeffzeta3g} \alpha_{n_{1}, \ldots, n_{p}}= \delta_{n_{p}=3} \alpha_{n_{1}, \ldots, n_{p-1}}\\ +\sum_{k=1 \atop n_{k}=1 }^{p} \left( \delta_{n_{k-1}\geq 3 \atop k\neq 1} \alpha_{n_{1}, \ldots, n_{k-1}-2,n_{k+1}, \ldots, n_{p}} -\delta_{n_{k+1}\geq 3 \atop k\neq p} \alpha_{n_{1}, \ldots, n_{k-1},n_{k+1}-2, \ldots, n_{p}} \right) \\ +2 \sum_{k=1 \atop n_{k}=2 }^{p} \left(-\delta_{n_{k-1}\geq 3} \alpha_{n_{1}, \ldots, n_{k-1}-2,n_{k+1}, \ldots, n_{p}} + \delta_{n_{k+1}\geq 3 \atop k\neq p} \alpha_{n_{1}, \ldots, n_{k-1},n_{k+1}-2, \ldots, n_{p}} \right) \\ + \sum_{k=1 \atop n_{k}=1, n_{k+1}=1 }^{p-1} \left(-\delta_{n_{k-1}\geq 3 \atop k\neq 1} \alpha_{n_{1}, \ldots, n_{k-1}-1,n_{k+2}, \ldots, n_{p}} + \delta_{n_{k+2}\geq 3} \alpha_{n_{1}, \ldots, n_{k-1},n_{k+2}-1, \ldots, n_{p}} \right)\\ + \sum_{k=1 \atop n_{k}=1, n_{k+1}=2 }^{p-1} \left( \delta_{n_{k-1}\geq 3 \atop \text{ or } k=1 } \alpha_{n_{1}, \ldots, n_{k-1},n_{k+2}, \ldots, n_{p}} +2
\delta_{n_{k+2}\geq 2 \atop k\neq p-1} \alpha_{n_{1}, \ldots, n_{k-1},n_{k+2}, \ldots, n_{p}} \right)\\ + \sum_{k=1 \atop n_{k}=2, n_{k+1}=1 }^{p-1} \left(-2\delta_{n_{k-1}\geq 2 \atop \text{ or } k=1} \alpha_{n_{1}, \ldots, n_{k-1},n_{k+2}, \ldots, n_{p}} - \delta_{n_{k+2}\geq 3 \atop k\neq p-1} \alpha_{n_{1}, \ldots, n_{k-1},n_{k+2}, \ldots, n_{p}} \right).\\ \\ \end{multline} \end{lemm} \begin{proof} Let us first list all the possible cuts of length $3$ and depth $1$ in an iterated integral with $\lbrace 0,1 \rbrace$:\\ \includegraphics[]{dep8.pdf}\\ The coefficient above the arrow is the coefficient of $\zeta^{\mathfrak{m}}(3)$ in $I^{\mathfrak{m}}(cut)$, using that: $$\zeta^{\mathfrak{m}}_{1}(2)=-2\zeta^{\mathfrak{m}}(3), \quad \zeta^{\mathfrak{m}}(1,2)=\zeta^{\mathfrak{m}}(3), \quad \zeta^{\mathfrak{m}}(2,1)=-2\zeta^{\mathfrak{m}}(3), \quad \zeta^{\mathfrak{m}}_{1}(1,1)=\zeta^{\mathfrak{m}}(3).$$ Therefore, when there is a $1$ followed or preceded by something greater than $4$, the contribution is $\pm 1$, while when there is a $2$ followed or preceded by something greater than $3$, the contribution is $\pm 2$, as claimed in the lemma above. The contributions of a $3$ in the third line, when followed or preceded by something greater than $2$, cancel out (except if there is a $3$ at the very end); when a $3$ is followed, resp. preceded, by a $1$, however, we assimilate it to the contribution of a $1$ preceded, resp. followed, by a $3$; this leads exactly to the penultimate lemma.\\ In addition to the cuts listed above:\\ \includegraphics[]{dep9.pdf} This analysis leads to the given formula.\\ \end{proof} In particular, a sequence of the type $\boldsymbol{Y12X}$, resp. $\boldsymbol{X21Y}$ ($X \geq 2, Y \geq 3$), contributes $3$, resp. $-3$, times the coefficient of the same sequence without $\boldsymbol{ 12}$, resp.
$\boldsymbol{ 21}$.\\ \\ \\ \texttt{{\Large Examples:}} Let us list a few families of multiple zeta values for which we have computed explicitly the coefficient $\alpha$ of maximal depth:\\ \\ \begin{tabular}{| c | c | c | } \hline Family & Recursion relation & Coefficient $\alpha$\\ \hline $\zeta^{\mathfrak{m}}(\lbrace 3\rbrace^{p})$ & $\alpha_{\lbrace 3\rbrace^{ p}}=\alpha_{\lbrace 3\rbrace^{ p-1}}$ & $1$\\ $\zeta^{\mathfrak{m}}(\lbrace 1,2\rbrace^{p})$ & $\alpha_{\lbrace 1,2 \rbrace^{ p}}=\alpha_{\lbrace 1,2 \rbrace^{ p-1}}$ & $1$\\ $\zeta^{\mathfrak{m}}(2,4,\lbrace 3\rbrace^{p})$ & $\alpha_{2,4,\lbrace 3\rbrace^{p}}=\alpha_{2,4,\lbrace 3\rbrace^{p-1}}+2 \alpha_{\lbrace 3\rbrace^{p+1}} $ & $2(p+1)$\\ $\zeta^{\mathfrak{m}}(4,2,\lbrace 3\rbrace^{p})$ & $\alpha_{4,2,\lbrace 3\rbrace^{p}}=3\alpha_{4,2,\lbrace 3\rbrace^{p-1}}-2 \alpha_{\lbrace 3\rbrace^{p+1}} $ & $-3^{p+1}+1$\\ $\zeta^{\mathfrak{m}}(\lbrace 3\rbrace^{p},4,2)$ & $\alpha_{\lbrace 3\rbrace^{p},4,2}=-2\alpha_{\lbrace 3\rbrace^{p+1}} $ & $-2$\\ $\zeta^{\mathfrak{m}}(\lbrace 3\rbrace^{p},2,4)$ & $\alpha_{\lbrace 3\rbrace^{p},2,4}=-2\alpha_{\lbrace 3\rbrace^{p-1},2,4}+2 \alpha_{\lbrace 3\rbrace^{p+1}} $ & $(-2)^{p}\frac{4}{3}+\frac{2}{3}$\\ $\zeta^{\mathfrak{m}}(2,\lbrace 3\rbrace^{p},4)$ & $\alpha_{2,\lbrace 3\rbrace^{p},4}=2\alpha_{2,\lbrace 3\rbrace^{p-1},4} $ & $2^{p+1}$\\ $\zeta^{\mathfrak{m}}(4,\lbrace 3\rbrace^{p},2)$ & $\alpha_{4,\lbrace 3\rbrace^{p},2}=-2\alpha_{4,\lbrace 3\rbrace^{p-1},2} $ & $(-2)^{p+1}$\\ $\zeta^{\mathfrak{m}}(1,5,\lbrace 3\rbrace^{p})$ & $\alpha_{1,5,\lbrace 3\rbrace^{p}}=\alpha_{1,5,\lbrace 3\rbrace^{p-1}}-1 $ & $-(p+1)$\\ $\zeta^{\mathfrak{m}}(\lbrace 2\rbrace^{p},\lbrace 4\rbrace^{p})$ & $\alpha_{\lbrace 2\rbrace^{p},\lbrace 4\rbrace^{p}}= 4 \alpha_{\lbrace 2\rbrace^{p-1},\lbrace 4\rbrace^{p-1}}$ & $2^{2p-1}$\\ $\zeta^{\mathfrak{m}}(\lbrace 2\rbrace^{p},\lbrace 3\rbrace^{a} \lbrace 4\rbrace^{p})$ & $\alpha_{\lbrace 2\rbrace^{p},\lbrace 3\rbrace^{a} \lbrace 4\rbrace^{p}}=
2^{a}\alpha_{\lbrace 2\rbrace^{p},\lbrace 4\rbrace^{p}}$ & $2^{a+2p-1}$\\ $\zeta^{\mathfrak{m}}(\lbrace 2\rbrace^{p},p+3)$ & $\alpha _{\lbrace 2\rbrace^{p},p+3}= 2\alpha _{\lbrace 2\rbrace^{p-1},p+2}$ & $2^{p}$\\ $\zeta^{\mathfrak{m}}(2,3,4,\lbrace 3\rbrace^{p})$ & $\alpha_{2,3,4,\lbrace 3\rbrace^{p}}=\alpha_{2,3,4,\lbrace 3\rbrace^{p-1}}+ 2 \alpha_{2,4,\lbrace 3\rbrace^{p}}$ & $2(p+1)(p+2)$\\ $\zeta^{\mathfrak{m}}(2,1,5,4,\lbrace 3\rbrace^{p})$ & $\alpha_{2,1,5,4,\lbrace 3\rbrace^{p}}=\alpha_{2,1,5,4,\lbrace 3\rbrace^{p-1}}- \alpha_{2,3,4,\lbrace 3\rbrace^{p}}$ & $-\frac{2(p+1)(p+2)(p+3)}{3}$\\ $\zeta^{\mathfrak{m}}(\lbrace 2\rbrace^{a}, a+3, \lbrace 3\rbrace^{b})$ & $\alpha_{a; b}=2 \alpha_{a-1;b}+\alpha_{a;b-1 }$ & $2^{a}\binom{a+b}{a} $\\ $\zeta^{\mathfrak{m}}(\lbrace 5,1\rbrace^{p})\text{ with } 3$\footnotemark[1] & $\alpha= \sum_{i=1}^{2p-1} (-1)^{i-1} \alpha_{\lbrace 5,1\rbrace^{p \text{ or } p-1} \text{with } 3} $ \footnotemark[2] & 1\\ \hline \end{tabular}\\ \footnotetext[1]{Any $\zeta^{\mathfrak{m}}(\lbrace 5,1\rbrace^{p})$ where some $3$'s have been inserted in the sequence.} \footnotetext[2]{Either a $3$ has been removed, or a $5,1$, resp. $1,5$, has been converted into a $3$ (with a sign depending on whether we consider the elements before or after a $1$). If it ends with a $3$, the contribution of the $3$ cancels with the contribution of the last $1$.} \\ \\ \textsc{For instance}, for the coefficient $\alpha_{a;b;c}$ associated to $\zeta^{\mathfrak{m}}(\lbrace 3\rbrace^{a},2, \lbrace 3\rbrace^{b}, 4, \lbrace 3\rbrace^{c} )$, the recursive relation is: \begin{equation} \alpha_{a;b;c}=\alpha_{a;b;c-1}+2\alpha_{a;b-1;c}-2\alpha_{a-1;b;c}, \quad \text{ which leads to the formula:} \end{equation} \begin{multline}\nonumber \alpha_{a;b;c}= (-2)^{a} \sum_{l=0}^{b-1} \sum_{m=0}^{c-1} 2^{l}\frac{(a+l+m-1) !}{(a-1) ! l ! m! } \alpha_{0;b-l;c-m} + 2^{b} \sum_{k=0}^{a-1} \sum_{m=0}^{c-1} (-2)^{k} \frac{(b+k+m-1) !}{k !(b-1) ! m!
} \alpha_{a-k;0;c-m}\\ + \sum_{l=0}^{b-1} \sum_{k=0}^{a-1} (-2)^{k} 2^{l} \frac{(k+l+c-1) !}{k! l ! (c-1)! } \alpha_{a-k;b-l;0}. \end{multline} Besides, we can also easily obtain: $$\hspace*{-0.5cm}\alpha_{a;0;0}= (-2)^{a}\frac{4}{3}+\frac{2}{3} \text{,} \quad \alpha_{0;b;0}=2^{b+1}, \quad \alpha_{0;0;c}= 2(c+1), \quad \text{ and } \quad \alpha_{0;b;c}=2^{b+1}\binom{b+c+1}{c}. $$ Indeed, using $\sum_{k=0}^{n}\binom{a+k}{a}= \binom{n+a+1}{a+1}$ and $\sum_{k=0}^{n}(n-k) \binom{k}{a}= \binom{n+1}{a+2}$, we deduce: $$\begin{array}{lll} \alpha_{0;b;c} &= 2\alpha_{0;b-1;c}+\alpha_{0;b;c-1}& = 2^{b+1} \left( \sum_{k=0}^{c-1} \binom{b+k-1}{b-1}(c-k+1) + \sum_{k=0}^{b-1}\binom{c+k-1}{c-1} \right)\\ & & =2^{b+1} \left( \binom{b+c+1}{b+1}-\binom{b+c-1}{b-1} + \binom{b+c-1}{b-1}\right) = 2^{b+1}\binom{b+c+1}{c}. \end{array}$$ \\ \texttt{Conjectured examples:} \\ \\ \begin{center} \begin{tabular}{| c | c | } \hline Family & Conjectured coefficient $\alpha$\\ \hline $\zeta^{\mathfrak{m}}(\lbrace 2, 4 \rbrace^{p })$ & $\alpha_{p}$ such that $ 1-\sqrt{\cos(2x)}=\sum \frac{\alpha_{p}(-1)^{p+1} x^{2p}}{(2p)!}$ \\ $\zeta^{\mathfrak{m}}(\lbrace 1, 5 \rbrace^{p})$ & Euler numbers: $\frac{1}{\cosh(x)}=\sum \frac{\alpha_{p} x^{2p}}{(2p)!}$ \\ $\zeta^{\mathfrak{m}}(\lbrace 1, 5 \rbrace^{p}, 3)$ & $(-1)^{p} \text{ Euler zigzag numbers } E_{2p+1} $ $= 2^{2p+2}(2^{2p+2}-1)\frac{ B_{2p+2}}{2p+2} $ \\ \hline \end{tabular} \end{center} \subsection{$\boldsymbol{N>1}$: the diagonal algebra.} For $N=2,3,4$, $gr^{\mathfrak{D}}_{d} \mathcal{H}_{d}$ is one dimensional, generated by $\zeta^{\mathfrak{m}}({ 1 \atop \xi})$, where $\xi\in\mu_{N}$ is a fixed primitive root of unity, which allows us to consider the projection: \begin{equation} \vartheta^{N} : gr_{d}^{\mathfrak{D}} \mathcal{H}_{d}^{1} \rightarrow \mathbb{Q}.
\end{equation} Given a motivic multiple zeta value relative to $\mu_{N}$, of weight $d$ and depth $d$, there exists a rational $\alpha_{\boldsymbol{\epsilon}}$ such that: \begin{framed} \begin{equation}\label{eq:zeta1d} \zeta^{\mathfrak{m}}\left(1, \ldots, 1 \atop \epsilon_{1} , \ldots, \epsilon_{d} \right) = \frac{\alpha_{\boldsymbol{\epsilon}}} {d!} \zeta^{\mathfrak{m}}\left( 1 \atop \xi \right) ^{d} + \text{ terms of depth strictly smaller than } d. \end{equation} \end{framed} The coefficient $\alpha$ can be computed recursively, using depth $1$ results: \begin{lemm} $$ \hspace*{-0.5cm}\alpha_{\epsilon_{1}, \ldots, \epsilon_{d}}= \left\lbrace \begin{array}{ll} 1 & \text{if } \boldsymbol{N\in \lbrace 2,3 \rbrace}.\\ \sum_{k=1 \atop \epsilon_{k}\neq 1 }^{d} \beta_{\epsilon_{k}}\left( \delta_{\epsilon_{k-1}\epsilon_{k}\neq 1} \alpha_{\epsilon_{1}, \ldots, \epsilon_{k-1}\epsilon_{k},\epsilon_{k+1}, \ldots, \epsilon_{d}} -\delta_{\epsilon_{k+1}\epsilon_{k}\neq 1 \atop k < d} \alpha_{\epsilon_{1}, \ldots, \epsilon_{k-1},\epsilon_{k+1}\epsilon_{k}, \ldots, \epsilon_{d}} \right) & \text{if } \boldsymbol{N=4}. \end{array} \right. $$ \begin{flushright} with $\beta_{\epsilon_{k}}= \left\lbrace \begin{array}{ll} 2 & \text{ if } \epsilon_{k}=-1\\ 1 & \text{ else} \end{array}\right. $.
\end{flushright} \end{lemm} \begin{proof} To avoid redundancy, the proof, in the same spirit as that of the previous section ($N=1$, $w=3d$), is left to the reader.\footnote{The cases $N=2,3$ correspond to the case $N=4$ with $\beta$ always equal to $1$.} \end{proof} \textsc{Remarks:} \begin{itemize} \item[$\cdot$] For the following categories, the space $gr^{\mathfrak{D}}_{d} \mathcal{H}_{d}$ is also one dimensional: $$\mathcal{MT}\left( \mathcal{O}_{6}\left[ \frac{1}{2}\right] \right) ,\quad \mathcal{MT}\left( \mathcal{O}_{6}\left[ \frac{1}{3}\right] \right) , \quad \mathcal{MT}\left( \mathcal{O}_{5}\right) , \mathcal{MT}\left( \mathcal{O}_{10}\right) , \quad \mathcal{MT}\left( \mathcal{O}_{12}\right) .$$ The recursive method to compute the coefficient of $\zeta^{\mathfrak{m}}\left( 1 \atop \eta\right) ^{d}$ would be similar, except that we do not know a proper basis for these spaces. \item[$\cdot$] For $N=1$, and $w\equiv 2 \mod 3$ for instance, $gr^{\mathfrak{D}}_{\max}\mathcal{H}_{n}$ is generated by the elements of the Euler $\sharp$ sums basis:\\ $\zeta^{\sharp, \mathfrak{m}} (1, \boldsymbol{s}, \overline{2})$ with $\boldsymbol{s}$ composed of $3$'s and one $5$, $\zeta^{\sharp, \mathfrak{m}} (3, 3 , \ldots, 3, \overline{2})$ and $\zeta^{\sharp, \mathfrak{m}} (1, 3 , \ldots, 3, \overline{4})$. \end{itemize} \section{Families of unramified Euler sums.} The proof relies upon the criterion $\ref{criterehonoraire}$, which enables us to construct infinite families of unramified Euler sums with parity patterns, by iteration on the depth, up to depth $5$.\\ \\ \texttt{Notations:} The occurrences of the symbols $E$ or $O$ denote arbitrary even or odd integers, whereas every repeated occurrence of a symbol $E_{i}$ (respectively $O_{i}$) denotes the same positive even (resp. odd) integer. The bracket $\left\{\cdot, \ldots, \cdot \right\}$ means that every permutation of the listed elements is allowed.
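\noindent To fix ideas, here is an illustration of these conventions (the specific weights below are merely illustrative choices of ours): the symbol $\zeta^{\mathfrak{m}}(\left\{E,\overline{O},\overline{O}\right\})$ covers the three permutations $$\zeta^{\mathfrak{m}}(E,\overline{O},\overline{O}), \quad \zeta^{\mathfrak{m}}(\overline{O},E,\overline{O}), \quad \zeta^{\mathfrak{m}}(\overline{O},\overline{O},E), \quad \text{ e.g. } \zeta^{\mathfrak{m}}(4,\overline{3},\overline{7}),$$ where the two odd entries need not coincide; by contrast, $\zeta^{\mathfrak{m}}(\overline{O_{1}}, \overline{E},\overline{O_{1}})$ forces the two odd entries to be equal, as in $\zeta^{\mathfrak{m}}(\overline{3}, \overline{4},\overline{3})$.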
\begin{theo} The following motivic Euler sums are unramified, i.e. motivic MZV:\footnote{Beware, here, $\overline{O}$ and $\overline{n}$ must be different from $\overline{1}$, whereas $O$ and $n$ may be 1. There is no $\overline{1}$ allowed in these terms if not explicitly written.} \\ \\ \hspace*{-0.5cm} \begin{tabular}{| c | l | l | } \hline & \textsc{Even Weight} & \textsc{Odd Weight} \\ \hline \textsc{Depth } 1 & \text{ All } & \text{ All }\footnotemark[1] \\ \hline \textsc{Depth } 2 & $\zeta^{\mathfrak{m}}(\overline{O},\overline{O}), \zeta^{\mathfrak{m}}(\overline{E},\overline{E})$ & \text{ All } \\ \hline \multirow{2}{*}{ \textsc{Depth } 3 } & $\zeta^{\mathfrak{m}}(\left\{E,\overline{O},\overline{O}\right\}), \zeta^{\mathfrak{m}}(O,\overline{E},\overline{O}), \zeta^{\mathfrak{m}}(\overline{O},\overline{E}, O)$ & $\zeta^{\mathfrak{m}}(\left\{\overline{E},\overline{E},O\right\}), \zeta^{\mathfrak{m}}(\overline{E},\overline{O},E), \zeta^{\mathfrak{m}}(E,\overline{O},\overline{E})$ \\ & $ \zeta^{\mathfrak{m}}(\overline{O_{1}}, \overline{E},\overline{O_{1}}), \zeta^{\mathfrak{m}}(O_{1}, \overline{E},O_{1}), \zeta^{\mathfrak{m}}(\overline{E_{1}}, \overline{E},\overline{E_{1}}) .$ & \\ \hline \multirow{2}{*}{ \textsc{Depth } 4 } & $\zeta^{\mathfrak{m}}(E,\overline{O},\overline{O},E),\zeta^{\mathfrak{m}}(O,\overline{E},\overline{O},E), $ & \\ & $\zeta^{\mathfrak{m}}(O,\overline{E},\overline{E},O), \zeta^{\mathfrak{m}}(E,\overline{O},\overline{E},O)$ & \\ \hline \textsc{Depth } 5 & & $\zeta^{\mathfrak{m}}(O_{1}, \overline{E_{1}},O_{1},\overline{E_{1}}, O_{1}).$ \\ \hline \end{tabular} Similarly for these linear combinations, in depth $2$ or $3$: $$\zeta^{\mathfrak{m}}(n_{1},\overline{n_{2}}) + \zeta^{\mathfrak{m}}(\overline{n_{2}},n_{1}) , \zeta^{\mathfrak{m}}(n_{1},\overline{n_{2}}) + \zeta^{\mathfrak{m}}(\overline{n_{1}},n_{2}), \zeta^{\mathfrak{m}}(n_{1},\overline{n_{2}}) - \zeta^{\mathfrak{m}}(n_{2}, \overline{n_{1}}) .$$ $$(2^{n_{1}}-1) 
\zeta^{\mathfrak{m}}(n_{1},\overline{1}) + (2^{n_{1}-1}-1) \zeta^{\mathfrak{m}}(\overline{1},n_{1}).$$ $$ \zeta^{\mathfrak{m}}(n_{1},n_{2},\overline{n_{3}}) + (-1)^{n_{1}-1} \zeta^{\mathfrak{m}}(\overline{n_{3}},n_{2},n_{1}) \text{ with } n_{2}+n_{3} \text{ odd }.$$ \end{theo} \texttt{Examples}: These motivic Euler sums are motivic multiple zeta values: $$\zeta^{\mathfrak{m}}(\overline{25}, 14,\overline{17}),\zeta^{\mathfrak{m}}(17, \overline{14},17), \zeta^{\mathfrak{m}}(\overline{24}, \overline{14},\overline{24}), \zeta^{\mathfrak{m}}(6, \overline{23}, \overline{17}, 10) , \zeta^{\mathfrak{m}}(13, \overline{24}, 13,\overline{24}, 13).$$ \textsc{Remarks:} \begin{itemize} \item[$\cdot$] This result for motivic ES implies the analogous statement for ES. \item[$\cdot$] Notice that for each honorary MZV above, the reverse sum is honorary too, which was not obvious a priori, since the conditions below are not all symmetric. \end{itemize} \begin{proof} The proof amounts to the straightforward check that $D_{1}(\cdot)=0$ (immediate) and that all the elements appearing in the right side of $D_{2r+1}$ are unramified, by recursion on the depth: here, these elements satisfy the sufficient criteria given below. Let us only point out a few things, referring to the expression $(\ref{eq:derhonorary})$: \begin{description} \item[\texttt{Terms} $\textsc{c}$:] The symmetry condition $(\textsc{c}4)$, obviously true for the single elements above, gets rid of these terms. For the few linear combinations of MES given, the cuts of type (\textsc{c}) cancel one another. \item[\texttt{Terms} $\textsc{a,b}$:] Checking that the right sides are unramified is straightforward by the depth-recursion hypothesis, since only the (previously proven) unramified elements of lower depth emerge. For example, the possible right sides (not canceled by a symmetric cut, and up to reversal symmetry) are listed below for some elements from depth 3.
\\ \hspace*{-1cm} \begin{tabular}{| c | l | l | } \hline & Terms \textsc{a0} & Terms \textsc{a,b} \\ \hline $(O,\overline{E},\overline{E}) $ & & $(\overline{E},\overline{E})$ ,$(O,O)$ \\ $(\overline{E},O,\overline{E}) $ & / & $(\overline{E},\overline{E})$ \\ $(E,\overline{O},\overline{E}) $ & / & $(\overline{E},\overline{E}), (E,E)$ \\ \hline $(E,\overline{O},\overline{O},E) $ & $(\overline{O},E)$ & $(\overline{E},\overline{O},E), (E,O,E),(E,\overline{O},\overline{E}), (E,O),(O,E)$ \\ $(O,\overline{E},\overline{O},E) $ & $(\overline{E},\overline{O},E),(\overline{O},E)$ & $(\overline{E},\overline{O},E),(O,E,E),(O,\overline{E},\overline{E}), (O,E)$ \\ $(O,\overline{E},\overline{E},O) $& $(\overline{E},\overline{E},O) ,(\overline{E},O)$ & $(\overline{E},\overline{E},O), (O,O,O), (O,\overline{E},\overline{E}), (O,E), (E,O)$ \\ $(\overline{E}, O_{1},\overline{E},O_{1}) $& $(\overline{E},O)$ & $(\overline{E},\overline{E},O) , (\overline{E},O,\overline{E}) ,(\overline{E},\overline{O}), (E,O)$ \\ $(\overline{E_{1}}, \overline{E_{2}},\overline{E_{1}}, \overline{E_{2}}) $& / & $(O,\overline{E},\overline{E}) ,(\overline{E},O,\overline{E}) ,(\overline{E},\overline{E},O) ,(\overline{E},\overline{O}),(\overline{O},\overline{E})$ \\ \hline $(O_{1}, \overline{E_{1}},O_{1},\overline{E_{1}}, O_{1})$ & $(\overline{E_{1}},O_{1},\overline{E_{1}}, O_{1}),$ & $(\overline{E},O_{1},\overline{E}, O_{1}), (O_{1}, \overline{E},\overline{E}, O_{1}), (O_{1}, \overline{E},O_{1},\overline{E}),$ \\ & $(O_{1},\overline{E}, O_{1})$ & $(\overline{O},\overline{E},O),(O,\overline{E},\overline{O}), (O,O)$\\ \hline \end{tabular} \end{description} It refers to the expression of the derivations $D_{2r+1}$ (from Lemma $\ref{drz}$): \begin{multline}\label{eq:derhonorary} D_{2r+1} \left(\zeta^{\mathfrak{m}} \left(n_{1}, \ldots , n_{p} \right)\right) = \textsc{(a0) } \delta_{2r+1 = \sum_{k=1}^{i} \mid n_{k} \mid} \zeta^{\mathfrak{l}} (n_{1}, \ldots , n_{i}) \otimes \zeta^{\mathfrak{m}}
(n_{i+1},\ldots, n_{p}) \\ \textsc{(a,b) } \sum_{1\leq i < j \leq p \atop 2r+1=\sum_{k=i}^{j} \mid n_{k}\mid - y } \left\lbrace \begin{array}{l} -\delta_{2\leq y \leq \mid n_{j}\mid } \zeta_{\mid n_{j}\mid -y}^{\mathfrak{l}} (n_{j-1}, \ldots ,n_{i+1}, n_{i}) \\ +\delta_{2\leq y \leq \mid n_{i}\mid} \zeta_{\mid n_{i}\mid -y}^{\mathfrak{l}} (n_{i+1}, \cdots ,n_{j-1}, n_{j}) \end{array} \right. \otimes \zeta^{\mathfrak{m}} (n_{1}, \ldots, n_{i-1},\prod_{k=i}^{j}\epsilon_{k} \cdot y,n_{j+1},\ldots, n_{p}). \\ \textsc{(c) } + \sum_{1\leq i < j \leq p\atop {2r+2=\sum_{k=i}^{j} \mid n_{k}\mid} } \delta_{ \prod_{k=i}^{j} \epsilon_{k} \neq 1} \left\lbrace \begin{array}{l} + \zeta_{\mid n_{i}\mid -1}^{\mathfrak{l}} (n_{i+1}, \cdots ,n_{j-1}, n_{j}) \\ - \zeta_{\mid n_{j}\mid -1}^{\mathfrak{l}} (n_{j-1}, \cdots ,n_{i+1}, n_{i}) \end{array} \right. \otimes \zeta^{\mathfrak{m}} (n_{1}, \ldots, n_{i-1},\overline{1},n_{j+1},\ldots, n_{p}). \end{multline} \end{proof} \paragraph{Sufficient condition. }\label{sufficientcondition} Let $\mathfrak{Z}= \zeta^{\mathfrak{m}}(n_{1}, \ldots, n_{p})$ be a motivic Euler sum. The following four conditions are \textit{sufficient} for $\mathfrak{Z}$ to be unramified: \begin{description} \item [\textsc{c}1]: No $\overline{1}$ in $\mathfrak{Z}$. \item [\textsc{c}2]: For each $(n_{1}, \ldots, n_{i})$ of odd weight, the MES $\zeta^{\mathfrak{m}}(n_{i+1}, \ldots, n_{p})$ is a MMZV. \item [\textsc{c}3]: If a cut removes an odd-weight part (such that no symmetric cut is possible), the remaining MES (right side in terms \textsc{a,b}) is a MMZV. \item [\textsc{c}4]: Each sub-sequence $(n_{i}, \ldots, n_{j})$ of even weight such that $\prod_{k=i}^{j} \epsilon_{k} \neq 1$ is symmetric. \end{description} \begin{proof} The condition $\textsc{c}1$ implies that $D_{1}(\mathfrak{Z})=0$; conditions $\textsc{c}2$, resp. $\textsc{c}3$ ensure that the right sides of terms \textsc{a0}, resp.
\textsc{a,b} are unramified, while the condition $\textsc{c}4$ cancels the (disturbing) terms \textsc{c}: indeed, a single ES with an $\overline{1}$ cannot be unramified.\\ Note that a MES $\mathfrak{Z}$ of depth $2$ and weight $n$ is unramified if and only if $ \left\lbrace \begin{array}{l} D_{1}(\mathfrak{Z})=0\\ D_{n-1}(\mathfrak{Z})=0 \end{array}\right.$. \end{proof} \noindent \texttt{Nota Bene:} This criterion is not \textit{necessary}: it does not cover the unramified $\mathbb{Q}$-linear combinations of motivic Euler sums, such as those presented in section $4$, nor some isolated (\textit{symmetric enough}) examples where the unramified terms appearing in $D_{2r+1}$ could cancel among themselves. However, it embraces general families of single Euler sums which are unramified.\\ \\ Moreover, \begin{framed} \emph{If we \textit{comply with these conditions}, the \textit{only} general families of single MES obtained are the ones listed in Theorem $6.2.1$.} \end{framed} \begin{proof}[Sketch of the proof] Notice first that the condition \textsc{c}$4$ implies in particular that there are no consecutive sequences of the following types (since they would create type $\textsc{c}$ terms): $$\textsc{ Seq. not allowed : } O\overline{O}, \overline{O}O, \overline{E}E, E\overline{E}.$$ From depth $3$ on, this implies that we cannot have the following sequences (otherwise one of the non-allowed depth $2$ sequences above would appear in the $\textsc{a,b}$ terms): $$\textsc{ Seq.
not allowed : } \overline{E}\overline{E}\overline{O}, \overline{E}\overline{E}\overline{E}O, \overline{E}\overline{E}O\overline{E}, E\overline{O}\overline{E},EE\overline{O}, \overline{O}EE, \overline{E}OE, \overline{E}\overline{O}\overline{E}, \overline{O}\overline{O}\overline{O}.$$ Proceeding carefully in this recursive way leads to the previous theorem.\\ \texttt{For instance,} let $\mathfrak{Z}$ be a MES of even weight, with no $\overline{1}$, and let us detail two cases: \begin{description} \item[\texttt{Depth} $3$:] The right side of $D_{2 r+1}$ has odd weight and depth at most $2$, hence is always a MMZV if there is no $\overline{1}$, by the depth $2$ results. It boils down to the condition $\textsc{c}4$: $\mathfrak{Z}$ must either be symmetric (such as $O_{1}E O_{1}$ or $E_{1}EE_{1}$, with possibly one or three overlines) or have exactly two overlines. Using the analysis above of the allowed sequences in depths $2$ and $3$ for conditions $\textsc{c3,4}$ leads to the following: $$(E,\overline{O},\overline{O}),(\overline{O},\overline{O},E), (O,\overline{E},\overline{O}), (\overline{O},\overline{E}, O), (\overline{O},E,\overline{O}), (\overline{O_{1}}, \overline{E},\overline{O_{1}}), (O_{1}, \overline{E},O_{1}), (\overline{E_{1}}, \overline{E},\overline{E_{1}}) .$$ \item[\texttt{Depth} $4$:] Let $\mathfrak{Z}=\zeta^{\mathfrak{m}}\left( n_{1}, \ldots, n_{4}\right) $, $\epsilon_{i}=\text{sign}(n_{i})$. To avoid terms of type $ \textsc{c}$ with a right side of depth $1$: if $\epsilon_{1}\epsilon_{2}\epsilon_{3}\neq 1$, either $n_{1}+n_{2}+n_{3}$ is odd, or $n_{1}=n_{3}$ and $\epsilon_{2}=-1$; if $\epsilon_{2}\epsilon_{3}\epsilon_{4}\neq 1$, either $n_{2}+n_{3}+n_{4}$ is odd, or $n_{2}=n_{4}$ and $\epsilon_{3}=-1$.
The following sequences are then not allowed: $$ (\overline{E}, O,O,\overline{E}), (\overline{E}, \overline{O},\overline{O},\overline{E}), (\overline{E}, \overline{O},E,O), (\overline{E}, \overline{E},O,O), (O,O,\overline{E}, \overline{E}).$$ \end{description} \end{proof} \section{Motivic Identities} As we have seen above, in particular in Lemma $\ref{lemmcoeff}$, the coaction enables us to prove some identities between MMZV or MES by recursion on the depth, up to one rational coefficient at each step. This coefficient can then be deduced if we know the analogous identity for MZV, resp. Euler sums. Nevertheless, a \textit{motivic identity} between MMZV (resp. MES) is stronger than the corresponding relation between real MZV (resp. Euler sums); it may hence require several relations between MZV in order to lift an identity to motivic MZV. An example of such a behaviour occurs with some Hoffman $\star$ elements ($(iv)$ in Lemma $\ref{lemmcoeff}$).\\ \\ In this section, we list a few examples of identities, picked from the zoo of existing identities, that we are able to lift easily from Euler sums to motivic Euler sums: \textit{Galois trivial} elements (the action of the unipotent part of the Galois group being trivial), sum identities, etc. \\ \\ \texttt{Nota Bene}: For other cyclotomic MMZV, we could somehow generalize this idea, but there would be several unknown coefficients at each step, as stated in Theorem $2.4.4$. For $N=3$ or $4$, we have to consider all $D_{r}$, $0<r<n$, and there would be one, resp. two (if the weight is even), unknown coefficients at each step; for $N=\mlq 6 \mrq$, if unramified, considering $D_{r}$, $r>1$, there would also be one or two unknown coefficients at each step.
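\noindent Schematically, in the simplest configuration (both sides MMZV of the same odd weight $n$; the hypotheses here are only meant to summarize the method), the lifting proceeds as follows. If $\mathfrak{Z}_{1}, \mathfrak{Z}_{2}\in\mathcal{H}_{n}$ satisfy, by the recursion hypothesis in lower weight, $$D_{2r+1}(\mathfrak{Z}_{1}-\mathfrak{Z}_{2})=0 \quad \text{for all } 1< 2r+1 <n,$$ then, by Corollary $\ref{kerdn}$, $\mathfrak{Z}_{1}-\mathfrak{Z}_{2}\in\mathbb{Q}\, \zeta^{\mathfrak{m}}(n)$, and the single remaining rational coefficient is determined by applying the period map together with the known analytic identity.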
\\ \\ \texttt{Example:} Here is an identity known for Euler sums, proven at the motivic level by recursion on $n$ via the coaction for motivic Euler sums (and using the analytic identity): \begin{equation} \zeta^{\mathfrak{m}}(\lbrace 3 \rbrace^{n})= \zeta^{\mathfrak{m}}(\lbrace 1,2 \rbrace^{n}) = 8^{n} \zeta^{\mathfrak{m}}(\lbrace 1, \overline{2} \rbrace^{n}). \end{equation} \begin{proof} These three families are stable under the coaction: $$\begin{array}{lllll} D_{2r+1} (\zeta^{\mathfrak{m}}(\lbrace 3 \rbrace^{n})) & = & \delta_{2r+1=3s} \zeta^{\mathfrak{a}}(\lbrace 3 \rbrace^{s}) & \otimes & \zeta^{\mathfrak{m}}(\lbrace 3 \rbrace^{n-s}) .\\ D_{2r+1} (\zeta^{\mathfrak{m}}(\lbrace 1,2 \rbrace^{n})) & = & \delta_{2r+1=3s} \zeta^{\mathfrak{a}}(\lbrace 1,2 \rbrace^{s}) & \otimes & \zeta^{\mathfrak{m}}(\lbrace 1,2 \rbrace^{n-s}) .\\ D_{2r+1} (\zeta^{\mathfrak{m}}(\lbrace 1,\overline{2} \rbrace^{n})) & = & \delta_{2r+1=3s} \zeta^{\mathfrak{a}}(\lbrace 1,\overline{2} \rbrace^{s}) & \otimes & \zeta^{\mathfrak{m}}(\lbrace 1,\overline{2} \rbrace^{n-s}) . \end{array}$$ Indeed, in both cases, in the diagrams below, cuts $(3)$ and $(4)$ are symmetric and cancel by reversal, as do cuts $(1)$ and $(2)$, except for the last cut of type $(1)$, which remains alone:\\ \includegraphics[]{dep10.pdf}\\ Similarly for $\zeta^{\mathfrak{m}}(\lbrace 1,\overline{2} \rbrace^{n})$: cuts of type $(3)$, $(4)$, resp. $(1)$, $(2)$, cancel together, except the first one, when $\epsilon=\epsilon'$ in the diagram below. The other possible cuts of odd length would be $(5)$ and $(6)$ below, when $\epsilon=-\epsilon'$, but each is null since antisymmetric.\\ \includegraphics[]{dep11.pdf} \end{proof} \paragraph{Galois trivial.} The Galois action of the unipotent group $\mathcal{U}$ is trivial on $\mathbb{Q}[\mathbb{L}^{\mathfrak{m}, 2n}]= \mathbb{Q}[\zeta^{\mathfrak{m}}(2)^{n}]$.
To prove that an element of $\mathcal{H}_{2n}$ is a rational multiple of $\mathbb{L}^{\mathfrak{m},2n}$, it is equivalent to check that it lies in the kernel of the derivations $D_{2r+1}$, for $1\leq 2r+1<2n$, by Corollary $\ref{kerdn}$. We then use the (known) analogous identities for MZV to determine the value of such a rational.\\ \\ \texttt{Example:} \begin{itemize} \item[$\cdot$] Summing over all the possible ways to insert $n$ $\boldsymbol{2}$'s: \begin{equation} \zeta^{\mathfrak{m}}(\left\lbrace 1,3 \right\rbrace^{p} \text{with n } \boldsymbol{2} \text{ inserted } )= \binom{2p+n}{n} \frac{\pi^{4p+2n, \mathfrak{m}}}{(2n+1) (4p+2n+1)!}. \end{equation} \item[$\cdot$] More generally, with fixed $(a_{i})$ such that $\sum a_{i}=n$: \footnote{Both also appear in Charlton's article $\cite{Cha}$.} \begin{equation} \sum_{\sigma\in\mathfrak{S}_{2p}} \zeta^{\mathfrak{m}}(2^{a_{\sigma(0)}} 1 , 2^{a_{\sigma(1)}}, 3, 2^{a_{\sigma(2)}}, \ldots, 1, 2^{a_{\sigma(2p-1)}}, 3, 2^{a_{\sigma(2p)}} )\in\mathbb{Q} \pi^{4p+2n, \mathfrak{m}}. \end{equation} \end{itemize} \begin{proof} In order to justify why all the derivations $D_{2r+1}$ vanish, the possible cuts of odd length are, with $X= \lbrace 01 \rbrace^{a_{2i+2}+1} \lbrace 10 \rbrace^{a_{2i+3}+1} \cdots \lbrace 01 \rbrace^{a_{2j-2}} \lbrace 10 \rbrace^{a_{2j-1}} $:\\ \includegraphics[]{dep12.pdf} All the cuts cancel by the \textsc{Antipode} $\shuffle$ relation $\ref{eq:antipodeshuffle2}$, which proves the result, as follows: \begin{itemize} \item[$\cdot$] Cut $(1)$ for $(a_{0}, \ldots, a_{2p})$ with Cut $(2)$ for $(a_{0}, \ldots, a_{2i-1}, a_{2j+1} \cdots, a_{2i}, a_{2j+2 },\cdots, a_{2p})$. \item[$\cdot$] Similarly between $(3)$ and $(4)$, which cancel when considering the sequence where $(a_{2i+1}, \ldots, a_{2j})$ is reversed.
\end{itemize} \end{proof} \paragraph{Polynomial in simple zetas.} A way to prove that a family of (motivic) MZV is polynomial in simple (motivic) zetas, by recursion on the depth: \begin{lemm} Let $\mathfrak{Z}\in\mathcal{H}^{1}_{n}$ be a motivic multiple zeta value of depth $p$. \\ If the following conditions hold for all $1<2r+1<n$, with $m\mathrel{\mathop:}=\lfloor\frac{n}{2}\rfloor-1$: \begin{itemize} \item[$(i)$] $D_{2r+1,p}(\mathfrak{Z})= P^{\mathfrak{Z}}_{r}(\zeta^{\mathfrak{m}}(3),\zeta^{\mathfrak{m}}(5), \ldots, \zeta^{\mathfrak{m}}(2m+1), \zeta^{\mathfrak{m}}(2)),$ $$\text{with } P^{\mathfrak{Z}}_{r}(X_{1},\cdots, X_{m}, Y )= \sum_{2s+\sum (2k+1)\cdot a_{k}=n-2r-1 } \beta^{r}_{a_{1}, \ldots, a_{m}, s} X_{1}^{a_{1}} \cdots X_{m}^{a_{m}} Y^{s}.$$ \item[$(ii)$] For $ a_{k},a_{r}>0 \text{ : } \frac{ \beta^{r}_{a_{1}, \ldots, a_{r}-1,\cdots, a_{m},s}}{a_{r}} =\frac{ \beta^{k}_{a_{1}, \ldots, a_{k}-1, \ldots, a_{m},s}}{a_{k}}.$ \end{itemize} Then, $\mathfrak{Z}$ is a polynomial in depth $1$ MMZV: $$ \mathfrak{Z}= \alpha \zeta^{\mathfrak{m}}(n)+ \sum_{2s+\sum (2k+1)a_{k}=n} \alpha_{a_{1}, \ldots, a_{m},s} \zeta^{\mathfrak{m}}(3)^{a_{1}} \cdots \zeta^{\mathfrak{m}}(2m+1)^{a_{m}} \zeta^{\mathfrak{m}}(2)^{s}. \footnote{In particular, $\alpha_{a_{1}, \ldots, a_{m},s} =\frac{\beta^{r}_{a_{1}, \ldots,a_{r}-1, \ldots, a_{m}, s}}{a_{r}}$ for $a_{r}\neq 0$. }$$ \end{lemm} \begin{proof} Immediate with Corollary $\ref{kerdn}$ since: $$D_{2r+1,p } \left( \zeta^{\mathfrak{m}}(3)^{a_{1}} \cdots \zeta^{\mathfrak{m}}(2m+1)^{a_{m}} \zeta^{\mathfrak{m}}(2)^{s}\right) = a_{r} \zeta^{\mathfrak{m}}(3)^{a_{1}} \cdots \zeta^{\mathfrak{m}}(2r+1)^{a_{r}-1} \cdots \zeta^{\mathfrak{m}}(2m+1)^{a_{m}} \zeta^{\mathfrak{m}}(2)^{s}.
$$ \end{proof} \noindent \texttt{Example}: Some examples were given in the proof of Lemma $\ref{lemmcoeff}$; the following family is polynomial in simple zetas \footnote{Proof method: by recursion on the coefficients, using: $$D_{2r+1}(\zeta^{\mathfrak{m}} (\left\lbrace 1 \right\rbrace ^{n}, m))= - \sum_{j=\max(0,2r+2-m)}^{\min(n-1,2r-1)} \zeta^{\mathfrak{l}} (\left\lbrace 1 \right\rbrace ^{j}, 2r+1-j) \otimes \zeta^{\mathfrak{m}} (\left\lbrace 1 \right\rbrace ^{n-j-1}, m-2r+j).$$}: $$\zeta^{\mathfrak{m}} (\left\lbrace 1 \right\rbrace ^{n}, m).$$ \paragraph{Sum formulas.} We list here a few of the numerous \textit{sum identities} known for Euler sums\footnote{Usually proved by considering the generating function and expressing it as a hypergeometric function.} which can be lifted to motivic Euler sums via the coaction. For these identities, as we see through the proof, the action of the Galois group is trivial; since the families are stable under the derivations, we are able to lift each identity to its motivic version via a simple recursion. \begin{theo} Summations, unless otherwise specified, are over admissible multi-indices, with $w(\cdot)$, resp. $d(\cdot)$, resp. $h(\cdot)$ indicating the weight, resp. the depth, resp.
the height: \begin{itemize} \item[(i)] With fixed even (possibly negative) $\left\lbrace a_{i}\right\rbrace _{1 \leq i \leq p}$ with sum $2n$:\footnote{This would clearly also hold for MMZV$^{\star}$.} $$\sum_{\sigma\in\mathfrak{S}_{p}} \zeta^{\mathfrak{m}}(a_{\sigma(1)}, \ldots, a_{\sigma(p)}) \in \mathbb{Q} \pi^{2n, \mathfrak{m}}.$$ In particular:\footnote{The precise coefficient is given in $\cite{BBB1}$, $(48)$, and can then also be deduced for the motivic identity.} $$\zeta^{\mathfrak{m}}(\left\lbrace 2n \right\rbrace^{p} ) , \zeta^{\mathfrak{m}}(\left\lbrace \overline{2n} \right\rbrace^{p} ) \in \mathbb{Q} \pi^{2np, \mathfrak{m}}.$$ More precisely, with Hoffman $\cite{Ho}$ \footnotemark[6] \begin{multline}\nonumber \sum_{\sum n_{i}= n} \zeta^{\mathfrak{m}}\left( 2n_{1}, \ldots, 2n_{k}\right) =\\ \frac{1}{2^{2(k-1)}} \binom{2k-1}{k} \zeta^{\mathfrak{m}}(2n) - \sum_{j=1}^{\lfloor\frac{k-1}{2}\rfloor} \frac{1}{2^{2k-3}(2j+1) B_{2j}} \binom{2k-2j-1}{k} \zeta^{\mathfrak{m}}(2j) \zeta^{\mathfrak{m}}(2n-2j) . \end{multline} \item[(ii)] With Granville $\cite{Gra}$, or Zagier $\cite{Za1}$ \footnotemark[6] $$ \sum_{w(\textbf{k})=n, d(\textbf{k})=d } \zeta^{\mathfrak{m} }(\textbf{k})= \zeta^{\mathfrak{m}}(n). $$ \item[(iii)] With Aoki, Ohno $\cite{AO}$\footnotemark[6] \footnotemark[2] \begin{align*} \sum_{w(\textbf{k})=n, d(\textbf{k})=d } \zeta^{\star,\mathfrak{m}}(\textbf{k}) & = \binom{n-1}{d-1} \zeta^{\mathfrak{m}}(n).\\ \sum_{w(\textbf{k})=n, h(\textbf{k})=s } \zeta^{\star,\mathfrak{m}}(\textbf{k})& = 2\binom{n-1}{2s-1} (1-2^{1-n}) \zeta^{\mathfrak{m}}(n). \end{align*} \item[(iv)] With Le, Murakami $\cite{LM}$\footnotemark[6] $$\sum_{w(\textbf{k})=n, h(\textbf{k})=s } (-1)^{d(\textbf{k})}\zeta^{\mathfrak{m}}(\textbf{k})=\left\lbrace \begin{array}{ll} 0 & \text{ if } n \text{ odd} . \\ \frac{(-1)^{\frac{n}{2}} \pi^{\mathfrak{m},n}}{(n+1)!} \sum_{k=0}^{\frac{n}{2}-s}\binom{n+1}{2k} (2-2^{2k})B_{2k} & \text{ if } n \text{ even} .\\ \end{array} \right.
$$ \item[(v)] With S. Belcher (?)\footnotemark[6] $$\hspace*{-0.5cm}\begin{array}{llll} \sum_{w(\cdot)=2n \atop d(\cdot)=2p } \zeta^{\mathfrak{m}}(odd, odd>1, odd, \ldots, odd, odd>1)& =& \alpha^{n,p} \zeta^{\mathfrak{m}} (2)^{n}, & \alpha^{n,p} \in \mathbb{Q}\\ \sum_{w(\cdot)=2n+1 \atop d(\cdot)=2p+1} \zeta^{\mathfrak{m}}(odd, odd>1, odd, \ldots, odd>1, odd)&=& \sum_{i=1}^{n} \beta^{n,p}_{i} \zeta^{\mathfrak{m}}(2i+1) \zeta^{\mathfrak{m}}(2)^{n-i} , & \beta^{n,p}_{i}\in\mathbb{Q}\\ \sum_{w(\cdot)=2n+1 \atop d(\cdot)=2p+1} \zeta^{\mathfrak{m}}(odd>1, odd, \ldots, odd, odd>1)&=& \sum_{i=1}^{n} \gamma^{n,p}_{i} \zeta^{\mathfrak{m}}(2i+1) \zeta^{\mathfrak{m}}(2)^{n-i}, & \gamma^{n,p}_{i}\in\mathbb{Q} \end{array}$$ \end{itemize} \footnotetext[6]{The person(s) at the origin of the analytic equality for MZV, used in the proof for motivic MZV.} \end{theo} \noindent \textsc{Remark}: The permutation identity $(i)$ would in particular imply that every sum of MZV at even arguments, weighted by a symmetric function of these same arguments, is a rational multiple of a power of $\mathbb{L}^{\mathfrak{m}}$. \\ Many specific identities in small depth have already been found (as Machide in $\cite{Ma}$, resp. Zhao, Guo, Lei in $\cite{GLZ}$, etc.), and can be deduced directly for motivic MZV; in the examples below, the left-hand column gives the weight inserted in the sum: \begin{align*} \hspace*{-2.5cm}\sum_{k=1}^{n-1} \zeta(2k, 2n-2k) \quad\quad & \left\lbrace \begin{array}{lll} 1 & =& \frac{3}{4} \zeta(2n)\\ 4^{k}+4^{n-k} &=& (n+\frac{4}{3}+\frac{4^{n}}{6})\zeta(2n) \\ (2k-1)(2n-2k-1) &=& \frac{3}{4} (n-3) \zeta(2n) \end{array} \right.
\\ \hspace*{-0.3cm}\sum \zeta(2i, 2j, 2n-2i-2j) & \left\lbrace \begin{array}{lll} 1 & =& \frac{5}{8} \zeta(2n)- \frac{1}{4} \zeta(2n-2) \zeta(2)\\ ij +jk+ki &=& \frac{5n}{64} \zeta(2n)+(4n-\frac{9}{10}) \zeta(2n-2) \zeta(2) \\ ijk &=& \frac{n}{128} (n-3) \zeta(2n)-\frac{1}{32} \zeta(2n-2) \zeta(2)+\frac{2n-5}{8} \zeta(2n-4) \zeta(4)\\ \end{array} \right.\\ & \\ \end{align*} \begin{proof} We refer to the formula for the derivations $D_{r}$ in Lemma $\ref{lemmt}$. For many of these equalities, when summing over all the permutations of a certain subset, most of the cuts will get simplified two by two as follows: \begin{equation}\label{eq:termda} \zeta^{\mathfrak{m}}\left( k_{1}, \ldots, k_{i}, k_{i+1}, \ldots, k_{j}, k_{j+1}, \cdots k_{d}\right) \text{ : } 0; \cdots 1 0^{k_{i}-1} \boldsymbol{1 } 0^{k_{i+1}-1} \cdots 0^{k_{j-1}-1} 1 \boldsymbol{0^{k_{j}-1}} 1 0^{k_{j+1}-1}\cdots ; 1 . \end{equation} \begin{equation}\label{eq:termdb} \zeta^{\mathfrak{m}}(k_{1},\cdots, k_{i}, k_{j}, \cdots, k_{i+1}, k_{j+1}, \ldots, k_{d}) \text{ : } 0; \cdots 1 0^{k_{i}-1} 1 \boldsymbol{0^{k_{j}-1}} \cdots 0^{k_{i+2}-1} 1 0^{k_{i+1}-1} \boldsymbol{1} 0^{k_{j+1}-1}\cdots ; 1. \end{equation} Only the first cuts remain, beginning with the first $0$, such as: \begin{equation}\label{eq:termd1} \delta_{2r+1= \sum_{j=1}^{i} k_{j}} \zeta^{\mathfrak{m}}\left( k_{1}, \ldots, k_{i})\otimes \zeta^{\mathfrak{m}}(k_{i+1}, \ldots, k_{d}\right) , \end{equation} and possibly the cuts from a $k_{i}=1$ to $k_{d}$, if the sum is over admissible MMZV: \footnote{Beware: here the MZV on the left side can end in $1$.} \begin{equation}\label{eq:termdr} -\delta_{2r+1< \sum_{j=i+1}^{d} k_{j}} \zeta^{\mathfrak{m}}\left( k_{i+1}, \ldots, k_{d-1}, 2r+1- \sum_{j=i+1}^{d-1} k_{j}\right) \otimes \zeta^{\mathfrak{m}}\left( k_{1}, \ldots, k_{i-1}, \sum_{j=i+1}^{d} k_{j} -2r\right) .
\end{equation} \begin{itemize} \item[(i)] From the terms above in $D_{2r+1}$, $(\ref{eq:termda})$ and $(\ref{eq:termdb})$ get simplified together, and there are no terms $(\ref{eq:termd1})$ since the $a_{i}$ are all even. Therefore, the sum is in the kernel of $\oplus_{2r+1<2n} D_{2r+1}$ with even weight, hence Galois trivial.\\ For instance, for $\zeta^{\mathfrak{m}}(\left\lbrace \overline{2n} \right\rbrace^{p} ) $, with $\epsilon, \epsilon'\in \lbrace\pm 1\rbrace$:\\ \includegraphics[]{dep13.pdf}\\ Either $\epsilon=\epsilon'$ and $X$ is symmetric, and the cuts above get simplified by reversal of path (cf. $\S A.1.1$); or $\epsilon=-\epsilon'$ and $X$ is antisymmetric, and the cuts above still get simplified since $I^{\mathfrak{m}}(\epsilon;0^{a+1} X;0)=-I^{\mathfrak{m}}(0;\widetilde{X} 0^{a+1};\epsilon)=-I^{\mathfrak{m}}(0;X 0^{a+1};-\epsilon)$. \item[(ii)] Let us denote this sum by $G(n,d)$, and by $G_{1}(n,d)$ the corresponding sum where a $1$ at the end is allowed. As explained in the proof's preamble, the remaining cuts are the first ones and those from a $k_{i}=1$ to the last $k_{d}$: $$\hspace*{-0.5cm}D_{2r+1}(G (n,d))= \sum_{i=0}^{d-1} G^{\mathfrak{l}}_{1}(2r+1,i) \otimes G(n-2r-1,d-i) -\sum_{i=0}^{d-1} G^{\mathfrak{l}}_{1}(2r+1,i) \otimes G(n-2r-1,d-i) =0 .$$ \item[(iii)] This can also be proven by computing the coaction, or by noticing that it can be deduced from the Euler relation above: turning a MZV$^{\star}$ into a sum of MZV of smaller depth, it becomes: $$\sum_{i=1}^{d} \sum_{w(\boldsymbol{k})= n, d(\boldsymbol{k})=i} \binom{n-i-1}{d-i} \zeta^{\mathfrak{m}} (\boldsymbol{k}).$$ For the Aoki--Ohno identity, using the formula for MZV$^{\star}$, and with the recursion hypothesis, we could similarly prove that the coaction is zero on these elements, and conclude with the result for MZV. \item[(iv)] Let us denote this sum by $G_{-}(n,s)$, and by $G_{-,(1)}(n,s)$, resp. $G_{-,1}(n,s)$, the analogous sums with possibly, resp. necessarily, a $1$ at the end.
Looking at the derivations, since we sum over all the permutations of the admissible indices, each cut gets simplified with its symmetric cut as explained above, and there remain only the beginning cut (with the first $0$) and the cut from a $k_{j}=1$ to the last $k_{d}$, which leads to: \begin{multline}\nonumber \hspace*{-1cm}D_{2r+1}(G_{-}(n,s))= \sum_{i=0}^{s-1} \left( G^{\mathfrak{l}}_{-,(1)}(2r+1,i) -G^{\mathfrak{l}}_{-}(2r+1,i+1)- G^{\mathfrak{l}}_{-,1}(2r+1,i)\right) \otimes G_{-}(n-2r-1,s-i) \\ = \sum_{i=0}^{s-1} (G^{\mathfrak{l}}_{-}(2r+1,i) -G^{\mathfrak{l}}_{-}(2r+1,i+1))\otimes G_{-}(n-2r-1,s-i). \end{multline} Using the recursion hypothesis, this cancels, and thus $G_{-}(n,s)\in\mathbb{Q} \zeta^{\mathfrak{m}}(n)$. Using the analogous analytic equality, we conclude. \item[(v)] For odd sequences with alternating constraints ($>1$ or $\geq 1$ for instance), cuts between $k_{i}$ and $k_{j}$ will get simplified with some symmetric terms in the sum, except possibly (when of odd length) the first one (i.e. from the first $1$ to a first $0$) and the last one (i.e. from a last $0$ to the very last $1$). More precisely, with $O$ any odd integer, possibly all different: \begin{itemize} \item[$\cdot$] \begin{small} \begin{multline}\nonumber \hspace*{-1cm}D_{2r+1} \left( \sum_{w(\cdot)=2n \atop d(\cdot)=2p } \zeta^{\mathfrak{m}}(O, O>1, \cdots, O, O>1) \right) \\ = \sum_{i=0}^{p-1} \left( \sum_{w(\cdot)=2r+1 \atop d(\cdot)=2i+1 } \begin{array}{l} + \zeta^{\mathfrak{l}}(O, O>1, \cdots, O>1, O)\\ -\zeta^{\mathfrak{l}}(O, O >1, \ldots, O>1, O) \end{array} \right) \otimes \sum_{w(\cdot)=2n-2r-1 \atop d(\cdot)=2p-2i-1 } \zeta^{\mathfrak{m}}(O, O>1, \ldots, O, O>1)=0.
\end{multline} \end{small} \item[$\cdot$] \begin{small} \begin{multline}\nonumber \hspace*{-1cm}D_{2r+1} \left( \sum_{w(\cdot)=2n+1 \atop d(\cdot)=2p+1 } \zeta^{\mathfrak{m}}(O>1, O, \ldots, O, O>1) \right) \\ = \sum_{i=0}^{p-1} \left( \sum_{w(\cdot)=2r+1 \atop d(\cdot)=2i+1 } \zeta^{\mathfrak{l}}(O>1, O, \ldots, O>1) \right) \otimes \sum_{w(\cdot)=2n-2r \atop d(\cdot)=2p-2i-1 } \zeta^{\mathfrak{m}}(O, O>1, \cdots, O, O>1). \end{multline} \end{small} By the previous identity, the right-hand side is in $\mathbb{Q}\pi^{2n-2r}$, which proves the claimed result; it also gives the recursion for the coefficients: $\beta^{n,p}_{r}= \sum_{i=0}^{p-1} \beta^{r,i}_{r} \alpha^{n-r,p-i} $. \item[$\cdot$] \begin{small} \begin{multline}\nonumber \hspace*{-1cm}D_{2r+1} \left( \sum_{w(\cdot)=2n+1 \atop d(\cdot)=2p+1 } \zeta^{\mathfrak{m}}(O, O>1, \ldots, O>1, O) \right) =\\ \begin{array}{l} +\sum_{i=0}^{p-1} \left( \sum_{w(\cdot)=2r+1 \atop d(\cdot)=2i+1 } \zeta^{\mathfrak{m}}(O, O>1, \ldots, O) \right) \otimes \sum_{w(\cdot)=2n-2r \atop d(\cdot)=2p-2i-1 } \zeta^{\mathfrak{m}}(O, O>1, \cdots, O>1)\\ +\sum_{i=0}^{p-1} \left( \sum_{w(\cdot)=2r+1, \atop d(\cdot)=2i+1 } \begin{array}{l} + \zeta^{\mathfrak{m}}(O, O>1, \ldots, O)\\ - \zeta^{\mathfrak{m}}(O, O>1, \ldots, O) \end{array} \right) \otimes \sum_{w(\cdot)=2n-2r \atop d(\cdot)=2p-2i-1 } \zeta^{\mathfrak{m}}(O>1, O, \cdots, O>1, O) \end{array} \end{multline} \end{small} As above, by the recursion hypothesis, the right-hand side of the first sum is in $\mathbb{Q}\pi^{2n-2r}$, which proves the claimed result, the second sum being $0$; the rational coefficients $\gamma$ are given by a recursive relation. \end{itemize} \end{itemize} \end{proof}
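The analytic identities underlying such motivic lifts can be checked numerically. The following Python sketch (ours, purely illustrative; the helper name \texttt{zeta2} and the truncation level $N$ are our choices) verifies the first weighted depth-2 sum listed above, $\sum_{k=1}^{n-1}\zeta(2k,2n-2k)=\frac{3}{4}\zeta(2n)$, at weight $2n=6$, by truncating the defining double series.

```python
from math import pi

def zeta2(a, b, N=2000):
    # Truncated double zeta value: sum over 0 < m < n <= N of 1/(m^a * n^b),
    # in the convention where the last exponent b > 1 ensures convergence.
    total, inner = 0.0, 0.0
    for n in range(1, N + 1):
        if n > 1:
            total += inner / n**b  # inner = sum_{m < n} 1/m^a
        inner += 1.0 / n**a
    return total

# Weight-6 instance of sum_{k=1}^{n-1} zeta(2k, 2n-2k) = (3/4) zeta(2n):
lhs = zeta2(2, 4) + zeta2(4, 2)
rhs = 0.75 * pi**6 / 945.0  # zeta(6) = pi^6 / 945
```

The truncated left-hand side approaches the right-hand side from below; at $N=2000$ the two agree to roughly three decimal places, the error being dominated by the slowly converging tail of $\zeta(4,2)$.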
\section{Introduction and definitions} An embedding of the complete graph on $v$ vertices with each edge on both an $m$-cycle and an $n$-cycle is termed a {\em biembedding}. A biembedding is necessarily 2-colorable, with the faces that are $m$-cycles receiving one color and the faces that are $n$-cycles receiving the other color. So each pair of vertices occurs together in exactly one $m$-cycle and one $n$-cycle. A {\em $k$-cycle system} on $v$ points is a collection of simple $k$-cycles with the property that any pair of points appears in a unique $k$-cycle. Hence a biembedding is a simultaneous embedding of an $m$-cycle system and an $n$-cycle system on $v$ points. In this paper we will specifically consider the case of biembeddings of 3-cycle systems (Steiner triple systems) and $n$-cycle systems where both of these systems are on $6n+1$ points. There has been previous work done in the area of biembedding cycle systems, specifically Steiner triple systems. In 2004, both Bennett, Grannell, and Griggs \cite{B04} and Grannell and Korzhik \cite{G04} published papers on {\em nonorientable} biembeddings of pairs of Steiner triple systems. In \cite{G0915} the eighty Steiner triple systems of order 15 were also proven to have {\em orientable} biembeddings. In addition, Grannell and Korzhik \cite{G09M} gave methods to construct orientable biembeddings of two cyclic Steiner triple systems from current assignments on M{\"o}bius ladder graphs. Brown \cite{B10} constructed a class of biembeddings where one face is a triangle and one face is a quadrilateral.
Recently, Forbes, Griggs, Psomas, and {\v S}ir{\'a}{\v n} \cite{F14} proved the existence of biembeddings of pairs of Steiner triple systems in orientable pseudosurfaces with one pinch point; Griggs, Psomas, and {\v S}ir{\'a}{\v n} \cite{GPS.14} presented a uniform framework for biembedding Steiner triple systems obtained from the Bose construction in both orientable and nonorientable surfaces; and McCourt \cite{M14} gave nonorientable biembeddings of the complete graph on $n$ vertices with a Steiner triple system of order $n$ and a Hamiltonian cycle for all $n \equiv 3 \pmod {36}$ with $n \geq 39$. In 2015, Archdeacon \cite{A14} presented a framework for biembedding $m$-cycle and $n$-cycle systems on $v$ points on a surface for general $m$ and $n$. It involves the use of so-called Heffter arrays and is quite general in nature, working in both the orientable and nonorientable cases as well as for many possible values of $v$ for fixed $m$ and $n$. This is the first paper to explicitly use these Heffter arrays for biembedding purposes (they are actually of some interest in their own right). In this paper we consider basically the smallest (and tightest) case for which this method works; namely, we prove that for every $n\geq 3$ there exists a biembedding of a Steiner triple system and an $n$-cycle system on $6n+1$ points. We begin with the definitions of Heffter systems and Heffter arrays from \cite{A14}. Some of the definitions in \cite{A14} are more general, but these suffice for our purpose. Let $\mathbb Z_r$ be the cyclic group of odd order $r$ whose elements are denoted $0$ and $\pm i$ where $i = 1,2,\ldots,\frac{r-1}{2}$. A \textit{half-set} $L \subseteq \mathbb{Z}_r$ has $\frac{r-1}{2}$ nonzero elements and contains exactly one of $\{x, -x\}$ for each such pair. A \textit{Heffter system} $D(r,k)$ is a partition of $L$ into parts of size $k$ such that the sum of the elements in each part equals $0$ modulo $r$.
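As a concrete sanity check of this definition, the following Python sketch (ours, not part of the paper; the function name \texttt{is\_heffter\_system} is our own) tests whether a given partition is a Heffter system $D(r,k)$: the parts must have size $k$, their elements must form a half-set of $\mathbb{Z}_r$, and each part must sum to $0$ modulo $r$. We run it on the system $D(31,3)$ that appears in Example~\ref{Heff} below.

```python
def is_heffter_system(parts, r, k):
    # A Heffter system D(r, k): the parts partition a half-set of Z_r
    # (one of {x, -x} from each pair of nonzero elements), each part
    # has size k, and each part sums to 0 modulo r.
    elems = [x for part in parts for x in part]
    half = (r - 1) // 2
    if any(x % r == 0 for x in elems):          # 0 is excluded
        return False
    if len(elems) != half:                      # right number of elements
        return False
    # one representative per pair {x, -x}:
    if len({min(x % r, -x % r) for x in elems}) != half:
        return False
    return all(len(p) == k and sum(p) % r == 0 for p in parts)

# D_2 = D(31, 3) from Example \ref{Heff}:
D2 = [[6, -9, 3], [7, 5, -12], [-10, 2, 8], [-4, -11, 15], [1, 13, -14]]
print(is_heffter_system(D2, 31, 3))  # -> True
```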
Note that a Heffter system $D(n,3)$ provides a solution to Heffter's first difference problem (see \cite{handbook-skolem}) and hence provides the base blocks for a cyclic Steiner triple system on $n$ points. Two Heffter systems, $D_1 = D(2mn + 1, n)$ and $D_2 = D(2mn + 1, m)$, on the same half-set $L$ are \textit{orthogonal} if each part (of size $n$) in $D_1$ intersects each part (of size $m$) in $D_2$ in a single element. A \textit{Heffter array} $H(m,n)$ is an $m \times n$ array whose rows form a $D(2mn +1, n)$, call it $D_1$, and whose columns form a $D(2mn+1, m)$, call it $D_2$. Furthermore, since each cell $a_{i,j}$ contains the shared element in the $i^{th}$ part of $D_1$ and the $j^{th}$ part of $D_2$, these row and column Heffter systems are orthogonal. So an $H(m,n)$ is equivalent to a pair of orthogonal Heffter systems. In Example \ref{Heff} we give orthogonal Heffter systems $D_1 = D(31,5)$ and $D_2 = D(31,3)$ along with the resulting Heffter array $H(3,5)$. Note that the elements occurring in the array form a half-set of $\mathbb{Z}_{31}$. \begin{example} A Heffter system $D_1 = D(31,5)$ and a Heffter system $D_2 = D(31,3)$: $$\begin{array}{l} D_1 = \{\{6,7,-10,-4,1\}, \{-9,5,2,-11,13\}, \{3,-12,8,15,-14\}\}, \\ D_2 = \{\{6,-9,3\}, \{7,5,-12\}, \{-10,2,8\}, \{-4,-11,15\}, \{1,13,-14\}\}. \end{array} $$ \newpage The resulting Heffter array $H(3,5)$: \begin{center} $\begin{bmatrix} 6 & 7 & -10 & -4 & 1 \\ -9 & 5 & 2 & -11 & 13 \\ 3 & -12 & 8 & 15 & -14 \end{bmatrix}.$ \end{center} \label{Heff} \end{example} \vspace*{.1in} Let $A$ be a subset of $\mathbb Z_{2mn+1} \setminus \{0\}$. Let $(a_1,\ldots,a_k)$ be a cyclic ordering of the elements in $A$ and let $s_i = \sum_{j=1}^i a_j \pmod {2mn+1}$ be the $i^{th}$ partial sum. The ordering is \textit{simple} if $s_i \not = s_j$ for $i \not = j$. A Heffter system $D(2mn+1,k)$ is \textit{simple} if and only if each part has a simple ordering.
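To illustrate this notion, here is a short Python sketch (ours; the helper name \texttt{is\_simple} is our own) that checks whether an ordering is simple by computing its partial sums modulo $2mn+1$. Applied to the array $H(3,5)$ of Example~\ref{Heff} with $2mn+1=31$, it confirms that the natural left-to-right row orderings and top-to-bottom column orderings happen to be simple.

```python
def is_simple(ordering, modulus):
    # The ordering (a_1, ..., a_k) is simple when the partial sums
    # s_i = a_1 + ... + a_i (mod modulus) are pairwise distinct.
    partial, seen = 0, set()
    for a in ordering:
        partial = (partial + a) % modulus
        if partial in seen:
            return False
        seen.add(partial)
    return True

# The Heffter array H(3,5) of Example \ref{Heff}; modulus 2*3*5 + 1 = 31.
H = [[6, 7, -10, -4, 1],
     [-9, 5, 2, -11, 13],
     [3, -12, 8, 15, -14]]
rows_simple = all(is_simple(row, 31) for row in H)
cols_simple = all(is_simple(list(col), 31) for col in zip(*H))
print(rows_simple, cols_simple)  # -> True True
```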
Further, a Heffter array $H(m,n)$ is {\em simple} if and only if its row and column Heffter systems are simple. In the next section we will give the connection between Heffter arrays and biembeddings of the complete graph. In Section \ref{section3} we use this connection to show that for all $n\geq 3$ there exists a biembedding of the complete graph on $6n+1$ points such that each edge is on a simple face of size $n$ and a face of size 3. In Section \ref{section4} we discuss biembeddings of the complete graph on $10n+1$ points for $3\leq n \leq 100$ such that each edge is on a simple face of size $n$ and a simple face of size 5. \section{Heffter arrays and biembeddings} In this section we establish the relationship between Heffter arrays and biembeddings. The following proposition from \cite{A14} describes the connection between Heffter systems and $k$-cycle systems. \begin{prop}\label{prop2.1} \cite{A14} The existence of a simple Heffter system $D(v,k)$ implies the existence of a simple $k$-cycle system decomposition of the edges $E(K_{v})$. Furthermore, the resulting $k$-cycle system is cyclic. \end{prop} Let $D_1 = D(2mn+1,m)$ and $D_2 = D(2mn+1,n)$ be two orthogonal Heffter systems with orderings $\omega_1$ and $\omega_2$ respectively. The orderings are \textit{compatible} if their composition $\omega_1 \circ \omega_2$ is a cyclic permutation on the half-set. The following theorem relates $m \times n$ Heffter arrays with compatible simple orderings on the rows and columns to orientable biembeddings of $K_{2mn+1}$. \begin{thm} \cite{A14} Given a Heffter array $H(m,n)$ with simple compatible orderings $\omega_r$ on $D(2mn+1,n)$ and $\omega_c$ on $D(2mn+1,m)$, there exists an embedding of $K_{2mn+1}$ on an orientable surface such that every edge is on a simple cycle face of size $m$ and a simple cycle face of size $n$.
\label{bigthm_corr} \end{thm} In the following theorem we prove that if $m$ and $n$ are not both even, then there exist orderings $\omega_r$ and $\omega_c$ of the row and column Heffter systems, respectively, that are compatible. Archdeacon was aware of this result; however, it is not included in \cite{A14}. \begin{thm} Let $H$ be an $m\times n$ Heffter array where at least one of $m$ and $n$ is odd. Then there exist compatible orderings $\omega_r$ and $\omega_c$ on the row and column Heffter systems. \label{compatible} \end{thm} \begin{proof} Without loss of generality we assume that the number of columns of $H$ is odd, say $n=2t+1$ for some integer $t$. Let $H = (h_{ij})$ be an $m \times n$ Heffter array. We first define $\omega_r$, the ordering of the row Heffter system, as $\omega_r = (h_{11}, h_{12}, \ldots,h_{1n}) (h_{21}, h_{22}, \ldots,h_{2n}) \ldots (h_{m,1}, h_{m,2}, \ldots,h_{m,n})$. This ordering says that each row in the row Heffter system of $H$ is ordered cyclically from left to right. We next order the columns. For $1\leq c \leq t+1$ the ordering for column $c$ is $(h_{1,c},h_{2,c}, \ldots ,h_{m,c})$ (top to bottom, cyclically) and for $t+2\leq c \leq n$ the ordering for column $c$ is $(h_{m,c},h_{m-1,c}, \ldots ,h_{1,c})$ (bottom to top, cyclically). So considering the composition $\omega_r \circ \omega_c$ we have that $$\omega_r \circ \omega_c(h_{i,j}) = \left\{ \begin{array}{ll} h_{i+1,j+1} & \mbox{if } 1\leq j \leq t+1 \\ h_{i-1,j+1}& \mbox{if } t+2\leq j \leq n \end{array}\right. $$ \noindent where all first subscripts are written as elements from $\{1,2, \ldots ,m\}$ reduced modulo $m$ and all second subscripts are written as elements from $\{1,2, \ldots ,n\}$ reduced modulo $n$. So by construction, starting at any cell in column 1 we see that $\omega_r \circ \omega_c$ moves cyclically from left to right and goes ``down'' $t+1$ times and ``up'' $t$ times.
Hence for any $r$ and $c$, given an occurrence of $h_{r,c}$ in $\omega_r \circ \omega_c$, the next occurrence of column $c$ in $\omega_r \circ \omega_c$ will be $h_{r+1,c}$. It is now straightforward to see that $$ \begin{array}{rl} \omega_r \circ \omega_c = &(h_{1,1}, h_{2,2}, \ldots,h_{t+1,t+1}, h_{t,t+2}, \ldots h_{3,n}\\ &h_{2,1 }, h_{3,2} \ldots,h_{t+2,t+1}, h_{t+1,t+2},\ldots h_{4,n },\\ &h_{3,1 }, h_{4,2} \ldots,h_{t+3,t+1}, h_{t+2,t+2},\ldots h_{5,n }, \\ &\ \ \ \ \vdots\\ &h_{m,1 }, h_{m+1,2} \ldots,h_{m+t+1,t+1},\ldots h_{m,n }). \end{array} $$ Hence $\omega_r \circ \omega_c$ is written as a single cycle on the half-set, and thus $\omega_r$ and $\omega_c$ are compatible orderings. \end{proof} Now from Theorems \ref{bigthm_corr} and \ref{compatible} we have the following theorem relating Heffter arrays and biembeddings. \begin {thm}\label{heffter-biembed.thm} Given a simple Heffter array $H(m,n)$ where at least one of $m$ and $n$ is odd, there exists an embedding of $K_{2mn+1}$ on an orientable surface such that every edge is on a simple cycle face of size $m$ and a simple cycle face of size $n$. \end{thm} Restating the previous theorem in terms of biembeddings of cycle systems we have the following. \begin {thm} Given a simple Heffter array $H(m,n)$ where at least one of $m$ and $n$ is odd, there exists an orientable biembedding of an $m$-cycle system and an $n$-cycle system both on $2mn+1$ points. \end{thm} \section{Constructing simple $H(3,n)$} \label{section3} In this section we will construct simple $H(3,n)$ for all $n \geq 3$. For each $n$, we will begin with the $3 \times n$ Heffter array already constructed in \cite{Ain} and will provide a reordering so that the resulting Heffter array is simple. We first record the existence result for $H(3,n)$. \begin{thm} \cite{Ain} There exists a $3 \times n$ Heffter array for all $n \geq 3$.
\label{3xn} \end{thm} We now restate Theorem \ref{heffter-biembed.thm} in the special case when there are 3 rows in the Heffter array. \begin {corollary}\label{heffter-biembed.3xn} If there exists a simple Heffter array $H(3,n)$, then there exists an embedding of $K_{6n+1}$ on an orientable surface such that every edge is on a simple cycle face of size $3$ and a simple cycle face of size $n$. \end{corollary} Suppose $H = (h_{ij})$ is any $3 \times n$ Heffter array. We first note that each column $c$ in $H$, $1\leq c\leq n$, is simple just using the natural top-to-bottom ordering. Thus if we can reorder each of the three rows so they have distinct partial sums, the array will be simple. In Theorem \ref{reord2} below we will present a single reordering for each Heffter array that makes $\omega_r$ simple. In finding a {\em single} reordering which works for all three rows in $H$, we are actually rearranging the order of the columns without changing the elements which appear in the rows and columns. For notation, the {\em ordering} $(a_1, a_2, \ldots, a_n)$ denotes a reordering of the columns of $H$ so that in the resulting array $H'$, column $a_i$ of $H$ will appear in column $i$ of $H'$. In Example \ref{reord} we give the original array $H(3,8)$, the reordering $R$, and the reordered array $H'(3,8)$. \begin{example} The original $3 \times 8$ Heffter array from \cite{Ain}: $$H=\begin{bmatrix} -13 & -11 & 6 & 3 &10 & -8 & 14 & -1 \\ 4 & -7 & 17 & 19 & 5 & -16 & -2 & -20 \\ 9 & 18 & -23 & -22 & -15 & 24 & -12 & 21 \end{bmatrix}.$$ Note that in row 1, $s_1 = s_6 = -13 \equiv 36 \pmod {49}$, and so $\omega_r$ is not simple. Consider the reordering $R = (1,2,6,8,5,3,4,7)$.
The reordered $3 \times 8$ Heffter array is: $$H'=\begin{bmatrix} -13 & -11 & -8 & -1 &10 & 6 & 3 & 14 \\ 4 & -7 & -16 & -20 & 5 & 17 & 19 & -2 \\ 9 & 18 & 24 & 21 & -15 & -23 & -22 & -12 \end{bmatrix}.$$ We list the partial sums for each row as their smallest positive residue modulo 49: \\ \hspace*{1.5in} \begin{tabular}{l} Row 1: \ \ $\{36, 25, 17, 16, 26, 32, 35, 0\},$ \\ Row 2: \ \ $\{4, 46, 30, 10, 15, 32, 2, 0\},$ \\ Row 3: \ \ $\{9,27,2,23,8,34,12,0\}.$ \end{tabular} Since all the partial sums are distinct, $\omega_r$ is simple and hence $H'(3,8)$ is a simple Heffter array. \label{reord} \end{example} As the reader can see, the reordering of the columns results in a simultaneous reordering of the three rows, yielding a simple $H(3,n)$ in which the elements in each row and column remain the same as in the original array. We handle two small cases in the next theorem. \begin{thm} There exist simple $H(3,3)$ and $H(3,4)$. \label{reord1} \end{thm} \begin{proof} We present an $H(3,3)$ and an $H(3,4)$ from \cite{Ain}. It is easy to check that both are simple. \begin{center} $ \begin{array}{|c|c|c|} \hline -8&-2&-9 \\ \hline 7&-3&-4 \\ \hline 1&5&-6 \\ \hline \end{array}$ \hspace{.7in} $ \begin{array}{|c|c|c|c|} \hline 1& 2& 3& -6 \\ \hline 8& -12& -7& 11 \\ \hline -9& 10& 4& -5 \\ \hline \end{array}$ \end{center} \end{proof} \begin{comment} First recall that the column Heffter system of the $H(3,3)$ from Theorem \ref{3xn} is simple. Furthermore, the partial row sums $\pmod {19}$ are as follows: row 1: $\{11,9,0\}$, row 2: $\{7,4,0\}$, and row 3: $\{1,6,0\}$. Clearly each row has distinct partial sums, and therefore the Heffter array is simple. Similarly, we see that the $H(3,4)$ from Theorem \ref{3xn} is also simple. Here the partial sums $\pmod {25}$ are as follows: row 1: $\{1,3,6,0\}$, row 2: $\{8,21,14,0\}$, and row 3: $\{16,1,5,0\}$. Clearly each row has distinct partial sums and thus the Heffter array is simple.
\end{proof} \end{comment} In our next theorem we will construct simple $H(3,n)$ for all $n \geq 5$. The cases are broken up modulo 8 and we will consider each individually. We will begin with the $3\times n$ Heffter array $H$ given in \cite{Ain} and reorder the columns. In all cases we let $H'$ be the $3 \times n$ Heffter array where the columns of $H$ have been reordered as given in each case. We will always write the partial sums as their smallest positive residue modulo $6n+1$. For the following theorem we introduce the following notation for intervals in ${\mathbb Z}$: let $[a,b] = \{a, a+1, a+2, \ldots, b-1, b\}\ \ \mbox{ and } \ \ [a,b]_2 = \{a, a+2, a+4, \ldots, b-2, b\}.$ \begin{thm} There exist simple $3\times n$ Heffter arrays for all $n \geq 3$. \label{reord2} \end{thm} \begin{proof} When $n=3$ or 4 the result follows from Theorem \ref{reord1} above. We now assume that $n\geq 5$. We begin with $H$, a $3 \times n$ Heffter array from \cite{Ain}. Let $R$ be the reordering of the columns. For each $i = 1,2,3$ define $P_i$ as the set of partial sums of row $i$. We will divide each $P_i$ into four subsets, $P_{i,1}$, $P_{i,2}$, $P_{i,3}$ and $P_{i,4}$, based on a natural partition of the columns of $H$. For each case modulo 8 we will present the original construction from \cite{Ain}, followed by the reordering $R$ and the subsets $P_{i,j}$. \bigskip \noindent If $\bm{n \equiv 0 \pmod 8, n \geq 8}$: The case of $n=8$ is given above in Example \ref{reord}. For $n >8$, define $m = \frac{n-8}{8}$, so $n=8m+8$ and hence all the arithmetic in this case will be in ${\mathbb Z}_{48m+49}$.
The first four columns are: {\small $$A = \begin{bmatrix} -12m-13 & -10m-11 & 4m+6 & 4m+3 \\ 4m+4 & -8m-7 & 18m+17 & 18m+19 \\ 8m+9 & 18m+18 & -22m-23 & -22m-22 \end{bmatrix}.$$} For each $0 \leq r \leq 2m$ define {\small $$A_r = (-1)^r\begin{bmatrix} (8m+r+10) & (-8m+2r-8) & (14m-r+14) & (-4m+2r-1) \\ (8 m - 2 r + 5) & (-16 m - r - 16) & (-4m + 2r - 2) & (-18m - r - 20) \\ (-16 m + r - 15) & (24 m - r + 24) & (-10m - r - 12) & (22 m - r + 21) \end{bmatrix}.$$} Beginning with the matrix $A$, we add on the remaining $n-4$ columns by concatenating the $A_r$ arrays for each value of $r$ between $0$ and $2m$. So the final array will be: $$H= \begin{bmatrix} A & A_0 & A_1 & \cdots & A_{2m} \end{bmatrix}.$$ Let $R = (9,13,\ldots,n-3;1,11,15,\ldots,n-1;2,10,14,\ldots,n-2;6,8,12,16,\ldots, n,5,3,7,4)$ be a reordering of the columns. Note that we use semicolons to designate the partition of each $P_i$ into its four subsets. So in this case, $P_{i,1}$ is the set of partial sums from row $i$ and columns $\{9,13,\ldots,n-3\}$ in $H$, $P_{i,2}$ is the set of partial sums from row $i$ and columns $\{1,11,15,\ldots,n-1\}$ from $H$, $P_{i,3}$ is the set of partial sums from row $i$ and columns $\{2,10,14,\ldots,n-2\}$, and $P_{i,4}$ is the set of partial sums from row $i$ and columns $\{6,8,12,16,\ldots,n,$ $5,3,7\}$. We must show that the partial sums of the rows of the reordered array are all distinct.
To check this, we provide the following table where we give the ranges of the partial sums: \renewcommand{\tabcolsep}{2pt} \begin{center} \begin{tabular}{|rcl|} \hline $R$ &= &$\{9,13,...,n-3;1,11,15,...,n-1;2,10,14,...,n-2;6,8,12,16,...,n,5,3,7,4\}$ \\ \hline $P_{1,1}$ &= & $[39m+39, 40m+38] \cup [1,m]$ \\ \hline $P_{1,2}$&= & $[36m+36, 37m+36] \cup [23m+23,24m+22]$ \\ \hline $P_{1,3}$ &= &$[26m+25,28m+25]_2 \cup [32m+33, 34m+31]_2$ \\ \hline $P_{1,4}$ &= & $[16m+16,18m+16]_2 \cup [18m+17, 20m+17]_2\cup \{26m+26,30m+32,0\}$ \\ \hline \end{tabular} \end{center} One can count the number of sums in each of these partitions to show that all the sums must be distinct. For example, the first partition contains $((n-3) -9)/4 + 1 = 2m$ partial sums. Since there are $2m$ elements in the range $[39m+39, 40m+38] \cup [1,m]$, it must be that all the partial sums are distinct. We see that the ranges of the partial sums of elements in the first row are: \noindent $P_1 =\{0\} \cup [1,m] \cup [16m+16,18m+16]_2 \cup [18m+17, 20m+17]_2 \cup [23m+23,24m+22]$ $\cup [26m+25,28m+25]_2 \cup \{26m+26,30m+32\} \cup [32m+33, 34m+31]_2 \cup [36m+36, 37m+36]$ $ \cup [39m+39, 40m+38]$.
\medskip \noindent For the second row we get the following ranges: \begin{center} \begin{tabular}{|rcl|} \hline $P_{2,1} $&= &$ [40m+46,42m+44]_2 \cup [46m+49,48m+47]_2$ \\ \hline $P_{2,2} $&= &$ [2m+4,4m+6]_2 \cup [4m+8,6m+4]_2$ \\ \hline $P_{2,3} $&= &$ [12m+14,13m+13] \cup [43m+46,44m+46]$ \\ \hline $P_{2,4} $&= &$ [27m+30,28m+30] \cup [8m+10,9m+10] \cup \{16m+15,34m+32,30m+30,0\}$ \\ \hline \end{tabular} \end{center} \noindent Thus the ranges of the partial sums of elements in the second row are: \noindent $P_2 = \{0\} \cup [2m+4,4m+6]_2 \cup [4m+8,6m+4]_2 \cup [8m+10,9m+10] \cup [12m+14,13m+13] \cup \{16m + 15\} \cup [27m+30,28m+30] \cup \{30m+30\} \cup \{34m+32\} \cup [40m+46,42m+44]_2 \cup [43m+46,44m+46] \cup [46m+49,48m+47]_2$.\\ \noindent For the third row we have: \begin{center} \begin{tabular}{|rcl|} \hline $P_{3,1} $&= &$ [1,m] \cup [15m+15,16m+14]$\\ \hline $P_{3,2} $&= &$ [8m+9,9m+9] \cup [19m+22,20m+21]$ \\ \hline $P_{3,3} $&= &$ [2m+4,3m+3] \cup [25m+27,26m+27]$ \\ \hline $P_{3,4} $&= &$ [m+2,2m+2] \cup [22m+22,23m+23] \cup \{6m+8,32m+34,0\}$ \\ \hline \end{tabular} \end{center} \noindent Thus the ranges of the partial sums of elements in the third row are: \noindent $P_3 = \{0\} \cup [1,m] \cup [m+2,2m+2] \cup [2m+4,3m+3] \cup \{6m+8\} \cup [8m+9,9m+9] \cup [15m+15,16m+14] \cup [19m+22,20m+21] \cup [22m+22,23m+23] \cup [25m+27,26m+27] \cup \{32m+34\}.$ \medskip Several things are worth noting. Notice that each partition of the partial sums covers two disjoint ranges of numbers (some sets contain a few extra numbers). For example, $P_{1,1}$ contains the range $39m+39$ to $40m+38$ and the range $1$ to $m$. This is by design. Also, within these ranges the sets of partial sums either contain every number in the range, or every other number in the range. Note that any overlap of the sets of partial sums occurs with one set covering the odds and one covering the evens.
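The tabulated ranges can also be spot-checked numerically. The following sketch (hypothetical code, assuming the construction given above with $n = 8m+8$ and modulus $48m+49$) rebuilds $H$ for $m = 1$, applies the reordering $R$, and confirms that the $n$ partial sums of each row are distinct, i.e. that the reordered $H(3,16)$ is simple.

```python
# Hypothetical check (not the authors' code): for m = 1 (n = 16), verify that
# every row of the reordered Heffter array has n distinct partial sums
# modulo 48m + 49 = 97.

m, n = 1, 16
mod = 48 * m + 49

A = [[-12*m - 13, -10*m - 11,   4*m + 6,    4*m + 3],
     [  4*m + 4,   -8*m - 7,   18*m + 17,  18*m + 19],
     [  8*m + 9,   18*m + 18, -22*m - 23, -22*m - 22]]

def block(r):
    s = (-1) ** r
    return [[s*(8*m + r + 10),   s*(-8*m + 2*r - 8),
             s*(14*m - r + 14),  s*(-4*m + 2*r - 1)],
            [s*(8*m - 2*r + 5),  s*(-16*m - r - 16),
             s*(-4*m + 2*r - 2), s*(-18*m - r - 20)],
            [s*(-16*m + r - 15), s*(24*m - r + 24),
             s*(-10*m - r - 12), s*(22*m - r + 21)]]

H = [row[:] for row in A]
for r in range(2 * m + 1):
    for i in range(3):
        H[i].extend(block(r)[i])

# R = (9,13,...,n-3; 1,11,15,...,n-1; 2,10,14,...,n-2; 6,8,12,...,n,5,3,7,4)
R = (list(range(9, n - 2, 4))
     + [1] + list(range(11, n, 4))
     + [2] + list(range(10, n - 1, 4))
     + [6, 8] + list(range(12, n + 1, 4)) + [5, 3, 7, 4])
assert sorted(R) == list(range(1, n + 1))   # R permutes the n columns

for row in H:
    partial, s = [], 0
    for c in R:
        s = (s + row[c - 1]) % mod
        partial.append(s)
    assert partial[-1] == 0                  # each full row sums to 0
    assert len(set(partial)) == n            # all partial sums distinct
```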
Therefore one can check by looking at the ranges that the partial sums $P_i$ in each row are distinct. Similar arguments can be used for each case of $n$ modulo $8$. In all subsequent cases we will provide the reader with the original construction, the reordering, and a table of the partial sums. For further details see \cite{M15}. \bigskip \noindent If $\bm{n \equiv 1 \pmod 8, n \geq 9}$: Here $m = \frac{n-9}{8}$ and note that all the arithmetic in this case will be in ${\mathbb Z}_{48m+55}$. The first five columns are: {\small $$A = \begin{bmatrix} 8m + 7 & 10m + 12 & 16m + 18 & 4m+6 & 4m+3 \\ 8m + 10 & 8m + 9 & -12m - 14 & -22m - 26 & 18m + 22 \\ -16m - 17 & -18m - 21 & -4m - 4 & 18m + 20 & -22m - 25 \end{bmatrix}.$$} For each $0 \leq r \leq 2m$ define {\small \begin{center} $A_r = (-1)^r\begin{bmatrix} (-8m + 2r - 5) & (-10m - r - 13) & (-24m + r - 27) & (-4m + 2r - 1) \\ (16m - r + 16) & (-4m + 2r - 2) & (8m - 2r + 8) & (-18m - r - 23) \\ (-8m - r - 11) & (14m - r + 15) & (16m + r + 19) & (22m - r + 24) \end{bmatrix}.$ \end{center} } To construct $H$, begin with $A$ and add on the remaining $n-5$ columns by concatenating the $A_r$ arrays for each value of $r$ between $0$ and $2m$. Let the reordering be \noindent \begin{center} $R = (8,12,16,...,n-1;3,7,11,15,...,n-2;5,6,10,14,...,n-3;1,9,13,17,...,n,2,4)$.
\end{center} Then \begin{center} \begin{tabular}{|rcl|} \hline $P_{1,1} $&=&$ [24m + 28, 25m +28] \cup [47m + 55, 48m +54]$ \\ \hline $P_{1,2} $&=&$ [41m+46 , 42m + 46] \cup [30m +33, 31m +33]$ \\ \hline $P_{1,3} $&=&$ [32m+36 ,34m + 36]_2 \cup [26m + 31, 28m+31]_2$ \\ \hline $P_{1,4} $&=&$ [34m+38, 36m + 38]_2 \cup [32m+37,34m+37]_2 \cup \{44m+49,0\}$ \\ \hline \end{tabular} \end{center} \noindent Thus $P_1 = \{0\} \cup [24m + 28, 25m +28] \cup [26m + 31, 28m+31]_2 \cup [30m +33, 31m +33] \cup [32m+36 ,34m + 36]_2\cup [32m+37,34m+37]_2 \cup [34m+38, 36m + 38]_2 \cup [41m+46 , 42m + 46] \cup \{44m+49\}$ \\ $\cup [47m + 55, 48m +54]$.\\ \begin{center} \begin{tabular}{|rcl|} \hline $P_{2,1} $&=&$ [2,2m] \cup [6m+8,8m+8]$ \\ \hline $P_{2,2} $&=&$ [38m + 47,40m+47]_2 \cup [40m+49, 42m+49]_2$ \\ \hline $P_{2,3} $&=&$ [10m+14,11m+14] \cup [25m+30,26m+30]$ \\ \hline $P_{2,4} $&=&$ [14m+17,15m+17] \cup [33m+40,34m+40] \cup \{22m+26,0\}$ \\ \hline \end{tabular} \end{center} \noindent Thus $P_2 = \{0\} \cup [2,2m] \cup [6m+8,8m+8] \cup [10m+14,11m+14] \cup [14m+17,15m+17] \cup \{22m+26\} \cup [25m+30,26m+30] \cup [33m+40,34m+40] \cup [38m + 47,40m+47]_2 \cup [40m+49, 42m+49]_2.$ \\ \begin{center} \begin{tabular}{|rcl|} \hline $P_{3,1} $&=&$ [47m+54, 48m+54] \cup [16m+19,17m+19]$ \\ \hline $P_{3,2} $&=&$ [13m+15,14m+15] \cup [26m+30,27m+30]$ \\ \hline $P_{3,3} $&=&$ [43m+49, 44m+49] \cup [4m+5,5m+5]$ \\ \hline $P_{3,4} $&=&$ [27m+32, 28m+32] \cup [1,m+1] \cup \{30m+35,0\}$ \\ \hline \end{tabular} \end{center} \noindent So $P_3 = \{0\} \cup [1,m+1] \cup [4m+5,5m+5] \cup [13m+15,14m+15] \cup [16m+19,17m+19]$ \\ $ \cup [26m+30,27m+30] \cup [27m+32, 28m+32] \cup \{30m+35\} \cup [43m+49, 44m+49] \cup [47m+54, 48m+54]. $ \bigskip \noindent If $\bm{n \equiv 2 \pmod 8, n \geq 10}$: In this case $m = \frac{n-10}{8}$. All the arithmetic in this case will be in ${\mathbb Z}_{48m+61}$. 
The first six columns are: {\small $$A = \begin{bmatrix} 24m + 30 & 16m + 21 & 10m + 13 & 8m + 8 & 4m + 5 & 8m + 9 \\ 24m + 29 & -8m - 11 & -10m - 14 & 12m + 16 & 16m + 20 & 12m + 17 \\ 2 & -8m - 10 & 1 & -20m - 24 & -20m - 25 & -20m - 26 \end{bmatrix}.$$} For each $0 \leq r \leq 2m$ define {\small $$A_r = (-1)^r\begin{bmatrix} (-8m + 2r - 7) & (10m + r + 15) & (-22m + r - 27) & (-8m + 2r - 6) \\ (16m - r + 19) & (4m - 2r + 3) & (4m - 2r + 4) & (-16m - r - 22) \\ (-8m - r - 12) & (-14m + r - 18) & (18m + r + 23) & (24m - r + 28) \end{bmatrix}.$$ } To construct $H$, begin with $A$ and add on the remaining $n-6$ columns by concatenating the $A_r$ arrays for each value of $r$ between $0$ and $2m$. Let the reordering be \begin{center} $R = (10,14,...,n;n-3,n-7,...,7;4,6,8,12,...,n-2;5,9,13,...,n-1,2,3,1).$ \end{center} Then \begin{center} \begin{tabular}{|rcl|} \hline $P_{1,1} $&=&$ [40m+55,42m+55]_2 \cup [46m+61,48m+59]_2$ \\ \hline $P_{1,2} $&=&$ [36m+48,38m+48]_2 \cup [42m+57,44m+55]_2 \cup \{44m+56\}$ \\ \hline $P_{1,3} $&=&$ [3m+4,4m+4] \cup [14m+19,15m+19]$ \\ \hline $P_{1,4} $&=&$ [18m+24,19m+24] \cup [45m+58,46m+58] \cup \{14m+18, 24m+31,0\}$ \\ \hline \end{tabular}\end{center} \noindent Thus $P_1 = \{0\} \cup [3m+4,4m+4] \cup \{14m+18\} \cup [14m+19,15m+19] \cup [18m+24,19m+24] \cup \{24m+31\} \cup [36m+48,38m+48]_2 \cup [40m+55,42m+55]_2 \cup [42m+57,44m+55]_2 \cup \{44m+56\} \cup$ \\ $ [45m+58,46m+58] \cup [46m+61,48m+59]_2$. 
\\ \begin{center} \begin{tabular}{|rcl|} \hline $P_{2,1} $&=&$ [1,m] \cup [31m+39,32m+39] $\\ \hline $P_{2,2} $&=&$ [45m+58,46m+58] \cup [30m+39,31m+38] \cup \{10m+13\} $\\ \hline $P_{2,3} $&=&$ [22m+30,24m+30]_2 \cup [24m+33,26m+33]_2 $ \\ \hline $P_{2,4} $&=&$ [40m+53,42m+53]_2 \cup [42m+57,44m+57]_2 \cup \{34m+46, 24m+32,0\}$ \\ \hline \end{tabular}\end{center} \noindent Thus $P_2 = \{0\} \cup [1,m] \cup \{10m+13\} \cup [22m+30,24m+30]_2 \cup \{24m+32\} \cup [24m+33,26m+33]_2 \cup [30m+39,31m+38] \cup [31m+39,32m+39] \cup \{34m+46\} \cup [40m+53,42m+53]_2$ \\ $ \cup [42m+57,44m+57]_2 \cup [45m+58,46m+58] $. \begin{center} \begin{tabular}{|rcl|} \hline $P_{3,1} $&=&$ [1,m] \cup [23m+28,24m+28]$\\ \hline $P_{3,2} $&=&$ [13m+16,14m+16] \cup [22m+28,23m+27] \cup \{42m+53\}$\\ \hline $P_{3,3} $&=&$ [8m+9,9m+9] \cup [21m+27,22m+27] $ \\ \hline $P_{3,4} $&=&$ [7m+7,8m+7] \cup [36m+45,37m+45] \cup \{48m+58, 48m+59,0\}$ \\ \hline \end{tabular}\end{center} \noindent So $P_3 = \{0\} \cup [1,m] \cup [7m+7,8m+7] \cup [8m+9,9m+9] \cup [13m+16,14m+16] \cup [21m+27,22m+27] \cup [22m+28,23m+27] \cup [23m+28,24m+28] \cup [36m+45,37m+45] \cup \{42m+53\} \cup \{48m+58, 48m+59\}.$ \bigskip \noindent If $\bm{n \equiv 3 \pmod 8, n \geq 11}$: Define $m = \frac{n-11}{8}$. All the arithmetic in this case will be in ${\mathbb Z}_{48m+67}$. 
The first seven columns are: {\small $$A = \begin{bmatrix} 24m + 33 & 8m + 11 & 8m + 13 & 4m + 6 & 1 & -12m - 17 & 8m + 10 \\ 24m + 32 & -16m - 23 & -12m - 18 & 10m + 15 & 20m + 27 & -8m - 9 & 14m + 20 \\ 2 & 8m + 12 & 4m + 5 & -14m - 21 & -20m - 28 & 20m + 26 & -22m - 30 \end{bmatrix}.$$ } For each $0 \leq r \leq 2m$ define {\small $$A_r = (-1)^r\begin{bmatrix} (-16m + r - 22) & (24m - r + 31) & (4m - 2r + 4) & (-4m + 2r - 3) \\ (8m - 2r + 8) & (-8m + 2r - 7) & (-22m + r - 29) & (-10m - r - 16) \\ (8m + r + 14) & (-16m - r - 24) & (18m + r + 25) & (14m - r + 19) \end{bmatrix}.$$} To construct $H$, begin with $A$ and add on the remaining $n-7$ columns by concatenating the $A_r$ arrays for each value of $r$ between $0$ and $2m$. Let the reordering be \begin{center} $R = (9,13,...,n-2;8,12,...,n-3;1,11,15,..,n;6,7,10,14,..., n-1,5, 2, 3, 4).$ \end{center} Then \begin{center} \begin{tabular}{|rcl|} \hline $P_{1,1} $&=&$ [1,m] \cup [23m+31,24m+31] $ \\ \hline $P_{1,2} $&=&$ [7m+9,8m+9] \cup [22m+31,23m+30]$ \\ \hline $P_{1,3} $&=&$ [28m+39,30m+39]_2 \cup [30m+42,32m+42]_2 $\\ \hline $P_{1,4} $&=&$ \{18m+22\} \cup [26m+32,28m+32]_2 \cup [28m+38,30m+36]_2 $\\ && $ \cup \{28m+36,28m+37,36m+48,44m+61,0\}$\\ \hline \end{tabular}\end{center} \noindent Thus $P_1 = \{0\} \cup [1,m] \cup [7m+9,8m+9] \cup \{18m+22\} \cup [22m+31,23m+30] \cup [23m+31,24m+31] \cup [26m+32,28m+32]_2 \cup \{28m+36, 28m+37\} \cup [28m+38,30m+36]_2 \cup [28m+39,30m+39]_2 \cup [30m+42,32m+42]_2 \cup \{36m+48, 44m+61\}. 
$ \begin{center} \begin{tabular}{|rcl|}\hline $P_{2,1} $&=&$ [40m+60,42m+60]_2 \cup [46m+67,48m+65]_2 $ \\ \hline $P_{2,2} $&=&$ [42m+62,44m+60]_2 \cup [1,2m+1]_2 $ \\ \hline $P_{2,3} $&=&$ [13m+17,14m+17] \cup [24m+33,25m+33] $\\ \hline $P_{2,4} $&=&$ \{5m+8\} \cup [45m+66,46m+66] \cup [18m+28,19m+28]$\\ && $ \cup \{18m+26,2m+3,38m+52,0\}$ \\ \hline \end{tabular}\end{center} \noindent Thus $P_2 = \{0\} \cup [1,2m+1]_2 \cup \{2m+3, 5m+8\} \cup [13m+17,14m+17] \cup \{18m+26\} \cup [18m+28,19m+28] \cup [24m+33,25m+33] \cup \{38m+52\} \cup [40m+60,42m+60]_2 \cup [42m+62,44m+60]_2 \cup [45m+66,46m+66] \cup [46m+67,48m+65]_2. $ \begin{center} \begin{tabular}{|rcl|} \hline $P_{3,1} $&=&$ [1,m] \cup [31m+43,32m+43]$\\ \hline $P_{3,2} $&=&$ [30m+43,31m+42] \cup [39m+57,40m+57]$ \\ \hline $P_{3,3} $&=&$ [40m+59,41m+59] \cup [5m+11,6m+11] $\\ \hline $P_{3,4} $&=&$ \{25m+37\} \cup [2m+7,3m+7] \cup [21m+32,22m+32]$ \\ && $\cup \{2m+4,10m+16,14m+21,0\}$ \\ \hline \end{tabular}\end{center} \noindent So $P_3 = \{0\} \cup [1,m] \cup \{2m+4\} \cup [2m+7,3m+7] \cup [5m+11,6m+11] \cup \{10m+16, 14m+21\} \cup [21m+32,22m+32] \cup \{25m+37\} \cup [30m+43,31m+42] \cup [31m+43,32m+43] \cup [39m+57,40m+57] \cup [40m+59,41m+59] .$\\ \bigskip\noindent If $\bm{n \equiv 4 \pmod 8, n \geq 12}$: Let $m = \frac{n-12}{8}$. All the arithmetic in this case will be in ${\mathbb Z}_{48m+73}$. 
The first eight columns are: \\ {\small \resizebox{\linewidth}{!}{% $A = \begin{bmatrix} 8m + 13 & 10m + 16 & 22m + 34 & -4m - 5 & 4m + 7 & -22m - 35 & -12m - 18 & -1 \\ 4m + 6 & 8m + 11 & -4m - 8 & 22m + 33 & -14m - 22 & 4m + 10 & -2 & -20m - 30 \\ -12m - 19 & -18m - 27 & -18m - 26 & -18m - 28 & 10m + 15 & 18m + 25 & 12m + 20 & 20m + 31 \end{bmatrix}.$} } For $0 \leq r \leq 2m$ define {\small $$A_r = (-1)^r\begin{bmatrix} (-16m + r - 23) & (-8m + 2r -12) & (14m - r + 21) & (4m - 2r + 3) \\ (8m + r + 14) & (-16m - r - 24) & (-10m - r - 17) & (18m + r + 29) \\ (8m - 2r + 9) & (24m - r + 36) & (-4m + 2r - 4) & (-22m + r - 32) \end{bmatrix}.$$} To construct $H$, begin with $A$ and add on the remaining $n-8$ columns by concatenating the $A_r$ arrays for each value of $r$ between $0$ and $2m$. Let the reordering be \begin{center} $R = (9,13,...,n-3;11,15,...,n-1;4,10,14,...,n-2;12,16,...,n,1,2,6,5, 7,8,3).$ \end{center} Then \begin{center} \begin{tabular}{|rcl|} \hline $P_{1,1} $&=&$ [32m+50,33m+50] \cup [47m+73,48m+72] $\\ \hline $P_{1,2} $&=&$ [46m+71,47m+71] \cup [33m+51,34m+50] $ \\ \hline $P_{1,3} $&=&$ [34m+54,36m+54]_2 \cup [40m+66,42m+66]_2 $ \\ \hline $P_{1,4} $&=&$ [38m+59,40m+57]_2 \cup [36m+56,38m+54]_2 $\\ &&$\cup \{38m+57,38m+58, 46m+70,8m+13,34m+51,26m+40,26m+39,0\}$ \\ \hline \end{tabular}\end{center} \noindent Thus $P_1 = \{0, 8m+13, 26m+39,26m+40\} \cup [32m+50,33m+50] \cup [33m+51,34m+50] \cup \{34m+51\} \cup [34m+54,36m+54]_2 \cup [36m+56,38m+54]_2 \cup \{38m+57,38m+58\} \cup [38m+59,40m+57]_2 \cup [40m+66,42m+66]_2 \cup \{46m+70\} \cup [46m+71,47m+71] \cup [47m+73,48m+72]. 
$ \begin{center} \begin{tabular}{|rcl|} \hline $P_{2,1} $&=&$ [8m+14,9m+14] \cup [47m+73,48m+72] $\\ \hline $P_{2,2} $&=&$ [9m+15,10m+14] \cup [46m+70,47m+70] $\\ \hline $P_{2,3} $&=&$ [3m+6,4m+6] \cup [20m+30,21m+30] $ \\ \hline $P_{2,4} $&=&$ [2m+6,3m+5] \cup [21m+35,22m+35] $\\ &&$\cup \{26m+41,34m+52,38m+62,24m+40,24m+38, 4m+8,0\}$ \\ \hline \end{tabular}\end{center} \noindent Thus $P_2 = \{0\} \cup [2m+6,3m+5] \cup [3m+6,4m+6] \cup \{4m+8\} \cup [8m+14,9m+14] \cup [9m+15,10m+14] \cup [20m+30,21m+30] \cup [21m+35,22m+35] \cup \{24m+38,24m+40,26m+41,34m+52,38m+62\} \cup [46m+70,47m+70] \cup [47m+73,48m+72]. $ \begin{center} \begin{tabular}{|rcl|} \hline $P_{3,1} $&=&$ [2,2m]_2 \cup [6m+9,8m+7]_2 $\\ \hline $P_{3,2} $&=&$ [2m+5,4m+5]_2 \cup [4m+9,6m+7]_2 $ \\ \hline $P_{3,3} $&=&$ [9m+13,10m+13] \cup [34m+50,35m+50] $\\ \hline $P_{3,4} $&=&$ [8m+13,9m+12] \cup [35m+54,36m+54] $\\ && $\cup \{24m+35,6m+8,24m+33,34m+48,46m+38, 18m+26,0\}$ \\ \hline \end{tabular}\end{center} \noindent So $P_3 = \{0\} \cup [2,2m]_2 \cup [2m+5,4m+5]_2 \cup [4m+9,6m+7]_2 \cup \{6m+8\} \cup [6m+9,8m+7]_2 \cup [8m+13,9m+12] \cup [9m+13,10m+13] \cup \{18m+26, 24m+33, 24m+35, 34m+48\} \cup [34m+50,35m+50] \cup [35m+54,36m+54] \cup \{46m+38\}.$ \bigskip \noindent If $\bm{n \equiv 5 \pmod 8, n \geq 5}$: Here $m = \frac{n-5}{8}$. All the arithmetic in this case will be in ${\mathbb Z}_{48m+31}$. 
The first five columns are: {\small $$A = \begin{bmatrix} 8m + 6 & 10m + 7 & -16m - 10 & -4m - 4 & 4m + 1 \\ -16m - 9 & 8m + 5 & 4m + 2 & -18m - 11 & 18m + 13 \\ 8m + 3 & -18m - 12 & 12m + 8 & 22m + 15 & -22m - 14 \end{bmatrix}.$$} For each $0 \leq r \leq 2m-1$ define {\small $$A_r = (-1)^r\begin{bmatrix} (-8m + 2r - 1) & (-14m + r - 8) & (16m + r + 11) & (4m - 2r - 1) \\ (16m - r + 8) & (4m - 2r) & (8m - 2r + 4) & (18m + r + 14) \\ (-8m - r - 7) & (10m + r + 8) & (-24m + r - 15) & (-22m + r - 13) \end{bmatrix}.$$} To construct $H$, begin with $A$ and add on the remaining $n-5$ columns by concatenating the $A_r$ arrays for each value of $r$ between $0$ and $2m-1$. Let the reordering be \begin{center} $R = (9,13,...,n;5,6,10,...,n-3;3,7,11,...,n-2;1,8,12,...,n-1,4,2).$ \end{center} Then \begin{center} \begin{tabular}{|rcl|} \hline $P_{1,1} $&=&$ [2,2m]_2 \cup [2m+1,4m-1]_2$ \\ \hline $P_{1,2} $&=&$ [46m+31,48m+29]_2 \cup [4m+1,6m+1]_2 $ \\ \hline $P_{1,3} $&=&$ [35m+22,36m+22] \cup [22m+14,23m+13] $ \\ \hline $P_{1,4} $&=&$ [42m+28,43m+28] \cup [11m+8,12m+7] \cup \{38m+24,0\}$ \\ \hline \end{tabular}\end{center} \noindent Thus $P_1 = \{0\} \cup [2,2m]_2 \cup [2m+1,4m-1]_2 \cup [4m+1,6m+1]_2 \cup [11m+8,12m+7] \cup [22m+14,23m+13] \cup [35m+22,36m+22] \cup \{38m+24\} \cup [42m+28,43m+28] \cup [46m+31,48m+29]_2.$ \begin{center} \begin{tabular}{|rcl|} \hline $P_{2,1} $&=&$ [47m+31,48m+30] \cup [18m+14,19m+13] $ \\ \hline $P_{2,2} $&=&$ [17m+13,18m+13] \cup [32m+22,33m+21] $\\ \hline $P_{2,3} $&=&$ [22m+15,24m+15]_2 \cup [24m+17,26m+15]_2 $\\ \hline $P_{2,4} $&=&$ [8m+6,10m+6]_2 \cup [14m+12,16m+10]_2 \cup \{40m+26,0\}$ \\ \hline \end{tabular}\end{center} \noindent Thus $P_2 = \{0\} \cup [8m+6,10m+6]_2 \cup [14m+12,16m+10]_2 \cup [17m+13,18m+13] \cup [18m+14,19m+13] \cup [22m+15,24m+15]_2 \cup [24m+17,26m+15]_2 \cup [32m+22,33m+21] \cup \{40m+26\} \cup [47m+31,48m+30].$ \begin{center} \begin{tabular}{|rcl|} \hline $P_{3,1} $&=&$ [47m+31,48m+30] \cup [26m+18,27m+17] $ \\ 
\hline $P_{3,2} $&=&$ [16m+11,17m+10] \cup [25m+17,26m+17] $\\ \hline $P_{3,3} $&=&$ [2,m+1] \cup [37m+25,38m+25] $ \\ \hline $P_{3,4} $&=&$ [21m+13,22m+12] \cup [44m+28,45m+28] \cup \{18m+12,0\}$ \\ \hline \end{tabular}\end{center} \noindent So $P_3 = \{0\} \cup [2,m+1] \cup [16m+11,17m+10] \cup \{18m+12\} \cup [21m+13,22m+12] \cup [25m+17,26m+17]\cup [26m+18,27m+17] \cup [37m+25,38m+25] \cup [44m+28,45m+28] \cup [47m+31,48m+30]. $ \bigskip \noindent If $\bm{n \equiv 6 \pmod 8, n \geq 6}$: In this case, $m = \frac{n-6}{8}$. All the arithmetic in this case will be in ${\mathbb Z}_{48m+37}$. The first six columns are: {\small $$A = \begin{bmatrix} 24m + 18 & -16m - 13 & -1 & 8m + 4 & -4m - 3 & -8m - 5 \\ 2 & 8m + 6 & -10m - 8 & -20m - 14 & -16m - 12 & -12m - 11 \\ 24m + 17 & 8m + 7 & 10m + 9 & 12m + 10 & 20m + 15 & 20m + 16 \end{bmatrix}.$$} For each $0 \leq r \leq 2m-1$ define {\small $$A_r = (-1)^r\begin{bmatrix} (-8m + 2r - 3) & (-4m + 2r - 1) & (-4m + 2r - 2) & (8m - 2r + 2) \\ (16m - r + 11) & (-10m - r - 10) & (22m - r + 16) & (16m + r + 14) \\ (-8m - r - 8) & (14m - r + 11) & (-18m - r - 14) & (-24m + r - 16) \end{bmatrix}.$$} To construct $H$, begin with $A$ and add on the remaining $n-6$ columns by concatenating the $A_r$ arrays for each value of $r$ between $0$ and $2m-1$. Let the reordering be \begin{center} $R = (10,14,...,n;2,9,13,...,n-1;4,7,11,...,n-3;1,8,12,...,n-2,5,3,6).$ \end{center} Then \begin{center} \begin{tabular}{|rcl|} \hline $P_{1,1} $&=&$ [2,2m]_2 \cup [6m+4,8m+2]_2 $ \\ \hline $P_{1,2} $&=&$ [30m+22,32m+20]_2 \cup [32m+24,34m+24]_2 $ \\ \hline $P_{1,3} $&=&$ [32m+25,34m+23]_2 \cup [38m+28,40m+28]_2 $ \\ \hline $P_{1,4} $&=&$ [10m+8,12m+6]_2 \cup [12m+9,14m+9]_2 \cup \{8m+6, 8m+5,0\}$ \\ \hline \end{tabular}\end{center} \noindent Thus $P_1 = \{0\} \cup [2,2m]_2 \cup [6m+4,8m+2]_2 \cup \{8m+5, 8m+6\} \cup [10m+8,12m+6]_2 \cup [12m+9,14m+9]_2 \cup [30m+22,32m+20]_2 \cup [32m+24,34m+24]_2 \cup [32m+25,34m+23]_2 \cup [38m+28,40m+28]_2. 
$ \begin{center} \begin{tabular}{|rcl|} \hline $P_{2,1} $&=&$ [16m+14,17m+13] \cup [47m+37,48m+36] $\\ \hline $P_{2,2} $&=&$ [7m+6,8m+6] \cup [28m+23,29m+22] $\\ \hline $P_{2,3} $&=&$ [36m+29,37m+29] \cup [3m+4,4m+3]$\\ \hline $P_{2,4} $&=&$ [26m+22,27m+21] \cup [37m+31,38m+31] \cup \{22m+19, 12m+11,0\} $ \\ \hline \end{tabular}\end{center} \noindent Thus $P_2 = \{0\} \cup [3m+4,4m+3] \cup [7m+6,8m+6] \cup \{12m+11\} \cup [16m+14,17m+13] \cup \{22m+19\} \cup [26m+22,27m+21] \cup [28m+23,29m+22] \cup [36m+29,37m+29] \cup [37m+31,38m+31] \cup [47m+37,48m+36]. $ \begin{center} \begin{tabular}{|rcl|} \hline $P_{3,1} $&=&$ [24m+21,25m+20] \cup [47m+37,48m+36] $ \\ \hline $P_{3,2} $&=&$ [36m+31,37m+30] \cup [7m+7,8m+7] $ \\ \hline $P_{3,3} $&=&$ [20m+17,21m+17] \cup [11m+10,12m+9] $ \\ \hline $P_{3,4} $&=&$ [10m+9,11m+8] \cup [45m+34,46m+34] \cup \{18m+12,28m+21,0\}$ \\ \hline \end{tabular}\end{center} \noindent So $P_3 = \{0\} \cup [7m+7,8m+7] \cup [10m+9,11m+8] \cup [11m+10,12m+9] \cup \{18m+12\} \cup [20m+17,21m+17] \cup [24m+21,25m+20] \cup \{28m+21\} \cup [36m+31,37m+30] \cup [45m+34,46m+34] \cup [47m+37,48m+36]. $ \bigskip \noindent If $\bm{n \equiv 7 \pmod 8, n \geq 7}$: Now let $m = \frac{n-7}{8}$. All the arithmetic in this case will be in ${\mathbb Z}_{48m+43}$.
The first seven columns are: {\small $$A = \begin{bmatrix} 24m + 21 & 16m + 15 & 4m + 3 & -4m - 4 & -20m - 18 & -12m - 11 & -8m - 6 \\ 2 & -8m - 8 & -12m - 12 & 14m + 14 & 1 & 20m + 16 & -14m - 13 \\ 24m + 20 & -8m - 7 & 8m + 9 & -10m - 10 & 20m + 17 & -8m - 5 & 22m + 19 \end{bmatrix}.$$} For each $0 \leq r \leq 2m-1$ define {\small $$A_r = (-1)^r \begin{bmatrix} (-16m + r - 14) & (-8m + 2r - 3) & (-18m - r - 16) & (4m - 2r + 1) \\ (8m + r + 10) & (-16m - r - 16) & (22m - r + 18) & (10m + r + 11) \\ (8m - 2r + 4) & (24m - r + 19) & (-4m + 2r - 2) & (-14m + r - 12) \end{bmatrix}.$$} To construct $H$, begin with $A$ and add on the remaining $n-7$ columns by concatenating the $A_r$ arrays for each value of $r$ between $0$ and $2m-1$. Let the reordering be \begin{center} $R = (10,14,...,n-1;2,8,12,...,n-3;6,11,15,...,n;7,9,13,...,n-2,4,3,1,5).$ \end{center} Then \begin{center} \begin{tabular}{|rcl|} \hline $P_{1,1} $&=&$ [1,m] \cup [29m+28,30m+27] $\\ \hline $P_{1,2} $&=&$ [m+1,2m] \cup [16m+15,17m+15]$ \\ \hline $P_{1,3} $&=&$ [6m+7,8m+5]_2 \cup [4m+4,6m+4]_2 $ \\ \hline $P_{1,4} $&=&$ [38m+38,40m+36]_2 \cup [44m+41,46m+41]_2 \cup \{40m+37,44m+40,20m+18,0\}$ \\ \hline \end{tabular}\end{center} \noindent Thus $P_1 = \{0\} \cup [1,m] \cup [m+1,2m] \cup [4m+4,6m+4]_2 \cup [6m+7,8m+5]_2 \cup [16m+15,17m+15] \cup \{20m+18\} \cup [29m+28,30m+27] \cup [38m+38,40m+36]_2 \cup \{40m+37,44m+40\} \cup [44m+41,46m+41]_2. 
$ \begin{center} \begin{tabular}{|rcl|} \hline $P_{2,1} $&=&$ [1,m] \cup [21m+19,22m+18]$ \\ \hline $P_{2,2} $&=&$ [m+2,2m+1] \cup [40m+35,41m+35]$ \\ \hline $P_{2,3} $&=&$ [11m+8,12m+8] \cup [22m+19,23m+18] $\\ \hline $P_{2,4} $&=&$ [45m+38,46m+38] \cup [28m+23,29m+22] \cup \{12m+9,48m+40,48m+42,0\}$ \\ \hline \end{tabular}\end{center} \noindent Thus $P_2 = \{0\} \cup [1,m] \cup [m+2,2m+1] \cup [11m+8,12m+8] \cup \{12m+9\} \cup [21m+19,22m+18] \cup [22m+19,23m+18] \cup [28m+23,29m+22] \cup [40m+35,41m+35] \cup [45m+38,46m+38] \cup \{48m+40,48m+42\}.$ \begin{center} \begin{tabular}{|rcl|} \hline $P_{3,1} $&=&$ [46m+43,48m+41]_2 \cup [44m+41,46m+39]_2 $ \\ \hline $P_{3,2} $&=&$ [44m+42,46m+40]_2 \cup [38m+36,40m+36]_2 $ \\ \hline $P_{3,3} $&=&$ [18m+19,19m+18] \cup [31m+31,32m+31] $ \\ \hline $P_{3,4} $&=&$ [5m+7,6m+7] \cup [28m+27,29m+26] \cup \{44m+40,4m+6,28m+26,0\}$ \\ \hline \end{tabular}\end{center} \noindent So $P_3 = \{0\} \cup \{4m+6\} \cup [5m+7,6m+7] \cup [18m+19,19m+18] \cup \{28m+26\} \cup [28m+27,29m+26] \cup [31m+31,32m+31] \cup [38m+36,40m+36]_2 \cup \{44m+40\} \cup [44m+41,46m+39]_2 \cup [44m+42,46m+40]_2 \cup[46m+43,48m+41]_2. $ \end{proof} Now that we have established that for every $n\geq 3$ there exist simple row and column orderings for each Heffter array $H(3,n)$ from \cite{Ain}, we can prove the main result of this paper. \begin{thm} For every $n \geq 3$, there exists an orientable biembedding of $K_{6n+1}$ such that every edge is on a 3-cycle and a simple $n$-cycle, or equivalently, for every $n \geq 3$, there exists an orientable biembedding of a Steiner triple system and a simple $n$-cycle system, both on $6n+1$ points. Furthermore, each of the two cycle systems is cyclic modulo $6n+1$. \end{thm} \begin{proof} By Theorem \ref{reord2}, given any $n \in \mathbb Z$ with $n \geq 3$, there exists a $3 \times n$ simple Heffter array.
By Corollary \ref{heffter-biembed.3xn} it follows that there exists an embedding of $K_{6n+1}$ on an orientable surface such that every edge is on a simple cycle face of size $3$ and a simple cycle face of size $n$. From Proposition \ref{prop2.1} we have that each cycle system is cyclic. \end{proof} It is interesting to note on which orientable surface we are biembedding. Euler's formula, $V - E + F = 2 - 2g$, can be used to determine the genus of the surface. It is easy to compute that for $K_{6n+1}$: the number of vertices is $V = 6n + 1$, the number of edges is $E = {6n+1 \choose 2}$, and the number of faces is $F = {6n+1 \choose 2} (1/3 + 1/n)$. Substituting these values into Euler's formula, we get the following proposition. \begin{prop} \label{euler} For $n \geq 3$, $K_{6n+1}$ can be biembedded such that every edge is on an $n$-cycle and a 3-cycle on the orientable surface with genus $$g = 1 - 1/2\Big[6n + 1 + {6n+1 \choose 2}(1/3 + 1/n -1)\Big].$$ \end{prop} \begin{example} Letting $n = 5$ in Proposition \ref{euler} above, we get $$g = 1 - 1/2 \Big[31 + {31 \choose 2 }(1/3 + 1/5 -1) \Big] = 1 - 1/2(31 + 465(-7/15)) = 94.$$ So $K_{31}$ can be embedded on an orientable surface with genus 94 such that every edge is on both a 3-cycle and a 5-cycle. \end{example} \section{$5 \times n$ Heffter Arrays} \label{section4} An obvious continuation of the $3 \times n$ result is to ask whether we can use $5 \times n$ Heffter arrays to biembed $K_{10n + 1}$ such that every edge is on both a $5$-cycle and an $n$-cycle. Via Theorem \ref{heffter-biembed.thm}, since $5$ is odd, we have the following corollary. \begin{corollary}\label{5xn} If there exists a simple Heffter array $H(5,n)$, then there exists an orientable embedding of $K_{10n+1}$ such that every edge is on a simple cycle face of size $5$ and a simple cycle face of size $n$. \end{corollary} As was done in the case of the $H(3,n)$, we start with an $H(5,n)$ and rearrange it so that the resulting $H(5,n)$ is simple.
All of the necessary $H(5,n)$ exist via the following theorem. \begin{thm}\label{5xn,exist}\cite{Ain} There exists a $5 \times n$ Heffter array for every $n \geq 3$. \end{thm} Considering the $H(5,n)$ from \cite{Ain}, one can easily verify that the columns are simple in the standard ordering and so again we only need to reorder the rows. However, unlike the $3 \times n$ case, we were unable to do this in general. In order to obtain a partial result we found reorderings for $3\leq n \leq 100$ using a computer (this was not difficult). We again use a single permutation for every row, which in fact reorders the Heffter array by permuting the columns as units. We do not list the computed permutations here; the interested reader can find them in Appendix A of \cite{M15}. \begin{prop}\label{5xn,100} \cite{M15} There exists a simple $5 \times n$ Heffter array for every $3 \leq n \leq 100$. \end{prop} So via Theorem \ref{5xn,exist} and Proposition \ref{5xn,100} we have the main result of this section. \begin{thm} For every $3 \leq n \leq 100$, there exists an orientable biembedding of $K_{10n+1}$ such that every edge is on a simple cycle face of size $5$ and a simple cycle face of size $n$. \end{thm} \section{Conclusion} In this paper we have shown that for every $n \geq 3$, there exists an orientable biembedding of a Steiner triple system and a simple $n$-cycle system, both on $6n+1$ points. This paper is the first to exploit the connection which Archdeacon found between Heffter arrays and biembeddings of the complete graph on a surface to explicitly biembed a class of graphs. We considered only the case of biembeddings arising from the existence of Heffter arrays $H(3,n)$ and $H(5,n)$. In \cite{Ain} Heffter arrays $H(m,n)$ are given for all $m,n \geq 3$. Hence there is certainly an opportunity to use these for the biembedding problem (if one can find simple orderings of the rows and columns of these arrays).
In addition, a more general definition of Heffter arrays from \cite{A14} leads to biembeddings of other complete graphs (in addition to $K_{2mn+1}$) as well as to biembeddings on nonorientable surfaces. More Heffter arrays which could possibly be used to construct biembeddings can be found in the paper \cite{Aincomp}. \bibliographystyle{unsrt}
\section{Introduction} The magneto-electric (ME) effect, i.e., the production of an electric polarization (magnetic moment) by application of a magnetic (electric) field, has been known for a long time. Multiferroics, i.e., materials in which ferromagnetism and ferroelectricity coexist, have also long been known, where spins and electric dipoles order at different temperatures. Quite recently a new class of multiferroics has been found, in which the magnetism and ferroelectricity are strongly coupled; e.g., they order at the same temperature. For this effect, however, the magnetic structure cannot be simple ferromagnetism or a collinear spin state; the (vector) magnetization density must show spatial variation in its direction as, e.g., for a spiral spin state. I refer to the introductory paragraphs of several recent papers for a more detailed history and list of references~\cite{sergienko,mostovoy,katsura,tokura,yamasaki}. Until very recently, the effect has been found in antiferromagnets, e.g., simple spirals, with no net magnetic moment. In~\cite{yamasaki}, however, the magnetic structure of the material studied, CoCr$_2$O$_4$, is approximately a ferrimagnetic spiral (there is a net moment)~\cite{menyuk,lyons,tomiyasu,kaplan}. The work of Mostovoy~\cite{mostovoy} is a phenomenological theory (see also~\cite{lawes}), whereas microscopic models exhibiting the effect are presented in~\cite{sergienko} and \cite{katsura}. Both of the latter involve superexchange (where electron hopping between magnetic ions involves an intervening oxygen ion).
In~\cite{katsura} the situation is considered where a nearest neighbor pair of magnetic ions has an inversion center at the mid-point, so that there is no Dzyaloshinsky-Moriya interaction (DMI)~\cite{dzyaloshinskii,moriya}; the polarization comes from a distortion of electronic density without ionic or atomic displacements; also the $t_{2g}$ orbitals considered on each magnetic site are chosen to diagonalize the spin-orbit coupling (\emph{intra-atomic} SO coupling), partially removing degeneracy of these orbitals. In contrast, the essential mechanism invoked in~\cite{sergienko} depends on the existence of the DMI plus electron-lattice interactions and orbital degeneracy (Jahn-Teller effect), and the polarization results from ionic displacements. It seems sensible to ask, is it necessary to consider such complicated models to obtain an essential understanding of this unusual ME effect? We present here a much simpler model which yields the effect, namely the electric dipole moment for a pair of magnetic sites,~\cite{katsura} \begin{equation} \pi \propto \mathbf{R}\times(\mathbf{S}_a\times\mathbf{S}_b),\label{0} \end{equation} where $\mathbf{R}$ connects the sites a and b, and $\mathbf{S}_a,\mathbf{S}_b$ are the average spins at the sites. It is most closely related to that of~\cite{katsura}, in that it considers the interaction of two magnetic sites with a center of symmetry (so there is no DMI). Also, the source of the spatial variation of the ordered spin density is due to outside effects of spin-spin exchange interactions, as in~\cite{katsura}. The hopping is direct--there is no oxygen, and no ionic displacements or orbital degeneracies. It involves \emph{inter}-atomic spin-orbit coupling, a mechanism different from the model of~\cite{katsura}; it is obviously different from that of~\cite{sergienko}. 
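As a quick orientation, the geometry of the pair moment~(\ref{0}) is easy to check numerically. The sketch below is purely illustrative (the unit proportionality constant and the angle values are our own test choices, not from any cited work): with two unit spins in the x-y plane and the bond $\mathbf{R}$ along x, $\mathbf{S}_a\times\mathbf{S}_b$ is along $\hat z$, so $\pi$ lies along the y axis and vanishes for collinear spins.

```python
# Illustrative sketch of Eq. (1): pi ~ R x (S_a x S_b), with the
# proportionality constant set to 1 and arbitrary test angles.
from math import cos, sin

def cross(u, v):
    # standard 3-vector cross product
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

phi0, Theta = 0.3, 0.8                            # arbitrary test angles
Rvec = (1.0, 0.0, 0.0)                            # bond along x
Sa = (cos(phi0), sin(phi0), 0.0)                  # spin at site a
Sb = (cos(phi0 + Theta), sin(phi0 + Theta), 0.0)  # spin at site b

p = cross(Rvec, cross(Sa, Sb))
# S_a x S_b = z_hat * sin(Theta), so pi = (0, -sin(Theta), 0):
assert p[0] == 0.0 and p[2] == 0.0
assert abs(p[1] + sin(Theta)) < 1e-12
# collinear spins (Theta = 0) give no dipole moment:
assert cross(Rvec, cross(Sa, Sa)) == (0.0, 0.0, 0.0)
```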
We add that these characteristics are more appropriate than the others for CoCr$_2$O$_4$, where there is no orbital degeneracy.\cite{lyons,kaplan} A further motivation is a difficulty with the work of~\cite{katsura}. In the 3d transition metal ions, the spin-orbit coupling $V^{SO}$ is the smallest of the various energies, namely Coulomb interactions, transfer or hopping integrals, and (cubic) crystal field splitting. But in~\cite{katsura}, $V^{SO}$ is taken as essentially infinite: it is diagonalized, along with the crystal field, before the hopping energies are considered. This leads to a doublet ($\Gamma_7$) and a higher energy quartet $(\Gamma_8)$, and the quartet is dropped. This is probably the reason why $\pi$ does not show the expected vanishing when $V^{SO}\rightarrow0$. Our model does not have this difficulty. Our simple model has two atoms or ions, a and b, lying on the x-axis; each has an s-type orbital and 3 p-type orbitals lying an energy $\Delta$ higher. Also, there are two electrons, 1 per site. Later we describe briefly the generalization to the case where each site has 3 $t_{2g}$ electrons, as for Cr$^{3+}$ on an octahedral site, appropriate to the B-sites in XCr$_2$O$_4$, X=Co,Mn. For our simple model we first constrain the 1-electron basis set (including spin) to make the average spins $<\mathbf{S}_a>,<\mathbf{S}_b>$ lie at particular angles in the x-y plane and make an angle $\Theta$ between them. The 1-electron basis states are \begin{eqnarray} s_a&=&u(\mathbf{r}-\mathbf{R}_a)\chi_a, \ \ s_b=u(\mathbf{r}-\mathbf{R}_b)\chi_b \nonumber\\ p_a^\nu&=&\nu v(\mathbf{r}-\mathbf{R}_a)\chi_a,\ p_b^\nu=\nu v(\mathbf{r}-\mathbf{R}_b)\chi_b,\label{1} \end{eqnarray} where the spin states are \begin{eqnarray} \chi_a&=&(\alpha+e^{i\phi_0}\beta)/\sqrt{2} \nonumber\\ \chi_b&=&(\alpha+e^{i\phi_b}\beta)/\sqrt{2}.\label{2} \end{eqnarray} ($\alpha, \beta$ are the usual ``up, down" states along the z-direction.) And we take $\phi_b=\phi_0+\Theta$ as in FIG. 1.
This spin arrangement is chosen for simplicity. \begin{figure}[h] \centering\includegraphics[height=2in]{figforcme-effect.eps} \caption{General spin configuration in the x-y plane.}\label{fig:spinconfiguration} \end{figure} Also, $\nu=x,y,z$, $\mathbf{R}_a$ and $\mathbf{R}_b$ are the locations of atoms $a$ and $b$, and the origin is at the mid-point between the two atoms. Finally, the ``radial" functions $u,v$ and the one-electron potential $V(\mathbf{r})$ are assumed to be invariant under $y\rightarrow -y$ and $z\rightarrow -z$, $V$ also being even in $x$. We can allow hopping from any orbital to any orbital, although, again for simplicity, we assume no intra-site transition matrix elements, and that the intra-orbital Coulomb repulsion is so large as to exclude double occupancy of any orbital. We include also the intra-site inter-orbital (s-p) Coulomb repulsion $U_0$. Finally we include the essential hopping processes $s_a\leftrightarrow p_b^\nu, s_b\leftrightarrow p_a^\nu$ caused by the spin-orbit interaction \begin{equation} V_{SO}=c_0\nabla V\times \mathbf{p}\cdot\mathbf{s},\label{3} \end{equation} with $c_0=\hbar^2/(2m^2c^2), (\mathbf{p},\mathbf{s})$ = (momentum/$\hbar$, spin/$\hbar$). The crucial matrix elements are, e.g., \begin{equation} <p_a^\nu|V_{SO}|s_b>=c_0<\nu v_a|\nabla V\times \mathbf{p}|u_b>\cdot<\chi_a|\mathbf{s}|\chi_b>,\label{4} \end{equation} where we have put $v_a=v(\mathbf{r}-\mathbf{R}_a)$, etc. It is convenient to consider the spatial factor here, \begin{equation} m_j^\nu=<\nu v_a|(\nabla V\times\mathbf{p})_j|u_b>, \end{equation} where $j=x,y,z$. Using the symmetry properties of $u, v, V$ stated above, one can see readily that all the quantities $m^\nu_j$ vanish except $m^y_z$ and $m^z_y$.
The required spin factors in~(\ref{4}) are straightforwardly found to be \begin{eqnarray} <\chi_a|s^z|\chi_b>&=& (1-e^{i\Theta})/4\equiv\xi_z\nonumber\\ <\chi_a|s^y|\chi_b>&=&\frac{i}{4}[e^{-i\phi_0}-e^{i(\phi_0+\Theta)}]\equiv\xi_y.\label{5} \end{eqnarray} We will also need \begin{equation} \eta\equiv<\chi_a|\chi_b>=\frac{1}{2}(1+e^{i\Theta}).\label{6} \end{equation} Consider first the simplest case, $\phi_0=-\Theta/2$, which implies $\xi_y=0$. Then we need only the term $m_z^y$, which is seen to be \begin{equation} m^y_z=i\gamma, \end{equation} where \begin{equation} \gamma=-c_0\int d^3r\ v_a(\mathbf{r}) y \left(\frac{\partial V}{\partial x}\frac{\partial }{\partial y}-\frac{\partial V}{\partial y}\frac{\partial }{\partial x}\right)u_b(\mathbf{r})\label{8} \end{equation} is real; further, there is no symmetry reason for this to vanish. Thus the remaining SO matrix element~(\ref{4}) is \begin{equation} M=i\gamma\xi_z. \end{equation} The basic symmetry of the situation allows the assumption $v_a\leftrightarrow v_b, u_a\leftrightarrow u_b$ under $x\rightarrow -x$. It follows that $m^y_z\rightarrow -m^y_z$ under $x\rightarrow -x$, i.e. under $a\leftrightarrow b$. The above results plus hermiticity of $\nabla V\times\mathbf{p}$ give all the relevant matrix elements. Because only the $p^y$ orbitals are connected to the ground state orbitals $s_a,s_b$, we can drop the other p-states. This leaves four 1-electron states,~(\ref{1}) with $\nu=y$, and therefore six 2-electron states. We write them conveniently in terms of $A_s^\dagger,A_p^\dagger$, which, respectively, create an electron in states $s_a,p_a^y$; similarly for $B_s^\dagger$, etc.: \begin{eqnarray} \Phi_1&=&A_s^\dagger B_s^\dagger|0>,\ \Phi_2=A_s^\dagger B_p^\dagger|0>,\ \Phi_3=A_p^\dagger B_s^\dagger|0>,\nonumber\\ \Phi_4&=&A_p^\dagger B_p^\dagger|0>,\ \Phi_5=A_s^\dagger A_p^\dagger|0>,\ \Phi_6=B_s^\dagger B_p^\dagger|0>.
\end{eqnarray} Our final simplification before writing down the Hamiltonian $H$ is the Hubbard-like assumption where hopping is a 1-electron operator, the essential contribution from the Coulomb terms being the intra-site inter-orbital Coulomb term $U_0$. Thus, in the basis $(\Phi_1,\cdots,\Phi_6)$, \begin{equation} H=\left( \begin{array}{cccccc} 0 & 0 & 0 & 0 &-i\gamma \xi_z^*&-i\gamma \xi_z\\ 0 & \Delta & 0 & 0 & t'& t\\ 0 & 0 & \Delta & 0 &-t& -t'\\ 0 & 0 & 0 & 2\Delta & i\gamma\xi_z^* & i\gamma\xi_z\\ i\gamma\xi_z & t' & -t & -i\gamma\xi_z & \Delta+U_0 & 0\\ i\gamma\xi_z^* & t & -t' & -i\gamma\xi_z^* & 0 & \Delta+U_0\label{11} \end{array}\right) \end{equation} The real quantities $t,t'$ are ordinary (kinetic + Coulomb) hopping terms: $t$ hops $s_a$ to $s_b$, $t'$ hops $p_a$ to $p_b$~\cite{other}. Let us calculate the electric dipole moment in perturbation theory, where all hoppings are small. A non-zero value occurs already in first order, which surprised us: we had expected the dipole moment to come from \emph{intra}-atomic mixing of s-like and p-like orbitals, but this occurs only in higher order. The ground state wave function to 1st order can be picked off from the first column of $H$,~(\ref{11}), using the unperturbed energy $\Delta+U_0$ of the doubly-occupied states $\Phi_5,\Phi_6$: \begin{equation} \Psi_g=\Phi_1-\frac{i\gamma}{\Delta+U_0}(\xi_z\Phi_5+\xi_z^*\Phi_6).\label{12} \end{equation} The dipole moment operator is $\pi=e(\mathbf{r}_1+ \mathbf{r}_2)$. To 1st order, one needs the results \begin{equation} <\Phi_1|\pi|\Phi_5>=-<\Phi_1|\pi|\Phi_6>^*=\hat{y}\tilde{y}<\chi_b|\chi_a>,\label{13} \end{equation} where \begin{equation} \tilde{y}=\int d^3r\ u(\mathbf{r}-\mathbf{R}_a)y^2v(\mathbf{r}-\mathbf{R}_b). \end{equation} Note that $\tilde{y}$ has dimensions of length. The x and z components of $\pi$ vanish by symmetry.
Then, after a bit of arithmetic, we find \begin{equation} <\pi>=<\Psi_g|\pi|\Psi_g>=-\hat{y}\frac{e\gamma \tilde{y}}{U_0+\Delta}\sin\Theta.\label{15} \end{equation} Our spins being in the x-y plane, this result is clearly consistent with~(\ref{0}). It is instructive to consider the electron density, \begin{equation} n(\mathbf{r})=u_a(\mathbf{r})^2+u_b(\mathbf{r})^2-\frac{\gamma\sin\Theta}{2(U_0+\Delta)}\, y\, n_{ov}(\mathbf{r}),\label{17} \end{equation} where the ``overlap density" is \begin{equation} n_{ov}(\mathbf{r})=u_a(\mathbf{r})v_b(\mathbf{r})+u_b(\mathbf{r})v_a(\mathbf{r}).\label{18} \end{equation} Thus the charge density responsible for the dipole moment exists mainly between the two sites. Note that the result $\rightarrow0$ as $V_{SO}\rightarrow0$, as expected. We now generalize to the case where $\mathbf{S}_a$ and $\mathbf{S}_b$ respectively make angles $\phi_0$ and $\phi_0+\Theta$ with the positive x-axis, and remain in the x-y plane (see Fig.~\ref{fig:spinconfiguration}). We will see that the dipole moment is independent of $\phi_0$. But the non-vanishing of $\xi_y$ forces consideration of $p_a^z$ and $p_b^z$, due to~(\ref{4}) and the above statement that $m_y^z\ne0$. Thus the number of necessary 1-electron states is increased from 4 to 6, increasing the number of 2-electron states to 15. Nevertheless, it is again quite simple to write down the ground state to 1st order in the hopping terms, yielding the generalization of~(\ref{12}), which in turn gives the electron density: \begin{equation} n(\mathbf{r})=u_a(\mathbf{r})^2+u_b(\mathbf{r})^2+2\frac{n_{ov}(\mathbf{r})}{U_0+\Delta}[\gamma\, \mathrm{Im}(\xi_z\eta^*)y+\gamma'\,\mathrm{Im}(\xi_y \eta^*)z].\label{23} \end{equation} But from the definitions~(\ref{5}) and~(\ref{6}), \begin{equation} \mathrm{Im}(\xi_y\eta^*)=0. \end{equation} Hence the term $\propto z$ does not enter, so $\gamma'$ is irrelevant, and the result reduces to the previous expression~(\ref{17}).
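Indeed, a one-line computation gives \begin{equation} \xi_z\eta^{*}=\frac{1}{8}\left(1-e^{i\Theta}\right)\left(1+e^{-i\Theta}\right)=\frac{1}{8}\left(e^{-i\Theta}-e^{i\Theta}\right)=-\frac{i}{4}\sin\Theta, \end{equation} so that $\mathrm{Im}(\xi_z\eta^{*})=-\frac{1}{4}\sin\Theta$, and the coefficient of $y\, n_{ov}(\mathbf{r})$ in~(\ref{23}) is $-\gamma\sin\Theta/[2(U_0+\Delta)]$, in agreement with~(\ref{17}).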
To finish this consideration of the 2-site model, we consider the case where the two spins lie in the y-z plane, i.e. perpendicular to $\mathbf{R}$. Then the spin states are \begin{equation} \chi_\mu=\cos(\theta_\mu/2)\alpha+i\sin(\theta_\mu/2)\beta, \ \mu=a,b. \end{equation} In this case one can easily see that $\xi_z,\xi_y, \eta$ are all real, so that~(\ref{23}) says the dipole moment vanishes, again consistent with~(\ref{0}). Now apply these results to a crystal where the spins form a spiral~\cite{yoshimori,villain,kaplan2} \begin{equation} <\mathbf{S}_n>=\frac{1}{2}[\hat{x} \cos(\mathbf{k}\cdot\mathbf{R}_n)+\hat{y} \sin(\mathbf{k}\cdot\mathbf{R}_n)]. \end{equation} For simplicity we consider a cubic crystal and the propagation vector $\mathbf{k}$ along a principal cubic direction, say x. Then $\mathbf{k}\cdot\mathbf{R}_n$ can be written $n\Theta$, where $\Theta$ is the spiral turn angle. By choosing $\phi_0=n\Theta$, one sees from FIG.~\ref{fig:spinconfiguration} that a spiral is generated by increasing $n$ by steps of unity. Then the induced change in electron density in the bond $j,j+1$ is \begin{equation} \delta n(\mathbf{r})_{j,j+1}=-\frac{n_{ov}(\mathbf{r})_{j,j+1}}{2(U_0+\Delta)}\, \gamma\, y \sin\Theta. \end{equation} That is, the dipole moment in each bond is the same (ferroelectric ordering), and the induced density is merely translated from one bond to the next. This way of generating results for a crystal from those of a pair of magnetic atoms follows that of~\cite{katsura}. In summary, the present microscopic model yields an electric dipole moment $\pi$ resulting from canted spins, in the direction of $\mathbf{f}=\mathbf{R}\times(\mathbf{S}_a\times\mathbf{S}_b)$ and with the same dependence, $\sin\Theta$, on the angle $\Theta$ between the spins.
When applied to a crystal with a simple spiral spin state, the $\mathbf{f}$ component of $\pi$ is the same for each bond, yielding a ferroelectric state, just as in the previous closely related theory of Katsura et al.~\cite{katsura}. In contrast, however, our result $\rightarrow0$ as $V_{SO}\rightarrow0$. We mention a very rough estimate of the size of the effect. We assume hydrogen 1s and 2p orbitals, $R=3$~\AA\ (the Cr-Cr distance in CoCr$_2$O$_4$), $U_0+\Delta=2$~eV, and $V$ the potential energy of an electron in the field of 2 protons. We find the coefficient of $\sin\Theta$ in~(\ref{15}) (the dipole moment for one bond) to be $\sim2\times10^{-36}$~C\,m (coulomb-meters). Assuming a simple cubic lattice with spiral propagation along a principal cubic direction, and a volume per site of (3~\AA)$^3$, yields a polarization of $\sim 0.1\ \mu C/m^2$, about an order of magnitude smaller than found~\cite{yamasaki} in CoCr$_2$O$_4$. In view of the crudeness of this estimate, we take this as suggesting that the mechanism is relevant to real materials. Finally, we extended our model to the case where each site is like Cr$^{3+}$ in an octahedral field (B-site), the three 3d-electrons being in the high-spin state $t_{2g}^3$, as in Co and Mn chromite. The p-states are from the 4p shell. Also note that the dominant B-B exchange interaction is direct, the superexchange path being $90^{\circ}$~\cite{menyuk2}, suggesting that our simple model neglecting an intervening oxygen might be somewhat realistic for the B-B pairs in these spinels. Our motivation is to check that our basic mechanism is robust in going to a more realistic model. The model Hilbert space is now much larger. Our basis functions are constructed as follows. The 3 $t_{2g}$ orbitals are $t^\nu(\mathbf{r})=g_\nu(\mathbf{r})u(\mathbf{r}), g_1=xy,g_2=yz, g_3=zx$.
Similarly the p-orbitals are $p^\nu(\mathbf{r})=h_\nu v(\mathbf{r}),h_1=x,h_2=y,h_3=z$; $u(\mathbf{r})$ and $v(\mathbf{r})$ are invariant under inversion, $\mathbf{r}\rightarrow-\mathbf{r}$. The full 1-electron basis states are \begin{eqnarray} T_{\stackrel{a}{b}}^\nu&=&t_{\stackrel{a}{b}}^\nu(\mathbf{r})\chi_{\stackrel{a}{b}}\nonumber\\ P_{\stackrel{a}{b}}^\nu&=&p_{\stackrel{a}{b}}^\nu(\mathbf{r})\chi_{\stackrel{a}{b}}; \end{eqnarray} $t_a^\nu(\mathbf{r})=t^\nu(\mathbf{r}-\mathbf{R}_a)$, etc. We again simplify somewhat by choosing the spin states $\chi_{\stackrel{a}{b}}$ as in~(\ref{2}) with $\phi_0=-\Theta/2$. Let $A_t^{\nu\dagger}, A_p^{\nu\dagger}$ create $T_a^\nu,P_a^\nu$ respectively, and similarly for $B_t^{\nu\dagger},B_p^{\nu\dagger}$. Write these as $C_\gamma^\dagger, \gamma=1,\cdots,12$, with $\gamma=1,\cdots,6$ corresponding to the T-states, $\gamma=7,\cdots,12$ to the P-states. Then the spin-orbit interaction is conveniently written \begin{equation} V_{SO}\equiv c_0\sum_{i=1}^6\nabla_iV\times\mathbf{p}_i\cdot\mathbf{s}_i =\sum_{\gamma,\gamma^\prime}<\gamma|v^{so}(\mathbf{r},s)|\gamma^\prime>C_\gamma^\dagger C_{\gamma^\prime}, \end{equation} where $v^{so}(\mathbf{r},s)=c_0\nabla V\times\mathbf{p}\cdot\mathbf{s}$. Then the 6-electron basis states (single determinants) are \begin{eqnarray} \Phi_1&=&\prod_{\gamma=1}^{6}C_\gamma^\dagger|0>\equiv|0),\nonumber\\ \Phi_{\gamma^{\prime}\gamma}&=&C^\dagger_{\gamma^{\prime}} C_\gamma|0),\ \gamma\le6,\ \gamma^{\prime}>6. \end{eqnarray} For the hole vacuum $|0)$, the $C_\gamma^\dagger$ are ordered $C_1^\dagger C_2^\dagger \cdots C_6^\dagger$. The ground state to 1st order in the overlap is then \begin{equation} \Psi_g=\Phi_1-\sum_{\gamma\le6,\gamma^{\prime}>6} \frac{\Phi_{\gamma^{\prime}\gamma}<\gamma^{\prime}|v^{so}|\gamma>}{E_{\gamma^{\prime}\gamma}^0-E_1^0}, \end{equation} where the $E^0$ are the unperturbed energies. The terms that contribute to the electric dipole moment are the interatomic elements, e.g.
\begin{eqnarray} <P_a^\nu|v^{so}|T_b^\mu>&=&c_0<p_a^\nu|\nabla V\times\mathbf{p}|t_b^\mu>\cdot <\chi_a|\mathbf{s}|\chi_b>\nonumber\\ &\equiv& i\mathbf{o}_{\nu\mu} \cdot <\chi_a|\mathbf{s}|\chi_b>. \end{eqnarray} Most of the matrix elements of $\mathbf{o}$ vanish by symmetry; noting that the y-component of the spin matrix element, $\xi_y$, vanishes (by the choice of $\phi_0$, as before), we need only the components $o^i, i=x,z$. The only ones of these matrix elements that survive are \begin{eqnarray} io_{11^{\prime}}^z&=&c_0<p_a^1|(\nabla V\times\mathbf{p})_z|t_b^1>\nonumber\\ io_{32^{\prime}}^z&=&c_0<p_a^3|(\nabla V\times\mathbf{p})_z|t_b^2> \end{eqnarray} plus 3 terms for $o^x$. But in calculating $\pi$, these matrix elements get multiplied by corresponding elements of the position operator, as in $\mathbf{r}_{\gamma\gamma^{\prime}}v^{so}_{\gamma^{\prime}\gamma}$, and many of the position matrix elements vanish by symmetry. It turns out that only the $o^z$ terms contribute, and the final result for the dipole moment is \begin{equation} <\pi>=-\frac{e\hat{y}}{U_0+\Delta}\sin\Theta\,(\gamma_1\tilde{y}_1+\gamma_2\tilde{y}_2), \end{equation} where \begin{eqnarray} \gamma_1&=&<p_a^1|o^z|t_b^1>,\ \gamma_2=<p_a^3|o^z|t_b^2>\nonumber\\ \tilde{y}_1&=&\int d^3 r\, p_a^1 y t_b^1=(1/2)\int d^3r\, (x^2-R^2/4)y^2 n_{ov}\nonumber\\ \tilde{y}_2&=&\int d^3 r\, p_a^3 y t_b^2=(1/2)\int d^3r\, y^2z^2n_{ov}. \end{eqnarray} Thus the result is similar to the previous one,~(\ref{15}). The terms $\gamma_i\tilde{y}_i$, $i=1,2$, do not vanish by symmetry. Thus we have confirmed that the basic mechanism, involving inter-atomic SO coupling, gives the interesting ME effect in this rather realistic model. One of us (T.A.K.) thanks C. Piermarocchi, J. Bass, and M. Dykman for helpful discussions. \thebibliography{0} \bibitem{sergienko} I. A. Sergienko and E. Dagotto, Phys. Rev. B \textbf{73}, 094434 (2006). \bibitem{mostovoy} M. Mostovoy, Phys. Rev. Lett. \textbf{96}, 067601 (2006).
\bibitem{katsura} H. Katsura, N. Nagaosa, and A. V. Balatsky, Phys. Rev. Lett. \textbf{95}, 057205 (2005). \bibitem{tokura} Y. Tokura, Science \textbf{312}, 1481 (2006). \bibitem{yamasaki} Y. Yamasaki et al., Phys. Rev. Lett. \textbf{96}, 207204 (2006). \bibitem{menyuk} N. Menyuk, K. Dwight, and A. Wold, J. Phys. (Paris) \textbf{25}, 528 (1964). \bibitem{lyons} D. H. Lyons, T. A. Kaplan, K. Dwight, and N. Menyuk, Phys. Rev. \textbf{126}, 540 (1962). \bibitem{tomiyasu} K. Tomiyasu, J. Fukunaga, and H. Suzuki, Phys. Rev. B \textbf{70}, 214434 (2004). \bibitem{kaplan} For a review see T. A. Kaplan and N. Menyuk, \emph{Spin ordering in 3-dimensional crystals with strong competing exchange interactions}, submitted to Philosophical Magazine. \bibitem{lawes} G. Lawes et al., Phys. Rev. Lett. \textbf{95}, 087205 (2005). \bibitem{dzyaloshinskii} I. Dzyaloshinskii, J. Phys. Chem. Solids \textbf{4}, 241 (1958). \bibitem{moriya} T. Moriya, Phys. Rev. \textbf{120}, 91 (1960). \bibitem{yoshimori} A. Yoshimori, J. Phys. Soc. Japan \textbf{14}, 807 (1959). \bibitem{villain} J. Villain, J. Phys. Chem. Solids \textbf{11}, 303 (1959). \bibitem{kaplan2} T. A. Kaplan, Phys. Rev. \textbf{116}, 888 (1959). \bibitem{menyuk2} N. Menyuk, \emph{Magnetism}, in \emph{Modern Aspects of Solid State Chemistry}, edited by C. N. R. Rao (Plenum Press, N.Y., 1960), p. 159. \bibitem{other} Other ordinary $s_a\rightarrow p_b^\nu$ hopping terms won't contribute to $\pi$ in leading order. \end{document}
\section{Introduction} While fashion is a multi-billion dollar global industry, it comes with severe \textbf{environmental and social costs} worldwide. The fashion industry is considered to be the world's second largest polluter, after oil and gas. Fashion accounts for 20 to 35 percent of microplastic flows into the ocean and outweighs the carbon footprint of international flights and shipping combined \cite{bof-article-2020-the-year-ahead}. Every stage in a garment's life threatens the planet and its resources. For example, it can take more than 20,000 liters of water to produce 1~kg of cotton, equivalent to a single t-shirt and a pair of jeans. Up to 8,000 different chemicals are used to turn raw materials into clothes, including a range of dyeing and finishing processes. There are also social costs, with factory workers being underpaid and exposed to unsafe workplace conditions, particularly when handling materials like cotton and leather that require extensive processing \cite{mckinsey-article-2016-style}. Since fashion is heavily trend-driven and most retailers operate by season (for example, spring/summer, autumn/winter, holiday etc.), at the end of each season any unsold inventory is generally liquidated. While smaller retailers generally move the merchandise to second-hand shops, large brands resort to recycling or destroying the merchandise. In recent years this has made \textbf{sustainability} a core agenda item for most fashion companies. Increasing pressure from investors, governments and consumer groups is leading companies to adopt sustainable practices to reduce their carbon footprint. Moreover, several companies may have sustainability targets (imposed by government regulations or self-imposed) to honor, which may lead to significant changes in the entire fashion supply/value chain.
Sustainable practices can be adopted at various stages of the fashion value chain and several efforts are underway, including more sustainable farming practices for growing fabric (for example, cotton), material innovation for alternatives (to cotton fabric, leather, dyes etc.), end-to-end transparency/visibility in the entire supply chain, sourcing from sustainable suppliers, better recycling technologies and sustainability indices for measuring the full life-cycle impact of an apparel item. In this paper, our main focus is to address sustainability challenges in the pre-season assortment planning activity in the fashion supply chain. \textbf{Assortment planning} is a common pre-season planning activity done by buyers and merchandisers. Typically a fashion retailer has a large set of products under consideration to be potentially launched for the upcoming season. These could be a combination of \textit{existing products} from earlier seasons along with the \textit{new products} that are designed for the next season. The designers interpret the fashion trends to design and develop a certain number of products for each category, as specified in the option plan. The final products (both existing and new) are presented to the buyer and merchandiser, who then curate/select a subset of them as the assortment for the next season. This selection is typically based on the merchandiser's estimation of how well each product will sell (based on historical sales data and her interpretation of trends). During the initial planning the team works only with the initial designs or sometimes a sample procured from a vendor. Once the assortment has been selected, the buyer then works with the sourcing team and the vendors to procure the products. The choice of the final assortment is a crucial decision since it has a big impact on the sell-through rate, unsold inventory and eventually the revenue for the next season.
In practice, the merchandiser has to actually select a different assortment for each region or store, referred to as \textbf{hyper-local assortment planning}. While a retailer has a large set of products to offer, due to budget and space constraints only a smaller number of products can be stocked at each store. In this context, one of the most crucial planning tasks for most retail merchandisers is to decide the right assortment for a store, that is, what set of products to stock at each store. The current practice for assortment planning is heavily spreadsheet driven and relies on the expertise and intuition of the merchandisers, coupled with trends identified from past sales transaction data. While it is still manageable for a merchandiser to plan an assortment for a single store, this does not scale when a merchandiser has to plan for hundreds of stores. Typically, stores are grouped into store clusters and an assortment is planned for each cluster rather than for each store. A sub-optimal assortment results in excess leftover inventory of unpopular items, increasing the inventory costs, and stock-outs of popular items, resulting in lost demand and unsatisfied customers. With better assortment planning algorithms, retailers are now open to more automated, algorithmic store-level assortments. The task of hyper-local assortment planning is to determine the optimal subset of products (from a larger set of products) to be stocked in each store so that the revenue/profit is maximized under various constraints and, at the same time, the assortment is localized to the preferences of the customers shopping in that store. The notion of a store can be generalized to a location and can potentially include store, region, country, channel, distribution center etc. Existing approaches to assortment planning only maximize the expected revenue under certain store and budget constraints.
Along with the revenue, the choice of the final assortment also has an environmental cost associated with it. The environmental impact of an assortment is eventually the sum of the environmental impacts of each of the products in the assortment. In this paper, we address the notion of \textbf{sustainable assortments} and optimize the assortments under additional sustainability constraints. To achieve this we need a metric to measure the environmental impact of an apparel item. One of the main deciding factors is the fabric or the kind of material used in the apparel. For example, cotton, accounting for about 30 percent of all textile fiber consumption, is usually grown using a lot of water, pesticides, and fertilizer, and making 1 kilogram of fabric generates an average of 23 kilograms of greenhouse gases. In this work we use the \textbf{Higg Material Sustainability Index} (MSI) score, which is the apparel industry's most trusted tool to accurately measure the environmental sustainability impacts of materials \cite{higg-msi}. The Higg MSI score allows us to quantify the effect of using different materials: for example, while cotton fabric has a score of 98, viscose/rayon is a more sustainable fabric with a score of 62. While we demonstrate our algorithms with the Higg MSI score, any other suitable sustainability metric can be incorporated in our framework. While designers and merchandisers strive to make sustainable fabric choices during the design phase, there is always a trade-off involved between sustainable choices and achieving high sell-through rates. Also, the choice will typically be made at an individual product level, and it is hard for the designer or buyer to assess the environmental impact of the assortment as a whole. The trade-off between revenue and environmental impact is balanced through a multi-objective optimization approach that yields a Pareto front of optimal assortments for merchandisers to choose from.
The rest of the paper is organized as follows. In \S~\ref{ref:assortment-planning} we define the problem of hyper-local assortment planning. In \S~\ref{ref:sustainability-scores}, we present the sustainability score calculations. In \S~\ref{ref:sustainable-assortment-planning}, we outline our approach to sustainable assortment planning. In \S~\ref{ref:experiments}, we present experimental results of our approach. \section{Hyper-local assortment planning} \label{ref:assortment-planning} We define a (hyper-local) assortment for a store as a subset of $k$ products carried in the store from the total $n$ (potential) products. The task of \textbf{assortment planning} is to determine the optimal subset of $k$ products to be stocked in each store so that the \textit{assortment is localized to the preferences of the customers shopping in that store}. The optimization is done to maximize sales or gross margin subject to financial (limited budget for each store), store space (limited shelf space for displaying products) and other constraints. Broadly, there are three aspects to assortment planning: (1) the choice of the \textbf{demand model}, (2) \textbf{estimating the parameters} of the chosen demand model and (3) using the demand estimates in an \textbf{assortment optimization} setup. \subsection{Demand Models} The starting point for any assortment planning is an accurate store-level demand forecast for each product the retailer is planning to introduce this season. The demand for a product depends on the assortment present in the store when the purchase is made. Several models have been proposed in the literature to model this demand. The forecast demand is then used in a suitable stochastic optimization algorithm to do the assortment planning and refinement.
Given a set of $n$ substitutable products $\mathcal{N} = \{1,2,...,n\}$ and $m$ stores $\mathcal{S} = \{1,2,...,m\}$, let $d_{js}(\mathbf{q}_s)$ be the \textbf{demand} for product $j \in \mathcal{N}$ at store $s \in \mathcal{S}$ when the assortment offered at the store is $\mathbf{q}_s \subset \mathcal{N}$. An alternate construct is to specify a customer \textbf{choice} model $p_{js}(\mathbf{q}_s)$, which is the probability that a random customer chooses/prefers product $j$ at store $s$ over the other products in the assortment offered at the store. \textbf{Independent demand model} The simplest approach is to assume product demand to be independent of the offer set or the assortment, that is, the demand for a product does not depend on the other available products. This model can therefore be specified by a discrete probability distribution over the products. \begin{equation} p_{js}(\mathbf{q}_s) = \mu_{js} \quad \text{if } j \in \mathbf{q}_s, \quad \text{such that } \sum_{j \in \mathcal{N}} \mu_{js} = 1 \end{equation} This is the simplest demand model, traditionally used in retail operations, and assumes no substitution behavior. In practice, the demand for a product is heavily influenced by the assortment under offer, mainly due to \textbf{product substitution} (cannibalization) and \textbf{product complementarity} (halo effect). The literature here is mainly focused on various parametric and non-parametric discrete choice models to capture product substitution, including the multinomial logit and its variants \cite{kok-2008}, the exponomial discrete choice model \cite{alptekinoglu-2016}, deep neural choice models \cite{otsuka-2016,mottini-2017} and non-parametric rank-list models \cite{farias-2017}. Since the main focus of this paper is the notion of sustainability in assortments, for ease of exposition we work with this simple independent demand model and ignore the effects of substitution.
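Because the independent demand model makes the demand for each product insensitive to the rest of the offer set, the expected revenue of a candidate assortment is simply additive over its products. A minimal sketch (the function name and the data below are illustrative, not from an actual retail system):

```python
# Expected revenue of a candidate assortment under the independent
# demand model: the demand d_js for product j does not depend on the
# rest of the offer set q_s, so revenue is simply additive.

def expected_revenue(assortment, revenue, demand):
    """Sum of pi_js * d_js over the products stocked in the assortment."""
    return sum(revenue[j] * demand[j] for j in assortment)

revenue = {0: 20.0, 1: 35.0, 2: 15.0}   # pi_js: revenue per unit sold
demand = {0: 100.0, 1: 40.0, 2: 120.0}  # d_js: forecast demand per product
print(expected_revenue([0, 2], revenue, demand))  # 20*100 + 15*120 = 3800.0
```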
In general, any demand model can be plugged into the optimization framework. \subsection{Estimating demand models} Once an appropriate demand/choice model is chosen, its parameters have to be estimated from historical sales and inventory data. Each demand model comes with its own challenges and computational complexities in estimating the model parameters; common approaches include least squares, standard gradient-based optimization, column generation and EM algorithms to maximize the likelihood. Berbeglia et al. \cite{berbeglia-2018} present a good overview and a comparative empirical study of different choice-based demand models and their parameter estimation algorithms. For the independent demand model, we mainly rely on the historical store-level sales data to get an estimate of $d_{js}$ and multiply it by a suitable scalar to capture the trend increase or decrease for that year. \begin{itemize} \item For existing products that were historically carried at a store, this is essentially the number of units of the product sold in the last season. \item However, in general, not all products are historically carried at all stores. For existing products that were not carried at the store, we use \textbf{matrix factorization} approaches to estimate the demand by modeling the problem as a product $\times$ store matrix and filling in the missing entries via matrix completion. This is described in more detail in Section \nameref{MF}. \item For completely new products without any previous sales history, we use their visual and textual attributes to obtain a multi-modal embedding, based on which we forecast the store-wise potential sales \cite{ekambaram-kdd-2020}. \end{itemize} \subsection{Matrix Factorization} \label{MF} Matrix factorization (MF), popularized in the collaborative filtering and recommender systems literature \cite{koren-2009}, is commonly used to impute missing data.
Let $\mathbf{X}$ be a $\texttt{product} \times \texttt{store}$ matrix of dimension $n \times m$ where each element $X_{ij}$ of the matrix represents the metric (for example, total sales) associated with product $i$ at store $j$. This matrix is sparse, with elements missing for products not carried at a store. MF decomposes this sparse matrix into two lower dimensional matrices $\mathbf{U}$ and $\mathbf{V}$, where $\mathbf{U} \in \mathbb{R}^{n \times D}$ and $\mathbf{V} \in \mathbb{R}^{m \times D}$, such that the rows of $\mathbf{U}$ and $\mathbf{V}$ encapsulate the product and store embeddings of dimension $D$. These $D$-dimensional embeddings (latent vectors), namely $\mathbf{U}_{i}$ and $\mathbf{V}_{j}$, are expected to capture the underlying hidden structure that influences the sales for product $i$ and store $j$ respectively. A common approach towards MF is the Alternating Least Squares (ALS) algorithm; various regularization extensions have also been characterized at length in the literature. In this paper, we adopt the Alternating Least Squares approach and minimize the following loss function: \begin{align}\begin{split} \mathbf{L}(\mathbf{X},\mathbf{U},\mathbf{V}) = \sum_{i,j}c_{ij}(X_{ij} - \mathbf{U}_{i}\mathbf{V}_j^{T} - \beta_{i}-\gamma_{j})^{2} + \lambda(\sum_{i}(\|\mathbf{U}_{i}\|^{2}+\beta_{i}^{2}) + \\ \sum_{j}(\|\mathbf{V}_{j}\|^{2}+\gamma_{j}^{2})) \end{split}\end{align} where $\mathbf{\beta}$ and $\mathbf{\gamma}$ are product and store bias vectors of dimension $n$ and $m$ respectively, and $c_{ij}$ is the confidence weight given to the observed entries. Once the loss function is minimized, we estimate the unseen entries $X_{ij}^{*}$ as follows: \begin{equation} X_{ij}^{*} = \mathbf{U}_{i}\mathbf{V}_j^{T} + \beta_{i} + \gamma_{j} \end{equation} Thus, matrix $\mathbf{X}$, which was initially sparse, is now completely filled and is fed into our assortment planning module.
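The alternating updates can be sketched as follows; for brevity this sketch drops the bias terms $\beta_i,\gamma_j$ and the confidence weights $c_{ij}$ from the loss above, keeping only the factor and ridge terms (all names and the toy matrix are illustrative):

```python
import numpy as np

def als(X, observed, D=2, lam=0.1, iters=50, seed=0):
    """Alternating least squares on the product x store matrix X.

    observed is a boolean mask of the entries with sales data; each pass
    solves a ridge-regularized least-squares problem for the product
    factors U with V fixed, then for the store factors V with U fixed.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    U = rng.normal(scale=0.1, size=(n, D))
    V = rng.normal(scale=0.1, size=(m, D))
    for _ in range(iters):
        for i in range(n):  # update product embeddings
            idx = observed[i]
            A = V[idx].T @ V[idx] + lam * np.eye(D)
            U[i] = np.linalg.solve(A, V[idx].T @ X[i, idx])
        for j in range(m):  # update store embeddings
            idx = observed[:, j]
            A = U[idx].T @ U[idx] + lam * np.eye(D)
            V[j] = np.linalg.solve(A, U[idx].T @ X[idx, j])
    return U, V

X = np.array([[5.0, 3.0, 0.0],
              [4.0, 0.0, 1.0],
              [0.0, 1.0, 5.0]])
U, V = als(X, observed=(X > 0))
X_hat = U @ V.T  # completed matrix; zeros are now imputed demand estimates
```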
\subsection{Assortment optimization} The forecast demand is then used in a suitable stochastic optimization algorithm to do the assortment planning. The task of assortment optimization is to choose an optimal subset of products to maximize the expected revenue \textbf{subject to various constraints}: \begin{equation} \mathbf{q}_s^{*} = \argmax_{\mathbf{q}_s \subset \mathcal{N}} \sum_{j \in \mathbf{q}_s} \pi_{js} d_{js}(\mathbf{q}_s) \end{equation} where $\pi_{js}$ is the expected revenue when product $j$ is sold at store $s$. Some of the commonly used constraints include the following. \textbf{Cardinality constraints} The number of products to be included in an assortment is specified via a coarse range plan (sometimes also called an option plan or buy plan) for a store. The range plan specifies either the count of products or the total budget the retailer is planning for a particular season, at the granularity of category, brand, attributes and price points. \textbf{Diversity constraints} For some domains it is important to ensure that the selected assortment is \textit{diverse}, to offer greater variety to the consumer. Without diversity constraints the assortment tends to prefer products that are similar to each other. The general framework is to define a \textbf{product similarity function}, which measures the similarity between two products, and use it as an additional constraint in the optimization. \textbf{Complementarity constraints} The other important aspect is that a good assortment has products that are frequently bought together. Product complementarity (sometimes referred to as the halo effect) refers to behavior where a customer buys another product (say, a pair of \textit{blue jeans}) that typically goes well with a chosen product (say, a \textit{white top}). In this paper, we mainly focus on cardinality constraints. Our main contribution is to introduce environmental impact as an additional constraint in the assortment optimization problem.
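Under the independent demand model the revenue objective is additive, so with only a cardinality constraint the optimization reduces to ranking products by expected revenue $\pi_{js} d_{js}$ and keeping the top $k$. A sketch with illustrative data (diversity or complementarity constraints would require a more elaborate search):

```python
def optimal_assortment(revenue, demand, k):
    """Top-k products by expected revenue pi_js * d_js -- optimal for the
    additive (independent-demand) objective under a cardinality constraint."""
    score = {j: revenue[j] * demand[j] for j in revenue}
    return sorted(score, key=score.get, reverse=True)[:k]

revenue = {0: 20.0, 1: 35.0, 2: 15.0, 3: 10.0}
demand = {0: 100.0, 1: 40.0, 2: 120.0, 3: 50.0}
# scores: 2000, 1400, 1800, 500 -> the k=2 assortment is products 0 and 2
print(optimal_assortment(revenue, demand, 2))  # [0, 2]
```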
As a result, this helps in making optimal assortment decisions in supply chains while accounting for both the economic and the environmental impact. \section{Sustainability scores} \label{ref:sustainability-scores} We need a metric to measure the environmental impact of an apparel product. One of the main deciding factors is the fabric, or the kind of material used in the product. We calculate the sustainability score of a product using the \textbf{Higg Material Sustainability Index (MSI)} developed by the Sustainable Apparel Coalition \cite{higg-msi}. The Higg MSI quantifies an impact score for each fabric by taking into account the various processes involved in fabric manufacturing, such as raw material procurement, yarn formation, textile formation, dyeing, etc. The Higg MSI measures the impact on climate change, eutrophication, resource depletion, water scarcity, and chemistry; the score for each impact area is calculated, normalized, and then combined in a weighted average. The Higg MSI score allows us to quantify the effect of using different materials; for example, while cotton has a score of 98, viscose/rayon is a more sustainable fabric with a score of 62. The Higg MSI value corresponds to the consolidated environmental impact of 1 kg of a given material, and products made of these materials typically have different weights; we therefore adjust the Higg MSI of a product by its weight. For blended fabrics, we take a weighted average of the Higg MSI of the individual fabrics in the same proportions as they occur in the blend: \begin{equation} h_j = \Big(\sum_{f \in F}H_f \cdot p_f\Big) \times w_j \end{equation} where $p_f$ is the percentage of each fabric $f$ present in the blend $F$, $H_f$ is the Higg MSI of fabric $f$, $w_j$ is the weight of the product in $\texttt{kg}$, and $h_j$ is the sustainability score of product $j$.
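The blending and weight-adjustment rule above can be sketched in a few lines; the per-kg scores below are the two quoted in the text (cotton and viscose), and any other fabric would be looked up in the Higg MSI tables.

```python
def product_higg_score(blend, higg_msi, weight_kg):
    """Weight-adjusted Higg MSI score h_j for one product.

    blend:     {fabric: fraction}, fractions summing to 1
    higg_msi:  {fabric: per-kg Higg MSI impact score}
    weight_kg: product weight w_j in kg
    """
    assert abs(sum(blend.values()) - 1.0) < 1e-9, "blend must sum to 100%"
    per_kg = sum(higg_msi[f] * p for f, p in blend.items())
    return per_kg * weight_kg

# Per-kg scores quoted in the text; other fabrics would be read from
# the Higg MSI tables.
HIGG = {"cotton": 98.0, "viscose": 62.0}
```

For a 200 g product that is a 60/40 cotton-viscose blend, the score is $(0.6 \cdot 98 + 0.4 \cdot 62) \times 0.2 = 16.72$.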
For a set of $N$ products in an assortment $a_N$, the sustainability score can be calculated as \begin{equation} h_{a_N} = \frac{1}{N} \sum_{j \in a_N} h_j \end{equation} It should be noted that the Higg MSI is a cradle-to-gate index and does not consider downstream processes such as the impact of laundry, wear and tear, etc. \section{Sustainable assortment planning} \label{ref:sustainable-assortment-planning} Once we have the store-wise, product-wise demand/sales forecasts and the Higg MSI score of each product, we can formulate a multi-objective optimization problem in which we select products that have strong sales forecasts and, at the same time, form a sustainable assortment. Moreover, instead of just one solution, we would like to give the user a set of solutions near the Pareto-optimal front, so that the user can visualize and select whichever assortment satisfies her criteria. We solve the following multi-objective problem for each store $s$: \begin{equation} \mathbf{x}_s^{*} = \argmax_{\mathbf{x}_s \in \{0,1\}^{n}, \|\mathbf{x}_s\|_{1} \le k} \frac{(1- \lambda)}{k} \underbrace{\sum_{j \in \mathcal{N}} \pi_{js} d_{js} x_{js}}_{\text{revenue}} - \frac{\lambda}{k}\underbrace{\sum_{j \in \mathcal{N}} h_j x_{js}}_{\text{sustainability}} \end{equation} where $x_{js}$ is a binary variable denoting the presence or absence of product $j$ in the assortment at store $s$, $d_{js}$ is the demand for product $j$ at store $s$, $h_j$ is the weight-adjusted Higg MSI score of that product, and $\lambda$ is a parameter through which the user can specify the relative importance of each objective. In the results section we show the Pareto-optimal frontier obtained by varying $\lambda$. \subsection{Multi-objective optimization} As described in the earlier section, the objective of the assortment planning problem is to determine optimal assortments that have the lowest Higg MSI score (least environmental impact) with minimal impact on sales.
Optimizing these two objectives – maximizing sales and minimizing the Higg MSI score – individually will likely yield fundamentally different assortment solutions, which may lead to superior sales but a high Higg MSI (high environmental impact), or vice versa. To address this trade-off, it is natural to formulate assortment planning as a multi-objective optimization problem that optimizes the sales and the Higg MSI score simultaneously. Multi-objective optimization problems have been formulated and solved in the literature using both classical methods and meta-heuristics \cite{deb-2001}. Of the available methods, we employ the weighted sum method, owing to its simplicity of configuration and use. In this method, the relative importance of the different objectives, represented by multiplicative coefficients of the objective functions, is varied; for each realization of these coefficients, a single-objective optimization problem is solved, yielding an optimal assortment. Solving the single-objective problem for multiple coefficient realizations yields a family of Pareto-optimal assortments that are non-dominated with respect to each other in the objective space of sales and Higg MSI score. The merchandiser can then choose from these optimal assortment solutions, depending on the preferred balance between sales and environmental impact. In the proposed formulation, for a given $\lambda$, we compute a single objective score for each product, namely a weighted combination of the sustainability and quality (revenue) scores, and choose the top $k$ products as the assortment. \section{Experimental Validation} \label{ref:experiments} The main goal of our experimental validation is to visualize the effect of including sustainability in the assortment.
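For a fixed $\lambda$, the weighted-sum selection described above reduces to ranking products by a single score and taking the top $k$. A minimal sketch follows; here `revenue` stands in for the precomputed $\pi_{js} d_{js}$ values at one store, and both score vectors are assumed to be on comparable scales.

```python
import numpy as np

def assortment(revenue, higg, k, lam):
    """Top-k products under the weighted-sum score
    (1 - lam) * revenue_j - lam * h_j.  The common 1/k factor in the
    objective does not affect the ranking, so it is dropped here."""
    score = (1.0 - lam) * revenue - lam * higg
    return np.argsort(score)[::-1][:k]

def pareto_sweep(revenue, higg, k, lams):
    """Mean revenue and mean Higg MSI of the chosen assortment for
    each lambda; sweeping lams from 0 to 1 traces out the front."""
    front = []
    for lam in lams:
        sel = assortment(revenue, higg, k, lam)
        front.append((revenue[sel].mean(), higg[sel].mean()))
    return front
```

At $\lambda=0$ this returns the pure revenue-maximizing assortment; at $\lambda=1$, the most sustainable one; intermediate values trade the two off.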
We used a dataset obtained from a leading fashion retailer, consisting of 3484 products and their sales over the Spring-Summer 2018 season. The product weight and fabric composition were used to calculate the Higg MSI score of each product. We analyzed the \textit{upper} category, which mainly consists of \textit{t-shirts, shirts, tops} (1600 products in total). We calculated the Higg MSI score and the quality score (sales forecast) using the methods outlined in the previous sections. \subsection{Sustainability and Quality Distribution} Before planning assortments, we visualized the distributions of the sustainability and quality scores of the products (Figures \ref{fig:higg-dist}, \ref{fig:quality-dist}). The plots show that the quality scores are evenly distributed, whereas the Higg MSI scores exhibit three peaks. On investigating further, we found that these peaks correspond to products whose fabric composition is 100\% cotton, 100\% viscose, or 100\% polyester, where cotton has the highest Higg MSI (least sustainable) and polyester the lowest Higg MSI (most sustainable). \begin{figure} \includegraphics[width=0.4\textwidth]{images/histogram-sust.png} \caption{Distribution of product Higg MSI scores.} \label{fig:higg-dist} \end{figure} \begin{figure} \includegraphics[width=0.4\textwidth]{images/histogram-quality.png} \caption{Distribution of product quality scores.} \label{fig:quality-dist} \end{figure} \subsection{Pareto Front for Assortment Optimization} We ran our optimization algorithm for multiple assortment sizes and plotted the Pareto-optimal front by varying $\lambda$, the relative importance weight given to sustainability versus revenue (quality), from 0 to 1.
The resulting fronts are shown in Figure \ref{fig:pareto}. \begin{figure}[] \centering \subfigure[Assortment size 1: All products are plotted.]{\includegraphics[scale=0.3]{images/size-1-allproducts.png}} \subfigure[Assortment size 10: All points on the Pareto front are plotted, along with 2000 randomly chosen assortments.]{\includegraphics[scale=0.3]{images/sum-size-10.png}} \caption{Pareto-optimal fronts for varying assortment sizes.} \label{fig:pareto} \end{figure} In the plots, the blue curve corresponds to the Pareto-optimal frontier. As the assortment size increases, the Pareto frontier and the assortment cluster shrink \textit{relative to the frontier}: as the scores of more products are aggregated, the consolidated scores move closer to their mean. Also, the three horizontal clusters in the assortment-size-1 plot are consistent with our observation that the Higg MSI score distribution contains three peaks, corresponding to 100\% cotton, 100\% viscose, and 100\% polyester products, respectively. \subsection{Fabric composition variation} We further investigated and visualized the assortment compositions for three points on the Pareto-optimal frontier for assortment size 100, corresponding to $\lambda = 0.0, 0.5, 1.0$; the resulting distributions are plotted in Figure \ref{fig:fabric-comp}.
\begin{figure}[!t] \centering \subfigure[Pareto optimal for $\lambda = 0.0$]{\includegraphics[scale=0.2]{images/lambda0_size100.png}} \subfigure[Pareto optimal for $\lambda = 0.5$]{\includegraphics[scale=0.2]{images/lambdamid_size100.png}} \subfigure[Pareto optimal for $\lambda = 1.0$]{\includegraphics[scale=0.2]{images/lambdamax_size100.png}} \caption{Fabric composition of the extreme and middle points on the Pareto-optimal frontier for assortment size 100.} \label{fig:fabric-comp} \end{figure} We can see that for $\lambda=1.0$ (maximum importance to sustainability), the assortment consists mostly of polyester products, since polyester has the lowest Higg MSI and is therefore the most sustainable fabric. For $\lambda=0.0$ and $\lambda=0.5$, viscose dominates, since its Higg MSI is lower than that of cotton and it also has the best quality scores. \section{Conclusions and Future Work} In this work, we have proposed a method of assortment planning that jointly optimizes the environmental impact of an assortment and its revenue. We formulated the problem as a multi-objective optimization problem whose optimal solutions lie on the Pareto-optimal front. The proposed approach allows retailers to meet their sustainability targets with minimal impact on revenue. In future work, we would like to consider cannibalization and halo effects in demand modeling, as well as diversity and complementarity of products in the optimization formulation. Another extension would be to use a cradle-to-grave sustainability metric for assortment planning. \bibliographystyle{ACM-Reference-Format}
\section{\label{sec:Intro}Introduction} \textcolor{black}{Epidemic models are useful for understanding the general dynamics of infectious diseases, rumors, election outcomes, fads, and computer viruses\cite{keeling:infectious_diseases,AnderssonBook,RevModPhys.87.925,rodrigues2016application,billings2002unified,RevModPhys.80.1275,hindes2019degree,10.1137/19M1306658}. Moreover, in the early days of emerging disease outbreaks, such as the current COVID-19 pandemic, societies rely on epidemic models for disease forecasting, as well as for identifying the most effective control strategies\cite{ModelingCOVID-19,Ray2020.08.19.20177493,PhysRevE.103.L030301,Catching2020.08.12.20173047}}. To this end, it is useful to quantify the risks of local epidemic outbreaks of various sizes. Within a given population, outbreak dynamics are typically described in terms of compartmental models\cite{keeling:infectious_diseases,MathematicalepidemiologyPastPresenFuture,rodrigues2016application}. For example, starting from some seed infection, over time individuals in a population make transitions between some number of discrete disease states (susceptible, exposed, infectious, etc.) based on prescribed probabilities for a particular disease\cite{ModelingCOVID-19,Ray2020.08.19.20177493,Catching2020.08.12.20173047,doi:10.1146/annurev-statistics-061120-034438,SIRSi,miller2019distribution}. In the limit of infinite populations, the stochastic dynamics approach deterministic (mean-field) differential equations for the expected fraction of a population in each state\cite{keeling:infectious_diseases,MathematicalepidemiologyPastPresenFuture,rodrigues2016application,StochasticEpidemicModels}. Yet for real finite populations, outbreak dynamics have a wide range of different outcomes for each initial condition, which are not predicted by mean-field models.
\textcolor{black}{A natural and canonical question (for both statistical physics and population dynamics) is, what is the distribution of outbreak sizes?} Besides stochastic simulations\cite{doi:10.1146/annurev-statistics-061120-034438,keeling:infectious_diseases,StochasticEpidemicModels,doi:10.1098/rspa.2012.0436}, methods exist, e.g., for recursively computing the outbreak statistics\cite{ball_1986,ball_clancy_1993,miller2019distribution}, solving the master equation for the stochastic dynamics directly by numerical linear algebra\cite{doi:10.1098/rspa.2012.0436}, or deriving scaling laws for small outbreaks near threshold\cite{PhysRevE.69.050901,Scaling,PhysRevE.89.042108}. Yet, in addition to being numerically unstable for large populations, computationally expensive, or limited in scope, such methods also fail to provide physical and analytical insights into how unusual and extreme outbreaks occur. Here we develop an analytical approach based on WKB methods\cite{Assaf_2017,Dykman1994,Meerson2010}, which provides a closed-form expression for the asymptotic outbreak distribution in SIR, SEIR, and COVID-19 models with fixed population sizes ($N$) and heterogeneity in infectivity and recovery\cite{Subramaniane2019716118,Covasim,Schwartz2020,Catching2020.08.12.20173047}. We show that each outbreak is described by a unique most-probable path, \textcolor{black}{ and provide an effective picture of how stochasticity is manifested during a given outbreak.} For instance, compared to the expected mean-field dynamics, each outbreak entails a unique depletion or boost in the pool of susceptibles and an increase or decrease in the effective recovery rate, depending on whether the final outbreak is larger or smaller than the mean-field prediction.
Most importantly, unlike usual rare-event predictions for epidemic dynamics, such as extinction or other large fluctuations from an endemic state\cite{Assaf2010,Meerson2010,PhysRevLett.117.028302,Black_2011}, and fade-out\cite{PhysRevE.80.041130}, our results do not rely on metastability \cite{Nasell:Book,SchwartzJRS2011,Dykman1994,PhysRevE.77.061107,doi:10.1137/17M1142028}, and thus are valid for the comparatively short time scales of outbreaks, $\mathcal{O}(\ln{N})$ \cite{TURKYILMAZOGLU2021132902}. In sharp contrast to systems undergoing escape from a metastable state, we show that the outbreak distribution corresponds to an infinite number of distinct paths-- one for every possible extensive outbreak. Each outbreak connects two unique fixed-points in a Hamiltonian system, {\it both} with non-zero probability flux. \textcolor{black}{Hence, by solving a canonical problem in population dynamics and non-equilibrium statistical physics, we uncover a new degenerate class of rare events for discrete-state stochastic systems.} \textit{Baseline model.} We begin with the Susceptible-Infected-Recovered (SIR) model, often used as a baseline model for disease outbreaks\cite{keeling:infectious_diseases,AnderssonBook,MathematicalepidemiologyPastPresenFuture,rodrigues2016application}. Individuals are either susceptible (capable of getting infected), infected, or recovered$/$deceased, \textcolor{black}{and can make transitions between these states through two basic processes: infection and recovery}. Denoting the total number of susceptibles $S$, infecteds $I$, and recovereds $R$ in a population of fixed size $N$, the probability per unit time that the number of susceptibles decreases by one and the number of infecteds increases by one is $\beta SI\!/\!N$, where $\beta$ is the infectious contact rate\cite{keeling:infectious_diseases,AnderssonBook,rodrigues2016application}. 
Similarly, the probability per unit time that the number of infecteds decreases by one is $\gamma I$, where $\gamma$ is the recovery rate\cite{keeling:infectious_diseases,AnderssonBook,rodrigues2016application}. \textcolor{black}{Combining both processes results in a discrete-state system with the following stochastic reactions: \begin{eqnarray} \label{eq:SIRreac1}&(S,I) \rightarrow (S-1,I+1) \;\; \text{with rate }\; \beta S I/N,\\ \label{eq:SIRreac2}&(I,R) \rightarrow (I-1,R+1) \;\; \text{with rate }\; \gamma I. \end{eqnarray}}As $N$ is assumed constant, the model is appropriate for the short time scales of early emergent-disease outbreaks, for example, with an assumed separation between the outbreak dynamics and demographic time scales, as well as re-infection\cite{keeling:infectious_diseases}. From the basic reactions (\ref{eq:SIRreac1}-\ref{eq:SIRreac2}), the master equation describing the probability of having $S$ susceptibles and $I$ infecteds at time $t$ is \begin{align} \label{eq:M} &\frac{\partial P}{\partial t} (S,I,t) = -\frac{\beta SI}{N}P(S,I,t) -\gamma IP(S,I,t) + \\ &\frac{\beta (S\!+\!1)(I\!-\!1)}{N}P(S\!+\!1,I\!-\!1,t) +\gamma(I\!+\!1)P(S,I\!+\!1,t)\nonumber. \end{align} \textcolor{black}{Solving this equation allows one to predict the probability that a particular proportion of a population eventually becomes infected for a given set of parameters. This is our goal here, as in many other works \cite{doi:10.1146/annurev-statistics-061120-034438,keeling:infectious_diseases,StochasticEpidemicModels,doi:10.1098/rspa.2012.0436,ball_1986,ball_clancy_1993,miller2019distribution,PhysRevE.69.050901,Scaling,PhysRevE.89.042108}.} Yet, in full generality such equations cannot be solved analytically, and one must resort to high-dimensional numerics, recursive computations, and$/$or large numbers of simulations\cite{doi:10.1098/rspa.2012.0436}.
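Such simulations can be sketched in a few lines. Since only the final outbreak size is of interest here, the illustrative sketch below (not the code used for the figures) samples the embedded jump chain of reactions (\ref{eq:SIRreac1})-(\ref{eq:SIRreac2}), choosing each event with probability proportional to its rate; skipping the exponential waiting times leaves the final-size distribution unchanged.

```python
import random

def gillespie_sir(N, R0, gamma=1.0, rng=None):
    """One realization of the SIR reactions by Gillespie's direct
    method, started from a single infectious individual; returns the
    final outbreak fraction R/N."""
    rng = rng or random.Random()
    beta = R0 * gamma
    S, I, R = N - 1, 1, 0
    while I > 0:
        a_inf = beta * S * I / N   # infection propensity
        a_rec = gamma * I          # recovery propensity
        if rng.random() < a_inf / (a_inf + a_rec):
            S, I = S - 1, I + 1    # (S, I) -> (S-1, I+1)
        else:
            I, R = I - 1, R + 1    # (I, R) -> (I-1, R+1)
    return R / N
```

With $R_{0}=2.5$, roughly a fraction $1-1/R_{0}\approx0.6$ of runs seeded with one infectious individual take off, and those that do cluster around the mean-field outbreak of about $89\%$.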
Yet, if $N$ is large it is possible to construct an asymptotic solution to Eq.(\ref{eq:M}) for all $\mathcal{O}(N)$ outbreaks using WKB methods\cite{Assaf_2017,Dykman1994,Meerson2010}, as we will show. \textcolor{black}{First, to summarize what is known for large $N$, let us define the fraction of individuals in each disease state $x_{w}\!=\!W/N$ where $W\!\in\!\{S,I,R\}$. Note that as the total population size is constant, $1\!=\!x_{r}\!+\!x_{s}\!+\!x_{i}$. The mean-field limit of the reactions (\ref{eq:SIRreac1}-\ref{eq:SIRreac2}), corresponds to a simple set of differential equations: $\dot{x}_{s}=-\beta x_{i}x_{s}$, $\;\dot{x}_{i}=\beta x_{i}x_{s}-\gamma x_{i}$, and $\dot{x}_{r}=\gamma x_{i}$. Of particular interest is the total fraction of the population infected in the long-time limit, $x_{r}^{*}=x_{r}(t\!\rightarrow\!\infty)$, whose average, $\overline{x_{r}^{*}}$, can be found by integrating the mean-field system. For small initial fractions infected, the solution (according to the mean-field) depends only on the basic reproductive number, $R_{0}\!\equiv\!\beta/\gamma$ \cite{keeling:infectious_diseases,AnderssonBook,MathematicalepidemiologyPastPresenFuture,rodrigues2016application}, and solves the equation $1-\overline{x_{r}^{*}}\!=\!e^{-R_0 \overline{x_{r}^{*}}}$~\cite{keeling:infectious_diseases,HARKO2014184}.} \textcolor{black}{But, what about a half, a fourth, twice, etc. of this expected outbreak, or a case in which the entire population eventually becomes infected? Since the SIR-model is inherently stochastic and governed by Eq.(\ref{eq:M}), such solutions are also possible. To get a sense of how the probabilities for various outbreaks arise, and to guide our analysis, we perform some stochastic simulations, and plot (on a semi-log scale) the fraction of outcomes that result in a given total-fraction infected. Examples are shown in Fig.\ref{fig:EO0} for outbreaks: $100\%$ (blue), $98\%$ (red), and $96\%$ (green) when $R_{0}\!=\!2.5$. 
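The nontrivial root of the mean-field final-size relation $1-\overline{x_{r}^{*}}\!=\!e^{-R_0 \overline{x_{r}^{*}}}$ above has no closed form, but is easily found numerically; a minimal sketch using fixed-point iteration:

```python
import math

def mean_field_outbreak(R0, tol=1e-12, max_iter=10_000):
    """Nontrivial root of 1 - x = exp(-R0 * x) via the fixed-point
    map x <- 1 - exp(-R0 * x); converges for R0 > 1 from x0 = 0.5."""
    x = 0.5
    for _ in range(max_iter):
        x_new = 1.0 - math.exp(-R0 * x)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x_new
```

For $R_{0}=2.5$ this gives $\overline{x_{r}^{*}}\approx 0.893$, the mean-field outbreak used for reference in Fig.\ref{fig:EO0}.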
For reference, the mean-field outbreak of $89\%$ \textcolor{black}{(magenta) is also plotted}. Here and throughout, simulations were performed using Gillespie's direct method\cite{Gillespie2013,keeling:infectious_diseases,doi:10.1146/annurev-statistics-061120-034438} starting from a single infectious individual. Notice that for each outbreak value, $\ln{P}$ is linear in $N$, with a slope that depends on the outbreak, $\ln{P}(x_{r}^{*})\!\simeq\!N\mathcal{S}(x_{r}^{*})$. This asymptotic WKB scaling is consistent with what we expect on general theoretical grounds for large deviations in stochastic population models with a small $\mathcal{O}(1/N)$ noise parameter \cite{Assaf_2017,Dykman1994,Assaf2010,Meerson2010}.} \begin{figure}[h] \center{\includegraphics[scale=0.24]{Fig0_ExtremeOutbreaks.pdf}} \vspace{-2mm}\caption{Extreme outbreak probability scaling with the population size in the SIR model. Plotted is the probability that $100\%$ (blue), $98\%$ (red), $96\%$ (green), \textcolor{black}{and $89\%$ (magenta)} of the population are infected during an outbreak vs $N$. Results from $10^{11}$ simulations (symbols) are compared with theoretical lines whose slopes are given by Eq.(\ref{eq:Action}). Here $R_{0}\!=\!2.5$.} \label{fig:EO0} \end{figure} \textcolor{black}{Equipped with the WKB hypothesis for the distribution of outbreaks, we substitute \textcolor{black}{the ansatz $P(x_{s},x_{i},t)\!\sim\!\exp[-N\mathcal{S}(x_{s},x_{i},t)]$ into Eq.(\ref{eq:M}), and keep leading-order terms in $N\gg 1$}. In particular, we do a Taylor expansion of $P(x_s,x_i,t)$; e.g., $\;P(x_s+ 1/N,x_i- 1/N,t)\simeq e^{-N \mathcal{S}(x_{s},x_{i},t)-\partial \mathcal{S}/\partial x_s+\partial \mathcal{S}/\partial x_i}$. 
This allows finding the leading-order solution~\footnote{\textcolor{black}{Sub-leading order contributions to the probability can be found by continuing the large-N expansion \cite{Assaf_2017,Assaf2010,Meerson2010,Black_2011}}}, called the action, given by $\mathcal{S}(x_s,x_i,t)$\cite{Assaf_2017,Assaf2010,Meerson2010}. Taking the large-$N$ limit in this way converts the master equation (\ref{eq:M}) into a Hamilton-Jacobi equation, $\partial_{t} {\cal S}(x_{s},x_{i},t)+H(x_{s},x_{i},p_{s},p_{i})\!=\!0$ \cite{Assaf_2017,Dykman1994}, with a Hamiltonian given by \begin{equation} H=\beta x_{i}x_{s}\big(e^{p_{i}-p_{s}}-1\big) +\gamma x_{i}\big(e^{-p_{i}}-1\big). \label{eq:H} \end{equation} Here the momenta of the susceptibles and infecteds are respectively defined as $p_s=\partial\mathcal{S}/\partial x_s$ and $p_i=\partial\mathcal{S}/\partial x_i$.} As a consequence, in the limit of $N\gg 1$ the outbreak dynamics satisfy Hamilton's equations: $\dot{x}_{w}\!=\!\partial H/\partial p_w$ and $\dot{p}_{w}\!=\!-\partial H/\partial x_w$, just as in analytical mechanics\cite{Landau1976Mechanics}. Furthermore, solutions are minimum-action\cite{Dykman1994}, or maximum-probability. Namely, given boundary conditions for an outbreak, Hamilton's equations will provide the most-likely dynamics. As in mechanics, once the dynamics are solved, the action ${\cal S}(x_{s},x_{i},t)$ can be calculated along an outbreak path: \begin{equation}\label{action} {\cal S}(x_{s},x_{i},\textcolor{black}{t})=\textcolor{black}{\int_{0}^{t}\!\!\left(p_{s}\dot{x}_{s}+p_{i}\dot{x}_{i}-H\right)dt'} \end{equation} Before continuing our analysis, let us comment on the distribution, $P(x_{s},x_{i},t)$, and explain the sense in which certain outbreaks are extreme. 
As $P(x_{s},x_{i},t)$ scales exponentially with $N$ (for large $N$), if the action ${\cal S}(x_{s},x_{i},t)$ associated with an outbreak differs significantly from $0$, the outbreak will occur with an exponentially small probability, just as we observe in Fig.\ref{fig:EO0}. \textcolor{black}{In fact, the special case of ${\cal S}\!=\!0$ ($p_{i}\!=\!p_{s}\!=\!0$) is nothing other than the aforementioned mean-field prediction, which nicely quantifies why it is the most-likely extensive outbreak.} \textcolor{black}{\textit{Results.} In order to find the probability distribution of outbreaks, we observe that Hamiltonian (\ref{eq:H}) does not depend explicitly on time; that is $H$ evaluated along an outbreak is conserved in time\cite{Landau1976Mechanics}. Now, we substitute $\dot{p}_{i}=-\partial H/\partial x_i$, and write the Hamiltonian for the SIR model in a suggestive form, $H\!=\!-x_{i}\;\dot{p}_{i}$. Thus, if we consider the same large-population limit as the usual mean-field analysis discussed above, and restrict ourselves to outbreaks that start from small infection, e.g., $x_{i}(t\!=\!0)\!=\!1/N$ with $N\!\gg\!1$, it must be that $H\!\simeq\!0$. \textcolor{black}{Notably, because the energy is zero, we can drop the explicit time dependence in Eq.(\ref{action})}. As a result, since the number of infecteds grows and then decreases during the course of an outbreak with $x_{i}(t)\!\neq\!0$ for general $t$, one must have $p_{i}\!=\!$ {\it const.}} \textcolor{black}{At this point, we highlight a crucial difference between our analysis for stochastic outbreak dynamics, and the traditional use of WKB for analyzing large deviations in population models with metastable states. 
In the latter, the traditional $H\!\simeq\!0$ condition of the WKB usually derives from the fact that the model has a locally unique stable fixed-point for the mean-field coordinates, e.g, $\dot{\bold{x}}\!=\!0$ \cite{Assaf2010,Meerson2010,PhysRevLett.117.028302,Black_2011, Nasell:Book,SchwartzJRS2011,Dykman1994,PhysRevE.77.061107,doi:10.1137/17M1142028}. Common examples are stochastic switching and extinction from endemic equilibria. In our case, the zero-energy condition corresponds to a conserved momentum, and in fact, an infinite number of them. The non-zero momentum boundary conditions entailed by the conserved momenta are distinct from other known categories of extreme processes in discrete-state non-equilibrium systems and stochastic populations, and hence we uncover a new {\it degenerate} class.} \textcolor{black}{Now that we know that outbreaks in the SIR model are defined according to a \textcolor{black}{conserved} momenta, i.e., $m\equiv e^{p_i}$, we can equate the Hamiltonian~(\ref{eq:H}) to zero, and find the non-constant fluctuational momentum $p_{s}$, along an outbreak in terms of $x_{s}$, $m$, and $R_{0}$, \begin{equation} p_{s}= \ln\left\{R_{0}x_{s}m^{2}/[m(R_{0}x_{s}+1)-1]\right\}. \label{eq:ps} \end{equation} This momentum is necessary for evaluating Eq.~(\ref{action}). Continuing on toward our main goal of calculating the action, we note that the integral over $p_i$ vanishes, since it is a constant of motion and $x_i(t=0)=x_i(t\to\infty)\simeq 0$. Furthermore, the integral over $H$ also vanishes since $H\simeq 0$. As a result, in order to determine the action, we need to compute the integral over $p_s$ [Eq.~(\ref{eq:ps})] from the initial state $x_s=1$ to the final state $x_{s}(t\!\rightarrow\!\infty)=x_s^*$. The only thing left for us is to express $x_s^*$ in terms of the \textcolor{black}{conserved momentum} $m$. \textcolor{black}{This can be done by using Hamilton's equations, see SM for details}. 
Doing so, we arrive at the total action accumulated in the course of an outbreak \begin{align} \label{eq:Action} &{\cal S}(\textcolor{black}{x_{s}^{*}})=\ln x_s^*+(1-x_s^*)\\ &\times\left[m(1+R_0x_s^*)-1+\ln\left[(m(R_0+1)-1)/(x_s^* m^2 R_0)\right]\right]\nonumber. \end{align} \noindent \textcolor{black}{Note that $\mathcal{S}$ is a function of $x_s^*$ only, since for fixed $R_{0}$ there is a complete mapping between the final outbreak size and $m$ (see SM Eq.(A9) for $x_s^*(m)$).} Equation (\ref{eq:Action}) is our main result: the asymptotic solution of Eq.(\ref{eq:M}) for the distribution of all ${\cal O}(N)$ outbreaks.~\footnote{\textcolor{black}{For brevity we have dropped the dependence on $x_{i}$ in the final-outbreak action Eq.(\ref{eq:Action}), since all final states have the same $x_{i}\!=\!0$}}} \textcolor{black}{Our main result can now be tested in several ways. First, we go back to the motivating Fig.\ref{fig:EO0}. Recall that our approach predicts that, as a function of $N$, the action gives the slope of $\ln{P}(x_{r}^{*})\!\simeq\!N\mathcal{S}(x_{r}^{*})$. As such, we can overlay lines in Fig.\ref{fig:EO0}, where the slopes are predictions from Eq.(\ref{eq:Action}). Doing so for three extreme outbreak values (as well as the mean-field), we observe very good agreement, especially for larger values of $N$. Second, we can fix $N$ and $R_{0}$, and see how well Eq.(\ref{eq:Action}) predicts the full distribution. Such comparisons with stochastic simulations are shown in the upper panel of Fig.\ref{fig:EO1} (a).} In particular, we plot the fraction of $10^{12}$ simulations that resulted in an outbreak $x_{r}^{*}$ in blue, and the solutions of Eq.~(\ref{eq:Action}) with a black line. Again, the agreement between the two is quite good for the population size $N\!=\!2000$ and $R_{0}\!=\!1.7$. Disagreement increases as the outbreak sizes approach $\mathcal{O}(1/N)$. 
\textcolor{black}{Qualitatively, we can see that our theory captures the full cubic structure of the outbreak distribution, with local maxima at the smallest outbreak (here $1/N$) and at the mean-field solution, $\overline{x_r^*}\simeq 0.69$~\cite{Assaf_2017,Assaf2010,SchwartzJRS2011,PhysRevE.80.041130}}. {\color{black} To get more insight into the outbreak distribution, one can use Eq.~(\ref{eq:Action}) to compute the action, e.g., in the vicinity of the mean-field, $\overline{x_r^{*}}$. Locally the distribution is a Gaussian around $\overline{x_r^{*}}$, with a relative variance that takes a minimum at $R_0\simeq 5/3$, for which stochastic deviations from the mean-field outbreak are minimized (see SM for further details on the distribution's unique shape).} \begin{figure}[t] \center{\includegraphics[scale=0.23]{Fig1_ExtremeOutbreaks.pdf}} \vspace{-5mm}\caption{Outbreak distributions. (a) Final outbreak distribution for the SIR model (top) and a higher-dimensional COVID-19 model (bottom). Stochastic simulation results (blue squares) are compared with theory (black lines). Parameters are given in the main text. \textcolor{black}{Despite the varying complexity, the outbreak distributions in both models are captured by the same theory.} (b) \textcolor{black}{Histogram of 2000 stochastic trajectories in the SIR model that result in the same final \textcolor{black}{(non mean-field)} outbreak} $x_{r}^{*}\!=\!0.86$. The Eq.(\ref{eq:xi}) prediction is shown with a blue curve. Parameters are $N\!=\!1000$ and $R_{0}\!=\!1.7$. The colormap for the histogram is on log-scale.} \label{fig:EO1} \end{figure} Before moving to more general outbreak models, we mention a few important qualitative details that emerge from our approach. In particular, let us consider the stochastic dynamics for the fraction of the population infected, $\dot{x}_i=\partial H/\partial p_i$.
Substituting Eq.(\ref{eq:ps}) into $\dot{x}_i$, yields \begin{equation} \label{eq:Infectious} \dot{x}_{i}=\beta x_{i}\Big[(m-1)/(m R_{0}) +x_{s}\Big] -(\gamma/m)x_{i}. \end{equation} First, note that when $m\!=\!1$ ($p_{i}\!=\!p_{s}\!=\!0$), we uncover the mean-field SIR model system, $\dot{x}_{i}=\beta x_{i}x_{s} -\gamma x_{i}$. From the mean-field, we can recover Eq.(\ref{eq:Infectious}) with the suggestive transformations $x_{s}\!\rightarrow\! x_{s} +(m-1)/mR_{0}$, and $\gamma\rightarrow\gamma/m$~\footnote{A similar effect occurs in cell biology in a mRNA-protein genetic circuit, where fluctuations in the mRNA copy number can be effectively accounted for by taking a protein-only model with a modified production rate~\cite{roberts2015dynamics}.}. Recalling that each outbreak is parameterized by a unique constant $m$, evidently the effect of demographic stochasticity is to add an effective constant reduction (or boost) to the pool of susceptibles and to increase (or decrease) the effective recovery rate, depending on whether the final outbreak is smaller ($m<1$) or larger ($m>1$) than the mean-field, respectively. \textcolor{black}{We can test our prediction that a conserved $m$ constrains an entire outbreak path by picking a particular final outbreak size, corresponding to a particular value of $m$, and compare to stochastic trajectories. One method for comparison is to build a histogram in the $(x_{i},x_{s})$ plane from many simulations that end in the same outbreak size, and plot the constant-$m$ prediction. The latter can be found by solving the differential equation $dx_{i}/dx_{s}=\dot{x}_{i}/\dot{x}_{s}$} from Hamilton's equations, or \begin{equation} x_{i}(x_{s},m)= 1-x_{s} +\ln\!\left[\!\frac{m(R_{0}x_{s}+1)-1}{m(R_{0}+1)-1}\!\right]\big/\!R_{0}m. \label{eq:xi} \end{equation} An example is shown in Fig.\ref{fig:EO1} (b) for a final outbreak of $86\%$ when $R_{0}=1.7$ (the mean-field prediction is $69\%$). 
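Equation (\ref{eq:xi}) is straightforward to evaluate; as a sanity check, at $m=1$ it reduces to the familiar mean-field invariant $x_{i}=1-x_{s}+\ln(x_{s})/R_{0}$, and it vanishes at $x_{s}=1$ for any $m$. A minimal sketch:

```python
import math

def outbreak_path(x_s, m, R0):
    """Most-probable infected fraction x_i along an outbreak path
    with conserved momentum m = exp(p_i); x_s runs from 1 down to
    the final value x_s*."""
    return (1.0 - x_s
            + math.log((m * (R0 * x_s + 1.0) - 1.0)
                       / (m * (R0 + 1.0) - 1.0)) / (R0 * m))
```

Evaluating this curve for the $m$ corresponding to a chosen final outbreak reproduces the blue prediction overlaid on the trajectory histogram in Fig.\ref{fig:EO1}(b).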
\textcolor{black}{The color map for the histogram is plotted along with the prediction from Eq.~(\ref{eq:xi}). As expected, the outbreak-path prediction lies in the maximum-density region. Thus, not only does our approach predict outbreak probabilities, but also the optimal dynamics that lead to them, driven by an effective conserved momentum, $m$.} \textcolor{black}{\textit{General model.} We now generalize our results to more complex and realistic outbreak models. Typically, such models derive from the same basic assumptions as SIR, but have more states and free parameters.} \textcolor{black}{For example}, epidemiological predictions for COVID-19 (at a minimum) require an incubation period of around $5$ days, and an asymptomatic disease state, i.e., a group of people capable of spreading the disease without documented symptoms~\cite{Subramaniane2019716118,Covasim,Schwartz2020}. \textcolor{black}{Both features, finite incubation and heterogeneity in infectious states, can form the basis of a more general class of outbreak models~\cite{ModelingCOVID-19,Ray2020.08.19.20177493,Subramaniane2019716118,Covasim,Schwartz2020,Catching2020.08.12.20173047}}. Within this class, we assume that upon infection, susceptible individuals first become exposed (E), and then enter an infectious state at a finite rate $\alpha$. By assumption, there are several possible infectious states (e.g., asymptomatic, mild, severe, tested, quarantined, etc.) that an exposed individual can enter according to prescribed probabilities~\cite{ModelingCOVID-19,Ray2020.08.19.20177493,Subramaniane2019716118,Covasim,Schwartz2020,Catching2020.08.12.20173047}. In addition, infectious states can have their own characteristic infection rates and recovery times.
Putting these ingredients together, let us define $\mathcal{N}$ infectious states, $I_{n}$, where $n\!\in\!\{1,2,...,\mathcal{N}\}$, each with its own infectious contact rate $\beta_{n}$ and recovery rate $\gamma_{n}$, and which appear from the exposed state with probabilities $z_{n}$~\cite{Schwartz2020,Catching2020.08.12.20173047,10.1371/journal.pone.0244706,Subramaniane2019716118,Covasim}. \textcolor{black}{See SM for a list of reactions.} Following the WKB prescription above, the Hamiltonian \textcolor{black}{for our general class of outbreak models} is \begin{eqnarray}\label{eq:H2} H=\sum_{n}&&\Big[\beta_{n} x_{i,n}x_{s}\big(e^{p_{e}-p_{s}}-\!1\big) +\gamma_{n} x_{i,n}\big(e^{-p_{i,n}}-\!1\big) \nonumber \\ &&\;+\,\alpha z_{n}x_{e}\big(e^{p_{i,n}-p_{e}}-\!1\big)\Big]. \end{eqnarray} \textcolor{black}{Despite the increased dimensionality and parameter heterogeneity, the general outbreak system defined through Eq.~(\ref{eq:H2}) can also be solved analytically by precisely the same approach as the baseline SIR model. As in the latter, the essential property that makes the system solvable is the constancy of all momenta except for $p_{s}$. This property ensures that, here again, there is one free constant, $m$, that determines all momenta and the final outbreak size. Demonstrating this requires a few additional steps of algebra, but the result is a simple update to Eq.~(\ref{eq:Action}) that involves a sum over the heterogeneities $\{z_{n},\beta_{n},\gamma_{n}\}$. See SM for the general outbreak solution, Eq.~(A29). An important consequence of the general solution is that, in the special case of the SEIR model~\cite{keeling:infectious_diseases}, where there is only one infectious state, the outbreak action is identical to the SIR model, Eq.~(\ref{eq:Action}).
Namely, finite incubation changes the dynamics of outbreaks, but has only a sub-exponential contribution to their probability.} An example prediction from our general analysis is shown in the lower panel of Fig.\ref{fig:EO1} (a). The analytical solution (black line) is in very good agreement \textcolor{black}{with stochastic simulations of a COVID-19 model with asymptomatic ($n\!=\!1$) and symptomatic ($n\!=\!2$) infectious individuals. The infection parameters\footnote{For COVID-19 modelling, a typical choice for time units would be $t\!=\!1$ corresponding to 10 days.} take realistic heterogeneous values, i.e., $\beta_{1}\!=\!1.8$, $\beta_{2}\!=\!1.12$, $\gamma_{1}\!=\!1$, $\gamma_{2}\!=\!0.8$, $\alpha\!=\!2$, $z_{1}\!=\!0.3$, and $N\!=\!4000$ \cite{Schwartz2020,Catching2020.08.12.20173047,10.1371/journal.pone.0244706,Subramaniane2019716118,Covasim}, where $z_{1}=0.3$ is a typical value for the fraction of asymptomatic infection.} \textcolor{black}{Despite the increased complexity, the distribution in the more general model is also well-captured by our theory.} Before concluding, it is worth mentioning that although in real outbreaks the parameters in Eq.(\ref{eq:H2}) may fluctuate in time, if the fluctuations are fast compared to outbreak time-scales $\mathcal{O}(\ln{N})$ \cite{TURKYILMAZOGLU2021132902}, we expect the distribution to approach the SIR model with effective time-averaged parameters, which can be computed using methods detailed in~\cite{assaf2008population, assaf2013extrinsic}. On the other hand, if the fluctuations are slow with respect to the same time scales, we expect the distribution to be described by integrating over the solution of Eq.(\ref{eq:H2}), with weights given by the probability-density of rates~\cite{assaf2008population, assaf2013extrinsic}. In the intermediate regime, one must solve a Hamiltonian system with increased dimensionality, which includes both demographic noise and environmental variability. 
In this way, our results can provide a basis for understanding even more realistic outbreak dynamics. \textit{Conclusions.} \textcolor{black}{We solved the canonical problem of predicting the outbreak distribution of epidemics in large, fixed-size populations.} Our theory was based on the exponential scaling of the probability of extensive outbreaks on the population size, which allowed the use of a semiclassical approximation. \textcolor{black}{By analyzing SIR, SEIR, and COVID-19 models, we were able to derive simple formulas for the paths and probabilities of all extensive outbreaks, and find an effective picture of how stochasticity is manifested during outbreaks.} Most importantly, we showed that, unlike other well-known examples of rare events in population models, the statistics of extreme outbreaks depend on an infinite number of minimum-action paths satisfying \textcolor{black}{a unique set of boundary conditions with conserved momenta}. Due to their distinct and degenerate phase-space topology, extreme outbreaks represent a new class of rare processes for discrete-state stochastic systems. \textcolor{black}{As with other extreme processes, our solution can form the basis for predictions in many other scenarios, including stochastic outbreaks mediated through complex networks.} JH and IBS were supported by U.S. Naval Research Laboratory funding (N0001419WX00055), and the Office of Naval Research (N0001419WX01166) and (N0001419WX01322). MA was supported by the Israel Science Foundation Grant No. 531/20, and by the Humboldt Research Fellowship for Experienced Researchers of the Alexander von Humboldt Foundation.
\section*{Appendix (Supplementary Material)} \renewcommand{\theequation}{A\arabic{equation}} \renewcommand{\thefigure}{S\arabic{figure}} \setcounter{figure}{0} \setcounter{equation}{0} \section{\label{SM:Sec1} SIR model: WKB approximation and outbreak distribution} In this section we use the WKB approximation to obtain Hamilton's equations governing the (leading order of the) stochastic dynamics in the SIR model. Then we derive the probability distribution of outbreak sizes and the trajectory of the infected along an outbreak. To treat the master equation [Eq.~(3)] via the WKB method we employ the ansatz $P(x_{s},x_{i})\sim e^{-N\mathcal{S}(x_{s},x_{i})}$, where $\mathcal{S}$ is called the action function, and $x_s=S/N$ and $x_i=I/N$ are the fractions of susceptibles and infected, respectively. Substituting the WKB ansatz into Eq.~(3), we Taylor expand the action around $(x_s,x_i)$; i.e., $\;P(x_s+ 1/N,x_i- 1/N)\simeq e^{-N \mathcal{S}(x_{s},x_{i})-\partial \mathcal{S}/\partial x_s+\partial \mathcal{S}/\partial x_i}$ and $\;P(x_s,x_i+ 1/N)\simeq e^{-N \mathcal{S}(x_{s},x_{i})-\partial \mathcal{S}/\partial x_i}$. Doing so, we arrive at a Hamilton-Jacobi equation, $\partial_{t} {\cal S}(x_{s},x_{i})+H(x_{s},x_{i},p_{s},p_{i})\!=\!0$, where the Hamiltonian is given by [Eq.(4)], or \begin{equation} H=\beta x_{i}x_{s}\left(e^{p_{i}-p_{s}}-1\right) +\gamma x_{i}\left(e^{-p_{i}}-1\right). \label{SM:Hamil} \end{equation} Here the momenta of the susceptibles and infected are respectively defined as $p_s=\partial\mathcal{S}/\partial x_s$ and $p_i=\partial\mathcal{S}/\partial x_i$.
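The algebra that follows (solving $H=0$ for $e^{p_{s}}$, Eq.~(\ref{SM:ps})) can be double-checked symbolically; a minimal sketch of our own, using \texttt{sympy}:

```python
import sympy as sp

# Positive symbols: densities, the constant momentum m = e^{p_i}, and R0 = beta/gamma.
x_s, x_i, m, R0, gamma = sp.symbols('x_s x_i m R0 gamma', positive=True)
beta = R0 * gamma

# Candidate zero-energy solution for e^{p_s}, cf. Eq. (SM:ps):
eps = R0 * x_s * m**2 / (m * (R0 * x_s + 1) - 1)

# SIR Hamiltonian with e^{p_i} = m and e^{p_s} = eps substituted:
H = beta * x_i * x_s * (m / eps - 1) + gamma * x_i * (1 / m - 1)

print(sp.simplify(H))  # -> 0, so H vanishes identically along the outbreak path
```

The identity holds for all $x_s$, $x_i$, confirming that $p_i$ can remain constant on the $H=0$ manifold.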
As in classical mechanics, the outbreak dynamics satisfy Hamilton's equations \begin{align} \label{H1}&\dot{x}_s=\frac{\partial H}{\partial p_s}=-\beta x_i x_s e^{p_i-p_s},\\ \label{H2}&\dot{x}_i=\frac{\partial H}{\partial p_i}=\beta x_i x_s e^{p_i-p_s}-\gamma x_i e^{-p_i},\\ \label{H3}&\dot{p}_s=-\frac{\partial H}{\partial x_s}=-\beta x_i (e^{p_i-p_s}-1),\\ \label{H4}&\dot{p}_i=-\frac{\partial H}{\partial x_i}=-\beta x_s (e^{p_i-p_s}-1)-\gamma(e^{-p_i}-1). \end{align} As shown in the main text, since the Hamiltonian does not depend explicitly on time, it is a constant of motion, and furthermore, it can be shown that $H\simeq 0$ when $x_{i}(t=0)\simeq 0$. As a consequence, $p_i$ is a free constant throughout the epidemic outbreak, and we can define the constant $m=e^{p_i}$. Equating the Hamiltonian (\ref{SM:Hamil}) to zero, after some algebra we find: \begin{align} &e^{p_{s}}= R_{0}x_{s}m^{2}/[m(R_{0}x_{s}+1)-1]. \label{SM:ps} \end{align} At this point we can use Hamilton's equations (\ref{H1}-\ref{H2}) to express the final outbreak size in terms of the initial momentum of the infected. Because the total population density is constant, the density of the recovered individuals, $x_r=R/N$, satisfies: $\dot{x}_r=-\dot{x}_s-\dot{x}_i=\gamma x_i/m$. As a result, we can write a differential equation for $dx_s/dx_r$ by dividing Eq.~(\ref{H1}) by $\dot{x}_r$, which yields: \begin{equation} \frac{dx_s}{dx_r}=-R_0 x_s m^2 e^{-p_s}. \end{equation} Substituting $e^{p_s}$ from Eq.~(\ref{SM:ps}) into this equation, and integrating $x_{s}$ from $1$ to $x_{s}^{*}$, and $x_{r}$ from $0$ to $1-x_{s}^{*}$, we find an implicit equation for the final outbreak versus $m$: \begin{equation} e^{R_{0}m(1-x_{s}^{*})}=[m(R_{0}+1)-1]/[m(R_{0}x_{s}^{*}+1)-1].
\label{SM:xs_star} \end{equation} This equation can be solved for $x_s^*$, and the result is \begin{equation}\label{SM:formalsol} x_s^*=\left\{1\!-\!m\!-\!W_0\left[(1\!-\!m(R_0\!+\!1))e^{1-m(R_0\!+\!1)}\right]\right\}/(mR_0), \end{equation} where $W_0(z)$ is the principal branch of the solution for $w$ of $z=we^w$. Thus, for fixed $R_{0}$, we have a complete mapping between the final outbreaks and the free parameter $m$: each outbreak corresponds to a unique value of $m$. In addition to finding the final outbreak size as a function of $m$, we can find the trajectory for the population fraction infected during an outbreak by solving $dx_{i}/dx_{s}$. Using Hamilton's equations (\ref{H1}-\ref{H2}) and Eq.~(\ref{SM:ps}) we find \begin{equation} \frac{dx_i}{dx_s}=-1+\frac{1}{m(R_0x_s+1)-1}. \end{equation} Integrating $x_{i}$ from $0$ to $x_i$, and $x_{s}$ from $1$ to $x_s$, results in Eq.(9) from the main text, or \begin{align} &x_{i}(x_{s},m)= 1-x_{s} +\ln\!\left[\!\frac{m(R_{0}x_{s}+1)-1}{m(R_{0}+1)-1}\!\right]\big/\!R_{0}m. \label{SM:xi_xs} \end{align} Notably, at $m=1$ (the mean-field solution), Eq.~(\ref{SM:xs_star}) becomes $x_s^*=e^{-R_0(1-x_s^*)}$, which yields the well-known mean-field total outbreak size $x_r^*=1-x_s^*=1+W_0\left(-R_0e^{-R_0}\right)/R_0$. In addition, Eq.~(\ref{SM:xi_xs}) becomes $x_i(x_s)=1-x_s+\ln(x_s)/R_0$, and we recover the well-known mean-field result of how the fraction of infected depends on that of the susceptibles. Having found $p_{s}(x_{s})$ and $x_s^*(m)$, we can find the action by integrating $\int p_{s} dx_{s}$ between $x_s(0)=1$ and $x_s^*$. The result is Eq.~(7) from the main text. \section{Generalized SIR model} In this section, we generalize the SIR-model results to a broader class of outbreak models with finite incubation and heterogeneity in infectious states.
First, let us list the possible reactions in the larger class (typical of COVID-19 models) described in the main text: \begin{align} \label{eq:Reactions} &(S,E) \rightarrow (S\!-\!1,E\!+\!1) \;\; \text{with rate }\; S\sum_{n}\beta_{n} I_{n}/N, \\ &(E,I_{n}) \rightarrow (E\!-\!1,I_{n}\!+\!1) \;\; \text{with rate }\; z_{n}\alpha E, \\ &(I_{n},R) \rightarrow (I_{n}\!-\!1,R\!+\!1) \;\; \text{with rate }\; \gamma_{n} I_{n}. \end{align} From these, the Hamiltonian Eq.~(10) directly follows from the WKB limit described in the main text and in SM.\ref{SM:Sec1}: \begin{eqnarray}\label{eq:H2A} H&=&\sum_{n}\beta_{n} x_{i,n}x_{s}\left(e^{p_{e}-p_{s}}-\!1\right) \\ &+&\alpha z_{n}x_{e}\left(e^{p_{i,n}-p_{e}}-\!1\right)+\gamma_{n} x_{i,n}\left(e^{-p_{i,n}}-\!1\right).\nonumber \end{eqnarray} To solve the system Eq.~(\ref{eq:H2A}), let us adopt the convenient notation $m\!=\!e^{p_{e}}$, $m_{s}\!=\!e^{p_{s}}$, and $m_{i,n}\!=\!e^{p_{i,n}}$. As before, we look for solutions with $\dot{p}_{i,n}=-\partial H/\partial x_{i,n}=0$, which implies \begin{equation} \gamma_{n}(1-1/m_{i,n})/\beta_{n}\!=\!x_{s}(m/m_{s}-1). \label{eq:mi} \end{equation} Because the left-hand side is a constant and the right-hand side has no explicit dependence on $(\beta_{n},\gamma_{n})$, we define an outbreak {\it constant}, \begin{equation} \label{eq:C1} C(m)\equiv x_{s}(m/m_{s}-1). \end{equation} Second, substituting Hamilton's equations into Eq.~(\ref{eq:H2A}) shows that $H\!=\!-x_{e}\dot{p}_{e}\!-\!\sum_{n}x_{i,n}\dot{p}_{i,n}$; since $H\!=\!0$ and $\dot{p}_{i,n}\!=\!0$, we have that $\dot{p}_{e}\!=\!0$. From $\dot{p}_{e}=-\partial H/\partial x_{e}=0$, and using $\sum_{n}z_{n}\!=\!1$, we get \begin{equation} m=\sum_{n}z_{n}m_{i,n}. \label{eq:me} \end{equation} Combining Eqs.~(\ref{eq:mi}-\ref{eq:me}), we find that the constant $C(m)$ solves \begin{equation} \label{eq:C} m=\sum_{n}\frac{z_{n}}{1-C(m)\beta_{n}/\gamma_{n}}. \end{equation} So far, we have a free constant $m$, which determines $C$ and $m_{i,n}$.
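Eq.~(\ref{eq:C}) is a one-dimensional root-finding problem for $C(m)$. A small numerical sketch of our own, using the illustrative two-state (COVID-19-like) rates quoted in the main text:

```python
import numpy as np
from scipy.optimize import brentq

# Two infectious states with the heterogeneous rates quoted in the main text.
z = np.array([0.3, 0.7])
beta = np.array([1.8, 1.12])
gamma = np.array([1.0, 0.8])

def solve_C(m):
    """Root of Eq. (eq:C): m = sum_n z_n / (1 - C beta_n/gamma_n), for m > 1."""
    g = lambda C: np.sum(z / (1.0 - C * beta / gamma)) - m
    # For m > 1 the constant C lies between 0 and the pole at min_n(gamma_n/beta_n);
    # outbreaks below the mean field (m < 1) would instead have C < 0.
    return brentq(g, 1e-12, 0.999 * np.min(gamma / beta))

m = 1.2
C = solve_C(m)
m_i = 1.0 / (1.0 - C * beta / gamma)   # momenta m_{i,n} implied by Eq. (eq:mi)
print(np.isclose(np.sum(z * m_i), m))  # Eq. (eq:me) is satisfied -> True
```

The bracketed interval makes the root unique, since the sum is monotonically increasing in $C$ below the pole.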
Next, in order to calculate the general outbreak action, \begin{equation}\label{action2} {\cal S}(\mathbf{x})=\int\!p_{s} dx_{s} +\int\!p_{e} dx_{e} + \sum_{n}\int\!p_{i,n} dx_{i,n} -\int\!H dt, \end{equation} we need to know the upper and lower limits for the integrals in Eq.~(\ref{action2}). As with the SIR model, since all momenta except $p_{s}$ are constant, $H=0$, $x_{e}(t\!=\!0)\!\approx\!0$, $x_{i,n}(t\!=\!0)\!\approx\!0$, $x_{e}(t\!\rightarrow\!\infty)\!\rightarrow\!0$, and $x_{i,n}(t\!\rightarrow\!\infty)\!\rightarrow\!0$, the only non-zero integral comes from $p_{s}$, which depends on $x_{s}(t\!\rightarrow\!\infty)\equiv x_{s}^{*}$. One useful strategy for finding the final fraction of susceptibles $x_{s}^{*}$ is to find relationships between the time-integrals of $x_{e}$, $x_{i,n}$, and $x_{s}$. As in the SIR model, let us start with three of Hamilton's equations determined from Eq.~(\ref{eq:H2A}): \begin{align} \label{eq:Hamilton1} &\dot{x}_{s} = -x_{s}\left(\frac{m}{m_{s}}\right)\sum_{n}\beta_{n}x_{i,n},\\ \label{eq:Hamilton2} &\dot{x}_{i,n} = \alpha x_{e}z_{n}\left(\frac{m_{i,n}}{m}\right)-(\gamma_{n}/m_{i,n})x_{i,n},\\ \label{eq:Hamilton3} &\dot{x}_{r} = \sum_{n}(\gamma_{n}/m_{i,n})x_{i,n}. \end{align} By defining $I_{e}\equiv\int_{0}^{\infty}\!x_{e}(t)dt$ and $I_{i,n}\equiv\int_{0}^{\infty}\!x_{i,n}(t)dt$, we can integrate Eqs.~(\ref{eq:Hamilton2}-\ref{eq:Hamilton3}) with respect to $t$. Remembering that $x_{i,n}(t=0)\!\approx\!0$, $x_{i,n}(t\!\rightarrow\!\infty)\!\rightarrow\!0$, and $x_{r}(t\!\rightarrow\!\infty)\!\rightarrow\!1-x_{s}^{*}$, the result is: \begin{align} \label{eq:I2} &0 = \alpha I_{e}z_{n}\left(\frac{m_{i,n}}{m}\right)-(\gamma_{n}/m_{i,n})I_{i,n}\\ \label{eq:I3} &1-x_{s}^{*} = \sum_{n}(\gamma_{n}/m_{i,n})I_{i,n}.
\end{align} Similarly, separating $t$ and $x_{s}$ in Eq.~(\ref{eq:Hamilton1}) and integrating over all time, we get \begin{align} \label{eq:I1} &\int_{1}^{x_{s}^{*}}\frac{m_{s}(x_{s})dx_{s}}{m\;x_{s}} =-\sum_{n}\beta_{n}I_{i,n}. \end{align} Finally, if we insert $m_{s}\!=\!mx_{s}/[x_{s}+C(m)]$ from Eq.~(\ref{eq:C1}) into Eq.~(\ref{eq:I1}), we can solve Eqs.~(\ref{eq:I2}-\ref{eq:I1}) for $x_{s}^{*}(m,C(m))$: \begin{align} \label{eq:FinalNew} \frac{-\ln\!\left[\frac{x_{s}^{*}+C(m)}{1+C(m)}\right]}{1-x_{s}^{*}}= \sum_{n} \frac{z_{n}}{m}\frac{\beta_{n}}{\gamma_{n}}\Bigg(\frac{\gamma_{n}}{\gamma_{n}-C(m)\beta_{n}}\Bigg)^{2}. \end{align} Using the constants $C(m)$ and $x_{s}^{*}$, the total accumulated action of an outbreak from Eq.~(\ref{action2}) is given by the solvable integral \begin{align} \label{eq:LastIntegral} {\cal S}(m,x_{s}^{*}(m,C(m)))\!=\!\int_{1}^{x_{s}^{*}}\ln[m x_{s}/(x_{s}+C(m))]dx_{s}, \end{align} and can be expressed as a function of the constant momentum $m$, \begin{align} \label{eq:FinalAction} {\cal S}(m,x_{s}^{*}(m,C(m)))= x_{s}^{*}\ln\left[\frac{mx_{s}^{*}}{C(m)+x_{s}^{*}}\right]\nonumber \\ -\ln\left[\frac{m}{C(m)+1}\right] +C(m)\ln\left[\frac{C(m)+1}{C(m)+x_{s}^{*}}\right]. \end{align} All parameters in Eq.~(\ref{eq:FinalAction}) depend on $m$: $C(m)$ through Eq.~(\ref{eq:C}) and $x_{s}^{*}(m,C(m))$ through Eq.~(\ref{eq:FinalNew}). \section{Shape of the outbreak distribution} In this section we explore the unique shape of the outbreak distribution. In terms of analytical scaling, in Eq.~(\ref{SM:formalsol}) we have expressed $x_s^*$ (in the SIR model) as a function of $m$, which allows finding an explicit solution for the outbreak distribution as a function of $x_s^*$. While this gives rise to a cumbersome expression, further analytical progress can be made in the vicinity of $m\simeq 1$, which is a local maximum of the distribution.
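As an independent numerical cross-check (our own sketch), the SIR relation Eq.~(\ref{SM:xs_star}) can be solved directly with a root finder and compared against the Lambert-W mean-field limit:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import lambertw

def xs_star(m, R0):
    """Nontrivial root of Eq. (SM:xs_star); the bracket assumes m >= 1
    (for m < 1 the lower endpoint must sit above the pole of the right-hand side)."""
    f = lambda xs: np.exp(R0 * m * (1 - xs)) \
        - (m * (R0 + 1) - 1) / (m * (R0 * xs + 1) - 1)
    return brentq(f, 1e-6, 1 - 1e-6)

R0 = 1.7
# m = 1 reproduces the mean-field outbreak x_r* = 1 + W0(-R0 exp(-R0))/R0 ~ 0.69:
xr_mf = 1 + lambertw(-R0 * np.exp(-R0)).real / R0
print(np.isclose(1 - xs_star(1.0, R0), xr_mf))  # -> True
# m > 1 parameterizes outbreaks larger than the mean-field one:
print(1 - xs_star(1.2, R0) > xr_mf)             # -> True
```

Working from the implicit relation avoids any algebraic rearrangement, so the check is self-contained.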
Expanding the right-hand side of Eq.~(\ref{SM:formalsol}) around $m=1$, we find $m$ in terms of $x_s^*$, \begin{align} \label{eq:Limit1} m\simeq 1-\frac{R_0(1-R_0 \overline{x_s^*})(x_s^*-\overline{x_s^*})}{(1-\overline{x_s^*})(1+R_0^2 \overline{x_s^*})}, \end{align} with $\overline{x_s^*}\equiv -W_0(-R_0e^{-R_0})/R_0$ being the mean-field solution for $x_s^*$. Indeed, $m$ is close to $1$ in the vicinity of the maximum of the distribution, $x_s^*\simeq \overline{x_s^*}$. Plugging Eq.~(\ref{eq:Limit1}) into the action function Eq.~(7), and expanding up to second order in $x_s^*-\overline{x_s^*}$, we find ${\cal S}(x_s^*)\simeq (1/2){\cal S}''(\;\overline{x_s^*}\;)(x_s^*-\overline{x_s^*}\;)^2$, with \begin{align} \label{eq:Limit2} {\cal S}''(\;\overline{x_s^*}\;)=\frac{(R_0\overline{x_s^*}-1)^2}{(1-\overline{x_s^*}\;)\overline{x_s^*}(R_0^2\overline{x_s^*}+1)}. \end{align} Equation (\ref{eq:Limit2}) means that the distribution in the vicinity of the mean-field outbreak size is a Gaussian with a width of $\sigma=(N {\cal S}''(\;\overline{x_s^*}\;))^{-1/2}$. Notably, close to the bifurcation, $R_0-1\ll 1$, $\overline{x_s^*}\simeq 1-2(R_0-1)$, and the width simplifies to $\sigma\simeq 2/\sqrt{N(R_0-1)}$, whereas for $R_0\gg 1$, $\overline{x_s^*}\simeq e^{-R_0}$ and $\sigma\simeq N^{-1/2}e^{-R_0/2}$. These calculations lead to a very interesting result: the coefficient of variation (COV), $COV=\sigma/\;\overline{x_s^*}$, attains a minimum at $R_0\simeq 1.66$. That is, the deviation from the mean-field outbreak size is minimized at $R_0\simeq 5/3$, whereas at $R_0\to 1$ or $R_0\gg 1$, the COV diverges; see Fig.~\ref{fig:EO2}(a). \begin{figure}[t] \center{\includegraphics[scale=0.18]{Fig2_ExtremeOutbreaks.pdf}} \caption{Shape of the outbreak distribution. (a) Ratio of the distribution standard deviation (around the mean field) to its mean vs. $R_0-1$ on a log-log scale.
Symbols are solutions of Eq.~(7) with $N=5000$, while the line is given by $\sigma/\;\overline{x_s^*}$ for the SIR model. (b) The least-likely (blue line) and most-likely (red line) outbreaks in the SEIR model versus $R_{0}\!=\!\beta/\gamma$, computed from Eq.~(7). Squares and triangles represent measured distribution minima and maxima from stochastic simulations. Population sizes were chosen so that $N\!{\cal S}(x_{r}^{*})\!=\!17$. Other parameters are $\gamma\!=\!1$ and $\alpha\!=\!2$.} \label{fig:EO2} \end{figure} Another unique aspect of the outbreak distribution is the least-likely small outbreak, $x_{r}^{\text{min}}$, which lies between the mean-field outbreak and the minimum outbreak $x_{r}^{*}\!=\!0$. The least-likely small outbreak satisfies $\partial {\cal S}/\partial x_{r}^{*}(x_{r}^{\text{min}})=0$, with ${\cal S}$ given by Eq.~(7) of the main text. For outbreaks smaller than the mean field, $x_{r}^{\text{min}}$ can be used to separate outbreaks into increasing or decreasing likelihoods. Usefully, we can track its dependence on parameters and compare to both Monte-Carlo simulations and the mean field. An example is shown in Fig.~\ref{fig:EO2}(b), where we plot $x_{r}^{\text{min}}$ and $\overline{x_r^*}$ versus $R_{0}$. The predicted value of $x_{r}^{\text{min}}$ (from solving Eq.~(7)) is shown with a blue line, which can be compared directly with simulation results shown with blue squares. The latter were determined by first building histograms from $10^{11}$ stochastic simulations in the SEIR model, similar to Fig.~2(a), and then fitting the smallest-probability region below the mean-field value with a quartic polynomial in $x_{r}^{*}$. The polynomial fits were performed on $\log_{10}(P)$. After fitting, the local minimum was extracted for each plotted value of $R_{0}$. The least-likely small outbreaks can also be compared to the mean-field result, $\overline{x_r^*}$, shown in red.
Similar to the blue series, lines are theory predictions and points represent the local maxima of the simulation-based histograms. Note that we predicted that SEIR-model distributions are identical (on a log scale) to SIR-model distributions, as described in the main text. This claim is tested in Fig.~\ref{fig:EO2}(b), since the simulations were performed with the former, while the theory was derived from the latter. As we can see, the two agree very well.
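The COV minimum at $R_0\simeq 5/3$ discussed in this section is straightforward to reproduce numerically; a small sketch of our own, based on Eq.~(\ref{eq:Limit2}):

```python
import numpy as np
from scipy.special import lambertw

def cov_sqrtN(R0):
    """sqrt(N) * COV, with COV = sigma/xbar_s and sigma = (N S'')^{-1/2}, Eq. (eq:Limit2)."""
    xbar = -lambertw(-R0 * np.exp(-R0)).real / R0   # mean-field x_s*
    S2 = (R0 * xbar - 1) ** 2 / ((1 - xbar) * xbar * (R0 ** 2 * xbar + 1))
    return 1.0 / (xbar * np.sqrt(S2))

R0_grid = np.arange(1.05, 4.0, 0.01)
R0_min = R0_grid[np.argmin([cov_sqrtN(r) for r in R0_grid])]
print(R0_min)  # minimum close to R0 ~ 5/3
```

Toward $R_0\to 1$ and for $R_0\gg 1$ the same function grows, reproducing the divergence of the COV at both extremes.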
\section{Introduction} The evolution of the clustering of galaxies as a function of redshift provides a sensitive probe of the underlying cosmology and theories of structure formation. In an ideal world we would measure the spatial correlation function of galaxies as a function of redshift and type and use this to compare with the predictions of different galaxy formation theories. Observationally, however, our ability to efficiently measure galaxy spectra falls rapidly as a function of limiting magnitude and consequently we are limited to deriving spatial statistics from small galaxy samples and at relatively bright magnitude limits (e.g.\ $I_{AB} < 22.5$, Le Fevre et al.\ 1996, Carlberg et al.\ 1997). To increase the size of the galaxy samples and thereby reduce the shot noise the standard approach has been to measure the angular correlation function, i.e.\ the projected spatial correlation function (Brainerd et al.\ 1996, Woods and Fahlman 1997). While this allows us to extend the measure of the clustering of galaxies to fainter magnitude limits ($R<29$, Villumsen et al.\ 1997) it has an associated limitation. For a given magnitude limit the amplitude of the angular correlation function is sensitive to the width of the galaxy redshift distribution, N(z). At faint magnitude limits N(z) is very broad and consequently the clustering signal is diluted due to the large number of randomly projected pairs. In this letter we introduce a new approach for quantifying the evolution of the angular correlation function; we apply photometric redshifts (Connolly et al.\ 1995, Lanzetta et al.\ 1996, Gwyn and Hartwick 1996, Sawicki et al.\ 1997) to isolate particular redshift intervals. In so doing we can remove much of the foreground and background contamination of galaxies and measure an amplified angular clustering. We discuss here the particular application of this technique to the Hubble Deep Field (HDF; Williams et al.\ 1996). 
\section{The Photometric Catalog} From version 2 of the ``drizzled'' HDF images (Fruchter and Hook 1996) we construct a photometric catalog, in the $U_{300}$, $B_{450}$, $V_{606}$ and $I_{814}$ photometric passbands, using the SExtractor image detection and analysis package of Bertin and Arnouts (1996). Object detection was performed on the $I_{814}$ images using a 1 arcsec detection kernel. For those galaxies with $I_{814} < 27$ we measure magnitudes in all four bands using 2 arcsec diameter aperture magnitudes. The final catalog comprises 926 galaxies and covers $\sim 5$ sq arcmin. From these data we construct a photometric redshift catalog. For $I_{814}<24$, we apply the techniques of Connolly et al. (1995, 1997), i.e.\ we calibrate the photometric redshifts using a training set of galaxies with known redshift. For fainter magnitudes ($24< I_{814} < 27$) we estimate the redshifts by fitting empirical spectral energy distributions (Coleman et al.\ 1980) to the observed colors (Gwyn and Hartwick 1996, Lanzetta et al.\ 1996 and Sawicki et al.\ 1997). A comparison between the predicted and observed redshifts shows that the photometric redshift relation has an intrinsic dispersion of $\sigma_z \sim 0.1$. \section{The Angular Correlation Function} We calculate the angular correlation function, $w(\theta)$, using the estimator derived by Landy and Szalay (1993), \begin{equation} w(\theta) = \frac{DD - 2DR + RR}{RR}, \end{equation} where $DD$ and $RR$ are the autocorrelation functions of the data and random points, respectively, and $DR$ is the cross-correlation between the data and random points. In the limit of weak clustering this statistic is the 2-point realization of a more general representation of edge-corrected n-point correlation functions (Szapudi and Szalay 1997). As such it provides an optimal estimator for the HDF where the small field-of-view makes the corrections for the survey geometry significant.
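For concreteness, the Landy--Szalay estimator above can be sketched with brute-force pair counts (an illustrative implementation of our own, not the analysis code used for the HDF catalog; positions and bins are arbitrary):

```python
import numpy as np

def pair_counts(a, b, edges):
    """Histogram of pairwise separations between 2-D point sets a and b."""
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
    if a is b:                              # autocorrelation: count each pair once
        d = d[np.triu_indices(len(a), k=1)]
    return np.histogram(d.ravel(), bins=edges)[0].astype(float)

def w_landy_szalay(data, rand, edges):
    nd, nr = len(data), len(rand)
    dd = pair_counts(data, data, edges) / (nd * (nd - 1) / 2)
    rr = pair_counts(rand, rand, edges) / (nr * (nr - 1) / 2)
    dr = pair_counts(data, rand, edges) / (nd * nr)
    return (dd - 2 * dr + rr) / rr          # Equation (1)

rng = np.random.default_rng(0)
edges = np.logspace(-2.3, -0.3, 8)          # separation bins in a unit square
rand = rng.uniform(size=(1000, 2))
# Two tight clumps: strongly clustered on small scales, so w >> 0 in the first bin.
data = np.vstack([0.3 + 0.01 * rng.normal(size=(50, 2)),
                  0.7 + 0.01 * rng.normal(size=(50, 2))])
w = w_landy_szalay(data, rand, edges)
print(w[0] > 5)  # -> True
```

Normalizing each count by its number of pairs makes the three terms directly comparable, which is what gives the estimator its edge-correction property.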
We calculate $w(\theta)$ between 1 and 220 arcsec with logarithmic binning. In the subsequent analysis, we impose a lower limit of 3 arcsec to remove any artificial correlations due to the possibility that the image analysis routines may decompose a single galaxy image into multiple detections (at $z=1$ this corresponds to 12 h$^{-1}$ kpc for $q_0 = 0.5$). For the random realizations we construct a catalog of 10000 points (approximately 50 times the number of galaxies per redshift interval) with the same geometry as the photometric data. To account for the small angular size of the HDF we apply an integral constraint assuming that the form of $w(\theta)$ is given by a power law with a slope of $-0.8$. Errors are estimated assuming Poisson statistics. The expected uncertainty in each bin is calculated from the number of random pairs (when scaled to the number of data points). Over the range of angles for which we calculate the correlation function, errors derived from Poisson statistics are comparable to those from bootstrap resampling (Villumsen et al.\ 1997). \subsection{The Angular Correlation Function in the HDF} In Figure 1a we show the angular correlation function of the full $I_{814}<27$ galaxy sample (filled triangles). The error bars represent $1\sigma$ errors. The amplitude of the correlation function is comparable to that found by Villumsen et al.\ (1997) for an $R$-selected galaxy sample in the HDF. It is consistent with a positive detection of a correlation signal at the 2$\sigma$ significance level. Superimposed on this figure is the correlation function for those galaxies with $1.0<z<1.2$ (filled squares). By isolating this particular redshift interval, the amplitude of the correlation function is amplified by approximately a factor of ten.
If we parameterize the angular correlation function as a power law with $w(\theta) = A_w \theta^{1-\gamma}$ then, from Limber's equation (Limber 1954), we can estimate how the amplitude, $A_w$, should scale as a function of redshift and width of the redshift distribution, \begin{equation} A_w = \sqrt{\pi} \frac{\Gamma[(\gamma - 1)/2]}{\Gamma[\gamma/2]} r_0^\gamma \frac {\int_0^{\infty} dz N(z)^2 (1+z)^{-(3+\epsilon)} x(z)^{1-\gamma} g(z)} {\left[ \int_0^\infty N(z) dz \right]^{2}} \end{equation} where \begin{equation} x(z) = 2\frac{((\Omega - 2) (\sqrt{1+\Omega z} -1) + \Omega z)} {\Omega^2 (1+z)^2}, \end{equation} is the comoving angular diameter distance, \begin{equation} g(z) = (1+z)^2 \sqrt{1+\Omega z}, \end{equation} $N(z)$ is the redshift distribution and $\epsilon$ represents a parameterization of the evolution of the spatial correlation function (see below). For a normalized Gaussian redshift distribution, centered at $\bar{z}$, with $\bar{z} \gg 0$ and dispersion $\sigma_z$, $A_w$ is proportional to $1/\sigma_z$. Therefore, the amplitude of the angular correlation function should be inversely proportional to the width of the redshift distribution over which it is averaged. In Figure 1, if we assume that the magnitude-limited sample ($I_{814}<27$) has a mean redshift of $z=1.1$ and a dispersion of $\Delta z = 0.5$ (consistent with the derived photometric redshift distribution), then isolating the redshift interval $1.0<z<1.2$ should result in an amplification of the correlation function by a factor of 5 (comparable to what we detect). \subsection{The Angular Correlation Function as a Function of Redshift} A limitation on studying large-scale clustering with the HDF is its small field of view. For $\Omega=1$, the 220 arcsec maximal extent of the WFPC2 images corresponds to 0.7 h$^{-1}$ Mpc\ and 0.9 h$^{-1}$ Mpc\ at redshifts of $z=0.4$ and $z=1.0$, respectively.
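The quoted $A_w\propto 1/\sigma_z$ scaling is easy to confirm by direct quadrature of the amplitude integral above; a sketch of our own, with $\Omega=1$, $\gamma=1.8$, and $\epsilon=0$ assumed purely for illustration:

```python
import numpy as np

gam, eps, zbar = 1.8, 0.0, 1.1   # illustrative slope, evolution index, mean redshift

def x_of_z(z, Omega=1.0):
    # Equation 3 of the text
    return 2 * ((Omega - 2) * (np.sqrt(1 + Omega * z) - 1) + Omega * z) / (Omega**2 * (1 + z)**2)

def g_of_z(z, Omega=1.0):
    # Equation 4 of the text
    return (1 + z)**2 * np.sqrt(1 + Omega * z)

def amplitude(sigma_z):
    """A_w up to constant prefactors, for a Gaussian N(z) of width sigma_z."""
    z = np.linspace(0.01, 4.0, 4001)
    dz = z[1] - z[0]
    N = np.exp(-0.5 * ((z - zbar) / sigma_z) ** 2)
    kern = N**2 * (1 + z)**(-(3 + eps)) * x_of_z(z)**(1 - gam) * g_of_z(z)
    return kern.sum() * dz / (N.sum() * dz) ** 2

# Halving sigma_z doubles the amplitude, i.e., A_w is proportional to 1/sigma_z:
ratio = amplitude(0.05) / amplitude(0.1)
print(abs(ratio / 2 - 1) < 0.05)  # -> True
```

The small residual away from exactly 2 comes from the slow variation of the redshift kernel across the Gaussian.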
Isolating very narrow intervals in redshift (e.g.\ binning on scales smaller than $\sigma_z = 0.1$, the intrinsic dispersion in the photometric-redshift relation) can, therefore, result in the correlation function being dominated by a single structure, e.g.\ a cluster of galaxies. To minimize the effect of the inhomogeneous redshift distribution observed in the HDF (Cohen et al. 1996) we divide the HDF sample into bins of width $\Delta z = 0.4$ based on their photometric redshifts. For each redshift interval, $0.0<z<0.4$, $0.4<z<0.8$, $0.8<z<1.2$ and $1.2<z<1.6$, we fit the observed correlation function, with a power law with a slope of $-0.8$, over the range $3<\theta<220$ arcsec. From this fit we measure the amplitude of the correlation function at a fiducial scale of 10 arcsec. The choice of this particular angle is simply a convenience as it is well sampled at all redshift intervals. In Figure 2 we show the evolution of the amplitude as a function of redshift. For redshifts $z>0.4$ the relation is relatively flat with a mean value of 0.12. At $z<0.4$ we would expect the amplitude to rise rapidly with redshift due to the angular diameter distance relation. We find, however, that the amplitude remains flat even for the lowest redshift bin. This implies that there is a bias in the clustering signal inferred from the $0.0<z<0.4$ redshift interval (see Section 4). The value of the correlation function amplitude is comparable to those derived from deep magnitude-limited samples of galaxies. Hudon and Lilly (1997) find an amplitude, measured at 1 degree, of $\log A_w = -2.68 \pm 0.08$ for an $R<23.5$ galaxy sample. Woods and Fahlman (1997), for a somewhat deeper survey, $R<24$, derive a value of $-2.94 \pm 0.06$. At these magnitude limits the mean redshift is approximately 0.56 (Hudon and Lilly, 1997) and the width is comparable to the redshift intervals of $\Delta z = 0.4$ that we apply to the HDF data.
Therefore, our measured amplitude of $-2.92 \pm 0.06$ is in good agreement with these previous results. \section{Modeling the Clustering Evolution} We parameterize the redshift evolution of the spatial correlation function as $(1+z)^{-(3+\epsilon)}$, where values of $\epsilon = -1.2$, $\epsilon = 0.0$ and $\epsilon = 0.8$ correspond to a constant clustering amplitude in comoving coordinates, constant clustering in proper coordinates and linear growth of clustering respectively (Peebles 1980). From Equation 2 we construct the expected evolution of the amplitude of the angular correlation function, projected to 10 arcsec, for a range of values of $r_0$, $\epsilon$ and $\Omega$. We assume that the intrinsic uncertainty of the photometric redshift for each galaxy can be approximated by a Gaussian distribution, with a dispersion $\sigma_z = 0.1$ (consistent with observations), and determine the N(z) within a particular redshift interval as being composed of a sum of these Gaussian distributions. In Figure 2 we illustrate the form of this evolution for two sets of models, one with $r_0$ =5.4 h$^{-1}$ Mpc\ and the second with $r_0$ =2.37 h$^{-1}$ Mpc\ (the best fit to the data). For each model we assume $\Omega=1$ and plot the evolutionary tracks for $\epsilon = -1.2$ (solid line), $\epsilon = 0.0$ (dotted line) and $\epsilon = 0.8$ (dashed line). For $z>0.4$ and a low $r_0$\ the observed amplitude of the correlation function is well matched by that of the predicted evolution. For redshifts $z<0.4$ the observed clustering is approximately a factor of three below the $r_0$=2.37 h$^{-1}$ Mpc\ model. This is not unexpected given the selection criteria for the HDF. The field was chosen to avoid bright galaxies visible on a POSS II photographic plate. This corresponds to a lower limit for the HDF photometric sample of $F814W \sim 20$ (Marc Postman, private communication). 
The redshift distribution for an $I_{814}<20$ magnitude limited sample has a median value of $z=0.25$ and a width of approximately $\Delta z = 0.25$ (Lilly et al.\ 1995). Therefore, by excluding the bright galaxies within the HDF we artificially suppress the clustering amplitude in the redshift range $0.0<z<0.5$. We can expect, as we have found, that the first redshift bin in the HDF will significantly underestimate the true clustering signal. Those redshift bins at $z>0.5$ are unlikely to be significantly affected by this magnitude limit. To constrain the models for the clustering evolution we, therefore, exclude the lowest redshift point in our sample, i.e.\ $0.0<z<0.4$, and determine the goodness-of-fit of each model using a $\chi^2$ statistic. The three-dimensional $\chi^2$ distribution was derived over the parameter space given by $1<r_0<5$ h$^{-1}$ Mpc, $-4<\epsilon<4$ and $0.2<\Omega<1$. We find that $\epsilon$ is relatively insensitive to the value of $\Omega$, with a variation of typically 0.4 over the range $0.2<\Omega<1.0$. As this is small when compared to the intrinsic uncertainty in measuring $\epsilon$, we integrated the probability distribution over all values of $\Omega$. In Figure 3a we show the range of possible values for $\epsilon$ as a function of $r_0$. The error bars represent the 95\% confidence intervals derived from the integrated probability distribution. Figure 3b shows the log likelihood for these fits as a function of $r_0$. The HDF data are best fitted by a model with a comoving $r_0 = 2.37$ h$^{-1}$ Mpc\ and $\epsilon = -0.4^{+0.37}_{-0.65}$. The value of $r_0$\ is comparable to recent spectroscopic and photometric surveys, with Hudon and Lilly (1996) finding $r_0$ = 2.75 $\pm 0.64$ h$^{-1}$ Mpc\ and Le F\'{e}vre et al.\ (1996) $r_0$ = 2.03 $\pm 0.14$ h$^{-1}$ Mpc. Given our redshift range, the I-band selected HDF data are comparable to a sample of galaxies selected in the restframe U ($z=1.4$) through V ($z=0.6$). 
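The grid-based $\chi^2$ fit over $(r_0, \epsilon)$ can be sketched schematically. The model function below is a toy stand-in (the actual calculation projects the spatial correlation function through Equation 2 and the smeared $N(z)$), so the normalization `0.02` and the $r_0^{1.8}$ scaling are illustrative assumptions only.

```python
import numpy as np

def chi2_grid(z, amp_obs, amp_err, model, r0_grid, eps_grid):
    """Brute-force chi^2 over an (r0, epsilon) grid, as used to
    constrain the clustering evolution."""
    chi2 = np.empty((len(r0_grid), len(eps_grid)))
    for i, r0 in enumerate(r0_grid):
        for j, eps in enumerate(eps_grid):
            resid = (amp_obs - model(z, r0, eps)) / amp_err
            chi2[i, j] = np.sum(resid ** 2)
    return chi2

def toy_model(z, r0, eps):
    """Schematic amplitude model: proportional to r0**1.8 and the
    (1 + z)**-(3 + eps) evolution; normalization is arbitrary."""
    return 0.02 * r0 ** 1.8 * (1.0 + z) ** (-(3.0 + eps))
```

Marginalizing the resulting $\chi^2$ surface over one axis corresponds to the integration over $\Omega$ described above.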
To tie these observations into the clustering of local galaxies we, therefore, compare our results with the B band selected clustering analyses of Davis and Peebles (1983) and Loveday et al.\ (1992). Assuming a canonical value of $r_0$ = 5.4 h$^{-1}$ Mpc\ we require $\epsilon = 2.10^{+0.43}_{-0.64}$ to match the high redshift HDF data (i.e.\ significantly more evolution than that predicted by linear theory). A bias may be introduced into the analysis of the clustering evolution because the I band magnitude selection corresponds to a selection function that is redshift dependent (see above). If, as is observed in the local Universe, the clustering length is dependent on galaxy type, then selecting different inherent populations may mimic the observed clustering evolution. To determine the effect of this bias we allow $r_0$\ to be a function of redshift (with $r_0$\ varying by 2 h$^{-1}$ Mpc\ from $z=0$ to $z=2$). The magnitude of this change in $r_0$\ is consistent with the morphological dependence of $r_0$\ observed locally (Loveday et al.\ 1995, Iovino et al.\ 1993). Allowing for this redshift dependence reduces the value of $\epsilon$ by approximately 0.5 for all values of $r_0$\ (e.g.\ for $r_0$ = 5.4 h$^{-1}$ Mpc, $\epsilon = 1.6^{+0.43}_{-0.64}$). It is worth noting that even with these large values of $\epsilon$, and accounting for the bias due to the I band selection, the evolution of the clustering in the HDF is better fitted by a low value of $r_0$\ (the log likelihood is 4.15 less than the fit to $r_0$ =2.37 h$^{-1}$ Mpc). The evolution of galaxy clustering is, therefore, not particularly well represented by the form $(1+z)^{-(3+\epsilon)}$, and it may be better for future studies to discuss the evolution in terms of the amplitude at a particular comoving scale rather than $r_0$\ and $\epsilon$. 
\section{Conclusions} Photometric redshifts provide a simple statistical means of directly measuring the evolution of the clustering of galaxies. By isolating narrow intervals in redshift space we can reduce the number of randomly projected pairs and detect the clustering signal to high redshift and faint magnitude limits. Applying these techniques to the HDF we can characterize the evolution of the angular two-point correlation function out to $z=1.6$. For redshifts $0.4<z<1.6$ we find that the amplitude of the angular correlation function is best parameterized by a comoving $r_0$=2.37 h$^{-1}$ Mpc\ and $\epsilon = -0.4^{+0.37}_{-0.65}$. To match, however, the canonical local value for the clustering length, $r_0$=5.4 h$^{-1}$ Mpc, requires $\epsilon = 2.1^{+0.4}_{-0.6}$, significantly more than simple linear growth. It must be noted that while these results are in good agreement with those from published photometric and spectroscopic surveys (Le F\'{e}vre et al.\ 1996, Hudon and Lilly 1996), there are two caveats that should be considered before applying them to constrain models of structure formation. First, the small angular extent of the HDF (at a redshift, $z=1$, the field-of-view of the HDF is approximately 0.9 h$^{-1}$ Mpc) means that fluctuations on scales larger than we probe will contribute to the variance of the measured clustering (Szapudi and Colombi 1996). Secondly, the requirement that the HDF be positioned such that it avoids bright galaxies ($I_{814}<20$) biases our clustering statistics by artificially suppressing the number of low redshift galaxies (a bias that will be present in most deep photometric surveys). Therefore, the clustering evolution in the HDF may not necessarily be representative of the general field population. 
Given this, there is enormous potential for the application of this technique to systematic wide angle multicolor surveys, such as the Sloan Digital Sky Survey. \acknowledgments We would like to thank Marc Postman and Mark Dickinson for helpful comments on the selection and interpretation of the Hubble Deep Field data. We acknowledge partial support from NASA grants AR-06394.01-95A and AR-06337.11-94A (AJC) and an LTSA grant (ASZ).
\section{Introduction} \label{sec:intro} We explore the implications of two simple insights: First, a change in one asset's price relative to all other assets' prices must cause the distribution of relative asset prices to change. Second, an increase in the price of a relatively high-priced asset increases the dispersion of the asset price distribution. These two facts amount to accounting identities, and we show that they link the performance of a wide class of portfolios to the dynamic behavior of asset price dispersion. To formalize and explore the relationship between asset price distributions, portfolio returns, and efficient markets, we represent asset prices as continuous semimartingales and show that returns on a large class of portfolios (relative to the market) can be decomposed into a drift and changes in the dispersion of asset prices: \begin{equation} \label{intuitiveEq} \text{relative return} \;\; = \;\; \text{drift} \; - \; \text{change in asset price dispersion}, \end{equation} where the drift is non-negative and roughly constant over time, while asset price dispersion is volatile. Fluctuations in asset price dispersion thus drive fluctuations in portfolio returns relative to the market. The decomposition \eqref{intuitiveEq} is achieved using few assumptions about the dynamics of individual asset prices, which means our results are sufficiently general that they apply to almost any equilibrium asset pricing model, in a sense made precise in Section 2 below. Indeed, the decomposition \eqref{intuitiveEq} is little more than an accounting identity that is approximate in discrete time and exact in continuous time. By tying the volatility of relative portfolio returns to changes in asset price dispersion, we characterize a class of asset pricing factors that are universal across different economic models and econometric specifications. 
Our results thus formally address concerns about the implausibly high number of factors and anomalies uncovered by the empirical asset pricing literature \citep{Novy-Marx:2014,Harvey/Liu/Zhu:2016,Bryzgalova:2016}. Indeed, the decomposition \eqref{intuitiveEq} provides a novel workaround to such criticisms, since its existence need not be rationalized by a particular equilibrium asset pricing model. This generality is one of the strengths of our unconventional approach. The continuous semimartingales we use to represent asset prices allow for a practically unrestricted structure of time-varying dynamics and co-movements that are consistent with the endogenous price dynamics of any economic model. In this manner, our framework is applicable to both rational \citep{Sharpe:1964,Lucas:1978,Cochrane:2005} and behavioral \citep{Shiller:1981,DeBondt/Thaler:1989} theories of asset prices, as well as to the many econometric specifications of asset pricing factors identified in the empirical literature. Our results also have implications for the efficiency of asset markets. We show that in a market in which dividends and the entry and exit of assets over time play small roles, non-negativity of the drift component of \eqref{intuitiveEq} implies that a wide class of portfolios must necessarily outperform the market except in the special case where asset price dispersion increases at a sufficiently fast rate over time. Thus, in order to rule out predictable excess returns, asset price dispersion must be increasing on average over time at a rate sufficiently fast to overwhelm the predictable positive drift. This result re-casts market efficiency in terms of a constraint on the dynamic behavior of asset price distributions. Through this lens, our approach provides a novel mechanism to uncover risk factors or inefficiencies that persist across a variety of different asset markets. We test our theoretical predictions using commodity futures. 
This market provides a clear test of our theory, since commodity futures contracts do not pay dividends and rarely exit from the market. Although dividends and asset entry/exit can be incorporated into our framework and do not overturn the basic insight of the decomposition \eqref{intuitiveEq}, they do alter and complicate the form of our results. Furthermore, some of our results have been applied, albeit with a different interpretation, to equity markets \citep{Vervuurt/Karatzas:2015}, so the focus on commodity futures provides a completely novel set of empirical results that best aligns with our theoretical results. In the decomposition \eqref{intuitiveEq}, asset price dispersion is any convex and symmetric function of relative asset prices, where each such function admits the decomposition for a specific portfolio. Our empirical analysis focuses on two special-case measures of price dispersion, minus the geometric mean and minus the constant-elasticity-of-substitution (CES) function, and their associated equal- and CES-weighted portfolios. We decompose the relative returns of the equal- and CES-weighted commodity futures portfolios from 1974--2018 as in \eqref{intuitiveEq}, with the market portfolio defined as the price-weighted portfolio that holds one unit of each commodity futures contract. Empirically, we show that measures of commodity futures price dispersion increased only slightly over the forty-year period we study. Consequently, and as predicted by the theory, the CES- and equal-weighted portfolios exhibit positive long-run returns relative to the market, driven by the accumulating positive drift. These portfolios consistently and substantially outperform the price-weighted market portfolio of commodity futures, with excess returns that have Sharpe ratios of 0.7--0.8 in most decades. 
It is important to emphasize that the mathematical methods we use to derive our results are well-established and the subject of active research in statistics and mathematical finance. Our results and methods are most similar to \citet{Karatzas/Ruf:2017}, who provide a return decomposition similar to \eqref{intuitiveEq}. Their results are based on the original characterization of \citet{Fernholz:2002}, which led to subsequent contributions by \citet{Fernholz/Karatzas:2005}, \citet{Vervuurt/Karatzas:2015}, and \citet{Pal/Wong:2016}, among others. These contributions focus primarily on the solutions of stochastic differential equations and the mathematical conditions under which different types of arbitrage do or do not exist. Our results, in contrast, are expressed in terms of the distribution of asset prices and interpreted in an economic setting that focuses on questions of asset pricing risk factors and market efficiency. Ours is also the first paper to empirically examine these results in the commodity futures market, which, as discussed above, most closely aligns with the assumptions that underlie our theoretical results. Our results raise the possibility of a unified interpretation of different asset pricing anomalies and risk factors in terms of the dynamics of asset price distributions. For example, the value anomaly for commodities uncovered by \citet{Asness/Moskowitz/Pedersen:2013} is similar in construction to the equal- and CES-weighted portfolios we study. Our theoretical results link these excess returns to fluctuations in commodity futures price dispersion and the approximate stability of this dispersion over long time periods. Therefore, any attempt to explain these excess returns or the related value anomaly for commodity futures of \citet{Asness/Moskowitz/Pedersen:2013} must also explain the dynamics of commodity price dispersion. 
The same conclusion applies to the surprising finding of \citet{DeMiguel/Garlappi/Uppal:2009} that a naive strategy of weighting each asset equally --- $1/N$ diversification --- outperforms a variety of portfolio diversification strategies including a value-weighted market portfolio based on CAPM. The relative return decomposition \eqref{intuitiveEq} implies that such outperformance is likely a consequence of the approximate stability of asset price dispersion over time, given the positive drift. As with the value anomaly for commodities, then, any attempt to explain the relative performance of naive $1/N$ diversification must also explain the dynamics of asset price dispersion. Our results also raise questions regarding the implications of equilibrium asset pricing models for price dispersion. Since our results show that asset price dispersion operates as a universal risk factor, different models' predictions for this dispersion become a major question of interest. In particular, our results imply that price dispersion should be linked to an endogenous stochastic discount factor that in equilibrium is linked to the marginal utility of economic agents. It is not obvious what economic and financial forces might underlie such a link, however. Nonetheless, our paper shows that unless asset price dispersion is consistently and rapidly rising over time, such links must necessarily exist. \vskip 50pt \section{Theory} \label{sec:theory} In this section we ask what, if anything, can be learned about portfolio returns from information about the evolution of individual asset prices relative to each other. We do this by characterizing the close relationship between the distribution of relative asset prices and the returns for a large class of portfolios relative to the market. 
Importantly, our characterization is sufficiently broad as to nest virtually all equilibrium asset pricing theories, meaning that our results require no commitment to specific models of trading behavior, agent beliefs, or market microstructure. \subsection{Setup and Discussion} \label{sec:setup} Consider a market that consists of $N > 1$ assets. Time is continuous and denoted by $t$ and uncertainty in this market is represented by a probability space $(\Omega, \CMcal{F}, P)$ that is endowed with a right-continuous filtration $\{\CMcal{F}_t ; t \geq 0\}$. Each asset price $p_i$, $i = 1, \ldots, N$, is characterized by a positive continuous semimartingale that is adapted to $\{\CMcal{F}_t ; t \geq 0\}$, so that \begin{equation} \label{contSemimart} p_i(t) = p_i(0) + g_i(t) + v_i(t), \end{equation} where $g_i$ is a continuous process of finite variation, $v_i$ is a continuous, square-integrable local martingale, and $p_i(0)$ is the initial price. The semimartingale representation \eqref{contSemimart} decomposes asset price dynamics into a time-varying cumulative growth component, $g_i(t)$, whose total variation is finite over every interval $[0, T]$, and a randomly fluctuating local martingale component, $v_i(t)$. By representing asset prices as continuous semimartingales, we are able to impose almost no structure on the underlying economic environment. For any continuous semimartingales $x, y$, let $\langle x, y \rangle$ denote the cross variation of these processes and $\langle x \rangle = \langle x, x \rangle$ denote the quadratic variation of $x$. Since the continuous semimartingale decomposition \eqref{contSemimart} is unique and the finite variation processes $g_i$ and $g_j$ all have zero cross-variation \citep{Karatzas/Shreve:1991}, it follows that \begin{equation} \label{crossVariation} \langle p_i, p_j \rangle (t) = \langle v_i, v_j \rangle (t), \end{equation} for all $i, j = 1, \ldots, N$ and all $t$. 
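The decomposition \eqref{contSemimart} and the identity \eqref{crossVariation} can be illustrated with a purely hypothetical discretized simulation: a smooth drift for $g_i$, correlated Brownian increments for $v_i$, and a numerical check that the realized cross-variation of prices matches that of the martingale parts. The parameter values below are arbitrary assumptions for illustration.

```python
import numpy as np

def simulate_prices(p0, mu, sigma, n_steps=10_000, dt=1e-4, seed=0):
    """Euler sketch of p_i(t) = p_i(0) + g_i(t) + v_i(t): a smooth
    finite-variation drift g_i plus a local-martingale part v_i built
    from correlated Brownian increments (sigma is the loading matrix)."""
    rng = np.random.default_rng(seed)
    n = len(p0)
    dW = rng.normal(scale=np.sqrt(dt), size=(n_steps, n))
    dg = np.full((n_steps, n), mu) * dt   # finite-variation increments
    dv = dW @ sigma.T                     # local-martingale increments
    increments = np.vstack([np.zeros(n), dg + dv])
    return p0 + np.cumsum(increments, axis=0)
```

Summing the outer products of the price increments approximates $\langle p_i, p_j \rangle(T)$; because the drift has zero quadratic variation, this realized covariation converges to $\langle v_i, v_j \rangle(T) = \Sigma \Sigma^\top T$, as \eqref{crossVariation} asserts.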
The cross-variation processes $\langle p_i, p_j \rangle$ measure the cumulative covariance between asset prices $p_i$ and $p_j$, and thus the differentials of these processes, $d \langle p_i, p_j \rangle (t)$, measure the instantaneous covariance between $p_i$ and $p_j$ at time $t$. Similarly, the quadratic variation processes $\langle p_i \rangle$ measure the cumulative variance of the asset price $p_i$, and thus the differential of that process, $d \langle p_i \rangle (t)$, measures the instantaneous variance of $p_i$. Our approach in this paper is unconventional in that we do not impose a specific model of asset pricing. Instead, we derive results in a very general setting, with the understanding that the minimal assumptions behind these results mean that they will be consistent with almost any underlying economic model. Indeed, essentially any asset price dynamics generated endogenously by a model can be represented as general continuous semimartingales of the form \eqref{contSemimart}. This generality is crucial, since we wish to provide results that apply to all economic and financial environments. Before proceeding, we pause and ask what assumptions the framework \eqref{contSemimart} relies on and how these assumptions relate to other asset pricing theories. A first important assumption is that assets do not pay dividends, so that returns are driven entirely by capital gains via price changes. In this sense, we can think of the $N$ assets in the market as rolled-over futures contracts that guarantee delivery of some underlying real asset on a future date. We emphasize that this assumption is for simplicity only. Our results can easily be extended to include general continuous semimartingale dividend processes similar to \eqref{contSemimart}. Including such dividend processes complicates the theory but does not change the basic insight of our results, a point that we discuss further below. 
Second, we consider a closed market in which there is no asset entry or exit over time. In other words, we assume that the $N$ assets in the market are unchanged over time. As with dividends, our basic framework can be extended to include asset entry and exit using local time processes that measure the intensity of crossovers in rank (see, for example, \citet{Fernholz:2017a}). In such an extension, only the top $N$ assets in the market at a given moment in time are considered, and there is a local time process that measures the impact of entry and exit into and out of that top $N$, just like in the framework of \citet{Fernholz/Fernholz:2018}. For simplicity, we do not include asset entry and exit and the requisite local time processes in our theoretical analysis. We do, however, discuss how such entry and exit might impact our results. Furthermore, for our empirical analysis in Section \ref{sec:empirics} we consider commodity futures contracts in which asset exit --- the more significant omission --- does not occur over our sample period, thus aligning our empirical analysis as closely as possible with the theoretical assumptions of no dividends and no entry or exit. In addition, we emphasize that our assumption that the continuous semimartingale price processes \eqref{contSemimart} are positive is only for simplicity and can be relaxed. Indeed, \citet{Karatzas/Ruf:2017} show how many of our theoretical results can be extended to a market in which zero prices are possible. Since zero prices are essentially equivalent to asset exit, this extension of our results provides another example of how the exit of assets from the market can be incorporated into our framework without overturning the basic insight of our results. The last assumption behind \eqref{contSemimart} is that the prices $p_i$ are continuous functions of time $t$ that are adapted to the filtration $\{\CMcal{F}_t ; t \geq 0\}$. 
The assumption of continuity is essential for mathematical tractability, since we rely on stochastic differential equations whose solutions are readily obtainable in continuous time to derive our theoretical results. Given the generality of our setup, it is difficult to see how introducing instantaneous jumps into the asset price dynamics \eqref{contSemimart} would meaningfully alter our conclusions. Nonetheless, it is important and reassuring that in Section \ref{sec:empirics} we confirm the validity of our continuous-time results using monthly, discrete-time asset price data. This is not surprising, however, since with discrete-time data an instantaneous price jump is indistinguishable from a rapid but continuous price change, which is permitted by \eqref{contSemimart}. Finally, the assumption that asset prices $p_i$ are adapted means only that they cannot depend on the future. This reflects the reality that agents are not clairvoyant, and cannot relay information about the future realization of stochastic processes to the present. The decomposition \eqref{contSemimart} separates asset price dynamics into two distinct parts. The first, the finite variation process $g_i$, has an instantaneous variance of zero (zero quadratic variation) and measures the cumulative growth in price over time. Despite its finite variation, the cumulative growth process $g_i$ can constantly change depending on economic and financial conditions as well as other factors, including the prices of the different assets. In Section \ref{sec:generalPort}, we show that our main relative return decomposition result consists of a finite variation process as well. In the subsequent empirical analysis of Section \ref{sec:empirics}, we construct this finite variation process using discrete-time asset price data and show a clear contrast between its time-series behavior and the behavior of processes with positive quadratic variation (instantaneous variance greater than zero). 
The second part of the decomposition \eqref{contSemimart} consists of the square-integrable local martingale $v_i$. In general, this process has a positive instantaneous variance (positive quadratic variation), and thus its fluctuations are much larger and faster than for the finite variation cumulative growth process $g_i$. Note that a local martingale is more general than a martingale \citep{Karatzas/Shreve:1991}, and thus includes an extremely broad class of continuous stochastic processes. Intuitively, the process $v_i$ can be thought of as a random walk with a variance that can constantly change depending on economic and financial conditions as well as other factors. Furthermore, we allow for a rich structure of potentially time-varying covariances among the local martingale components $v_i$ of different asset prices, which are measured by the cross-variation processes \eqref{crossVariation}. The commodity futures market we apply our theoretical results to in Section \ref{sec:empirics} offers one of the cleanest applications of our theory, since commodities rarely exit the market and their futures contracts do not pay dividends. A number of studies have decomposed commodity futures prices into risk premia and forecasts of future spot prices \citep{Fama/French:1987,Chinn/Coibion:2014}. Commodity spot prices, which are a major determinant of futures prices, have in turn been linked to storage costs and fluctuations in supply and demand \citep{Brennan:1958,Alquist/Coibion:2014}. In the context of this literature, there are many potential mappings from the fundamental economic and financial forces that determine spot and futures commodity prices to the general continuous semimartingale representation of asset prices \eqref{contSemimart}. Indeed, higher storage costs, increases in demand, rising risk premia, and many other factors can be represented as increases in the cumulative growth process $g_i$. 
Similarly, all of the unpredictable random shocks that impact commodity markets can be represented as changes in the local martingale $v_i$. The crucial point, however, is that all of these models and the different economic and financial factors that they emphasize are consistent with the reduced form representation of asset prices \eqref{contSemimart}. After all, any model that proposes an explanation for the growth and volatility of commodity prices can be translated into our setup. The advantage of \eqref{contSemimart} is that we need not commit to any particular model of asset pricing, thus allowing us to derive results that are consistent across all the different models. \subsection{Portfolio Strategies} \label{sec:port} A \emph{portfolio strategy} $s(t) = (s_1(t), \ldots, s_N(t))$ specifies the number of shares of each asset $i = 1, \ldots, N$ that are to be held at time $t$. The shares $s_1, \ldots, s_N$ that make up a portfolio strategy must be measurable, adapted, and non-negative.\footnote{The assumption that portfolios hold only non-negative shares of each asset, and hence do not hold short positions, is only for simplicity. Our theory and results can be extended to long-short portfolios as well.} The \emph{value} of a portfolio strategy $s$ is denoted by $V_s > 0$, and satisfies \begin{equation} \label{valueEq} V_s(t) = \sum_{i=1}^N s_i(t)p_i(t), \end{equation} for all $t$. It is sometimes also useful to describe portfolio strategies $s$ in terms of \emph{weights}, denoted by $w^s(t) = (w^s_1(t), \ldots, w^s_N(t))$, which measure the fraction of portfolio $s$ invested in each asset. The shares of each asset held by a portfolio strategy, $s_i$, are easily linked to the weights of that portfolio strategy, $w^s_i$. 
In particular, a portfolio strategy $s(t) = (s_1(t), \ldots, s_N(t))$ has weights equal to \begin{equation} \label{weightsEq} w^s_i(t) = \frac{p_i(t)s_i(t)}{V_s(t)}, \end{equation} for all $i = 1, \ldots, N$ and all $t$, since \eqref{weightsEq} is equal to the dollar value invested in asset $i$ by portfolio $s$ divided by the dollar value of portfolio $s$. It is easy to confirm using \eqref{valueEq} and \eqref{weightsEq} that the weights $w^s_i$ sum to one. We require that all portfolios satisfy the self-financing constraint, which ensures that gains or losses from the portfolio strategy $s$ account for all changes in the value of the investment over time. This implies that \begin{equation} \label{selfFinanceEq} V_s(t) - V_s(0) = \int_0^t \sum_{i=1}^N s_i(u)\,dp_i(u), \end{equation} for all $t$. In addition, in order to permit comparisons on an even playing field, we set the initial holdings for all portfolios equal to each other. Without loss of generality, we set this initial value equal to the combined initial price of all assets in the economy, so that \begin{equation} \label{valueNormalizationEq} V_s(0) = \sum_{i=1}^N p_i(0), \end{equation} for all portfolio strategies $s$. One simple example of a portfolio strategy that will play a central role in much of our theoretical and empirical analysis is the \emph{market portfolio strategy}, which we denote by $m$. The market portfolio $m$ holds one share of each asset, so that $m(t) = (1, \ldots, 1)$ for all $t$. Following \eqref{valueEq}, we have that the value of the market portfolio strategy, $V_m$, is given by \begin{equation} \label{marketValueEq} V_m(t) = \sum_{i=1}^N m_i(t)p_i(t) = \sum_{i=1}^N p_i(t), \end{equation} for all $t$. Note that the market portfolio satisfies the self-financing constraint, since \begin{equation} \int_0^t \sum_{i=1}^N m_i(u)\,dp_i(u) = \int_0^t \sum_{i=1}^N \,dp_i(u) = \sum_{i=1}^N p_i(t) - \sum_{i=1}^N p_i(0) = V_m(t) - V_m(0), \end{equation} for all $t$. 
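The portfolio value and weight definitions \eqref{valueEq} and \eqref{weightsEq} are straightforward to verify numerically. The sketch below uses arbitrary illustrative prices and checks that the weights sum to one and that the market portfolio (one share of each asset) has value equal to the sum of prices.

```python
import numpy as np

def portfolio_value(shares, prices):
    """V_s(t) = sum over i of s_i(t) * p_i(t)."""
    return float(shares @ prices)

def portfolio_weights(shares, prices):
    """w_i = p_i * s_i / V_s: the fraction of the portfolio in asset i."""
    return shares * prices / portfolio_value(shares, prices)
```

For the market portfolio the weight of asset $i$ reduces to $p_i / \sum_j p_j$, i.e.\ each asset's share of total market value.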
It also satisfies the initial condition \eqref{valueNormalizationEq}, as shown by evaluating \eqref{marketValueEq} at $t = 0$. Our definition of a portfolio strategy is very broad and includes many strategies that would be difficult or costly to implement in the real world. This broadness is intentional, as it helps to showcase the generality and power of our theoretical results. Indeed, one of our main contributions is to show that the returns for a large class of portfolios can be characterized parsimoniously under almost no assumptions about the underlying dynamics of asset prices. Once we have established this decomposition for general portfolio strategies, we will turn to specific examples to explain and highlight our results. \subsection{The Distribution of Asset Prices} \label{sec:distribution} The distribution of asset prices in our framework can be described in a simple way as a function of relative prices. Let $\theta = (\theta_1, \ldots, \theta_N)$, where each $\theta_i$, $i = 1, \ldots, N$, is given by \begin{equation} \label{relPricesEq} \theta_i(t) = \frac{p_i(t)}{\sum_{j=1}^N p_j(t)}. \end{equation} Because the continuous semimartingales $p_i$ are all positive by assumption, it follows that $0 < \theta_i < 1$, for all $i = 1, \ldots, N$. By construction, we also have that $\theta_1 + \cdots + \theta_N = 1$. We denote the range of the relative price vector $\theta = (\theta_1, \ldots, \theta_N)$ by $\Delta$, so that \begin{equation} \label{delta} \Delta = \left\{ (\theta_1, \ldots, \theta_N) \in (0, 1)^N : \sum_{i=1}^N \theta_i = 1 \right\}. \end{equation} Note that the market portfolio strategy $m$, which is defined as holding one share of each asset at all times, has weights equal to the relative price vector $\theta$. 
This is an immediate consequence of \eqref{weightsEq} and \eqref{marketValueEq}, which together imply that \begin{equation} \label{marketWeights} w^m_i(t) = \frac{p_i(t)}{V_m(t)} = \frac{p_i(t)}{\sum_{j=1}^N p_j(t)} = \theta_i(t), \end{equation} for all $i = 1, \ldots, N$ and all $t$. The portfolio strategies we characterize are constructed using measures of the dispersion of the asset price distribution. We demonstrate that the returns on these portfolios relative to the market portfolio depend crucially on changes in this asset price dispersion. The following definition makes dispersion of the asset price distribution a precise concept. \begin{defn} \label{dispersionDef} A twice continuously differentiable function $F : \Delta \to {\mathbb R}$ is a \emph{measure of price dispersion} if it is convex and invariant under permutations of the relative asset prices $\theta_1, \ldots, \theta_N$. \end{defn} We say that asset prices are more (less) dispersed as a measure of price dispersion $F$ increases (decreases). The following lemma explains why Definition \ref{dispersionDef}, which is the convex analogue of the diversity measure from \citet{Fernholz:2002}, accurately captures the concept of asset price dispersion. \begin{lem} \label{dispersionLem} Let $F$ be a measure of price dispersion and $\theta, \theta' \in \Delta$. Suppose that \begin{equation} \max(\theta) = \max(\theta_1, \ldots, \theta_N) > \max(\theta'_1, \ldots, \theta'_N) = \max(\theta'), \end{equation} and that $\theta_i = \theta'_i$ for all $i$ in some subset of $\{1, \ldots, N\}$ that contains $N - 2$ elements. Then $F(\theta) \geq F(\theta')$. Furthermore, if $F$ is strictly convex, then $F(\theta) > F(\theta')$. \end{lem} To see how Lemma \ref{dispersionLem} explains the validity of Definition \ref{dispersionDef}, let us consider two relative price vectors $\theta, \theta' \in \Delta$. 
Suppose that the maximum relative price for $\theta$ is greater than for $\theta'$, while all other relative prices but one are equal to each other.\footnote{Note that if $\max(\theta) > \max(\theta')$, then it must be that $\theta_i \neq \theta'_i$ for at least two indexes $i = 1, \ldots, N$.} In this case, the relative price vector $\theta$ is more dispersed than $\theta'$, since these two are equal except that $\theta$ has a higher maximum price than $\theta'$. According to Lemma \ref{dispersionLem}, any measure of price dispersion $F$ will be weakly greater for $\theta$ than for $\theta'$ in this case, thus demonstrating that $F$ is weakly increasing in asset price dispersion. By a similar logic, the lemma also establishes that any strictly convex measure of price dispersion $F$ is strictly increasing in asset price dispersion. We wish to consider two specific measures of price dispersion, both of which play a crucial role in forming portfolios for our empirical analysis. The first measure is based on the \emph{geometric mean function} $G : \Delta \to [0, \infty)$, defined by \begin{equation} \label{geometricMeanEq} G(\theta(t)) = \left( \theta_1(t)\cdots\theta_N(t) \right)^{1/N}. \end{equation} Because the geometric mean function is concave, the function $-G < 0$ is a measure of price dispersion according to Definition \ref{dispersionDef}. The second measure of price dispersion is based on the \emph{constant elasticity of substitution (CES) function} $U : \Delta \to [0, \infty)$, defined by \begin{equation} \label{cesEq} U(\theta(t)) = \left( \sum_{i=1}^N \theta^{\gamma}_i(t) \right)^{1/\gamma}, \end{equation} where $\gamma$ is a nonzero constant.\footnote{For simplicity, we rule out the case where $\gamma = 0$ and $U$ becomes a Cobb-Douglas function.} As with the geometric mean function, the CES function is also concave, and hence the function $-U < 0$ is a measure of price dispersion according to Definition \ref{dispersionDef}. 
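As a quick numerical check (hypothetical relative prices; the function names are ours), both $G$ and $U$ are largest at the uniform distribution, so $-G$ and $-U$ increase as prices become more dispersed:

```python
import math

def geometric_mean(theta):
    """The geometric mean function G of (geometricMeanEq)."""
    return math.prod(theta) ** (1.0 / len(theta))

def ces(theta, gamma):
    """The CES function U of (cesEq), with nonzero parameter gamma."""
    return sum(t ** gamma for t in theta) ** (1.0 / gamma)

uniform = [0.25, 0.25, 0.25, 0.25]
spread = [0.55, 0.25, 0.15, 0.05]   # more dispersed relative prices
# G and U fall as prices disperse, so -G and -U rise with dispersion.
assert geometric_mean(uniform) > geometric_mean(spread)
assert ces(uniform, -0.5) > ces(spread, -0.5)
```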
The portfolio strategies that we construct using measures of price dispersion have relative returns that can be decomposed into changes in asset price dispersion and a non-negative drift process. This drift process is defined in terms of an associated measure of price dispersion. For any measure of price dispersion $F$ and any $i, j = 1, \ldots, N$, let $F_i$ denote the partial derivative of $F$ with respect to $\theta_i$, $\frac{\partial F}{\partial \theta_i}$, let $F_{ij}$ denote the second partial derivative of $F$ with respect to $\theta_i$ and $\theta_j$, $\frac{\partial^2 F}{\partial \theta_i \partial \theta_j}$, and let $H_F = (F_{ij})_{1 \leq i,j \leq N}$ denote the Hessian matrix of $F$. \begin{defn} \label{driftDef} For any measure of price dispersion $F$, the associated \emph{drift process} $\alpha_F$ is given by \begin{equation} \label{alpha} \alpha_F(\theta(t)) = \frac{1}{2}\sum_{i, j = 1}^N F_{ij}(\theta(t))\,d \langle \theta_i, \theta_j \rangle (t). \end{equation} \end{defn} \begin{lem} \label{alphaLem} For any measure of price dispersion $F$, the drift process $\alpha_F$ satisfies $\alpha_F \geq 0$. Furthermore, if $\operatorname{rank}(H_F) > 1$ and the covariance matrix $\left(d \langle p_i, p_j \rangle \right)_{1 \leq i, j \leq N}$ is positive definite for all $t$, then $\alpha_F > 0$. \end{lem} The non-negativity of the drift process $\alpha_F$ is significant. We show that, together with changes in price dispersion, this process accurately describes the returns of a large class of portfolio strategies relative to the market via a decomposition of the form \eqref{intuitiveEq}. Thus, if asset price dispersion is roughly unchanged over long time periods, then the long-run relative returns for many portfolios will be dominated by the drift process and hence will be non-negative. 
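In discrete time, the increment of \eqref{alpha} is the quadratic form $\frac{1}{2}\,\Delta\theta^{\top} H_F\, \Delta\theta$, which is non-negative because the Hessian of a convex $F$ is positive semidefinite. A minimal sketch with the hypothetical dispersion measure $F(\theta) = \sum_i \theta_i^2$, whose Hessian is $2I$ (this choice is ours, not one used in the paper):

```python
def drift_increment(theta_old, theta_new):
    """Discrete increment (1/2) sum_ij F_ij dtheta_i dtheta_j for the
    hypothetical convex, symmetric F(theta) = sum_i theta_i^2. Its Hessian
    is 2*I, so the increment reduces to sum_i (dtheta_i)^2 >= 0."""
    return sum((b - a) ** 2 for a, b in zip(theta_old, theta_new))

inc = drift_increment([0.4, 0.1, 0.5], [0.45, 0.10, 0.45])
assert inc >= 0.0   # non-negativity, as in Lemma alphaLem
```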
Furthermore, in this scenario the long-run relative return will be strictly positive if the measure of price dispersion $F$ is chosen appropriately --- so that $\operatorname{rank}(H_F) > 1$ --- and the instantaneous variance of asset prices is positive and not perfectly correlated --- so that $\left(d \langle p_i, p_j \rangle \right)_{1 \leq i, j \leq N}$ is positive definite. In fact, in Section \ref{sec:empirics} we confirm the approximate stability of commodity futures price dispersion over long time periods and the predictable positive relative returns that this stability implies. \subsection{General Results} \label{sec:generalPort} In this section, we characterize the returns for a broad class of portfolio strategies relative to the market. We show that these relative returns can be decomposed into the non-negative drift process defined in the previous section and changes in price dispersion, just like in \eqref{intuitiveEq}. One of the key ideas that underlies our results is that each measure of price dispersion $F$ has a corresponding portfolio strategy whose returns relative to the market are characterized by the value of the associated non-negative drift process $\alpha_F$ and changes in $F$. For this reason, measures of price dispersion are commonly said to ``generate'' the corresponding portfolio that admits such a decomposition \citep{Fernholz:2002,Karatzas/Ruf:2017}. One implication of this result is that there is a one-to-one link between measures of price dispersion and portfolio strategies whose relative returns depend on changes in that measure of price dispersion. The following theorem, which is similar to the more general results in Proposition 4.7 of \citet{Karatzas/Ruf:2017}, formalizes this idea. \begin{thm} \label{relValueThm} Let $F$ be a measure of price dispersion, and suppose that $F(\theta) < 0$ for all $\theta \in \Delta$. 
Then, the portfolio strategy $s(t) = (s_1(t), \ldots, s_N(t))$ with \begin{equation} \label{strategyEq} s_i(t) = \frac{V_s(t)}{V_m(t)}\left( 1 + \frac{1}{F(\theta(t))}\left( F_i(\theta(t)) - \sum_{j=1}^N \theta_j(t)F_j(\theta(t)) \right) \right), \end{equation} for each $i = 1, \ldots, N$, has a value process $V_s$ that satisfies\footnote{Note that the stochastic integral $\int \alpha_F$ is evaluated with respect to the cross variation processes contained in $\alpha_F$, according to \eqref{alpha}.} \begin{equation} \label{relValueEq} \log V_s(T) - \log V_m(T) = -\int_0^T\frac{\alpha_F(\theta(t))}{F(\theta(t))} + \log ( -F(\theta(T)) ), \end{equation} for all $T$. \end{thm} Theorem \ref{relValueThm} is powerful because it decomposes the returns for a broad class of portfolio strategies into the cumulative value of the non-negative drift process $\alpha_F$ and price dispersion as measured by $F$.\footnote{In Appendix \ref{supp}, we show that it is not necessary to characterize the decomposition in \eqref{strategyEq} in terms of logarithms. See Theorem \ref{relValueThmApp}.} Crucially, these portfolio strategies are easily implemented \emph{without any knowledge of the underlying fundamentals of the assets}. The portfolio $s$ of \eqref{strategyEq} specifies a number of shares of each asset to hold at time $t$ as a function of the prices of different assets relative to each other at time $t$, as measured by the relative price vector $\theta(t)$, and the relative value of the portfolio at time $t$, as measured by $V_s(t)/V_m(t)$. These quantities are easily observed over time, and do not require difficult calculations or costly information acquisition. 
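Evaluating \eqref{strategyEq} requires only $F$, its gradient, and observed relative prices. A sketch for the choice $F = -G$ (minus the geometric mean, whose partial derivatives are $F_i = -G/(N\theta_i)$); the prices are hypothetical:

```python
import math

def shares_from_F(theta, F, gradF, Vs_over_Vm):
    """Evaluate the generating formula (strategyEq):
    s_i = (Vs/Vm) * (1 + (F_i - sum_j theta_j F_j) / F)."""
    g = gradF(theta)
    avg = sum(t * gi for t, gi in zip(theta, g))
    return [Vs_over_Vm * (1.0 + (gi - avg) / F(theta)) for gi in g]

def F(theta):                      # F = -G, minus the geometric mean
    return -math.prod(theta) ** (1.0 / len(theta))

def gradF(theta):                  # F_i = -G / (N * theta_i)
    N = len(theta)
    G = math.prod(theta) ** (1.0 / N)
    return [-G / (N * t) for t in theta]

theta = [0.4, 0.1, 0.5]
s = shares_from_F(theta, F, gradF, Vs_over_Vm=1.0)
# The implied portfolio weights s_i * theta_i * Vm / Vs are all equal:
w = [si * ti for si, ti in zip(s, theta)]
assert all(abs(wi - 1.0 / 3.0) < 1e-9 for wi in w)
```

The resulting shares are $s_i = V_s/(N\theta_i V_m)$, consistent with \eqref{strategyGEq} below.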
The decomposition \eqref{relValueEq} from Theorem \ref{relValueThm} characterizes the log value of the portfolio strategy $s$ relative to the log value of the market portfolio strategy $m$ at time $T$ in terms of the cumulative value of the associated drift process adjusted by price dispersion, $-\int_0^T\frac{\alpha_F(\theta(t))}{F(\theta(t))}$, and the log value of minus asset price dispersion, $\log ( -F(\theta(T)) )$. In order to go from this characterization of relative portfolio values to a characterization of relative portfolio returns, we take differentials of both sides of \eqref{relValueEq}. This yields \begin{equation} \label{relReturnEq} d\log V_s(t) - d\log V_m(t) = -\frac{\alpha_F(\theta(t))}{F(\theta(t))} + d\log ( -F(\theta(t)) ), \end{equation} for all $t$. According to \eqref{relReturnEq}, then, the log return of the portfolio $s$ relative to the market can be decomposed into the non-negative value of the drift process adjusted by price dispersion, measured by $-\alpha_F/F \geq 0$, and changes in asset price dispersion, measured by $d\log ( -F )$. The relative return characterization \eqref{relReturnEq} is of the same form as the intuitive version \eqref{intuitiveEq} presented in the Introduction. Therefore, Theorem \ref{relValueThm} implies that increases (decreases) in asset price dispersion lower (raise) the relative returns on a large class of portfolios. It also implies that if price dispersion is unchanged, then the relative returns on this large class of portfolios will be either non-negative or positive, since the drift process from \eqref{relValueEq} is either non-negative or positive according to Lemma \ref{alphaLem}. We confirm both of these predictions using commodity futures data in Section \ref{sec:empirics}. Another implication of Theorem \ref{relValueThm} is that one part of the decomposition of the relative value of the portfolio strategy $s$ is a finite variation process. 
In particular, the cumulative value of the drift process adjusted by price dispersion, $-\int_0^T\frac{\alpha_F(\theta(t))}{F(\theta(t))}$, is a finite variation process by construction. To see why, note that the stochastic integral of a non-negative continuous process is continuous and non-decreasing, and any non-decreasing continuous process is a finite variation process \citep{Karatzas/Shreve:1991}. Recall from Section \ref{sec:setup} that a finite variation process has finite total variation over every interval $[0, T]$. This means that the process has zero quadratic variation, or equivalently, zero instantaneous variance. In Section \ref{sec:empirics}, we decompose actual relative returns as described by Theorem \ref{relValueThm} using monthly commodity futures data and show a clear contrast between the time-series behavior of the zero-instantaneous-variance process $-\int_0^T\frac{\alpha_F(\theta(t))}{F(\theta(t))}$ and the positive-instantaneous-variance process $\log ( -F(\theta(T)) )$. In particular, we find that the sample variance of the finite variation process is orders of magnitude lower than that of the positive quadratic variation process, as predicted by the theorem. The decompositions \eqref{relValueEq} and \eqref{relReturnEq} are little more than accounting identities, which are approximate in discrete time and exact in continuous time. There are essentially no restrictive assumptions about the underlying dynamics of asset prices and their co-movements that go into these results, making it difficult to imagine an equilibrium model of asset pricing that meaningfully clashes with Theorem \ref{relValueThm}. Despite this generality, two simplifying assumptions behind these results --- that assets do not pay dividends, and that the market is closed so that there is no asset entry or exit over time --- merit further discussion. 
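The contrast between zero and positive quadratic variation discussed above is easy to see numerically (a sketch with a smooth path standing in for the finite variation process and a simulated random walk standing in for the dispersion process; both are hypothetical):

```python
import math, random

random.seed(0)

def realized_qv(path):
    """Sum of squared increments -- the discrete analogue of quadratic variation."""
    return sum((b - a) ** 2 for a, b in zip(path, path[1:]))

n = 10_000
dt = 1.0 / n
smooth = [math.sin(2 * math.pi * k * dt) for k in range(n + 1)]  # finite variation
walk = [0.0]
for _ in range(n):
    walk.append(walk[-1] + random.gauss(0.0, math.sqrt(dt)))     # Brownian-like

# The smooth path's realized QV vanishes as the grid refines;
# the random walk's realized QV stays near its variance of 1.
assert realized_qv(smooth) < 0.01
assert 0.5 < realized_qv(walk) < 2.0
```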
If we were to include dividends in our framework, we would get a relative value decomposition that is very similar to \eqref{relValueEq}. The only difference in this case would be an extra term added to \eqref{relValueEq} measuring the cumulative dividends from the portfolio strategy $s$ relative to the cumulative dividends from the market portfolio strategy. In the presence of dividends, then, relative capital gains could still be decomposed into the drift process and changes in price dispersion as in Theorem \ref{relValueThm}. The only complication would be an extra term that measures relative cumulative dividends as part of relative investment value. The result of Theorem \ref{relValueThm} can also be extended to include asset entry and exit over time. As discussed in Section \ref{sec:setup}, the closed market assumption can be relaxed by introducing a local time process that measures the impact of asset entry and exit to and from the market, as detailed by \citet{Fernholz/Fernholz:2018}. If we were to relax this assumption and include asset entry and exit in our framework, we would get a relative value decomposition that is identical to \eqref{relReturnEq} plus one extra term that measures the differential impact of entry and exit on the returns of the portfolio strategy $s$ versus the market portfolio strategy. As with dividends, then, relative returns could still be decomposed into the drift process and changes in price dispersion as in Theorem \ref{relValueThm} in this case. The only complication would be an extra term that measures the relative impact of entry and exit on returns. \subsection{Proof Sketch} \label{sec:proof} We present the proof of Theorem \ref{relValueThm} in Appendix \ref{proofs}. In this subsection, we provide a sketch of this proof using second-order Taylor approximations of functions, with the understanding that these approximations are exact in continuous time by It{\^o}'s lemma \citep{Karatzas/Shreve:1991,Nielsen:1999}. 
Furthermore, for any function $f$, we use the notation $df(x)$ and $d^2f(x)$ to denote, respectively, $f(x) - f(x')$ and $(f(x) - f(x'))^2$, where $x - x' \in {\mathbb R}^N$ is approximately equal to $(0, \ldots, 0)$. Let $F < 0$ be a measure of price dispersion, which has a second-order Taylor approximation given by \begin{equation} \label{pfSketchEq1} dF(\theta(t)) \approx \sum_{i=1}^N F_i(\theta(t)) \, d\theta_i(t) + \frac{1}{2}\sum_{i,j =1}^N F_{ij}(\theta(t)) \, d\theta_i(t) \, d\theta_j(t). \end{equation} Consider a portfolio strategy $s$ that holds shares \begin{equation} \label{pfSketchEq2} s_i(t) = \frac{V_s(t)}{V_m(t)}\left(c(t) + \frac{F_i(\theta(t))}{F(\theta(t))}\right), \end{equation} for each $i = 1, \ldots, N$, where $c(t)$ is potentially time-varying and sets $\sum_{i=1}^N p_i(t)s_i(t) = V_s(t)$ for all $t$, thus ensuring that $s$ is a valid portfolio strategy according to \eqref{valueEq}. Note that \eqref{pfSketchEq2} defines the portfolio strategy $s$ in the same way as \eqref{strategyEq} in Theorem \ref{relValueThm}. Following \eqref{selfFinanceEq}, for this proof sketch we assume that \begin{equation} \label{pfSketchEq3} d\frac{V_s(t)}{V_m(t)} = \sum_{i=1}^Ns_i(t) \, d\theta_i(t), \end{equation} and we leave the derivation of this equation to Appendix \ref{proofs}. Substituting \eqref{pfSketchEq2} into \eqref{pfSketchEq3} yields \begin{equation} \label{pfSketchEq4} d\frac{V_s(t)}{V_m(t)} = \frac{V_s(t)}{V_m(t)} \sum_{i=1}^N \left(c(t) + \frac{F_i(\theta(t))}{F(\theta(t))}\right) d\theta_i(t) = \frac{V_s(t)}{V_m(t)} \sum_{i=1}^N\frac{F_i(\theta(t))}{F(\theta(t))} \, d\theta_i(t), \end{equation} for all $t$, where the last equality follows from the fact that $c(t)$ does not vary across different $i$ and $\sum_{i=1}^N \theta_i(t) = 1$ for all $t$, so that $d \sum_{i=1}^N\theta_i(t) = 0$. 
If we substitute \eqref{alpha} and \eqref{pfSketchEq4} into \eqref{pfSketchEq1}, then we have \begin{equation} \label{pfSketchEq5} \frac{d(V_s(t)/V_m(t))}{V_s(t)/V_m(t)} \approx -\frac{\alpha_F(\theta(t))}{F(\theta(t))} + \frac{dF(\theta(t))}{F(\theta(t))}, \end{equation} for all $t$. Let $\Omega$ be the process \begin{equation} \label{omega} \Omega(\theta(t)) = -F(\theta(t)) \exp\int_0^t-\frac{\alpha_F(\theta(s))}{F(\theta(s))}. \end{equation} According to It{\^o}'s product rule \citep{Karatzas/Shreve:1991}, the second-order Taylor approximation of $\Omega$ is given by\footnote{For an informal derivation of It{\^o}'s product rule, note that the second-order Taylor approximation of $f(x_1, x_2) = x_1x_2$ is given by \begin{equation*} df = d (x_1x_2) \approx x_1 \, dx_2 + x_2 \, dx_1 + dx_1 \, dx_2. \end{equation*}} \begin{equation} \label{pfSketchEq6} \begin{aligned} d \Omega(\theta(t)) & \approx -dF(\theta(t)) \, \exp\int_0^t-\frac{\alpha_F(\theta(s))}{F(\theta(s))} + \alpha_F(\theta(t))\exp\int_0^t-\frac{\alpha_F(\theta(s))}{F(\theta(s))} \\ & \qquad \qquad \qquad \qquad \qquad + \; d(-F(\theta(t))) \, d\left( \exp\int_0^t-\frac{\alpha_F(\theta(s))}{F(\theta(s))} \right), \end{aligned} \end{equation} for all $t$. As discussed above, the stochastic integral $\int_0^t\frac{\alpha_F(\theta(s))}{F(\theta(s))}$ is a finite variation process, and therefore the third term on the right-hand side of \eqref{pfSketchEq6}, which measures the cross variation of this stochastic integral and $-F$, is equal to zero. 
This yields, for all $t$, \begin{align*} d \Omega(\theta(t)) & \approx -dF(\theta(t)) \, \exp\int_0^t-\frac{\alpha_F(\theta(s))}{F(\theta(s))} + \alpha_F(\theta(t))\exp\int_0^t-\frac{\alpha_F(\theta(s))}{F(\theta(s))} \\ & = \big( \alpha_F(\theta(t)) - dF(\theta(t)) \big)\exp\int_0^t-\frac{\alpha_F(\theta(s))}{F(\theta(s))}, \end{align*} which implies that \begin{equation} \label{pfSketchEq7} \frac{d \Omega(\theta(t))}{\Omega(\theta(t))} \approx -\frac{\alpha_F(\theta(t))}{F(\theta(t))} + \frac{dF(\theta(t))}{F(\theta(t))}, \end{equation} for all $t$. Since the right-hand sides of \eqref{pfSketchEq5} and \eqref{pfSketchEq7} are equivalent, it follows that \begin{equation} \frac{V_s(t)}{V_m(t)} = \Omega(\theta(t)) = -F(\theta(t)) \exp\int_0^t-\frac{\alpha_F(\theta(s))}{F(\theta(s))}, \end{equation} for all $t$, which establishes \eqref{relValueEq} and hence Theorem \ref{relValueThm}. This derivation shows that, once \eqref{pfSketchEq3} is established, the proof of Theorem \ref{relValueThm} is simply a matter of applying It{\^o}'s lemma in a clever way. This proof sketch also highlights the manner in which our results rely on the continuous time framework \eqref{contSemimart}. It{\^o}'s lemma only holds for continuous time stochastic processes, and therefore the precision achieved by \eqref{relValueEq} requires the assumption that time is continuous. In the absence of a continuous time framework, the second-order Taylor approximations in the above proof sketch would be approximations only. \subsection{Examples} \label{sec:examples} Theorem \ref{relValueThm} is quite general and characterizes the performance of a broad class of portfolio strategies relative to the market. We wish to apply this general characterization to the two measures of price dispersion introduced in Section \ref{sec:distribution}, minus the geometric mean, $-G$, and minus the CES function, $-U$. 
The two corollaries that follow are simple applications of Theorem \ref{relValueThm}. \begin{cor} \label{returnsCor1} The portfolio strategy $g(t) = (g_1(t), \ldots, g_N(t))$ with \begin{equation} \label{strategyGEq} g_i(t) = \frac{V_g(t)}{N\theta_i(t)V_m(t)}, \end{equation} for each $i = 1, \ldots, N$, has a value process $V_g$ that satisfies \begin{equation} \label{relValueGEq} \log V_g(T) - \log V_m(T) = - \int_0^T\frac{\alpha_G(\theta(t))}{G(\theta(t))} + \log G(\theta(T)), \end{equation} for all $T$. \end{cor} In Corollary \ref{returnsCor1}, the shares of each asset $i$ held at time $t$, denoted by $g(t) = (g_1(t), \ldots, g_N(t))$, are calculated by evaluating \eqref{strategyEq} from Theorem \ref{relValueThm} using the measure of price dispersion $-G$. The results of this evaluation are given by \eqref{strategyGEq}. In terms of the portfolio weights $w^g$ defined in \eqref{weightsEq}, the shares $g_i$ imply an equal-weighted portfolio in which equal dollar amounts are invested in each asset, since \begin{equation} \label{equalWeightsEq} w^g_i(t) = \frac{g_i(t)p_i(t)}{V_g(t)} = \frac{1}{N}, \end{equation} for $i = 1, \ldots, N$ and all $t$. For this reason, we shall refer to the portfolio strategy $g$ as the \emph{equal-weighted portfolio strategy}. One consequence of Theorem \ref{relValueThm} and Corollary \ref{returnsCor1}, then, is that the return of the equal-weighted strategy relative to the market can be decomposed into the non-negative drift $\alpha_G$ and changes in price dispersion as measured by minus the geometric mean of the asset price distribution, according to \eqref{relValueGEq}. 
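The identity \eqref{relValueGEq} can be checked on simulated prices. The sketch below (hypothetical lognormal shocks; the identity is exact only in continuous time, so a small discretization tolerance is allowed) rebalances an equal-weighted portfolio each step and accumulates the adjusted drift $-\alpha_G/G$ computed from the Hessian of the geometric mean:

```python
import math, random

random.seed(1)
N, steps, sigma = 5, 2000, 0.01   # hypothetical market size and volatility

def G(theta):
    return math.prod(theta) ** (1.0 / len(theta))

p = [1.0] * N
theta = [pi / sum(p) for pi in p]
log_rel = 0.0                     # log(V_g / V_m), both starting at the same value
drift = 0.0                       # cumulative adjusted drift -int alpha_G/G
logG0 = math.log(G(theta))

for _ in range(steps):
    p_new = [pi * math.exp(random.gauss(0.0, sigma)) for pi in p]
    r = [b / a for a, b in zip(p, p_new)]                # gross returns
    ret_g = sum(r) / N                                   # equal-weighted
    ret_m = sum(t * ri for t, ri in zip(theta, r))       # price-weighted market
    log_rel += math.log(ret_g) - math.log(ret_m)
    theta_new = [pi / sum(p_new) for pi in p_new]
    d = [b - a for a, b in zip(theta, theta_new)]
    # Increment of -alpha_F/F for F = -G, using the Hessian of -G:
    # (1/2) [ sum_i d_i^2/(N theta_i^2) - ( sum_i d_i/(N theta_i) )^2 ] >= 0.
    drift += 0.5 * (sum(di * di / (N * ti * ti) for di, ti in zip(d, theta))
                    - sum(di / (N * ti) for di, ti in zip(d, theta)) ** 2)
    p, theta = p_new, theta_new

rhs = drift + math.log(G(theta)) - logG0
assert drift >= 0.0               # Lemma alphaLem
assert abs(log_rel - rhs) < 1e-2  # (relValueGEq), up to discretization error
```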
\begin{cor} \label{returnsCor2} The portfolio strategy $u(t) = (u_1(t), \ldots, u_N(t))$ with \begin{equation} \label{strategyUEq} u_i(t) = \frac{V_u(t)\theta^{\gamma-1}_i(t)}{V_m(t)U^{\gamma}(\theta(t))}, \end{equation} for each $i = 1, \ldots, N$, has a value process $V_u$ that satisfies \begin{equation} \label{relValueUEq} \log V_u(T) - \log V_m(T) = -\int_0^T\frac{\alpha_U(\theta(t))}{U(\theta(t))} + \log U(\theta(T)), \end{equation} for all $T$. \end{cor} As with Corollary \ref{returnsCor1}, the shares of each asset $i$ held at time $t$ in Corollary \ref{returnsCor2}, denoted by $u(t) = (u_1(t), \ldots, u_N(t))$, are calculated by evaluating \eqref{strategyEq} using the measure of price dispersion $-U$ and the results of this evaluation are given by \eqref{strategyUEq}. The portfolio weights for the strategy $u$ are given by \begin{equation} \label{cesWeightsEq} w^u_i(t) = \frac{u_i(t)p_i(t)}{V_u(t)} = \frac{p_i(t)\theta^{\gamma-1}_i(t)}{V_m(t)U^{\gamma}(\theta(t))} = \frac{\theta^{\gamma}_i(t)}{U^{\gamma}(\theta(t))} = \frac{\theta^{\gamma}_i(t)}{\sum_{j=1}^N \theta^{\gamma}_j(t)}, \end{equation} for $i = 1, \ldots, N$ and all $t$. We shall refer to this portfolio strategy as the \emph{CES-weighted portfolio strategy}. Like with the equal-weighted strategy, Theorem \ref{relValueThm} and Corollary \ref{returnsCor2} imply that the return of the CES-weighted strategy relative to the market can be decomposed into the non-negative drift $\alpha_U$ and changes in price dispersion as measured by minus the CES function applied to the asset price distribution, according to \eqref{relValueUEq}. Each different value of the nonzero CES parameter $\gamma$ implies a different CES function and hence a different portfolio strategy $u$. Note that as $\gamma$ tends to zero, the CES-weighted portfolio strategy converges to the equal-weighted strategy since the weights \eqref{cesWeightsEq} tend to $1/N$. 
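The weights \eqref{cesWeightsEq} and their behavior in $\gamma$ can be verified directly (hypothetical relative prices):

```python
def ces_weights(theta, gamma):
    """CES portfolio weights w_i = theta_i^gamma / sum_j theta_j^gamma, gamma != 0."""
    powered = [t ** gamma for t in theta]
    total = sum(powered)
    return [x / total for x in powered]

theta = [0.4, 0.1, 0.5]
# gamma = 1 recovers the price (market) weights:
assert all(abs(a - b) < 1e-12 for a, b in zip(ces_weights(theta, 1.0), theta))
# gamma near zero approaches the equal weights 1/N:
assert all(abs(w - 1.0 / 3.0) < 1e-5 for w in ces_weights(theta, 1e-6))
# negative gamma overweights the lowest-priced asset:
w = ces_weights(theta, -0.5)
assert w[1] == max(w)
```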
For positive (negative) values of $\gamma$, the CES-weighted portfolio strategy is more (less) invested in higher-priced assets than the equal-weighted portfolio. Finally, if $\gamma$ is equal to one, then the CES-weighted portfolio is equivalent to the market portfolio, since in this case, for all $t$, \begin{equation} u_i(t) = \frac{V_u(t)}{V_m(t)U(\theta(t))} = \frac{V_u(t)}{V_m(t)}, \end{equation} because $U(\theta(t)) = \sum_{i=1}^N \theta_i(t) = 1$ when $\gamma = 1$, and any portfolio strategy that purchases an equal number of shares of each asset is equivalent to the market portfolio. \vskip 50pt \section{Empirical Results} \label{sec:empirics} Having characterized the relationship between relative returns and asset price distributions in full generality in Section \ref{sec:theory}, we now turn to an empirical analysis. We wish to investigate the accuracy of the decomposition in Theorem \ref{relValueThm} using real asset price data. In particular, we show in this section that the decompositions characterized in Corollaries \ref{returnsCor1} and \ref{returnsCor2} provide accurate descriptions of actual relative returns for the equal- and constant-elasticity-of-substitution (CES)-weighted portfolio strategies, as predicted by the theory. \subsection{Data} \label{sec:data} We use data on the prices of 30 different commodity futures from 1969-2018 to test our theoretical predictions. The choice to focus on commodity futures is motivated by two main factors. First, the two most important assumptions we impose on our theoretical framework --- that assets do not pay dividends, and that the market is closed and there is no asset entry or exit over time --- align fairly closely with commodity futures markets. These assets do not pay dividends, with returns driven entirely by capital gains. Commodity futures also rarely exit from the market, which is notable since such exit can substantially affect the relative returns of the equal- and CES-weighted portfolio strategies. 
In fact, no commodity futures contracts that we are aware of disappear from the market from 1969-2018, so this potential issue is irrelevant over the time period we consider. While new commodity futures contracts do enter into our data set between 1969-2018, such entry does not affect our empirical results and is easily incorporated into our framework as we explain in detail below. Second, previous studies have already examined the return of equal- and CES-weighted portfolio strategies relative to the market using equities, so our choice of commodity futures provides an environment for truly novel empirical results. \citet{Vervuurt/Karatzas:2015}, for example, construct a CES-weighted portfolio of equities similar to the portfolio we construct for commodity futures below. These authors show that the CES-weighted equity portfolio consistently outperforms the market from 1990-2014 as predicted by Theorem \ref{relValueThm}, despite the fact that dividends and entry and exit in the form of IPOs and bankruptcies are important factors in equity markets. Table \ref{commInfoTab} lists the start date and trading market for the 30 commodity futures in our 1969-2018 data set. These commodities encompass the four primary commodity domains (energy, metals, agriculture, and livestock) and span many bull and bear regimes. The table also reports the annualized average and standard deviation of daily log price changes over the lifetime of each futures contract. These data were obtained from the Pinnacle Data Corp., and report the two-month-ahead futures price of each commodity on each day that trading occurs, with the contracts rolled over each month. Relative asset prices as defined by the $\theta_i$'s in \eqref{relPricesEq} are crucial to our theoretical framework and results. This concept, however, is essentially meaningless in the context of commodity prices, since different commodities are measured using different units such as barrels, bushels, and ounces. 
In order to give relative prices meaning in the context of commodity futures, we normalize all contracts with data on the January 2, 1969 start date so that their prices are equal to each other. All subsequent price changes occur without modification, meaning that price dynamics are unaffected by our normalization. For those commodities that enter into our data set after 1969, we set their initial log prices equal to the average log price of those commodities already in our data set on that date. After these commodities enter into the data set with a normalized price, all subsequent price changes occur without modification. The normalized commodity futures prices we construct are similar to price indexes, with all indexes set equal to each other on the initial start date and any indexes that enter after this start date set equal to the average of the existing indexes. Figure \ref{relPricesFig} plots the normalized log commodity futures prices relative to the average for all 30 contracts in our data set from 1969-2018. This figure shows how normalized prices quickly disperse after the initial start date, with commodity futures prices constantly being affected by different shocks. After an initial period of rapid dispersion, however, the normalized commodity futures prices are roughly stable relative to each other with what looks like only modest increases in dispersion occurring after approximately 1980. These patterns are quantified and analyzed in our empirical analysis below. \subsection{Portfolio Construction} For our empirical analysis, it is necessary to construct a market portfolio strategy as defined by the weights \eqref{marketWeights}. In the context of commodity futures, the market portfolio cannot hold one share of each asset since futures contracts are simply agreements between two parties with no underlying asset held. 
This issue is easily resolved, however, since the market portfolio weights \eqref{marketWeights} are well-defined in the context of normalized commodity futures prices. In particular, \eqref{marketWeights} implies that the market portfolio invests in each commodity futures contract an amount that is proportional to the normalized price of that commodity. For this reason, we often refer to the market portfolio strategy as the price-weighted market portfolio strategy in the empirical analysis of this section. Note that the market portfolio of commodity futures requires no rebalancing, since price changes automatically cause the weights of each commodity in the portfolio to change in a manner that is consistent with price-weighting. In addition to the price-weighted market portfolio, we construct equal- and CES-weighted portfolios of commodity futures as described in Corollaries \ref{returnsCor1} and \ref{returnsCor2}. The weights that define these two portfolio strategies are given by \eqref{equalWeightsEq} and \eqref{cesWeightsEq}, and are constructed using the normalized prices for which relative price is a meaningful concept. For the CES-weighted portfolio strategy, we set the value of $\gamma$ equal to $-0.5$, meaning that this portfolio places greater weight on lower-priced commodity futures than does the equal-weighted portfolio (see the discussion at the end of Section \ref{sec:examples}). Both the equal- and CES-weighted portfolio strategies require active rebalancing since, unlike the price-weighted market portfolio, their weights tend to deviate from \eqref{equalWeightsEq} and \eqref{cesWeightsEq} as prices change over time. Each portfolio is rebalanced once each month. Finally, even though our commodity futures data cover 1969-2018, the fact that we normalize prices by setting them equal to each other on the 1969 start date implies that the distribution of relative prices will have little meaning until these prices are given time to disperse. 
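The rebalancing asymmetry noted above can be illustrated in a single period (hypothetical normalized prices): the price-weighted portfolio requires no trades, while restoring equal weights does, although the required trades are self-financing:

```python
# One rebalancing period with hypothetical normalized prices.
p0 = [10.0, 10.0, 10.0]
p1 = [12.0, 9.0, 10.0]
N = len(p0)

# Price-weighted market portfolio: hold one unit of each normalized price
# index, so its value tracks the sum of prices with no trading at all.
V_m0, V_m1 = sum(p0), sum(p1)

# Equal-weighted portfolio: equal dollar amounts earn the average gross return.
V_g0 = sum(p0)
V_g1 = V_g0 * sum(b / a for a, b in zip(p0, p1)) / N

# After prices move, the dollar positions have drifted from equal weights,
# so restoring weights 1/N requires nonzero (but self-financing) trades.
held = [(V_g0 / N) / a * b for a, b in zip(p0, p1)]   # positions before rebalance
target = V_g1 / N
trades = [target - h for h in held]
assert abs(sum(trades)) < 1e-9          # rebalancing is self-financing
assert any(abs(t) > 0 for t in trades)  # but it does require trading
```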
In a manner similar to the commodity value measure of \citet{Asness/Moskowitz/Pedersen:2013}, we wait five years before forming the equal-, CES-, and price-weighted market portfolios, so that these portfolios are constructed using normalized prices from 1974-2018. \subsection{Results} \label{sec:results} Figure \ref{returnsFig} plots the log cumulative returns for the price-weighted (market) portfolio strategy and the equal- and CES-weighted portfolio strategies from 1974-2018. The figure shows that all three portfolios have roughly similar behavior over time, but that the monthly rebalanced equal- and CES-weighted portfolios gradually and consistently outperform the price-weighted portfolio over time. These patterns are quantified in Table \ref{returnsTab}, which reports the annualized average and standard deviation of monthly returns for all three portfolio strategies over this time period. The monthly returns of the market portfolio have correlations of 0.95 and 0.89 with the returns of the equal- and CES-weighted portfolios, respectively. The outperformance of the equal- and CES-weighted portfolio strategies relative to the price-weighted market portfolio is also evident in Table \ref{relReturnsTab}, which reports the annualized average, standard deviation, and Sharpe ratio of monthly relative returns for the equal- and CES-weighted portfolios from 1974-2018. Tables \ref{returnsTab} and \ref{relReturnsTab} also report returns statistics for each decade in our long sample period. The results of Tables \ref{returnsTab} and \ref{relReturnsTab} show that the equal- and CES-weighted portfolios consistently and substantially outperformed the price-weighted market portfolio over the 1974-2018 time period. This outperformance is most evident from the high Sharpe ratios for the excess returns of both the equal- and CES-weighted portfolio, as shown in Table \ref{relReturnsTab}. 
Notably, both of these Sharpe ratios consistently rise above 0.5 after 1980, which is after most of the commodity futures contracts in our data set have started trading according to Table \ref{commInfoTab}. In other words, as the number of tradable assets $N$ rises, portfolio outperformance also rises. This is not surprising, since a greater number of tradable assets generally implies a greater value for the non-negative drift process $\alpha_F$, and it is this process that mostly determines relative portfolio returns over long horizons, as we demonstrate below. The general theory of Section \ref{sec:theory} does not make any statements about the size of portfolio returns. Instead, this theory states that the returns for a large class of portfolio strategies relative to the market can be decomposed into a non-negative drift and changes in asset price dispersion, according to Theorem \ref{relValueThm}. When applied to the equal- and CES-weighted portfolios as in Corollaries \ref{returnsCor1} and \ref{returnsCor2}, this implies that the relative return of the equal-weighted portfolio strategy can be decomposed into the drift process adjusted by price dispersion, $-\alpha_G/G \geq 0$, and changes in the geometric mean of the asset price distribution, as in \eqref{relValueGEq}. Similarly, the relative return of the CES-weighted portfolio strategy can be decomposed into the drift process adjusted by price dispersion, $-\alpha_U/U \geq 0$, and changes in the CES function applied to the asset price distribution, as in \eqref{relValueUEq}. In order to empirically investigate the decomposition \eqref{relValueEq} from Theorem \ref{relValueThm}, in Figure \ref{returnsEWFig} we plot the cumulative abnormal returns --- returns relative to the price-weighted market portfolio strategy --- of the equal-weighted portfolio strategy together with the cumulative value of the drift process adjusted by price dispersion, $-\alpha_G/G$, from 1974-2018. 
In addition, Figure \ref{dispEWFig} plots price dispersion as measured by minus the log of the geometric mean of the commodity price distribution, $G$, normalized relative to its average value for 1974-2018. Beyond the consistent and substantial outperformance of the equal-weighted portfolio relative to the price-weighted portfolio, these figures show that short-run relative return fluctuations for the equal-weighted portfolio closely follow fluctuations in commodity price dispersion, while the long-run behavior of these relative returns closely follows the smooth adjusted drift. Indeed, there is a striking contrast between the high volatility of price dispersion in Figure \ref{dispEWFig} and the near-zero volatility of the adjusted drift in Figure \ref{returnsEWFig}. This is an important observation that is a direct prediction of Theorem \ref{relValueThm} and Corollary \ref{returnsCor1}, a point we discuss further below. In addition to the contrasting volatilities of price dispersion and the adjusted drift, Figures \ref{returnsEWFig} and \ref{dispEWFig} show that the cumulative abnormal returns of the equal-weighted portfolio strategy are equal to the cumulative value of the adjusted drift process, $\int_0^T -\alpha_G(\theta(t))/G(\theta(t))\,dt$, plus the log of the geometric mean of the commodity price distribution, $\log G(\theta(T))$. Indeed, the solid black line in Figure \ref{returnsEWFig} (cumulative abnormal returns) is equal to the dashed red line in that same figure (cumulative value of the adjusted drift process) minus the line in Figure \ref{dispEWFig} (minus the log of the geometric mean of the commodity price distribution). This is exactly the relationship described by \eqref{relValueGEq} from Corollary \ref{returnsCor1}. We stress, however, that this empirical relationship is a necessary consequence of how the non-negative adjusted drift process, $-\alpha_G/G$, is calculated. 
For each day that we have data, the cumulative value of $-\alpha_G/G$ up to that day is calculated by subtracting the log value of the geometric mean of the commodity price distribution, $\log G$, from the cumulative abnormal returns, $\log V_g - \log V_m$, according to the identity \eqref{relValueGEq} from Corollary \ref{returnsCor1}. Given that the empirical decomposition of Figures \ref{returnsEWFig} and \ref{dispEWFig} is constructed so that \eqref{relValueGEq} must hold, it is natural to wonder what the usefulness of this decomposition is. Some of this usefulness lies in the prediction that one part of this decomposition, the cumulative value of the adjusted drift process, $-\alpha_G/G$, is non-decreasing. This prediction is clearly confirmed by the smooth upward slope of the cumulative value of the adjusted drift line in Figure \ref{returnsEWFig}, and has implications for the long-run relative performance of the equal- and price-weighted portfolio strategies, as we discuss below. Most of the usefulness of the decomposition \eqref{relValueGEq} lies, however, in the prediction that the cumulative value of the adjusted drift process is a finite variation process, while the other part, the log value of the geometric mean of the commodity price distribution, $\log G$, is not. Recall from the discussions in Sections \ref{sec:setup} and \ref{sec:generalPort} that a finite variation process has zero quadratic variation, or zero instantaneous variance. 
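The day-by-day construction just described can be sketched in a few lines. The normalization convention (all series starting at zero) and the variable names below are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def cumulative_adjusted_drift(log_Vg, log_Vm, prices):
    """Back out the cumulative adjusted drift, int_0^t -alpha_G/G ds, from
    the identity of Corollary relValueGEq: the cumulative abnormal return
    (log Vg - log Vm) equals the cumulative drift plus the change in log G,
    where G is the geometric mean of the normalized price vector."""
    prices = np.asarray(prices, dtype=float)     # shape (T, N): T dates, N assets
    log_G = np.log(prices).mean(axis=1)          # log of the geometric mean
    abnormal = np.asarray(log_Vg, dtype=float) - np.asarray(log_Vm, dtype=float)
    abnormal = abnormal - abnormal[0]            # normalize to start at zero
    return abnormal - (log_G - log_G[0])

# Toy check: if relative prices never move, log G is constant and the
# backed-out drift coincides with the abnormal return itself.
prices = np.ones((5, 10))
abn = np.array([0.0, 0.01, 0.02, 0.025, 0.03])
drift = cumulative_adjusted_drift(abn, np.zeros(5), prices)
```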
To be clear, the prediction that the cumulative value of the adjusted drift process is a finite variation process is not a prediction that the sample variance of changes in the cumulative value of the adjusted drift process computed using monthly, discrete-time data will be equal to zero, but rather a prediction that these changes will be roughly constant over time.\footnote{Note that the sample variance of a continuous-time finite variation process computed using discrete-time data will never be exactly equal to zero.} In other words, our results predict that the cumulative value of the adjusted drift process will grow at a roughly constant rate with only a few small changes over time. This smooth growth is exactly what is observed in the dashed red line of Figure \ref{returnsEWFig}, and, as mentioned above, is in stark contrast to the highly volatile behavior of price dispersion shown in Figure \ref{dispEWFig}. This contrast can be quantified by noting that the coefficient of variation of changes in the cumulative value of the adjusted drift process is equal to 3.14, while the coefficient of variation of changes in price dispersion, as measured by minus the log of the geometric mean, is equal to 124.64. These results confirm one of the key predictions of Theorem \ref{relValueThm} and Corollary \ref{returnsCor1}. The positive and relatively constant values of the adjusted drift, $-\alpha_G/G$, over time have an important implication for the long-run return of the equal-weighted portfolio strategy relative to the price-weighted market portfolio strategy. Since \eqref{relReturnEq} and Theorem \ref{relValueThm} imply that relative returns can be decomposed into the adjusted drift and changes in asset price dispersion, a consistently positive adjusted drift over long time horizons can only be counterbalanced by consistently rising asset price dispersion. In the absence of such rising dispersion, the positive drift guarantees outperformance relative to the market. 
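The contrast quantified above (coefficients of variation of 3.14 versus 124.64) can be reproduced with a short helper. The estimator below, the sample standard deviation of the first differences of a cumulative series divided by the absolute value of their mean, is our reading of "coefficient of variation of changes", not a confirmed specification.

```python
import numpy as np

def coefficient_of_variation_of_changes(cumulative_series):
    """Coefficient of variation of the first differences of a cumulative
    series: sample std of the changes over the absolute value of their mean."""
    d = np.diff(np.asarray(cumulative_series, dtype=float))
    return d.std(ddof=1) / abs(d.mean())

# A series growing at a constant rate (like the cumulative adjusted drift)
# has a coefficient of variation near zero; a choppy series has a large one.
smooth = np.linspace(0.0, 1.0, 100)
choppy = np.array([0.0, 1.0, 0.0, 2.0, 0.0, 3.0])
cv_smooth = coefficient_of_variation_of_changes(smooth)
cv_choppy = coefficient_of_variation_of_changes(choppy)
```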
Therefore, the relatively small increase in commodity price dispersion shown in Figure \ref{dispEWFig} together with the positive values of the adjusted drift shown in Figure \ref{returnsEWFig} ensure that the equal-weighted portfolio outperforms the market portfolio over the 1974-2018 time period. In a similar manner to Figure \ref{returnsEWFig}, Figure \ref{returnsCESFig} plots the cumulative abnormal returns of the CES-weighted portfolio strategy together with the cumulative value of the drift process adjusted by price dispersion, $-\alpha_U/U$, over the same 1974-2018 time period. Figure \ref{dispCESFig} plots price dispersion as measured by minus the log of the CES function applied to the asset price distribution, $U$, normalized relative to its average value over this time period. As with the equal-weighted portfolio, the cumulative value of the adjusted drift process in Figure \ref{returnsCESFig} is calculated using the identity \eqref{relValueUEq} from Corollary \ref{returnsCor2}. The results in Figures \ref{returnsCESFig} and \ref{dispCESFig} for the CES-weighted portfolio align closely with the results in Figures \ref{returnsEWFig} and \ref{dispEWFig} for the equal-weighted portfolio. Indeed, Figures \ref{returnsCESFig} and \ref{dispCESFig} show that short-run relative return fluctuations for the CES-weighted portfolio strategy closely follow fluctuations in commodity price dispersion, as measured by minus the CES function, while the long-run behavior of these relative returns closely follows the smoothly accumulating adjusted drift. Much like in Figure \ref{returnsEWFig}, Figure \ref{returnsCESFig} shows that the cumulative value of the adjusted drift, which is a finite variation process according to Theorem \ref{relValueThm}, grows at a roughly constant rate over time, with a clear contrast between this stable growth and the rapid fluctuations in price dispersion shown in Figure \ref{dispCESFig}. 
As discussed above, the fact that the adjusted drift, $-\alpha_U/U$, is approximately constant over time is consistent with the prediction that its cumulative value is a finite variation process, thus confirming one of the key results in Theorem \ref{relValueThm} and Corollary \ref{returnsCor2}. Finally, Figure \ref{returnsCESFig} confirms the consistent and substantial outperformance of the CES-weighted portfolio relative to the price-weighted market portfolio, as shown in Tables \ref{returnsTab} and \ref{relReturnsTab}. As with the equal-weighted portfolio, this long-run outperformance is predicted by \eqref{relValueEq} and Theorem \ref{relValueThm} given the relatively small change in price dispersion observed in Figure \ref{dispCESFig} compared to the large increase in the cumulative value of the adjusted drift observed in Figure \ref{returnsCESFig}. \vskip 50pt \section{Discussion} \label{sec:discuss} The empirical results shown in Figures \ref{returnsEWFig}-\ref{dispCESFig} confirm the prediction of Theorem \ref{relValueThm} and Corollaries \ref{returnsCor1} and \ref{returnsCor2} that the drift component of the decomposition \eqref{relReturnEq} is nearly constant. As a consequence, this decomposition and its intuitive version \eqref{intuitiveEq} can be understood as \begin{equation} \label{intuitiveEq2} \text{relative return} \;\; = \;\; \text{constant} \; - \; \text{change in asset price dispersion}. \end{equation} Furthermore, the results of Figures \ref{returnsEWFig} and \ref{returnsCESFig} clearly show that this non-negative constant drift is in fact positive in the case of the equal- and CES-weighted portfolios of commodity futures. \subsection{The Price Dispersion Asset Pricing Factor} Taken together, our theoretical and empirical results show that changes in asset price dispersion are key determinants of the returns for a large class of portfolios relative to the market. 
Thus, the distribution of relative asset prices, as measured by the dispersion of those prices, is necessarily an asset pricing factor. This fact is apparent from \eqref{intuitiveEq2}, which is set up in the same way as empirical asset pricing factor regression models \citep{Fama/French:1993}. Crucially, however, the theoretical results of Theorem \ref{relValueThm} that establish the intuitive version \eqref{intuitiveEq2} are achieved under minimal assumptions that should be consistent with essentially any model of asset pricing, meaning that this price dispersion factor is universal across different economic and financial environments. Our empirical results in Figures \ref{returnsEWFig}-\ref{dispCESFig} help to confirm this universality, especially when taken together with previous studies documenting the accuracy of the decomposition of Theorem \ref{relValueThm} for U.S.\ equity markets \citep{Vervuurt/Karatzas:2015}. The generality of our results provides a novel workaround to many of the criticisms that have been raised recently about the empirical asset pricing literature. In particular, the implausibly high and rising number of factors and anomalies that this literature has identified has drawn a number of rebukes. \citet{Harvey/Liu/Zhu:2016}, for example, examine hundreds of different asset pricing factors and anomalies that have been uncovered using standard empirical methods and conclude that most are likely invalid. They also propose a substantially higher standard for statistical significance in future empirical analyses. Similarly, \citet{Bryzgalova:2016} shows that standard empirical methods applied to inappropriate risk factors in linear asset pricing models can generate spuriously high significance. \citet{Novy-Marx:2014} provides a different critique, demonstrating that many supposedly different anomalies are potentially driven by one or two common risk factors. 
All of these studies suggest that the extensive list of factors and anomalies proposed by the literature overstates the true number. The asset price dispersion factor established by Theorem \ref{relValueThm} is not derived using a specific economic model or a specific regression framework, but rather using general mathematical methods that represent asset prices as continuous semimartingales that are consistent with essentially all models and empirical specifications. For this reason, the price dispersion asset pricing factor we characterize is not subject to the criticisms of this literature. \subsection{Price Dispersion, Value, and Naive Diversification} The results of Theorem \ref{relValueThm} and Corollaries \ref{returnsCor1} and \ref{returnsCor2} offer new interpretations for both the value anomaly for commodities uncovered by \citet{Asness/Moskowitz/Pedersen:2013} and the surprising effectiveness of naive $1/N$ diversification described by \citet{DeMiguel/Garlappi/Uppal:2009}. The value anomaly for commodities of \citet{Asness/Moskowitz/Pedersen:2013} is constructed by ranking commodity futures prices relative to their average price five years earlier and then comparing the returns of portfolios of low-rank, high-value commodities to the returns of portfolios of high-rank, low-value commodities. This price ranking system is similar to the price normalization we implement based on the prices of commodity futures on the 1969 start date. Because the equal- and CES-weighted commodity futures portfolios put more weight on lower-normalized-priced commodities than does the price-weighted market portfolio, it follows that the predictable excess returns that we report in Tables \ref{returnsTab} and \ref{relReturnsTab} are similar to the value effect for commodities of \citet{Asness/Moskowitz/Pedersen:2013}. 
A key difference between these results and ours, however, is that we link the predictable excess returns of the equal- and CES-weighted portfolio strategies to the approximate stability of commodity price dispersion as measured by minus the geometric mean and CES functions. This link is essential to understanding the economic and financial mechanisms behind these excess returns. Our results imply that such excess returns decrease as asset price dispersion rises, with positive excess returns ensured only if asset price dispersion does not rise substantially over time. Thus, any attempt to explain the predictable excess returns of Tables \ref{returnsTab} and \ref{relReturnsTab} and the related value anomaly for commodity futures must also explain the fluctuations in commodity price dispersion shown in Figures \ref{dispEWFig} and \ref{dispCESFig}, since these fluctuations are driving the excess return fluctuations. This conclusion points to the importance of a deeper understanding of the economic and financial mechanisms behind fluctuations in asset price dispersion. \citet{DeMiguel/Garlappi/Uppal:2009} consider a number of different portfolio diversification strategies using several different data sets and show that a naive strategy of weighting each asset equally consistently outperforms almost all of the more sophisticated strategies. One of the strategies that is outperformed by naive $1/N$ diversification is a value-weighted market portfolio strategy based on CAPM. This strategy is equivalent to the market portfolio we defined in Section \ref{sec:port}, since our price weights are equivalent to their value weights. The results of Corollary \ref{returnsCor1}, therefore, can be applied to the excess returns of the equal-weighted portfolio relative to the value-weighted market portfolio uncovered by \citet{DeMiguel/Garlappi/Uppal:2009}. 
In particular, the corollary implies that this excess return is determined by a non-negative drift and the change in asset price dispersion as measured by minus the geometric mean of relative prices. The decomposition of Corollary \ref{returnsCor1} provides a novel interpretation of the results of \citet{DeMiguel/Garlappi/Uppal:2009} in terms of the stability of the distributions of the various empirical data sets these authors consider. As with the value anomaly for commodities, our theoretical decomposition implies that the excess returns of the naive $1/N$ diversification strategy decrease as asset price dispersion rises and are positive only if dispersion does not substantially rise over time. Thus, our results strongly suggest that the values of the various assets considered by \citet{DeMiguel/Garlappi/Uppal:2009} are stable relative to each other, in a manner similar to what we observe for commodity futures prices in Figures \ref{dispEWFig} and \ref{dispCESFig}. In the absence of such stability, there would be no reason to expect the equal-weighted portfolio to outperform the value-weighted market portfolio as the authors observe. Once again, these conclusions highlight the importance of a deeper understanding of the economic and financial mechanisms behind fluctuations in asset price dispersion. \subsection{Price Dispersion and Efficient Markets} The relative return decomposition of Theorem \ref{relValueThm} reveals a novel dichotomy for markets in which dividends and asset entry/exit over time play small roles. On the one hand, the dispersion of asset prices may be approximately stable over time, in which case \eqref{intuitiveEq2} implies that predictable excess returns exist for a large class of portfolio strategies. This is the scenario we observe for commodity futures in Figures \ref{returnsEWFig}-\ref{dispCESFig}. 
In such markets, fluctuations in asset price dispersion are linked to excess returns via the accounting identity \eqref{relReturnEq} from Theorem \ref{relValueThm}. In a standard equilibrium model of asset pricing, these predictable excess returns may exist only if they are compensation for risk. This risk, in turn, is defined by an endogenous stochastic discount factor that is linked to the marginal utility of economic agents. It is not clear, however, how marginal utility might be linked to the dispersion of asset prices. It is also not clear why marginal utility should be higher when asset prices grow more dispersed, yet these are necessary implications of any standard asset pricing model in which price dispersion is asymptotically stable, according to our results. On the other hand, the dispersion of asset prices may not be stable over time. In this case, asset price dispersion is consistently and rapidly rising, and the decomposition \eqref{intuitiveEq2} no longer predicts excess returns. Instead, this decomposition predicts rising price dispersion that cancels out the non-negative drift component of \eqref{intuitiveEq2} on average over time. The relative return decomposition of Theorem \ref{relValueThm} makes no predictions about the stability of asset price dispersion, so this possibility is not ruled out by our theoretical results. Nonetheless, it is notable that both our empirical results for commodity futures and the empirical results of \citet{Vervuurt/Karatzas:2015} for U.S.\ equities are inconsistent with this no-stability, no-excess-returns market structure. In light of these results, future work that examines the long-run properties of price dispersion in different asset markets and attempts to distinguish between the two sides of this dichotomy --- asymptotically stable markets with predictable excess returns versus asymptotically unstable markets without predictable excess returns --- may yield interesting new insights. 
This novel dichotomy has several implications. First, it provides a new interpretation of market efficiency in terms of a constraint on cross-sectional asset price dynamics and the dispersion of relative asset prices. Either asset price dispersion rises consistently and rapidly over time, consistent with this constraint, or there exists a market inefficiency or a risk factor based on the decomposition \eqref{relReturnEq}. Second, it raises the possibility that well-known asset pricing risk factors such as value, momentum, and size \citep{Banz:1981,Fama/French:1993,Asness/Moskowitz/Pedersen:2013} may be interpretable in terms of the dynamics of asset price dispersion. To the extent that the decomposition of Theorem \ref{relValueThm} is universal, the predictable excess returns underlying each of these risk factors may potentially be linked to a violation of the constraint on cross-sectional asset price dynamics and the dispersion of relative asset prices mentioned above. In other words, traditional asset pricing risk factors imply specific behavior for asset price dispersion over time and hence may be interpreted in terms of that specific behavior. \vskip 50pt \section{Conclusion} \label{sec:conclusion} We represent asset prices as general continuous semimartingales and show that the returns on a large class of portfolio strategies relative to the market can be decomposed into a non-negative drift and changes in asset price dispersion. Because of the minimal assumptions underlying this result, our decomposition is little more than an accounting identity that is consistent with essentially any asset pricing model. We show that the drift component of our decomposition is approximately constant over time, thus implying that changes in asset price dispersion determine relative return fluctuations. This conclusion reveals an asset pricing factor --- changes in asset price dispersion --- that is universal across different economic and financial environments. 
We confirm our theoretical predictions using commodity futures, and show that equal- and constant-elasticity-of-substitution-weighted portfolios consistently and substantially outperformed the price-weighted market portfolio from 1974-2018. \vskip 50pt \begin{spacing}{1.2} \bibliographystyle{chicago}
\section{Introduction} \indent The Minimal Supersymmetric Standard Model (MSSM) is an attractive extension \cite{Haber:1984rc,Nilles:1983ge} of the very successful Standard Model. One property of this theory is its rich spectrum of new heavy particles which might be discovered at the LHC if they are lighter than $\approx 2\ensuremath{\,\mathrm{TeV}}$. Searches for supersymmetric particles are performed at the Tevatron and the LHC. No superpartners have been discovered so far. In the mSugra framework of the MSSM, the lightest stop squark, one of the two scalar supersymmetric partner particles of the top quark, is assumed to be the lightest supersymmetric coloured particle, lighter than the other scalar quarks. This is due to large nondiagonal elements in the stop mixing matrix, see, e.g., the review~\cite{Aitchison:2007fn}. The lightest stop squark might be the first coloured SUSY particle to be discovered. The cross section delivers information about the stop mass or, if the mass of the stop squark is roughly known from elsewhere, information about its spin~\cite{Kane:2008kw}. If these particles cannot be discovered at the Tevatron or LHC, precise cross sections help to improve mass exclusion limits. In this paper, I study the hadroproduction of stop-antistop-pairs \begin{eqnarray} pp/p\bar{p} &\rightarrow& \tilde{t}_i \tilde{t}_i^\ast X,\enspace i=1,\,2, \end{eqnarray} with its partonic subprocesses \begin{eqnarray} gg\rightarrow \tilde{t}_i \tilde{t}_i^\ast\enspace \text{and}\enspace \ensuremath{q \bar{q}} \rightarrow\tilde{t}_i\tilde{t}_i^\ast,\enspace q = u,d,c,s,b, \end{eqnarray} including NNLO threshold contributions. The relevant leading order (LO) Feynman diagrams are shown in Fig.~\ref{fig:ggbarchannel}. The production of mixed stop pairs $\tilde{t}_1\tilde{t}_2^\ast$ or $\tilde{t}_2\tilde{t}_1^\ast$ starts at next-to-leading order (NLO)~\cite{Beenakker:1997ut} and is therefore suppressed. 
This case will not be considered in this paper. The top parton density distribution in a proton is assumed to be zero, in contrast to the other quark parton density distributions. As a consequence, there is no gluino exchange diagram as in squark-antisquark hadroproduction. For that reason, the $\ensuremath{q \bar{q}}$ channel is suppressed by a larger power of $\beta = \sqrt{1-4 m^2_{\tilde{t}}/s}$ due to $P$-wave annihilation. The final state must be in a state with angular momentum $l=1$ (denoted as $P$) to balance the spin of the gluon. Therefore, the case of stop-antistop hadroproduction needs a special treatment. At NLO, there is the $gq$ channel as an additional production mechanism. At the LHC with a center-of-mass energy of $7\ensuremath{\,\mathrm{TeV}}$, one can expect, for a luminosity of $1\ensuremath{\,\mathrm{fb}}^{-1}$, $100$ to $10^4$ events; even $10^5$ events are possible, if the stop is sufficiently light. At the LHC with a final center-of-mass energy of $14\ensuremath{\,\mathrm{TeV}}$, even more events are expected to be collected. Hence it is necessary to predict the hadronic cross section with high accuracy. \begin{figure}[t] \centering \scalebox{0.65}{\includegraphics{stopgg.eps}} \scalebox{0.46}{\includegraphics{stopqq.eps}}\vspace*{3mm} \caption{\small{LO production of a \ensuremath{\tilde{t}_i\,\tilde{t}_i^{\ast}}~pair via $gg$ annihilation (diagrams a-d) and \ensuremath{q \bar{q}}~annihilation (diagram e).}} \label{fig:ggbarchannel} \end{figure} So far, stop pairs have been searched for at the CDF~\cite{Aaltonen:2007sw,Ivanov:2008st,Aaltonen:2009sf,Aaltonen:2010uf} and D0 experiments~\cite{Abazov:2009ps,Abazov:2008kz} at the Tevatron using different strategies, for details see Tab.~\ref{tab:stopsuche}. Squarks carry colour charge, so it is not surprising that processes involving Quantum Chromodynamics (QCD) receive large higher-order corrections. 
For the production of colour-charged supersymmetric particles, the NLO corrections have been calculated in Ref.~\cite{Beenakker:1996ch}, NLL and approximated NNLO corrections can be found in Refs.~\cite{Kulesza:2008jb,Langenfeld:2009eg,Beenakker:2009ha,Beneke:2010gm}. It has been found that these corrections are quite sizeable. Electroweak NLO corrections to stop-antistop production are discussed in Refs.~\cite{Hollik:2007wf,Beccaria:2008mi}. The theoretical aspects of \ensuremath{\tilde{t}_i\,\tilde{t}_i^{\ast}}-production up to NLO have been discussed in Ref.~\cite{Beenakker:1997ut} and of its NLL contributions in Ref.~\cite{Beenakker:2010nq}. The hadronic LO and NLO cross section can be evaluated numerically using the programme \texttt{Prospino}~\cite{Beenakker:1996ed}. In this paper, I calculate and study soft gluon effects to hadronic stop-antistop production in the framework of the $R$-parity-conserving MSSM. I use Sudakov resummation to generate the approximated NNLO corrections and include approximated two-loop Coulomb corrections and the exact scale dependence. I follow the approach for top-antitop production at the LHC and the Tevatron~\cite{Moch:2008qy,Langenfeld:2009wd}. This paper is organised as follows. I review the LO and NLO contributions to the cross section. Then I describe the necessary steps to construct the approximated NNLO corrections. Using these results I calculate the approximated NNLO cross section and discuss the theoretical uncertainty due to scale variation and the error due to the parton density functions (PDFs). I give an example how these NNLO contributions reduce the scale uncertainty and improve exclusion limits. \begin{table}[tbh] \centering \begin{tabular}{llcl} \toprule Ref. 
&Process & Exclusion limit & Assumptions or comments\\ \midrule \cite{Aaltonen:2007sw} &$\tilde{t}_1\to c\tilde{\chi}^0_1$& $\mstop[1] < 100\ensuremath{\,\mathrm{GeV}}$& $m_{\chi^{0}_1} > 50\ensuremath{\,\mathrm{GeV}}$\\[2mm] \cite{Abazov:2009ps}&$p\bar{p}\to \tilde{t}_1\tilde{t}_1^\ast$ & $130\ensuremath{\,\mathrm{GeV}} < \mstop[1] < 190\ensuremath{\,\mathrm{GeV}}$ &Comparison of theor. predictions\\[0mm] &&&with experimental and observed limits\\[2mm] \cite{Ivanov:2008st,Aaltonen:2009sf}& $\tilde{t}_1\to b\tilde{\chi}^\pm_1 \to b \tilde{\chi}^0_1\ell^\pm\nu_\ell$& $128\ensuremath{\,\mathrm{GeV}} < \mstop[1] < 135\ensuremath{\,\mathrm{GeV}}$& \\[2mm] \cite{Abazov:2008kz,Aaltonen:2010uf}&$\tilde{t}_1\to b \ell^+ \tilde{\nu}_\ell$& $\mstop[1] > 180\ensuremath{\,\mathrm{GeV}}$& $m_{\tilde{\nu}} \geq 45\ensuremath{\,\mathrm{GeV}}$\\ && $\mstop[1] = 100\ensuremath{\,\mathrm{GeV}}$& $75\ensuremath{\,\mathrm{GeV}} \leq m_{\tilde{\nu}} \leq 95\ensuremath{\,\mathrm{GeV}}$\\ \bottomrule \end{tabular} \caption{\small{Exclusion limits for stop searches at the Tevatron.}} \label{tab:stopsuche} \end{table} \section{Theoretical Setup} I focus on the inclusive hadronic cross section of hadroproduction of stop pairs, $\sigma_{p p \rightarrow \ensuremath{\tilde{t}_i\,\tilde{t}_i^{\ast}} X}$, which is a function of the hadronic center-of-mass energy $\sqrt{s}$, the stop mass $\mstop$, the gluino mass $m_{\tilde{g}}$, the renormalisation scale $\mu_r$ and the factorisation scale $\mu_f$. 
In the standard factorisation approach of perturbative QCD, it reads \begin{eqnarray} \label{eq:totalcrs} \sigma_{pp/p\bar{p} \to \tilde{t}\tilde{t}^\ast X}(s,\mstop^2,m_{\tilde{g}}^2,{\mu^{\,2}_f},{\mu^{\,2}_r}) &=& \sum\limits_{i,j = q,{\bar{q}},g} \,\,\, \int\limits_{4\mstop^2}^{s }\, d {{\hat s}} \,\, L_{ij}({\hat s}, s, {\mu^{\,2}_f})\,\, \hat{\sigma}_{ij \to \tilde{t}\tilde{t}^\ast} ({{\hat s}},\mstop^2,m_{\tilde{g}}^2,{\mu^{\,2}_f},{\mu^{\,2}_r})\, \end{eqnarray} where the parton luminosities $L_{ij}$ are given as convolutions of the PDFs $f_{i/p}$ defined through \begin{eqnarray} \label{eq:partonlumi} L_{ij}({\hat s}, s, {\mu^{\,2}_f}) &=& {\frac{1}{s}} \int\limits_{{\hat s}}^s {\frac{dz}{z}} f_{i/p}\left({\mu^{\,2}_f},{\frac{z}{s}}\right) f_{j/p}\left({\mu^{\,2}_f},{\frac{{\hat s}}{z}}\right)\, \enspace . \end{eqnarray} Here, ${\hat s}$ denotes the partonic center-of-mass energy and ${\mu^{\,2}_f}, {\mu^{\,2}_r}$ are the factorisation and the renormalisation scales. The partonic cross section is expressed in terms of dimensionless scaling functions $f^{(kl)}_{ij}$ \begin{eqnarray} \hat{\sigma}_{ij} &=& \frac{\alpha_s^2}{\mstop^2} \biggl[ f^{(00)}_{ij} + 4\pi\alpha_s \Bigl(f^{(10)}_{ij} + f^{(11)}_{ij}L_N\Bigr) + (4\pi\alpha_s)^2\Bigl(f^{(20)}_{ij} + f^{(21)}_{ij}L_N + f^{(22)}_{ij}L_N^2 \Bigr) \biggr] \end{eqnarray} with $L_N = \Ln$. The LO scaling functions are given by~\cite{Beenakker:1997ut} \begin{eqnarray} f^{(00)}_{\ensuremath{q \bar{q}}} &=& \frac{\pi}{54}\beta^3\rho \enspace= \enspace\frac{\pi}{54}\beta^3 + \mathcal{O}(\beta^5),\\ f^{(00)}_{gg} &=& \frac{\pi}{384}\rho \biggl[41\beta -31\beta^3+\Bigl(17-18\*\beta^2+\beta^4\Bigr) \log\biggl(\frac{1-\beta}{1+\beta}\biggr)\biggr] \enspace= \enspace\frac{7\pi}{384}\beta + \mathcal{O}(\beta^3)\enspace . 
\end{eqnarray} Formulas for the higher orders of the $gg$-channel and its threshold expansions can be found in Refs.~\cite{Beenakker:1997ut,Langenfeld:2009eg,Langenfeld:2009eu}, if one takes into account that in the case of stop-antistop production no sum over flavours and helicities is needed. $f^{(10)}_{gg}$ has been calculated numerically using \texttt{Prospino}~\cite{Beenakker:1996ed}. A fit to this function for easier numerical handling can be found in~\cite{Langenfeld:2009eg}. At NLO, $f^{(10)}_{\ensuremath{q \bar{q}}}$ is given at threshold by~\cite{Beenakker:1997ut,Beenakker:2010nq} \begin{eqnarray} f^{(10)}_{\ensuremath{q \bar{q}}} &=& \frac{f^{(00)}_{\ensuremath{q \bar{q}}}}{4\pi^2} \biggl(\frac{8}{3}\*\log^2\big(8\*\beta^2\big) - \frac{155}{9}\*\log\big(8\*\beta^2\big) - \frac{\pi^2}{12\beta} + 54\*\pi\* a_{1}^{\ensuremath{q \bar{q}}}\biggr)\enspace. \end{eqnarray} The constant $a_{1}^{\ensuremath{q \bar{q}}}$ can be determined from a fit and is approximately given as $a_{1}^{\ensuremath{q \bar{q}}} \approx 0.042\pm 0.001$. It depends mildly on the squark and gluino masses and on the stop mixing angle. The $gq$-channel is absent at tree level. Its NLO contribution has been extracted from \texttt{Prospino}. This channel is strongly suppressed at threshold. The \lnbeta~terms which appear in the threshold expansions of the NLO scaling functions can be resummed systematically to all orders in perturbation theory using the techniques described in~\cite{Contopanagos:1996nh,Catani:1996yz,Kidonakis:1997gm,Moch:2005ba,Czakon:2009zw}. Logarithmically enhanced terms for the hadronic production of heavy quarks admitting an $S$-wave are also studied in Ref.~\cite{Beneke:2009ye} for arbitrary $SU(3)_{\text{colour}}$ representations. 
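As a numerical cross-check, the LO scaling functions above and the NLO $q\bar{q}$ threshold formula (with the fitted constant $a_1^{q\bar{q}} \approx 0.042$) can be evaluated directly. This is an illustrative sketch of the printed formulas only, not the \texttt{Prospino} implementation; the function names and the sample stop mass are our own choices.

```python
import numpy as np

def lo_scaling_functions(shat, m_stop):
    """LO scaling functions f^(00)_qq and f^(00)_gg for stop-antistop
    hadroproduction, with rho = 4 m^2 / shat and beta = sqrt(1 - rho)."""
    rho = 4.0 * m_stop**2 / shat
    beta = np.sqrt(1.0 - rho)
    f_qq = np.pi / 54.0 * beta**3 * rho
    f_gg = np.pi / 384.0 * rho * (
        41.0 * beta - 31.0 * beta**3
        + (17.0 - 18.0 * beta**2 + beta**4) * np.log((1.0 - beta) / (1.0 + beta))
    )
    return f_qq, f_gg

def nlo_qq_threshold(shat, m_stop, a1=0.042):
    """NLO qqbar scaling function near threshold: soft logs of 8*beta^2,
    the Coulomb term, and the fitted matching constant a1."""
    rho = 4.0 * m_stop**2 / shat
    beta = np.sqrt(1.0 - rho)
    f_qq, _ = lo_scaling_functions(shat, m_stop)
    L = np.log(8.0 * beta**2)
    return f_qq / (4.0 * np.pi**2) * (
        8.0 / 3.0 * L**2 - 155.0 / 9.0 * L
        - np.pi**2 / (12.0 * beta) + 54.0 * np.pi * a1
    )

# Near threshold the expansions quoted in the text should be recovered:
# f_qq -> (pi/54) beta^3 and f_gg -> (7 pi/384) beta.
m = 400.0                                 # hypothetical stop mass in GeV
beta = 0.01
shat = 4.0 * m**2 / (1.0 - beta**2)
f_qq, f_gg = lo_scaling_functions(shat, m)
```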
The resummation is performed in Mellin space after introducing moments $N$ with respect to the variable $\rho = 4\mstop^2/{\hat s}$ of the physical space: \begin{eqnarray} \label{eq:mellindef} \hat{\sigma}(N,\mstop^2) &=& \int\limits_{0}^{1}\,d\rho\, \rho^{N-1}\, \hat{\sigma}({\hat s},\mstop^2)\, . \end{eqnarray} The resummed cross section is obtained for the individual colour structures denoted as ${\bf{I}}$ from the exponential \begin{eqnarray} \label{eq:sigmaNres} \frac{\hat{\sigma}_{ij,\, {\bf I}}(N,\mstop^2)} { \hat{\sigma}^{B}_{ij,\, {\bf I}}(N,\mstop^2)} &=& g^0_{ij,\, {\bf I}}(\mstop^2) \cdot \exp\, \Big[ G_{ij,\,{\bf I}}(N+1) \Big] + {\cal O}(N^{-1}\log^n N) \, , \end{eqnarray} where all dependence on the renormalisation and factorisation scales ${\mu^{}_r}$ and ${\mu^{}_f}$ is suppressed and the respective Born term is denoted $\hat{\sigma}^{B}_{ij,\, {\bf I}}$. The exponent $G_{ij,\, {\bf I}}$ contains all large Sudakov logarithms $\log^k N$ and the resummed cross section~(\ref{eq:sigmaNres}) is accurate up to terms which vanish as a power for large Mellin-$N$. To NNLL accuracy, $G_{ij,\, {\bf I}}$ is commonly written as \begin{eqnarray} \label{eq:GNexp} G_{ij,\, {\bf I}}(N) = \log N \cdot g^1_{ij}(\lambda) + g^2_{ij,\, {\bf I}}(\lambda) + \frac{\alpha_s }{ 4 \pi}\, g^3_{ij,\, {\bf I}}(\lambda) + \dots\, , \end{eqnarray} where $\lambda = \beta_0\, \log N\, \alpha_s/(4 \pi)$. The exponential $\exp\, \big[ G_{ij,\,{\bf I}}(N+1) \big]$ in Eq.~(\ref{eq:sigmaNres}) is independent of the Born cross section~\cite{Moch:2008qy,Czakon:2009zw}. The functions $g^k_{ij}$, $k=1,2,3$, for the octet colour structure are explicitly given in Ref.~\cite{Moch:2008qy} and can be taken over from the case of top-quark hadroproduction; the function $g^0_{\ensuremath{q \bar{q}}}$ is given by Eq.~(\ref{eq:g0qq}) in the App.~\ref{subsec:resum}. 
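The Mellin transform of Eq.~(\ref{eq:mellindef}) maps the $P$-wave threshold behavior $\hat{\sigma}\sim\beta^3=(1-\rho)^{3/2}$ onto a $\Gamma(5/2)\,N^{-5/2}$ falloff at large $N$, since the moment of $(1-\rho)^{3/2}$ is the Euler Beta function $B(N,5/2)$. The sketch below checks this numerically on a toy cross section; the quadrature scheme and normalization are illustrative choices, not taken from the paper.

```python
import numpy as np
from math import gamma

def mellin_moment(sigma_hat, N, num=200000):
    """Mellin moment int_0^1 rho^(N-1) sigma_hat(rho) drho, as in
    Eq. (mellindef), evaluated with a simple midpoint rule."""
    h = 1.0 / num
    rho = (np.arange(num) + 0.5) * h
    return float(np.sum(rho**(N - 1) * sigma_hat(rho)) * h)

# Toy partonic cross section with P-wave threshold suppression beta^3:
sigma = lambda rho: (1.0 - rho) ** 1.5

N = 50
numeric = mellin_moment(sigma, N)
exact = gamma(N) * gamma(2.5) / gamma(N + 2.5)   # Euler Beta B(N, 5/2)
```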
All $g^k_{ij}$, $0\leq k \leq 3$, depend on a number of anomalous dimensions, \textit{i.e.}~the well-known cusp anomalous dimension $A_q$, the functions $D_{Q\bar{Q}}$ and $D_{q}$ controlling soft gluon emission, and the coefficients of the QCD $\beta$-function. The strength of soft gluon emission is proportional to the Casimir operator of the $SU(3)_{\mathrm{colour}}$ representation of the produced state, which is identical for $t\bar{t}$ and \ensuremath{\tilde{t}_i\,\tilde{t}_i^{\ast}}-production. Expressions for $A_q$ and $D_{q}$ are given in Refs.~\cite{Moch:2004pa,Kodaira:1981nh}, and for $D_{Q\bar{Q}}$ in Ref.~\cite{Beneke:2009rj}. At higher orders, they also depend on the chosen renormalisation scheme and thus on the dynamical degrees of freedom. For my fixed-order NNLO calculation, I extracted the $\alpha_s^2$-terms from the right-hand side of Eq.~(\ref{eq:sigmaNres}). Finally, I used Eqs.~(\ref{eq:inveins})--(\ref{eq:invczwei}) given in App.~\ref{subsec:mellininversion} to convert the Mellin-space result back to the physical $\rho$ space, keeping all terms of order $\beta^3\lnbeta[k]$, $0\leq k \leq 4$.
Eventually, I end up with the following threshold expansion for $f^{(20)}_{q\bar{q}}$: \begin{eqnarray} \label{eq:fqq20-num} f^{(20)}_{q\bar{q}} &=& \frac{f^{(00)}_{q\bar{q}}}{(16\pi^2)^2}\* \Biggl [ \frac{8192}{9} \* \lnbeta[4] + \biggl(-\frac{175616}{27}+\frac{16384}{3}\*\lnzwei +\frac{1024}{27}n_f\biggr)\lnbeta[3]\notag\\[2mm] &&\hspace*{15mm} + \biggl( \frac{525968}{27}-\frac{87808}{3}\*\lnzwei -\frac{4480}{9}\*\pi^2 +\frac{512}{3}\*C^{(1)}_{\ensuremath{q \bar{q}}} +12288\*\lnzwei[2] \notag\\[2mm] &&\hspace*{20mm} +\frac{512}{3}\*\lnzwei\,\* n_f-\frac{2080}{9}\*n_f -\frac{128}{9}\frac{\pi^2}{\beta} \biggr)\lnbeta[2]\notag\\[2mm] &&\hspace*{15mm} + \biggl( \frac{525968}{9}\*\lnzwei-43904\*\lnzwei[2] +12288\*\lnzwei[3]-\frac{4960}{9}\*C^{(1)}_{\ensuremath{q \bar{q}}} -\frac{2980288}{81}\notag\\[2mm] &&\hspace*{20mm} +\frac{61376}{9}\*\zeta_3+\frac{49280}{27}\*\pi^2 -\frac{4480}{3}\*\pi^2\*\lnzwei +512\,\*C^{(1)}_{\ensuremath{q \bar{q}}}\*\lnzwei - 2D^{(2)}_{Q\bar{Q}}\notag\\[2mm] &&\hspace*{20mm} +\Big(-\frac{128}{9}\*\pi^2+256\*\lnzwei[2]+\frac{45568}{81} -\frac{2080}{3}\*\lnzwei\Big)\*n_f\notag\\[2mm] &&\hspace*{20mm} +\Big(\frac{266}{9}-\frac{128}{9}\*\lnzwei-\frac{4}{9}n_f \Big) \frac{\pi^2}{\beta} \biggr)\lnbeta\notag\\[2mm] &&\hspace*{15mm} +\biggl( -\frac{13}{3} + \frac{22}{3}\lnzwei +\Big(\frac{10}{27} - \frac{4}{9}\*\lnzwei\Big)\*n_f \biggr)\frac{\pi^2}{\beta} +\frac{1}{27}\frac{\pi^4}{\beta^2} + C^{(2)}_{q{\bar q}} \Biggr ]\enspace . \end{eqnarray} $C^{(1)}_{\ensuremath{q \bar{q}}}$ is given as $C^{(1)}_{\ensuremath{q \bar{q}}}= 216\,\pi\, a^{\ensuremath{q \bar{q}}}_1 -\tfrac{310}{27}$, $C^{(2)}_{q{\bar q}}$ is the unknown 2-loop matching constant, which is set to zero in the numerical evaluation and $D^{(2)}_{Q\bar{Q}} = 460 -12\pi^2+72\zeta_3-\tfrac{88}{3}n_f$, see Ref.~\cite{Beneke:2009rj}. 
The $\beta^3$-behaviour of the threshold expansion of the LO cross section comes \emph{only} from the $P$-wave of the final state \ensuremath{\tilde{t}_i\,\tilde{t}_i^{\ast}}, as mentioned in the introduction, and does not spoil the factorisation properties in the threshold region of the phase space. Note that the formulas given in Ref.~\cite{Czakon:2009zw} can easily be extended to Mellin-transformed cross sections $\omega$ which vanish as a power $\beta^k$ with $k\geq 1$. The two additional powers of $\beta$ in the \ensuremath{q \bar{q}}-channel of \ensuremath{\tilde{t}_i\,\tilde{t}_i^{\ast}}-production lead to an additional $1/N$ factor in Mellin space. Eq.~(\ref{eq:sigmaNres}) reproduces the known NLO threshold expansion given in Refs.~\cite{Beenakker:1997ut,Beenakker:2010nq} for the $q\bar{q}$ channel, which is a check that the approach works. Logarithmically enhanced terms which are suppressed by an additional $1/N$ factor also appear in the resummation of the (sub)leading $\log^k(N)/N$-terms of the corrections to the structure function $F_L$ and are studied in detail in Refs.~\cite{Moch:2009hr,Laenen:2008gt}. For these reasons I apply the formulas derived for heavy-quark hadroproduction. The coefficients of the $\ln^4\beta$, $\ln^3\beta$, and $\ln^2\beta$ terms depend only on first-order anomalous dimensions and on the constant $C^{(1)}_{\ensuremath{q \bar{q}}}$, which is related to the NLO constant $a^{\ensuremath{q \bar{q}}}_1$, see the equation above. The linear $\log\beta$ term depends on $C^{(1)}_{\ensuremath{q \bar{q}}}$ and other first-order (NLO) contributions as well, but also on second-order anomalous dimensions and non-Coulomb potential contributions~\cite{Beneke:2009ye}. In Tab.~\ref{tab:terms}, I show for four examples how these parts contribute to the hadronic NNLO threshold corrections. The numbers show that terms of NLO origin contribute most and that the genuine NNLO contributions have a smaller but non-negligible effect.
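The statement that the two extra powers of $\beta$ translate into an additional $1/N$ suppression in Mellin space can be checked directly on toy moments; a small sketch (midpoint quadrature, illustrative only):

```python
import math

def moment(k, N, steps=200000):
    # Mellin moment of beta^k with beta = sqrt(1 - rho); the exact answer
    # is the Euler Beta function B(N, k/2 + 1).
    h = 1.0 / steps
    return h * sum(((i + 0.5) * h) ** (N - 1) * (1.0 - (i + 0.5) * h) ** (k / 2.0)
                   for i in range(steps))

# moments of beta^3 are suppressed by one extra power of 1/N relative to beta^1:
# moment(3, N) / moment(1, N) = (3/2) / (N + 3/2)
r_100 = moment(3, 100) / moment(1, 100)
r_200 = moment(3, 200) / moment(1, 200)
```

Analytically, $\mathrm{moment}(k,N) = B(N, k/2+1)$, so the ratio of the $\beta^3$ to the $\beta^1$ moment behaves like $(3/2)/(N+3/2)$, i.e. one extra power of $1/N$, in line with the discussion above.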
I also included the Coulomb corrections up to NNLO. For the singlet case, the Coulomb contributions are studied in Ref.~\cite{Czarnecki:1997vz}. The generalisation to other colour structures requires the substitution of the corresponding group factors and the decomposition of the colour structure of the considered process into irreducible colour representations. The last step is not necessary for stop-pair production in the $q\bar{q}$-annihilation channel. The NLO Coulomb corrections agree with the NLO Coulomb corrections for top-antitop production~\cite{Langenfeld:2009eg,Beenakker:1997ut}; in both cases, only the colour octet contributes to the scaling function at the corresponding leading order in $\beta$. Therefore, I have used as an approximation for the NNLO Coulomb contributions to $\tilde{t}\tilde{t}^\ast$-production the same NNLO Coulomb contributions as for $t\bar{t}$-production~\cite{Czarnecki:1997vz,Moch:2008qy,Langenfeld:2009eg}. Gauge invariance together with supersymmetry supports this approximation. Note that the $\log^2\beta/\beta$-term comes from the interference of the NLO Coulomb contribution with the NLO threshold logarithms. Tab.~\ref{tab:terms} also shows the NLO and the pure NNLO Coulomb contributions to the NNLO threshold corrections at the hadronic level.
The scale-dependent scaling functions are derived by renormalisation group techniques following Refs.~\cite{vanNeerven:2000uj,Kidonakis:2001nj}: \begin{eqnarray} \label{eq:f11} f_{ij}^{(11)}&=& \frac{1}{16\pi^2}\*\left( 2\* \beta_0 \*f_{ij}^{(00)} - f_{kj}^{(00)}\otimes P_{ki}^{(0)} - f_{ik}^{(00)}\otimes P_{kj}^{(0)} \right) \, ,\\[2mm] \label{eq:f21} f_{ij}^{(21)}&=& \frac{1}{(16\pi^2)^2}\*\left( 2\* \beta_1\* f_{ij}^{(00)} - f_{kj}^{(00)}\otimes P_{ki}^{(1)} - f_{ik}^{(00)}\otimes P_{kj}^{(1)}\right) \nonumber\\ & & + \frac{1}{16\pi^2}\*\left( 3 \*\beta_0 \*f_{ij}^{(10)} -f_{kj}^{(10)}\otimes P_{ki}^{(0)} - f_{ik}^{(10)}\otimes P_{kj}^{(0)}\right) \, , \\[2mm] \label{eq:f22} f_{ij}^{(22)}&=& \frac{1}{(16\pi^2)^2}\*\left( f_{kl}^{(00)}\otimes P_{ki}^{(0)}\otimes P_{lj}^{(0)} +\frac{1}{2} f_{in}^{(00)}\otimes P_{nl}^{(0)}\otimes P_{lj}^{(0)} +\frac{1}{2} f_{nj}^{(00)}\otimes P_{nk}^{(0)}\otimes P_{ki}^{(0)} \right.\notag\\[2mm] & & \hspace{18mm}\left. + 3 \*\beta_0^2 \*f_{ij}^{(00)} - \frac{5}{2}\*\beta_0 \*f_{ik}^{(00)}\otimes P_{kj}^{(0)} - \frac{5}{2}\*\beta_0 \*f_{kj}^{(00)}\otimes P_{ki}^{(0)} \right) \, , \end{eqnarray} where $\otimes$ denotes the standard Mellin convolution; in Mellin space, defined by Eq.~(\ref{eq:mellindef}), these convolutions become ordinary products. Repeated indices imply summation over admissible partons. For phenomenological applications, however, I restrict myself to the numerically dominant diagonal parton channels at two loops. Note that the scale dependence is exact at all energies, even away from threshold, because Eqs.~(\ref{eq:f11})--(\ref{eq:f22}) depend only on functions which are at least one order lower than the functions themselves. The functions $P_{ij}(x)$ are the splitting functions which govern the PDF evolution. They have the expansion \begin{eqnarray} \label{eq:splitting} P_{ij}(x) &=& \frac{\alpha_s}{4\pi}P_{ij}^{(0)}(x) + \left(\frac{\alpha_s}{4\pi}\right)^2P_{ij}^{(1)}(x) + \ldots .
\end{eqnarray} Explicit expressions for the $P_{ij}^{(k)}$ can be found in Refs.~\cite{Moch:2004pa,Vogt:2004mw}. Analytical results for $f^{(11)}_{gg}$ and $f^{(11)}_{\ensuremath{q \bar{q}}}$ are given in Ref.~\cite{Langenfeld:2009eu}. For the $gq$-channel, Eq.~(\ref{eq:f11}) simplifies to \begin{eqnarray} \label{eq:sqg11} f^{(11)}_{gq} &=& -\frac{1}{16\pi^2}\left(P^{(0)}_{gq}\otimes f^{(0)}_{gg} +\frac{1}{2 n_f}P^{(0)}_{qg}\otimes f^{(0)}_{\ensuremath{q \bar{q}}}\right). \end{eqnarray} The integration can be done explicitly, yielding \begin{align} f^{(11)}_{gq} \,\,=\,\,\,\,\,&\frac{1}{51840\*\pi}\* \biggl[ \beta\*\Big(-176 - 1083\*\rho +1409\*\rho^2\Big) +15\*\rho\*\Big(26 - (27-24\*\ln 2)\*\rho - 4\*\rho^2\Big)\*L_2 \notag\\[2mm] &\hspace*{17mm}+180\*\rho^2\*\Big(2\*L_4-L_6\Big) \biggr], \end{align} where the functions $L_2$, $L_4$, and $L_6$~\cite{Langenfeld:2009eu} are defined as \begin{eqnarray} L_2 = \log\left(\tfrac{1+\beta}{1-\beta}\right),\enspace L_4 = \Li\left(\tfrac{1-\beta}{2}\right) - \Li\left(\tfrac{1+\beta}{2}\right), \enspace L_6 = \log^2(1-\beta) - \log^2(1+\beta). \end{eqnarray} The high-energy limit of this scaling function is \begin{eqnarray} \lim_{\beta \to 1} f^{(11)}_{gq} = -\frac{11}{3240\*\pi}, \end{eqnarray} which agrees with the result given in Ref.~\cite{Beenakker:1997ut}.
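The high-energy limit quoted above provides a simple numerical cross-check of the closed-form expression; a sketch, assuming a plain power-series implementation of the dilogarithm $\mathrm{Li}_2$ (adequate for the arguments used here):

```python
import math

def Li2(x, terms=400):
    # dilogarithm via its power series sum_k x^k / k^2 (for |x| <= 1)
    return sum(x ** k / k ** 2 for k in range(1, terms + 1))

def f11_gq(beta):
    # closed-form scaling function f^(11)_{gq} transcribed from the text
    rho = 1.0 - beta ** 2
    L2 = math.log((1.0 + beta) / (1.0 - beta))
    L4 = Li2((1.0 - beta) / 2.0) - Li2((1.0 + beta) / 2.0)
    L6 = math.log(1.0 - beta) ** 2 - math.log(1.0 + beta) ** 2
    return (beta * (-176.0 - 1083.0 * rho + 1409.0 * rho ** 2)
            + 15.0 * rho * (26.0 - (27.0 - 24.0 * math.log(2.0)) * rho
                            - 4.0 * rho ** 2) * L2
            + 180.0 * rho ** 2 * (2.0 * L4 - L6)) / (51840.0 * math.pi)
```

Evaluating `f11_gq` close to $\beta = 1$ reproduces $-11/(3240\pi)$ at the sub-per-mille level: the $\rho\,L_2$ and $\rho^2 L_6$ terms vanish in the limit because $\rho = 1-\beta^2 \to 0$ faster than the logarithms diverge, leaving only the $-176\,\beta$ term.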
The threshold expansions of the NNLO-scale-dependent scaling functions of the $\ensuremath{q \bar{q}}$ channel read \begin{eqnarray} f^{(21)}_{\ensuremath{q \bar{q}}} &=& -\frac{f_{\ensuremath{q \bar{q}}}^{(00)}}{(16\pi^2)^2} \biggl[ {\frac {8192}{9}}\, \lnbeta[3] + \left( {\frac {256}{3}}\,n_f+{\frac {32768}{9}}\,\lnzwei -{\frac {46976}{9}} \right) \lnbeta[2] \notag\\[2mm] &&\hspace*{5mm} + \biggl( -{\frac {383104}{27}}\,\lnzwei +{\frac {798872}{81}}+{\frac {14336}{3}}\, \lnzwei[2] -{\frac {8080}{27}}\,n_f-{\frac {2240}{9}}\,{\pi }^{2} -{\frac {64}{9}}\,{\frac {{\pi }^{2}}{\beta}} \notag\\[2mm] &&\hspace*{5mm} +{\frac {256}{3}}\,C^{(1)} +256\,n_f\,\lnzwei \biggr) \lnbeta +{\frac {4540}{81}}\,n_f-{\frac {1924}{9}}\,C^{(1)} +2048\,\lnzwei[3] \notag\\[2mm] &&\hspace*{5mm} +{\frac {393004}{27}}\,\lnzwei -{\frac {1449488}{243}} +8\,C^{(1)}\,n_f-{\frac {85856}{9}}\, \lnzwei[2] +{\frac {14240}{9}}\,\zeta_3 \notag\\[2mm] &&\hspace*{5mm} +{\frac {11024}{27}}\,{\pi }^{2}-{\frac {1088}{3}}\,{\pi }^{2}\lnzwei +192\, \lnzwei[2]\,n_f +{\frac {25}{3}}\,{\frac {{\pi }^{2}}{\beta}} +{\frac {256}{3}}\,C^{(1)}\,\lnzwei \notag\\[2mm] &&\hspace*{5mm} -{\frac {11800}{27}}\,n_f\,\lnzwei -{\frac {32}{27}}\,{\pi }^{2}n_f-\frac{2}{3}\,n_f {\frac {{\pi }^{2}}{\beta}} \biggr],\\[3mm] f^{(22)}_{\ensuremath{q \bar{q}}} &=& \frac{f_{\ensuremath{q \bar{q}}}^{(00)}}{(16\pi^2)^2} \biggl[ {\frac {2048}{9}}\, \lnbeta[2]+ \left( -{\frac {27616}{27}}+{\frac {320}{9}}\,n_f +{\frac {4096}{9}}\,\lnzwei \right) \lnbeta \notag\\[2mm] &&\hspace*{5mm} -{\frac {2108}{27}}\,n_f+{\frac {112351}{81}} -{\frac {27616}{27}}\,\lnzwei +{\frac {2048}{9}}\, \lnzwei[2] +{\frac {320}{9}}\,n_f\,\lnzwei \notag\\[2mm] &&\hspace*{5mm} -{\frac {256}{9}}\,{\pi }^{2}+\frac{4}{3}\,n_f^{2} \biggr] \end{eqnarray} with $C^{(1)} = 54\,\pi\,a_1^{\ensuremath{q \bar{q}}}$. In Fig.~\ref{fig:scalingfunctions}, I show the LO, NLO, and NNLO scaling functions. 
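The derivation of the scale-dependent functions uses the fact, noted below Eq.~(\ref{eq:f22}), that Mellin convolutions become ordinary products in moment space. A toy numerical check, with hypothetical power-law test functions chosen so that the convolution has a simple closed form:

```python
def mellin_convolution(f, g, x, steps=20000):
    # (f ⊗ g)(x) = int_x^1 dz/z f(z) g(x/z), the standard Mellin convolution,
    # evaluated with a midpoint rule (toy accuracy only)
    h = (1.0 - x) / steps
    total = 0.0
    for i in range(steps):
        z = x + (i + 0.5) * h
        total += f(z) * g(x / z) / z
    return total * h

# purely illustrative test functions: f(z) = z^2, g(z) = z^3.
# Closed form: (f ⊗ g)(x) = x^2 - x^3, whose Mellin moments factorise as
# M[f ⊗ g](N) = 1/((N+2)(N+3)) = M[f](N) * M[g](N).
f = lambda z: z ** 2
g = lambda z: z ** 3
```

The factorisation of the moments is what turns the convolutions in Eqs.~(\ref{eq:f11})--(\ref{eq:f22}) into ordinary products in Mellin space.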
The scaling functions $f^{(00)}_{\ensuremath{q \bar{q}}}$, $f^{(11)}_{\ensuremath{q \bar{q}}}$, $f^{(20)}_{\ensuremath{q \bar{q}}}$, and $f^{(22)}_{\ensuremath{q \bar{q}}}$ depend only on the dimensionless variable $\eta = \tfrac{\hat{s}}{4\mstop^2}-1$, but $f^{(10)}_{\ensuremath{q \bar{q}}}$ and $f^{(21)}_{\ensuremath{q \bar{q}}}$ also depend mildly on the masses of the squarks and the gluino and on the stop mixing angle~\cite{Beenakker:1997ut}. At the hadronic level, the resulting effect on the NLO + NLL cross section is smaller than $2\%$~\cite{Beenakker:2010nq}, so I neglect this dependence. As an example point, I have chosen the following masses: $\mstop[1]=300\,\ensuremath{\,\mathrm{GeV}}$, $m_{\tilde{q}} = 400\,\ensuremath{\,\mathrm{GeV}} = 1.33\,\mstop[1]$, $\mstop[2]=480\,\ensuremath{\,\mathrm{GeV}} = 1.6\,\mstop[1]$, $m_{\tilde{g}} = 500\,\ensuremath{\,\mathrm{GeV}} = 1.67\,\mstop[1]$, and $\theta =\pi/2$, \textit{i.e.}~$\mstop[1]=\mstop[\text{R}]$ and $\mstop[2]=\mstop[\text{L}]$. When varying the stop mass, I preserve these mass relations. I restrict myself to the lighter stop, but the results also apply to the heavier stop, because the gluon-stop-stop interactions entering my process do not distinguish between the left-handed and the right-handed stop squarks. \begin{figure} \centering \scalebox{0.37}{\includegraphics{stopscaleNLO.eps}} \vspace*{10mm} \scalebox{0.37}{\includegraphics{stopscale2i.eps}} \vspace*{10mm} \scalebox{0.37}{\includegraphics{stopscaleNNLO.eps}} \vspace*{2mm} \caption{\small{Scaling functions $f^{(ij)}_{\ensuremath{q \bar{q}}}$ with $i = 0,1,2$ and $j\le i$.
The masses are $\mstop[1]=300\,\ensuremath{\,\mathrm{GeV}}$, $m_{\tilde{q}} = 400\,\ensuremath{\,\mathrm{GeV}}$, $\mstop[2]=480\,\ensuremath{\,\mathrm{GeV}}$, and $m_{\tilde{g}} = 500\,\ensuremath{\,\mathrm{GeV}}$.}} \label{fig:scalingfunctions} \end{figure} \begin{table} \centering \begin{tabular}{cc r rrr r rrr r} \toprule[0.08em] Collider&$\mstop$&$ \sum$\phantom{N} & $\ln^4(\beta)$ & $\ln^3(\beta)$ &$\ln^2(\beta)$ & \multicolumn{3}{c}{$\log(\beta)$} &C$_{\rm{NLO}}$&C$_{\rm{NNLO}}$\\ &&&&&&nC&$D^{(2)}_{Q\bar{Q}}$&rest&&\\ \midrule[0.03em] LHC $14 \ensuremath{\,\mathrm{TeV}}$&$300\ensuremath{\,\mathrm{GeV}}$&74.54&5.33&18.80&31.87&\multicolumn{3}{c}{27.71}&-9.45& 0.29\\[1mm] &&&&&&-2.23&17.90&12.04&&\\[1mm] \hline LHC $14 \ensuremath{\,\mathrm{TeV}}$&$600\ensuremath{\,\mathrm{GeV}}$& 2.93&0.24& 0.81& 1.28&\multicolumn{3}{c}{0.97}& -0.37&-0.01\\[1mm] &&&&&&-0.08&0.63&0.42&&\\[1mm] \hline LHC $ 7 \ensuremath{\,\mathrm{TeV}}$&$300\ensuremath{\,\mathrm{GeV}}$&17.5&1.42& 4.83& 7.66&\multicolumn{3}{c}{5.85}& -2.21&-0.06\\[1mm] &&&&&&-0.47&3.78&2.54&&\\[1mm] \hline Tevatron &$300\ensuremath{\,\mathrm{GeV}}$&1.41 &0.16& 0.48& 0.62&\multicolumn{3}{c}{0.34}& -0.17&-0.02\\ &&&&&&-0.03&0.22&0.15&&\\[1mm] \bottomrule[0.08em] \end{tabular} \caption{\small{Individual hadronic contributions of the $\log$-powers and Coulomb corrections to the NNLO threshold contributions of the \ensuremath{q \bar{q}}-channel in $\ensuremath{\,\mathrm{fb}}$. $\ln^4(\beta)$ has to be understood as $\tfrac{f^{(00)}_{\ensuremath{q \bar{q}}}}{(16\pi^2)^2}\cdot \tfrac{8192}{9}\ln^4(\beta)$, and analogously for the other terms. The linear $\log$-term is decomposed into contributions coming from non-Coulomb potential terms, from the two-loop anomalous dimension $D^{(2)}_{Q\bar{Q}}$, and, finally, the rest. The Coulomb contributions are decomposed into contributions coming from the interference of NLO threshold logarithms with NLO Coulomb corrections C$_{\rm{NLO}}$ and pure NNLO Coulomb corrections C$_{\rm{NNLO}}$.
$\sum$ denotes the sum over all NNLO threshold contributions. The PDF set used is MSTW 2008 NNLO~\cite{Martin:2009iq}.}} \label{tab:terms} \end{table} \section{Results} \subsection{Hadronic cross section} I start with the discussion of the total hadronic cross section, which is obtained by convoluting the partonic cross section with the PDFs, see Eq.~(\ref{eq:totalcrs}). I keep the $gg$ and the $\ensuremath{q \bar{q}}$ channels at all orders up to NNLO; for the scale-dependent terms, only contributions coming from diagonal parton channels are considered. Of the $gq$ channel, only the NLO contributions are considered, which are the leading contributions of this channel. The scale dependence of this channel is given by Eq.~(\ref{eq:sqg11}). I define the NLO and NNLO $K$ factors as \begin{eqnarray} \label{eq:kfactor} K_{\text{NLO}} = \frac{\sigma^{\text{NLO}}}{\sigma^{\text{LO}}},\quad K_{\text{NNLO}} = \frac{\sigma^{\text{NNLO}}}{\sigma^{\text{NLO}}}\enspace . \end{eqnarray} I use MSTW2008 NNLO PDFs~\cite{Martin:2009iq} at all orders, unless otherwise stated. Therefore, the $K$ factors account only for the pure higher-order corrections of the partonic cross section (convoluted with the PDFs) and not for higher-order corrections of the PDFs and the strong coupling constant $\alpha_s$. In the left column of Fig.~\ref{fig:tothadxsec14}, I show the total hadronic cross section for the LHC ($7\ensuremath{\,\mathrm{TeV}}$ first row, $14\ensuremath{\,\mathrm{TeV}}$ second row) and the Tevatron (third row) as a function of the stop mass. As for top-antitop~\cite{Moch:2008qy,Langenfeld:2009wd} and squark-antisquark production~\cite{Langenfeld:2009eg}, the total cross section shows a strong mass dependence. At the LHC ($14\ensuremath{\,\mathrm{TeV}}$), the cross section decreases within the shown stop mass range from about $1000\ensuremath{\,\mathrm{pb}}$ to $10^{-2}\ensuremath{\,\mathrm{pb}}$.
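As a quick numerical illustration of Eq.~(\ref{eq:kfactor}), one can evaluate the $K$ factors from the LHC ($14\ensuremath{\,\mathrm{TeV}}$) cross sections quoted in the text:

```python
# Cross sections quoted in the text for the LHC at 14 TeV, in pb:
# (LO, NLO, NNLO_approx), keyed by the stop mass in GeV.
xsec = {300: (6.57, 9.96, 10.92), 600: (0.146, 0.216, 0.244)}

# K factors as defined in Eq. (eq:kfactor)
kfac = {m: (nlo / lo, nnlo / nlo) for m, (lo, nlo, nnlo) in xsec.items()}
```

This reproduces $K_{\text{NLO}}\approx 1.5$ for both masses and $K_{\text{NNLO}}\approx 1.1$ and $1.13$, respectively.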
In the right column of Fig.~\ref{fig:tothadxsec14}, I show the corresponding $K$ factors. For example, for a stop mass of $300\ensuremath{\,\mathrm{GeV}}$ produced at the LHC with $14\ensuremath{\,\mathrm{TeV}}$, I find a total cross section of $6.57\ensuremath{\,\mathrm{pb}}$, $9.96\ensuremath{\,\mathrm{pb}}$, and $10.92\ensuremath{\,\mathrm{pb}}$ at LO, NLO, and NNLO$_\text{approx}$, respectively. For a stop mass of $600\ensuremath{\,\mathrm{GeV}}$, I find $0.146\ensuremath{\,\mathrm{pb}}$, $0.216\ensuremath{\,\mathrm{pb}}$, and $0.244\ensuremath{\,\mathrm{pb}}$. The corresponding $K$ factors are $K_{\text{NLO}}\approx 1.5$ for both masses, while $K_{\text{NNLO}}\approx 1.1$ and $1.13$, respectively. For the LHC at $7\ensuremath{\,\mathrm{TeV}}$, one finds similar values for stop masses in the interval $100\ensuremath{\,\mathrm{GeV}} \leq \mstop \leq 600\ensuremath{\,\mathrm{GeV}}$. At the Tevatron, I find $K_{\text{NLO}} = 1.3\ldots 1.4$ and $K_{\text{NNLO}}\approx 1.2$ for stop masses in the range $100\ensuremath{\,\mathrm{GeV}} \leq \mstop \leq 300\ensuremath{\,\mathrm{GeV}}$. \begin{figure}[t!] \centering \vspace*{15mm} \scalebox{0.28}{\includegraphics{./pics/lhc07mstw2008totalxsecA.eps}} \hspace*{5mm} \scalebox{0.28}{\includegraphics{./pics/lhc07mstw2008totalxsecB.eps}} \vspace*{8mm} \scalebox{0.28}{\includegraphics{./pics/lhc14mstw2008totalxsecA.eps}} \hspace*{5mm} \scalebox{0.28}{\includegraphics{./pics/lhc14mstw2008totalxsecB.eps}} \vspace*{8mm} \scalebox{0.28}{\includegraphics{./pics/tevamstw2008totalxsecA.eps}} \hspace*{5mm} \scalebox{0.28}{\includegraphics{./pics/tevamstw2008totalxsecB.eps}} \caption{\small{Total hadronic cross section at LO, NLO, and NNLO$_{\text{approx}}$ at the LHC 7\ensuremath{\,\mathrm{TeV}} (first row) and 14\ensuremath{\,\mathrm{TeV}} (second row) and the Tevatron (1.96\ensuremath{\,\mathrm{TeV}}, third row). The right column shows the corresponding $K$ factors.
The PDF set used is MSTW2008 NNLO~\cite{Martin:2009iq}.}} \label{fig:tothadxsec14} \end{figure} In Tabs.~\ref{tab:xsecvalues}--\ref{tab:xsecvaluesteva} in App.~\ref{subsec:tables}, values for the total hadronic cross section for different masses, PDF sets, scales, and colliders are shown. The values for the PDF sets Cteq6.6~\cite{Nadolsky:2008zw}, MSTW 2008 NNLO~\cite{Martin:2009iq}, and CT10~\cite{Lai:2010vv} show only small differences, whereas the ABKM09 NNLO (5 flavours) PDFs~\cite{Alekhin:2009ni} differ from the other sets in the treatment of the gluon PDF, which leads to sizeable differences in the total cross sections. \subsection{Theoretical and Systematic Uncertainty and PDF Error} \begin{figure} \centering \scalebox{0.4}{\includegraphics{./pics/lhc14mstwcompscaleuncert.eps}} \hspace*{3mm} \scalebox{0.3}{\includegraphics{./pics/scalevarlhc14.eps}} \caption{\small{Left-hand side: theoretical uncertainty of the total hadronic cross section at the LHC (14\ensuremath{\,\mathrm{TeV}}) at LO (upper figure, blue band), NLO (central figure, green band), and NNLO$_{\text{approx}}$ (lower figure, purple line). At NNLO$_{\text{approx}}$, the theoretical uncertainty has shrunk to a small band. Right-hand side: scale dependence of the total hadronic cross section for the example point $\mstop[1] = 300\ensuremath{\,\mathrm{GeV}}$, $m_{\tilde{q}} = 400\,\ensuremath{\,\mathrm{GeV}}$, $\mstop[2]=480\,\ensuremath{\,\mathrm{GeV}}$, $m_{\tilde{g}} = 500\,\ensuremath{\,\mathrm{GeV}}$. The vertical bars indicate the total scale variation in the range $[\mstop/2,2\*\mstop]$.} } \label{fig:tothadtheorunc} \end{figure} In this section, I address the following sources of error: the systematic theoretical error, the scale uncertainty, and the PDF error. In Tab.~\ref{tab:terms}, I listed the individual $\lnbeta[k]$ contributions to the total NNLO contributions. Note that the NNLO matching constants $C^{(2)}_{ij}$ are unknown and set to zero.
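The entries of Tab.~\ref{tab:terms} can be cross-checked for internal consistency: the quoted sums should match the sums of the individual columns up to table rounding. A short sketch with the values transcribed from the table:

```python
# Rows of Tab. tab:terms (values in fb): quoted total and the individual
# log-power, NLO-Coulomb, and NNLO-Coulomb contributions; rounding of the
# table entries leaves residuals of at most about 0.01 fb.
rows = {
    "LHC 14 TeV, 300 GeV": (74.54, [5.33, 18.80, 31.87, 27.71, -9.45, 0.29]),
    "LHC 14 TeV, 600 GeV": (2.93,  [0.24, 0.81, 1.28, 0.97, -0.37, -0.01]),
    "LHC 7 TeV, 300 GeV":  (17.5,  [1.42, 4.83, 7.66, 5.85, -2.21, -0.06]),
    "Tevatron, 300 GeV":   (1.41,  [0.16, 0.48, 0.62, 0.34, -0.17, -0.02]),
}

# the linear-log column is itself split into nC, D^(2)_{QQbar}, and the rest
log_split = {
    "LHC 14 TeV, 300 GeV": (27.71, [-2.23, 17.90, 12.04]),
    "LHC 14 TeV, 600 GeV": (0.97,  [-0.08, 0.63, 0.42]),
    "LHC 7 TeV, 300 GeV":  (5.85,  [-0.47, 3.78, 2.54]),
    "Tevatron, 300 GeV":   (0.34,  [-0.03, 0.22, 0.15]),
}
```

Both decompositions close within the rounding of the table, confirming that the quoted totals are the sums of the listed parts.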
Compared to the total NNLO contributions, the $\lnbeta[1]$ term is quite sizeable; this translates into a roughly $3-5\%$ contribution to the NNLO cross section. To estimate the systematic error coming from the NNLO matching constants $C^{(2)}_{ij}$, I proceed as described in Ref.~\cite{Langenfeld:2009wd}. I find for the ratio $\sigma_{\text{NLL+Coul}}/\sigma_{\text{exact}} = 1.10\ldots 1.25$. This ratio translates into an estimate for the relative systematic error coming from the NNLO matching constants of $1-2.5\%$. The total hadronic LO, NLO, and NNLO cross sections are shown on the left of Fig.~\ref{fig:tothadtheorunc} as a function of the stop mass and for variations of the scale $\mu$ with $\mstop/2 \leq \mu \leq 2\mstop$, where I have identified the factorisation scale with the renormalisation scale. The width of the band indicates the scale uncertainty, which becomes smaller when going from LO to NLO and NNLO. On the right-hand side of Fig.~\ref{fig:tothadtheorunc}, the scale dependence for the example point is shown in more detail. I quote as theoretical uncertainty \begin{eqnarray} \min\sigma(\mu) \leq \sigma(\mstop) \leq \max\sigma(\mu), \end{eqnarray} where the $\min$ and $\max$ are taken over the interval $[\mstop[1]/2, 2\mstop[1]]$. At LO and NLO, the minimal value is attained at $\mu=2\,\mstop$ and the maximal value at $\mu = \mstop/2$. However, this is no longer true at NNLO. For the theoretical error, one finds \begin{eqnarray} \sigma_{\text{LO}} = 6.57^{+2.06}_{-1.43}\ensuremath{\,\mathrm{pb}},\quad \sigma_{\text{NLO}} = 9.96^{+1.17}_{-1.22}\ensuremath{\,\mathrm{pb}},\quad \sigma_{\text{NNLO}} = 10.90^{+0.01}_{-0.18}\ensuremath{\,\mathrm{pb}} \enspace . \end{eqnarray} As one can see, the strong scale dependence at LO becomes weaker at NLO and flattens out at NNLO within the considered range. This flattening is an indication that the approach is reliable.
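The quoted uncertainty corresponds to scanning the scale over $[\mstop/2, 2\mstop]$ and taking the extrema; a minimal sketch, with a toy monotone scale dependence that is illustrative only and not the real NNLO curve:

```python
import math

def scale_band(sigma_of_mu, m, n=201):
    # Scan mu log-uniformly over [m/2, 2m] and return the (min, max) of
    # sigma(mu); this band is what is quoted as the scale uncertainty.
    vals = [sigma_of_mu(m * 2.0 ** (-1.0 + 2.0 * i / (n - 1))) for i in range(n)]
    return min(vals), max(vals)

# toy scale dependence (monotone in mu, purely illustrative)
sigma_toy = lambda mu: 10.0 - math.log(mu / 300.0)
lo, hi = scale_band(sigma_toy, 300.0)
```

At LO and NLO the extrema sit at the interval endpoints, as stated above, whereas at NNLO the extremum can move inside the interval; a scan over the whole range, rather than an endpoint evaluation, therefore gives the correct band.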
Using renormalisation group techniques, one recovers the full dependence on the renormalisation scale $\mu_r$ and factorisation scale $\mu_f$. I have done this for the example point for the NLO and the NNLO cross section, see Fig.~\ref{fig:murmuf}. I define the theoretical uncertainty coming from an independent variation of $\mu_r$ and $\mu_f$ in the standard range $\mu_r$, $\mu_f\in [\mstop/2, 2\mstop]$ as \begin{eqnarray} \label{eq:inderror} \min \sigma(\mu_r,\mu_f) \leq \sigma(\mstop) \leq \max\sigma(\mu_r,\mu_f). \end{eqnarray} The contour lines of the total cross section for the example point with an independent variation of $\mu_r$ and $\mu_f$ are shown in Fig.~\ref{fig:murmuf}. Note that the range of the axes is from $\log_2(\mu_{r,f}/\mstop[1]) = -1$ to $\log_2(\mu_{r,f}/\mstop[1]) =1$. The scale variation with fixed scales $\mu_r=\mu_f$ proceeds along the diagonal from the lower left to the upper right corner of the figure. The gradient of the NLO contour lines lies approximately in the $\mu_r=\mu_f$ direction, meaning that the theoretical error from the definition in Eq.~(\ref{eq:inderror}) is the same as if one sets $\mu_r=\mu_f$. For the NNLO case one observes the opposite situation: the contour lines are nearly parallel to the diagonal $\mu_r=\mu_f$. I obtain a larger uncertainty in that case: \begin{eqnarray} \sigma_{\text{NLO}} = 9.96^{+1.17}_{-1.22}\ensuremath{\,\mathrm{pb}},\quad \sigma_{\text{NNLO}} = 10.90^{+1.05}_{-0.46}\ensuremath{\,\mathrm{pb}}. 
\end{eqnarray} \begin{figure} \centering \scalebox{0.7}{\includegraphics{./pics/murmufNLO.eps} \put(-247,50){\rotatebox{106}{${11\ensuremath{\,\mathrm{pb}}}$}} \put(-220,70){\rotatebox{108}{${10.75\ensuremath{\,\mathrm{pb}}}$}} \put(-190,90){\rotatebox{106}{${10.5\ensuremath{\,\mathrm{pb}}}$}} \put(-163,110){\rotatebox{108}{${10.25\ensuremath{\,\mathrm{pb}}}$}} \put(-132,130){\rotatebox{108}{${10\ensuremath{\,\mathrm{pb}}}$}} \put(-107,150){\rotatebox{108}{${9.75\ensuremath{\,\mathrm{pb}}}$}} \put(-78,170){\rotatebox{108}{${9.5\ensuremath{\,\mathrm{pb}}}$}} \put(-52,190){\rotatebox{108}{${9.25\ensuremath{\,\mathrm{pb}}}$}} \put(-22,210){\rotatebox{106}{${9\ensuremath{\,\mathrm{pb}}}$}} } \hspace*{3mm} \scalebox{0.7}{\includegraphics{./pics/murmuf.eps} \put(-247,263){\rotatebox{32}{${11.8\ensuremath{\,\mathrm{pb}}}$}} \put(-229,243){\rotatebox{32}{${11.6\ensuremath{\,\mathrm{pb}}}$}} \put(-207,223){\rotatebox{35}{${11.4\ensuremath{\,\mathrm{pb}}}$}} \put(-174,203){\rotatebox{39}{${11.2\ensuremath{\,\mathrm{pb}}}$}} \put(-136,183){\rotatebox{48}{${11\ensuremath{\,\mathrm{pb}}}$}} \put(-122,163){\rotatebox{52}{${10.9\ensuremath{\,\mathrm{pb}}}$}} \put(-104,143){\rotatebox{58}{${10.8\ensuremath{\,\mathrm{pb}}}$}} \put(-78,123){\rotatebox{71}{${10.7\ensuremath{\,\mathrm{pb}}}$}} \put(-51,108){\rotatebox{79}{${10.6\ensuremath{\,\mathrm{pb}}}$}} \put(-20,103){\rotatebox{80}{${10.5\ensuremath{\,\mathrm{pb}}}$}} } \caption{\small{Contour lines of the total hadronic NLO (left) and NNLO (right) cross section from the independent variation of the renormalisation and factorisation scale $\mu_r$ and $\mu_f$ for LHC, $14\ensuremath{\,\mathrm{TeV}}$, with PDF set MSTW2008 NNLO~\cite{Martin:2009iq} for the example point with $\mstop=300\ensuremath{\,\mathrm{GeV}}$. 
The dot in the middle of the figure indicates the cross section for $\mu_r=\mu_f=\mstop$, and the range corresponds to $\mu_f,\mu_r \in [\mstop/2,2\mstop]$.} } \label{fig:murmuf} \end{figure} Another source of error to discuss is the PDF error. I calculated the PDF uncertainty according to Ref.~\cite{Nadolsky:2008zw} for the two PDF sets CT10 and MSTW2008 NNLO (90\% C.L.). In both cases, the uncertainty increases with higher stop masses due to large uncertainties of the gluon PDF in high $x$-ranges. For CT10, I find as relative errors $\approx 3\%$ for $\mstop=100\ensuremath{\,\mathrm{GeV}}$ and $\approx 18\%$ for $\mstop=1000\ensuremath{\,\mathrm{GeV}}$, and for MSTW2008 NNLO, the relative errors are $\approx 3\%$ for $\mstop=100\ensuremath{\,\mathrm{GeV}}$ and $\approx 10\%$ for $\mstop=1000\ensuremath{\,\mathrm{GeV}}$. The relative error of the MSTW2008 NNLO PDF set is smaller for large stop masses compared to the CT10 PDFs. \begin{figure} \centering \vspace*{10mm} \scalebox{0.32}{\includegraphics{./pics/lhc14pdfuncertaintyct10.eps}} \hspace*{3mm} \scalebox{0.32}{\includegraphics{./pics/lhc14pdfuncertaintymstw.eps}} \caption{\small{PDF uncertainty of the total NNLO cross section for the two PDF sets CT10~\cite{Lai:2010vv} (left figure) and MSTW2008 NNLO~\cite{Martin:2009iq} (right figure) at the LHC (14\ensuremath{\,\mathrm{TeV}}).}} \label{fig:tothadpdfunc} \end{figure} Combining theoretical uncertainty and PDF error one obtains \begin{eqnarray} \sigma_{\text{NNLO}} &=& 10.90\ensuremath{\,\mathrm{pb}}\enspace {}^{+0.01}_{-0.18}\ensuremath{\,\mathrm{pb}}\enspace (\text{scale})\enspace{}^{+0.55}_{-0.55}\ensuremath{\,\mathrm{pb}}\enspace(\text{MSTW2008 NNLO})\\[1mm] \sigma_{\text{NNLO}} &=& 10.86\ensuremath{\,\mathrm{pb}}\enspace {}^{+0.01}_{-0.19}\ensuremath{\,\mathrm{pb}}\enspace (\text{scale})\enspace{}^{+0.65}_{-0.64}\ensuremath{\,\mathrm{pb}}\enspace(\text{CT10}). 
\end{eqnarray} \subsection{Mass exclusion limits} The approximated NNLO contributions enlarge the stop-antistop production cross section by $\approx10 - 20\%$, depending on the hadron collider, its centre-of-mass energy, and the stop mass. This can be converted into improved exclusion limits for the mass of the stop squark. A stop with a mass of $\mstop[1]=120\ensuremath{\,\mathrm{GeV}}$ has an NLO production cross section of $5.05\ensuremath{\,\mathrm{pb}}$; at NNLO, the same cross section corresponds to a stop with a mass of $123.5\ensuremath{\,\mathrm{GeV}}$. At higher stop masses, the shift is even larger: $0.164\ensuremath{\,\mathrm{pb}}$ corresponds at NLO to a stop mass of $210\ensuremath{\,\mathrm{GeV}}$, but at NNLO to a mass of $215\ensuremath{\,\mathrm{GeV}}$. At the LHC ($14\ensuremath{\,\mathrm{TeV}}$), the situation is similar. An NLO cross section of $750\ensuremath{\,\mathrm{pb}}$ corresponds to $\mstop[1] = 120\ensuremath{\,\mathrm{GeV}}$, but to $\mstop[1] = 121.5\ensuremath{\,\mathrm{GeV}}$ at NNLO. For heavier stops, a cross section of $10\ensuremath{\,\mathrm{pb}}$ is related to $\mstop[1] = 300\ensuremath{\,\mathrm{GeV}}$ at NLO, but to $\mstop[1] = 305.5\ensuremath{\,\mathrm{GeV}}$ at NNLO. \section{Conclusion and Summary} In this paper, I computed the NNLO threshold contributions, including Coulomb corrections, for stop-antistop production at hadron colliders. \begin{itemize} \item I presented analytical formulas for the threshold expansion of the NNLO scaling functions, using resummation techniques for the scale-independent scaling function and renormalisation group techniques for the scale-dependent scaling functions. \item After convolution with suitable PDF sets, the NNLO corrections are found to be about $20\%$ for the Tevatron and $10-20\%$ for the LHC compared to the hadronic NLO cross section.
The PDF sets Cteq6.6~\cite{Nadolsky:2008zw}, MSTW 2008 NNLO~\cite{Martin:2009iq}, and CT10~\cite{Lai:2010vv} show only small differences in the total cross section, whereas the values obtained with the PDF set ABKM09 NNLO (5 flavours)~\cite{Alekhin:2009ni} differ by $10-35\%$ due to differences in the gluon PDF. \item I calculated the exact scale dependence and found a remarkable stabilisation of the cross section under scale variation. For my example point, the theoretical error is reduced from $12\%$ at NLO to better than $2\%$ at NNLO. \item I discussed three types of errors: systematic theoretical errors, uncertainties due to scale variation, and PDF errors. The systematic error was estimated to be about $3-6\%$, the scale uncertainty to be about $2\%$ or better, and the PDF error to be $3-18\%$, depending on the stop mass and the PDF set used. \item Finally, I demonstrated how NNLO cross sections can improve exclusion limits. The improvement of the lower exclusion limit was about a few $\ensuremath{\,\mathrm{GeV}}$. \end{itemize} \section*{Acknowledgments} I would like to thank P. Uwer, A. Kulesza, W. Porod, and S. Uccirati for useful discussions and S. Moch and M. Kr\"amer for reading the manuscript and giving helpful comments. This work is supported in part by the Helmholtz Alliance {\it ``Physics at the Terascale''} (HA-101) and by the research training group GRK 1147 of the Deutsche Forschungs\-gemein\-schaft.
\section{Introduction} The AdS/CFT correspondence \cite{MGW} has been probed in depth by comparing the anomalous dimensions of certain single-trace operators in the planar limit of the $\mathcal{N}=4$ super Yang-Mills (SYM) theory and the energies of certain string states in the type IIB string theory on $AdS_5 \times S^5$ \cite{BMN,GKP,FT}. In particular, integrability has made its appearance in both theories and has shed light on the AdS/CFT correspondence. The spectrum of anomalous dimensions for a local composite operator in the $\mathcal{N}=4$ SYM theory has been computed by the Bethe ansatz \cite{MZ} for diagonalization of the dilatation operator \cite{BKS,BZP}, which is represented by a Hamiltonian of an integrable spin chain with length $L$. Further, the asymptotic all-loop gauge Bethe ansatz (GBA) equations for the integrable long-range spin chains have been proposed in the $su(2)$, $su(1|1)$, and $sl(2)$ sectors \cite{BDS,MS,BS}. The integrability of the classical $AdS_5 \times S^5$ string sigma model has been investigated by verifying the equivalence between the classical string Bethe equation for the string sigma model and the Bethe equation for the spin chain \cite{KMM,VKZ}. Combining the classical string Bethe ansatz and the asymptotic all-loop GBA, a set of discrete Bethe ansatz equations for the quantum string sigma model has been constructed \cite{AFS,NB,BS}, where the integrable structure is assumed to be maintained at the quantum level and the quantum string Bethe ansatz (SBA) equations are obtained by modifying the GBA equations with the dressing factor. To fix the dressing factor, the SBA equation has been studied by comparing its prediction with the quantum world-sheet correction to the spinning string solution \cite{SZZ}.
An all-order perturbative expression for the dressing factor at strong coupling has been proposed \cite{BHL} such that it satisfies the crossing relation \cite{RJ} and matches the known physical data at strong coupling \cite{HL}. The spectrum of the highest state has been studied by analyzing the GBA equation in the thermodynamic limit $L \rightarrow \infty$ for the $su(2)$ sector \cite{KZ} and the GBA and SBA equations for the $su(1|1)$ sector \cite{AT}. The flow of the spectrum from weak to strong coupling has been numerically derived by solving the GBA and SBA equations for the $su(2)$ and $su(1|1)$ sectors at large but finite $L$ \cite{BD}. The strong coupling behavior of the $su(2)$ spectrum has been investigated by using the Hubbard model, which is regarded as the microscopic model behind the integrable structure of the $\mathcal{N}=4$ SYM dilatation operator \cite{RSS}. The highest states for the $su(2)$ and $su(1|1)$ sectors of the $AdS_5 \times S^5$ superstring have been studied analytically in the framework of the light-cone Bethe ansatz equations \cite{BAD}. For the $sl(2)$ sector the large-spin anomalous dimension of the twist-two operator has been computed by solving the GBA equation and the SBA equation in the thermodynamic limit by means of the Fourier transformation \cite{ES}. In the former integral equation, which we call the ES equation, the anomalous dimension leads to the universal all-loop scaling function $f(g)$ with the gauge coupling constant $g$ satisfying the Kotikov-Lipatov transcendentality \cite{AKL}, whereas in the latter integral equation $f(g)$ is modified at the three-loop order as compared to the ES equation and the transcendentality is not preserved. 
In the GBA equation with the weak-coupling dressing factor \cite{BES}, which is an analytic continuation of the crossing-symmetric strong-coupling dressing factor \cite{BHL} and which is called the BES equation, the universal scaling function has been shown to be modified at the four-loop order so as to obey the Kotikov-Lipatov transcendentality and be consistent with the planar multi-gluon amplitude of the $\mathcal{N}=4$ SYM theory at the four-loop order \cite{BCD}. The strong coupling behavior of $f(g)$ for the BES equation has been studied numerically by analyzing the equivalent set of linear algebraic equations to reproduce the asymptotic form predicted by the string theory \cite{BBK}. By truncating the strong coupling expansion of the matrices entering the linear algebraic equations, the strong coupling limit of $f(g)$ has been extracted analytically \cite{AAB}. In ref. \cite{KL} the ES and BES equations have been analyzed by using the Laplace transformation, where the analytic properties of the solutions at strong coupling are studied and the strong coupling limit of $f(g)$ is estimated analytically by deriving a singular solution for the integral equation. Further, for the $su(2)$ and $su(1|1)$ sectors the GBA equations with the weak-coupling dressing factor have been analyzed and the anomalous dimensions of the highest states have been presented in the weak coupling expansion \cite{RSZ}, where the anomalous dimensions of a state built from a field strength operator and of a certain one-loop $so(6)$ singlet state have also been computed. The physical origin of the full weak-coupling dressing factor has been discussed \cite{SS}. 
Without resorting to the Fourier transformation, the strong coupling solutions of the SBA equations in the rapidity plane have been analytically derived for the highest states in the $su(2)$ and $su(1|1)$ sectors, and the strong coupling limit of the universal scaling function $f(g)$ in the $sl(2)$ sector has been estimated from the BES equation by deriving the leading density of Bethe roots in the rapidity plane \cite{KSV}. On the other hand, the Fourier transform of the SBA equation for the $sl(2)$ sector has been analyzed to study the strong coupling behavior of $f(g)$ \cite{BDF}. We will analyze the SBA equations for the highest states in the thermodynamic limit $L \rightarrow \infty$ for the $su(1|1)$ and $su(2)$ sectors. By solving these equations through the Fourier transformation we will derive the anomalous dimensions of the highest states in the weak coupling expansion. In particular, the weak coupling spectrum for the $su(1|1)$ sector derived by computing the Fourier-transformed density of Bethe roots will be compared with the result of refs. \cite{AT,BD}, which was produced by analyzing the SBA equation at large but finite $L$ and computing the Bethe momenta. Applying the Laplace transformation prescription of ref. \cite{KL} to the GBA equation for the $su(1|1)$ sector, we will construct a singular solution for the integral equation at strong coupling to compute the strong coupling limit of the anomalous dimension analytically. \section{Weak coupling spectrum of the highest state in the $su(1|1)$ sector} We consider the highest state in the $su(1|1)$ sector, which corresponds to the purely-fermionic operator $\mathrm{tr}(\psi^L)$ \cite{AT}, where $\psi$ is the highest-weight component of the Weyl spinor from the vector multiplet. 
The asymptotic all-loop GBA equation \cite{BS} for the highest state is given by \begin{equation} \left(\frac{x_k^+}{x_k^-}\right)^L = \prod_{j\neq k}^L \frac{1- g^2/2x_k^+ x_j^-}{1- g^2/2x_k^- x_j^+}, \hspace{1cm} g^2 = \frac{\lambda}{8\pi^2}, \label{gba}\end{equation} where $u_k \; (k=1, \cdots, L)$ are the rapidities of the elementary excitations and \begin{equation} x_k^{\pm} = x^{\pm}(u_k) = \frac{u_{\pm}}{2} \left( 1 + \sqrt{ 1- \frac{2g^2}{u_{\pm}^2} }\right), \hspace{1cm} u_{\pm} = u_k \pm \frac{i}{2}. \end{equation} The all-loop ansatz (\ref{gba}) is a generalization of a three-loop Bethe ansatz \cite{MS} and is obtained by deforming the spectral parameter $u_k$ into $x_k^{\pm}$ in such a way that $u_k \pm i/2 = x_k^{\pm} + g^2/2x_k^{\pm}$, where the deformation parameter is the Yang-Mills coupling constant $g$. The asymptotic all-loop energy $E(g)$ of the highest state is \begin{equation} E(g) = g^2 \sum_{k=1}^{L} \left( \frac{i}{x^+(u_k)} - \frac{i}{x^-(u_k)} \right), \label{egl}\end{equation} which gives its dimension $\Delta = 3L/2 + E(g)$. Taking the thermodynamic limit in the logarithm of (\ref{gba}) and differentiating with respect to the rapidity $u$, we have an integral equation for the density of Bethe roots $\rho(u)$ \cite{AT} \begin{equation} \frac{1}{i}\left( \frac{1}{\sqrt{u_+^2 - 2g^2}} - \frac{1}{\sqrt{u_-^2 - 2g^2}} \right) = -2\pi \rho(u) - \frac{i}{2} \int_{-\infty}^{\infty}dv \rho(v)\frac{\partial}{\partial u} \log \left( \frac{1- g^2/2x^+(u)x^-(v)}{1- g^2/2x^-(u)x^+(v)} \right)^2 \label{ind}\end{equation} where $u_{\pm}= u \pm i/2$ and in the second term the density is integrated against the kernel \begin{equation} K_m(u,v) = i\frac{\partial}{\partial u} \log \left( \frac{1- g^2/2x^+(u)x^-(v)}{1- g^2/2x^-(u)x^+(v)} \right)^2. 
\label{kme}\end{equation} In this continuum limit the energy shift $E(g)$ (\ref{egl}) is also expressed as an integral representation \begin{equation} \frac{E(g)}{L} = i g^2 \int_{-\infty}^{\infty} du \rho(u) \left( \frac{1}{x^+(u)} - \frac{1}{x^-(u)} \right). \label{ene}\end{equation} Following the Fourier transformation procedure in ref. \cite{ES}, we solve the integral equation to obtain $E(g)$. The Fourier transform of the density $\rho(u)$ is defined by \begin{equation} \hat{\rho}(t) = e^{-|t|/2}\int_{-\infty}^{\infty} du e^{-itu}\rho(u). \label{den}\end{equation} We are interested in the symmetric density $\rho(-u) = \rho(u)$ so that $\hat{\rho}(t)$ is also symmetric $\hat{\rho}(-t) = \hat{\rho}(t)$. Therefore the kernel $K_m(u,v)$ in (\ref{ind}) can be symmetrized under the exchange $v \leftrightarrow -v$ \begin{equation} i\partial_u \log \left( \frac{1- g^2/2x^+(u)x^-(v)}{1- g^2/2x^-(u)x^+(v)} \right)^2 \rightarrow \frac{i}{2}\partial_u \log \left( \frac{(1- g^2/2x^+(u)x^-(v)) (1+ g^2/2x^+(u)x^+(v))} {(1- g^2/2x^-(u)x^+(v))(1+ g^2/2x^-(u)x^-(v))} \right)^2, \end{equation} which is further described by \cite{ES} \begin{equation} g^2\int_{-\infty}^{\infty}dt e^{iut} \int_{-\infty}^{\infty}dt' e^{ivt'} |t|e^{-(|t| + |t'|)/2}\hat{K}_m(\sqrt{2}g |t|, \sqrt{2}g|t'|), \label{mke}\end{equation} whose $\hat{K}_m$ is expressed in terms of the Bessel functions as \begin{equation} \hat{K}_m(x, x') = \frac{J_1(x)J_0(x') - J_0(x)J_1(x')}{x-x'}. \end{equation} We use the expression (\ref{mke}) to take the Fourier transformation by applying $e^{-|t|/2}\int_{-\infty}^{\infty}du\, e^{-itu}$ to equation (\ref{ind}) and obtain \begin{equation} \hat{\rho}(t) = e^{-|t|} \left( J_0(\sqrt{2}g t) - g^2|t|\int_0^{\infty}dt' \hat{K}_m(\sqrt{2}g|t|,\sqrt{2}g t') \hat{\rho}(t') \right). 
\label{inf}\end{equation} By solving this integral equation iteratively we derive the transformed density $\hat{\rho}(t)$ expanded in even powers of $g$ as \begin{eqnarray} \hat{\rho}(t) = e^{-|t|} \biggl( 1 - \frac{g^2}{2}(t^2 + |t|) + \frac{g^4}{16} ( t^4 + 2(|t|^3 -t^2 +8|t|) ) \nonumber \\ - \frac{g^6}{288} ( t^6 + 3(|t|^5 -2t^4 + 26|t|^3 - 60t^2 + 348|t|) ) \nonumber \\ + \frac{g^8}{9216}( t^8 + 4(|t|^7 - 3t^6 + 54|t|^5 - 246t^4 + 2520|t|^3 - 7200t^2 + 37296|t| )) + \cdots \biggr). \label{dex}\end{eqnarray} In deriving this solution we have used the following expansion \begin{eqnarray} \hat{K}_m(\sqrt{2}g|t|, \sqrt{2}g t') = \frac{1}{2} \biggl( 1 - \frac{g^2}{4}( t^2 - |t|t' + t'^2 ) + \frac{g^4}{48}(t^4 - 2|t|^3t' + 4t^2t'^2 - 2|t|t'^3 + t'^4 ) \nonumber \\ - \frac{g^6}{1152}( t^6 - 3|t|^5t' + 9t^4t'^2 - 9|t|^3t'^3 + 9t^2t'^4 - 3|t|t'^5 + t'^6 ) + \cdots \biggr). \label{kmt}\end{eqnarray} The energy shift $E(g)$ (\ref{ene}) can be expressed in terms of the transformed density through (\ref{den}) as \begin{equation} \frac{E(g)}{L} = 4g^2\int_0^{\infty} dt \hat{\rho}(t)\frac{J_1(\sqrt{2}g t)} {\sqrt{2}g t}. \label{eni}\end{equation} The substitution of the weak coupling solution (\ref{dex}) into (\ref{eni}) yields the anomalous dimension of the highest state \begin{equation} \frac{E(g)}{L} = 2g^2 - 4g^4 + \frac{29}{2}g^6 - \frac{259}{4}g^8 + \frac{1307}{4}g^{10} + \cdots, \label{egg}\end{equation} which reproduces the result of \cite{AT,BD}. In ref. 
\cite{AT} the most naive approximation to an exact expression for the dimension was guessed in a square-root form as \begin{eqnarray} \frac{\Delta_{fit}}{L} &=& 1 + \frac{1}{2}\sqrt{1 + \frac{\lambda}{\pi^2} } \nonumber \\ &=& \frac{3}{2} + \frac{\lambda}{4\pi^2} - \frac{\lambda^2}{16\pi^4} +\frac{32\lambda^3}{1024\pi^6} - \frac{320\lambda^4}{16384\pi^8} + \cdots, \label{exa}\end{eqnarray} which is compared with the following weak coupling expansion in $\lambda$ associated with the anomalous dimension (\ref{egg}) \begin{equation} \frac{\Delta}{L} = \frac{3}{2} + \frac{\lambda}{4\pi^2} - \frac{\lambda^2}{16\pi^4} + \frac{29\lambda^3}{1024\pi^6} - \frac{259\lambda^4}{16384\pi^8} + \cdots. \end{equation} Now we analyze the SBA equation for the highest state \begin{equation} \left(\frac{x_k^+}{x_k^-}\right)^L = \prod_{j\neq k}^L \frac{1- g^2/2x_k^+ x_j^-}{1- g^2/2x_k^- x_j^+}\sigma^2(x_k,x_j), \label{sbe}\end{equation} whose string dressing factor $\sigma(x_k,x_j)$ is defined by \begin{equation} \sigma(x_k,x_j) = \left(\frac{1- g^2/2x_k^+ x_j^-}{1- g^2/2x_k^- x_j^+} \right)^{-1}\left( \frac{(1- g^2/2x_k^+ x_j^-)(1- g^2/2x_k^- x_j^+)} {(1- g^2/2x_k^+ x_j^+)(1- g^2/2x_k^- x_j^-)}\right)^{i(u_k -u_j)}. \label{sdr}\end{equation} In the thermodynamic limit the SBA equation (\ref{sbe}) becomes an integral equation for the density $\rho(u)$ \begin{eqnarray} \frac{1}{i}\left( \frac{1}{\sqrt{u_+^2 - 2g^2}} - \frac{1}{\sqrt{u_-^2 - 2g^2}} \right) &=& -2\pi \rho(u) - \frac{1}{2}\int_{-\infty}^{\infty}dv K_m(u,v)\rho(v) \nonumber \\ &-& \int_{-\infty}^{\infty}dv ( K_s(u,v) - K_m(u,v))\rho(v), \label{inu}\end{eqnarray} where the main kernel $K_m(u,v)$ is given by (\ref{kme}) and \begin{equation} K_s(u,v) = -\partial_u(u-v)\log \left( \frac{(1- g^2/2x^+(u)x^-(v)) (1- g^2/2x^-(u)x^+(v))} {(1- g^2/2x^+(u)x^+(v))(1- g^2/2x^-(u)x^-(v))} \right)^2. \label{ksu}\end{equation} The last term of the r.h.s. 
of (\ref{inu}) specified by $K_s - K_m$ appears as a contribution from the dressing factor, which is compared with (\ref{ind}). In the same way as (\ref{inf}) the Fourier transformation of (\ref{inu}) leads to \begin{eqnarray} -e^{-|t|} 2\pi J_0(\sqrt{2}g t) &=& -2\pi\hat{\rho}(t) + \pi g^2|t| e^{-|t|}\int_{-\infty}^{\infty}dt' \hat{K}_m(\sqrt{2}g|t|,\sqrt{2}g|t'|) \hat{\rho}(t') \nonumber \\ &-& e^{-|t|/2}\int_{-\infty}^{\infty}du e^{-itu}\int_{-\infty}^{\infty}dv K_s(u,v)\rho(v). \label{iks}\end{eqnarray} Using the $v \leftrightarrow -v$ symmetrized form of $K_s(u,v)$ we rewrite the third term on the r.h.s. of (\ref{iks}) as \cite{ES} \begin{equation} -2\pi g^2 |t|e^{-|t|} \int_{-\infty}^{\infty}dt' \left( \hat{K}_m(\sqrt{2}g|t|,\sqrt{2}g|t'|) + \sqrt{2}g\tilde{K}(\sqrt{2}g|t|,\sqrt{2}g|t'|) \right) \hat{\rho}(t'), \label{ksm}\end{equation} where \begin{equation} \tilde{K}(x,x') = \frac{x(J_2(x)J_0(x') - J_0(x)J_2(x'))} {x^2 - x'^2}. \end{equation} Thus we obtain an integral equation for the transformed density \begin{eqnarray} \hat{\rho}(t) &=& e^{-|t|} \biggl( J_0(\sqrt{2}g t) - g^2|t|\int_0^{\infty}dt' \hat{K}_m(\sqrt{2}g|t|,\sqrt{2}g t') \hat{\rho}(t') \nonumber \\ &-& 2g^2|t|\int_0^{\infty}dt'\sqrt{2}g\tilde{K}(\sqrt{2}g|t|,\sqrt{2}g t') \hat{\rho}(t')\biggr). \label{til}\end{eqnarray} Comparing (\ref{til}) with (\ref{inu}) and (\ref{inf}) we see that the last term in (\ref{til}) specified by $\sqrt{2}g\tilde{K}$ is attributed to the dressing factor so that $\sqrt{2}g\tilde{K}$ is called a dressing kernel. In order to solve (\ref{til}) by taking the weak coupling expansion we first split the transformed density $\hat{\rho}(t)$ into a main part $\hat{\rho}_0(t)$ and a correction part $\delta\hat{\rho}(t)$ as $\hat{\rho}(t) = \hat{\rho}_0(t) + \delta\hat{\rho}(t)$, where $\hat{\rho}_0(t)$ satisfies the GBA equation (\ref{inf}). 
Therefore we have the following integral equation for $\delta\hat{\rho}(t)$ \begin{eqnarray} \delta\hat{\rho}(t) &=& -2g^2|t| e^{-|t|} \biggl( \int_0^{\infty} dt'\sqrt{2}g \tilde{K}(\sqrt{2}g|t|,\sqrt{2}g t')\hat{\rho}_0(t') \nonumber \\ &+& \frac{1}{2}\int_0^{\infty} dt' \hat{K}_m(\sqrt{2}g|t|,\sqrt{2}g t')\delta\hat{\rho}(t') + \int_0^{\infty}dt'\sqrt{2}g \tilde{K}(\sqrt{2}g|t|,\sqrt{2}g t') \delta\hat{\rho}(t') \biggr), \end{eqnarray} where the first term on the r.h.s. is regarded as an inhomogeneous one with $\hat{\rho}_0(t')$ already known as (\ref{dex}). Using the expansion (\ref{kmt}) for $\hat{K}_m(x,x')$ and the following weak coupling expansion for $\tilde{K}(x,x')$ with $x = \sqrt{2}g|t|, x' = \sqrt{2}g t'$ \begin{equation} \tilde{K}(x,x') = \frac{\sqrt{2}g|t|}{8} \left( 1 - \frac{g^2}{6}(t^2 + t'^2) + \frac{g^4}{96}(t^4 + 3t^2t'^2 + t'^4) + \cdots \right) \label{tkx}\end{equation} we determine $\delta\hat{\rho}(t)$ iteratively \begin{eqnarray} \delta\hat{\rho}(t) &=& e^{-|t|} \biggl( - \frac{g^4}{2}t^2 + \frac{g^6}{12}( t^4 + 11t^2 + 6|t|) \nonumber \\ &-& \frac{g^8}{192}(t^6 + 30t^4 + 24|t|^3 + 384t^2 + 704|t|) + \cdots \biggr). \end{eqnarray} Combining them we obtain the anomalous dimension $E(g)/L = E_0(g)/L + \delta E(g)/L$ where the main part $E_0(g)/L$ is given by (\ref{egg}) and the correction part $\delta E(g)/L$ is evaluated as \begin{equation} \frac{\delta E(g)}{L} = -2g^6 + \frac{44}{3}g^8 - \frac{268}{3}g^{10} + \cdots, \label{deg}\end{equation} whose expansion starts from the three-loop order. The summation of (\ref{egg}) and (\ref{deg}) yields the dimension of the highest state \begin{equation} \frac{\Delta}{L} = \frac{3}{2} + 2g^2 - 4g^4 + \frac{25}{2}g^6 - \frac{601}{12}g^8 + \frac{2849}{12}g^{10} + \cdots, \end{equation} which recovers the result of \cite{AT,BD}. 
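The truncated expansion (\ref{tkx}) can be checked numerically against the closed form $\tilde{K}(x,x') = x(J_2(x)J_0(x') - J_0(x)J_2(x'))/(x^2 - x'^2)$. The following Python sketch (the sample values of $g$, $t$, $t'$ are arbitrary choices, not taken from the text) evaluates both sides with a small-argument power-series Bessel routine:

```python
import math

def bessel_j(n, x, terms=12):
    # power-series Bessel J_n, accurate for the small arguments used here
    return sum((-1)**k * (x/2.0)**(n + 2*k) / (math.factorial(k) * math.factorial(n + k))
               for k in range(terms))

def k_tilde(x, xp):
    # closed form of the dressing kernel \tilde{K}(x, x')
    return x * (bessel_j(2, x)*bessel_j(0, xp) - bessel_j(0, x)*bessel_j(2, xp)) / (x*x - xp*xp)

# arbitrary sample point at small coupling
g, t, tp = 0.05, 1.3, 0.7
x, xp = math.sqrt(2.0)*g*t, math.sqrt(2.0)*g*tp
exact = k_tilde(x, xp)
# truncation of (tkx) through order g^4
truncated = (math.sqrt(2.0)*g*t/8.0) * (1.0 - g*g*(t*t + tp*tp)/6.0
             + g**4*(t**4 + 3.0*t*t*tp*tp + tp**4)/96.0)
```

The residual between the two values is of order $g^6$, far below the retained terms at this coupling.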
Thus we have solved the SBA equation in the thermodynamic limit $L \rightarrow \infty$ to derive the Fourier-transformed density iteratively, whereas in \cite{AT,BD} the Bethe momenta of excitations at finite fixed $L$ were iteratively derived. In \cite{BES} the universal scaling function $f(g)$ in the $sl(2)$ sector was obtained from the all-loop GBA equation with a weak-coupling dressing factor and $f(g)$ was shown to satisfy the Kotikov-Lipatov transcendentality. Since the dressing factor is universal for the three rank-one sectors, we use it for the $su(1|1)$ sector. In the $sl(2)$ sector, if we compare the integral SBA equation for the transformed density in \cite{ES} with the integral GBA equation accompanied by the weak-coupling dressing factor in \cite{BES}, we note that the dressing kernel $\sqrt{2}g\tilde{K}(x,x')$ for the former case corresponds to the dressing kernel $2\hat{K}_c(x,x')$ for the latter case, where $\hat{K}_c(x,x')$ is given by \begin{eqnarray} \hat{K}_c(x,x') &=& 2g^2 \int_0^{\infty}dt'' K_1(x,\sqrt{2}g t'') \frac{t''}{e^{t''}-1}K_0(\sqrt{2}g t'',x'), \nonumber \\ K_0(x,x')&=& \frac{xJ_1(x)J_0(x') - x'J_0(x)J_1(x')}{x^2 - x'^2}, \nonumber \\ K_1(x,x')&=& \frac{x'J_1(x)J_0(x') - xJ_0(x)J_1(x')} {x^2 - x'^2}. \end{eqnarray} Therefore by replacing $\sqrt{2}g\tilde{K}(x,x')$ in (\ref{til}) with $2\hat{K}_c(x,x')$ we obtain an integral equation for the transformed density in the $su(1|1)$ sector \begin{eqnarray} \hat{\rho}(t) &=& e^{-|t|} \biggl( J_0(\sqrt{2}g t) - g^2|t|\int_0^{\infty}dt' \hat{K}_m(\sqrt{2}g|t|,\sqrt{2}g t')\hat{\rho}(t') \nonumber \\ &-& 2g^2|t|\int_0^{\infty}dt'2\hat{K}_c(\sqrt{2}g|t|,\sqrt{2}g t')\hat{\rho}(t') \biggr). \label{rkc}\end{eqnarray} Recently this integral equation has been presented and iteratively solved in ref. \cite{RSZ}, where the energy modification owing to the weak-coupling dressing factor starts from the four-loop order. 
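As an independent numerical cross-check of this section, the Fourier-transformed GBA equation (\ref{inf}) can be solved by straightforward fixed-point iteration on a grid and the resulting energy (\ref{eni}) compared with the weak coupling expansion (\ref{egg}). The sketch below is not part of the paper's method; the cutoff, grid size, and iteration count are ad hoc choices that suffice at small $g$, and the Bessel functions are evaluated through their standard integral representation.

```python
import math

SQRT2 = math.sqrt(2.0)

def simpson(vals, h):
    # composite Simpson rule; len(vals) must be odd (even number of intervals)
    return (vals[0] + vals[-1] + 4.0*sum(vals[1:-1:2]) + 2.0*sum(vals[2:-1:2])) * h / 3.0

def bessel_j(n, x, m=200):
    # J_n(x) = (1/pi) int_0^pi cos(n th - x sin th) d th
    h = math.pi / m
    return simpson([math.cos(n*k*h - x*math.sin(k*h)) for k in range(m + 1)], h) / math.pi

def su11_gba_energy(g, tmax=30.0, n=300, iters=8):
    """Iterate rho(t) = e^{-t}( J0(sqrt2 g t) - g^2 t int_0^inf K_m rho ) and return
    E(g)/L = 4 g^2 int_0^inf dt rho(t) J1(sqrt2 g t)/(sqrt2 g t)."""
    h = tmax / n
    ts = [k*h for k in range(n + 1)]
    xs = [SQRT2*g*t for t in ts]
    j0 = [bessel_j(0, x) for x in xs]
    j1 = [bessel_j(1, x) for x in xs]
    K = [[0.0]*(n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for k in range(n + 1):
            if i == k:
                # diagonal limit of (J1(x)J0(x') - J0(x)J1(x'))/(x - x')
                K[i][k] = 0.5 if xs[i] < 1e-12 else (j0[i] - j1[i]/xs[i])*j0[i] + j1[i]**2
            else:
                K[i][k] = (j1[i]*j0[k] - j0[i]*j1[k]) / (xs[i] - xs[k])
    rho = [math.exp(-t)*j0[i] for i, t in enumerate(ts)]
    for _ in range(iters):
        rho = [math.exp(-ts[i]) * (j0[i] - g*g*ts[i]*simpson([K[i][k]*rho[k] for k in range(n + 1)], h))
               for i in range(n + 1)]
    w = [0.5 if x < 1e-12 else j1[i]/x for i, x in enumerate(xs)]
    return 4.0*g*g*simpson([rho[i]*w[i] for i in range(n + 1)], h)

g = 0.1
e_numeric = su11_gba_energy(g)
e_series = 2*g**2 - 4*g**4 + 29/2*g**6 - 259/4*g**8 + 1307/4*g**10
```

At $g = 0.1$ the iterated solution reproduces the series (\ref{egg}) to well within the discretization error.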
\section{Weak coupling spectrum of the highest state in the $su(2)$ sector} We turn to the highest state in the $su(2)$ sector, which is described by the antiferromagnetic operator $\mathrm{tr}(Z^{L/2}\Phi^{L/2})+ \cdots$, where $Z$ and $\Phi$ are charged scalar fields in the $\mathcal{N}=4$ supermultiplet. The asymptotic all-loop GBA equation for the highest state is \begin{equation} \left(\frac{x_k^+}{x_k^-}\right)^L = \prod_{j\neq k}^{L/2} \frac{x_k^+ - x_j^-}{x_k^- - x_j^+} \frac{1- g^2/2x_k^+ x_j^-}{1- g^2/2x_k^- x_j^+}, \end{equation} whose thermodynamic limit leads to \cite{KZ} \begin{equation} \frac{1}{i}\left( \frac{1}{\sqrt{u_+^2 - 2g^2}} - \frac{1}{\sqrt{u_-^2 - 2g^2}} \right) = -2\pi \rho(u) - 2\int_{-\infty}^{\infty}dv \frac{\rho(v)}{(u-v)^2 + 1}. \label{urh}\end{equation} The Fourier transformation solves the integral equation (\ref{urh}) to give an exact expression of the transformed density \begin{equation} \hat{\rho}(t) = \frac{J_0(\sqrt{2}g t)}{e^{|t|} + 1}, \label{exd}\end{equation} which yields the dimension of the highest state $\Delta = L + E(g)$ in a closed form \begin{equation} \frac{E(g)}{L} = 4g^2 \int_0^{\infty}\frac{dt}{\sqrt{2}g t} \frac{J_0(\sqrt{2}g t)J_1(\sqrt{2}g t)}{e^t + 1}. \label{exe}\end{equation} We use the following representation of the Riemann zeta function \begin{equation} \zeta (n + 1) = \frac{1}{(1-2^{-n})n!}\int_0^{\infty}dt \frac{t^n}{e^t + 1} \label{zet}\end{equation} to expand (\ref{exe}) in $g^2$ \begin{eqnarray} \frac{E(g)}{L} &=& 2\log 2\; g^2 - \frac{9}{4}\zeta(3)g^4 + \frac{75}{8}\zeta(5)g^6 \nonumber \\ &-& \frac{11025}{256}\zeta(7)g^8 + \frac{112455}{512}\zeta(9)g^{10} + \cdots, \label{elg}\end{eqnarray} whereas the closed expression (\ref{exe}) can yield $E(g)/L = \sqrt{\lambda}/\pi^2$ in the strong coupling limit. 
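The closed form (\ref{exe}) and its expansion (\ref{elg}) can likewise be checked numerically. In the sketch below (cutoff, grid, and the sample coupling are ad hoc choices, not from the text) the integral is evaluated by quadrature and compared with the series through $g^{10}$, with the zeta values obtained from their Dirichlet series:

```python
import math

def simpson(vals, h):
    # composite Simpson rule; len(vals) must be odd
    return (vals[0] + vals[-1] + 4.0*sum(vals[1:-1:2]) + 2.0*sum(vals[2:-1:2])) * h / 3.0

def bessel_j(n, x, m=400):
    # J_n(x) = (1/pi) int_0^pi cos(n th - x sin th) d th
    h = math.pi / m
    return simpson([math.cos(n*k*h - x*math.sin(k*h)) for k in range(m + 1)], h) / math.pi

def zeta(s, terms=100000):
    # direct Dirichlet series, adequate for s >= 3
    return sum(1.0/k**s for k in range(1, terms + 1))

def su2_highest_energy(g, tmax=60.0, n=1200):
    # E(g)/L = 4 g^2 int_0^inf dt J0(x)J1(x) / (x (e^t + 1)),  x = sqrt(2) g t
    h = tmax / n
    vals = []
    for k in range(n + 1):
        t = k * h
        x = math.sqrt(2.0) * g * t
        jj = 0.5 if x < 1e-12 else bessel_j(0, x) * bessel_j(1, x) / x
        vals.append(jj / (math.exp(t) + 1.0))
    return 4.0 * g * g * simpson(vals, h)

g = 0.1
e_numeric = su2_highest_energy(g)
e_series = (2.0*math.log(2.0)*g**2 - 9.0/4.0*zeta(3)*g**4 + 75.0/8.0*zeta(5)*g**6
            - 11025.0/256.0*zeta(7)*g**8 + 112455.0/512.0*zeta(9)*g**10)
```

The two values agree to the accuracy of the quadrature, confirming the coefficients in (\ref{elg}).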
Let us consider the SBA equation for the highest state \begin{equation} \left(\frac{x_k^+}{x_k^-}\right)^L = \prod_{j\neq k}^{L/2} \frac{x_k^+ - x_j^-}{x_k^- - x_j^+} \frac{1- g^2/2x_k^+ x_j^-}{1- g^2/2x_k^- x_j^+} \sigma^2(x_k,x_j), \label{sba}\end{equation} where the string dressing factor $\sigma(x_k,x_j)$ is given by (\ref{sdr}). The thermodynamic limit of (\ref{sba}) yields an integral equation for the density \begin{eqnarray} \frac{1}{i}\left( \frac{1}{\sqrt{u_+^2 - 2g^2}} - \frac{1}{\sqrt{u_-^2 - 2g^2}} \right) &=& -2\pi \rho(u) - 2\int_{-\infty}^{\infty}dv \frac{\rho(v)}{(u-v)^2 + 1} \nonumber \\ &-& \int_{-\infty}^{\infty}dv(K_s(u,v) - K_m(u,v))\rho(v), \label{sui}\end{eqnarray} where the kernels $K_m(u,v)$ and $K_s(u,v)$ are given by (\ref{kme}) and (\ref{ksu}) respectively. The Fourier transformation of (\ref{sui}) through (\ref{ksm}) gives an integral equation for the transformed density \begin{equation} ( 1 + e^{-|t|} )\hat{\rho}(t) = e^{-|t|} \left( J_0(\sqrt{2}g t) - 2g^2|t| \int_0^{\infty}dt'\sqrt{2}g \tilde{K}(\sqrt{2}g|t|,\sqrt{2}g t') \hat{\rho}(t') \right), \label{hrt}\end{equation} whose last term is the same as the last one in (\ref{til}) for the $su(1|1)$ sector. By using the expansion (\ref{tkx}) of $\tilde{K}(x,x')$ the transformed density is iteratively solved as $\hat{\rho}(t)= \hat{\rho}_0(t) + \delta\hat{\rho}(t)$ where the main part $\hat{\rho}_0(t)$ is given by (\ref{exd}) and the correction part $\delta\hat{\rho}(t)$ has the following weak coupling expansion \begin{eqnarray} \delta\hat{\rho}(t) &=& \frac{1}{e^{|t|} + 1} \biggl( -\frac{g^4}{2} \log2\; t^2 + \frac{g^6}{12}(\log2\;t^4 + 6\zeta(3)t^2 ) \nonumber \\ &-& \frac{g^8}{384}( 2\log2\; t^6 + 33\zeta(3)t^4 + 675\zeta(5)t^2 - 144\log2\zeta(3)t^2 ) + \cdots \biggr). 
\label{drh}\end{eqnarray} The substitution of (\ref{exd}) and (\ref{drh}) into (\ref{eni}) leads to a separation $E(g)/L = E_0(g)/L + \delta E(g)/L$ where the main part $E_0(g)/L$ takes the expression (\ref{elg}) and the correction part $\delta E(g)/L$ is estimated as \begin{eqnarray} \frac{\delta E(g)}{L} &=& -\frac{3}{2}\log2\zeta(3)g^6 + \left( \frac{75}{8}\log2\zeta(5) + \frac{3}{2}\zeta(3)^2 \right)g^8 \nonumber \\ &-& \left( \frac{6615}{128}\log2\zeta(7) + \frac{945}{64}\zeta(5) \zeta(3) - \frac{9}{8}\log2\zeta(3)^2 \right)g^{10} + \cdots. \label{loe}\end{eqnarray} Thus it is noted that the weak coupling expansion of the energy correction induced by the string dressing factor starts from the three-loop order in the same way as (\ref{deg}). Now for $\sigma(x_k,x_j)$ in (\ref{sba}) we use the weak-coupling dressing factor of the BES equation in ref. \cite{BES}. From the expression (\ref{hrt}) we replace the dressing kernel $\sqrt{2}g\tilde{K}(x,x')$ by the dressing kernel $2\hat{K}_c(x,x')$ to obtain \begin{equation} ( 1 + e^{-|t|} )\hat{\rho}(t) = e^{-|t|} \left( J_0(\sqrt{2}g t) - 2g^2|t| \int_0^{\infty}dt'2\hat{K}_c(\sqrt{2}g|t|,\sqrt{2}g t') \hat{\rho}(t') \right), \end{equation} whose last term is the same as the last one in (\ref{rkc}). Recently this integral equation has been derived and solved in ref. \cite{RSZ}, where the energy correction also starts from the four-loop order and a kind of transcendentality is observed if a degree of transcendentality is assigned to both the ``bosonic'' $\zeta$-function (\ref{zet}) and the ``fermionic'' $\zeta_a$-function defined by $\zeta_a(n+1) = (1-2^{-n})\zeta(n+1)$. 
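The ``fermionic'' $\zeta_a$-function is the alternating (Dirichlet eta) series, with $\zeta_a(1) = \log 2$. A minimal numerical check of the relation $\zeta_a(n+1) = (1-2^{-n})\zeta(n+1)$ (truncation depths are arbitrary choices):

```python
import math

def zeta(s, terms=200000):
    # direct Dirichlet series for the "bosonic" zeta function
    return sum(1.0/k**s for k in range(1, terms + 1))

def zeta_a(s, pairs=500000):
    # "fermionic" zeta: alternating series sum_{k>=1} (-1)^(k+1)/k^s,
    # written in paired form 1/(2j-1)^s - 1/(2j)^s for better convergence
    return sum(1.0/(2*j - 1)**s - 1.0/(2*j)**s for j in range(1, pairs + 1))
```

The paired form also converges at $s = 1$, where it reproduces $\log 2$.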
On the other hand, the summation of (\ref{elg}) and (\ref{loe}) expressed in terms of $\zeta_a(1)= \log2$ leads to the following dimension of the highest state \begin{eqnarray} \frac{\Delta}{L} &=& \frac{3}{2} + 2\zeta_a(1)g^2 - \frac{9}{4}\zeta(3)g^4 + \left(\frac{75}{8}\zeta(5) - \frac{3}{2}\zeta_a(1)\zeta(3) \right)g^6 \nonumber \\ &+& \left( -\frac{11025}{256}\zeta(7) + \frac{75}{8}\zeta_a(1)\zeta(5) + \frac{3}{2}\zeta(3)^2 \right)g^8 \\ &+& \left( \frac{112455}{512}\zeta(9) -\frac{6615}{128} \zeta_a(1)\zeta(7) - \frac{945}{64}\zeta(3)\zeta(5) + \frac{9}{8}\zeta_a(1)\zeta(3)^2 \right)g^{10} + \cdots, \nonumber \end{eqnarray} which shows that the kind of transcendentality is not preserved for the SBA equation. \section{Strong coupling solution for the GBA equation in the $su(1|1)$ sector} Here, using the Laplace transformation prescription in ref. \cite{KL}, we analyze the strong coupling behavior of the all-loop GBA equation for the $su(1|1)$ sector. Eq. (\ref{inf}) can be written in terms of $\hat{\rho}(t) = \epsilon f(x), t = \epsilon x, \epsilon=1/(\sqrt{2}g)$ as \begin{equation} \epsilon f(x) = e^{-t}\left( J_0(x) - \frac{t}{2}\int_0^{\infty} dx'\hat{K}_m(x,x')f(x') \right) \label{efx}\end{equation} for $t > 0$. The energy shift (\ref{eni}) is extracted by taking the following $t \rightarrow 0$ limit \begin{equation} \frac{E(g)}{L} = -4 \lim_{t \rightarrow 0}\frac{\epsilon f(x)e^t - J_0(x)}{t}. 
\label{lim}\end{equation} We use the expansion $\hat{K}_m(x,x')= 2\sum_{n=1}^{\infty}nJ_n(x) J_n(x')/(xx')$ and perform the Laplace transformation of (\ref{efx}) through $\phi(j) = \int_0^{\infty}dxe^{-xj}f(x)$ to obtain \begin{eqnarray} \epsilon \sqrt{j^2 +1} \phi(j-\epsilon) &=& 1 - \epsilon\int_{-i\infty}^{i\infty} \frac{dj'}{2\pi i}\phi(j') \sum_{n=1}^{\infty}\left( \frac{-\sqrt{j'^2 + 1} + j'}{ \sqrt{j^2 + 1} +j} \right)^n \nonumber \\ &=& 1 - \epsilon\int_{-i\infty}^{i\infty} \frac{dj'}{2\pi i}\phi(j') \frac{-\sqrt{j'^2 + 1} + j'} {\sqrt{j^2 + 1} + j + \sqrt{j'^2 + 1} - j'}, \end{eqnarray} whose integration contour can be closed around the cut located to its right on the interval $-i < j' < i$. The anti-symmetrization of the integral kernel to extract the square-root singularity yields \begin{eqnarray} \epsilon \sqrt{j^2 +1} \phi(j-\epsilon) &=& 1 + \epsilon \int_{-i}^i \frac{dj'}{2\pi i}\phi(j') \biggl( \frac{-\sqrt{j'^2 + 1} + j'}{\sqrt{j^2 + 1} + j + \sqrt{j'^2 + 1} - j'} \nonumber \\ &-& \frac{\sqrt{j'^2 + 1} + j'} {\sqrt{j^2 + 1} + j - \sqrt{j'^2 + 1} - j'} \biggr). \end{eqnarray} This integral equation is further expressed in terms of the variable $z = \sqrt{j^2 + 1} + j$ and the new function $\chi(z) = \phi(j)$ as \begin{equation} \epsilon\frac{z^2 + 1}{2z} \chi(z_{\epsilon}) = 1 - \frac{\epsilon}{2}\int_{-i}^i \frac{dz'}{2\pi i} \frac{z'^2 +1}{z'^2} \chi(z') \left(\frac{z'}{z - z'} + \frac{1/z'}{z + 1/z'} \right), \label{ekz}\end{equation} where the integration over $z'$ is taken along a unit circle in the anticlockwise direction from $-i$ to $i$, and $z_{\epsilon}$ is defined by \begin{equation} z_{\epsilon} = \left( \Bigl( \frac{z^2 - 1}{2z} - \epsilon \Bigr)^2 + 1 \right)^{1/2} + \frac{z^2 -1}{2z} - \epsilon. \label{zez}\end{equation} The transformation $z=z(j)$ provides the conformal mapping from two sheets of the Riemann surface in the $j$-plane for $\phi(j)$ to one sheet in the $z$-plane for $\chi(z)$. The eq. 
(\ref{ekz}) is rewritten as \begin{equation} \epsilon\frac{z^2 + 1}{2z} \chi(z_{\epsilon}) = 1 - \frac{\epsilon}{2}\int_{L} \frac{dz'}{2\pi i} \frac{z'^2 +1}{z'} \frac{\chi(\tilde{z'})}{z- z'}, \label{ecl}\end{equation} where the integration contour $L$ is given by a unit circle in the anticlockwise direction and $\tilde{z}$ is defined by $\tilde{z}_{\mathrm{Re}\,z>0}=z, \; \tilde{z}_{\mathrm{Re}\,z<0}=-z^{-1}$. Here we assume that $\chi(\tilde{z'}) = \chi(z')$ in (\ref{ecl}), that is, the symmetry of $\chi(z')$ under the substitution $z' \rightarrow -1/z'$ which means an analytic continuation of the function $\phi(j')$ on the second sheet of the $j'$-plane with the substitution $\sqrt{j'^2 +1} \rightarrow - \sqrt{j'^2 +1}$. Then we have \begin{equation} \epsilon\frac{z^2 + 1}{2z} \chi(z_{\epsilon}) = 1 - \frac{\epsilon}{2}\int_{L} \frac{dz'}{2\pi i} \frac{z'^2 +1}{z'} \frac{\chi(z')}{z- z'}. \end{equation} For the singular part of $\chi(z)$ inside the circle $|z|< 1$ we obtain the following relation \begin{equation} \epsilon\frac{z^2 + 1}{2z}\chi(z_{\epsilon})_{sing} = 1 - \epsilon\frac{z^2 +1}{2z}\chi(z)_{sing}. \label{eki}\end{equation} In the strong coupling limit $\epsilon \rightarrow 0$ the eq. (\ref{zez}) becomes $z_{\epsilon} = z - 2z^2\epsilon/(1+z^2) + \cdots$, which transforms the relation (\ref{eki}) into a first-order differential equation \begin{equation} -\epsilon^2 \frac{\partial\chi(z)_{sing}}{\partial z} + \epsilon\frac{z^2 + 1}{z^2}\chi(z)_{sing} = \frac{1}{z}. \label{fde}\end{equation} The particular solution for this inhomogeneous differential equation is obtained in the form of the expansion \begin{equation} \chi(z)_{sing}^{inhom} = \sum_{n=1}^{\infty}\frac{d_n}{z^n}, \end{equation} where there is no regular term with $n=0$ and the coefficients are specified by $d_1=1/\epsilon, \; d_2 = -1, \; d_3 = 2\epsilon - 1/\epsilon, \cdots$. 
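Matching the coefficient of $z^{-n}$ on both sides of (\ref{fde}) gives the two-term recursion $d_n = -(n-1)\epsilon\, d_{n-1} - d_{n-2}$ (this recursion is my reading of the equation, checked below against the quoted $d_1, d_2, d_3$). A short exact-arithmetic sketch, treating each $d_n$ as a Laurent polynomial in $\epsilon$:

```python
from fractions import Fraction

# Laurent polynomials in eps represented as {power: coefficient} dicts
def shift(p, k):
    # multiply by eps^k
    return {e + k: c for e, c in p.items()}

def scale(p, s):
    return {e: c * s for e, c in p.items()}

def add(p, q):
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, Fraction(0)) + c
    return {e: c for e, c in r.items() if c != 0}

# d_1 = 1/eps and d_2 = -1 as quoted in the text
d = {1: {-1: Fraction(1)}, 2: {0: Fraction(-1)}}
# d_n = -(n-1) eps d_{n-1} - d_{n-2}
for n in range(3, 7):
    d[n] = add(shift(scale(d[n - 1], Fraction(-(n - 1))), 1), scale(d[n - 2], Fraction(-1)))
```

The recursion reproduces $d_3 = 2\epsilon - 1/\epsilon$ and continues the list, e.g. $d_4 = -6\epsilon^2 + 4$ (the latter is my computed value, not quoted in the text).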
The homogeneous differential equation for (\ref{fde}) gives a solution \begin{equation} \chi(z)^{hom} = C e^{\frac{1}{\epsilon}\left(z - \frac{1}{z}\right) } = C\sum_{n=-\infty}^{\infty}z^nJ_n(2\epsilon^{-1}), \end{equation} where $C$ is an integration constant. The singular part of the solution $\chi(z)^{hom}$ is described by \begin{equation} \chi(z)_{sing}^{hom} = C\sum_{n=-\infty}^{-1}z^nJ_n(2\epsilon^{-1}) = C\sum_{n=1}^{\infty}\frac{(-1)^n}{z^n}J_n(2\epsilon^{-1}). \end{equation} Therefore the general solution for the first-order differential equation (\ref{fde}) is represented by $\chi(z)_{sing}= \chi(z)_{sing}^{hom} + \chi(z)_{sing}^{inhom}$ which is translated into \begin{equation} \phi(j) = \sum_{n=1}^{\infty}\frac{h_n}{j^n}, \end{equation} where \begin{equation} h_1 = \frac{1}{2}\left( \frac{1}{\epsilon} - CJ_1(2\epsilon^{-1}) \right), \hspace{1cm} h_2 = \frac{1}{4}\left( -1 + CJ_2(2\epsilon^{-1}) \right), \cdots. \end{equation} The inverse Laplace transformation leads back to \begin{eqnarray} f(x) &=& \int_{-i\infty}^{i\infty}\frac{dj}{2\pi i} e^{xj}\phi(j) \nonumber \\ &=& \frac{1}{2}\left( \frac{1}{\epsilon} - CJ_1(2\epsilon^{-1}) \right) + \frac{1}{4}\left( -1 + CJ_2(2\epsilon^{-1}) \right)\frac{t}{\epsilon} + \cdots. \label{fxe}\end{eqnarray} The integration constant $C$ is fixed by the requirement that $E(g)/L$ in (\ref{lim}) should take a finite value in the limit $t \rightarrow 0$ \begin{equation} C = -\frac{1}{\epsilon J_1(2\epsilon^{-1})}. \end{equation} The second term in (\ref{fxe}) yields the leading anomalous dimension \begin{equation} \frac{\Delta}{L}= \frac{1}{\epsilon}\frac{J_2(2\epsilon^{-1})} {J_1(2\epsilon^{-1})}, \end{equation} which is the contribution from the homogeneous differential equation and is approximately expressed in the strong coupling region as \begin{equation} \lim_{g\rightarrow \infty} \frac{\Delta}{L} = \frac{\sqrt{\lambda}}{2\pi} \tan\left( \frac{2}{\epsilon} - \frac{3}{4}\pi \right). 
\label{gdl}\end{equation} The resulting expression, in which the $g \rightarrow \infty$ limit is taken after the $L \rightarrow \infty$ limit, is compared with the estimate of ref. \cite{BD}, where the Bethe momenta were computed at fixed $L$ in the strong-$g$ region and the strong-coupling anomalous dimension was then evaluated numerically by choosing large $L$ \begin{equation} \frac{\Delta}{L}= c_L \sqrt{\lambda}, \hspace{1cm} c_L \rightarrow 0.1405 \; \mathrm{as} \; L \rightarrow \infty. \end{equation} Thus in the strong coupling limit we obtain an anomalous dimension which rapidly oscillates around the estimate of ref. \cite{BD}. This result for the GBA equation in the $su(1|1)$ sector resembles the strong coupling behavior of the universal scaling function $f(g)$ for the GBA equation in the $sl(2)$ sector presented in ref. \cite{KL}, where $f(g)$ oscillates around the value predicted from the string theory. Further we note that the factor $\sqrt{\lambda}/2\pi$ in (\ref{gdl}) coincides with the strong coupling limit of the conjectured square-root formula (\ref{exa}). \section{Conclusion} We have investigated the SBA equations for the highest states in the $su(1|1)$ and $su(2)$ sectors by applying the Fourier transformation procedure in the rapidity plane and using the expression of the Fourier-transformed dressing kernel. We have computed the anomalous dimensions of the highest states iteratively from the Fourier-transformed SBA equations and presented an alternative derivation of the anomalous dimension in the $su(1|1)$ sector, which agrees with the result of \cite{AT,BD}. The SBA equation in the thermodynamic limit $L \rightarrow \infty$ has been treated and the Fourier-transformed density has been derived iteratively from the integral equation, while in \cite{AT,BD} the SBA equation at large but finite $L$ has been analyzed and the Bethe momenta have been computed iteratively from the SBA equation in the momentum plane. 
In the same manner as for the SBA equation for the universal scaling function in the $sl(2)$ sector, we have demonstrated that for the SBA equation in the $su(2)$ sector the contribution from the string dressing factor to the anomalous dimension starts from the three-loop order and there is a violation of the kind of transcendentality presented in ref. \cite{RSZ} for the GBA equation with the weak-coupling dressing factor. Following the Laplace transformation prescription we have analytically studied the strong coupling behavior of the GBA equation for the highest state in the $su(1|1)$ sector. The Laplace-transformed GBA equation expressed as an integral equation has been changed into a first-order differential equation in the strong coupling limit. By constructing a singular solution for the differential equation and taking the particular $t \rightarrow 0$ limit we have extracted the strong coupling behavior of the anomalous dimension and observed that it is mainly determined by the homogeneous part of the differential equation. It has been shown that the analytically obtained dimension oscillates around the value evaluated numerically from the GBA equation in the momentum plane at large but finite $L$ in ref. \cite{BD} and also around the value estimated from the square-root formula of ref. \cite{AT}, conjectured by extrapolation of the weak-coupling expanded expression.
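The large-argument behavior behind (\ref{gdl}), namely that the Bessel ratio $J_2(2\epsilon^{-1})/J_1(2\epsilon^{-1})$ approaches $\tan(2/\epsilon - 3\pi/4)$ as $\epsilon \to 0$, can be verified numerically. The following Python sketch is illustrative only and not part of the derivation; it evaluates $J_n$ from its standard integral representation, and the chosen value of $\epsilon$ is an arbitrary small number standing in for the strong-coupling regime.

```python
import math

def bessel_j(n, x, steps=20000):
    # J_n(x) = (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt,
    # evaluated with composite Simpson's rule (steps must be even).
    h = math.pi / steps
    s = 0.0
    for k in range(steps + 1):
        w = 1.0 if k in (0, steps) else (4.0 if k % 2 else 2.0)
        t = k * h
        s += w * math.cos(n * t - x * math.sin(t))
    return s * h / (3.0 * math.pi)

eps = 0.01  # small epsilon stands in for the strong-coupling limit
ratio = bessel_j(2, 2 / eps) / bessel_j(1, 2 / eps)
asym = math.tan(2 / eps - 0.75 * math.pi)
print(ratio, asym)  # the two values agree to within a few percent
```

The residual discrepancy comes from the $\mathcal{O}(\epsilon)$ corrections to the leading large-argument asymptotics of $J_n$ and shrinks as $\epsilon$ is decreased further.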
\section{Introduction} The evolution of a signal $E(x,t)\in\mathbb{C}$ in optical fiber, where $x\ge0$ denotes the position in the fiber and $t\in\mathbb{R}$ the time, is well-described through the \emph{nonlinear Schr\"odinger equation (NSE)}. Through proper scaling and coordinate transforms, the NSE can be brought into its normalized form \begin{equation} \inum\frac{\partial E}{\partial x}+\frac{\partial^{2}E}{\partial t^{2}}+2\kappa|E|^{2}E=-\inum\Gamma E,\quad\kappa\in\{\pm1\}.\label{eq:NSE} \end{equation} The parameter $\kappa$ in Eq. (\ref{eq:NSE}) effectively determines whether the fiber dispersion is normal ($-1$) or anomalous ($+1$), while the parameter $\Gamma$ determines the loss in the fiber. In the following, it will be assumed that the loss parameter is zero. There are two important cases in which a fiber-optic communication channel can be modeled under this assumption. When the fiber loss is mitigated through periodic amplification of the signal, the average of the properly transformed signal is known to satisfy the NSE with zero loss \cite{Hasegawa1991b}. Furthermore, recently a distributed amplification scheme with an effectively unattenuated optical signal (quasi-lossless transmission directly described by the lossless NSE) has been demonstrated \cite{Ania-Castanon2008}. If the loss parameter is zero, the NSE can be solved using \emph{nonlinear Fourier transforms (NFTs)} \cite{Zakharov1972}. The spatial evolution of the signal $E(x,t)$ then reduces to a simple phase-shift in the \emph{nonlinear Fourier domain (NFD)}, similar to how linear convolutions reduce to phase-shifts in the conventional Fourier domain. 
The prospect of an optical communication scheme that inherently copes with the nonlinearity of the fiber has recently led to several investigations on how data can be transmitted in the NFD instead of the conventional Fourier or time domains \cite{Yousefi2014compact,Turitsyna2013,Prilepsky2013,Prilepsky2014a,Le2014,Hari2014,Zhang2014,Buelow2015}, with the original idea being due to Hasegawa and Nyu \cite{Hasegawa1993}. Next to potential savings in computational complexity, it is anticipated that subchannels defined in the NFD will not suffer from intra-channel interference, which is currently limiting the data rates achievable by wavelength-division multiplexing systems \cite{Essiambre2010}. The mathematics behind the NFT is, however, quite involved, and despite recent progress in the implementation of fast forward and inverse NFTs \cite{Wahls2013b,Wahls2013d,Wahls2015b} no integrated concept for a computationally efficient fiber-optic transmission system that operates in the NFD seems to be available. In this paper, the problem of \emph{digital backpropagation (DBP)}, i.e. recovering the fiber input $E(0,t)$ from the output $E(x_{1},t)$ by solving (\ref{eq:NSE}), is therefore addressed using NFTs \cite{Turitsyna2013}. Both concepts are compared in Fig. \ref{fig:mod-vs-db}. Although DBP does not solve the issue of intra-channel interference in fiber-optic networks because it is usually not feasible to join the individual subchannels of the physically separated users into a single super-channel in this case \cite[X.D]{Essiambre2010}, we note that there are other scenarios where this issue does not arise \cite{Maher2015}. The advantage of digital backpropagation over information transmission in the NFD is that it is not necessary to implement full forward and backward NFTs. 
Together with recent advances made in \cite{Wahls2013b,Wahls2013d,Wahls2015b}, this observation will enable us to perform digital backpropagation in the NFD using only $\mathcal{O}(D\log^{2}D)$ floating point operations (\emph{flops}), where $D$ is the number of samples. This complexity estimate is independent of the length of the fiber because the spatial evolution of the signal will be carried out analytically. Conventional split-step Fourier methods, in contrast, have a complexity of $\mathcal{O}(MD\log D)$ flops, where $M$ is the number of spatial steps \cite[III.G]{Ip2008}. \begin{figure} \centering{}\includegraphics[width=1\columnwidth]{Drawing2}\caption{\label{fig:mod-vs-db}Information transmission (top) vs DBP (bottom) in the NFD} \end{figure} The goal of this paper is to present a new, fast algorithm for digital backpropagation that operates in the NFD and to compare it with traditional split-step Fourier methods through numerical simulations. The impact of noise resulting from the use of distributed Raman amplification will be of particular interest. The paper is structured as follows. In Sec. \ref{sec:DB-in-NFD}, the theory behind digital backpropagation in the NFD will be outlined, and the new, fast algorithm will be given and discussed. The simulation setup is described in Sec. \ref{sec:Simulation-Setup}, while results are reported in Sec. \ref{sec:Simulation-Results}. Sec. \ref{sec:Conclusion} concludes the paper. \section{Digital Backpropagation in the\protect \\ Nonlinear Fourier Domain\label{sec:DB-in-NFD}} In this section, first the theoretical and computational results that are required to perform digital backpropagation in the NFD are briefly recapitulated from \cite{Wahls2015b}. Afterwards, the fast algorithm is presented and its limitations are discussed. 
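As a point of reference for the $\mathcal{O}(MD\log D)$ split-step Fourier baseline mentioned above, a toy version of that method can be written in a few lines. The sketch below is a minimal, self-contained Python illustration (it uses a naive $\mathcal{O}(D^2)$ DFT instead of an FFT, and plain Lie splitting instead of the symmetric variant); the grid size and step count are illustrative choices, not the values used later in the paper. It propagates the lossless normalized NSE and checks that the fundamental soliton $\operatorname{sech}(t)$ of the anomalous-dispersion case keeps its envelope.

```python
import cmath, math

def dft(a, sign):
    # Naive O(N^2) DFT with precomputed twiddle factors;
    # sign=-1: forward transform, sign=+1: scaled inverse transform.
    N = len(a)
    W = [cmath.exp(sign * 2j * math.pi * k / N) for k in range(N)]
    out = []
    for k in range(N):
        s = 0j
        for n, an in enumerate(a):
            s += an * W[(k * n) % N]
        out.append(s / N if sign > 0 else s)
    return out

def split_step(E, dt, h, steps, kappa=1):
    # Lie-split integrator for i E_x + E_tt + 2*kappa*|E|^2 E = 0:
    # exact dispersion step in the Fourier domain, exact nonlinear
    # phase rotation in the time domain.
    N = len(E)
    omega = [2 * math.pi * (k if k < N // 2 else k - N) / (N * dt) for k in range(N)]
    disp = [cmath.exp(-1j * w * w * h) for w in omega]
    for _ in range(steps):
        F = dft(E, -1)
        E = dft([f * d for f, d in zip(F, disp)], +1)
        E = [e * cmath.exp(2j * kappa * abs(e) ** 2 * h) for e in E]
    return E

N, T = 128, 20.0
dt = T / N
E0 = [1 / math.cosh(-T / 2 + n * dt) for n in range(N)]  # fundamental soliton
E1 = split_step(E0, dt, h=0.005, steps=100)              # propagate to x = 0.5
err = max(abs(abs(a) - abs(b)) for a, b in zip(E0, E1))
print(err)  # small: the soliton envelope survives propagation
```

A production implementation would replace the naive DFT with an FFT, recovering the $\mathcal{O}(MD\log D)$ cost quoted above.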
\subsection{Theory for the Continuous-Time Case} The \emph{Zakharov-Shabat scattering problem} associated to any signal $E(x,t)$ that vanishes sufficiently fast for $t\to\pm\infty$ is \begin{align} \frac{d}{dt}\boldsymbol{\mathbf{\phi}}(x,t,\lambda)= & \left[\begin{array}{cc} -\inum\lambda & E(x,t)\\ -\kappa\bar{E}(x,t) & \inum\lambda \end{array}\right]\boldsymbol{\mathbf{\phi}}(x,t,\lambda),\label{eq:Z-S-1}\\ \boldsymbol{\phi}(x,t,\lambda)= & \left[\begin{array}{c} \enum^{-\inum\lambda t}\\ 0 \end{array}\right]+o(1),\quad t\to-\infty.\label{eq:Z-S-2} \end{align} With $\phi_{1}$ and $\phi_{2}$ denoting the components of $\boldsymbol{\phi}$, now define \begin{align} \alpha(x,\lambda):= & \lim_{t\to\infty}\enum^{\inum\lambda t}\phi_{1}(x,t,\lambda),\label{eq:alpha(x,lambda)}\\ \beta(x,\lambda):= & \lim_{t\to\infty}\enum^{-\inum\lambda t}\phi_{2}(x,t,\lambda),\nonumber \end{align} where $\lambda\in\mathbb{C}$ is a parameter. If $E(x,t)$ satisfies the NSE (\ref{eq:NSE}) with zero loss parameter $\Gamma=0$, the corresponding $\alpha(x,\lambda)$ and $\beta(x,\lambda)$ turn out to depend on $x$ in a very simple way: \begin{equation} \alpha(x,\lambda)=\alpha(0,\lambda),\quad\beta(x,\lambda)=\enum^{-4\inum\lambda^{2}x}\beta(0,\lambda).\label{eq:time-evolution-al-be} \end{equation} These functions are not the final form of the NFT, but for our needs it will be sufficient to stop here. The complicated spatial evolution of the signal $E(x,t)$ thus indeed becomes trivial if it is transformed into $\alpha(x,\lambda)$ and $\beta(x,\lambda)$. 
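A direct, if slow, way to approximate $\alpha(x,\lambda)$ and $\beta(x,\lambda)$ from samples of a rapidly vanishing signal is a normalized transfer-matrix recursion, one matrix per sample. The sketch below uses one such standard discretization (the random signal, $\lambda$, and step size are arbitrary illustrative choices, and the unimodular per-step phase is dropped) and verifies the conservation law $|\phi_1|^2+\kappa|\phi_2|^2=1$ that any valid discretization of (\ref{eq:Z-S-1}) must respect for real $\lambda$:

```python
import cmath, random

def scatter(q, lam, eps, kappa):
    # Normalized transfer-matrix recursion over rescaled samples
    # q[n] ~ eps * E(t_n); z = exp(-2i*lam*eps) is unimodular for real lam.
    # The overall z**(1/2) phase per step is unimodular and omitted.
    z = cmath.exp(-2j * lam * eps)
    u, v = 1.0 + 0j, 0.0 + 0j
    for qn in q:
        norm = (1 + kappa * abs(qn) ** 2) ** 0.5
        u, v = (u + qn * v / z) / norm, (-kappa * qn.conjugate() * u + v / z) / norm
    return u, v

random.seed(0)
D = 64
eps = 1.0 / D
q = [eps * complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(D)]
u, v = scatter(q, lam=0.3, eps=eps, kappa=-1)
print(abs(u) ** 2 - abs(v) ** 2)  # = 1 up to rounding for kappa = -1
```

For $\kappa=-1$ the recursion preserves $|\phi_1|^2-|\phi_2|^2$ exactly, and for $\kappa=+1$ it preserves $|\phi_1|^2+|\phi_2|^2$; running this recursion for every $\lambda$ of interest costs $\mathcal{O}(D)$ per $\lambda$, which is what the fast algorithm discussed next improves upon.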
Based on this insight, ideal continuous-time digital backpropagation reduces to three basic steps in the NFD: \begin{equation} E(x_{1},t)\stackrel{\text{A}}{\to}\left[\begin{array}{c} \alpha(x_{1},\lambda)\\ \beta(x_{1},\lambda) \end{array}\right]\stackrel{\text{B}}{\to}\left[\begin{array}{c} \alpha(0,\lambda)\\ \beta(0,\lambda) \end{array}\right]\stackrel{\text{C}}{\to}E(0,t).\label{eq:three-steps} \end{equation} \subsection{Discretization of the Continuous-Time Problem} In order to obtain a numerical approximation of $\alpha(x,\lambda)$ and $\beta(x,\lambda)$ for any fixed $x$, choose a sufficiently large interval $[T_{1},T_{2}]$ inside which $E(x,t)$ has already vanished sufficiently. Without loss of generality, one can assume that $T_{1}=-1$ and $T_{2}=0$. In this interval, $D$ rescaled samples \[ E[x,n]:=\epsilon E\left(x,-1+n\epsilon-\frac{\epsilon}{2}\right),\,\epsilon:=\frac{1}{D},\,n\in\{1,\dots,D\}, \] are taken. With $z:=\enum^{-2\inum\lambda\epsilon}$, (\ref{eq:Z-S-1})--(\ref{eq:Z-S-2}) becomes \begin{align} \boldsymbol{\phi}[x,n,z]:= & z^{\frac{1}{2}}\left[\begin{array}{cc} 1 & z^{-1}E[x,n]\\ -\kappa\bar{E}[x,n] & z^{-1} \end{array}\right]\nonumber \\ & \times\frac{\boldsymbol{\phi}[x,n-1,z]}{\sqrt{1+\kappa|E[x,n]|^{2}}},\label{eq:discrete-Z-S-1}\\ \boldsymbol{\phi}[x,0,z]:= & z^{-\frac{D}{2}}\left[\begin{array}{c} 1\\ 0 \end{array}\right].\label{eq:discrete-Z-S-2} \end{align} This leads to the following polynomial approximations: \begin{align*} \alpha(x,\lambda)\approx a(x,z):= & {\textstyle \sum_{i=0}^{D-1}}a_{i}(x)z^{-i}:=\phi_{1}[x,D,z],\\ \beta(x,\lambda)\approx b(x,z):= & {\textstyle \sum_{i=0}^{D-1}}b_{i}(x)z^{-i}:=\phi_{2}[x,D,z]. \end{align*} \subsection{The Algorithm} The three steps in the diagram (\ref{eq:three-steps}) can be implemented with an overall complexity of $\mathcal{O}(D\log^{2}D)$ flops as follows. 
\subsubsection*{Step A } The discrete scattering problem to find the polynomials $a(x_{1},z)$ and $b(x_{1},z)$ from the known fiber output $E(x_{1},t)$ through (\ref{eq:discrete-Z-S-1})--(\ref{eq:discrete-Z-S-2}) is solved with only $\mathcal{O}(D\log^{2}D)$ flops using \cite[Alg. 1]{Wahls2013d} (also see \cite[Alg. 1]{Wahls2013b}). \subsubsection*{Step B} The unknown polynomials $a(0,z)$ and $b(0,z)$ are defined by the unknown fiber input $E(0,t)$ through (\ref{eq:discrete-Z-S-1})--(\ref{eq:discrete-Z-S-2}). In this step, approximations $\hat{a}(0,z)$ and $\hat{b}(0,z)$ of $a(0,z)$ and $b(0,z)$ are computed based on Eq. (\ref{eq:time-evolution-al-be}). The left-hand side of (\ref{eq:time-evolution-al-be}) suggests the choice $\hat{a}(0,z):=a(x_{1},z)$. For finding $\hat{b}(0,z)$, let us denote the $D$-th root of unity by $w:=\enum^{-2\pi\inum/D}$. The right-hand side of (\ref{eq:time-evolution-al-be}) motivates us to find $\hat{b}(0,z)=\sum_{i=0}^{D-1}\hat{b}_{i}(0)z^{-i}$ by solving the well-posed interpolation problem \begin{align} \hat{b}(0,w^{n-\frac{1}{2}})= & \enum^{4\inum\left(\log(w^{n-\frac{1}{2}})/(-2\inum\epsilon)\right)^{2}x_{1}}b(x_{1},w^{n-\frac{1}{2}})\label{eq:interpolation-problem}\\ = & \enum^{4\pi^{2}\inum(n-\frac{1}{2})^{2}x_{1}}b(x_{1},w^{n-\frac{1}{2}}),\,n\in\{1,\dots,D\},\nonumber \end{align} with the fast Fourier transform, using only $\mathcal{O}(D\log D)$ flops. \subsubsection*{Step C} The inverse scattering problem of estimating $E[0,n]$ from $\hat{a}(0,z)$ and $\hat{b}(0,z)$ by inverting (\ref{eq:discrete-Z-S-1})--(\ref{eq:discrete-Z-S-2}) is solved using $\mathcal{O}(D\log^{2}D)$ flops as described in \cite[IV]{Wahls2015b}. \subsection{Limitations\label{sub:Limitations}} It was already mentioned above that the algorithm works for fibers with normal dispersion. 
In that case, the sign $\kappa$ in the NSE (\ref{eq:NSE}) will be negative and $E(x,t)$ is determined through the values that $\alpha(0,\lambda)$ and $\beta(0,\lambda)$ take on the real axis $\mathbb{R}\ni\lambda$ \cite[p. 285]{Ablowitz1974}. If the dispersion is anomalous, the sign will be positive and $E(x,t)$ is determined through the values that $\alpha(0,\lambda)$ and $\beta(0,\lambda)$ take on the real axis \emph{and} around the roots of $\alpha(0,\lambda)$ in the complex upper half-plane $\Im(\lambda)>0$ \cite[IV.B]{Ablowitz1974}. Taking the coordinate transform $z=\enum^{-2\inum\lambda\epsilon}$ into account, one sees that the interpolation problem (\ref{eq:interpolation-problem}) however enforces the phase-shift only for certain real values of $\lambda$. Thus, the algorithm is unlikely to work if $\alpha(0,\lambda)$ has roots in the upper half-plane. The condition $\int_{-\infty}^{\infty}|E(0,t)|dt<\frac{\pi}{2}$ is sufficient to ensure that there are no such roots \cite[Th. 4.2]{Klaus2003}. The actual threshold where solitons start to emerge however is expected to be higher due to the randomly oscillating character of waveforms used in fiber-optic communications \cite{Turitsyn2008,Derevyanko2008}. \section{Simulation Setup\label{sec:Simulation-Setup}} \begin{figure} \begin{centering} \includegraphics[width=1\columnwidth]{Fig2_wfonts.pdf} \par\end{centering} \caption{\label{fig:Simulation-setup}a) Simulation setup of coherent optical communication systems with DBP in the NFD, b) basic block functions of OFDM and Nyquist transceivers} \end{figure} In this section, the simulation setup that was used to assess the performance of DBP in the NFD is presented. The transmission link was assumed to be lossless due to ideal Raman amplification, with an \emph{amplified spontaneous emission (ASE)} noise density $N_{\text{ASE}}=\Gamma Lhf_{s}K_{T}$. 
Here, $\Gamma$ is the fiber loss, $L$ is the transmission distance, $hf_{s}$ is the photon energy, $f_{s}$ is the optical frequency of the Raman pump providing the distributed gain, and $K_{T}=1.13$ is the photon occupancy factor for Raman amplification of a fiber-optic communication system at room temperature. In the simulations, it was assumed that the long-haul fiber link consisted of $80$-km spans of fiber (\emph{standard single-mode fiber (SSMF)} in the anomalous case) with a loss of $0.2$ dB/km, a nonlinearity coefficient of $1.22$ W$^{-1}$km$^{-1}$, and a dispersion of $\pm16$ ps/nm/km (normal and anomalous dispersions). A photon occupancy factor of $4$ was used for more realistic conditions. The ASE noise was added after each fiber span. The data was modulated using high spectral efficiency modulation formats (QPSK and 64QAM) and either \emph{Nyquist pulses} (i.e. $\sinc$'s) \cite{Le2014} or \emph{orthogonal frequency division multiplexing (OFDM)} \cite{Prilepsky2013,Prilepsky2014a,Le2014}. The block diagram of the simulation setup and basic block functions of the OFDM and Nyquist transceivers are presented in Fig. \ref{fig:Simulation-setup}. In Fig. \ref{fig:Simulation-setup}(a), DBP in the NFD is performed at the receiver after coherent detection, synchronization, windowing and frequency offset compensation. For simplicity, both perfect synchronization and frequency offset compensation were assumed. The net data rates of the considered transmission systems were, after removing $7\%$ overhead due to the \emph{forward error correction (FEC)}, $100$ Gb/s and $300$ Gb/s for QPSK and 64QAM, respectively. For the OFDM system, the size of the \emph{inverse fast Fourier transform (IFFT)} was $128$ samples, and $112$ subcarriers were filled with data using \emph{Gray coding}. The remaining subcarriers were set to zero. The useful OFDM symbol duration was $2$ ns and no cyclic prefix was used for the linear dispersion removal. 
An oversampling factor of $8$ was adopted, resulting in a total simulation bandwidth of \textasciitilde{}$448$ GHz. At the receiver side, all digital signal processing operations were performed with the same sampling rate. The receiver\textquoteright s bandwidth was assumed to be unlimited in order to estimate the achievable gain offered by the proposed DBP algorithm. \begin{figure} \begin{centering} \includegraphics[width=0.9\columnwidth]{Fig3_wfonts.pdf} \par\end{centering} \caption{\label{fig:burst-mode}a) Illustration of a burst mode transmission at the transmitter side, in which neighboring packets are separated by a guard time, b) received signal at the receiver, which is broadened by chromatic dispersion, c) windowing signal for processing in the proposed nonlinear compensation scheme.} \end{figure} The data was transmitted in a burst mode where data packets were separated by a guard time. See Fig. \ref{fig:burst-mode} for an illustration. The guard time was chosen longer than the memory $\Delta T=2\pi B\beta_{2}L$ induced by the fiber \emph{chromatic dispersion (CD)}, where $B$ is the signal\textquoteright s bandwidth, $\beta_{2}$ is the chromatic dispersion and $L$ is the transmission distance. One burst consisted of one data packet and the associated guard interval. At the receiver, after synchronization, each burst was extracted and processed separately. Since the forward and inverse NFTs require that the signal has vanished early enough before it reaches the boundaries, zero padding was applied to enlarge the processing window as in Fig. \ref{fig:burst-mode}(c). \section{Simulation Results\label{sec:Simulation-Results}} In this section, the performance of DBP in the NFD is compared with a traditional DBP algorithm based on the split-step Fourier method \cite{Ip2008} as well as with simple chromatic dispersion compensation; the latter works well only in the low-power regime where nonlinear effects are negligible. 
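To make the guard-time sizing rule $\Delta T = 2\pi B\beta_{2}L$ quoted above concrete, the following back-of-the-envelope sketch plugs in numbers representative of the setup ($56$ GHz signal bandwidth, $16$ ps/nm/km dispersion, $4000$ km link); the $1550$ nm carrier wavelength used to convert the dispersion parameter into $\beta_2$ is our assumption, not stated in the text.

```python
import math

C = 299_792_458.0      # speed of light, m/s
lam = 1550e-9          # assumed carrier wavelength, m (not stated in the text)
D_disp = 16e-6         # 16 ps/nm/km expressed in SI units, s/m^2
beta2 = D_disp * lam ** 2 / (2 * math.pi * C)   # |beta2| ~ 20.4 ps^2/km
B = 56e9               # signal bandwidth, Hz (56 Gbaud Nyquist shaping)
L = 4000e3             # transmission distance, m
delta_T = 2 * math.pi * B * beta2 * L           # CD-induced memory, s
guard = 1.1 * delta_T                           # guard time ~10% longer
print(delta_T * 1e9, guard * 1e9)               # roughly 29 ns and 32 ns
```

The resulting tens-of-nanoseconds memory dwarfs the few-nanosecond packet durations used below, which is why the guard interval dominates the burst length.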
\subsection{Normal dispersion} In the normal dispersion case, a $56$ Gbaud Nyquist-shaped transmission scheme is considered in burst mode with $256$ symbols in each packet. The duration of each packet is \textasciitilde{}$4.6$ns. The burst size is $16000$ samples and the processing window size for each burst after zero padding is $D=65536$ samples. The guard time is \textasciitilde{}$10\%$ longer than the fiber chromatic-dispersion-induced memory of the link. The forward propagation is simulated using the split-step Fourier method \cite{Ip2008} with $80$ steps/span, i.e., a step size of $1$ km. Monte-Carlo simulations were performed to estimate the system performance using the \emph{error vector magnitude (EVM)} \cite[(5)]{Shafik2006}. For convenience, the EVM is then converted into the \emph{$Q$-factor} $20\log_{10}(\sqrt{2}\erfc^{-1}(2\BER))$ using the \emph{bit error rate (BER)} estimate \cite[(13)]{Shafik2006}. The performance of the $100$-Gb/s QPSK Nyquist-shaped system is depicted in Fig. \ref{fig:Simulation-results-1} as a function of the launch power in a 4000km link for various configurations. An exemplary fiber in- and output as well as the corresponding reconstructed input (via DBP in the NFD) are shown in Fig. \ref{fig:Simulation-results-0}. It can be seen in Fig. \ref{fig:Simulation-results-1} that the proposed DBP in the NFD provides a significant performance gain of \textasciitilde{}$8.6$ dB, which is comparable with the traditional DBP employing $20$ steps/span. Traditional DBP with 40 steps/span can be considered ideal DBP in this experiment because a further increase of the number of steps/span did not improve the performance further. In the considered $4000$km link, $40$ steps/span DBP requires $M=2000$ steps in total. This illustrates the advantage of the proposed DBP algorithm whose complexity is in contrast independent of the transmission distance. 
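The BER-to-$Q$-factor conversion used above is easy to reproduce; since Python's standard library provides \texttt{math.erfc} but not its inverse, the sketch below inverts it by bisection.

```python
import math

def erfc_inv(y, lo=0.0, hi=10.0, iters=200):
    # erfc is strictly decreasing on [0, inf); invert it by bisection.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if math.erfc(mid) > y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def q_factor_db(ber):
    # Q-factor in dB corresponding to a given bit error rate estimate.
    return 20 * math.log10(math.sqrt(2) * erfc_inv(2 * ber))

print(q_factor_db(1e-3))  # ~9.8 dB, the familiar value for BER = 1e-3
```

This reproduces the usual rule of thumb that a BER of $10^{-3}$ corresponds to a $Q$-factor of about $9.8$ dB.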
The performance of DBP in the NFD was however found to degrade rapidly when the launch power is sufficiently high. We believe that this effect can be mitigated through an extension of the processing window at the cost of increased computational complexity. \begin{figure} \begin{centering} \includegraphics[width=0.9\columnwidth]{Fig5_wfonts.pdf} \par\end{centering} \caption{\label{fig:Simulation-results-1} Performance of 100-Gb/s QPSK Nyquist-shaping over 4000km.} \end{figure} \begin{figure} \begin{centering} \includegraphics[width=0.9\columnwidth]{Fig4_wfonts.pdf} \par\end{centering} \caption{\label{fig:Simulation-results-0} Top: True vs reconstructed (via DBP-NFD) fiber input for a QPSK-Nyquist signal. Bottom: Corresponding fiber output. Only real parts are shown.} \end{figure} A similar behavior can be observed in Fig. \ref{fig:Simulation-results-2} for a $300$-Gb/s 64QAM Nyquist-shaped system. It can be seen in Fig. \ref{fig:Simulation-results-2} that DBP in the NFD shows \textasciitilde{}$1.5$ dB better performance than DBP employing $20$ steps/span. The difference from ideal DBP ($40$ steps/span) is just \textasciitilde{}$0.6$ dB. This clearly shows that NFD-DBP can provide essentially the same performance as ideal DBP. \begin{figure} \begin{centering} \includegraphics[width=0.9\columnwidth]{Fig9_wfonts.pdf} \par\end{centering} \caption{\label{fig:Simulation-results-2} Performance of 300-Gb/s 64QAM Nyquist-shaping over 2000km.} \end{figure} \subsection{Anomalous dispersion} Most fibers used today (such as SSMF) have anomalous dispersion. Therefore, it is critical to evaluate the performance of the proposed DBP algorithm in optical links with anomalous dispersion. The DBP-NFD algorithm proposed earlier however cannot yet deal with signals where the function $\alpha(0,\lambda)$ defined in (\ref{eq:alpha(x,lambda)}) has roots in the upper half-plane. See Sec. \ref{sub:Limitations}. 
The condition that the $L_{1}$-norm of each burst is less than $\pi/2$ is sufficient to ensure the absence of such roots. The packet duration in a system with anomalous dispersion thus has to be kept small enough in order to apply the proposed algorithm. This constraint is not desirable in practice, as it reduces the total throughput of the link because the guard interval, which is independent of the packet duration, must be inserted more frequently. The $L_{1}$ norm of the Fourier transform is always less than or equal to the $L_{1}$ norm of the time-domain signal. Motivated by this observation, OFDM was chosen instead of Nyquist-shaping for anomalous dispersion. The performances of both traditional DBP and DBP in the NFD are shown in Fig. \ref{fig:Simulation-results-3}. DBP in the NFD can achieve a performance gain of \textasciitilde{}$3.5$ dB, which is \textasciitilde{}$1$ dB better than DBP employing $4$ steps/span. The performance however degrades dramatically once the launch power becomes larger than $-5$ dBm. We attribute this phenomenon to the emergence of upper half-plane roots of $\alpha(0,\lambda)$, which corresponds to the formation of solitonic components in the signal. This argument is supported in Fig. \ref{fig:Simulation-results-4}, where the $L_{1}$-norm and the ratio of the power in the solitonic part ($\hat{E}$ in \cite[p. 4320]{Yousefi2014compact}) to the total signal power are plotted as functions of the signal power. At a launch power of $-4$ dBm, the signals begin to have solitonic components, which seems to have a significant impact on the DBP algorithm in the NFD. As anticipated in Sec. \ref{sub:Limitations}, solitons indeed only occur above an $L_{1}$-norm which is significantly higher than the bound $\pi/2$. 
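The $\pi/2$ criterion can be made concrete with a specific pulse shape: for $E(0,t)=A\,{\rm sech}(t)$ the $L_1$-norm equals $A\pi$, so the sufficient condition corresponds to $A<1/2$, which for this particular shape is exactly the amplitude at which the first soliton appears. A minimal numerical sketch (the sech pulse and the sample amplitudes are illustrative choices):

```python
import math

def l1_norm(amplitude, T=40.0, N=100_000):
    # Trapezoidal approximation of the integral of |A sech(t)| over [-T/2, T/2];
    # the exact value is A * pi.
    dt = T / N
    total = 0.0
    for n in range(N + 1):
        t = -T / 2 + n * dt
        w = 0.5 if n in (0, N) else 1.0
        total += w * amplitude / math.cosh(t) * dt
    return total

A = 0.4
print(l1_norm(A), math.pi / 2)  # 0.4*pi stays below the soliton-free bound pi/2
```

Amplitudes above $1/2$ push the $L_1$-norm past $\pi/2$, consistent with the onset of solitonic components discussed above.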
\begin{figure} \begin{centering} \includegraphics[width=0.9\columnwidth]{Fig11_wfonts.pdf} \par\end{centering} \caption{\label{fig:Simulation-results-3} Performance of 100-Gb/s QPSK OFDM over 4000km in fiber with anomalous dispersion.} \end{figure} \begin{figure} \begin{centering} \includegraphics[width=0.9\columnwidth]{Fig13_wfonts.pdf} \par\end{centering} \caption{\label{fig:Simulation-results-4} $L_{1}$-norm and soliton-signal power ratio for 100Gb/s OFDM.} \end{figure} \section{Conclusion\label{sec:Conclusion}} The feasibility of performing digital backpropagation in the nonlinear Fourier domain has been demonstrated with a new, fast algorithm. In simulations, this new algorithm performed very close to ideal digital backpropagation implemented with a conventional split-step Fourier method for fibers with normal dispersion at a much lower computational complexity. In the anomalous dispersion case, it was found that the algorithm works well only if the signal power is low enough such that solitonic components do not emerge. We are currently working to remove this limitation. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:introduction} For over a decade, dopants in Si have constituted the key elements in proposals for the implementation of a solid state quantum computer.~\cite{kane98,vrijen00,skinner03,barrett03,hollenberg041} Spin or charge qubits operate through controlled manipulation (by applied electric and magnetic fields) of the donor electron bound states. A shallow donor, such as P or As in Si, can bind one electron in the neutral state, denoted by $D^0$, or two electrons in the negatively charged state, denoted by $D^-$. Proposed one- and two-qubit gates involve manipulating individual electrons or electron pairs bound to donors or drawn away towards the interface of Si with a barrier material.~\cite{kane98,skinner03,calderonPRL06} In general, neutral and ionized donor states play a role in different stages of the prescribed sequence of operations. In the proposed quantum computing schemes, donors are located very close to interfaces with insulators, separating the Si layer from the control metallic gates. This proximity is required in order to perform the manipulation via electric fields of the donor spin and charge states. The presence of boundaries close to donors modifies the binding potential experienced by the electrons in a semiconductor. This is a well-known effect in Si MOSFETs,~\cite{macmillen84,calderon-longPRB07} where the binding energy of electrons is reduced with respect to the bulk value for distances between the donor and the interface smaller than the typical Bohr radius of the bound electron wave-function. 
On the other hand, in free-standing Si nanowires with diameters below $10$ nm, the binding energy of donor electrons significantly increases~\cite{delerue-lannoo,diarra07} leading to a strongly reduced doping efficiency in the nanowires.~\cite{bjork2009} The continuous size reduction of transistors over the years, with current characteristic channel lengths of tens of nanometers, implies that the disorder in the distribution of dopants can now determine the performance, in particular the transport properties of the devices.~\cite{voyles02,shinada05,pierre2009} In specific geometries, like the nonplanar field effect transistors denoted by FinFETs,~\cite{sellier2006} isolated donors can be identified and their charge states (neutral $D^0$ and negatively charged $D^-$) studied by transport spectroscopy. The existence of $D^-$ donor states in semiconductors, analogous to the hydrogen negative ion $H^-$, was suggested in the fifties~\cite{lampertPRL58} and is now well established experimentally. Negatively charged donors in bulk Si were first detected by photoconductivity measurements.~\cite{taniguchi-narita76} The binding energies of $D^-$ donors, defined as the energy required to remove one electron from the ion ($D^- \rightarrow D^0+$ free-electron), $E_B^{D^-}=E_{D^0}-E_{D^-}$, are found experimentally to be small ($E_B^{D^-}\sim 1.7$ meV for P and $\sim 2.05$ meV for As) compared to the binding energies of the first electron $E_B^{D^0}$ (45 meV for P and 54 meV for As). For zero applied magnetic fields, no excited bound states of $D^-$ in bulk semiconductors~\cite{larsenPRB92} or superlattices~\cite{peetersPRB95} are found, similar to $H^-$, which has only one bound state in three dimensions, as shown in Refs.~\onlinecite{perkerisPR62,hillPRL77}. A relevant characteristic of negatively charged donors is their charging energy, $U= E_{D^-}-2 E_{D^0}$, which gives the energy required to add a second electron to a neutral donor. 
This extra energy is due to the Coulomb repulsion between the two bound electrons, and does not contribute in one-electron systems such as $D^0$. The measured values in bulk Si are $U_{\rm As}^{\rm bulk, exp} = 52$ meV for As and $U_{\rm P}^{\rm bulk, exp} = 43$ meV for P. From the stability diagrams obtained from transport spectroscopy measurements we observe that the charging energy of As dopants in nanoscale Si devices (FinFETs) is strongly reduced compared to the well-known bulk value. By using a variational approach within the single-valley effective mass approximation, we find that this decrease of the charging energy may be attributed to modifications of the bare insulator screening due to the presence of a nearby metallic layer. For the same reason, we also find theoretically that it may be possible to have a $D^-$ bound excited state. This paper is organized as follows. In Sec.~\ref{sec:bulk}, we introduce the formalism for a donor in the bulk in analogy with the hydrogen atom problem. In Sec.~\ref{sec:interface}, we study the problem of a donor close to an interface under a flat-band condition. We show experimental results for the charging energy and compare them with our theoretical estimates. We also calculate the binding energy of a $D^-$ triplet first excited state. In Sec.~\ref{sec:discussion} we present discussions including: (i) assessment of the limitations in our theoretical approach, (ii) considerations about the modifications of the screening in nanoscale devices, (iii) the implications of our results in quantum device applications, and, finally, we also summarize our main conclusions. \section{Donors in bulk silicon} \label{sec:bulk} A simple estimate for the binding energies of both $D^0$ and $D^-$ in bulk Si can be obtained using the analogy between the hydrogen atom $H$ and shallow donor states in semiconductors. 
The Hamiltonian for one electron in the field of a nucleus with charge $+e$ and infinite mass is, in effective units of length $a_B=\hbar^2/m_e e^2$ and energy $Ry=m_e e^4/2\hbar^2$, \begin{equation} h(r_1)=T(r_1)-\frac{2}{r_1} \, , \label{eq:hamil-1e} \end{equation} with $T(r)=- \nabla^2 $. The ground state is \begin{equation} \phi(r_1,a)=\frac{1}{\sqrt{\pi a^3}} e^{-r_1/a} \label{eq:wfH} \end{equation} with Bohr radius $a=1\,a_B$ and energy $E_{H}=-1\, Ry$. This corresponds to one electron in the $1s$ orbital. For negatively charged hydrogen (H$^-$) the two-electron Hamiltonian is \begin{equation} H_{\rm Bulk}=h(r_1)+h(r_2)+\frac{2}{r_{12}} \,, \end{equation} where the last term gives the electron-electron interaction ($r_{12}=|\vec{r}_1-\vec{r}_2|$). As an approximation to the ground state, we use a relatively simple variational two-particle wave-function for the spatial part, a symmetrized combination of $1s$ atomic orbitals as given in Eq.~\ref{eq:wfH}, since the spin part is a singlet, \begin{equation} |1s,1s,s\rangle =\left[\phi(r_1,a) \phi(r_2,b) + \phi(r_1,b) \phi(r_2,a) \right] \,. \label{eq:wfH-} \end{equation} The resulting energy is $E^{H^-}=-1.027 Ry$ with $a=0.963\, a_B$ and $b=3.534 \,a_B$ (binding energy $E_B^{H^-}=0.027 Ry$).~\cite{Bethe-Salpeter} Here we may interpret $a$ as the radius of the inner orbital and $b$ of the outer orbital. This approximation for the wave-function correctly gives a bound state for $H^-$ but it underestimates the binding energy with respect to the value $E_B^{H^-}=0.0555$ Ry, obtained with variational wave-functions with a larger number of parameters, thus closer to the 'exact' value.~\cite{Bethe-Salpeter} Assuming an isotropic single-valley conduction band in bulk Si, the calculation of the $D^0$ and $D^-$ energies reduces to the case of $H$ just described. 
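The unit rescaling implied by the hydrogen analogy can be made explicit: energies carry over by replacing the hydrogen Rydberg with an effective Rydberg $Ry^* = (m^*/m_e)\,\epsilon_{\rm Si}^{-2}\,Ry$. A short sketch of the conversion (the hydrogen Rydberg is the standard value; the effective mass and dielectric constant are those used in this work):

```python
# Effective-unit rescaling for shallow donors in Si (single-valley,
# isotropic effective-mass approximation used in the text).
RY_MEV = 13.605693 * 1000  # hydrogen Rydberg in meV (standard value)
m_eff = 0.29819            # effective mass in units of m_e
eps_si = 11.4              # static dielectric constant of Si

ry_star = RY_MEV * m_eff / eps_si**2   # effective Rydberg
e_b_dminus = 0.027 * ry_star           # variational binding energy of D^-
u_charging = 0.973 * ry_star           # variational charging energy U
print(ry_star, e_b_dminus, u_charging) # ~31.2, ~0.84, ~30.4 meV
```

The same dimensionless variational results for $H^-$ thus translate directly into the meV-scale donor energies quoted below.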
Within this approximation, an estimate for $E_B^{D^-}$ can then be obtained by considering an effective Rydberg $Ry^*=m^*e^4/2\epsilon_{\rm Si}^2 \hbar^2$ with an isotropic effective mass (we use $\epsilon_{\rm Si}=11.4$). We choose $m^* = 0.29819 \,m_e$ so that the ground state energy for a neutral donor is the same as given by an anisotropic wave-function in bulk: within a single valley approximation $E_{D^0} = -1\,Ry^* = -31.2$ meV and its effective Bohr radius is $a=1 a^*$ with $a^*={{\hbar^2\epsilon_{\rm Si}}/{m^* e^2}} =2.14$ nm. In this approximation, $E_B^{D^-}=0.84 $ meV. In the same way, an estimate of the charging energy can be made for donors in Si: $U=0.973 Ry^*= 30.35$ meV.~\cite{ramdas1981} Even though the trial wave-function in Eq.~(\ref{eq:wfH-}) underestimates the binding energy, we adopt it here for simplicity, in particular to allow us to perform, in a reasonably simple way, the calculations for a negatively charged donor close to an interface reported below. In the same way, we do not introduce the multivalley structure of the conduction band of Si. The approximations proposed here lead to qualitative estimates and establish general trends for the effects of an interface on a donor energy spectrum. The limitations and consequences of our approach are discussed in Sec.~\ref{sec:discussion}. \begin{table}[t] \begin{tabular}{|c l l l|} \hline E$_{\rm{D}^0}$ & $=-1 Ry^*$ & & $a=1 a^*$\\ \hline E$_{\rm{D}^-}$ & $=-1.027 Ry^*$ & & $a=0.963 a^*$; $b=3.534 a^*$ \\ \hline E$_B$ & = E$_{\rm{D}^0}$- E$_{\rm{D}^-}$ &$ =0.027 Ry^*$ & \\ \hline U & = E$_{\rm{D}^-}$-2E$_{\rm{D}^0}$ & $ =0.973 Ry^*$ & \\ \hline \end{tabular}\caption{Bulk values of energies and orbital radii for the ground state of neutral and negatively charged donors within our approximation (see text for discussion). Effective units for Si are $a^*=2.14$ nm and $Ry^*=31.2$ meV. 
} \label{table:data} \end{table} \begin{figure} \resizebox{60mm}{!}{\includegraphics{scheme.eps}} \caption{(Color online) Schematic representation of a negatively charged donor in Si (solid circles) located a distance $d$ from an interface. The open circles in the barrier (left) represent the image charges. The sign and magnitude of these charges depend on the relation between the dielectric constants of Si and the barrier given by $Q=(\epsilon_{\rm barrier}-\epsilon_{\rm Si})/(\epsilon_{\rm barrier}+\epsilon_{\rm Si})$. For the electrons, $Q<0$ corresponds to repulsive electron image potentials and a positive donor image potential (opposite signs of potentials and image charges for $Q>0$, see Eq.~\ref{eq:beforeQ}). } \label{fig:scheme} \end{figure} \section{Donors close to an interface} \label{sec:interface} \subsection{$D^0$ and $D^-$ ground states} We consider now a donor (at $z=0$) close to an interface (at $z=-d$) (see Fig.~\ref{fig:scheme}). Assuming that the interface produces an infinite barrier potential, we adopt variational wave-functions with the same form as in Eqs.~(\ref{eq:wfH}) and (\ref{eq:wfH-}) multiplied by linear factors $(z_i+d)$ ($i=1,2$) which guarantee that each orbital goes to zero at the interface. We further characterize the Si interface with a different material by including charge image terms in the Hamiltonian. \begin{figure} \resizebox{80mm}{!}{\includegraphics{D0-vs-d-Q.eps}} \caption{(Color online) Energy of the neutral donor versus its distance $d$ from an interface for different values of $Q=(\epsilon_{\rm barrier}-\epsilon_{Si})/(\epsilon_{\rm barrier}+\epsilon_{Si})$. } \label{fig:E-D0} \end{figure} Before discussing the ionized donor $D^-$, we briefly present results for the neutral donor $D^0$ which are involved in defining donor binding and charging energies. 
For this case, the Hamiltonian is \begin{equation} H(r_1)=h(r_1)+h_{\rm images} (r_1) \label{eq:hamil+images-1e} \end{equation} with $h(r_1)$ as in Eq.~\ref{eq:hamil-1e} and \begin{equation} h_{\rm images}(r_1)=-\frac{Q}{2(z_1+d)}+\frac{2 Q}{\sqrt{x_1^2+y_1^2+(z_1+2d)^2}} \, , \label{eq:beforeQ} \end{equation} where $Q=(\epsilon_{\rm barrier}-\epsilon_{\rm Si})/(\epsilon_{\rm barrier}+\epsilon_{\rm Si})$ and $\epsilon_{\rm barrier}$ is the dielectric constant of the barrier material. The first term in $h_{\rm images}$ is the interaction of the electron with its own image, and the second is the interaction of the electron with the donor's image. If the barrier is a thick insulator, for example SiO$_2$ with dielectric constant $\epsilon_{\rm SiO_2}=3.8$, then $Q<0$ ($Q= -0.5$ in this case). In actual devices, the barrier is composed of a thin insulator (usually SiO$_2$), which prevents charge leakage, plus metallic electrodes which control transport and charge in the semiconductor. This composite heterostructure may effectively behave as a barrier with an effective dielectric constant larger than that of Si, since $\epsilon_{\rm metal} \to \infty$, leading to an effective $Q>0$. Depending on the sign of $Q$, the net image potentials will be repulsive or attractive, which may strongly affect the binding energies of donors at a short distance $d$ from the interface. Using a trial wave-function $\phi_{D^0} \propto e^{-r/a} (z+d)$, most of the integrals involved in the variational calculation of $E^{D^0}$ can be performed analytically. $E^{D^0}$ is shown in Fig.~\ref{fig:E-D0} for different values of $Q$ and compares very well with the energy calculated by MacMillen and Landman~\cite{macmillen84} for $Q= -1$ with a much more complex trial wave-function. The main effect of the interface is to reduce the binding energy when the donor is located at very small distances $d$.
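The sign structure of Eq.~(\ref{eq:beforeQ}) can be made concrete with a short Python sketch (the function name is ours, chosen for illustration): for $Q<0$ the electron self-image term is repulsive and the donor-image term attractive, with both signs reversed for $Q>0$.

```python
def h_images(Q, d, x, y, z):
    """Image terms of the one-electron Hamiltonian (effective Rydberg units);
    donor at the origin, interface at z = -d."""
    self_image = -Q / (2.0 * (z + d))                                 # electron with its own image
    donor_image = 2.0 * Q / (x**2 + y**2 + (z + 2.0 * d)**2) ** 0.5   # electron with the donor's image
    return self_image, donor_image

# Q = -0.5 (SiO2-like barrier): repulsive self-image, attractive donor image
s, di = h_images(-0.5, 1.0, 0.0, 0.0, 0.0)
assert s > 0 and di < 0
# Q = +0.5 (metal-dominated barrier): both signs are reversed
s, di = h_images(+0.5, 1.0, 0.0, 0.0, 0.0)
assert s < 0 and di > 0
```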
For $Q<0$ (corresponding to insulating barriers with a dielectric constant smaller than that of Si), the energy has a shallow minimum for $d\sim 8 a^*$. This minimum arises because the attractive donor-image potential enhances the binding energy but, as $d$ gets smaller, the fact that the electron's wave-function is constrained to $z>-d$ dominates, leading to a strong decrease in the binding energy.~\cite{macmillen84} $Q=0$ corresponds to ignoring the images. $Q=1$ would correspond to having a metal at the interface with an infinitesimally thin insulating barrier to prevent leakage of the wave-function into the metal.~\cite{slachmuylders08} We show results for $Q=0.5$ as an effective value to account for a realistic barrier composed of a thin (but finite) insulator plus a metal. The bulk limit $E=-1 \, Ry^*$ is reached at long distances for all values of $Q$. Adding a second electron to a donor requires the inclusion of the electron-electron interaction terms. The negative-donor Hamiltonian parameters are schematically presented in Fig.~\ref{fig:scheme}, and the total two-electron Hamiltonian is \begin{eqnarray} H&=&H(r_1)+H(r_2)+\frac{2}{r_{12}}\nonumber \\ &-&\frac{4 Q}{\sqrt{(x_1-x_2)^2+(y_1-y_2)^2+(2d+z_1+z_2)^2}}\, , \end{eqnarray} where $H(r_i)$ includes the one-particle images (Eq.~\ref{eq:hamil+images-1e}) and the last term is the interaction between each electron and the other electron's image. In Figs.~\ref{fig:E-EB-b-Q-05} and \ref{fig:E-EB-b-Q05}, we plot $E^{D^-}$ and the binding energy $E_B^{D^-}=E^{D^0}-E^{D^-}$ assuming a trial wave-function $\propto \left[\phi(r_1,a) \phi(r_2,b) + \phi(r_1,b) \phi(r_2,a) \right ] (z_1+d) (z_2+d)$ with variational parameters $a$ and $b$, for $Q=-0.5$ and $Q=0.5$ respectively. The radius of the inner orbital is $a \sim 1a^*$ while $b$, the radius of the outer orbital, depends very strongly on $Q$ and $d$ and is shown in Figs.~\ref{fig:E-EB-b-Q-05}(c) and \ref{fig:E-EB-b-Q05}(c).
We have done calculations for several values of $Q$, ranging from $Q=+1$ to $Q=-1$. The general trends and qualitative behavior of the calculated quantities versus distance $d$ are the same for all $Q > 0$ (effective barrier dominated by the metallic character of the interface materials), and differ from the equally general behavior for $Q \leq 0$ (effective barrier dominated by the insulator material). For $Q \leq 0$ (illustrated for the particular case of $Q=-0.5$ in Fig.~\ref{fig:E-EB-b-Q-05}), $D^-$ is not bound for small $d$ (for $d <4\,a^*$ in the case of $Q=-0.5$). For larger $d$, the binding energy is slightly enhanced over the bulk value. The radius of the outer orbital $b$ is very close to the bulk value for $d \geq 4\,a^*$. For $Q>0$ (illustrated by $Q=0.5$ in Fig.~\ref{fig:E-EB-b-Q05}), $D^-$ is bound at all distances $d$, though the binding energy is smaller than in bulk. The radius of the outer orbital $b$ is very large and increases linearly with $d$ up to $d_{\rm crossover} \sim 14.5 a^*$ [see Fig.~\ref{fig:E-EB-b-Q05} (c)]. For larger $d$, $b$ is suddenly reduced to its bulk value. This abrupt behavior of the $b$ that minimizes the energy is due to two local minima in the energy versus $b$: for $d < d_{\rm crossover}$ the absolute minimum corresponds to a very large (but finite) orbital radius $b$, while for $d > d_{\rm crossover}$ the absolute minimum crosses over to the other local minimum, at $b\sim b_{\rm bulk}$. As $d$ increases from the smallest values and $b$ increases up to the discontinuous drop, a ``kink'' in the D$^-$ binding energy is obtained at the crossover point [see Fig.~\ref{fig:E-EB-b-Q05} (b)], changing its behavior from a decreasing to an increasing dependence on $d$ towards the bulk value as $d\to\infty$. \begin{figure} \resizebox{90mm}{!}{\includegraphics{Q-05.eps}} \caption{(Color online) Results for $Q=-0.5$.
(a) Energy for a neutral donor $D^0$, and for the ground $D^-|1s,1s,s\rangle$ and first excited $D^{-}|1s,2s,t\rangle$ negatively charged donor states. (b) Binding energies of the $D^-$ states. (c) Value of the variational parameter $b$ for the $D^-$ ground state. For $d < 4\,a^*$, $D^- |1s,1s,s\rangle$ is not stable and the energy is minimized with $b \rightarrow \infty$. Bulk values are represented by short line segments on the right.} \label{fig:E-EB-b-Q-05} \end{figure} \begin{figure} \resizebox{90mm}{!}{\includegraphics{Q05.eps}} \caption{(Color online) Same as Fig.~\ref{fig:E-EB-b-Q-05} for $Q=0.5$. } \label{fig:E-EB-b-Q05} \end{figure} \subsection{Charging energy: experimental results} \label{sec:experiments} The charging energy of shallow dopants can be obtained by using the combined results of photoconductivity experiments to determine the $D^-$ binding energy~\cite{dean1967} and direct optical spectroscopy to determine the binding energy of the $D^0$ state.~\cite{ramdas1981} It was shown recently that the charging energy in nanostructures can be obtained directly from charge transport spectroscopy at low temperature.~\cite{sellier2006} Single dopants can be accessed electronically at low temperature in deterministically doped silicon/silicon-dioxide heterostructures~\cite{morello10} and in small silicon nanowire field effect transistors (FinFETs), where the dopants are positioned randomly in the channel.~\cite{sellier2006,lansbergen-NatPhys,pierre2009} Here we will focus in particular on data obtained using the latter structures.~\cite{sellier2006,lansbergen-NatPhys} FinFET devices in which single-dopant transport has been observed typically consist of crystalline silicon wire channels with large patterned contacts fabricated on silicon-on-insulator. Details of the fabrication can be found in Ref.~\onlinecite{sellier2006}.
In this kind of sample, a few dopants may diffuse from the source/drain contacts into the channel during fabrication, modifying the device characteristics both at room~\cite{pierre2009} and low temperatures.~\cite{sellier2006,lansbergen-NatPhys,pierre2009} In some cases, subthreshold transport is dominated by a single dopant.~\cite{lansbergen-NatPhys} Low temperature transport spectroscopy relies on the presence of efficient Coulomb blockade with approximately zero current in the blocked region. This requires the thermal energy of the electrons, $k_BT$, to be much smaller than $U$, a requirement that is typically satisfied for shallow dopants in silicon at liquid helium temperature and below, i.e., $T \leq 4.2$ K. At these temperatures the current is blocked in a diamond-shaped region in a stability diagram, a color-scale plot of the current -- or differential conductance $\text{d}I/\text{d}V_b$ -- as a function of the source/drain voltage, $V_b$, and gate voltage, $V_g$. In Fig.~\ref{fig:experimentaldata}, the stability diagram of a FinFET with only one As dopant in the conduction channel is shown. At small bias voltage ($eV_b \ll k_BT$), increasing the voltage on the gate effectively lowers the potential of the donor such that the different donor charge states can become degenerate with respect to the chemical potentials in the source and drain contacts and current can flow. The difference in gate voltage between the D$^+/$D$^0$ and D$^0/$D$^-$ degeneracy points is related to the charging energy, usually in good approximation, through a constant capacitive coupling of the gate to the donor.~\cite{sellier2006} Generally, a more accurate and direct way to determine the charging energy is to determine the bias voltage at which the Coulomb blockade for a given charge state is lifted for all gate voltages, indicated by the horizontal arrow in Fig.~\ref{fig:experimentaldata}. This method is especially useful when there is efficient Coulomb blockade.
For the particular sample shown in Fig.~\ref{fig:experimentaldata}, $U=36$ meV. This is similar to other values reported in the literature,~\cite{sellier2006,lansbergen-NatPhys,pierre2009} ranging from $\sim 26$ to $\sim 36$ meV. There is therefore a strong reduction in the charging energy compared to the bulk value $U_{\rm bulk} =52$ meV. The ratio between the observed and the bulk value is $\sim 0.6-0.7$. \begin{figure} \resizebox{90mm}{!}{\includegraphics{A17J17_GVStab_300mK.eps}} \caption{(Color online) Differential conductance stability diagram showing the transport characteristics of a single As donor in a FinFET device.~\cite{sellier2006} The differential conductance is obtained by a numerical differentiation of the current with respect to $V_b$ at a temperature of $0.3$ K. The charging energy can be extracted from the stability diagram by determining the bias voltage at which Coulomb blockade of a given charge state (the $D^0$ charge state in this case) is lifted for all $V_g$. The transition point is indicated by the horizontal arrow, leading to a charging energy $U=36$ meV, as given by the vertical double-arrow. The inset shows the electrical circuit used for the measurement.} \label{fig:experimentaldata} \end{figure} Theoretically, we can extract the charging energy from the results in Figs.~\ref{fig:E-EB-b-Q-05} and \ref{fig:E-EB-b-Q05}. The results are shown as a function of $d$ for $Q=-0.5$, $0$, and $0.5$ in Fig.~\ref{fig:U}. A reduction of the charging energy $U$ of the order of the one observed occurs at $d \sim 2 a^*$ for $0.1<Q<1$ (only $Q=0.5$ is shown in the figure). Therefore, the experimentally observed behavior of $U$ is consistent with a predominant influence of the metallic gate material on the D$^-$ energetics. On the other hand, for $Q \leq 0$, $U$ is slightly enhanced as $d$ decreases and, for the smallest values of $d$ considered, the outer orbital is not bound.
At very short distances $d$, the difference in behavior between the insulating barrier ($Q<0$) and the barrier with more metallic character ($Q>0$) lies in the interaction between each electron and the other electron's image, which is repulsive in the former case and attractive in the latter. Although this interaction is small, it is critical in leading to a bound $D^-$ for $Q>0$ and an unbound $D^-$ for $Q<0$ at very short $d$. \begin{figure} \resizebox{90mm}{!}{\includegraphics{U-Q05-Q0-Q-05-2.eps}} \caption{(Color online) Charging energy $U$ of the D$^-$ ground state for three different values of $Q$. For $Q \leq 0$, the charging energy is nearly constant with $d$. For these cases, the negatively charged donor is not bound for small $d$. For $Q > 0$ the charging energy decreases as the donor gets closer to the interface, at relatively small distances $d$. The latter is consistent with the experimental observation.}\label{fig:U} \end{figure} \subsection{$D^-$ first excited state.} It is well established that in three dimensions (with no magnetic field applied) there is only one bound state of $D^-$.\cite{hillPRL77,larsenPRB92} Motivated by the significant changes in the ground state energy produced by nearby interfaces, we explore the possibility of a bound excited state of the doubly occupied single donor. As in helium, we expect the $D^-$ first excited state to consist of promoting one $1s$ electron to the $2s$ orbital.
The spin triplet $|1s,2s,t \rangle$ state (which is orthogonal to the singlet ground state) has a lower energy than $|1s,2s,s\rangle$.~\cite{Bransden-Joachain} As a trial wave-function for $|1s,2s,t \rangle$ we use the antisymmetrized product of the two orbitals $1s$ and $2s$, multiplied by $(z_1+d) (z_2+d)$ to fulfill the boundary condition, namely, \begin{eqnarray} \Psi_{1s,2s,t}&=& N\left[e^{-\frac{r_1}{a}} e^{-\frac{r_2}{2b}}\left(\frac{r_2}{2b}-1\right) - e^{-\frac{r_2}{a}} e^{-\frac{r_1}{2b}}\left(\frac{r_1}{2b}-1\right)\right] \nonumber \\ &\times& (z_1+d) (z_2+d) \, \label{eq:wf2Striplet} \end{eqnarray} with $a$ and $b$ variational parameters and $N$ a normalization factor. Note that, for a particular value of $b$, the outer electron in a $2s$ orbital would have a larger effective orbital radius than in a $1s$ orbital due to the different form of the radial part. For $Q < 0$, the outer orbital is not bound and the energy reduces to that of $D^0$ (see Fig.~\ref{fig:E-EB-b-Q-05}). Surprisingly, for $Q > 0$ the $|1s,2s,t \rangle$ state is bound and, as $d$ increases, its energy tends very slowly to the $D^0$ energy, as shown in Fig.~\ref{fig:E-EB-b-Q05}. Moreover, its binding energy is roughly the same as that of the ground state $|1s,1s,s \rangle$ for $d \leq 15 a^*$, another unexpected result. The existence of a bound $D^-$ triplet state opens the possibility of performing coherent rotations involving this state and the nearby singlet ground state. \section{Discussions and conclusions} \label{sec:discussion} Our model for $D^-$ centers involves a number of simplifications: (i) the mass anisotropy is not included; (ii) the multivalley structure of the conduction band of Si is not considered; (iii) correlation terms in the trial wave-function are neglected. These assumptions aim to decrease the number of variational parameters while allowing many of the integrals to be solved analytically.
Qualitatively, regarding assumption (i), it has been shown that inclusion of the mass anisotropy gives an increase of the binding energy for both $D^0$ and $D^-$ (see Ref.~\onlinecite{inouePRB08}); regarding (ii), inclusion of the multivalley structure of the conduction band together with the anisotropy of the mass would lead to an enhancement of the binding energy of $D^-$ due to the possibility of intervalley configurations in which the electrons occupy valleys in ``perpendicular'' orientations (with perpendicularly oblate wave-functions), thus leading to a strong reduction of the repulsive electron-electron interaction.\cite{larsenPRB81,inouePRB08} Regarding point (iii), more general trial wave-functions for $D^-$ have been proposed in the literature. For example, the one suggested by Chandrasekhar models correlation effects by multiplying Eq.~\ref{eq:wfH-} by a factor $(1+Cr_{12})$,~\cite{chandrasekarRMP44} where $C$ is an additional variational parameter. In the bulk, the effect of this correlation factor is to increase the binding energy of $D^-$ from $0.027 \, Ry^*$ if $C=0$ (our case) to $0.0518 \, Ry^*$.~\cite{Bethe-Salpeter} We conclude that all three simplifications assumed in our model lead to an underestimation of the binding energy of $D^-$; thus, the values reported here are to be taken as lower bounds for it.
An important difference between our theory and the experiments is that we assume a flat-band condition, while the actual devices have a built-in electric field due to band-bending at the interface between the gate oxide and the p-doped channel.~\cite{lansbergen-NatPhys,rahman2009} If an electric field were included, the electron would feel a stronger binding potential (which results from the addition of the donor potential and the triangular potential well formed at the interface), leading to an enhancement of the binding energy of $D^0$ and $D^-$ (with an expected strong decrease of the electron-electron interaction in this case for configurations with one electron bound to the donor at $z=0$ and the other pulled to the interface at $z=-d$). The presented results are dominated by the presence of a barrier, which constrains the electron to the $z>-d$ region, and by the modification of the screening due to the charge induced at the interface, a consequence of the dielectric mismatch between Si and the barrier material. This is included by means of image charges. Effects of quantum confinement and dielectric confinement\cite{delerue-lannoo,diarra07} are not considered here: we believe these are not relevant in the FinFETs under study. Although the conduction channel is very narrow ($4$ nm$^2$),\cite{sellier07} the full cross section of the Si wire is several tens of nm, and quantum and dielectric confinement are expected to be effective only for typical device sizes under $10$ nm. Both quantum and dielectric confinement lead to an enhancement of the binding energy with respect to the bulk, which is the opposite of what we obtain for small $d$. Neutral double donors in Si, such as Te or Se, have been proposed for spin readout via spin-to-charge conversion~\cite{kane00PRB} and for spin coherence time measurements.\cite{calderonDD} The negative donor $D^-$ also constitutes a two-electron system, shallower than Te and Se.
In this context, investigation of the properties of $D^-$ shallow donors in Si that affect quantum operations, for example their adequacy for implementing spin measurement via a spin-to-charge conversion mechanism,~\cite{kane00PRB,koppens05} deserves special attention. Our theoretical study indicates that, very near an interface (for $d< 4a^*$), the stability of D$^-$ against dissociation requires architectures that yield an effective dielectric mismatch $Q>0$, a requirement for any device involving operations or gates based on D$^-$ bound states. In conclusion, we have presented a comprehensive study of the effects of interface dielectric mismatch on the charging energy of nearby negatively charged donors in Si. In our study, the theoretical treatment is based on a single-valley effective mass formalism, while transport spectroscopy experiments were carried out in FinFET devices. The experiments reveal a strong reduction of the charging energy of isolated As dopants in FinFETs as compared to the bulk value. The calculations provide, besides the charging energy, the binding energy of the donor in three different charge states as a function of the distance between the donor and an interface with a barrier. The boundary problem is solved by including the image charges, whose signs depend on the difference between the dielectric constant of Si and that of the barrier material [the dielectric mismatch, quantified by the parameter $Q$ defined below Eq.~(\ref{eq:beforeQ})]. Typically, thin insulating layers separate the Si channel, where the dopants are located, from the metallic gates needed to control the electric fields applied to the device. This heterostructured barrier leads to an effective screening with predominance of the metallic components, as compared to a purely SiO$_2$ thick layer, for which $Q<0$.
Assuming a barrier material with an effective dielectric constant larger than that of Si (in particular, $Q=0.5$ corresponds to $\epsilon_{\rm barrier} = 3\epsilon_{\rm Si}$), we obtain a reduction of the charging energy $U$ relative to $U_{\rm bulk}$ at small $d$, consistent with the experimental observation. We did not attempt quantitative agreement between the theoretical and experimental values presented here, but merely to reproduce the right trends and clarify the underlying physics. It is clear from our results that more elaborate theoretical work on interface effects in donors, beyond the simplifying assumptions made here, should take into account the effective screening parameter as a combined effect of the nearby barrier material and the adjacent metallic electrodes. From our calculations and experimental results, we conclude that the presence of metallic gates tends to increase $\epsilon_{\rm barrier}^{\rm effective}$ above $\epsilon_{\rm Si}$, leading to $Q>0$ and reducing the charging energies. \acknowledgments M.J.C. acknowledges support from Ram\'on y Cajal Program and FIS2009-08744 (MICINN, Spain). B.K. acknowledges support from the Brazilian entities CNPq, Instituto Nacional de Ciencia e Tecnologia em Informa\c c\~ao Quantica - MCT, and FAPERJ. J.V., G.P.L, G.C.T and S.R. acknowledge the financial support from the EC FP7 FET-proactive NanoICT projects MOLOC (215750) and AFSiD (214989) and the Dutch Fundamenteel Onderzoek der Materie FOM. We thank N. Collaert and S. Biesemans at IMEC, Leuven for the fabrication of the dopant device.
\section{Introduction} \label{sec:1} Suppose we observe $p$-dimensional Gaussian vectors $x_1, \ldots, x_n \stackrel{i.i.d.}{\sim} \mathcal{N}(0, \Sigma)$, with $\Sigma = \Sigma_{p}$ the underlying $p$-by-$p$ population covariance matrix. Traditionally, to estimate $\Sigma$, we form the empirical (sample) covariance matrix $S = S_n = \frac{1}{n} \sum_{i=1}^n x_i x_i'$; this is the maximum likelihood estimator. Under the classical asymptotic framework that statisticians have used for centuries, where $p$ is fixed and $n \rightarrow \infty$, $S$ is a consistent estimator of $\Sigma$ (under any matrix norm). In recent decades, many impressive random matrix-theoretic studies consider $p = p_n$ tending to infinity with $n$. Generally, these studies focus on {\it proportional growth}, where the sample size and dimension are comparable: \begin{align} n, p \rightarrow \infty \, , \qquad \gamma_n = \frac{p}{n} \rightarrow \gamma > 0 \, . \label{pro} \end{align} Under this framework, certain beautiful and striking mathematical phenomena are elegantly brought to light. A striking deliverable for statisticians, particularly, is the discovery that in such a high-dimensional setting, the maximum likelihood estimator $S$ is an inconsistent estimator of $\Sigma$ (under various matrix norms). \subsection{Standard Covariance Estimation in the Proportional Regime} Inconsistency of $S$ under proportional growth stems from the following phenomena, not present in classical fixed-$p$ statistics. These results are due to Marchenko and Pastur \cite{MP67}, Baik, Ben Arous, and P\'ech\'e \cite{BBP}, Baik and Silverstein \cite{BkS04}, and Paul \cite{P07}. \begin{enumerate} \item {\it Eigenvalue spreading}. Assume proportional growth with $\gamma \in (0, 1]$. In the standard normal case $\Sigma = I_{p}$, the $p$-dimensional identity matrix, the spectral measure of $S$ converges weakly almost surely to the Marchenko-Pastur distribution with parameter $\gamma$. 
This distribution, or \textit{bulk}, is non-degenerate, absolutely continuous, and has support $[(1-\sqrt{\gamma})^2, (1+\sqrt{\gamma})^2] = [\lambda_-(\gamma),\lambda_+(\gamma)] $. Intuitively, sample eigenvalues, rather than concentrating near population eigenvalues (which, in the identity case, are all simply $1$), spread out across a fixed-size interval, preventing consistency of $S$ for $\Sigma$. \item {\it Eigenvalue inflation}. We can quantify asymptotic bias in extreme eigenvalues more precisely. Consider Johnstone's {\it spiked covariance model}, where all but finitely many eigenvalues $\ell_1, \ldots, \ell_{p}$ of $\Sigma$ equal one: \begin{align*} \hspace{-1cm} [\textbf{I}]\hspace{1.2cm} \ell_1 > \cdots > \ell_r > 1\, , \qquad \ell_{r+1} = \cdots = \ell_{p} = 1 \, . \end{align*} Under this model, the number of so-called ``spiked'' eigenvalues $r$ is fixed and independent of $n$. The spiked eigenvalues themselves are also held constant. Let $\lambda_{i} = \lambda_{i, n}$ denote the eigenvalues of $S$, ordered decreasingly $\lambda_1 \geq \cdots \geq \lambda_{p}$. As it turns out, $\lambda_1, \ldots, \lambda_r$---the ``leading'' eigenvalues---do not converge to their population counterparts $\ell_1, \ldots, \ell_r$; rather, they are shifted upwards. Under (\ref{pro}), for each fixed $i \geq 1$, \begin{align} \phantom{\,.} \lambda_i \xrightarrow{a.s.} \lambda(\ell_i) \, , \label{1} \end{align} where $\lambda(\ell) \equiv \lambda(\ell, \gamma)$ is the ``eigenvalue mapping'' function, given piecewise by \begin{align} \lambda(\ell) = \begin{dcases} \ell + \frac{\gamma \ell}{\ell-1} & \ell > 1 + \sqrt{\gamma}\\ (1 + \sqrt{\gamma})^2 & \ell \leq 1 + \sqrt{\gamma} \end{dcases} \, . \label{bias_func} \end{align} The transition point $\ell_+(\gamma) \equiv 1 + \sqrt{\gamma}$ between the two behaviors is known as the Baik-Ben Arous-P\'ech\'e (BBP) transition. Below the transition, $1 < \ell \leq \ell_+(\gamma)$, ``weak signal'' leads to a limiting eigenvalue independent of $\ell$.
For fixed $i$ such that $\ell_i \leq \ell_+(\gamma)$, $\lambda_i$ tends to $\lambda_+(\gamma)$, the upper bulk edge of the Marchenko-Pastur distribution with parameter $\gamma$. Above the transition, $ \ell_+(\gamma) < \ell$, ``strong signal'' produces an empirical eigenvalue dependent on $\ell$, though with upward bias: $\lambda(\ell) > \ell$. This asymptotic bias in extreme eigenvalues is a further cause of inconsistency of $S$ in several loss measures, obviously including operator norm. \item {\it Eigenvector rotation.} The eigenvectors $v_1, \ldots, v_{p}$ of $S_n$ do not align asymptotically with the corresponding eigenvectors $u_1, \ldots, u_{p}$ of $\Sigma$. Under proportional growth, the limiting angles are deterministic, and obey: \begin{align} | \langle u_i, v_j \rangle | \xrightarrow{a.s.} \delta_{ij} \cdot c(\ell_i) \, , \hspace{2cm} 1 \leq i, j \leq r \, ; \label{4} \end{align} here the ``cosine'' function $c(\ell) = c(\ell, \gamma)$ is given piecewise by \begin{align} \phantom{\,.} c^2(\ell) = \begin{dcases} \frac{1 - \gamma/(\ell-1)^2}{1+\gamma/(\ell-1)} & \ell > 1 + \sqrt{\gamma}\\ 0 & \ell \leq 1 + \sqrt{\gamma} \end{dcases} \, . \label{2} \end{align} \noindent Again, a phase transition occurs at $\ell_+(\gamma)$. The misalignment of empirical and theoretical eigenvectors further contributes to inconsistency in the spiked case where not all population eigenvalues are equal; this is easiest to see for Frobenius loss. \end{enumerate} \subsection{Shrinkage Estimation} Charles Stein proposed {\it eigenvalue shrinkage} as an alternative to traditional covariance estimation \cite{S86, S56}. Let $S = V \Lambda V'$ denote an eigendecomposition, where $V$ is orthogonal and $\Lambda = \text{diag}(\lambda_1, \ldots, \lambda_{p})$. Let $\eta: [0, \infty) \rightarrow [0, \infty)$ denote a ``shrinkage'' function or ``rule'' and $\eta(\Lambda) = \text{diag}(\eta(\lambda_1), \ldots, \eta(\lambda_{p}))$.
Estimators of the form $\widehat \Sigma_\eta = V \eta(\Lambda) V'$ are studied in hundreds of papers; see for example work of Donoho, Gavish, and Johnstone \cite{DGJ18} and Ledoit and Wolf \cite{LW20, LW202}. Note that while there is indeterminism in the choice of eigenvectors $V$, $\widehat \Sigma_\eta$ is well defined. The standard sample covariance estimator $S$ is obtained by the identity ``shrinker'' $\eta(\lambda) = \lambda$, no shrinkage at all, while ``effective'' shrinkers generally act as contractions, obeying $|\eta(\lambda) - 1| < |\lambda - 1|$. In the spiked model, a well-chosen shrinker minimizes the errors induced by eigenvalue spreading and eigenvector rotation. Working under the spiked model and proportional growth, \cite{DGJ18} considers numerous loss functions $L$ and derives asymptotically unique admissible shrinkers $\eta^*( \cdot | L)$, in many cases far outperforming $S$. \subsection{Which Choice of Asymptotic Framework?} The modern ``big data'' explosion exhibits all manner of ratios of dimension to sample size. Indeed, there are internet traffic datasets with billions of samples and thousands of dimensions, and computational biology datasets with thousands of samples and millions of dimensions. To consider only asymptotic frameworks where row and column counts are roughly balanced, as they are under proportional growth, is a restriction, and perhaps even an obstacle. Although proportional growth analysis has yielded many valuable insights, it also raises pressing doubts in applications. Consider the practitioner's ``scaling conundrum'': in a given application, with one dataset of size $(n_\text{data}, p_\text{data})$, how can a practitioner know whether the proportional growth model is applicable? Implicit in the choice of asymptotic framework is an assumption on how the data arises in a sequence of growing datasets; this choice has consequences.
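The proportional-growth formulas (\ref{bias_func}) and (\ref{2}) recalled above are easy to check numerically. The following illustrative Python sketch (numpy assumed; not from the paper) verifies continuity of the eigenvalue mapping at the BBP transition, the upward bias above it, and the vanishing cosine at and below it.

```python
import numpy as np

def lam(ell, g):
    """Eigenvalue mapping lambda(ell, gamma), piecewise as in the text (ell > 1)."""
    if ell > 1.0 + np.sqrt(g):
        return ell + g * ell / (ell - 1.0)
    return (1.0 + np.sqrt(g))**2

def cos2(ell, g):
    """Squared limiting cosine c^2(ell, gamma), piecewise as in the text (ell > 1)."""
    if ell <= 1.0 + np.sqrt(g):
        return 0.0
    return (1.0 - g / (ell - 1.0)**2) / (1.0 + g / (ell - 1.0))

g = 0.5
ell_plus = 1.0 + np.sqrt(g)          # BBP transition point
bulk_edge = (1.0 + np.sqrt(g))**2    # upper Marchenko-Pastur edge

# lambda is continuous at the transition and equals the bulk edge there
assert abs(lam(ell_plus + 1e-9, g) - bulk_edge) < 1e-6
# above the transition, eigenvalue inflation: lambda(ell) > ell
assert all(lam(l, g) > l for l in (2.0, 3.0, 10.0))
# the cosine vanishes at and below the transition and is positive above it
assert cos2(ell_plus, g) == 0.0 and cos2(2.0, g) > 0.0
```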
The practitioner may view the data as arising from the fixed-$p$ sequence $(n,p_{\text{data}})$ with only $n$ varying. If so, the practitioner could appeal to long tradition and estimate $\Sigma$ by $S$. On the other hand, viewing the dataset size as part of a sequence $\big(n,\frac{p_{\text{data}}}{n_{\text{data}}} \cdot n\big)$, with constant aspect ratio $\gamma = p_\text{data} / n_\text{data}$, the practitioner might follow recent trends in the theoretical literature and apply eigenvalue shrinkage. Current theory offers little guidance on which perspective is more appropriate, particularly if $p_\text{data}$ is large yet relatively small compared to $n_\text{data}$. And---importantly---these two are not the only possible scaling relations that could produce the given $(n_\text{data}, p_\text{data})$. \subsection{Disproportional Growth} Within the full spectrum of power law scalings $p \asymp n^\alpha$, $\alpha \geq 0$, the much-studied proportional growth limit corresponds to the {\it single case} $\alpha=1$. The classical $p$-fixed, $n$ growing relation again corresponds to a single case, $\alpha=0$. This paper considers {\it disproportional growth}, encompassing {\it everything else}: \[ n, p \rightarrow \infty \, , \qquad \gamma_n = p/n \rightarrow 0 \text{ or } \infty \, . \] Note that all power law scalings $0 < \alpha < \infty$, $\alpha \neq 1$ are included, as well as non-power law scalings, such as $p = \log n$ or $p = e^n$. The disproportional growth framework splits naturally into instances; to describe them, we use terminology that assumes the underlying data matrices $X = X_n$ are $p \times n$. \begin{enumerate} \item The ``wide matrix'' disproportional limit obeys: \begin{equation} \label{tall} n, p \rightarrow \infty \, , \qquad \gamma_n = p/n \rightarrow 0 . \end{equation} In this limit, which includes power laws with $\alpha \in (0,1)$, $n$ is much larger than $p$, and yet we are outside the classical, fixed-$p$ large-$n$ setting.
\item The ``tall matrix'' disproportional limit involves arrays with many more rows than columns; formally: \begin{equation} \label{wide} n, p \rightarrow \infty \, , \qquad \gamma_n = p/n \rightarrow \infty \, . \end{equation} This limit, including power laws with $\alpha \in (1, \infty)$, admits many additional scalings of numbers of rows to columns. \end{enumerate} Properties of covariance matrices in the two disproportionate limits are closely linked. Indeed, the non-zero eigenvalues of $XX'$ and $X'X$ are equal. In the standard normal case $\Sigma = I$, $XX'$ and $X'X$ are the unnormalized sample covariance matrices of wide and tall datasets, respectively. Elaborating on this, for any sequence of tall matrices with $\gamma_n \rightarrow \infty$, there is an accompanying sequence of wide matrices with $\gamma_n \rightarrow 0$ and related spectral properties. \subsection{Estimation as $\gamma_n \rightarrow 0$} The $\gamma_n \rightarrow 0$ regime seems, at first glance, very different from the proportional case, $\gamma_n \rightarrow \gamma > 0$. Neither eigenvalue spreading nor eigenvalue inflation is apparent: under spikes {\bf I}, empirical eigenvalues converge to their population counterparts: $\lambda_i \rightarrow \ell_i$, almost surely. Eigenvalue shrinkage may therefore seem irrelevant or unhelpful: $S$ itself is a consistent estimator of $\Sigma $ in Frobenius and operator norms. To the contrary, we will show that well-designed eigenvalue shrinkage confers substantial relative gains over standard covariance estimation, paralleling gains seen earlier under proportional growth. Eigenvalue spreading {\it does} occur as $\gamma_n \rightarrow 0$, though only by $O(\sqrt{\gamma_n})$. Accordingly, introduce the quantities \[ \tlam = \frac{\lambda -1}{\sqrt{\gamma_n}} \, , \qquad \,\,\, \overset{\leftharpoonup}{\ell} = \frac{\ell - 1 }{\sqrt{\gamma_n}} \, , \] measuring leading empirical and population eigenvalues on a finer scale. 
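Both phenomena are easy to check numerically. The following illustrative sketch (our own, not from the paper; the sizes $n = 1500$, $p = 30$ and the random seed are arbitrary) simulates the standard normal case $\Sigma = I$: the non-zero eigenvalues of $XX'$ and $X'X$ coincide, and the normalized eigenvalues spread across roughly $[-2, 2]$ on the finer scale just introduced.

```python
# Illustrative simulation (parameters chosen arbitrarily, not from the paper).
# For Sigma = I with p << n, raw sample eigenvalues cluster near 1, while the
# normalized eigenvalues (lambda - 1)/sqrt(gamma_n) spread over roughly [-2, 2],
# the support of the semicircle law.
import numpy as np

rng = np.random.default_rng(0)
n, p = 1500, 30
gamma = p / n                          # gamma_n = 0.02

X = rng.standard_normal((p, n))        # p x n data matrix, Sigma = I
S = X @ X.T / n                        # sample covariance
lam = np.linalg.eigvalsh(S)
lam_norm = (lam - 1.0) / np.sqrt(gamma)

print(lam.min(), lam.max())            # all eigenvalues close to 1
print(lam_norm.min(), lam_norm.max())  # normalized eigenvalues span ~[-2, 2]

# The non-zero eigenvalues of XX' and X'X coincide, linking the wide
# (gamma_n -> 0) and tall (gamma_n -> infinity) limits.
e_wide = np.sort(np.linalg.eigvalsh(X @ X.T))
e_tall = np.sort(np.linalg.eigvalsh(X.T @ X))[-p:]
print(np.allclose(e_wide, e_tall))
```

In runs of this kind the raw eigenvalues differ from one only in the second decimal place, while their normalized counterparts nearly fill the semicircle support.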
We adapt the spiked covariance model to the $\gamma_n \rightarrow 0$ setting, with spiked eigenvalues $\ell_i = \ell_{i;n} = 1 + \overset{\leftharpoonup}{\ell}_i \sqrt{\gamma_n}$ varying with $n$ while maintaining $\overset{\leftharpoonup}{\ell}_i$ fixed. On this scale, as we shall see, in addition to eigenvalue spreading, eigenvalue inflation and eigenvector rotation occur as well. The consequences of such high-dimensional phenomena are similar to those uncovered in the proportional setting. For many choices of loss function, $S$ is outperformed substantially by well-designed shrinkage functions, particularly near the phase transition at $\ell_+(\gamma_n)$. We will consider a range of loss functions $L$, deriving for each a shrinker $\eta^*( \cdot | L)$ which is optimal as $\gamma_n \rightarrow 0$. Analogous results hold as $\gamma_n \rightarrow \infty$. \subsection{Estimation in the Spiked Wigner Model} At the heart of our analysis is a connection to the {\it spiked Wigner model}. Let $W = W_n$ denote a {\it Wigner matrix}, a real symmetric matrix of size $n \times n$ with independent entries on the upper triangle distributed as $\mathcal{N}(0,1)$. Let $\Theta = \Theta_n$ denote a symmetric $n \times n$ ``signal'' matrix of fixed rank $r$; under the {\it spiked Wigner model} observed data $Y = Y_n$ obeys \begin{align} \phantom{\,. }Y = \Theta + \frac{1}{\sqrt{n}} W \,. \label{model2} \end{align} Let $\theta_1 \geq \cdots \geq \theta_{r_+} > 0 > \theta_{r_+ + 1} \geq \dots \geq \theta_{r}$ denote the non-zero eigenvalues of $\Theta$, so there are $r_+$ positive values and $r_- = r-r_+$ negative. A standard approach to recovering $\Theta$ from noisy data $Y$ uses the eigenvalues of $Y$, $\lambda_1(Y) \geq \cdots \geq \lambda_n(Y)$, and the associated eigenvectors $v_1, \ldots, v_n$: \[ \widehat \Theta^{r} = \sum_{i=1}^{r_+} \lambda_i(Y) v_i v_i' + \sum_{i=n-r_-+1}^n \lambda_i(Y) v_i v_i' \, . 
\] As it turns out, $\widehat \Theta^r$ can be improved upon substantially by estimators of the form \begin{align} \phantom{\,.} \widehat\Theta_\eta = \sum_{i=1}^n \eta(\lambda_i(Y)) v_i v_i' \, , \end{align} with $\eta: \mathbb{R} \rightarrow \mathbb{R}$ a well-chosen shrinkage function. Optimal shrinkage formulas for the spiked Wigner model are equivalent to optimal formulas for covariance estimation in the disproportionate, $\gamma_n \rightarrow 0$ limit. This is not mere coincidence. As $\gamma_n \rightarrow 0$, the spectral measure of $\gamma_n^{-1/2}(S - I_{p})$ converges (weakly almost surely) to the semicircle law (Bai and Yin \cite{BY88}); this is precisely the limiting spectral measure of spiked Wigners $Y$. Moreover, we show below that the driving theoretical quantities in each setting---leading eigenvalue inflation, eigenvector rotation, optimal shrinkers, and losses---are all ``isomorphic.'' \subsection{Our Contributions} \label{1.7} Given this background, we now state our contributions: \begin{enumerate} \item We study the disproportionate $\gamma_n \rightarrow 0$ limit with an eye towards establishing and using analogues of (\ref{1})-(\ref{2}). In the critical scaling of this regime, spiked eigenvalues decay towards one as $1 + \overset{\leftharpoonup}{\ell} \sqrt{\gamma_n}$, where $\overset{\leftharpoonup}{\ell}$ is a new formal parameter. Analogs of (\ref{1})-(\ref{2}) as a function of $\overset{\leftharpoonup}{\ell}$ are presented in Lemma \ref{thrm:spiked_covar} below. On this scale, the analog of the BBP phase transition---the critical spike strength above which leading sample eigenvectors correlate with population eigenvectors---now occurs at $\overset{\leftharpoonup}{\ell} = 1$. These formulas and concepts form the basis for our development of new optimal eigenvalue shrinkers. Results equivalent to Lemma \ref{thrm:spiked_covar} were previously established under slightly different assumptions by Bloemendal et al.\ \cite{BKYY16}. 
We provide a simple, direct argument that permits completely general rates at which $n, p \rightarrow \infty$ while $\gamma_n \rightarrow 0$. Analogous results hold as $\gamma_n \rightarrow \infty$, explored in later sections. \item New sets of asymptotically optimal formulas for shrinkage of eigenvalues are found under fifteen canonical loss functions. While all losses vanish as $\gamma_n \rightarrow 0$ and diverge as $\gamma_n \rightarrow \infty$, optimal shrinkage provides improvement by multiplicative factors. Under proportional growth, there is a distinct optimal shrinker for each of the fifteen losses considered. In the disproportionate regime, there are three distinct shrinkers, a simplification. These formulas are the limits of proportional shrinkers as the aspect ratio $\gamma_n$ vanishes or diverges and population spikes decay towards one or diverge. We derive closed forms for the relative gain of optimal shrinkage versus classical no-shrinkage. In addition, we find optimal hard thresholding levels under each loss. \item The $n, p \rightarrow \infty$, $\gamma_n \rightarrow 0$ limit is dissimilar to classical fixed-$p$ statistics: for any rate $\gamma_n \rightarrow 0$, non-trivial eigenvalue shrinkage is optimal. \item A consequence of our results is that the optimal shrinkage formulas inherited from the proportional regime, with $\gamma$ replaced by $\gamma_n$, achieve optimal performance in the disproportionate limits. These shrinkers are harmless in the classical fixed-$p$ limit. A practitioner with a dataset of size $(n_{\text{data}}, p_{\text{data}})$ implementing the proportionate shrinkers of Donoho, Gavish, and Johnstone \cite{DGJ18} would substitute $p_{\text{data}}/n_{\text{data}}$ for $\gamma$. We prove that this heuristic is asymptotically optimal in the disproportionate limits. 
Thus, for a given loss function, we provide a single shrinkage function to be used with the current aspect ratio that achieves optimal performance in any asymptotic embedding of $(n_{\text{data}}, p_{\text{data}})$. A practitioner need not know the asymptotic embedding (unknown, of course, in practice) to perform optimal shrinkage. \item We obtain asymptotically optimal shrinkage formulas for the spiked Wigner model. As the empirical spectral distributions of the spiked covariance as $\gamma_n \rightarrow 0$ and the spiked Wigner converge to a common limit---the semicircle law---optimal shrinkage formulas are equivalent after a change of variables. \end{enumerate} Our assumption that the non-spiked population eigenvalues all equal one is for convenience. In the case of an arbitrary noise level, where $\Sigma$ is a low-rank perturbation of $\sigma^2 I$, the procedures herein may be appropriately scaled. If the noise level $\sigma^2$ is unknown, it is consistently estimated by the median eigenvalue of $S$ as $\gamma_n \rightarrow 0$. As $\gamma_n \rightarrow \infty$, the median of the non-zero eigenvalues suffices. Asymptotic optimality extends to this setting by the continuity of the provided shrinkers. Knowledge of the number of spikes $r$ is similarly unnecessary. In practice, optimal shrinkers may be applied to each empirical eigenvalue. A rigorous proof of this claim is given in Section 7 of \cite{DGJ18}. Similarly, the rank and variance assumptions in (\ref{model2}) may be relaxed. \section{The Fixed-Spike, $\gamma_n \rightarrow \gamma$ Limit} \label{sec-PGLim} We briefly review certain tools and background concepts. \begin{definition} As discussed in Section \ref{1.7}, the model rank $r$ is assumed known. 
It therefore makes sense to employ {\it rank-aware covariance shrinkage estimators}: for a shrinkage function $\eta: [0, \infty) \rightarrow [0, \infty)$, \begin{align} \widehat{\Sigma}_\eta = \widehat{\Sigma}_{\eta, n,r} & = \sum_{i=1}^r \eta(\lambda_i) v_i v_i' + \sum_{i=r+1}^p v_i v_i' \nonumber \\ & = \sum_{i=1}^r (\eta(\lambda_i) - 1) v_i v_i' + I \, . \label{000} \end{align} In the particular case $\eta(\lambda) \equiv \lambda$---no shrinkage---we may instead write $S^r$ rather than $\widehat{\Sigma}_{\lambda, n,r}$. \end{definition} \begin{definition} Let $\|\cdot\|_F$, $\|\cdot \|_{op}$, and $\|\cdot\|_*$ respectively denote the Frobenius, operator, and nuclear matrix norms. We consider estimation under 15 loss functions, each formed by applying one of the 3 matrix norms to one of 5 {\it pivots}. By pivot, we mean a matrix-valued function $\Delta(A,B)$ of two real positive definite matrices $A, B$; we consider specifically: \begin{align*} \Delta_1 &= A - B \,, & \Delta_2 &= A^{-1} - B^{-1} \,, & \Delta_3 &= A^{-1} B - I \,, \\ \Delta_4 &= B^{-1} A - I \,, & \Delta_5&= A^{-1/2} B A^{-1/2} - I \, . && \end{align*} We apply each norm to each of the pivots, obtaining for $k=1, \ldots, 5$, the loss functions: \begin{align*} L_{F,k}(\Sigma, \widehat \Sigma) &= \|\Delta_k(\Sigma, \widehat \Sigma) \|_F \,, & L_{O,k}(\Sigma, \widehat \Sigma) &= \|\Delta_k(\Sigma, \widehat \Sigma) \|_{op} \,, & L_{N,k}(\Sigma, \widehat \Sigma) &= \|\Delta_k(\Sigma, \widehat \Sigma) \|_* \, . \end{align*} \end{definition} \begin{lemma} \label{lem:dg18_7} (Lemma 7 of \cite{DGJ18}) In the proportional limit (\ref{pro}) of the spiked model {\normalfont \bf I}, suppose $\eta(\lambda_i)$ has almost sure limit $\eta_i$, $1 \leq i \leq r$. 
Each loss $L_{\star,k}$ converges almost surely to a deterministic limit: \[ L_{\star,k}(\Sigma, \widehat \Sigma_{\eta}) \xrightarrow{a.s.} {\cal L}_{\star,k}((\ell_i)_{i=1}^r,(\eta_i)_{i=1}^r) , \qquad \star \in \{ F,O,N\}, \quad 1 \leq k \leq 5 . \] The asymptotic loss is sum- or max- decomposable into $r$ terms deriving from non-unit spiked eigenvalues. The terms involve matrix norms applied to pivots of $2 \times 2$ matrix expressions: \begin{align*} & A(\ell) = \begin{bmatrix} \ell & 0 \\ 0 & 1 \end{bmatrix} \,, & B(\eta,c) = I_2 + (\eta-1) \begin{bmatrix} c^2 & c s \\ c s & s^2 \end{bmatrix} \,, \end{align*} where $s^2 = 1 -c^2$. With $\ell_i$ denoting a spiked eigenvalue and $c(\ell_i)$ the limiting cosine in (\ref{4}), the decompositions are \begin{align*} \phantom{\,,} {\cal L}_{F, k}((\ell_i)_{i=1}^r, (\eta_i)_{i=1}^r) & = \bigg( \sum_{i=1}^r \big[ L_{F, k} \big(A(\ell_i),B(\eta_i,c(\ell_i)) \big) \big]^2 \bigg)^{1/2} \,, \\ {\cal L}_{O,k}((\ell_i)_{i=1}^r, (\eta_i)_{i=1}^r) & = \max_{1 \leq i \leq r} L_{O,k} \big( A(\ell_i),B(\eta_i,c(\ell_i) )\big) \, , \\ {\cal L}_{N,k}((\ell_i)_{i=1}^r,(\eta_i)_{i=1}^r) & = \sum_{i=1}^r L_{N,k} \big( A(\ell_i),B(\eta_i,c(\ell_i)) \big) \, . \end{align*} \end{lemma} For each of the 15 losses studied here, and several others, \cite{DGJ18} derives---under proportional growth $\gamma_n \rightarrow \gamma > 0$---a shrinker $\eta^*(\lambda | L) = \eta^*(\lambda| L, \gamma)$ minimizing the asymptotic loss ${\cal L}$. Optimal shrinkers depend on the ``inverse" of the eigenvalue mapping (\ref{bias_func}): \begin{align*} \ell(\lambda) = \ell(\lambda, \gamma) = \begin{dcases} \frac{\lambda +1 - \gamma + \sqrt{(\lambda - 1 - \gamma)^2 - 4 \gamma}}{2} & \lambda > \lambda_+(\gamma) \\ \ell_+(\gamma) & \lambda \leq \lambda_+(\gamma) \end{dcases} \, . 
\end{align*} Although $\ell_i$ is unobserved, this inverse provides a consistent estimator of $\ell_i$ above the BBP transition: \[ \phantom{\,.} \ell(\lambda_i) \xrightarrow[]{a.s.} \ell_i \, , \hspace{2cm} \ell_i > \ell_+(\gamma) \,. \] In most cases, shrinkers are explicit in terms of $\ell$, $c$, and $s$. For example, for $L_{F,1}$, the optimal shrinker is $\eta^*(\lambda | L_{F,1}) = \ell(\lambda) \cdot c^2(\ell(\lambda)) + s^2(\ell(\lambda))$, while for $L_{O,1}$, it is simply $\eta^*(\lambda | L_{O,1}) = \ell(\lambda)$; a list of 18 such closed forms can be found in \cite{DGJ18}. For notational lightness, we may write $\eta^*(\lambda | L_{F,1}) = \ell \cdot c^2 + s^2$, or $\eta^*(\lambda | L_{O,1}) = \ell$. \section{Covariance Estimation as $\gamma_n \rightarrow 0$} \subsection{The Variable-Spike, $\gamma_n \rightarrow 0$ Limit} \label{sec-DPGLim} We now formalize our earlier informal discussion of the asymptotic limit $\gamma_n \rightarrow 0$. Consider the normalized empirical eigenvalues defined by \begin{equation} \label{def-tlam} \tlam_i = \tlam_{i,n} = \frac{\lambda_i-1}{\sqrt{\gamma_n}}\,, \hspace{2cm} 1 \leq i \leq p \, . \end{equation} This transformation ``spreads" eigenvalues tightly grouped near $1$. In the standard normal case $\Sigma = I$, the empirical distribution of $(\tlam_i)_{i=1}^{p}$ converges weakly almost surely to the semicircle law with support $[-2,2]$ (Bai and Yin \cite{BY88}). We generalize the spiked model to allow spiked eigenvalues $(\ell_i)_{i=1}^r$ to vary with $n$. Matching (\ref{def-tlam}), we consider normalized spiked eigenvalues $\gamma_n^{-1/2}(\ell_i-1)$ and assume their convergence to limits $(\overset{\leftharpoonup}{\ell}_i)_{i=1}^r \in (0,\infty)$. 
That is, we study spiked eigenvalues of the form \begin{align*} \hspace{-.1cm} [\textbf{II}]\hspace{1.5cm} \ell_{i} = \ell_{i, n} = 1 + \overset{\leftharpoonup}{\ell}_i (1+ o(1)) \sqrt{\gamma_n} \, , \hspace{2cm} 1 \leq i \leq r \, , \end{align*} where $(\overset{\leftharpoonup}{\ell}_i)_{i=1}^r$ are constant, non-negative parameters. Spiked eigenvalues are no longer fixed as under model {\bf I}; rather, they decay towards one at the $\sqrt{\gamma_n}$ rate. We assume supercritical eigenvalues---those with $\overset{\leftharpoonup}{\ell}_i > 1$---are simple. This new disproportional limit and varying-spike model yield formulas analogous to, yet distinct from, those in effect under proportional growth and fixed spikes. The new formulas are arguably more elegant: the phase transition now occurs simply at $\overset{\leftharpoonup}{\ell}=1$. Define the eigenvalue mapping function \begin{equation} \label{def-dpg-spike-eigenmap} \tlam(\overset{\leftharpoonup}{\ell}) = \begin{dcases} \overset{\leftharpoonup}{\ell} + \frac{1}{\overset{\leftharpoonup}{\ell}} & \overset{\leftharpoonup}{\ell} > 1\\ 2 & 0 < \overset{\leftharpoonup}{\ell} \leq 1 \\ \end{dcases} \end{equation} and the eigenvector cosine function \begin{equation} \label{def-dpg-spike-cosine} \overset{\leftharpoonup}{c}^2(\overset{\leftharpoonup}{\ell}) = \begin{dcases} 1 - \frac{1}{\overset{\leftharpoonup}{\ell}^2} & \overset{\leftharpoonup}{\ell} > 1 \\ 0 & 0 < \overset{\leftharpoonup}{\ell} \leq 1 \\ \end{dcases} \, . \end{equation} For future use, we also define $\overset{\leftharpoonup}{s}^2(\overset{\leftharpoonup}{\ell}) = 1-\overset{\leftharpoonup}{c}^2(\overset{\leftharpoonup}{\ell})$. \begin{lemma} \label{thrm:spiked_covar} Under $\gamma_n \rightarrow 0$ and varying spikes {\normalfont \bf II}, \begin{equation} \label{eq-dpg-spike-eigenmap} \phantom{\,.} \tlam_{i} \xrightarrow{a.s.} \tlam(\overset{\leftharpoonup}{\ell}_i) \, , \hspace{2cm} 1 \leq i \leq r \, . 
\end{equation} With $v_1, \ldots, v_{p}$ denoting the eigenvectors of $S$ in decreasing eigenvalue ordering, and $u_1, \ldots, u_{p}$ the corresponding eigenvectors of $\Sigma$, the angles between pairs of eigenvectors have limits \begin{equation} \label{lim-spg-spike-cosine} | \langle u_i, v_j \rangle | \xrightarrow{a.s.} \delta_{ij} \cdot \overset{\leftharpoonup}{c}(\overset{\leftharpoonup}{\ell}_i), \hspace{1.05cm} 1 \leq i, j \leq r \, . \end{equation} \end{lemma} The reader will no doubt see that Lemma \ref{thrm:spiked_covar} exhibits a formal similarity to the proportional regime results (\ref{1}) and (\ref{4}); as in the proportional case, spiked eigenvalues of a critical scale produce eigenvalue inflation and eigenvector rotation, now written in terms of the new parameter $\overset{\leftharpoonup}{\ell}$. The arrow decorators preserve a formal resemblance to the earlier results (\ref{bias_func}) and (\ref{2}), while indicating that there are important differences. A stronger, non-asymptotic form of this lemma is established by Bloemendal et al.\ \cite{BKYY16}. Yet, \cite{BKYY16} requires the existence of some $k>0$ such that $n \leq p^k $ eventually, while here $\gamma_n$ may tend to zero arbitrarily slowly. Feldman \cite{F21} considers a closely-related signal-plus-noise model. For the reader's convenience, a simple, direct proof of Lemma \ref{thrm:spiked_covar} is provided in Appendix \ref{Appendix}. \subsection{Asymptotic Loss in the Variable-Spike, $\gamma_n \rightarrow 0$ Limit} Recall the families of rank-aware estimates $\widehat{\Sigma}_\eta$ and losses $L_{\star,k}$ defined in Section \ref{sec-PGLim}. Under variable-spikes {\bf II}, the sequence of estimands \[ \Sigma = \sum_{i=1}^r (\ell_i - 1) u_i u_i' + I \] now approaches the identity. $L_{\star, k}(\Sigma, \widehat \Sigma_\eta)$ vanishes for any shrinker $\eta$ that is continuous at one and satisfies $\eta(1) = 1$; in particular, $L_{\star,k}(\Sigma, S) \rightarrow 0$. 
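This vanishing, and its $\sqrt{\gamma_n}$ scale, can be seen in a small simulation. The sketch below is our own illustration (not from the paper): Gaussian data, a single spike with $\overset{\leftharpoonup}{\ell} = 2$, and arbitrarily chosen sizes. The raw Frobenius loss of the rank-aware estimator $S^1$ shrinks as $\gamma_n$ decreases, while the ratio of the loss to $\sqrt{\gamma_n}$ stabilizes.

```python
# Illustrative sketch (arbitrary sizes, Gaussian data, one spike; not from the
# paper): under spikes II the raw Frobenius loss of the rank-aware S^1 vanishes
# as gamma_n -> 0, while the rescaled loss L_F / sqrt(gamma_n) stabilizes.
import numpy as np

rng = np.random.default_rng(1)
ell_arrow = 2.0                                  # normalized spike strength

results = []
for n, p in [(10_000, 100), (40_000, 200)]:
    gamma = p / n
    ell = 1.0 + ell_arrow * np.sqrt(gamma)       # spikes model II
    X = rng.standard_normal((p, n))
    X[0, :] *= np.sqrt(ell)                      # Sigma = diag(ell, 1, ..., 1)
    S = X @ X.T / n
    lam, V = np.linalg.eigh(S)
    lam1, v1 = lam[-1], V[:, -1]                 # top empirical eigenpair
    S1 = np.eye(p) + (lam1 - 1.0) * np.outer(v1, v1)   # rank-aware S^1
    Sigma = np.eye(p)
    Sigma[0, 0] = ell
    raw = np.linalg.norm(Sigma - S1, "fro")
    results.append((gamma, raw, raw / np.sqrt(gamma)))
    print(f"gamma={gamma:.4g}  raw loss={raw:.4f}  rescaled={raw / np.sqrt(gamma):.3f}")
```

In such runs the rescaled values hover near the limit $\sqrt{2 + 3/\overset{\leftharpoonup}{\ell}^2} \approx 1.66$ derived below for $S^1$, up to finite-sample corrections.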
We therefore consider rescaled losses: \[ \lharpoonu{L}_{\star, k}(\Sigma,\widehat \Sigma) = \frac{L_{\star,k}(\Sigma, \widehat \Sigma)}{\sqrt{\gamma_n}} \, . \] Observe that $\lharpoonu{L}_{\star, 1}(\Sigma, \widehat \Sigma) = \|(\Sigma - I) - (\widehat{\Sigma} - I) \|_{\star} /\sqrt{\gamma_n} $, which we view as transforming to a new coordinate system with origin at the identity matrix. Let $\lharp \phi_n(x) = \gamma_n^{-1/2}(x-1)$ denote the mapping to these coordinates. Thus, (\ref{def-tlam}) and spikes {\bf II} may be written as $\tlam_{i} = \lharpoonu{\phi}_n(\lambda_{i})$ and $\lharpoonu{\phi}_n(\ell_{i}) \rightarrow \overset{\leftharpoonup}{\ell}_i$. \begin{definition} Let $\eta = \eta_n$ denote a sequence of shrinkers, possibly varying with $n$. Suppose that under the disproportional $\gamma_n \rightarrow 0$ limit and varying-spikes {\bf II}, the sequences of normalized shrinker outputs induced by rescaling converge: \[ \phantom{\,.} \lharpoonu{\phi}_n(\eta(\lambda_i)) \xrightarrow{a.s.} \lharp \eta_i \, , \hspace{2cm} 1 \leq i \leq r \, . \] We call the limits $(\lharp \eta_i)_{i=1}^r$ the {\it asymptotic shrinkage descriptors}. \end{definition} \begin{lemma} \label{lem-dpg-asy-loss} Let $\eta$ denote a sequence of shrinkers with asymptotic shrinkage descriptors $(\lharp \eta_i)_{i=1}^r$ under the disproportional $\gamma_n \rightarrow 0$ limit and varying-spikes {\normalfont \bf II}. Each loss $\lharpoonu{L}_{\star,k}$ then converges almost surely to a deterministic limit: \[ \lharpoonu{L}_{\star,k}(\Sigma, \widehat{\Sigma}_{\eta}) \xrightarrow{a.s.} \lharpoonu{\cal L}_{\star}((\overset{\leftharpoonup}{\ell}_i)_{i=1}^r, (\overset{\leftharpoonup}{\eta}_i)_{i=1}^r) \, , \qquad \star \in \{ F,O,N\} \, , \quad 1 \leq k \leq 5 \, . \] The asymptotic loss does not involve $k$. 
It is sum- or max- decomposable into $r$ terms deriving from non-unit spike eigenvalues. The terms involve matrix norms applied to pivots of $2 \times 2$ matrices $\widetilde{A}$ and $\widetilde{B}$: \begin{align*} & \widetilde{A}(\overset{\leftharpoonup}{\ell}) = \begin{bmatrix} \overset{\leftharpoonup}{\ell} & 0 \\ 0 & 0 \end{bmatrix} \,, & \widetilde{B}(\overset{\leftharpoonup}{\eta},\overset{\leftharpoonup}{c}) = \overset{\leftharpoonup}{\eta} \cdot \begin{bmatrix} \overset{\leftharpoonup}{c}^2 & \overset{\leftharpoonup}{c} \overset{\leftharpoonup}{s} \\ \overset{\leftharpoonup}{c} \overset{\leftharpoonup}{s} & \overset{\leftharpoonup}{s}^2 \end{bmatrix} \,, \end{align*} where $\lharp s \mystrut^2 = 1 - \lharp c \mystrut^2$. With $\lharp \ell_i$ denoting a spiked eigenvalue and $\lharp c(\lharp \ell_i)$ the limiting cosine in (\ref{lim-spg-spike-cosine}), the decompositions are \begin{align*} \phantom{\,,} \lharpoonu{\cal L}_{F}((\overset{\leftharpoonup}{\ell}_i)_{i=1}^r, (\overset{\leftharpoonup}{\eta}_i)_{i=1}^r) & = \bigg( \sum_{i=1}^r \big[ L_{F, 1}\big(\widetilde{A}(\overset{\leftharpoonup}{\ell}_i),\widetilde{B}(\overset{\leftharpoonup}{\eta}_i,\overset{\leftharpoonup}{c}(\overset{\leftharpoonup}{\ell}_i)) \big) \big]^2 \bigg)^{1/2} \,, \\ \lharpoonu{\cal L}_{O}((\overset{\leftharpoonup}{\ell}_i)_{i=1}^r, (\overset{\leftharpoonup}{\eta}_i)_{i=1}^r) & = \max_{1 \leq i \leq r} L_{O,1} \big(\widetilde{A}(\overset{\leftharpoonup}{\ell}_i),\widetilde{B}(\overset{\leftharpoonup}{\eta}_i,\overset{\leftharpoonup}{c}(\overset{\leftharpoonup}{\ell}_i))\big) \, , \\ \lharpoonu{\cal L}_{N}((\overset{\leftharpoonup}{\ell}_i)_{i=1}^r,(\overset{\leftharpoonup}{\eta}_i)_{i=1}^r) & = \sum_{i=1}^r L_{N,1} \big(\widetilde{A}(\overset{\leftharpoonup}{\ell}_i),\widetilde{B}(\overset{\leftharpoonup}{\eta}_i,\overset{\leftharpoonup}{c}(\overset{\leftharpoonup}{\ell}_i)) \big) \, . 
\end{align*} \end{lemma} \begin{proof} Under loss $L_{\star,1}$, the argument is identical to the proofs of Lemmas 2 and 7 of \cite{DGJ18}, only using Lemma \ref{thrm:spiked_covar} for eigenvalue inflation and eigenvector rotation as $\gamma_n \rightarrow 0$. The pivots we consider are asymptotically equivalent: using the simultaneous block decomposition in Lemma 5 of \cite{DGJ18} and a Neumann series expansion, \[ \phantom{\,. } |\lharp L_{\star,1}(\Sigma, \widehat{\Sigma}_{\eta}) - \lharp L_{\star, k}(\Sigma, \widehat{\Sigma}_{\eta})| \xrightarrow[]{a.s.} 0 \, , \hspace{2cm} 2 \leq k \leq 5 \, . \] \end{proof} For example, the asymptotic shrinkage descriptors of the rank-aware sample covariance estimator $S^r = S_n^r = \sum_{i=1}^r (\lambda_i - 1) v_i v_i' + I$ are $\lharp \eta_i = \tlam(\overset{\leftharpoonup}{\ell}_i)$. For $r = 1$, suppressing the subscript of $\overset{\leftharpoonup}{\ell}_1$, the squared asymptotic loss evaluates to \begin{equation} \label{eq-dpg-shr-asy-loss-fro} \phantom{\,.} \big[\lharpoonu{\cal L}_{F}(\overset{\leftharpoonup}{\ell}, \tlam(\overset{\leftharpoonup}{\ell}))\big]^2 = (\overset{\leftharpoonup}{\ell} - \tlam(\overset{\leftharpoonup}{\ell}) \overset{\leftharpoonup}{c}^2(\overset{\leftharpoonup}{\ell}))^2 + \tlam^2(\overset{\leftharpoonup}{\ell}) (1-\overset{\leftharpoonup}{c}^4(\overset{\leftharpoonup}{\ell})) \, . \end{equation} By Lemma \ref{thrm:spiked_covar}, this simplifies to $2+ 3/\overset{\leftharpoonup}{\ell}^2$ for $\overset{\leftharpoonup}{\ell} > 1$ and to $\overset{\leftharpoonup}{\ell}^2+4$ for $\overset{\leftharpoonup}{\ell} \leq 1$. Hence, the (unsquared) asymptotic loss attains a global maximum of $\sqrt{5}$ precisely at the phase transition $\overset{\leftharpoonup}{\ell}=1$. Asymptotic losses of $S^1$ under each norm are collected in Table \ref{tbl-asy-loss-shr-rank-aware} below, to facilitate later comparison with optimal shrinkage. 
\setlength\extrarowheight{5pt} \begin{table}[h] \centering \begin{tabular}{| c | c | c |} \hline Norm & $\overset{\leftharpoonup}{\ell} < 1$ & $\overset{\leftharpoonup}{\ell} > 1$ \\ \hline Frobenius & $\sqrt{\lharp \ell^2 + 4}$ & $\sqrt{2 + 3/\overset{\leftharpoonup}{\ell}^2}$ \\ Operator & $2$ & $ \big ( 1+ \sqrt{5+ 4 \overset{\leftharpoonup}{\ell}^2} \big) /(2\overset{\leftharpoonup}{\ell})$ \\ Nuclear & $\overset{\leftharpoonup}{\ell} + 2 $ & $\sqrt{4+5/\overset{\leftharpoonup}{\ell}^2}$ \\ \hline \end{tabular} \caption{Asymptotic Loss $\lharpoonu{\cal L}_\star$ of the rank-aware sample covariance $S^1$ (the subscript of $\overset{\leftharpoonup}{\ell}_1$ is suppressed).} \label{tbl-asy-loss-shr-rank-aware} \end{table} \setlength\extrarowheight{0pt} \subsection{Optimal Asymptotic Loss} This subsection assumes $r=1$; the subscript of $\overset{\leftharpoonup}{\ell}_1$ is suppressed. Recalling the relations between $\overset{\leftharpoonup}{\ell}$, $\tlam(\overset{\leftharpoonup}{\ell})$, and $\overset{\leftharpoonup}{c}(\overset{\leftharpoonup}{\ell})$, one sees in Lemma \ref{lem-dpg-asy-loss} and (\ref{eq-dpg-shr-asy-loss-fro}) that $\tlam(\overset{\leftharpoonup}{\ell})$ is not the minimizer of the function $\overset{\leftharpoonup}{\eta} \mapsto \lharpoonu{\cal L}_{\star}(\overset{\leftharpoonup}{\ell}, \overset{\leftharpoonup}{\eta})$. A sequence of estimators $\widehat \Sigma_{\eta} = (\eta(\lambda_1)-1) v_1 v'_1 + I $ can outperform $S^1$ substantially, provided the asymptotic shrinkage descriptor $\overset{\leftharpoonup}{\eta}_1 = \lim_{n \rightarrow \infty} \lharp \phi_n(\eta(\lambda_1)) $ exists and $\lharpoonu{\cal L}_\star(\overset{\leftharpoonup}{\ell}, \overset{\leftharpoonup}{\eta}_1) < \lharpoonu{\cal L}_\star(\overset{\leftharpoonup}{\ell}, \tlam(\overset{\leftharpoonup}{\ell}))$. 
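The closed forms in the table above can also be checked numerically. The sketch below (our own illustration; the values of $\overset{\leftharpoonup}{\ell}$ are arbitrary) evaluates the $2 \times 2$ pivot of Lemma \ref{lem-dpg-asy-loss} at the no-shrinkage descriptor $\overset{\leftharpoonup}{\eta} = \tlam(\overset{\leftharpoonup}{\ell})$ and compares each norm with the corresponding tabulated formula.

```python
# Numerical check of the closed-form losses of S^1 (illustrative; arbitrary
# test values). Build the 2x2 matrices A~ and B~ of the asymptotic loss
# decomposition, evaluate at eta = tlam(ell) (no shrinkage), and compare
# Frobenius / operator / nuclear norms with the tabulated formulas.
import numpy as np

def tlam(ell):                       # eigenvalue mapping, normalized scale
    return ell + 1.0 / ell if ell > 1 else 2.0

def c2(ell):                         # squared limiting cosine
    return 1.0 - 1.0 / ell**2 if ell > 1 else 0.0

def pivot_losses(ell, eta):
    c = np.sqrt(c2(ell))
    s = np.sqrt(1.0 - c2(ell))
    A = np.array([[ell, 0.0], [0.0, 0.0]])
    B = eta * np.array([[c * c, c * s], [c * s, s * s]])
    sv = np.linalg.svd(A - B, compute_uv=False)      # singular values of pivot
    return np.hypot(sv[0], sv[1]), sv.max(), sv.sum()  # F, op, nuclear

for ell in [0.5, 2.0, 3.0]:
    F, O, N = pivot_losses(ell, tlam(ell))
    if ell > 1:
        ref = (np.sqrt(2 + 3 / ell**2),
               (1 + np.sqrt(5 + 4 * ell**2)) / (2 * ell),
               np.sqrt(4 + 5 / ell**2))
    else:
        ref = (np.sqrt(ell**2 + 4), 2.0, ell + 2.0)
    print(np.allclose((F, O, N), ref))   # each norm matches its closed form
```

Each comparison agrees to machine precision, on both sides of the phase transition.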
In this subsection, we calculate the asymptotic shrinkage descriptors that minimize $\overset{\leftharpoonup}{\eta} \mapsto \lharpoonu{\cal L}_{\star}(\overset{\leftharpoonup}{\ell}, \overset{\leftharpoonup}{\eta})$. The following subsection shows the existence of shrinkers with such asymptotic shrinkage descriptors. \begin{definition} \label{def-shr-formal-asy} The {\it formally optimal asymptotic loss} in the rank-1 setting is \[ \lharpoonu{\cal L}_{\star}^1(\overset{\leftharpoonup}{\ell}) = \min_{\vartheta} \lharpoonu{\cal L}_{\star}(\overset{\leftharpoonup}{\ell},\vartheta), \qquad \star \in \{F,O,N\}. \] A {\it formally optimal shrinker} is a function $\overset{\leftharpoonup}{\eta}(\cdot | \star): \mathbb{R} \rightarrow \mathbb{R}$ achieving $\lharpoonu{\cal L}_{\star}^1(\overset{\leftharpoonup}{\ell})$: \[ \overset{\leftharpoonup}{\eta}(\overset{\leftharpoonup}{\ell} | \star) = \underset{\vartheta}{\mbox{argmin}} \, \lharpoonu{\cal L}_{\star}(\overset{\leftharpoonup}{\ell},\vartheta), \qquad \overset{\leftharpoonup}{\ell} > 0 \, , \qquad \star \in \{F,O,N\} \, . \] We write $\lharp \eta(\overset{\leftharpoonup}{\ell} | \star)$ rather than $\lharp \eta(\overset{\leftharpoonup}{\ell} | L_{\star,k})$ since, by Lemma \ref{lem-dpg-asy-loss}, asymptotic losses are independent of the pivot $k$. 
\end{definition} \begin{lemma} \label{lem-shr-opt} Formally optimal shrinkers and corresponding losses are given by \setlength\extrarowheight{3pt} \begin{align} & \overset{\leftharpoonup}{\eta}^*(\overset{\leftharpoonup}{\ell}|F) = (\overset{\leftharpoonup}{\ell} - 1/\overset{\leftharpoonup}{\ell})_+ \,,&& \lharpoonu{\cal L}_{F}^1(\overset{\leftharpoonup}{\ell}) = \begin{dcases} \sqrt{2 - 1/\overset{\leftharpoonup}{\ell}^2} & \overset{\leftharpoonup}{\ell} > 1 \\ \overset{\leftharpoonup}{\ell} & 0 < \overset{\leftharpoonup}{\ell} \leq 1 \end{dcases} \, , \nonumber \\ & \overset{\leftharpoonup}{\eta}^*(\overset{\leftharpoonup}{\ell}|O) = \overset{\leftharpoonup}{\ell} \cdot 1_{\{\overset{\leftharpoonup}{\ell} > 1\}} \,, &&\lharpoonu{\cal L}_{O}^1(\overset{\leftharpoonup}{\ell}) = \begin{dcases} 1 & \overset{\leftharpoonup}{\ell} > 1\\ \overset{\leftharpoonup}{\ell} & 0 < \overset{\leftharpoonup}{\ell} \leq 1 \end{dcases} \, , \label{003}\\ & \overset{\leftharpoonup}{\eta}^*(\overset{\leftharpoonup}{\ell}|N) = \big(\overset{\leftharpoonup}{\ell} -2 /\overset{\leftharpoonup}{\ell}\big)_+ \, , && \lharpoonu{\cal L}_{N}^1(\overset{\leftharpoonup}{\ell}) = \begin{dcases} 2\sqrt{1-1/\overset{\leftharpoonup}{\ell}^2} & \overset{\leftharpoonup}{\ell} > \sqrt{2} \\ \overset{\leftharpoonup}{\ell} & 0 < \overset{\leftharpoonup}{\ell} \leq \sqrt{2} \end{dcases} \, . 
\nonumber \end{align} \setlength\extrarowheight{0pt} \end{lemma} \begin{proof} By Lemma \ref{lem-dpg-asy-loss}, \begin{align} \label{1523} \phantom{\,.} \lharpoonu{\cal L}_\star(\overset{\leftharpoonup}{\ell}, \vartheta) & = L_{\star,1}\big(\widetilde A(\overset{\leftharpoonup}{\ell}), \widetilde B(\vartheta, \lharp c(\overset{\leftharpoonup}{\ell})) \big) = \bigg \| \begin{bmatrix} \overset{\leftharpoonup}{\ell} - \vartheta \lharp c \mystrut^2(\overset{\leftharpoonup}{\ell}) & - \vartheta \lharp c(\overset{\leftharpoonup}{\ell}) \lharp s(\overset{\leftharpoonup}{\ell}) \\ - \vartheta \lharp c(\overset{\leftharpoonup}{\ell}) \lharp s(\overset{\leftharpoonup}{\ell}) & -\vartheta \lharp s \mystrut^2(\overset{\leftharpoonup}{\ell}) \end{bmatrix} \bigg\|_\star \nonumber \\ & =\begin{cases} \sqrt{(\overset{\leftharpoonup}{\ell} - \vartheta)^2 + 2 \overset{\leftharpoonup}{\ell} \vartheta \overset{\leftharpoonup}{s}^2(\overset{\leftharpoonup}{\ell})} & \star = F \\ \max(|\lambda_+|, |\lambda_-|) & \star = O \\ |\lambda_+| + |\lambda_-| & \star = N \end{cases} \,,\end{align} where $\lambda_\pm = \big(\vartheta - \overset{\leftharpoonup}{\ell} \pm \sqrt{(\vartheta - \overset{\leftharpoonup}{\ell})^2 + 4 \vartheta \overset{\leftharpoonup}{\ell} \overset{\leftharpoonup}{s}^2(\overset{\leftharpoonup}{\ell})} \big)/2$ ($\lambda_\pm$ are the eigenvalues of the above $2 \times 2$ matrix, according to Lemma 14 of \cite{DGJ18s}). Differentiating, Frobenius loss is minimized by $\vartheta = \overset{\leftharpoonup}{\ell} \lharp c \mystrut^2(\overset{\leftharpoonup}{\ell}) = (\overset{\leftharpoonup}{\ell} - 1/\overset{\leftharpoonup}{\ell})_+$. For $\overset{\leftharpoonup}{\ell} > 1$, operator norm loss is minimized by $\vartheta = \overset{\leftharpoonup}{\ell}$, for which $\lambda_+ = - \lambda_-$. For $\overset{\leftharpoonup}{\ell} \leq 1$, $\lambda_+ = \vartheta$, while $-\lambda_- = \overset{\leftharpoonup}{\ell}$. In this case, we take $\vartheta = 0$. 
For $\vartheta \geq 0$, nuclear norm loss may be rewritten as \[ \ \lharpoonu{\cal L}_N(\overset{\leftharpoonup}{\ell},\vartheta) = \sqrt{(\vartheta - \overset{\leftharpoonup}{\ell})^2 + 4 \vartheta \overset{\leftharpoonup}{\ell} \overset{\leftharpoonup}{s}^2(\overset{\leftharpoonup}{\ell}) } \, ; \] this is minimized by $\vartheta = \overset{\leftharpoonup}{\ell} (1-2 \overset{\leftharpoonup}{s}^2(\overset{\leftharpoonup}{\ell}))_+$. For $\vartheta \leq 0$, $\lharpoonu{\cal L}_N(\overset{\leftharpoonup}{\ell},\vartheta) = \overset{\leftharpoonup}{\ell} - \vartheta$ is minimized by $\vartheta = 0$. We collect below the formally optimal shrinkers: \begin{align} \label{9237} & \overset{\leftharpoonup}{\eta}^*(\overset{\leftharpoonup}{\ell} | F) = \overset{\leftharpoonup}{\ell} \, \overset{\leftharpoonup}{c}^2(\overset{\leftharpoonup}{\ell}) \, , & \overset{\leftharpoonup}{\eta}^*(\overset{\leftharpoonup}{\ell} | O) = \overset{\leftharpoonup}{\ell} \cdot 1_{\{\overset{\leftharpoonup}{\ell} > 1\}} \, , && \overset{\leftharpoonup}{\eta}^*(\overset{\leftharpoonup}{\ell} | N) = \overset{\leftharpoonup}{\ell} (1-2 \overset{\leftharpoonup}{s}^2(\overset{\leftharpoonup}{\ell}))_+ \, . \end{align} Substitution of (\ref{def-dpg-spike-cosine}) completes the proof. \end{proof} \subsection{Unique Admissibility} The formally optimal shrinkers derived in the previous subsection depend on $\overset{\leftharpoonup}{\ell}$, which is not observable. We define the partial inverse of the eigenvalue mapping $\tlam(\overset{\leftharpoonup}{\ell})$ (\ref{def-dpg-spike-eigenmap}): \begin{equation} \label{def-invert-eigenmap} \overset{\leftharpoonup}{\ell}( \tlam) = \begin{dcases} (\tlam + \sqrt{\tlam^2 - 4})/2 & \tlam > 2 \\ 1 & \tlam \leq 2 \\ \end{dcases} \, . \end{equation} Recall the rescaling mapping $\lharpoonu{\phi}_n$, with inverse $\lharpoonu{\phi}_n^{-1}(\overset{\leftharpoonup}{\eta}) = 1 + \sqrt{\gamma_n} \overset{\leftharpoonup}{\eta}$. 
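As a quick sanity check (our own illustration, with arbitrary test values of $\overset{\leftharpoonup}{\ell}$), the partial inverse (\ref{def-invert-eigenmap}) undoes the eigenvalue mapping (\ref{def-dpg-spike-eigenmap}) above the bulk edge and collapses sub-edge values to one.

```python
# Round-trip check of the partial inverse (illustrative, arbitrary values):
# above the bulk edge at 2, ell(tlam(ell)) recovers ell; below it, the
# inverse returns 1.
import numpy as np

def tlam(ell):                  # eigenvalue mapping on the normalized scale
    return ell + 1.0 / ell if ell > 1 else 2.0

def ell_of(t):                  # partial inverse of the eigenvalue mapping
    return 0.5 * (t + np.sqrt(t * t - 4.0)) if t > 2 else 1.0

for ell in [1.2, 2.0, 5.0]:     # supercritical spikes round-trip exactly
    print(ell, ell_of(tlam(ell)))
print(ell_of(1.7))              # sub-edge normalized eigenvalues map to 1
```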
Through these functions, we may ``change coordinates'' in shrinkers defined in terms of $\overset{\leftharpoonup}{\ell}$ to obtain shrinkers defined on observables. Thanks to the sum-/max-decomposability of asymptotic losses, these shrinkers generate covariance estimates which are asymptotically optimal in the rank-$r$ case. \begin{definition} \label{def48} A sequence of shrinkers $\eta^*(\lambda|\star) = \eta_n^*(\lambda|\star)$ is {\it asymptotically optimal} under the disproportional limit $\gamma_n \rightarrow 0$, variable spikes {\bf II}, and loss $\lharpoonu{L}_{\star,k}$ if the formally optimal asymptotic loss is achieved: \begin{align*} \phantom{\,.} \lharpoonu{L}_{F,k}(\Sigma,\widehat{\Sigma}_{\eta^*(\lambda|F)}) & \xrightarrow{a.s.} \bigg( \sum_{i=1}^r \big[ \lharpoonu{\cal L}_F^1(\overset{\leftharpoonup}{\ell}_i) \big]^2 \bigg)^{1/2} \, , \\ \phantom{\,.} \lharpoonu{L}_{O,k}(\Sigma,\widehat{\Sigma}_{\eta^*(\lambda|O)}) & \xrightarrow{a.s.} \max_{1 \leq i \leq r} \lharpoonu{\cal L}_O^1(\overset{\leftharpoonup}{\ell}_i) \, , \\ \phantom{\,.} \lharpoonu{L}_{N,k}(\Sigma,\widehat{\Sigma}_{\eta^*(\lambda|N)}) & \xrightarrow{a.s.} \sum_{i=1}^r \lharpoonu{\cal L}_N^1(\overset{\leftharpoonup}{\ell}_i) \, .
\end{align*} \end{definition} \begin{theorem} \label{thrm:spiked_covar2} For $\star \in \{F,N\}$, define the following shrinkers through the formally optimal shrinkers $\overset{\leftharpoonup}{\eta}(\overset{\leftharpoonup}{\ell} | \star)$ of Lemma \ref{lem-shr-opt}: \begin{align} \eta^*(\lambda | \star) & = \lharpoonu{\phi}_n^{-1} \big (\overset{\leftharpoonup}{\eta}^*(\overset{\leftharpoonup}{\ell}(\lharpoonu{\phi}_n(\lambda))|\star )\big) \label{def-asy-opt-shrink} \nonumber \\ & = 1 + \sqrt{\gamma_n} \cdot \overset{\leftharpoonup}{\eta}^* \Big( \overset{\leftharpoonup}{\ell} \Big( \frac{\lambda-1}{\sqrt{\gamma_n}} \Big) \Big| \star \Big) \, . \end{align} For the operator norm, define \begin{align} \eta^*(\lambda|O) &= \lharp \phi_n^{-1} \Big(\lharp \ell(\lharp \phi_n(\lambda)) \cdot 1_{\{ \lharp \phi_n(\lambda) > \tau_n \}} \Big) \nonumber \\ &= 1 + \sqrt{\gamma_n} \cdot \lharp \ell \Big( \frac{\lambda-1}{\sqrt{\gamma_n}} \Big) \cdot 1_{\{ \lambda > 1+ \tau_n \sqrt{\gamma_n} \}} \,, \label{7059} \end{align} where $\tau_n = 2 + o(1)$ is a sufficiently slowly decaying sequence. The shrinkers $\eta^*(\cdot|\star)$ are asymptotically optimal. Consider any other sequence of shrinkers $\eta^\circ = \eta_n^\circ$; unless $\eta^\circ$ has asymptotic shrinkage descriptors equal to those of $\eta^*$, it has strictly larger asymptotic normalized loss. Thus, up to asymptotically negligible perturbations, $\eta^*(\cdot | \star)$ is the unique asymptotically admissible shrinker. \end{theorem} Empirically, for the operator norm, bulk edge thresholding performs well: \[ \phantom{\,.} \eta^*(\lambda | O) = \lharp \phi_n^{-1} \big(\lharp \ell(\lharp \phi_n(\lambda)) \cdot 1_{\{ \lharp \phi_n(\lambda) > 2 \}} \big) \,. \] This shrinker, which thresholds normalized eigenvalues exactly at two, is used in the simulations visualized in Figure \ref{fig-shr-empirical}. Achieved loss is quite close to $\lharp {\cal L}_O^1(\overset{\leftharpoonup}{\ell})$ on $(0,1]$. 
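In code, the change of coordinates (\ref{def-asy-opt-shrink}) is a short composition: rescale the observed eigenvalue, invert the eigenvalue map, shrink, and map back. The sketch below uses our own (hypothetical) helper names, assumes $\overset{\leftharpoonup}{s}^2(\overset{\leftharpoonup}{\ell}) = \min(1, \overset{\leftharpoonup}{\ell}^{-2})$, and adopts the empirical bulk-edge threshold at two for the operator norm.

```python
import math

def ell_of(tlam):
    # partial inverse (def-invert-eigenmap) of the eigenvalue map
    return (tlam + math.sqrt(tlam**2 - 4.0)) / 2.0 if tlam > 2.0 else 1.0

def eta_bar(ell, star):
    # formally optimal shrinkers in normalized coordinates (gamma_n -> 0);
    # assumes s^2(ell) = min(1, ell^-2)
    if star == "F":
        return max(ell - 1.0 / ell, 0.0)
    if star == "O":
        return ell if ell > 1.0 else 0.0      # bulk-edge thresholding variant
    s2 = min(1.0, 1.0 / ell**2)
    return max(ell * (1.0 - 2.0 * s2), 0.0)   # star == "N"

def eta_star(lam, gamma, star):
    """Shrink a raw sample eigenvalue: rescale, invert the eigenvalue map,
    apply the formally optimal shrinker, and map back."""
    tlam = (lam - 1.0) / math.sqrt(gamma)
    return 1.0 + math.sqrt(gamma) * eta_bar(ell_of(tlam), star)

# example: gamma_n = 0.01, normalized eigenvalue tlam = 2.5 (so ell = 2)
lam = 1.0 + math.sqrt(0.01) * 2.5
shrunk = eta_star(lam, 0.01, "F")   # 1 + 0.1 * (2 - 1/2) = 1.15
```

Eigenvalues landing below the bulk edge are mapped back to one, i.e., the noise level.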
The slightly elevated threshold in (\ref{7059}) is an artifact of the proof. \begin{proof} By Lemma \ref{thrm:spiked_covar} and continuity of the partial inverse (\ref{def-invert-eigenmap}), \begin{equation} \phantom{\,.} \lharp \ell(\lharp \lambda_i) \xrightarrow{a.s.} \max(\lharp \ell_i, 1) \, , \hspace{2cm} 1 \leq i \leq r \, . \label{0575} \end{equation} As $\lharp \eta^*(\overset{\leftharpoonup}{\ell}|F)$ and $\lharp \eta^*(\overset{\leftharpoonup}{\ell}|N)$ are continuous and constant on $(0,1]$, (\ref{0575}) implies that the asymptotic shrinkage descriptors of $\eta^*(\cdot|F)$ and $\eta^*(\cdot|N)$ almost surely exist and equal the formally optimal values $(\lharp \eta^*(\overset{\leftharpoonup}{\ell}_i|F))_{i=1}^r$ and $(\lharp \eta^*(\overset{\leftharpoonup}{\ell}_i|N))_{i=1}^r$, respectively. The formally optimal shrinker under operator norm loss $\lharp \eta^*(\cdot|O)$ is discontinuous at the phase transition $\overset{\leftharpoonup}{\ell} = 1$. For $\lharp \ell_i > 1$, existence and matching of the $i$-th asymptotic shrinkage descriptor to $\lharp \eta^*(\overset{\leftharpoonup}{\ell}_i|O)$ is immediate. Subcritical spiked eigenvalues converge to the bulk upper edge, at a rate bounded by equation (1.6) of \cite{F21}: for $\overset{\leftharpoonup}{\ell}_i \leq 1$, almost surely eventually, $\tlam_i \leq 2 + p^{-1/11}$. The $i$-th asymptotic shrinkage descriptor is therefore zero. Consider any other sequence of shrinkers $\eta^\circ = \eta^\circ_n$ and rank-aware shrinkage estimators $\widehat{\Sigma}_{\eta^\circ}$. Unless this sequence has asymptotic shrinkage descriptors $(\overset{\leftharpoonup}{\eta}_i^\circ)_{i=1}^r$ identical to those of $\eta^*$, its asymptotic loss is strictly larger.
Namely, if there is a subsequence with normalized limits $\overset{\leftharpoonup}{\eta}_i^\circ = \lim_{k \rightarrow \infty} \lharpoonu{\phi}_{n_k}(\eta^\circ(\lambda_{i,n_k}))$ and $\| (\overset{\leftharpoonup}{\eta}_i^\circ)_{i=1}^r - (\overset{\leftharpoonup}{\eta}_i^*)_{i=1}^r \|_{\ell_{2}^r} = \varepsilon > 0$, then by Lemmas \ref{lem-dpg-asy-loss} and \ref{lem-shr-opt}, there is some $\delta = \delta(\varepsilon) > 0$ such that along this subsequence, the asymptotic loss under shrinker $\eta^\circ$ exceeds the asymptotic loss of $\eta^*$ by at least $\delta$. Note that $\eta^*$ does not depend on the model parameters $(\overset{\leftharpoonup}{\ell}_i)_{i=1}^r$. Hence, $\widehat{\Sigma}_{\eta^*}$ achieves optimal asymptotic performance at each possible choice of $(\overset{\leftharpoonup}{\ell}_i)_{i=1}^r$. Except for asymptotically negligible perturbations, $\eta^*(\cdot|\star)$ is the unique shrinker asymptotically admissible under $L_{\star,k}$. \end{proof} \begin{corollary} \label{cor1} Under the disproportional limit $\gamma_n \rightarrow 0$ and variable spikes {\bf II}, both the sample covariance $S$ and the rank-aware sample covariance \begin{align} S^r = \sum_{i=1}^r (\lambda_i-1) v_i v_i' + I \label{11} \end{align} are asymptotically inadmissible for $L_{\star, k}$. \end{corollary} \begin{proof} This is an immediate consequence of Theorem \ref{thrm:spiked_covar2}. Still, we sketch a direct argument for the Frobenius-norm case. Let $W$ denote the projection onto the combined span of $(u_i)_{i=1}^r$ and $(v_i)_{i=1}^r$. Then, for the sample covariance matrix, \begin{eqnarray*} \phantom{\,.} \| \Sigma - S \|_F^2 &=& \| W( \Sigma - S )W \|_F^2 + \| (I-W)( I - S )(I-W) \|_F^2 \, , \end{eqnarray*} while for the rank-aware estimate, \begin{eqnarray*} \phantom{\,.} \| \Sigma - S^r \|_F^2 &=& \| W( \Sigma - S^r )W \|_F^2 + \| (I-W)( I - S^r )(I-W) \|_F^2 \\ &=& \| W( \Sigma - S^r )W \|_F^2 \, .
\end{eqnarray*} The terms $\| W( \Sigma - S )W \|_F^2$ and $ \| W( \Sigma - S^r )W \|_F^2$ tend to a common limit, so it suffices to study the rank-aware case. By Lemma \ref{lem-dpg-asy-loss}, $\|\Sigma - S^r\|_F^2 \xrightarrow{a.s.} \sum_{i=1}^r [\lharp {\cal L}_F(\overset{\leftharpoonup}{\ell}_i, \tlam(\overset{\leftharpoonup}{\ell}_i))]^2$, and using equation (\ref{eq-dpg-shr-asy-loss-fro}) one may verify that \[ \phantom{\,.} [\lharpoonu{\cal L}_{F}(\overset{\leftharpoonup}{\ell}, \tlam(\overset{\leftharpoonup}{\ell}))]^2 - [\lharpoonu{\cal L}_{F}(\overset{\leftharpoonup}{\ell}, \overset{\leftharpoonup}{\eta}^*)]^2 = (\tlam(\overset{\leftharpoonup}{\ell}) - \overset{\leftharpoonup}{\eta}^*)^2 \geq 0 \, . \] Over the range $\overset{\leftharpoonup}{\ell} > 1$, $\tlam(\overset{\leftharpoonup}{\ell}) - \overset{\leftharpoonup}{\eta}^* = 2/\overset{\leftharpoonup}{\ell}$, while over $\overset{\leftharpoonup}{\ell} \leq 1$, $\tlam(\overset{\leftharpoonup}{\ell}) - \overset{\leftharpoonup}{\eta}^* = 2$; thus the inequality is strict. \end{proof} \subsection{Performance in the $\gamma_n \rightarrow 0$ Limit} Figure \ref{fig-shr-optshrink} depicts optimal shrinkers (left) and corresponding asymptotic losses (right, in the rank-one case $r=1$). In the left-hand panel, the red curve marks the diagonal $\overset{\leftharpoonup}{\eta}(\tlam) = \tlam$, corresponding to no shrinkage. For each loss function we consider, the optimal shrinker $\overset{\leftharpoonup}{\eta}^*(\cdot| \star)$ lies below the diagonal. All optimal shrinkers vanish below the bulk edge, $\tlam \leq 2$. Below the phase transition occurring at $\overset{\leftharpoonup}{\ell} = 1$, sample and population eigenvectors are asymptotically orthogonal. In that region, it is futile to use empirical eigenvectors to model low-rank structure---they are pure noise. Therefore, to achieve optimal loss, we simply take $\eta=0$.
According to (\ref{def-dpg-spike-eigenmap}), $\overset{\leftharpoonup}{\ell} \leq 1$ if and only if $\tlam \leq 2$, hence all optimal rules vanish for $\tlam < 2$. (Over the restricted range $0 < \overset{\leftharpoonup}{\ell} < 1$, optimal rules are of course not unique; we also obtain optimality over that range by simple bulk-edge hard thresholding of empirical eigenvalues, $\overset{\leftharpoonup}{\eta}(\tlam) = \tlam \cdot 1_{\{\tlam> 2\}}$.) The right-hand panel compares performances under various loss functions of the standard estimator $S^r$ (dotted lines) and optimal estimators (solid lines). Asymptotic losses of the standard estimator are strictly larger than those of the optimal estimators for all $\overset{\leftharpoonup}{\ell}$---near $\overset{\leftharpoonup}{\ell} = 1$, standard loss is far larger. As $\overset{\leftharpoonup}{\ell} \rightarrow 0^+$, optimal losses tend to zero, while standard losses tend to $2$. \begin{definition} The (absolute) {\it regret} of a decision rule $\overset{\leftharpoonup}{\eta}$ is defined as \[ \phantom{\,.} \lharp {\cal R}_\star(\overset{\leftharpoonup}{\ell},\overset{\leftharpoonup}{\eta}) = \lharpoonu{\cal L}_\star(\overset{\leftharpoonup}{\ell},\overset{\leftharpoonup}{\eta}) - \lharpoonu{\cal L}_\star(\overset{\leftharpoonup}{\ell}, \overset{\leftharpoonup}{\eta}^*) \, . \] The {\it possible improvement} of a decision rule $\overset{\leftharpoonup}{\eta}$ is $\lharp {\cal I}_\star(\overset{\leftharpoonup}{\ell},\overset{\leftharpoonup}{\eta}) = \lharp {\cal R}_\star(\overset{\leftharpoonup}{\ell},\overset{\leftharpoonup}{\eta})/\lharpoonu{\cal L}_\star(\overset{\leftharpoonup}{\ell},\overset{\leftharpoonup}{\eta})$, i.e., the fractional amount by which performance improves by switching to the optimal rule.
\end{definition} Losses of $S^r$ in the right-hand panel of Figure \ref{fig-shr-optshrink} are well above losses of optimal estimators below the phase transition $\overset{\leftharpoonup}{\ell} \leq 1$; the limit $\overset{\leftharpoonup}{\ell} \rightarrow 0^+$ produces maximal absolute regret, $2$, for each of these losses. For example, with operator norm loss, $\lharpoonu{\cal L}_O(0^+,\tlam) = 2$, while $\lharpoonu{\cal L}_O(0^+,\overset{\leftharpoonup}{\eta}^*)=0$, giving absolute regret $\lharp {\cal R}_O(0^+,\tlam) = 2$ and possible improvement $\lharp {\cal I}_O(0^+,\tlam) = 1$ (100\% of the standard loss is avoidable). Similarly, with nuclear norm loss, we have $\lharp {\cal R}_N(\overset{\leftharpoonup}{\ell},\tlam)=2$ for $\overset{\leftharpoonup}{\ell} \leq 1$, but $\lharp {\cal I}_N(0,\tlam) = 1$ (100\% of the standard loss is avoidable). \setlength\extrarowheight{5pt} \begin{table} \centering \begin{tabular}{| c | c | c | c | c |} \hline Norm & $\lharp {\cal R}_\star(0^+,\tlam)$ & $\lharp {\cal R}_\star(1,\tlam)$ & $\lharp {\cal I}_\star(0^+,\tlam)$ & $\lharp {\cal I}_\star(1,\tlam)$\\ \hline Frobenius & $2$ & $\sqrt{5}-1$ & $100$\% & 55\% \\ Operator & 2 & 1& $100$\% & 50\% \\ Nuclear & 2 & 2& $100$\% & 66\% \\ \hline \end{tabular} \caption{{\bf Regret and Improvement, $\gamma_n \rightarrow 0$.} Absolute Regret $\lharp {\cal R}$ and Possible Improvement $\lharp {\cal I}$ of the standard rank-aware estimator $S^r$ (equivalently, $\overset{\leftharpoonup}{\eta} = \tlam$) near zero and exactly at the phase transition $\overset{\leftharpoonup}{\ell}=1$.} \end{table} \begin{figure}[h] \centering \includegraphics[height=2.6in]{OptNonlinGammaZero} \includegraphics[height=2.6in]{spiked_covar2.png} \caption{{\bf Optimal shrinkers and losses, $\gamma_n \rightarrow 0$.} Left: optimal shrinkage functions. 
Right: losses of optimal shrinkers (solid) and of the standard estimator $S^r$ under Frobenius (blue), operator (orange), nuclear (green) norms.} \label{fig-shr-optshrink} \end{figure} \setlength\extrarowheight{0pt} \begin{figure}[h!] \centering \includegraphics[height=3in]{spiked_covar7.png} \caption{{\bf Monte-Carlo simulations, small $\gamma_n$}. Average over 50 realizations of losses under three norms, both for the standard and asymptotically optimal estimators. Here, $p=1{,}000$ and $n = 100{,}000$, so $\gamma_n= .01$.} \label{fig-shr-empirical} \end{figure} \section{Covariance Estimation as $\gamma_n \rightarrow \infty$} \subsection{The Variable-Spike, $\gamma_n \rightarrow \infty$ Limit} \label{sec-DPGLim-Infty} \label{sec:spiked_covar2} We now turn to the dual situation, $\gamma_n \rightarrow \infty$. To expose phase transition phenomena, we consider variable spiked eigenvalues of the form \begin{align*} \hspace{-.1cm} [\textbf{III}]\hspace{1.5cm} \ell_i = \ell_{i;n} = 1 + (\rharpoonu{\ell}_i + o(1)) \gamma_n \, , \hspace{2cm} 1 \leq i \leq r \, , \end{align*} where $(\rharpoonu{\ell}_i)_{i=1}^r$ are fixed, positive, and distinct parameters. Correspondingly, we study the normalized empirical eigenvalues \begin{equation} \label{def-ulam} \rharpoonu{\lambda}_{i} = \rharp \lambda_{i,n} = \frac{ \lambda_{i}}{\gamma_n} \, . \end{equation} The next lemma provides the $\gamma_n \rightarrow \infty$ analogues of eigenvalue inflation (\ref{1}) and eigenvector rotation (\ref{4}). \begin{lemma} \label{thrm3} (Benaych-Georges and Rao Nadakuditi \cite{BGN11}, Shen et al.\ \cite{Shen}) Under the disproportional limit $\gamma_n \rightarrow \infty$ and variable spikes {\normalfont \bf III}, the leading empirical eigenvalues of $S$ satisfy \begin{equation} \phantom{\,.} \rharpoonu{\lambda}_{i} \xrightarrow{a.s.} 1 + \rharpoonu{\ell}_i \,.
\end{equation} The angles between empirical eigenvectors and the corresponding population eigenvectors have limits \begin{equation} \begin{aligned} \phantom{\,.} \hspace{1cm}& | \langle u_i, v_j \rangle | \xrightarrow{a.s.} \delta_{ij} \cdot \rharpoonu{c}(\rharpoonu{\ell}_i) \, ,\end{aligned} \qquad \hspace{1cm} 1 \leq i, j \leq r \,, \end{equation} where the cosine function is given by \begin{align} \rharpoonu{c}^2(\rharpoonu{\ell}) = \frac{\rharpoonu{\ell}}{1+\rharpoonu{\ell}} \, . \label{123} \end{align} \end{lemma} No phase transition appears in this framing of the $\gamma_n \rightarrow \infty$ setting; for example, $\partial \rharpoonu{\lambda} / \partial \rharpoonu{\ell} = 1$ and $\partial \rharpoonu{c}/ \partial \rharpoonu{\ell} > 0$ for all $\rharpoonu{\ell} > 0$, while as $\gamma_n \rightarrow 0$, we had $\partial \tlam/\partial \overset{\leftharpoonup}{\ell} = 0$ and $\partial \overset{\leftharpoonup}{c} / \partial \overset{\leftharpoonup}{\ell} = 0$ for $0 < \overset{\leftharpoonup}{\ell} < 1$. Recall that for $i \leq \min(n,p)$, $\lambda_i(X'X) = \lambda_i(XX')$; one might therefore expect the phase transition seen as $\gamma_n \rightarrow 0$ to manifest here as well. Such a transition for the {\it eigenvalues} does occur, under scalings and coordinates alternative to $\rharpoonu{\ell}$, $\rharpoonu{\lambda}$. Indeed, remaining in the $\gamma_n \rightarrow \infty$ limit, consider $\tilde \ell_i = \tilde \ell_{i,n} = \gamma_n(1+\lharp \ell_i(1+o(1))\gamma_n^{-1/2})$. Leveraging $\lambda_i(X'X) = \lambda_i(XX')$ and earlier $\gamma_n \rightarrow 0$ results, a phase transition occurs at $\overset{\leftharpoonup}{\ell}_i = 1$. This transition, however, tells us nothing of the eigenvectors: the properties of eigenvectors of $X'X$ and $XX'$ are quite different, and on this scale, leading empirical eigenvectors are asymptotically decorrelated from their population counterparts.
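Lemma \ref{thrm3} is easy to probe in simulation. The sketch below (our own code, not part of the formal development) draws a rank-one spiked sample at the aspect ratio $\gamma_n = 100$ of Figure \ref{fig-gro-empirical} and compares the top normalized eigenvalue and eigenvector overlap with their limits.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 10_000, 100                 # gamma_n = p / n = 100
gamma = p / n
ell_r = 2.0                        # normalized spike (spikes III)
ell = 1.0 + ell_r * gamma          # population spiked eigenvalue

# rows of X are N(0, Sigma) with Sigma = I + (ell - 1) e1 e1'
X = rng.standard_normal((n, p))
X[:, 0] *= np.sqrt(ell)

# top eigenpair of S = X'X / n via the economy SVD of X / sqrt(n)
_, svals, Vt = np.linalg.svd(X / np.sqrt(n), full_matrices=False)
lam_norm = svals[0] ** 2 / gamma   # normalized top eigenvalue, limit 1 + ell_r
cos2 = Vt[0, 0] ** 2               # squared overlap with u1 = e1, limit ell_r/(1+ell_r)
```

At this finite aspect ratio the agreement is already close; fluctuations shrink as $n$ and $\gamma_n$ grow.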
By adopting spikes {\bf III}, we work on a far coarser scale, one where eigenvectors correlate but with no visible phase transition. \subsection{Asymptotic Loss and Unique Admissibility in the $\gamma_n \rightarrow \infty$ Limit} Under variable spikes {\bf III}, the population covariance now explodes. As losses similarly explode, we consider rescaled losses: \[ \rharpoonu{L}_{\star, k}(\Sigma, \widehat \Sigma) = \frac{L_{\star,k}(\Sigma,\widehat \Sigma)}{\gamma_n} \, . \] Let $\rharpoonu{\phi}_n(\lambda) = \lambda / \gamma_n$ denote the mapping to this new coordinate system. Thus, spikes {\bf III} and (\ref{def-ulam}) may be written as $\rharpoonu{\lambda}_{i} = \rharpoonu{\phi}_n(\lambda_{i})$ and $\rharp \phi_n(\ell_i) \rightarrow \rharp \ell_i$. \begin{definition} Let $\eta = \eta_n$ denote a sequence of shrinkers, possibly varying with $n$. Suppose that under the disproportional limit $\gamma_n \rightarrow \infty$ and varying-spikes {\bf III}, the sequences of normalized shrinker outputs induced by rescaling converge: \[ \phantom{\,.} \rharpoonu{\phi}_n(\eta(\lambda_i)) \xrightarrow{a.s.} \rharp \eta_i \, , \hspace{2cm} 1 \leq i \leq r \, . \] We call the limits $(\rharp \eta_i)_{i=1}^r$ the {\it asymptotic shrinkage descriptors}. \end{definition} \begin{lemma} \label{lem-dpg-grow-asy-loss} Let $\eta$ denote a sequence of shrinkers with asymptotic shrinkage descriptors $(\rharp \eta_i)_{i=1}^r$ under the disproportional limit $\gamma_n \rightarrow \infty$ and varying-spikes {\normalfont \bf III}. Each loss $\rharp L_{\star,1}$ then converges almost surely to a deterministic limit: \[ \rharpoonu{L}_{\star,1}(\Sigma, \widehat{\Sigma}_{\eta}) \xrightarrow{a.s.} \rharpoonu{\cal L}_{\star}((\rharpoonu{\ell}_i)_{i=1}^r, (\rharpoonu{\eta}_i)_{i=1}^r) , \qquad \star \in \{ F,O,N\} \, .
\] The asymptotic loss is sum- or max- decomposable into $r$ terms involving matrix norms applied to the $2 \times 2$ matrices $\widetilde{A}$ and $\widetilde{B}$ introduced in Lemma \ref{lem-dpg-asy-loss}. With $\rharpoonu{\ell}_i$ denoting a spiked eigenvalue and $\rharp c(\rharp \ell)$ the limiting cosine in (\ref{123}), the decompositions are \begin{align*} \phantom{\,,} \rharpoonu{\cal L}_{F}((\rharpoonu{\ell}_i)_{i=1}^r, (\rharpoonu{\eta}_i)_{i=1}^r) & = \bigg( \sum_{i=1}^r \big[ L_{F, 1} \big(\widetilde{A}(\rharpoonu{\ell}_i),\widetilde{B}(\rharpoonu{\eta}_i,\rharpoonu{c}(\rharp \ell_i)) \big) \big]^2 \bigg)^{1/2} \,, \\ \rharpoonu{\cal L}_{O}((\rharpoonu{\ell}_i)_{i=1}^r, (\rharpoonu{\eta}_i)_{i=1}^r) & = \max_{1 \leq i \leq r} L_{O,1} \big(\widetilde{A}(\rharpoonu{\ell}_i),\widetilde{B}(\rharpoonu{\eta}_i,\rharp c(\rharp \ell_i))\big) \, , \\ \rharpoonu{\cal L}_{N}((\rharpoonu{\ell}_i)_{i=1}^r, (\rharpoonu{\eta}_i)_{i=1}^r) & = \sum_{i=1}^r L_{N,1} \big(\widetilde{A}(\rharpoonu{\ell}_i),\widetilde{B}(\rharpoonu{\eta}_i, \rharp c(\rharp \ell_i)) \big) \, . \end{align*} \end{lemma} The proof of this lemma is similar to that of Lemma \ref{lem-dpg-asy-loss} and is omitted. Note that only the pivot $\Delta_1$ is considered, as $S$ and $\widehat \Sigma_\eta$ have $p - n$ eigenvalues equal to zero. As a simple example, the asymptotic shrinkage descriptors of the rank-aware sample covariance estimator $S^r= \sum_{i=1}^r (\lambda_i - 1) v_i v_i' + I$ are $\rharp \eta_i = \rharp \lambda(\rharp \ell_i)$. Squared asymptotic loss evaluates to (suppressing the subscript of $\rharp \ell_1$) \begin{equation} \label{eq-dpg-grow-asy-loss-fro} [\rharpoonu{\cal L}_{F}(\rharpoonu{\ell}, \rharpoonu{\lambda})]^2 = (\rharpoonu{\ell} - \rharpoonu{\lambda} \rharpoonu{c}^2(\rharp \ell))^2 + \rharpoonu{\lambda}^2 ( 1 - \rharpoonu{c}^4(\rharp \ell)) \, .
\end{equation} By Lemma \ref{thrm3}, $\rharpoonu{\lambda} \rharpoonu{c}^2(\rharp \ell) = \rharpoonu{\ell}$, while $\rharpoonu{\lambda}^2 (1 - \rharpoonu{c}^4(\rharp \ell)) = (1+2 \rharpoonu{\ell})$, so $[\rharpoonu{\cal L}_{F}(\rharpoonu{\ell}, \rharpoonu{\lambda})]^2 = (1+2\rharpoonu{\ell})$. Asymptotic losses of $S^1$ under each norm are collected in Table \ref{tbl-asy-loss-gro-rank-aware}, to later facilitate comparison with optimal shrinkage. \setlength\extrarowheight{4pt} \begin{table}[h!] \centering \begin{tabular}{| c | c |} \hline Norm & $\rharp {\cal L}_\star(\rharp \ell, \rharp \lambda)$ \\ \hline Frobenius & $\sqrt{1 + 2 \rharpoonu{\ell}}$ \\ Operator & $\left (1 + \sqrt{1+4 \rharpoonu{\ell}} \right )/2$ \\ Nuclear & $\sqrt{1+4\rharpoonu{\ell}}$ \\ \hline \end{tabular} \caption{Asymptotic Loss $\rharpoonu{\cal L}_\star$ of the standard rank-aware estimator $S^1$.} \label{tbl-asy-loss-gro-rank-aware} \end{table} \setlength\extrarowheight{0pt} The intermediate form (\ref{eq-dpg-grow-asy-loss-fro}) is symbolically isomorphic to the intermediate form (\ref{eq-dpg-shr-asy-loss-fro}) seen earlier in the $\gamma_n \rightarrow 0$ case (under replacement of $\overset{\leftharpoonup}{\;}$'s by $\overset{\rightharpoonup}{\;}$'s), suggesting that the path to optimality will again lead to eigenvalue shrinkage. \begin{definition} \label{def-gro-formal-asy} In the $\gamma_n \rightarrow \infty$ limit, the {\it formally optimal asymptotic loss} in the rank-1 setting is \[ \rharpoonu{\cal L}_{\star}^1(\rharpoonu{\ell}) \equiv \min_{\vartheta} \rharpoonu{\cal L}_{\star}(\rharpoonu{\ell},\vartheta), \qquad \star \in \{F,O,N\}.
\] A {\it formally optimal shrinker} is a function $\rharpoonu{\eta}(\cdot | \star): \mathbb{R} \rightarrow \mathbb{R}$ achieving $\rharpoonu{\cal L}_{\star}^1(\rharpoonu{\ell})$: \[ \rharpoonu{\eta}(\rharpoonu{\ell} | \star) = \underset{\vartheta}{\mbox{argmin}} \, \rharpoonu{\cal L}_{\star}(\rharpoonu{\ell},\vartheta), \qquad \rharpoonu{\ell} > 0, \qquad \star \in \{F,O,N\}. \] \end{definition} In complete analogy with Lemma \ref{lem-shr-opt}, we have explicit forms of the formally optimal shrinkers. \begin{lemma} \label{lem-gro-opt} The formally optimal shrinkers (Definition \ref{def-gro-formal-asy}) and corresponding losses are given by \begin{align} \label{eval-gro-oper-loss} & \rharpoonu{\eta}^*(\rharpoonu{\ell}|F) = \frac{\rharpoonu{\ell}^2}{1+\rharpoonu{\ell}} \,,&& [\rharpoonu{\cal L}_{F}^1(\rharpoonu{\ell})]^2 = \frac{\rharpoonu{\ell}^2 (2\rharpoonu{\ell} +1)}{(\rharpoonu{\ell}+1)^2}\,, \nonumber \\ & \rharpoonu{\eta}^*(\rharpoonu{\ell}|O) = \rharpoonu{\ell} \,,&&\rharpoonu{\cal L}_{O}^1(\rharpoonu{\ell}) = \frac{ \rharpoonu{\ell}}{(1+\rharpoonu{\ell})^{1/2}} \,, \\ & \rharpoonu{\eta}^*(\rharpoonu{\ell}|N) = \rharpoonu{\ell} \bigg( \frac{\rharpoonu{\ell}-1}{\rharpoonu{\ell}+1} \bigg)_+ \,, && \rharpoonu{\cal L}_{N}^1(\rharpoonu{\ell}) = \rharpoonu{\ell} \cdot \bigg[ 1 _{\{\rharpoonu{\ell} < 1\}} + 1_{\{\rharpoonu{\ell} > 1\}} \cdot \frac{2 \cdot \sqrt{\rharpoonu{\ell}}}{\rharpoonu{\ell}+1} \bigg]\,. \nonumber \end{align} \end{lemma} \begin{proof} Asymptotic losses are functions of the limiting formulas for eigenvalue inflation and eigenvector rotation. Thus, by the proof of Lemma \ref{lem-shr-opt}, in particular (\ref{9237}), \begin{align} & \rharpoonu{\eta}^*(\rharpoonu{\ell} | F) = \rharpoonu{\ell} \rharpoonu{c}^2(\rharpoonu{\ell}) \, , & \rharpoonu{\eta}^*(\rharpoonu{\ell} | O) = \rharpoonu{\ell} \, , && \rharpoonu{\eta}^*(\rharpoonu{\ell} | N) = \rharpoonu{\ell} (1-2 \rharpoonu{s}^2(\rharpoonu{\ell}))_+ \, .
\end{align} Substitution of (\ref{123}) yields the left-hand column of (\ref{eval-gro-oper-loss}). In parallel fashion, the asymptotic losses are isomorphic: \[ [\lharpoonu{\cal L}_F^1(\overset{\leftharpoonup}{\ell})]^2 = \overset{\leftharpoonup}{\ell}^2 \overset{\leftharpoonup}{s}^2(\overset{\leftharpoonup}{\ell}) \big(2 -\overset{\leftharpoonup}{s}^2(\overset{\leftharpoonup}{\ell})\big) \, , \qquad [\rharpoonu{\cal L}_F^1(\rharpoonu{\ell})]^2 = \rharpoonu{\ell}^2 \rharpoonu{s}^2(\rharpoonu{\ell}) \big(2 -\rharpoonu{s}^2(\rharpoonu{\ell})\big) \, . \] \end{proof} \begin{theorem} \label{10} Define the following shrinkers through the formally optimal shrinkers $\rharpoonu{\eta}^*(\rharp \ell | \star)$ of Lemma \ref{lem-gro-opt}: \begin{align*} \phantom{\,.} \eta^*(\lambda | \star) = \eta_n^*(\lambda | \star) &= \rharp \phi_n^{-1} \big( \rharpoonu{\eta}^*( \rharp \phi_n(\lambda) - 1| \star ) \big) \\ &= \gamma_n \rharp \eta^* ( \lambda / \gamma_n - 1 | \star)\,. \end{align*} Under $\gamma_n \rightarrow \infty$ and variable spikes {\normalfont \bf III}, $\eta^*(\cdot | \star)$ achieves the optimal asymptotic normalized loss. Up to asymptotically negligible perturbations, $\eta^*(\cdot| \star)$ is the unique asymptotically admissible shrinker for $\rharp L_{\star,1}$. \end{theorem} All formally optimal shrinkers are continuous here (in contrast with the $\gamma_n \rightarrow 0$ operator-norm case). The proof of Theorem \ref{10} is analogous to that of Theorem \ref{thrm:spiked_covar2} and is omitted. \begin{corollary} Under $\gamma_n\rightarrow \infty$ and variable spikes {\normalfont \bf III}, both the sample covariance $S$ and the rank-aware sample covariance $S^r$ are asymptotically inadmissible for $L_{\star,1}$. \end{corollary} \subsection{Performance in the $\gamma_n \rightarrow \infty$ Limit} Figure \ref{fig-gro-optshrink} depicts optimal shrinkers (left) and corresponding asymptotic losses (right, in the rank-one case $r=1$), similarly to Figure \ref{fig-shr-optshrink}. In the left-hand panel, the red curve again marks no shrinkage, $\rharpoonu{\eta}(\rharpoonu{\lambda}) = \rharpoonu{\lambda}$.
Each optimal shrinker $\rharpoonu{\eta}^*(\cdot| \star)$ lies below the diagonal---especially for small $\rharpoonu{\lambda}$. Normalized (non-zero) eigenvalues converge to $\rharpoonu{\lambda} = 1$; optimal shrinkers all vanish for $\rharpoonu{\lambda} \leq 1$. The right-hand panel compares performances under various loss functions of the standard estimator $S^r$ (dotted lines) and the optimal estimators (solid lines). Asymptotic losses of the standard estimator are strictly larger than those of the optimal estimators for all $\rharp \ell$. Under Frobenius and nuclear norms, optimal shrinkage also outperforms the fixed shrinker $\eta(\lambda) = \lambda - 1$. As $\rharpoonu{\ell} \rightarrow 0^+$, optimal losses $\rharpoonu{\cal L}_\star^1(\rharpoonu{\ell})$ tend to zero, while standard losses tend to 1. The maximal relative regret for $S^r$ is unbounded. For example, with operator norm loss, $\rharpoonu{\cal L}_O(1,\rharpoonu{\lambda}) = (1+\sqrt{5})/2$, while $\rharpoonu{\cal L}_O(1,\rharpoonu{\eta}^*)=1/\sqrt{2}$. The absolute regret is $\rharp {\mathcal{R}}_O (1,\rharpoonu{\lambda}) \approx .91$, and 57\% improvement in loss is possible at $\rharpoonu{\ell}=1$. The maximal possible relative improvement is 100\%: at $\rharpoonu{\ell}=0$, all the loss incurred by the standard estimator is avoidable. Under Frobenius norm, $\rharpoonu{\cal L}_F(1,\rharpoonu{\lambda})=\sqrt{3}$, $\rharpoonu{\cal L}_F(1, \rharp \eta^*) = \sqrt{3}/2$, and $\rharp {\mathcal{R}}_F(1,\rharpoonu{\lambda}) = \sqrt{3}/2$. There is 50\% possible improvement over the standard estimator at $\rharpoonu{\ell}=1$, and fully $100$\% of the standard loss is avoidable using shrinkage at $\rharpoonu{\ell}=0$.
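The arithmetic behind these regret figures can be reproduced directly from Lemma \ref{lem-dpg-grow-asy-loss} with $\rharpoonu{c}^2(\rharpoonu{\ell}) = \rharpoonu{\ell}/(1+\rharpoonu{\ell})$ from (\ref{123}); the sketch below uses our own (hypothetical) helper names.

```python
import math

def loss(ell, theta, star):
    # rank-one asymptotic losses in the gamma_n -> infinity limit,
    # with c^2(ell) = ell/(1+ell), s^2(ell) = 1/(1+ell)
    s2 = 1.0 / (1.0 + ell)
    disc = math.sqrt((theta - ell) ** 2 + 4.0 * theta * ell * s2)
    lam_p = (theta - ell + disc) / 2.0
    lam_m = (theta - ell - disc) / 2.0
    if star == "F":
        return math.sqrt((ell - theta) ** 2 + 2.0 * ell * theta * s2)
    if star == "O":
        return max(abs(lam_p), abs(lam_m))
    return abs(lam_p) + abs(lam_m)          # star == "N"

def eta_opt(ell, star):
    # formally optimal shrinkers (gamma_n -> infinity)
    if star == "F":
        return ell**2 / (1.0 + ell)
    if star == "O":
        return ell
    return ell * max((ell - 1.0) / (ell + 1.0), 0.0)

def standard(ell):
    # asymptotic shrinkage descriptor of S^r: lam -> 1 + ell
    return 1.0 + ell

# operator-norm absolute regret (~ .91) and relative improvement at ell = 1
R_O = loss(1.0, standard(1.0), "O") - loss(1.0, eta_opt(1.0, "O"), "O")
I_O = R_O / loss(1.0, standard(1.0), "O")
```

The Frobenius-norm values $\sqrt{3}$ and $\sqrt{3}/2$ quoted above follow from the same functions.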
\setlength\extrarowheight{4pt} \begin{table}[h] \centering \begin{tabular}{| c | c | c | c | c |} \hline Norm & $\rharp {\mathcal{R}}_\star(0^+,\rharpoonu{\lambda})$ & $\rharp {\mathcal{R}}_\star(1,\rharpoonu{\lambda})$ & $\rharp {\cal I}_\star(0^+,\rharpoonu{\lambda})$ & $\rharp {\cal I}_\star(1,\rharpoonu{\lambda})$\\ \hline Frobenius & 1 & $\sqrt{3}/2$ & 100\% & 50\% \\ Operator & 1 & $.91$ & 100\% & 57\% \\ Nuclear & 1 & $\sqrt{5}-1$ & 100\% & 56\% \\ \hline \end{tabular} \caption{{\bf Regret and Improvement, $\gamma_n \rightarrow \infty$.} Absolute Regret $\rharp {\mathcal{R}}$ and possible relative improvement $\rharp {\cal I}$ of the standard rank-aware estimator $S^r$ (equivalently, $\rharpoonu{\eta} = \rharpoonu{\lambda}$) near zero and exactly at $\rharpoonu{\ell} = 1$.} \end{table} \begin{figure}[h!] \centering \includegraphics[height=2.6in]{spiked_covar4.png} \hspace{-3cm} \includegraphics[height=2.6in]{spiked_covar5.png} \caption{{\bf Optimal shrinkers and losses, $\gamma_n \rightarrow \infty$.} Left: optimal shrinkage functions. Right: losses of optimal shrinkers (solid) and of the standard estimator $S^r$, under Frobenius (blue), operator (orange), and nuclear (green) norms.} \label{fig-gro-optshrink} \end{figure} \setlength\extrarowheight{0pt} \begin{figure}[h!] \centering \includegraphics[height=2.5in]{spiked_covar8} \caption{{\bf Monte-Carlo simulations, large $\gamma_n$.} Averages over 50 realizations of losses under three norms for the standard and asymptotically optimal estimators. Here, $p=10{,}000$ and $n = 100$, so $\gamma_n= 100$.} \label{fig-gro-empirical} \end{figure} \section{Optimal Hard Thresholding} \label{sec-opt-hard-thresh} A natural alternative to optimal shrinkage is simple {\it hard thresholding}, applying the shrinker $H_\tau(\theta) = \theta \cdot 1_{\{\theta \geq \tau\}} + 1_{\{ \theta < \tau \}}$, which retains eigenvalues at or above the threshold $\tau$ and resets the rest to the noise level one.
In original coordinates, $\widehat{\Sigma}_{H_\tau} = \sum_{i=1}^r H_{\tau_n}(\lambda_i) v_i v_i' + I$; in rescaled coordinates, this corresponds to hard thresholding the normalized eigenvalues at the following levels: \begin{itemize} \item $\lharpoonu{\tau}_n = \lharpoonu{\phi}_n(\tau_n)$, i.e.\ $H_{\lharpoonu{\tau}_n}(\tlam_j)$ ($\gamma_n \rightarrow 0$), \item $\rharpoonu{\tau}_n = \rharpoonu{\phi}_n(\tau_n)$, i.e.\ $H_{\rharpoonu{\tau}_n}(\rharpoonu{\lambda}_j)$ ($\gamma_n \rightarrow \infty$). \end{itemize} Under the appropriate varying-spikes model {\bf II} or {\bf III}, it makes sense to choose threshold sequences $\tau_n$ that, after normalization, are essentially constant: \begin{itemize} \item $\lharpoonu{\tau}_n = \lharpoonu{\tau} \cdot ( 1+ o(1))$ ($\gamma_n \rightarrow 0$), \item $\rharpoonu{\tau}_n = \rharpoonu{\tau} \cdot (1 + o(1))$ ($\gamma_n \rightarrow \infty$). \end{itemize} It may seem natural to place the threshold exactly {\it at} the bulk edge. We propose, however, a performance-dominating alternative, with thresholds notably beyond the bulk edge. See Table \ref{tbl-opt-thresh}. \setlength\extrarowheight{2pt} \begin{table}[h!] \centering \begin{tabular}{| c | c | c |} \hline Norm & $\lharpoonu{\tau}(\star)$ & $\rharpoonu{\tau}(\star)$ \\ \hline Frobenius & $4/\sqrt{3}$ & $2+\sqrt{2}$ \\ Operator & $\sqrt{2 (1 + \sqrt{2})}$& $3$ \\ Nuclear & $6 / \sqrt{5}$ & $3+\sqrt{5}$ \\ \hline \hline Bulk Edge & 2 & 1 \\ \hline \end{tabular} \caption{{\bf Optimal thresholding parameters.} Thresholds in rows 2 through 4 are considerably beyond the bulk edge in row 5.
To use these (normalized) thresholds with unnormalized eigenvalues, back-translate: use $\tau_n = \lharpoonu{\phi}_n^{-1}(\lharpoonu{\tau})$ as $\gamma_n \rightarrow 0$ and $\tau_n = \rharpoonu{\phi}_n^{-1}(\rharpoonu{\tau})$ as $\gamma_n \rightarrow \infty$.} \label{tbl-opt-thresh} \end{table} \setlength\extrarowheight{0pt} \begin{definition} We say that $\lharpoonu{\tau}$ is the {\it unique admissible normalized threshold} for asymptotic loss $\lharpoonu{\cal L}_\star( \lharp \ell, \cdot)$ as $\gamma_n \rightarrow 0$ if, for any other deterministic normalized threshold $\lharpoonu{\nu}$, we have \[ \lharpoonu{\cal L}_\star( \overset{\leftharpoonup}{\ell}, H_{\lharpoonu{\tau}}) \leq \lharpoonu{\cal L}_\star( \overset{\leftharpoonup}{\ell}, H_{\lharpoonu{\nu}}) \, , \hspace{2cm} \forall \, \lharp \ell \geq 0 \, , \] with strict inequality at some $\overset{\leftharpoonup}{\ell}' \geq 0$. We similarly define the unique admissible normalized threshold $\rharp \tau$ for $\rharp {\cal L}_\star(\rharp \ell,\cdot)$ as $\gamma_n \rightarrow \infty$. \end{definition} \begin{theorem} \label{thm-opt-thresh} For $\star \in \{ F,O,N\}$, there are unique admissible thresholds $\lharpoonu{\tau}(\star)$ and $\rharpoonu{\tau}(\star)$ for the asymptotic losses $\lharpoonu{\cal L}_ \star( \overset{\leftharpoonup}{\ell}, \overset{\leftharpoonup}{\eta})$ and $\rharpoonu{\cal L}_\star( \rharpoonu{\ell}, \rharpoonu{\eta})$, respectively, given in Table \ref{tbl-opt-thresh}. \end{theorem} \begin{proof} Consider $\gamma_n \rightarrow 0$. Let $\overset{\leftharpoonup}{\ell} \mapsto \lharpoonu{\cal L}_\star(\overset{\leftharpoonup}{\ell},0)$ and $\overset{\leftharpoonup}{\ell} \mapsto \lharpoonu{\cal L}_\star(\overset{\leftharpoonup}{\ell},\tlam)$ denote the asymptotic losses of the null rule $\overset{\leftharpoonup}{\eta}(\tlam)=0$ and the identity rule $\overset{\leftharpoonup}{\eta}(\tlam) = \tlam$, respectively.
In each case of Table \ref{tbl-opt-thresh}, there is a unique crossing point $\lharpoonu{\theta}(\star)$ exceeding the bulk edge such that \[ \lharpoonu{\cal L}_\star(\overset{\leftharpoonup}{\ell},0) < \lharpoonu{\cal L}_\star(\overset{\leftharpoonup}{\ell},\tlam) \, , \quad \overset{\leftharpoonup}{\ell} < \lharpoonu{\theta}(\star) \, ; \qquad \lharpoonu{\cal L}_\star(\overset{\leftharpoonup}{\ell},0) > \lharpoonu{\cal L}_\star(\overset{\leftharpoonup}{\ell},\tlam)\, , \quad \overset{\leftharpoonup}{\ell} > \lharpoonu{\theta}(\star) \, . \] Equality occurs only for $\overset{\leftharpoonup}{\ell} = \lharpoonu{\theta}(\star)$. Calculations of $\lharp \theta(\star)$ are omitted. Define $\lharpoonu{\tau}(\star) = \tlam(\lharpoonu{\theta}(\star))$. Note that \begin{align*} & H_{\lharpoonu{\tau}(\star)}(\tlam) \xrightarrow{a.s.} 0 \, , & \overset{\leftharpoonup}{\ell} < \lharpoonu{\theta} \, , \\ & H_{\lharpoonu{\tau}(\star)}(\tlam) \xrightarrow{a.s.} \tlam(\overset{\leftharpoonup}{\ell}) \, , & \overset{\leftharpoonup}{\ell} > \lharpoonu{\theta} \, . \end{align*} Consequently, \[ \lharp L_{\star,k}(\overset{\leftharpoonup}{\ell}, H_{\lharpoonu{\tau}(\star)}) \xrightarrow{a.s.} \lharpoonu{\cal L}_\star(\overset{\leftharpoonup}{\ell},H_{\lharpoonu{\tau}(\star)}) = \min \big( \lharpoonu{\cal L}_\star(\overset{\leftharpoonup}{\ell},0) , \, \lharpoonu{\cal L}_\star(\overset{\leftharpoonup}{\ell},\tlam) \big) \, . \] Let $\lharpoonu{\nu}$ denote another choice of threshold. Now, for every $\overset{\leftharpoonup}{\ell}$, \[ \lharpoonu{\cal L}_\star( \overset{\leftharpoonup}{\ell}, { H}_{\lharpoonu{\nu}}) \in \big\{ \lharpoonu{\cal L}_\star(\overset{\leftharpoonup}{\ell},0) , \, \lharpoonu{\cal L}_\star(\overset{\leftharpoonup}{\ell},\tlam) \big\}. \] The loss $\lharpoonu{\cal L}_\star( \overset{\leftharpoonup}{\ell},H_{\lharpoonu{\tau}(\star)})$ is the minimum of these two.
Hence, for every $\overset{\leftharpoonup}{\ell}$, \begin{equation} \label{eq-not-worse} \lharpoonu{\cal L}_\star( \overset{\leftharpoonup}{\ell},H_{\lharpoonu{\tau}(\star)}) \leq \lharpoonu{\cal L}_\star( \overset{\leftharpoonup}{\ell},H_{\lharpoonu{\nu}}) \, . \end{equation} Since $\lharpoonu{\nu} \neq \lharpoonu{\tau}(\star)$, there is an intermediate value $\lharpoonu{\theta}'$ between $\lharpoonu{\theta}$ and $\overset{\leftharpoonup}{\ell}(\lharpoonu{\nu})$ such that $\tlam(\lharpoonu{\theta}')$ is intermediate between $\lharpoonu{\tau}(\star)$ and $\lharpoonu{\nu}$. At $\lharpoonu{\theta}'$, one of the two procedures behaves as the null rule while the other behaves as the identity. The two asymptotic loss functions cross only at a single point $\lharpoonu{\theta}(\star)$. Hence, at $\lharpoonu{\theta}'$ the asymptotic loss functions are unequal. By (\ref{eq-not-worse}), \begin{equation} \label{eq-strict-better} \phantom{\,.} \lharpoonu{\cal L}_\star(\lharpoonu{\theta}',H_{\lharpoonu{\tau}(\star)}) < \lharpoonu{\cal L}_\star(\lharpoonu{\theta}',H_{\lharpoonu{\nu}}) \, . \end{equation} Together, (\ref{eq-not-worse}) and (\ref{eq-strict-better}) establish unique asymptotic admissibility. The argument as $\gamma_n \rightarrow \infty$ is similar. \end{proof} Figure \ref{fig-loss-crossing} depicts two of the six cases: Frobenius norm as $\gamma_n \rightarrow 0$, and nuclear norm as $\gamma_n \rightarrow \infty$. \begin{figure}[h!] \centering \includegraphics[height=2.2in]{OptThreshFrobenius.png} \includegraphics[height=2.2in]{OptThreshNuc.png} \\ \caption{{\bf Determining the optimal threshold}. Left: Frobenius norm, $\gamma_n \rightarrow 0$. The two loss functions $\overset{\leftharpoonup}{\ell} \mapsto \lharpoonu{\cal L}_F(\overset{\leftharpoonup}{\ell},0)$, $\overset{\leftharpoonup}{\ell} \mapsto \lharpoonu{\cal L}_F(\overset{\leftharpoonup}{\ell},\tlam)$ cross at a single point $\overset{\leftharpoonup}{\ell}=\lharpoonu{\theta}(F)$.
The optimal threshold is $\lharpoonu{\tau} = \tlam(\lharpoonu{\theta}(F))$. Right: nuclear norm, $\gamma_n \rightarrow \infty$.} \label{fig-loss-crossing} \end{figure} \section{Universal Closed Forms} \label{sec-uni-clo} The spiked covariance model poses a ``scaling dilemma'' for practitioners: \begin{quotation} \sl I only have my one dataset, with its own specific $p$ and $n$. I don't know what asymptotic scaling $(n, p)$ my dataset ``obeys.'' Yet, I have two theories seemingly competing for my favor: proportional and disproportional growth. Each theory has its own optimal formulas. Which should I apply? \end{quotation} \noindent Fortunately, this dilemma can be avoided. \begin{definition} Let $\eta^+(\lambda | L, \gamma)$ denote a closed-form shrinker for the {\it proportional} growth regime, mentioned following Lemma \ref{lem:dg18_7}. Given a dataset of size $n \times p$, define the {\it universal shrinker} \[ \phantom{\,.} \eta^u(\lambda | L) = \eta_n^{u}(\lambda | L ) = \eta^+(\lambda | L, p/n) \, . \] That is, we evaluate $\eta^+$ using the aspect ratio $\gamma_n = p/n$ of the given dataset. This requires {\it no hypothesis} on scaling of $p$ with $n$. We also denote by $\widehat{\Sigma}^u$ the shrinkage estimator $\widehat{\Sigma}_{\eta^u}$. \end{definition} \begin{observation} Adopt loss $L = L_{\star,k}$ for $\star \in \{ F,O,N\}$. The asymptotic shrinkage descriptors of the universal shrinker $\eta^u(\lambda| L)$ are optimal in the proportional and disproportional limits. \end{observation} \begin{enumerate} \item Assume the proportional limit $\gamma_n \rightarrow \gamma$.
The asymptotic shrinkage descriptors, or shrinkage limits, of the optimal proportional-regime rule $\eta^+(\lambda | L, \gamma)$ are \[ \phantom{\,.} \eta_i^+ =\lim_{n \rightarrow \infty} \eta^+(\lambda_{i} | L, \gamma) \, .\] The corresponding shrinkage limits \[ \eta_i^u =\lim_{n \rightarrow \infty} \eta^u(\lambda_{i} | L) = \lim_{n \rightarrow \infty} \eta^+(\lambda_{i} | L, \gamma_n) \] almost surely exist and are identical: \[ \eta_i^u \stackrel{a.s.}{=} \eta_i^+ \, , \qquad i=1,\dots,r \, . \] The asymptotic losses of the two shrinkers as calculated by Lemma \ref{lem:dg18_7} are almost surely identical. \item Assume the disproportional limit $\gamma_n \rightarrow 0$ and varying spikes {\normalfont \bf II}. The shrinkage limits of $\eta^u$ and $\eta^*$ are \[ \overset{\leftharpoonup}{\eta}_i^u = \lim_{n \rightarrow \infty} \lharpoonu{\phi}_n(\eta^u(\lambda_{i} | L)) \, , \hspace{2cm} \overset{\leftharpoonup}{\eta}_i^* = \lim_{n \rightarrow \infty} \overset{\leftharpoonup}{\eta}^*(\lharpoonu{\phi}_n(\lambda_{i}) | L) \, . \] These limits almost surely exist and are identical: \[ \phantom{\,.} \overset{\leftharpoonup}{\eta}_i^u \stackrel{a.s.}{=} \overset{\leftharpoonup}{\eta}_i^* \, , \qquad i=1,\dots,r \, . \] The asymptotic losses of the two shrinkers as calculated by Lemma \ref{lem-dpg-asy-loss} are almost surely identical. \item Assume the disproportional limit $\gamma_n \rightarrow \infty$ and varying spikes {\normalfont \bf III}. The shrinkage limits of $\eta^u$ and $\eta^*$ are \[ \rharpoonu{\eta}_i^u = \lim_{n \rightarrow \infty} \rharpoonu{\phi}_n(\eta^u(\lambda_{i} | L)) \, , \hspace{2cm} \rharpoonu{\eta}_i^* = \lim_{n \rightarrow \infty} \rharpoonu{\eta}^*(\rharpoonu{\phi}_n(\lambda_{i}) | L ) \, . \] These limits almost surely exist and are identical: \[ \rharpoonu{\eta}_i^u \stackrel{a.s.}{=} \rharpoonu{\eta}_i^* \, , \qquad i=1,\dots,r \, .
\] The asymptotic losses of the two shrinkers as calculated by Lemma \ref{lem-dpg-grow-asy-loss} are almost surely identical. \end{enumerate} For example, recall the proportional-regime shrinker for $L_{F,1}$: \begin{align} \phantom{\,.} \eta^+(\lambda | L_{F,1},\gamma) = 1+ (\ell(\lambda, \gamma)-1) c^2(\ell(\lambda, \gamma), \gamma) \, . \label{9267} \end{align} Note that \[ \phantom{\,.} \lharp \phi_n(\ell(\lambda, \gamma_n)) = \frac{\lharp \phi_n(\lambda) - \sqrt{\gamma_n} + \sqrt{(\lharp \phi_n(\lambda) - \sqrt{\gamma_n})^2 - 4}}{2} = \lharp \ell (\lharp \phi_n(\lambda) - \sqrt{\gamma_n}) \, , \] so $\lharp \phi_n(\ell(\lambda_i, \gamma_n)) \xrightarrow{a.s.} \overset{\leftharpoonup}{\ell} (\tlam_i)$. Thus, \begin{align*} \phantom{\,.} \lharp \phi_n( \eta^u(\lambda_i | L_{F,1})) & =\frac{\ell(\lambda_i, \gamma_n) - 1}{\sqrt{\gamma_n}} \cdot c^2(\ell(\lambda_i, \gamma_n ), \gamma_n) \\ &= \lharp \phi_n(\ell(\lambda_i, \gamma_n)) \cdot \frac{1 - 1 / \lharp \phi_n^2(\ell(\lambda_i, \gamma_n))}{1 + \sqrt{\gamma_n} / \lharp \phi_n(\ell(\lambda_i, \gamma_n))} \cdot 1_{\{\tlam_i > 1\}}\\ &\xrightarrow{a.s.} (\overset{\leftharpoonup}{\ell}_i - 1/\overset{\leftharpoonup}{\ell}_i)_+ \, , \end{align*} agreeing with Lemma \ref{lem-shr-opt}. \begin{corollary}\label{cor2} The estimator sequence $\widehat{\Sigma}_{\eta^u}$ is uniquely asymptotically admissible under either the proportional or disproportional growth regimes. \end{corollary} This principle applies more broadly; consider thresholding. Constructed in the previous section as $\gamma_n \rightarrow 0$ and $\gamma_n \rightarrow \infty$, optimal thresholds also exist in the proportional limit $\gamma_n \rightarrow \gamma \in (0,\infty)$. These three choices of threshold, depending on the limit regime, again present a scaling conundrum to practitioners. One can easily check, however, that under each loss there exists a simple closed-form threshold which performs optimally in all three limits.
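The convergence computed above is easy to observe numerically. The sketch below is illustrative only: it uses the standard proportional-regime formulas $\lambda(\ell, \gamma) = \ell + \gamma \ell/(\ell-1)$ and $c^2(\ell, \gamma) = \big(1 - \gamma/(\ell-1)^2\big)/\big(1 + \gamma/(\ell-1)\big)$ together with the normalization $\lharpoonu{\phi}_n(x) = (x-1)/\sqrt{\gamma_n}$, and evaluates $\eta^+$ directly at the spike $\ell$ (equivalent to inverting $\lambda$ exactly):

```python
import numpy as np

# Standard spiked-covariance ingredients (assumed closed forms):
c2 = lambda ell, g: (1.0 - g / (ell - 1.0) ** 2) / (1.0 + g / (ell - 1.0))
eta_plus = lambda ell, g: 1.0 + (ell - 1.0) * c2(ell, g)   # eta^+ for L_{F,1}
phi = lambda x, g: (x - 1.0) / np.sqrt(g)                  # normalization as gamma_n -> 0

ell_bar = 2.0                                  # normalized spike, above the transition
target = ell_bar - 1.0 / ell_bar               # claimed limit (ell_bar - 1/ell_bar)_+

errs = []
for g in [1e-2, 1e-4, 1e-6, 1e-8]:
    ell = 1.0 + ell_bar * np.sqrt(g)           # varying-spikes model II
    errs.append(abs(phi(eta_plus(ell, g), g) - target))
print(errs)                                    # decreasing toward 0
```

In this sketch the error decays like $\sqrt{\gamma_n}$, consistent with the $1+o(1)$ factors in the varying-spikes model.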
Under proportional growth and $L_{F,1}$, the losses of the null and identity rules are \[ {\cal L}_{F,1}( \ell,0) = (\ell-1)^2 \, , \qquad {\cal L}_{F,1}(\ell,\lambda) = (\ell - \lambda(\ell,\gamma))^2 + (\lambda(\ell,\gamma)-1)^2(1 -c^4(\ell,\gamma)) \, . \] $\ell \mapsto {\cal L}_{F,1}(\ell,0)$ is increasing and, for $\ell > \ell_+(\gamma)$, $\ell \mapsto {\cal L}_{F,1}(\ell,\lambda)$ is decreasing. The crossing point $\theta_\gamma(F,1)$ of the two losses ${\cal L}_{F,1}(\theta_\gamma,0) = {\cal L}_{F,1}(\theta_\gamma, \lambda)$ can be shown to be the largest real root of \[ \gamma^3 \theta^2 + 3 \gamma^2 \theta^2 (\theta-1) + \gamma (\theta+1) (\theta-1)^3 = (\theta-1)^5. \] The corresponding threshold is $\lambda_\gamma(F,1) = \lambda(\theta_\gamma(F,1),\gamma)$. One may verify that \[ \lambda_\gamma(F,1) \sim 1 + \sqrt{\gamma} \cdot \lharpoonu{\tau}(F) \, , \quad \gamma \rightarrow 0 \, ; \qquad \lambda_\gamma(F,1) \sim 1 + \gamma \cdot \rharpoonu{\tau}(F) \, , \quad \gamma \rightarrow \infty \, . \] Define the {\it universal threshold rule} $\lambda^u(F,1) = \lambda^u_n(F,1)$ by evaluating the proportional rule with the aspect ratio $\gamma_n = p/n$ of the given dataset: $\lambda^u(F,1) = \lambda_{\gamma_n}(F,1)$. This threshold can be applied as is---it requires no scaling hypothesis. Nevertheless, it is an optimal threshold in both the proportional limit and either disproportional limit. \section{Estimation in the Spiked Wigner model} We now develop a connection to the {\it spiked Wigner model}. Let $W = W_n$ denote a {\it Wigner matrix}, a real symmetric matrix of size $n \times n$ with independent entries on the upper triangle distributed as $\mathcal{N}(0,1)$. The empirical distribution of eigenvalues of $n^{-1/2} W$ converges (weakly almost surely) to $\omega(x) = (2\pi)^{-1} \sqrt{4 - x^2}$, the standard semicircle density with bulk edges $\overline{\lambda}_{\pm} = \pm 2$.
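A quick simulation illustrates the semicircle bulk (a sketch; $n$, the GOE-type diagonal normalization, and the tolerances are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Wigner matrix: symmetric, N(0,1) entries above the diagonal
A = rng.standard_normal((n, n))
W = (A + A.T) / np.sqrt(2.0)

ev = np.linalg.eigvalsh(W / np.sqrt(n))
print(ev.min(), ev.max())       # bulk edges near -2 and +2
print(np.mean(ev ** 2))         # second moment of the semicircle law is 1
```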
Let $\Theta = \Theta_n$ denote a symmetric $n \times n$ ``signal'' matrix of fixed rank $r$; under the {\it spiked Wigner model} observed data $Y = Y_n$ obeys \begin{align} \phantom{\,. }Y = \Theta + \frac{1}{\sqrt{n}} W \,. \end{align} Let $\theta_1 \geq \cdots \geq \theta_{r_+} > 0 > \theta_{r_++1} \geq \dots \geq \theta_{r}$ denote the non-zero eigenvalues of $\Theta$, so there are $r_+$ positive values and $r_- = r - r_+$ negative, and $u_1, \ldots, u_r$ the corresponding eigenvectors. The standard (rank-aware) reconstruction is \[ \phantom{\,.} \widehat \Theta^{r} = \sum_{i=1}^{r_+} \lambda_i(Y) v_i v_i' + \sum_{i=n-r_-+1}^n \lambda_i(Y) v_i v_i'\, , \] where $\lambda_1(Y) \geq \cdots \geq \lambda_n(Y)$ are the eigenvalues of $Y$ and $v_1, \ldots, v_n$ the associated eigenvectors. Maïda \cite{Maida2007}, Capitaine, Donati-Martin and Feral \cite{Capitaine2009}, and Benaych-Georges and Rao Nadakuditi \cite{BGN11}, among others, derive an eigenvalue mapping $\overline{\lambda}(\theta)$ describing the empirical eigenvalues induced by signal eigenvalues $\theta_i$. Their results imply that the top $r_+$ empirical eigenvalues of $Y$ obey $\lambda_i(Y) \xrightarrow{a.s.} \overline{\lambda}(\theta_i)$, $i=1,\dots,r_+$, while the lowest $r_-$ obey $\lambda_{n-i} \xrightarrow{a.s.} \overline{\lambda}(\theta_{r-i})$, $0 \leq i < r_-$. Here the eigenvalue mapping function is defined by \begin{align} \phantom{\,.} \overline{\lambda}(\theta) = \begin{dcases} \theta + \frac{1}{\theta} & |\theta| > 1\\ 2 \, \text{sign}(\theta) & 0 < |\theta| \leq 1 \end{dcases} , \end{align} with phase transitions at $\pm 1$ mapping to bulk edges $\overline{\lambda}_\pm = \pm 2$. There is a partial inverse to $\theta \mapsto \overline{\lambda}(\theta)$: \begin{equation} \label{eq:theta-def} \theta(\lambda) = \begin{dcases} (\lambda + \text{sign}(\lambda) \sqrt{ \lambda^2 - 4})/2 & |\lambda| > 2 \\ 0 & | \lambda | \leq 2 \end{dcases} \, .
\end{equation} Empirical eigenvectors are not perfectly aligned with the corresponding signal eigenvectors: \[ |\langle u_i, v_i \rangle |^2 \xrightarrow{a.s.} \overline c^2(\theta_i) \, , \qquad i \in \{1, \ldots, r_+, n-r_-+1 , \ldots , n\} \, , \] where the cosine function is given by \begin{align} \phantom{\,.} \overline{c}^2(\theta) = \begin{dcases} 1 - \frac{1}{\theta^{2}} & |\theta| > 1\\ 0 & |\theta| \leq 1 \end{dcases} . \label{wig_cos} \end{align} The phenomena of spreading, inflation, and rotation imply that $\widehat \Theta^r$ can be improved upon, substantially, by well-chosen shrinkage estimators: \begin{align} \widehat{\Theta}_\eta = \sum_{i=1}^n \eta(\lambda_i(Y)) v_i v_i' \, , \end{align} with $\eta: \mathbb{R} \rightarrow \mathbb{R}$ a shrinkage function. For numerous loss functions $L$, specific shrinkers $\eta^*( \cdot | L)$ outperform the standard rank-aware estimator $\widehat{\Theta}^r$. We evaluate performance under a fixed-spike model, in which the signal eigenvalues $(\theta_i)_{i=1}^r$ do not vary with $n$. We measure loss using matrix norms $L_{\star, 1}(\Theta,\widehat{\Theta})$, $\star \in \{ F,O,N \}$, as earlier, and evaluate asymptotic loss following the ``shrinkage descriptor'' approach. \begin{lemma} Let $\eta_n$ denote a sequence of shrinkers, possibly varying with $n$. Under the fixed-spike model, suppose that the sequences of shrinker outputs converge: \begin{align*} \eta_n(\lambda_i) & \xrightarrow{a.s.} \overline\eta_i \, , \hspace{-2cm}& 1 \leq i \leq r_+ \, , \\ \eta_n(\lambda_{n-(r-i)}) & \xrightarrow{a.s.} \overline\eta_i \, , \hspace{-2cm} & r_+ < i \leq r \, . \end{align*} As before, we call the limits $(\overline \eta_i)_{i=1}^r$ the asymptotic shrinkage descriptors. Each loss $L_{\star,1}$ converges almost surely to a deterministic limit: \[ \phantom{\,.} L_{\star,1}(\Theta, \widehat \Theta_{\eta_n}) \xrightarrow{a.s.} \overline{\cal L}_{\star}((\theta_i)_{i=1}^r, (\overline{\eta}_i)_{i=1}^r) \, .
\] The asymptotic loss is sum- or max-decomposable into $r$ terms involving matrix norms applied to pivots of the $2 \times 2$ matrices $\widetilde{A}$ and $\widetilde{B}$ introduced earlier. With $\theta_i$ denoting a spike parameter, $\overline{c}(\theta_i)$ the limiting cosine in (\ref{wig_cos}), and $\overline{s}^2(\theta_i) = 1 - \overline c^2(\theta_i)$, the decompositions are \begin{align*} \phantom{\,,} \overline{\cal L}_{F}((\theta_i)_{i=1}^r, (\overline{\eta}_i)_{i=1}^r) & = \bigg( \sum_{i=1}^r \big[ L_{F, 1}\big(\widetilde{A}(\theta_i),\widetilde{B}(\overline{\eta}_i,\overline{c}(\theta_i))\big) \big]^2 \bigg)^{1/2} \,, \\ \overline{\cal L}_{O}((\theta_i)_{i=1}^r, (\overline{\eta}_i)_{i=1}^r) & = \max_{1 \leq i \leq r} L_{O,1} \big (\widetilde{A}(\theta_i),\widetilde{B}(\overline{\eta}_i,\overline{c}(\theta_i))\big) \, , \\ \overline{\cal L}_{N}((\theta_i)_{i=1}^r,(\overline{\eta}_i)_{i=1}^r) & = \sum_{i=1}^r L_{N,1} \big(\widetilde{A}(\theta_i),\widetilde{B}(\overline{\eta}_i,\overline{c}(\theta_i))\big) \, . \end{align*} \end{lemma} Proceeding as before, we obtain closed forms of formally optimal shrinkers and losses, explicit in terms of $\theta$. As in previous sections, asymptotically optimal shrinkers on observables are constructed using the partial inverse $\theta(\lambda)$ (\ref{eq:theta-def}).
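Before stating the optimal shrinkers, the eigenvalue mapping and cosine formulas above can be spot-checked by simulation (a sketch; $n$, $\theta$, the GOE-type diagonal normalization, and the tolerances are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, theta = 1500, 3.0

A = rng.standard_normal((n, n))
W = (A + A.T) / np.sqrt(2.0)                 # GOE-type Wigner matrix

u = np.zeros(n); u[0] = 1.0                  # rank-one signal eigenvector
Y = theta * np.outer(u, u) + W / np.sqrt(n)  # spiked Wigner observation

evals, evecs = np.linalg.eigh(Y)
lam1, v1 = evals[-1], evecs[:, -1]

print(lam1, theta + 1.0 / theta)                     # mapping theta + 1/theta
print(np.dot(u, v1) ** 2, 1.0 - 1.0 / theta ** 2)    # squared cosine 1 - 1/theta^2
```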
\begin{lemma} \label{lem-wig-opt} Formally optimal shrinkers and corresponding losses are given by \begin{align} & \overline{\eta}^*(\theta|F) = \textup{sign}(\theta) (|\theta| - 1/|\theta|)_+ \,,&& [\overline{\cal L}_{F}^1(\theta)]^2 = \begin{dcases} 2 - 1/\theta^2 & |\theta| > 1 \\ \theta^2 & 0 \leq |\theta| \leq 1 \end{dcases} \, , \nonumber \\ & \overline{\eta}^*(\theta|O) = \theta \cdot 1_{\{|\theta| > 1 \}}\, , &&\overline{\cal L}_{O}^1(\theta) =\begin{dcases} 1 & |\theta| > 1\\ |\theta| & 0 < |\theta| \leq 1 \end{dcases} \, ,\label{eq:optloss-wig-oper}\\ & \overline{\eta}^*(\theta|N) = \textup{sign}(\theta) \big(|\theta|-2 /|\theta|\big)_+ \, , && \overline{\cal L}_{N}^1(\theta) = \begin{dcases} 2\sqrt{1-1/|\theta|^2} & |\theta| > \sqrt{2} \\ |\theta| & 0 < |\theta| \leq \sqrt{2} \end{dcases} \, . \nonumber \end{align} \end{lemma} Evidently, these expressions bear a strong formal resemblance to those we found earlier for covariance shrinkage as $\gamma_n \rightarrow 0$: for $x > 0$, \begin{align*} & \overline{\lambda}(x) = \tlam(x) \, , & \overline{c}(x) = \overset{\leftharpoonup}{c}(x) \, , \\ & \overline{\eta}^*(x|\star) = \overset{\leftharpoonup}{\eta}^*(x | \star) \, , & \overline{\cal L}_{\star}^1(x) = \lharp {\cal L}_{\star}^1(x) \,. \end{align*} Such similarities extend to hard thresholding; namely, the $L_{\star,1}$-optimal thresholds $\overline{\tau}(\star)$ for the spiked Wigner model (to which eigenvalue magnitudes are compared) are equal to their counterparts in the $\gamma_n \rightarrow 0$ setting: \[ \phantom{\,.} \overline{\tau}(\star) = \lharpoonu{\tau}(\star) \, , \qquad \star \in \{F,O,N\} \, . \] These are not chance similarities. The empirical spectral distribution of $\gamma_n^{-1/2}(S - I)$ converges as $\gamma_n \rightarrow 0$ to the semicircle law (Bai and Yin \cite{BY88}).
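The closed-form shrinkers of Lemma \ref{lem-wig-opt}, and the threshold equality just noted, can be checked numerically. The sketch below assumes the rank-one pivot loss $\|\theta\, e_1 e_1' - \eta\, ww'\|$ with squared cosine $1 - 1/\theta^2$; a brute-force search recovers the optimal shrinkers, and the null/identity crossing points reproduce the $\gamma_n \rightarrow 0$ column of Table \ref{tbl-opt-thresh}:

```python
import numpy as np

def pivot_losses(theta, etas):
    """Frobenius/operator/nuclear loss of eta*ww' against theta*e1e1'."""
    c2 = max(1.0 - 1.0 / theta ** 2, 0.0)
    c, s = np.sqrt(c2), np.sqrt(1.0 - c2)
    etas = np.asarray(etas, dtype=float)
    a, b, d = theta - etas * c2, -etas * c * s, -etas * (1.0 - c2)
    half_tr, det = 0.5 * (a + d), a * d - b * b
    disc = np.sqrt(np.maximum(half_tr ** 2 - det, 0.0))  # symmetric: real spectrum
    e1, e2 = np.abs(half_tr + disc), np.abs(half_tr - disc)
    return {"F": np.sqrt(e1 ** 2 + e2 ** 2), "O": np.maximum(e1, e2), "N": e1 + e2}

# brute-force optimal shrinkage over a fine grid of eta
etas = np.linspace(0.0, 6.0, 60001)
results = {}
for theta in (1.6, 2.0, 3.0):
    L = pivot_losses(theta, etas)
    for norm in "FON":
        results[(norm, theta)] = etas[np.argmin(L[norm])]
print(results)   # compare with (t - 1/t)_+, t, (t - 2/t)_+

# thresholds: crossing of null (eta = 0) and identity (eta = theta + 1/theta) losses
def crossing(norm):
    g = lambda t: (pivot_losses(t, [0.0])[norm] - pivot_losses(t, [t + 1.0 / t])[norm])[0]
    lo, hi = 1.000001, 10.0
    for _ in range(100):                       # bisection; g is increasing
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    return lo + 1.0 / lo                       # threshold = mapping of crossing spike

print([crossing(norm) for norm in "FON"])      # compare with the table's left column
```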
Spiked covariance formulas as $\gamma_n \rightarrow 0$ for eigenvalue inflation and eigenvector rotation---functions of the limiting spectral distribution---are therefore equivalent to those under the spiked Wigner model. By Lemmas \ref{lim-spg-spike-cosine} and \ref{lem-wig-opt}, this mandates identical shrinkage. In all essential quantitative aspects---eigenvalue inflation, eigenvector rotation, and optimal shrinkers and losses---the $\gamma_n \rightarrow 0$ covariance estimation and spiked Wigner settings are ``isomorphic.'' \section{Bidirectional Spiked Covariance Model} Thus far we have discussed the spiked covariance model assuming the spiked eigenvalues are {\it elevated}, $\ell_i > 1$. We now consider the possibility of {\it depressed} values, $\ell_i < 1$. Earlier results as $\gamma_n \rightarrow 0$ readily adapt to this setting. We adopt the more general spiked covariance model $\ell_{i} = \ell_{i,n} = 1 + \overset{\leftharpoonup}{\ell}_i (1+o(1)) \sqrt{\gamma_n}$ where $\overset{\leftharpoonup}{\ell}_i$ may now be either positive or negative. In the interest of brevity, results in this section are stated informally without proof. Normalized bulk edges lie at $\pm 2$, and phase transitions of normalized spikes occur at $\pm 1$. The ``bidirectional'' eigenvalue mapping function, $\tlam^\pm(\overset{\leftharpoonup}{\ell})$, is the odd extension of the ``unidirectional'' mapping (previously denoted by $\tlam$, now by $\tlam^+$ for clarity): \[ \phantom{\,.} \tlam^\pm(\overset{\leftharpoonup}{\ell}) = \mbox{sign}(\overset{\leftharpoonup}{\ell}) \cdot \tlam^+( | \overset{\leftharpoonup}{\ell} |) \, , \] while the squared cosine function $[\lharp c^{\pm}(\overset{\leftharpoonup}{\ell})]^2 = (1 - |\overset{\leftharpoonup}{\ell}|^{-2})_+$ is the even extension of $(\lharp c^{+})^2$ (previously denoted by $\lharp c^{2}$).
The connection to the spiked Wigner model is now even more apparent; eigenvalue mappings and cosine functions are identical: \[ \tlam^\pm(\overset{\leftharpoonup}{\ell}) = \overline{\lambda}(\overset{\leftharpoonup}{\ell})\, , \hspace{2cm} \overset{\leftharpoonup}{c}^\pm(\overset{\leftharpoonup}{\ell}) = \overline{c}(\overset{\leftharpoonup}{\ell}) \, . \] For Frobenius norm loss, we have the ``bidirectionally optimal'' shrinker \[ \phantom{\,,} \overset{\leftharpoonup}{\eta}^\pm(\overset{\leftharpoonup}{\ell}|F) = \overline{\eta}^*(\overset{\leftharpoonup}{\ell}|F) = \mbox{sign}(\overset{\leftharpoonup}{\ell}) \cdot ( |\overset{\leftharpoonup}{\ell}| - 1/|\overset{\leftharpoonup}{\ell}|)_+ \,, \] the odd extension of the ``unidirectionally optimal'' shrinker, while the optimal (rank-one) loss is $\sqrt{2 - 1/|\overset{\leftharpoonup}{\ell}|^2}$ for $|\overset{\leftharpoonup}{\ell}| > 1$ and $|\overset{\leftharpoonup}{\ell}|$ otherwise, the even extension of $\lharp {\cal L}_F^1$. Similarly, bidirectionally optimal shrinkers and corresponding losses under operator and nuclear norm losses are respectively the odd and even extensions of functions in Lemma \ref{lem-shr-opt}. \section{Conclusion} Although proportional-limit analysis has become popular in recent years, many datasets---perhaps most---have many more rows than columns or many more columns than rows. We have studied eigenvalue shrinkage in each of these disproportional regimes and identified optimal procedures under each, exhibiting closed-form expressions for asymptotically optimal shrinkage functions and corresponding losses. We further identified a single closed-form nonlinearity for each loss function considered which can be ``universally'' applied across the proportional fixed-spike limit or either disproportional varying-spike limit. Equivalent optimal shrinkage rules independently arise for matrix recovery under the spiked Wigner model.
\section*{Acknowledgements} This work was supported by NSF DMS grant 1811614.
\section{Introduction} Optimizing the exploitation of patchy resources is a long-standing dilemma in a variety of search problems, including robotic exploration~\cite{Wawerla:2009}, human decision processes~\cite{Gittins:1973}, and especially in animal foraging~\cite{MP66,Pyke:1977,VLRS11,Charnov:1976}. In foraging, continuous patch-use~\cite{Charnov:1976,Stephens-DW:1986} and random search~\cite{Benichou:2011,VLRS11} represent two paradigmatic exploitation mechanisms. In the former (Fig.~\ref{model}(a)), a forager consumes resources within a patch until a specified depletion level, and concomitant decrease in resource intake rate, is reached before the forager moves to another virgin patch. In his pioneering work~\cite{Charnov:1976}, Charnov predicted the optimal strategy to maximize resource consumption. This approach specifies how fitness-maximizing foragers should use environmental information to determine how completely a food patch should be exploited before moving to new foraging territory. The nature of foraging in an environment with resources that are distributed in patches has been the focus of considerable research in the ecology literature (see e.g., \cite{Charnov:1976,Oaten:1977,G80,Iwasa:1981,M82,OH88,MCS89,VB89,V91,OB06,V06,ECMG07,PS11}); theoretical developments are relatively mature and many empirical verifications of the theory have been found. However, continuous patch use models typically do not account for the motion of the searcher within a patch, and the food intake rate within a patch is given {\it a priori} \cite{Oaten:1977,Iwasa:1981,Green:1984}, so that depletion is deterministic and spatially homogeneous. Random search represents a complementary perspective in which the searcher typically moves by a simple or a generalized random walk. The search efficiency is quantified by the time to reach targets (Fig.~\ref{model}(b)). 
Various algorithms, including L\'evy strategies~\cite{Viswanathan:1999a}, intermittent strategies~\cite{Benichou:2005,Oshanin:2007,Lomholt:2008,Bressloff:2011} and persistent random walks~\cite{Tejedor:2012}, have been shown to minimize this search time under general conditions. However, these models do not consider depletion of the targets. \begin{figure}[!h] \centering \includegraphics[width=240 pt]{tout_3.pdf} \caption{(a) Continuous patch use: a searcher uniformly depletes patch $i$ at a fixed rate for a deterministic time $T$ and moves to a patch $i+1$ when patch $i$ is sufficiently depleted. (b) Random search: a searcher seeks one or a few fixed targets (circles) via a random walk. (c) Our model: a searcher depletes resources within a patch for a random time $T_i$. (d) Model time history. Phase $i$, of duration $\tau_i$, is composed of patch exploitation (duration $T_i$, shadowed) and migration (duration $Z$). The last phase is interrupted at time $t$, either during exploitation (shown here) or migration, and lasts $\tau^*$.} \label{model} \end{figure} Issues that have been addressed to some extent in the above scenarios include the overall influence of resource patchiness (but see~\cite{M82,MCS89,ECMG07,PS11,WRVL15,NWS15} for relevant work), as well as the coupling between searcher motion within patches and resource depletion; the latter is discussed in a different context than that given here in Ref.~\cite{RBL03}. In this work, we introduce a minimal patch exploitation/inter-patch migration model that accounts for the interplay between mobility and depletion from which we are able to explicitly derive the amount of consumed food $F_t$ up to time $t$, determine the optimal search strategy, and test its robustness. \section{The Model} Each patch is modeled as an infinite lattice, with each site initially containing one unit of resource, or food. 
A searcher undergoes a discrete-time random walk within a patch and food at a site is completely consumed whenever the site is first visited. The searcher thus sporadically but methodically depletes the resource landscape. Resources within a patch become scarcer and eventually it becomes advantageous for the searcher to move to a new virgin patch. We implement the scarcity criterion that the searcher leaves its current patch upon wandering for a time $\mathcal{S}$ without encountering food. Throughout this work, all times are rescaled by the (fixed) duration of a random-walk step. Thus $\mathcal{S}$ also represents the number of random-walk steps that the walker can take without finding food. This notion of a specified ``give-up time'' has been validated by many ecological observations~\cite{Krebs:1974,Iwasa:1981,McNair:1983,Green:1984}. The searcher therefore spends a random time $T_i$ and consumes $f_i$ food units in patch $i$, before leaving (Fig.~\ref{model}(d)). We assume, for simplicity, a deterministic migration time $Z$ to go from one patch to the next. We define $t_i$ as the time when the searcher arrives at patch $i+1$ and $\tau_i=t_i-t_{i-1}$ as the time interval between successive patch visits. The duration of phase $i$, which starts at $t_{i-1}$ and consists of exploitation in patch $i$ and migration to patch $i+1$, is $\tau_i \equiv T_i+Z$. Our model belongs to a class of composite search strategies that incorporate: (i) intensive search (patch exploitation) and (ii) fast displacement (migration)~\cite{Benhamou:2007,Plank:2008,Nolting:2015}; here we extend these approaches to account for resource depletion. In addition to its ecological relevance, this exploit/explore duality underlies a wide range of phenomena, such as portfolio optimization in finance~\cite{Gueudre:2014}, knowledge management and transfer~\cite{March:1991}, research and development strategies~\cite{Gittins:1973}, and also everyday life decision making~\cite{Cohen:2007}. 
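The model just described is straightforward to simulate. The sketch below uses illustrative parameters (it is not the simulation reported in the figures): a single run per give-up time, with migration time $Z = 500$ steps; an intermediate $\mathcal{S}$ consumes more food than either extreme:

```python
import random

def total_food(S, Z, t_max, seed=1):
    """Food consumed by time t_max under give-up time S and migration time Z."""
    rng = random.Random(seed)
    t, food = 0, 0
    eaten, x, hungry = {0}, 0, 0    # fresh patch: walker eats the food at the origin
    food += 1
    while t < t_max:
        x += rng.choice((-1, 1))
        t += 1
        if x not in eaten:          # virgin site: eat and reset the give-up clock
            eaten.add(x); food += 1; hungry = 0
        else:
            hungry += 1
            if hungry >= S:         # give up: migrate to a fresh virgin patch
                t += Z
                eaten, x, hungry = {0}, 0, 0
                food += 1
    return food

Z, t_max = 500, 200_000
F = {S: total_food(S, Z, t_max) for S in (1, 200, 40_000)}
print(F)   # intermediate give-up time consumes the most food
```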
We quantify the exploitation efficiency by the amount of consumed food $F_t$ up to time $t$. Note that $F_t$ is also the number of distinct sites that the searcher visits by time $t$, which is known for Markovian random walks~\cite{Weiss,Hughes}. In our model, we need to track \emph{all} previously visited sites in the current patch to implement the scarcity criterion, which renders the dynamics non-Markovian. We first argue that $F_t$ admits a non-trivial optimization in spatial dimensions $d\leq 2$. If a random-walk searcher remains in a single patch forever (pure exploitation; equivalently, $\mathcal{S}\to \infty$), then $F_t$, which coincides with the number of distinct sites visited in the patch, grows sublinearly in time, as $\sqrt{t}$ in $d=1$ and as $t/\ln t$ in $d=2$~\cite{Weiss}. On the other hand, if the searcher leaves a patch as soon as it fails to find food (pure exploration, $\mathcal{S}=1$), $F_t$ clearly grows linearly in time, albeit with a small amplitude that scales as $1/Z$. Thus $F_t$ must be optimized at some intermediate value of $\mathcal{S}$, leading to substantial exploitation of the current patch before migration occurs. \section{The Amount of Food Consumed} \subsection{Formalism} \label{formal} To compute the amount of food consumed, let $m$ be the (random) number of phases completed by time $t$, while the $(m+1)^{\rm st}$ phase is interrupted at time $t$. Then $F_t$ can be written as \begin{subequations} \begin{equation} \label{Ct} F_t=f_1+\ldots+f_{m}+f^*, \end{equation} where $f^*$ denotes the food consumed in this last incomplete phase. Similarly, the phase durations $\{\tau_i\}$ satisfy the sum rule (Fig.~\ref{model}(d)) \begin{equation} \label{sumrule} t=\tau_1+\ldots+\tau_{m}+\tau^*, \end{equation} \end{subequations} where again $\tau^*$ denotes the duration of the last phase. 
Since the food consumed and the duration of the $i^{\rm th}$ phase, $f_i$ and $\tau_i$ respectively, are correlated, the sum rule~\eqref{sumrule} couples the $f_i$'s and the number $m$ of patches visited. The distinct variables $f_i$ and $\tau_i$ are correlated and pairwise identically distributed, except for the last pair $(f^*,\tau^*)$ for the incomplete phase. We will ignore this last pair in evaluating $F_t$, an approximation that is increasingly accurate for large $\mathcal{S}$. We now express the distribution of $F_t$ in terms of the joint distribution of the food consumed in any phase and the duration of any phase, which we compute in $d=1$. For this purpose, we extend the approach developed in~\cite{Godreche:2001} for standard renewal processes to our situation where $f$ and $\tau$ are coupled. To obtain the distribution of $F_t$, it is convenient to work with the generating function $\langle e^{-p F_t} \rangle$, where the angle brackets denote the average over all possible searcher trajectories. This includes integrating over each phase duration, as well as summing over the number of phases and the food consumed in each patch. The generating function can therefore be written as \begin{align} \label{basis} \left\langle e^{-pF_t}\right\rangle=&\sum_{m=0}^\infty\int_{\mathbb{R}^{m}} \kern-0.6em {\rm d}y_1\ldots{\rm d}y_m \sum_{n_1,\ldots,n_m} e^{-p(n_1+\ldots+n_m)}\nonumber\\ &\times {\rm Pr}\big(\{n_i\},\{y_i\},m\big), \end{align} where we now treat the time as a continuous variable in the long-time limit. The second line is the joint probability that the food consumed in each patch is $\{ n_i\}$, that each phase duration is $\{ y_i\}$, and that $m$ phases have occurred; we also ignore the last incomplete phase. From Fig.~\ref{model}(d), the final time $t$ occurs sometime during the $(m+1)^{\rm st}$ phase, so that $t_m<t<t_{m+1}$.
We rewrite the joint probability as the ensemble average of the following expression that equals 1 when the process contains exactly $m$ complete phases of durations $\{ y_i\}$, with $n_i$ units of food consumed in the $i^{\rm th}$ phase, and equals 0 otherwise: \begin{align} {\rm Pr}\big(\{n_i\},\{y_i\},m\big) \!=\!\Big\langle\! \prod_{i=1}^m\delta_{f_i,n_i}\delta(\tau_i-y_i)I(t_m\!<\!t\!<\!t_{m+1})\Big\rangle, \end{align} with the indicator function $I(z)=1$ if the logical variable $z$ is true, and $I(z)=0$ otherwise. We can compute the Laplace transform with respect to the time $t$ of this joint probability (see Appendix~\ref{eq4}) from which the temporal Laplace transform of the generating function $\langle e^{-p F_t} \rangle$ is \begin{align} \label{step1} \int_0^\infty {\rm d}t \, e^{-st}\left\langle e^{-pF_t}\right\rangle =\frac{1-\left\langle e^{-s\tau} \right\rangle_1} {s\left(1-\left\langle e^{-pf-s\tau}\right\rangle_1 \right)}. \end{align} Here $\left\langle e^{-pf-s\tau}\right\rangle_1$ is an ensemble average over the values $(f,\tau)$ for the amount of food consumed in a \emph{single} phase and the duration of this phase; we use the subscript 1 to indicate such an average over a single phase. Equation~\eqref{step1} applies for any distribution of the pair $(f,\tau)$; in particular for any spatial dimension, search process, and distribution of food within patches. \subsection{Detailed Results} We now make Eq.~\eqref{step1} explicit in $d=1$ by calculating $\left\langle e^{-s\tau-pf}\right\rangle_1$. For this purpose, we make use of the equivalence between exploitation of a single patch and the survival of a starving random walk~\cite{Benichou:2014,CBR16}. In this latter model, a random walk is endowed with a metabolic capacity $\mathcal{S}$, defined as the number of steps the walker can take without encountering food before starving. The walker moves on an infinite $d$-dimensional lattice, with one unit of food initially at each site.
Upon encountering a food-containing site, the walker instantaneously and completely consumes the food and can again travel $\mathcal{S}$ additional steps without eating before starving. Upon encountering an empty site, the walker comes one time unit closer to starvation. In our exploitation/migration model, the statistics of $(f,\tau)$ for a searcher that leaves its current patch after $\mathcal{S}$ steps without encountering food coincide with those of the number of distinct sites visited and the lifetime of a starving random walk with metabolic capacity $\mathcal{S}$ at the instant of starvation~\cite{Benichou:2014,CBR16}. In Appendix~\ref{eq5ab}, we determine the full distribution of the pair $(f,\tau)$, from which we finally extract the quantity $\left\langle e^{-pf-s\tau}\right\rangle_1$ in Eq.~\eqref{step1}, where $\tau=T+Z$, with $T$ the (random) time spent in a patch and $Z$ the fixed migration time. The final result is \begin{subequations} \begin{align} \label{step2} \langle e^{-pf-s\tau}\rangle_1=\int_0^\infty \!\! {\rm d}\theta \,P(\theta) \,\,e^{\,[-p\pi\theta\sqrt{\mathcal{S}/2}-s(Z+\mathcal{S})+Q(\theta)]}\,, \end{align} where \begin{align} \begin{split} \label{Q} Q(\theta)&=\exp\bigg[4\int_0^\theta\frac{{\rm d}u}{u}\sum_{j=0}^\infty q_j\bigg]\,,\\ q_j&=\frac{1-e^{-[s\mathcal{S}+{(2j+1)^2}/{u^2}]}}{1+{su^2\mathcal{S}}/{(2j\!+\!1)^2}} -\left(1-e^{-{(2j+1)^2}/{u^2}}\right)\,,\\ P(\theta)&=\frac{4}{\theta}\sum_{j= 0}^\infty e^{-(2j+1)^2/\theta^2} \exp\!\bigg[\!-2\!\sum_{k= 0}^\infty E_1 \big({(2k+1)^2}/{\theta^2}\big)\bigg], \end{split} \end{align} \end{subequations} and $E_1(x)=\int_1^\infty{\rm d}t \, e^{-xt}/t$ is the exponential integral. \begin{figure}[h!] \centering \includegraphics[width=250 pt]{1D_2.pdf} \caption{Scaled mean (a) and variance (b) of the food consumed $F_t$ at $t=5\times 10^5$ steps. Points give numerical results and the curves are the asymptotic predictions in \eqref{mean-var2}.
The migration time $Z$ between patches is 500 steps.} \label{moments} \end{figure} We now focus on the first two moments of $F_t$, whose Laplace transforms are obtained from the small-$p$ expansion of Eq.~\eqref{step1}. By analyzing this expansion in the small-$s$ limit, the long-time behavior of these moments is (with all details given in Appendix~\ref{eq6}): \begin{align} \begin{split} \label{mean-var1} &\frac{\langle F_t\rangle}{t} \sim\frac{\langle f\rangle}{\langle T\rangle +Z}\,, \\[1em] &\frac{{\rm Var} (F_t)}{t} \!\sim\!\frac{\langle f\rangle ^2{\rm Var}(T)}{(\langle T\rangle \!+\!Z)^3}+ \frac{{\rm Var}(f)}{\langle T\rangle \!+\!Z} -2\frac{\langle f\rangle {\rm Cov}(f,T)}{(\langle T\rangle \!+\!Z)^2}\,, \end{split} \end{align} where ${\rm Var}(X)\equiv\langle X^2\rangle -\langle X\rangle^2$ and ${\rm Cov}(X,Y)\equiv\langle XY\rangle-\langle X\rangle\langle Y\rangle$; for simplicity, we now drop the subscript 1. From the small-$p$ and small-$s$ limits of Eqs.~\eqref{step2} and~\eqref{Q}, the limiting behavior of the moments for $\mathcal{S}\gg 1$ is: \begin{align} \begin{split} \label{mean-var2} &\frac{\langle F_t\rangle}{t} \simeq\frac{K_1\sqrt{\mathcal{S}}}{K_2\mathcal{S}+Z}\,, \\[0.1in] &\frac{{\rm Var} (F_t)}{t} \simeq \left[\frac{K_3 \mathcal{S}^3}{(K_2\mathcal{S}\!+\!Z)^3} +\frac{K_4 \mathcal{S}}{K_2\mathcal{S}\!+\!Z} -\frac{K_5 \mathcal{S}^2}{(K_2\mathcal{S}\!+\!Z)^2}\right]\,, \end{split} \end{align} where the $K_i$ are constants that are derived in Appendix~\ref{eq7}. The dependences $\langle f\rangle=K_1\sqrt{\mathcal{S}}$ and $\langle T\rangle=K_2\mathcal{S}$ have simple heuristic explanations (see also~\cite{CBR16}): suppose that the interval where resources have been consumed reaches a length $\sqrt{\mathcal{S}}$. When this critical level of consumption is reached, the forager will typically migrate to a new patch because the time to traverse the resource-free interval will be of the order of $\mathcal{S}$.
Thus the resources consumed in the current patch will be of the order of the length of the resource-free region, namely $\sqrt{\mathcal{S}}$, while the time $\langle T\rangle$ spent in this patch will be of the order of the time $\mathcal{S}$ to traverse this region of length $\sqrt{\mathcal{S}}$. The salient feature from Eq.~\eqref{mean-var2} is that $\langle F_t\rangle$ has a maximum, which occurs when $\mathcal{S}=Z/K_2$, corresponding to $\langle T \rangle = Z$ (Fig.~\ref{moments}). That is, the optimal strategy to maximize food consumption is to spend the same time exploiting each patch and migrating between patches. It is worth mentioning that we can reproduce the first of Eqs.~\eqref{mean-var1} by neglecting correlations between $f$ and $T$. In this case $\langle F_t\rangle$ is simply the average amount of food $\langle f \rangle$ consumed in a single patch multiplied by the mean number $t/(\langle T\rangle+Z)$ of patches explored at large time $t$. However, this simple calculational approach fails to account for the role of fluctuations, specifically the covariance between $f$ and $T$, in the variance of $F_t$. In fact, the covariance term (last term in Eq.~\eqref{mean-var1}) reduces fluctuations in food consumption by a factor three compared to the case where correlations are neglected. \section{Extensions} The optimal strategy outlined above is robust and holds under quite general conditions, including, for example: (i) randomly distributed food within a patch, and (ii) searcher volatility. For (i), suppose that each lattice site initially contains food with probability $\rho$. To show that the optimal search strategy is independent of $\rho$, we again exploit the mapping onto starving random walks in the limit $\mathcal{S}\gg 1/\rho$. A density of food $\rho$ corresponds to an effective lattice spacing that is proportional to $\rho^{-1/d}$, with $d$ the spatial dimension. 
For large $\mathcal{S}$, this effective lattice spacing has a negligible effect on the statistics of the starving random walk. Both the mean lifetime and mean number of distinct sites visited are the same as in the case where the density of food equals 1. However, because the probability to find food at a given site is $\rho$, the amount of food consumed differs from the number of distinct sites visited by an overall factor $\rho$. Thus the food consumed at time $t$ (the first of Eqs.~\eqref{mean-var2}) is simply \begin{equation} \frac{\langle F_t\rangle}{t} \simeq \rho\, \frac{K_1\sqrt{\mathcal{S}}}{K_2\mathcal{S}+Z} \,. \end{equation} Consequently, the optimal search strategy occurs for the same conditions as the case where each site initially contains food (Fig.~\ref{extensions}(a)). For the second attribute, suppose that the searcher has a fixed probability $\lambda$ to leave the patch at each step, independent of the current resource density, rather than migrating after taking $\mathcal{S}$ steps without encountering food. The residence time of the searcher on a single patch thus follows an exponential distribution with mean $\lambda^{-1}$. The exploitation of a single patch can now be mapped onto the \emph{evanescent} random walk model, in which a random walk dies with probability $\lambda$ at each step~\cite{Yuste:2013}, and for which the mean number of distinct sites visited has recently been obtained in one dimension. Since Eq.~\eqref{step1}, and thus Eqs.~\eqref{mean-var1}, still hold for any distribution of times spent in each patch, we can merely transcribe the results of~\cite{Yuste:2013} (in particular their Eq.~(7) and the following text) to immediately find that the average food consumed at time $t$ is \begin{align} \frac{\langle F_t\rangle}{t}\sim\,\,\frac{\sqrt{\coth{{\lambda}/{2}}}}{Z+\lambda^{-1}}~. \end{align} Now $\langle F_t \rangle$ is maximized for $1/\lambda\simeq Z$ in the $Z \gg 1$ limit. 
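Both maximizations can be located with a short numerical sketch. The constants $K_1$ and $K_2$ below are arbitrary placeholders, not the values derived in the Appendix; the location of the optimum depends only on $K_2$ and $Z$, so the check is insensitive to this choice.

```python
import numpy as np

Z = 500.0  # fixed migration time between patches

# Fixed-threshold strategy, Eq. (mean-var2): <F_t>/t ~ K1*sqrt(S)/(K2*S + Z).
# K1, K2 are placeholder constants; the optimum S* = Z/K2 does not depend on K1.
K1, K2 = 1.0, 2.0
S = np.linspace(1.0, 2000.0, 400001)
S_opt = S[np.argmax(K1 * np.sqrt(S) / (K2 * S + Z))]

# Volatile searcher: <F_t>/t ~ sqrt(coth(lam/2))/(Z + 1/lam); for Z >> 1 the
# maximum is at a mean residence time 1/lam close to Z.
u = np.linspace(10.0, 5000.0, 400001)  # u = 1/lam, the mean residence time
lam = 1.0 / u
u_opt = u[np.argmax(np.sqrt(1.0 / np.tanh(lam / 2.0)) / (Z + u))]

print(S_opt, Z / K2)  # grid optimum vs. analytic S* = Z/K2
print(u_opt, Z)       # grid optimum vs. 1/lam ~ Z
```

In both cases the grid maximum sits where the mean exploitation time matches the migration time, which is the statement made in the text.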
Again, the optimal strategy is to spend the same amount of time on average in exploiting a patch and in migrating between patches (Fig.~\ref{extensions}(a)). For the ecologically-relevant case of two-dimensional resource patches, the average amount of food consumed is governed by a similar optimization as in $d=1$ (Fig.~\ref{extensions}(b)). While the description of the two-dimensional case does not appear to be analytically tractable, we numerically find that the optimal strategy consists of spending somewhat more time exploiting a single patch than migrating between patches. This inclination arises because patch exploitation---whose efficiency is quantified by the average number of distinct sites visited by a given time---is relatively more rewarding in two than in one dimension~\cite{Weiss,Hughes}. \begin{figure}[h!] \centering \includegraphics[width=250 pt]{figure3.pdf} \caption{(a) Average food consumed in $d=1$ when the food is Poisson distributed with density $\rho=0.1$ (\textcolor{blue}{$\times$}) and $\rho=1$ ($\bullet$), and when the searcher has a constant probability at each step to leave the patch (\textcolor{red}{$\blacktriangledown$}). (b) Average food consumed in $d=2$ for food density $\rho=1$. The inter-patch travel time $Z=50$ for all cases.} \label{extensions} \end{figure} \section{Summary} To summarize, we introduced a minimal patch exploitation/inter-patch migration model that quantifies the couplings between searcher motion within patches, resource depletion, and migration to new patches. Our model may provide a first step toward understanding more realistic ecological foraging, where effects such as predation of the forager~\cite{H79,R10}, heterogeneous travel times between patches~\cite{R12}, and more complex motions than pure random walks~\cite{RB09,BCRLMC16} are surely relevant.
On the theoretical side, our model can also be viewed as a resetting process, in which a random walker stochastically resets to a new position inside a virgin patch. In contrast to existing studies~\cite{Evans:2011a,Evans:2011b,Pal:2015,Nagar:2015,Eule:2016}, the times between resets are not given {\it a priori} but determined by the walk itself. This modification may open a new perspective in the burgeoning area of resetting processes. Financial support for this research was provided in part by starting grant FPTOpt-277998 (OB), by grants DMR-1608211 and DMR-1623243 from the National Science Foundation and from the John Templeton Foundation (SR), and by the Investissement d'Avenir LabEx PALM program Grant No.\ ANR-10-LABX-0039-PALM (MC).
\section{Introduction} Gamma-ray lines from solar flares were first observed in 1972 with the gamma-ray spectrometer (GRS) aboard the {\it OSO-7} satellite \citep{chu73}. Since then, repeated observations with various space missions, including the {\it Solar Maximum Mission}/GRS (e.g. Share \& Murphy 1995), all four {\it Compton Gamma Ray Observatory} instruments (e.g. Share, Murphy, \& Ryan 1997) and the {\it Reuven Ramaty High Energy Solar Spectroscopic Imager }({\it RHESSI}; e.g. Lin et al. 2003), have firmly established gamma-ray astronomy as an important tool for studying the active sun. Prompt gamma-ray lines are produced from deexcitation of nuclei excited by nuclear interactions of flare-accelerated particles with the solar atmosphere. Detailed spectroscopic analyses of this emission have furnished valuable information on the composition of the ambient flare plasma, as well as on the composition, energy spectrum and angular distribution of the accelerated ions (e.g. Ramaty \& Mandzhavidze 2000; Share \& Murphy 2001; Lin et al. 2003; Kiener et al. 2006). Additional information about the density and temperature of the ambient plasma is obtained from the positron-electron annihilation line at 0.511 MeV (Murphy et al. 2005) and the neutron capture line at 2.223 MeV (Hua et al. 2002 and references therein). The bombardment of the solar atmosphere by flare-accelerated ions can also synthesize radioactive nuclei, whose decay can produce observable, delayed gamma-ray lines in the aftermath of large flares. One of the most promising of such lines is at 846.8~keV resulting from the decay of $^{56}$Co (half-life $T_{1/2}$=77.2~days) into the first excited state of $^{56}$Fe \citep{ram00,koz02}. \citet{ram00} calculated the time dependence of the 846.8~keV line emission that would have been expected after the 6 X-class flares of June 1991. Smith et al. 
are now searching for this delayed line emission with the {\it RHESSI} spectrometer after the very intense series of flares that occurred between 2003 October 28 and November 4 (the analysis is in progress, D. Smith 2006, private communication). The observation of solar radioactivity can be important for at least two reasons. First, the radioisotopes can serve as tracers to study mixing processes in the solar atmosphere \citep{ram00}. Second, their detection should provide new insight into the spectrum and fluence of flare-accelerated ions. In particular, since the accelerated heavy nuclei are believed to be significantly enhanced as compared to the ambient medium composition (e.g. Murphy et al. 1991), the radioisotopes are expected to be predominantly produced by interactions of fast heavy ions with ambient hydrogen and helium. Thus, the delayed line emission can provide a valuable measurement of the accelerated metal enrichment. We performed a systematic study of the radioactive line emission expected after large solar flares. In addition to gamma-ray lines emitted from deexcitation of daughter nuclei, we considered radioactivity X-ray lines that can be produced from the decay of proton-rich isotopes by orbital electron capture or the decay of isomeric nuclear levels by emission of a conversion electron. We also treated the positron-electron annihilation line resulting from the decay of long-lived $\beta^+$-emitters. The radioisotopes which we studied are listed in Table~1, together with their main decay lines. We selected radioactive X- and gamma-ray line emitters that can be significantly produced in solar flares (see \S~2) and with half-lives between $\sim$10~min, which is the typical duration of large gamma-ray flares \citep{ves99}, and 77.2~days ($^{56}$Co half-life).
We neglected radioisotopes with mean lifetime $\tau_r$ greater than that of $^{56}$Co, because (1) their activity ($\dot{N_r}=N_r/\tau_r$) is lower and (2) their chance of surviving at the solar surface is also lower. In \S~2, we present the total cross sections for the production of the most important radioactive nuclei. In \S~3, we describe our thick-target yield calculations of the radioisotope synthesis. The results for the delayed line emission are presented in \S~4. Prospects for observations are discussed in \S~5. \section{Radioisotope production cross sections} All of the radioactive nuclei shown in Table~1 can be significantly produced in solar flares by H and He interactions with elements among the most abundant of the solar atmosphere and accelerated particles: He, C, N, O, Ne, Mg, Al, Si, S, Ar, Ca, Cr, Mn, Fe and Ni\footnote{\citet{kuz05} recently claimed that nuclear interactions between fast and ambient heavy nuclei can be important for the formation of rare isotopes in solar flares. We evaluated the significance of these reactions by using the universal parameterization of total reaction cross sections given by \citet{tri96}. Assuming a thick target interaction model, a power-law source spectrum for the fast ions and standard compositions for the ambient and accelerated nuclei (see \S~3), we found that the heavy ion collisions should contribute less than a few percent of the total radioisotope production and can therefore be safely neglected.}. We did not consider the production of radioisotopes with atomic number $Z>30$. We also neglected a number of very neutron-rich nuclei (e.g. $^{28}$Mg, $^{38}$S...), whose production in solar flares should be very low. Most of the radioisotopes listed in Table~1 are proton-rich, positron emitters. Their production by proton and $\alpha$-particle reactions with the abundant constituents of cosmic matter was treated in detail by Kozlovsky, Lingenfelter, \& Ramaty (1987). 
The work of these authors was extended by Kozlovsky, Murphy, \& Share (2004) to include $\beta^+$-emitter production from the most important reactions induced by accelerated $^3$He. However, new laboratory measurements have allowed us to significantly improve the evaluation of a number of cross sections for the production of important positron emitters. In the following, we present updated cross sections for the formation of $^{34}$Cl$^m$, $^{52}$Mn$^g$, $^{52}$Mn$^m$, $^{55}$Co, $^{56}$Co, $^{57}$Ni, $^{58}$Co$^g$, $^{60}$Cu, and $^{61}$Cu, by proton, $^3$He and $\alpha$ reactions in the energy range 1--10$^3$ MeV nucleon$^{-1}$. In addition, we evaluate cross sections for the production of the 4 radioactive nuclei of Table~1 which are not positron emitters: $^7$Be, $^{24}$Na, $^{56}$Mn and $^{58}$Co$^m$. The reactions which we studied are listed in Table~2. We considered proton, $^3$He and $\alpha$-particle interactions with elements of atomic numbers close to that of the radioisotope of interest and that are among the most abundant of the solar atmosphere. Spallation reactions with more than 4 outgoing particles were generally not selected, because their cross sections are usually too low and their threshold energies too high to be important for solar flares. We generally considered reactions with elements of natural isotopic compositions, because, except for H and the noble gases, the terrestrial isotopic compositions are representative of the solar isotopic abundances \citep{lod03}. Furthermore, most of the laboratory measurements we used were performed with natural targets. Most of the cross section data were extracted from the EXFOR database for experimental reaction data\footnote{See http://www.nndc.bnl.gov/exfor/.}. When laboratory measurements were not available or did not cover the full energy range, we used 3 different nuclear reaction codes to obtain theoretical estimates. 
Below a few hundred MeV (total kinetic energy of the fast particles), we performed calculations with both EMPIRE-II (version 2.19; Herman et al. 2004) and TALYS (version 0.64; Koning, Hilaire, \& Duijvestijn 2005). These computer programs account for major nuclear reaction models for direct, compound, pre-equilibrium and fission reactions. They include comprehensive libraries of nuclear structure parameters, such as masses, discrete level properties, resonances and gamma-ray parameters. The TALYS and EMPIRE-II calculations were systematically compared with available data and the agreement was generally found to be better than a factor of 2. We obtained, however, more accurate predictions for isomeric cross sections with TALYS than with EMPIRE-II. Above the energy range covered by TALYS and EMPIRE-II, we used the ``Silberberg \& Tsao code'' (Silberberg, Tsao, \& Barghouty 1998 and references therein) when experimental cross section data for proton-nucleus spallation reactions were lacking. This code is based on the semiempirical formulation originally developed by \citet{sil73} for estimates of cross sections needed in cosmic-ray physics. It has been updated several times as new cross section measurements have become available \citep{sil98}. For spallation reactions induced by $\alpha$-particles above $\sim$100~MeV nucleon$^{-1}$, we used the approximation (Silberberg \& Tsao 1973) \begin{equation} \sigma_\alpha(E)=X\sigma_p(4E)~, \end{equation} where $E$ is the projectile kinetic energy per nucleon, $\sigma_\alpha$ and $\sigma_p$ are the cross sections for the $\alpha$-particle- and proton-induced reactions leading to the same product, and \begin{equation} X = \left\{ \begin{array}{ll} 1.6 & \rm{for~} \Delta A \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 3 \\ 2 & \rm{for~} \Delta A > 3~, \end{array} \right. \end{equation} where $\Delta A$ is the difference between target and product masses.
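In code, the scaling of equations (1) and (2) is a one-line transformation of a proton excitation function. The sketch below is illustrative only: \verb|sigma_p_toy| is a hypothetical placeholder, not one of the measured excitation functions discussed here.

```python
def sigma_alpha(E, sigma_p, delta_A):
    """Alpha-induced cross section from Eqs. (1)-(2): sigma_alpha(E) = X * sigma_p(4E).

    E is the projectile kinetic energy per nucleon, so 4E is the total kinetic
    energy passed to the equivalent proton reaction; X depends on the
    target-product mass difference Delta_A (the "less than or about 3" condition
    of Eq. (2) is implemented here as <=, an assumption of this sketch).
    """
    X = 1.6 if delta_A <= 3 else 2.0
    return X * sigma_p(4.0 * E)

# Hypothetical proton excitation function (mb), for illustration only.
def sigma_p_toy(E_total):
    return 30.0 * (E_total / 100.0) ** -0.5 if E_total > 100.0 else 0.0

print(sigma_alpha(100.0, sigma_p_toy, delta_A=2))  # 1.6 * sigma_p(400 MeV) = 24.0 mb
```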
\subsection{$^7$Be Production} The relevant cross sections for $^7$Be production are shown in Figure~1. The cross section for the reaction $^4$He($\alpha$,$n$)$^7$Be (dashed curve labeled ``$^4$He'' in Fig.~1) is from the measurements of \citet{kin77} from 9.85 to 11.85~MeV nucleon$^{-1}$ and Mercer et al. (2001 and references therein) above $\sim$15.4~MeV nucleon$^{-1}$. The cross sections for the proton reactions with $^{12}$C, $^{14}$N and $^{16}$O (solid curves in Fig.~1) are from the extensive measurements of Michel et al. (1997 and references therein). The cross sections for the $\alpha$-particle reactions with $^{12}$C, $^{14}$N and $^{16}$O (dashed curves labeled ``$^{12}$C'', ``$^{14}$N'' and ``$^{16}$O'' in Fig.~1) are from the measurements of \citet{lan95} below 42 MeV nucleon$^{-1}$ and from TALYS calculations at 50 and 62.5~MeV nucleon$^{-1}$. At higher energies, we used the data compilation of \citet{rea84} and assumed the $^7$Be production cross sections to be half of the isobaric cross sections for producing the mass $A$=7 fragment from spallation of the same target isotope. The cross section for the reaction $^{12}$C($^3$He,$x$)$^7$Be (dotted curve in Fig.~1) is from \citet{dit94} below 9.1~MeV nucleon$^{-1}$ and from TALYS calculations from 10 to 83.3~MeV nucleon$^{-1}$. At higher energies, we extrapolated the cross section assuming the same energy dependence as the one for the $^{12}$C($\alpha$,$x$)$^7$Be reaction. We neglected the production of $^7$Be from $^3$He reactions with $^{14}$N and $^{16}$O. \subsection{$^{24}$Na Production} The relevant cross sections for $^{24}$Na production are shown in Figure~2. The cross section for the reaction $^{25}$Mg($p$,2$p$)$^{24}$Na is from \citet{mea51} below 105 MeV, \citet{ree69} in the energy range 105--300~MeV and above 400~MeV, and \citet{kor70} at 300 and 400~MeV. The cross section for the reaction $^{26}$Mg($p$,2$pn$)$^{24}$Na is also from \citet{mea51} and \citet{kor70} below 400~MeV.
Its extrapolation at higher energies was estimated from calculations with the Silberberg \& Tsao code. The cross sections for the proton reactions with $^{27}$Al and $^{nat}$Si are from the measurements of Michel et al. (1997). The cross section for the reaction $^{nat}$Mg($\alpha$,$x$)$^{24}$Na is from the data of \citet{lan95} below 42 MeV nucleon$^{-1}$ and from TALYS calculations at 50 and 62.5~MeV nucleon$^{-1}$. Above 100 MeV nucleon$^{-1}$, the $\alpha$+$^{nat}$Mg cross section was estimated from equations~(1) and (2), and the $p$+$^{25}$Mg and $p$+$^{26}$Mg cross sections discussed above. The cross sections for the reactions $^{22}$Ne($\alpha$,$pn$)$^{24}$Na and $^{22}$Ne($^3$He,$p$)$^{24}$Na are not available in the literature and were estimated from calculations with the TALYS and EMPIRE-II codes, respectively. \subsection{$^{34}$Cl$^m$ Production} Shown in Figure~3 are cross sections for production of the first excited (isomeric) state of $^{34}$Cl ($^{34}$Cl$^m$, $T_{1/2}=32$~min) at an excitation energy of 146.4~keV. The cross section data for these reactions are scarce. The cross section for the reaction $^{32}$S($^3$He,$p$)$^{34}$Cl$^m$ was measured by \citet{lee74} from 1.4 to $\sim$7.3~MeV nucleon$^{-1}$. Its rapid fall at higher energies was estimated from TALYS calculations. The cross section of the reaction $^{34}$S($p$,$n$)$^{34}$Cl$^m$ was measured by \citet{hin52} from threshold to $\sim$90~MeV. However, as the decay scheme of $^{34}$Cl was not well known in 1952, it is not clear which fraction of the isomeric state was populated in this experiment. We thus did not use these data, but have estimated the cross section from TALYS calculations. We also used theoretical evaluations from the TALYS and the Silberberg \& Tsao codes for the reactions $^{32}$S($\alpha$,$pn$)$^{34}$Cl$^m$ and $^{sol}$Ar($p$,$x$)$^{34}$Cl$^m$. 
In this latter reaction, the notation $^{sol}$Ar means Ar of solar isotopic composition\footnote{The solar isotopic composition of Ar and the other noble gases are very different from their terrestrial isotopic compositions (see Lodders 2003 and references therein).} and the cross section was obtained by weighting the cross sections for proton reactions with $^{36}$Ar, $^{38}$Ar and $^{40}$Ar by the relative abundances of these three isotopes in the solar system. \subsection{$^{52}$Mn$^{g,m}$ Production} The production of the ground state of $^{52}$Mn ($^{52}$Mn$^g$, $T_{1/2}=5.59$~days) and of the isomeric level at 377.7~keV ($^{52}$Mn$^m$, $T_{1/2}=21.1$~min) are both important for the delayed line emission of solar flares. The relevant cross sections are shown in Figure~4. The data for the production of the isomeric pair $^{52}$Mn$^g$ and $^{52}$Mn$^m$ in $p$+$^{nat}$Cr collisions are from \citet{win62} from 5.8 MeV to 10.5~MeV; \citet{wes87} from 6.3 to $\sim$26.9~MeV; \citet{kle00} from $\sim$17.4 to $\sim$38.1~MeV; and \citet{reu69} at 400~MeV. It is noteworthy that TALYS simulations for these reactions were found to be in very good agreement with the data, which demonstrates the ability of this code to predict accurate isomeric state populations. The cross section for the reaction $^{nat}$Fe($p$,$x$)$^{52}$Mn$^g$ is from \citet{mic97}. We estimated the cross section for the production of the isomer $^{52}$Mn$^m$ in $p$+$^{nat}$Fe collisions by multiplying the cross section for the ground state production by the isomeric cross section ratio $\sigma_m / \sigma_g$ calculated with the TALYS code. The cross sections for the production of $^{52}$Mn$^g$ and $^{52}$Mn$^m$ from $\alpha$+$^{nat}$Fe interactions are also from TALYS calculations below 62.5~MeV nucleon$^{-1}$. They were extrapolated at higher energies using equations~(1) and (2), and the $p$+$^{nat}$Fe cross sections discussed above. 
Also shown in Figure~4 is the cross section for the reaction $^{nat}$Cr($^3$He,$x$)$^{52}$Mn$^m$, which is based on the data of \citet{fes94} below $\sim$11.7~MeV nucleon$^{-1}$ and TALYS calculations at higher energies. \subsection{$^{56}$Mn Production} The relevant cross sections for $^{56}$Mn production are shown in Figure~5. The laboratory measurements for the production of this radioisotope are few. We used the experimental works of the following authors: \citet{wat79} for the reaction $^{55}$Mn($^3$He,2$p$)$^{56}$Mn from $\sim$3.8 to $\sim$12.9~MeV nucleon$^{-1}$; Michel, Brinkmann, \& St\"uck (1983a) for the reaction $^{55}$Mn($\alpha$,2$pn$)$^{56}$Mn from 6.1 to $\sim$42.8~MeV nucleon$^{-1}$; and Michel, Brinkmann, \& St\"uck (1983b) for the reaction $^{nat}$Fe($\alpha$,$x$)$^{56}$Mn from $\sim$13.8 to $\sim$42.8~MeV nucleon$^{-1}$. The excitation functions for these 3 reactions were complemented with theoretical estimates from TALYS. The cross section for the reaction $^{57}$Fe($p$,2$p$)$^{56}$Mn is entirely based on nuclear model calculations, from EMPIRE-II below 100 MeV and the Silberberg \& Tsao code at higher energies. \subsection{$^{55}$Co, $^{56}$Co and $^{57}$Ni Production} The relevant cross sections for production of $^{55}$Co, $^{56}$Co and $^{57}$Ni are shown in Figures~6, 7 and 8, respectively. The cross sections for the proton reactions with $^{nat}$Fe and $^{nat}$Ni are based on the data of \citet{mic97}. For $^{56}$Co production by $p$+$^{nat}$Fe and $p$+$^{nat}$Ni collisions, we also used the works of \citet{tak94} and \citet{tar91}, respectively. The cross sections for the $\alpha$-particle reactions with $^{nat}$Fe are from \citet{tar03b} below 10.75~MeV nucleon$^{-1}$, \citet{mic83b} in the energy range $\sim$12.3--42.8~MeV nucleon$^{-1}$ and TALYS calculations at 50 and 62.5~MeV nucleon$^{-1}$.
These cross sections were extrapolated to higher energies assuming that they have energy dependences similar to those of the $p$+$^{nat}$Fe reactions (see eqs.~[1] and [2]). The cross sections for the $\alpha$-particle reactions with $^{nat}$Ni are based on the data of \citet{mic83b}. For the reaction $^{nat}$Ni($\alpha$,$x$)$^{57}$Ni, we also used the measurements of Tak\'acs, T\'ark\'anyi, \& Kovacs (1996) below $\sim$6.1~MeV nucleon$^{-1}$. The procedure to estimate the $\alpha$+$^{nat}$Ni cross sections above 50~MeV nucleon$^{-1}$ was identical to the one discussed above for the $\alpha$+$^{nat}$Fe cross sections. The cross sections for the $^3$He reactions with $^{nat}$Fe are based on the data of \citet{tar03a} from $\sim$4.1 to $\sim$8.5~MeV nucleon$^{-1}$, the data of \citet{haz65} from 1.9 to $\sim$19.8~MeV nucleon$^{-1}$, and TALYS calculations. The measurements of \citet{haz65} were performed with targets enriched in $^{56}$Fe. To estimate the cross section for $^3$He+$^{nat}$Fe collisions from their data, we multiplied the measured cross section by 0.92, the relative abundance of $^{56}$Fe in natural iron. We neglected the production of $^{55}$Co and $^{56}$Co by $^3$He+$^{nat}$Ni interactions. For the reaction $^{nat}$Ni($^3$He,$x$)$^{57}$Ni, we used the data of \citet{tak95} below $\sim$11.7~MeV nucleon$^{-1}$ and EMPIRE-II calculations up to 80~MeV nucleon$^{-1}$. At higher energies, the cross section was extrapolated assuming an energy dependence similar to the one of the $^{nat}$Ni($\alpha$,$x$)$^{57}$Ni cross section. \subsection{$^{58}$Co$^{g,m}$ Production} The relevant cross sections for the production of the isomeric pair $^{58}$Co$^{g,m}$ are shown in Figures~9a and b. The isomeric state of $^{58}$Co ($^{58}$Co$^m$, $T_{1/2}=9.04$~hours) is the first excited level at 24.9~keV. It decays to the ground state by the conversion of a K-shell electron, thus producing a Co K$\alpha$ line emission at 6.92~keV.
Because the ground state $^{58}$Co$^g$ has a much longer half-life, $T_{1/2}=70.9$~days, we considered for its production the total cross section for the formation of the isomeric pair, $\sigma_t = \sigma_m + \sigma_g$. Isomeric cross section ratios $\sigma_m / \sigma_t$ were measured for various reaction channels by Sud\'ar \& Qaim (1996 and references therein). We used their data for the reaction $^{55}$Mn($\alpha$,$n$)$^{58}$Co$^{g,m}$ below $\sim$6.3~MeV nucleon$^{-1}$. At higher energies, we used the $\sigma_m$ and $\sigma_t$ measurements of \citet{mat65} and \citet{riz89}, respectively. The total cross section for the reaction $^{nat}$Fe($\alpha$,$x$)$^{58}$Co is based on the data of \citet{iwa62} from 4.4 to 9.65~MeV nucleon$^{-1}$ and \citet{mic83b} in the energy range $\sim$6.5--42.8~MeV nucleon$^{-1}$. In the absence of data for the isomer formation in $\alpha$+$^{nat}$Fe collisions, we estimated the cross section by multiplying the cross section for the total production of $^{58}$Co by the isomeric ratio $\sigma_m / \sigma_t$ calculated with the TALYS code. The total cross section for the reaction $^{nat}$Fe($^3$He,$x$)$^{58}$Co is from \citet{haz65} and \citet{tar03a} below $\sim$8.5~MeV nucleon$^{-1}$. At higher energies, it is based on TALYS calculations. The cross section for the isomeric state population in $^3$He+$^{nat}$Fe collisions is also from simulations with the TALYS code. The cross sections for $^{58}$Co$^{g,m}$ production from proton, $^3$He and $\alpha$ reactions with $^{nat}$Ni are shown in Figure~9b. The total cross sections $\sigma_t$ are based on the data of \citet{mic97}, \citet{tak95} below $\sim$11.7~MeV nucleon$^{-1}$, and \citet{mic83b} below $\sim$42.7~MeV nucleon$^{-1}$, for the proton, $^3$He and $\alpha$ reactions, respectively. To extrapolate the cross sections for the $^3$He and $\alpha$ reactions, we used the TALYS code and the approximation described by equations (1) and (2).
Data for the isomeric state population are lacking and we estimated the $\sigma_m$ cross sections as above, i.e. from TALYS calculations of the isomeric ratios $\sigma_m / \sigma_t$. \subsection{$^{60}$Cu and $^{61}$Cu Production} The relevant cross sections for the production of $^{60}$Cu and $^{61}$Cu are shown in Figure~10. The cross sections for the production of $^{60}$Cu are based on the data of \citet{bar75} below $\sim$17~MeV, \citet{mur78} below $\sim$8.8~MeV nucleon$^{-1}$, and \citet{tak95} below $\sim$11.7~MeV nucleon$^{-1}$, for the proton, $\alpha$-particle and $^3$He reactions with $^{nat}$Ni, respectively. The cross section for the reaction $^{nat}$Ni($^3$He,$x$)$^{61}$Cu is also from \citet{tak95} below $\sim$11.7~MeV nucleon$^{-1}$. The cross section for the reaction $^{nat}$Ni($\alpha$,$x$)$^{61}$Cu was constructed from the data of \citet{tak96} below $\sim$6.1~MeV nucleon$^{-1}$, \citet{mur78} from $\sim$2.5 to $\sim$9.2~MeV nucleon$^{-1}$, and \citet{mic83b} in the energy range $\sim$4.2--42.7~MeV nucleon$^{-1}$. All these cross sections were extrapolated to higher energies by means of TALYS calculations. \section{Radioisotope production yields} We calculated the production of radioactive nuclei in solar flares assuming a thick target interaction model, in which accelerated particles with given energy spectra and composition produce nuclear reactions as they slow down in the solar atmosphere. Taking into account the nuclear destruction and catastrophic energy loss (e.g. interactions involving pion production) of the fast particles in the interaction region, the production yield of a given radioisotope $r$ can be written as (e.g.
Parizot \& Lehoucq 1999): \begin{equation} Q_r = \sum_{ij} n_j \int_0^{\infty} {dE v_i(E) \sigma_{ij}^r(E) \over \dot{E}_i(E)} \int_E^{\infty} dE' N_i(E') \exp \bigg[- \int_E^{E'} {dE'' \over \dot{E}_i(E'') \tau_i^{ine}(E'')}\bigg]~, \end{equation} where $i$ and $j$ range over the accelerated and ambient particle species that contribute to the synthesis of the radioisotope considered, $n_j$ is the density of the ambient constituent $j$, $v_i$ is the velocity of the fast ion $i$, $\sigma_{ij}^r$ is the cross section for the nuclear reaction $j$($i$,$x$)$r$, $\dot{E}_i$ is the energy loss rate for the accelerated particles of type $i$ in the ambient medium, $N_i$ is the source energy spectrum for these particles, and $\tau_i^{ine}$ is the energy dependent average lifetime of the fast ions of type $i$ before they suffer inelastic nuclear collisions in the interaction region. As H and He are by far the most abundant constituents of the solar atmosphere, we have \begin{equation} \tau_i^{ine} \cong {1 \over v_i(n_H\sigma_{iH}^{ine} + n_{He}\sigma_{iHe}^{ine})}~, \end{equation} where $\sigma_{iH}^{ine}$ and $\sigma_{iHe}^{ine}$ are the total inelastic cross sections for particle $i$ in H and He, respectively. We used the cross sections given by \citet{mos02} for the $p$--H and $p$--He total inelastic reactions and the universal parameterization of Tripathi et al. (1996,1999) for the other fast ions. 
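Equation (3), together with the inelastic lifetime of equation (4), is a nested quadrature once the energy-loss rate and source spectrum are supplied. The sketch below uses purely hypothetical placeholder inputs (step-function cross section, power-law spectrum, toy energy-loss rate; arbitrary units) just to illustrate the structure of the integral; since the survival exponential never exceeds unity, switching off nuclear destruction can only increase the yield:

```python
import math

def thick_target_yield(sigma, v, Edot, N_src, tau_ine, E_lo, E_hi, n=300):
    """Trapezoid/rectangle evaluation of eq. (3) for one projectile-target pair.
    All input functions are hypothetical placeholders; units are arbitrary."""
    dE = (E_hi - E_lo) / n
    Es = [E_lo + i * dE for i in range(n + 1)]
    # G(E) = int_{E_lo}^{E} dE'' / (Edot * tau_ine): cumulative attenuation exponent
    G = [0.0]
    for i in range(1, n + 1):
        f0 = 1.0 / (Edot(Es[i - 1]) * tau_ine(Es[i - 1]))
        f1 = 1.0 / (Edot(Es[i]) * tau_ine(Es[i]))
        G.append(G[-1] + 0.5 * (f0 + f1) * dE)
    Q = 0.0
    for i, E in enumerate(Es):
        # inner integral: projectiles injected above E that survive down to E
        inner = sum(N_src(Es[j]) * math.exp(G[i] - G[j]) * dE for j in range(i, n + 1))
        Q += v(E) * sigma(E) / Edot(E) * inner * dE
    return Q

# Toy inputs (illustrative only, not real nuclear data):
sigma = lambda E: 1.0 if E > 1.0 else 0.0   # step-function cross section above threshold
v     = lambda E: math.sqrt(E)              # non-relativistic velocity ~ sqrt(E)
Edot  = lambda E: 1.0 + E                   # monotonic energy-loss rate
N_src = lambda E: E ** -3.5                 # power-law source spectrum, s = 3.5
tau_short = lambda E: 2.0                   # finite nuclear-destruction lifetime
tau_long  = lambda E: 1e12                  # effectively no destruction

Q_att  = thick_target_yield(sigma, v, Edot, N_src, tau_short, 0.5, 10.0)
Q_free = thick_target_yield(sigma, v, Edot, N_src, tau_long, 0.5, 10.0)
print(Q_att, Q_free)   # attenuation can only reduce the yield: Q_att < Q_free
```

Caching the cumulative attenuation integral $G(E)$ keeps every inner term a cheap difference $G(E)-G(E')$, so the double integral costs only $O(n^2)$ function evaluations.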
The energy loss rate was obtained from \begin{equation} \dot{E}_i = v_i {Z_i^2(\rm{eff}) \over A_i} \bigg[ n_H m_H \bigg({dE \over dx}\bigg)_{pH} + n_{He} m_{He} \bigg({dE \over dx}\bigg)_{pHe} \bigg]~, \end{equation} where $(dE/dx)_{pH}$ and $(dE/dx)_{pHe}$ are the proton stopping powers (in units of MeV g$^{-1}$ cm$^{2}$) in ambient H and He, respectively \citep{ber05}, $m_H$ and $m_{He}$ are the H- and He-atom masses, $Z_i(\rm{eff}) = Z_i[1-\exp(-137\beta_i/Z_i^{2/3})]$ is the equilibrium effective charge \citep{pie68}, $\beta_i=v_i/c$ is the particle velocity relative to that of light, and $Z_i$ and $A_i$ are the nuclear charge and mass for particle species $i$, respectively. Inserting equations (4) and (5) into equation (3), we see that under the assumption of thick target interactions, the yields do not depend on the ambient medium density, but only on the relative abundances $n_j/n_H$. We used for the ambient medium composition the same abundances as Kozlovsky et al. (2004, Table~2). We took for the source energy spectrum of the fast ions an unbroken power law extending from the threshold energies of the various nuclear reactions up to $E_{max}$=1~GeV nucleon$^{-1}$: \begin{equation} N_i(E)=C_i E^{-s} H(E_{max}-E)~, \end{equation} where the function $H(E)$ denotes the Heaviside step function and $C_i$ is the abundance of the accelerated particles of type $i$. We assumed the following impulsive-flare composition for the accelerated ions: we used for the abundances of fast C and heavier elements relative to $\alpha$-particles the average composition of solar energetic particles (SEP) measured in impulsive events from interplanetary space (Reames 1999, Table 9.1), but we took the accelerated $\alpha/p$ abundance ratio to be 0.5, which is at the maximum of the range observed in impulsive SEP events. 
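The equilibrium effective charge entering equation (5) is a simple closed form; a minimal sketch (in practice the stopping powers multiplying it would come from tabulations such as \citet{ber05}):

```python
import math

def z_effective(Z, beta):
    """Equilibrium effective charge Z_i(eff) = Z_i * [1 - exp(-137*beta/Z_i^(2/3))]
    (Pierce & Blann parameterization quoted in the text); beta = v/c."""
    return Z * (1.0 - math.exp(-137.0 * beta / Z ** (2.0 / 3.0)))

# A fast proton (Z = 1) is fully stripped to good accuracy:
print(z_effective(1, 0.1))    # ~1.0
# A slow Fe ion (Z = 26) retains bound electrons, reducing its effective charge
# and hence its energy-loss rate, which scales as Z_eff^2:
print(z_effective(26, 0.01))
```

This is why slow heavy ions penetrate further than a naive $Z^2$ scaling of the proton stopping power would suggest.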
The choice of such a large $\alpha/p$ ratio is motivated by analyses of gamma-ray flares \citep{sha97,man97,man99}, showing a relatively strong emission in the line complex at $\sim$450~keV from $\alpha$-particle interactions with ambient $^4$He. The expected modifications of our results for higher proton abundances relative to $\alpha$-particles and heavier ions are discussed in \S~4. We performed calculations with an accelerated $^3$He/$\alpha$ abundance ratio of 0.5, which is typical of the accelerated $^3$He enrichment found in impulsive SEP events \citep{rea94,rea99}, as well as in gamma-ray flares (Share \& Murphy 1998; Manzhavidze et al. 1999). The resulting accelerated-particle composition is similar to the one used by \citet{koz04}, but slightly less enriched in heavy elements (e.g. Fe and Ni abundances are lower than those of these authors by 13\% and 29\%, respectively). The enhancement of the fast heavy elements is, however, still large relative to the ambient-medium composition. We have, for example, $C_{Fe}/C_p=137n_{Fe}/n_H$. Thick-target radioisotope yields are given in Table~3 for $s$=3.5, 2 and 5 (eq.~[6]). The first value is close to the mean of the spectral index distribution as measured from analyses of gamma-ray line ratios \citep{ram96}, whereas the two other values are extreme cases to illustrate the dependence of the radioisotope production on the spectral hardness. The calculations were normalized to unit incident number of protons of energy greater than 5~MeV. For comparison, the last two lines of this table give thick-target yields for the production of the 4.44 and 6.13 MeV deexcitation lines from ambient $^{12}$C and $^{16}$O, respectively. These prompt narrow lines are produced in reactions of energetic protons and $\alpha$-particles with ambient $^{12}$C, $^{14}$N, $^{16}$O and $^{20}$Ne (see Kozlovsky et al. 2002). 
We can see that, relative to these two gamma-ray lines, the production of most of the radioisotopes increases as the accelerated particle spectrum becomes harder (i.e. with decreasing $s$). This is because the radioactive nuclei are produced by spallation reactions at higher energies, on average, than the $^{12}$C and $^{16}$O line emission, which partly results from inelastic scattering reactions. Because of the enhanced heavy accelerated particle composition, most of the yield is from interactions of heavy accelerated particles with ambient H and He. For example, the contribution of fast Fe and Ni collisions with ambient H and He accounts for more than 90\% of the total $^{56}$Co production, whatever the spectral index $s$. Because we are interested in emission after the end of the gamma-ray flare, we show in the fifth column of Table~3 a factor $f_d$ by which the given yields should be multiplied to take into account the decay of the radioactive nuclei occurring before the end of the flare. It was calculated under the simplifying assumption that the radioisotope production rate is constant with time during the flare for a time period $\Delta t$. We then have \begin{equation} f_d={\tau_r \over \Delta t} (1-e^{-\Delta t / \tau_r})~, \end{equation} where $\tau_r$ is the mean lifetime of radioisotope $r$. In Table~3, $f_d$ is given for $\Delta t$=10~min. \section{Delayed X- and gamma-ray line emission} Calculated fluxes of the most intense delayed lines are shown in Tables~4--6 for three different times after a large gamma-ray flare. The lines are given in decreasing order of their flux for $s$=3.5. The calculations were normalized to a total fluence of the summed 4.44 and 6.13~MeV prompt narrow lines $\mathcal{F}_{4.4+6.1}$=300 photons~cm$^{-2}$, which is the approximate fluence observed in the 2003 October 28 flare with {\it INTEGRAL}/SPI \citep{kie06}. 
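The decay correction of equation (7) interpolates between 1 (lifetime long compared with the flare duration) and $\tau_r/\Delta t$ (short-lived species); a minimal sketch:

```python
import math

def f_d(tau_r, dt):
    """Surviving fraction for constant production over flare duration dt (eq. [7]).
    tau_r and dt must be in the same time units."""
    return (tau_r / dt) * (1.0 - math.exp(-dt / tau_r))

tau_co56  = 77.2 * 24 * 60 / math.log(2)   # 56Co mean lifetime in minutes (T1/2 = 77.2 d)
tau_mn52m = 21.1 / math.log(2)             # 52Mn^m mean lifetime in minutes (T1/2 = 21.1 min)
print(f_d(tau_co56, 10.0))    # ~1: negligible decay of 56Co during a 10-min flare
print(f_d(tau_mn52m, 10.0))   # noticeably below 1 for the short-lived isomer
```

For $\Delta t \gg \tau_r$ the factor tends to $\tau_r/\Delta t$: the yield then simply reflects the steady-state population reached during the flare.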
The flux of a given delayed line $l$ produced by the decay of a radioisotope $r$ at time $t$ after the end of the nucleosynthesis phase was obtained from \begin{equation} F_l(t)={\mathcal{F}_{4.4+6.1} Q_r f_d I_l^r \over Q_{4.4+6.1} \tau_r} e^{-t / \tau_r}~, \end{equation} where $Q_r$ and $Q_{4.4+6.1}$ are the yields (Table~3) of the parent radioisotope and summed prompt $^{12}$C and $^{16}$O lines, respectively, and $I_l^r$ is the line branching ratio (the percentages shown in Table~1). The factor $f_d$ was calculated for a flare duration of 10~min (Table~3). The calculated fluxes do not take into account attenuation of the line photons in the solar atmosphere. Unless the flare is very close to the solar limb, the attenuation of the delayed gamma-ray lines should not be significant (see Hua, Ramaty, \& Lingenfelter 1989) as long as the radioactive nuclei do not plunge deep in the solar convection zone. The delayed X-ray lines can be more significantly attenuated by photoelectric absorption (see below). A full knowledge of the delayed 511~keV line flux would require a comprehensive calculation of the accelerated particle transport, solar atmospheric depth distribution of $\beta^+$-emitter production and transport of the emitted positrons, because (1) the number of 2$\gamma$ line photons produced per emitted positron ($f_{511}$) crucially depends on the density, temperature and ionization state of the solar annihilation environment \citep{mur05}, (2) significant escape of positrons from the annihilation region can occur, and (3) the line can be attenuated by Compton scattering in the solar atmosphere. Here, we simply assumed $f_{511}$=1 (see Kozlovsky et al. 2004; Murphy et al. 2005) and neglected the line attenuation. We see in Tables~4 and 5 that the annihilation line is predicted to be the most intense delayed line for hours after the flare end. 
After $\sim$2~days however, its flux can become lower than that of the 846.8~keV line from the decay of $^{56}$Co into $^{56}$Fe (see Table~6). We show in Figure~11 the time dependence of the 511~keV line flux, for $s$=3.5 and $\Delta t$=10~min, together with the contributions of the main radioactive positron emitters to the line production. We see that from $\sim$1 to $\sim$14~hours, $^{18}$F is the main source of the positrons. Since this radioisotope can be mainly produced by the reaction $^{16}$O($^3$He,$p$)$^{18}$F, we suggest that a future detection of the decay curve of the solar 511~keV line could provide an independent measurement of the flare-accelerated $^3$He abundance. Prompt line measurements have not yet furnished an unambiguous determination of the fast $^3$He enrichment (Manzhavidze et al. 1997). Among the 9 atomic lines listed in Table~1, the most promising appears to be the Co K$\alpha$ line at 6.92~keV (Tables~4--5). It is produced from both the decay of the isomer $^{58}$Co$^m$ by the conversion of a K-shell electron and the decay of $^{57}$Ni by orbital electron capture. Additional important atomic lines are the Fe and Ni K$\alpha$ lines at 6.40 and 7.47~keV, respectively. The X-ray line fluxes shown in Tables~4 and 5 should be taken as upper limits, however, because photoelectric absorption of the emitted X-rays was not taken into account, as it depends on the flare location and model of accelerated ion transport. Calculations in the framework of the solar magnetic loop model (Hua et al. 1989) showed that the interaction site of nuclear reactions is expected to be in the lower chromosphere, at solar depths corresponding to column densities of 10$^{-3}$ to 10$^{-1}$~g cm$^{-2}$. These results were reinforced by the gamma-ray spectroscopic analyses of \citet{ram95}, who showed that the bulk of the nuclear reactions are produced in flare regions where the ambient composition is close to coronal, i.e. above the photosphere. 
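The crossover between the 511~keV line and the longer-lived 846.8~keV line noted above follows directly from the exponential time dependence of equation (8): two components $A_1 e^{-t/\tau_1}$ and $A_2 e^{-t/\tau_2}$ with $\tau_2 > \tau_1$ cross at $t^* = \ln(A_1/A_2)/(1/\tau_1 - 1/\tau_2)$. A sketch with hypothetical initial amplitudes (the actual values depend on $s$ and the flare normalization):

```python
import math

def crossover_time(A1, tau1, A2, tau2):
    """Time at which A1*exp(-t/tau1) = A2*exp(-t/tau2), for A1 > A2 and tau2 > tau1."""
    return math.log(A1 / A2) / (1.0 / tau1 - 1.0 / tau2)

# Hypothetical amplitudes (NOT fitted values): a 511 keV component with an effective
# ~10 h decay time versus the 846.8 keV line of 56Co (mean lifetime ~111 d = 2674 h).
t_star = crossover_time(A1=1e-3, tau1=10.0, A2=1e-5, tau2=2674.0)
print(t_star)   # in hours; with these toy numbers the crossover lands near two days
```

The 511~keV decay is in reality a sum of exponentials from several positron emitters, so the single effective lifetime used here is only a stand-in.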
For such column densities of material of coronal composition, we calculated from the photoelectric absorption cross sections of \citet{bal92} that the optical depths of 6.92~keV escaping photons are between $\sim$10$^{-3}$ and 10$^{-1}$. Thus, the attenuation of this X-ray line is expected to be $\lesssim$10\% for flares occurring at low heliocentric angles. However, the line attenuation can be much higher for flares near the solar limb. A serious complication to the X-ray line measurements could arise from the confusion of the radioactivity lines with the intense thermal emission from the flare plasma. In particular, this could prevent a detection of the delayed X-ray lines for hours after the impulsive flaring phase, until the thermal emission has become sufficiently low. The necessary distinction of thermal and nonthermal photons would certainly benefit from an X-ray instrument with high spectral resolution, because K$\alpha$ lines from neutral to low-ionized Fe, Co or Ni are not expected from thermal plasmas at ionization equilibrium. The neutral Co line at 6.92~keV could still be confused, however, with the thermal K$\alpha$ line of Fe XXVI at 6.97~keV. The neutral Fe K$\alpha$ line at 6.40~keV is commonly observed during large solar flares (e.g. Culhane et al. 1981), as a result of photoionization by flare X-rays and collisional ionization by accelerated electrons (e.g. Zarro, Dennis, \& Slater 1992). However, this nonthermal line emission is not expected to extend beyond the impulsive phase. A near-future detection of delayed nuclear gamma-ray lines is perhaps more probable. We see in Table~4 that at $t$=30~min after the flare, the brightest gamma-ray line (after the 511~keV line) is at 1434~keV from the $\beta^+$ decay of the isomeric state $^{52}$Mn$^m$ into $^{52}$Cr. 
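The $\lesssim$10\% attenuation bound quoted above is a one-line arithmetic check: for optical depths $\tau$ between $10^{-3}$ and $10^{-1}$, the absorbed fraction $1-e^{-\tau}$ never exceeds 10\%:

```python
import math

# Absorbed fraction of 6.92 keV photons for the quoted range of optical depths:
for tau in (1e-3, 1e-2, 1e-1):
    print(tau, 1.0 - math.exp(-tau))   # stays below 0.10 across the whole range
```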
The flux of this line is predicted to significantly increase as the accelerated particle spectrum becomes harder, because it is mainly produced from Fe spallation reactions at relatively high energies, $>$10~MeV nucleon$^{-1}$ (Figure~4). For $t \gtrsim 3$~hours, the radioactivity of $^{52}$Mn$^m$ ($T_{1/2}$=21.1~min) has become negligible and the 1434~keV line essentially results from the decay of the ground state $^{52}$Mn$^g$ ($T_{1/2}$=5.59~days). Thus, this line remains significant for several days after the flare. However, for $t \gtrsim 2$~days, the most intense line could be at 846.8~keV from $^{56}$Co decay, depending on the spectral index $s$ (see Table~6). Additional important gamma-ray lines during the first hour are at 1332 and 1792~keV from the radioactivity of $^{60}$Cu. During the first two days, one should also look for the line at 931.1~keV from the radioactivity of $^{55}$Co and for those at 1369 and 2754~keV from $^{24}$Na decay. We now discuss the influence of the accelerated ion composition on the delayed line emission. In Figure~12, we show calculated fluences of the 846.8 and 1434~keV lines as a function of the accelerated $\alpha/p$ abundance ratio. They were obtained from the equation \begin{equation} \mathcal{F}_l = \int_0^\infty F_l(t) dt ={\mathcal{F}_{4.4+6.1} Q_r f_d I_l^r \over Q_{4.4+6.1}}~, \end{equation} where the yields $Q_r$ and $Q_{4.4+6.1}$ were calculated for various proton abundances relative to $\alpha$-particles and the other accelerated ions. Thus, the predicted fluence variations with accelerated $\alpha/p$ actually show the relative contributions of reactions induced by fast protons. The fluences decrease for decreasing $\alpha/p$ ratio (i.e. 
increasing proton abundance), because, for $\alpha/p \gtrsim 0.05$, the radioisotopes are predominantly produced by spallation of accelerated heavy nuclei, whose abundances are significantly enhanced in impulsive flares, whereas the ambient $^{12}$C and $^{16}$O lines largely result from fast proton interactions. This effect is less pronounced for $s$=5, because for this very soft spectrum, the contribution of $\alpha$-particle reactions to the prompt line emission is more important. Obviously, the detection of any delayed line from a solar flare should furnish valuable information on the accelerated particle composition and energy spectrum. By contrast, determination of the accelerated particle composition from spectroscopy of prompt line emission is difficult. \section{Discussion} We have made a detailed evaluation of the nuclear data relevant to the production of radioactive line emission in the aftermath of large solar flares. We have presented updated cross sections for the synthesis of the major radioisotopes by proton, $^3$He and $\alpha$ reactions, and have provided theoretical thick-target yields, which allow flux estimates for all the major delayed lines at any time after a gamma-ray flare. Together with the 846.8~keV line from $^{56}$Co decay, whose importance was already pointed out by \cite{ram00}, our study has revealed other gamma-ray lines that appear to be promising for detection, e.g. at 1434 keV from $^{52}$Mn$^{g,m}$, 1332 and 1792~keV from $^{60}$Cu, 2127 keV from $^{34}$Cl$^m$, 1369 and 2754~keV from $^{24}$Na, and 931.1~keV from $^{55}$Co. The strongest delayed X-ray line is found to be the Co K$\alpha$ at 6.92~keV, which is produced from both the decay of the isomer $^{58}$Co$^m$ by the conversion of a K-shell electron and the decay of $^{57}$Ni by orbital electron capture. Distinguishing this atomic line from the thermal X-ray emission can be challenging until the flare plasma has significantly cooled down. 
However, a few hours after the flare the thermal emission will be gone or significantly reduced and the delayed Co K$\alpha$ line will be more easily detected. Delayed gamma-ray lines could be detected sooner after the end of the impulsive phase, as the prompt nonthermal gamma-ray emission vanishes more rapidly. The lines will be very narrow, because the radioactive nuclei are stopped by energy losses in the solar atmosphere before they decay. Although generally weaker than the main prompt lines, some delayed lines emitted after large flares can have fluences within the detection capabilities of the {\it RHESSI} spectrometer or future space instruments. Multiple flares originating from the same active region of the sun can build up the radioactivity, thus increasing the chance for detection. However, a major complication to the measurements can arise from the fact that the same radioactivity lines can be produced in the instrument and spacecraft materials from fast particle interactions. A line of solar origin could sometimes be disentangled from the instrumental line at the same energy by their different time evolutions. But the bombardment of the satellite by solar energetic particles associated with the gamma-ray flare can make this selection more difficult. A positive detection of delayed radioactivity lines, hopefully with {\it RHESSI}, would certainly provide unique information on the flare-accelerated particle composition and energy spectrum. In particular, since the enrichment of the accelerated heavy elements can be the major source of the radioisotopes, their detection should furnish a valuable measurement of this enhancement. Thus, a concomitant detection of the two lines at 846.8 and 1434~keV could allow measurement of not only the abundance of accelerated Fe ions, but also of their energy spectrum (see Figure~12). 
A future measurement of the decay curve of the electron-positron annihilation line or of other delayed gamma-ray lines would be very useful for studying solar atmospheric mixing. The lines should be strongly attenuated by Compton scattering when the radioactive nuclei plunge deep into the solar interior. The use of several radioisotopes with different lifetimes should place constraints on the extents and timescales of mixing processes in the outer convection zone. In addition, the imaging capabilities of {\it RHESSI} could allow measurement of the size and development of the radioactive patch on the solar surface. This would provide unique information on both the transport of flare-accelerated particles and dynamics of solar active regions. It is noteworthy that solar radioactivity can be the only way to study flares that have recently occurred over the east limb. Radioactive nuclei produced in solar flares can also be detected directly if they escape from the sun into interplanetary space. At present, only two long-lived radioisotopes of solar-flare origin have been identified, $^{14}$C ($T_{1/2}$=5.7$\times$10$^3$ years, Jull, Lal, \& Donahue 1995) and $^{10}$Be ($T_{1/2}$=1.51$\times$10$^6$ years, Nishiizumi \& Caffe 2001), from measurements of the solar wind implanted in the outer layers of lunar grains. Based on the measured abundances relative to calculated average production rates in flares, a large part of these radioactive species must be ejected in the solar wind and energetic-particle events rather than being mixed into the bulk of the solar convection zone. Detection of solar radioactivities with shorter lifetimes, either directly in interplanetary space or from their delayed line emission, is expected to provide new insight into the fate of the nuclei synthesized in solar flares. 
\acknowledgments We would like to thank Amel Belhout for her assistance in the EMPIRE-II calculations and Jean-Pierre Thibaud for his constructive comments on the manuscript. B. Kozlovsky would like to thank V. Tatischeff and J. Kiener for their hospitality at Orsay and acknowledges the Israeli Science Foundation for support.
\section{\label{intro} Introduction} In a QCD-like theory with Dirac fermions, the measure of the euclidean functional integral is positive when all fermions have a positive mass, and, as a consequence, there is no topological term induced by the fermionic part of the theory. This generalizes to all types of fermion {\it irreps}: complex, real, and pseudoreal. If the theory contains $N$ Dirac fermions in a real {\it irrep}, we may reformulate it in terms of $2N$ Majorana fermions. We will be using Majorana fields each of which packs together a Weyl fermion and its anti-fermion.\footnote{ The precise definition is given in Eq.~(\ref{Diracreal}) below.} Assuming an equal positive mass $m>0$ for all Dirac flavors, the mass matrix $M$ of the Majorana formulation is then given by $M=mJ_S$, with the $2N \times 2N$ matrix \begin{equation} J_S = \left( \begin{array}{cc} 0 & {\bf 1}_N \\ {\bf 1}_N & 0 \end{array} \right)\ , \label{Js} \end{equation} where ${\bf 1}_n$ is the $n\times n$ unit matrix. A non-anomalous chiral rotation can then be used to bring the mass matrix to a flavor-diagonal form $M=m J_S^{\rm rot}$ where \begin{equation} J_S^{\rm rot} = i\gamma_5 {\bf 1}_{2N} \ , \label{Js2} \end{equation} showing that each entry of $M$ has a $\textrm{U(1)}_A$ phase equal to $\p/2$. Now let us apply a $\textrm{U(1)}_A$ rotation that turns the mass matrix into a positive matrix, $M=m{\bf 1}_{2N}$. Because of the anomaly, this generates a topological term $e^{i\th Q}$, where \begin{equation} \label{topo} Q = \frac{g^2}{32\p^2}\int d^4x \,{\rm tr}(F\tilde{F})\ , \end{equation} is the topological charge, and \begin{equation} \label{pihalf} \th = -\p N T/2 \ , \end{equation} with $T$ the index of the Dirac operator for the fermion {\it irrep}\ in a single instanton background. Let us consider the consequences of this topological term. 
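For $N=1$ and $T=2$, the two claims above can be checked with plain $2\times2$ matrices: the SU(2) element $h=(1+i\sigma_x)/\sqrt{2}$ satisfies $h^T J_S h = i\,{\bf 1}_2$, realizing the flavor-diagonal form of Eq.~(\ref{Js2}) with a $\textrm{U(1)}_A$ phase of $\p/2$ per entry, and $\th=-\p NT/2$ then gives $e^{i\th Q}=(-1)^Q$. A numerical sketch (purely illustrative, assuming the standard Majorana transformation law $M \to h^T M h$ for the mass matrix):

```python
import cmath, math

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

s = 1.0 / math.sqrt(2.0)
h = [[s, 1j * s], [1j * s, s]]   # h = (1 + i sigma_x)/sqrt(2); unitary with det h = 1
J_S = [[0, 1], [1, 0]]

# h is symmetric, so h^T J_S h = h J_S h:
M = matmul(h, matmul(J_S, h))
print(M)   # ~ i * identity: each mass entry carries a U(1)_A phase of pi/2

# e^{i theta Q} with theta = -pi*N*T/2 and N*T = 2 reduces to (-1)^Q:
for Q in range(4):
    print(Q, cmath.exp(-1j * math.pi * Q).real)
```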
$T$ is always even for a real {\it irrep}.\footnote{% We will recover this result in Sec.~\ref{maj}.} If $NT$ is divisible by 4 then $e^{i\th Q}=1$, and the topological term drops out. If $NT$ is not divisible by 4, we have $e^{i\th Q}=(-1)^Q$. Hence, it appears that the Majorana measure will be positive for $Q$ even, but negative for $Q$ odd. This is puzzling, because the measure of the original Dirac theory is positive for any $Q$, and, obviously, the Dirac and Majorana formulations should represent the same theory. The paradox would be resolved if the very transition to the Majorana formulation would somehow generate a ``compensating'' topological term $e^{i\p NT Q/2}$. The additional topological term induced by the $\textrm{U(1)}_A$ rotation would then cancel against the compensating topological term. We would end up with a positive mass matrix and with no topological term, as in the original Dirac theory. The purpose of this paper is to show that this is indeed what happens. In reality, it turns out that the paradox described above arises because in the argument we ignored a phase ambiguity of the Majorana measure which is present in the formal continuum theory. The existence of this ambiguity allows us to {\em require} agreement between the Dirac and Majorana formulations. When the Majorana mass matrix involves $J_S$ or $J_S^{\rm rot}$, this requirement implies the existence of the compensating topological term in the path integral. Going beyond formal arguments, we demonstrate the presence of the compensating topological term through a fully non-perturbative lattice derivation of the transition from the Dirac to the Majorana formulation. Finally, we discuss the implications for the chiral effective theory. This paper is organized as follows. In Sec.~\ref{maj} we show how, in the continuum, a phase ambiguity arises in the choice of a basis for a gauge theory with Majorana fermions. 
We explain how this ambiguity can be resolved in a theory with an even number of Majorana fermions by comparison with the same theory formulated in terms of Dirac fermions. Then, in Sec.~\ref{latt}, we show that the lattice formulation implies a natural choice of basis, thus fixing the phase consistently, both in the formulations with Wilson and with domain-wall fermions. This allows us to discuss the $\th$ angle induced by the lattice fermion action, reviewing and generalizing the earlier work of Ref.~\cite{SSt}. We consider separately a stand-alone gauge theory of Majorana fermions, and a theory of $2N$ Majorana fermions obtained by reformulating a theory of $N$ Dirac fermions. We then revisit the precise form of the condensate in the presence of a fermion-induced $\th$ angle, both in the gauge theory as well as in chiral perturbation theory. This is done in Sec.~\ref{dirac} for a theory with Dirac fermions in a complex {\it irrep}\ of the gauge group, and in Sec.~\ref{vacreal} for a theory with Majorana fermions in a real {\it irrep}\ of the gauge group. Section~\ref{conc} contains our summary and conclusion. There are six appendices dealing with technical details. \section{\label{maj} Majorana fermions and the phase ambiguity} In this section, we first review some useful standard results for Dirac (Sec.~\ref{diracrev}) and Majorana (Sec.~\ref{majrev}) fermions. We then discuss the phase ambiguity that is encountered in defining the continuum path integral for Majorana fermions (Sec.~\ref{ambg}). \subsection{\label{diracrev} Dirac fermions} Consider a euclidean gauge theory with $N$ Dirac fermions in some {\it irrep}\ of the gauge group. 
The partition function for the most general choice of parameters is \begin{equation} Z = \int {\cal D} A {\cal D}\psi{\cal D}\overline{\j}\, \exp\left(-\int d^4x\, {\cal L} \right)\ , \label{Z} \end{equation} where \begin{equation} \label{lag} {\cal L} = \frac{1}{4} F^2 + \overline{\j} (\Sl{D} + {\cal M}^\dagger P_L + {\cal M} P_R ) \psi + i \th Q \ , \end{equation} with $P_{R,L}=(1\pm\gamma_5)/2$, and ${\cal M}$ is a complex $N\times N$ matrix. The topological charge $Q$ was introduced in Eq.~(\ref{topo}). We will specialize to a mass matrix of the form \begin{equation} \label{cm} {\cal M} = m\Omega = m e^{i\alpha/(NT)} \tilde\O \ , \qquad \tilde\O \in \textrm{SU}(N) \ , \end{equation} with real $m>0$ and a real phase $\alpha$. Upon integrating out the fermions the dependence on $\tilde\O$ drops out thanks to the invariance under non-singlet chiral transformations, and \begin{equation} \label{DetF} {\rm det}(\Sl{D} + {\cal M}^\dagger P_L + {\cal M} P_R) = e^{i\alpha Q}\, {\rm det}^N(\Sl{D}+m) \ , \end{equation} where, on the right-hand side, $\Sl{D}+m$ is the one-flavor Dirac operator. This result can be derived using the spectral representation of the Dirac operator, see App.~\ref{specrep}. As mentioned earlier, $T$ is the index of the Dirac operator in a single instanton background. The measure $\mu(A)$ of the path integral is thus \begin{eqnarray} \label{msr} \mu(A) &=& e^{-i\th_{\rm eff} Q}\, e^{-\frac{1}{4} F^2} {\rm det}^N(\Sl{D}+m) \\ &=& e^{-i\th_{\rm eff} Q}\, \tilde\m(A) \ ,\nonumber \end{eqnarray} where \begin{equation} \label{mutilde} \tilde\m(A) = e^{-\frac{1}{4} F^2} {\rm det}^N(\Sl{D}+m) \ , \end{equation} is positive, and the effective topological angle is \begin{equation} \label{theff} \th_{\rm eff} = \th - \alpha \ . \end{equation} \subsection{\label{majrev} Majorana fermions} A theory of $N$ Dirac fermions in a real representation of the gauge group $G$ can be reformulated in terms of $2N$ Majorana fermions. 
The $N$ Dirac fermions are composed of $2N$ Weyl fermions. From these Weyl fermions, we construct Majorana fermions each of which packs together a Weyl fermion and its anti-fermion, which is possible because the fermion and the anti-fermion belong to the same representation of $G$. The mapping between Dirac fermions (on the right-hand side) and Majorana fermions (on the left-hand side) is \begin{eqnarray} \label{Diracreal} \Psi_{L,i} &=& \psi_{L,i} \ ,\\ \Psi_{R,i} &=& CS\overline{\j}^T_{L,i} \ ,\nonumber\\ \Psi_{R,N+i} &=& \psi_{R,i} \ ,\nonumber\\ \Psi_{L,N+i} &=& CS\overline{\j}^T_{R,i} \ ,\nonumber \end{eqnarray} where $i=1,\ldots,N$. Here $C$ is the charge conjugation matrix, and $S$ is the group tensor satisfying the invariance property $g^TSg=S$ for all $g\in G$. We recall the basic properties, $C^{-1}=C^\dagger=C^T=-C$, and $S^{-1}=S^\dagger=S^T=S$. We also introduce \begin{equation} \label{majcond} \overline{\J} \equiv \Psi^T CS \ . \end{equation} Thus, Eq.~(\ref{Diracreal}) determines all the components of the Majorana fermions in terms of the original Dirac fermions, or, equivalently, in terms of the corresponding Weyl fields. Other mappings between Dirac and Majorana fermions are possible, and we give an example in App.~\ref{bases}. What is special about Eq.~(\ref{Diracreal}) is that it respects the natural mapping between Weyl and Majorana fields. Proceeding to the lagrangian, for the kinetic term we have \begin{equation} \label{LK} {\cal L}_K = \sum_{i=1}^N \overline{\j} \Sl{D} \psi = {1\over 2}\sum_{I=1}^{2N} \overline{\J}_I\Sl{D}\Psi_I \ . \end{equation} For the mass term we have \begin{equation} \label{LM} {\cal L}_m = m \overline{\j} e^{i\alpha_D\gamma_5} \psi = \frac{m}{2}\, \overline{\J} e^{i\alpha_D\gamma_5} J_S \Psi \ , \end{equation} where the $2N\times 2N$ matrix $J_S$ was introduced in Eq.~(\ref{Js}), and $\alpha_D=\alpha/(NT)$ is the phase introduced in the Dirac case in Eq.~(\ref{cm}). 
We have set $\tilde\O=1$, since the $\textrm{SU}(N)$ part of the original Dirac mass matrix does not play a role in the following. The flavor symmetry is as follows. In the massless limit, the theory is invariant under $\textrm{SU}(2N)$ transformations \begin{eqnarray} \label{transfreal} \Psi &\to& \left(P_L h + P_R h^*\right)\Psi\ ,\\ \overline{\J} &\to& \overline{\J} (P_L h^T + P_R h^\dagger)\ ,\nonumber \end{eqnarray} with $h\in\textrm{SU}(2N)$. When the mass term~(\ref{LM}) is turned on, the $\textrm{SU}(2N)$ symmetry is explicitly broken to $\textrm{SO}(2N)$. The Dirac formulation of the same theory obviously has the same global symmetry; but the full symmetry is manifest only in the Majorana formulation.\footnote{% For a discussion of how the global symmetry is realized in the Dirac formulation, see Ref.~\cite{sextet}. } \subsection{\label{ambg} Pfaffian phase ambiguity} There exists a non-anomalous $\textrm{SU}(2N)$ chiral rotation that brings the Majorana mass term~(\ref{LM}) to a diagonal form \begin{equation} \label{massDiag} {\cal L}_m = \frac{m}{2}\, \overline{\J} i\gamma_5 e^{i\alpha_D\gamma_5} \Psi = \frac{m}{2}\, \overline{\J} e^{i(\alpha_D+\p/2)\gamma_5} \Psi \ . \end{equation} We see that we have an extra $\textrm{U(1)}$ phase of $\p/2$, leading to an apparent paradox, as explained in the introduction. In the following, we ask the question of how this paradox may be resolved in the continuum. In Sec.~\ref{latt} we will show how it is avoided, by introducing a non-perturbative regulator. To start, let us consider a single Majorana fermion with lagrangian \begin{eqnarray} \label{L1} {\cal L} &=& {1\over 2} \overline{\J} D \Psi \ = \ {1\over 2} \Psi^T CSD\, \Psi\ , \\ \label{Dm} D &=& \Sl{D} + m e^{i\alpha_M\gamma_5} \ . \end{eqnarray} The differential operator $CSD$ is antisymmetric, and the result of formally integrating out the Majorana fermion is $\textrm{pf}(CSD)$, the pfaffian of $CSD$. 
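The basis dependence of the pfaffian phase can be isolated in a $2\times2$ toy model: for an antisymmetric $A$ with $\textrm{pf}(A)=a$, a unitary change of basis $U=\textrm{diag}(1,e^{i\f})$ gives $\textrm{pf}(U^TAU)=e^{i\f}a=\textrm{pf}(A)\,{\rm det}(U)$, in contrast to the determinant, which is insensitive to such phases. A minimal numerical sketch:

```python
import cmath

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def transpose(x):
    return [[x[j][i] for j in range(2)] for i in range(2)]

a = 0.7 + 0.2j                       # arbitrary complex entry
A = [[0, a], [-a, 0]]                # antisymmetric 2x2, pf(A) = a

phi = 1.234                          # arbitrary U(1) phase in the change of basis
U = [[1, 0], [0, cmath.exp(1j * phi)]]

Ap = matmul(transpose(U), matmul(A, U))
pf_A  = A[0][1]                      # pfaffian of a 2x2 antisymmetric matrix: its (1,2) entry
pf_Ap = Ap[0][1]
det_U = U[0][0] * U[1][1] - U[0][1] * U[1][0]
print(pf_Ap, pf_A * det_U)           # equal: the pfaffian phase tracks det(U)
```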
In the Dirac case, ${\rm det}(D)$ is simply equal to the (regulated) product of all eigenvalues, see App.~\ref{specrep}. What about pfaffians? Introducing the abbreviation ${\cal A}=CSD$, the effect of a unitary change of basis for Majorana fermions is \begin{equation} \label{AA'} {\cal A} \to {\cal A}' = {\cal U}^T {\cal A}\, {\cal U} \ , \end{equation} where both ${\cal A}$ and thus ${\cal A}'$ are antisymmetric. We will be looking for a change of basis so that ${\cal A}'$ will have a skew-diagonal form. For a real representation, the eigenvalues of the Dirac operator have a twofold degeneracy. Because its hermitian part is equal to $m\cos\alpha_M$ times the identity matrix, the Dirac operator~(\ref{Dm}) is normal, $[D,D^\dagger]=0$. Consider an eigenvector $\chi$ with eigenvalue $\lambda$. By normality, $D\chi=\lambda\chi$ implies $D^\dagger\chi=\lambda^*\chi$. Hence \begin{equation} D\,CS\chi^* = CS D^T \chi^* = CS (D^\dagger\chi)^* = CS(\lambda^*\chi)^* = \lambda CS\chi^*\ . \label{CSpsi} \end{equation} It follows that $CS\chi^*$ is an eigenmode with the same eigenvalue as $\chi$. The eigenmodes $\chi$ and $CS\chi^*$ are orthogonal, $(CS\chi^*)^\dagger \chi = -\chi^T CS \chi =0$, where we used that the matrix $CS$ is antisymmetric. The skew-diagonal representation ${\cal A}'$ is achieved by transforming to a basis in which each eigenvector $\chi$ is followed by its companion eigenvector $CS\chi^*$. Selecting arbitrarily one eigenvector from each pair, and labeling the resulting subset as $\chi_1,\chi_2,\ldots$, we consider the unitary change of basis generated by the matrix ${\cal U}$ whose columns are comprised of the ordered pairs of eigenvectors, \begin{equation} \label{Ubasis} {\cal U} = (\chi_1, e^{i\f_1}CS\chi_1^*, \chi_2, e^{i\f_2}CS\chi_2^*, \ldots) \ . \end{equation} Notice that, for each pair, we have allowed the second eigenvector to have an arbitrary U(1) phase relative to the original eigenvector. 
These arbitrary phases play a profound role, as we will now see. The $2\times2$ subspace of ${\cal A}'$ associated with a pair $\chi, e^{i\f}CS\chi^*$ with eigenvalue $\lambda$ has the explicit form \begin{equation} \left(\begin{array}{c} \chi^T \\ e^{i\f}\chi^\dagger SC^T \end{array} \right) CS D \left(\begin{array}{cc} \chi & e^{i\f}CS\chi^* \end{array} \right) = e^{i\f} \lambda \left(\begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right) \ . \label{2by2phi} \end{equation} The pfaffian of ${\cal A}'$ factorizes as the product of pfaffians for the $2\times2$ subspaces, where the pfaffian of the above $2\times2$ subspace is, by definition, equal to $-e^{i\f} \lambda$. Explicitly, \begin{equation} \label{pfA'} \textrm{pf}({\cal A}') = \prod_i (-e^{i\f_i} \lambda_i) \ . \end{equation} This result exhibits a phase ambiguity, represented by the sum $\sum_i\f_i$. In retrospect, the phase ambiguity can be traced to the elementary property $\textrm{pf}({\cal A}') = \textrm{pf}({\cal A}) {\rm det}({\cal U})$. This relation implies that the phase of the pfaffian depends on the choice of basis for the Majorana field on which the differential operator ${\cal A}$ acts. The basis is represented by the unitary matrix ${\cal U}$, and ${\rm det}({\cal U})$ is, thus, a basis-dependent phase. The rigorous resolution of the phase ambiguity requires a non-perturbative treatment in order to specify the basis, which we will give in Sec.~\ref{latt}. In the rest of this section we restrict ourselves to an even number of Majorana fermions, and discuss how the phase may be fixed by appealing to the corresponding theory defined in terms of Dirac fermions, where no such phase ambiguity exists. As reviewed in App.~\ref{specrep} for the Dirac case, let us consider separately the zero modes and the non-zero modes. 
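Before doing so, note that both pfaffian facts used above, the factorization property implying $\textrm{pf}^2=\rm det$ and the basis dependence $\textrm{pf}({\cal A}')=\textrm{pf}({\cal A})\,{\rm det}({\cal U})$, are elementary and easy to confirm numerically. A minimal sketch with a hand-rolled pfaffian (expansion along the first row; not a library routine):

```python
import numpy as np

def pfaffian(A):
    """Pfaffian of an even-dimensional antisymmetric matrix,
    by Laplace-type expansion along the first row."""
    n = A.shape[0]
    if n == 0:
        return 1.0 + 0.0j
    total = 0.0j
    for j in range(1, n):
        idx = [k for k in range(n) if k not in (0, j)]
        total += (-1) ** (j - 1) * A[0, j] * pfaffian(A[np.ix_(idx, idx)])
    return total

rng = np.random.default_rng(0)
M = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
A = M - M.T                                    # generic complex antisymmetric matrix
Q, _ = np.linalg.qr(rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6)))
Aprime = Q.T @ A @ Q                           # change of basis, as in Eq. (AA')
assert np.isclose(pfaffian(A) ** 2, np.linalg.det(A))                # pf^2 = det
assert np.isclose(pfaffian(Aprime), pfaffian(A) * np.linalg.det(Q))  # basis-dependent phase
```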
Starting with the non-zero modes, and following App.~\ref{specrep}, the eigenvectors $\chi_\pm$ now each have a companion, $e^{i\f_\pm}CS\chi_\pm^*$, where we have allowed for arbitrary relative U(1) phases. The contribution of these two pairs of eigenvectors to $\textrm{pf}({\cal A})$ is \begin{equation} \label{twopairs} (-e^{i\f_+} \lambda_+)(-e^{i\f_-} \lambda_-) = e^{i(\f_++\f_-)} (\lambda^2+m^2) \ , \end{equation} where, as in App.~\ref{specrep}, $\lambda^2$ is the eigenvalue of the second-order operator $-\Sl{D}^2 P_R$. For a single Dirac fermion in the same real representation, the contribution of the eigenvectors $\chi_\pm$ and $e^{i\f_\pm}CS\chi_\pm^*$ to ${\rm det}(D)$ is simply a factor of \begin{equation} \label{detnonzero} (\lambda^2+m^2)^2 \ . \end{equation} The determinant is independent of the arbitrary U(1) phase of each eigenvector. If we now take two Majorana fermions, the corresponding contribution to $\textrm{pf}({\cal A}')$ will be \begin{equation} \label{pfnonzero} [e^{i(\f_++\f_-)} (\lambda^2+m^2)]^2 \ . \end{equation} We see that, by making the {\em choice} \begin{equation} \label{phasenonzero} \f_+ = \f_- = 0 \ , \end{equation} we achieve agreement between the corresponding factors for the Dirac and two-Majorana cases.\footnote{It is, in fact, sufficient to choose $\f_++\f_-=0\ \mbox{mod}\ \p$.} Proceeding to the zero modes, in the Dirac case the contribution of a pair of zero modes, $\chi_0,e^{i\f_0}CS\chi_0^*$, is just \begin{equation} \label{detzero} (m e^{\pm i\alpha_D})^2 \ , \end{equation} depending on the chirality. In the Majorana case, the corresponding contribution to $\textrm{pf}({\cal A}')$ from each Majorana fermion is \begin{equation} \label{onepfzero} -m e^{i\f_0} e^{\pm i\alpha_M} = -m e^{i\f_0} e^{\pm i(\alpha_D+\p/2)} \ , \end{equation} where on the right-hand side we have substituted $\alpha_M=\alpha_D+\p/2$. 
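These matching conditions are pure phase arithmetic and can be confirmed directly; the zero-mode factors agree only for the non-trivial choice $\f_0=\p/2$ of Eq.~(\ref{phasezero}) below. A quick check (all numerical values are arbitrary):

```python
import numpy as np

m, lam, alpha_D = 0.8, 1.3, 0.4     # arbitrary mass, eigenvalue, chiral angle
alpha_M = alpha_D + np.pi / 2       # Eq. (massDiag)

# non-zero modes: phi_+ = phi_- = 0 matches Eq. (pfnonzero) to Eq. (detnonzero)
phi_p = phi_m = 0.0
two_majorana = (np.exp(1j * (phi_p + phi_m)) * (lam**2 + m**2)) ** 2
assert np.isclose(two_majorana, (lam**2 + m**2) ** 2)

# zero modes: squaring Eq. (onepfzero) matches Eq. (detzero) only for phi_0 = pi/2 mod pi
phi_0 = np.pi / 2
for chirality in (+1, -1):
    dirac = (m * np.exp(1j * chirality * alpha_D)) ** 2
    two_majorana = (-m * np.exp(1j * phi_0) * np.exp(1j * chirality * alpha_M)) ** 2
    assert np.isclose(dirac, two_majorana)
    naive = (-m * np.exp(1j * chirality * alpha_M)) ** 2   # phi_0 = 0 would fail
    assert not np.isclose(dirac, naive)
```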
The extra phase of $\p/2$ arises during the transition from the Dirac to the Majorana formulation, as we have seen in Eq.~(\ref{massDiag}). The contribution from two Majorana fermions is thus \begin{equation} \label{twopfzero} (m e^{i\f_0} e^{\pm i(\alpha_D+\p/2)})^2 \ . \end{equation} It follows that the Dirac result~(\ref{detzero}) will only be reproduced provided we make the non-trivial choice \begin{equation} \label{phasezero} \f_0 = \p/2\ \mbox{mod}\ \p \ . \end{equation} \section{\label{latt} Non-perturbative calculation} In the previous section, we showed that the definition of a theory with Majorana fermions has an intrinsic phase ambiguity, which can be used to resolve the apparent paradox introduced in Sec.~\ref{intro}. However, the question of whether, and how, the theory ``chooses'' the proper phase was left open. In order to address this question, we need a properly regulated non-perturbative definition of the theory, which is provided by the lattice. The lattice action for a Majorana fermion will always have the generic form ${1\over 2}\Psi^T {\cal A} \Psi$ for a suitable antisymmetric operator ${\cal A}$. Integrating over the lattice Majorana field yields $\textrm{pf}({\cal A})$, which is now well defined. There is no room for any (phase) ambiguity, because, on any finite-volume lattice, ${\cal A}$ is a finite-size matrix, and the lattice selects the coordinate basis to define ${\cal A}$. Our first result concerns a single Majorana fermion with no chiral angle(s), and a positive bare mass $m_0>0$. Using domain-wall fermions, we show in App.~\ref{pospf} that $\textrm{pf}({\cal A})$ is strictly positive in this case. The domain-wall fermion measure is then strictly positive for any number of Majorana fermions, and in all topological sectors. In this section, we discuss in detail the transition from the Dirac to the Majorana formulation. 
In Sec.~\ref{Wilson}, we regulate the theory using Wilson fermions, and in Sec.~\ref{DWF}, using domain-wall fermions. While in the case of Wilson fermions, there is a lacuna in the argument (which we discuss in some detail in App.~\ref{SStconj}), this is not the case for domain-wall fermions. We establish that the compensating topological term alluded to in the introduction indeed arises when needed, thus resolving the paradox. As in App.~\ref{pospf}, it proves easier to work with the five-dimensional formulation of domain-wall fermions, rather than directly with any Ginsparg-Wilson operator that arises in the limit of an infinite fifth dimension. We also remark that staggered fermions always lead to a four-fold taste degeneracy in the continuum limit, and so they cannot be used here, given that the apparent paradox only arises when $NT$ is even, but not divisible by four.\footnote{ Interpreting the staggered tastes as physical flavors, it is possible that reduced staggered fermions can be employed \cite{STW1981,rdcstag}. We have not explored this further.} We summarize the results of this section in Sec.~\ref{smr}. \subsection{\label{Wilson} Wilson fermions} If we formulate the theory using Wilson fermions, the resolution of the puzzle relies on the observation, made in Ref.~\cite{SSt}, of how the $\th$ angle can be realized within this fermion formulation. The starting point of the discussion is a one-flavor Wilson operator with both the Wilson and mass terms chirally rotated by angles $\th_W$ and $\th_m$, respectively, \begin{equation} D_W(\th_W,\th_m) = D_K + e^{i\th_W\gamma_5} W + e^{i\th_m\gamma_5} m_0 \ . \label{Wth} \end{equation} Here $D_K$ is the naive lattice discretization of the (massless) Dirac operator. $W$ is the Wilson term, which eliminates the fermion doublers, and is chosen for definiteness to be real positive; $m_0$ is the bare mass. 
The partition function takes the form~(\ref{Z}), but with the fermion part of the lagrangian replaced by\footnote{We will not need the lattice form of the gauge action.} \begin{equation} {\cal L}_F = \overline{\j} D_W(\th_W,\th_m) \psi \ . \label{LFth} \end{equation} First, only the difference $\th_W-\th_m$ can be physical, as can be seen by applying the transformation \begin{equation} \psi\to e^{i\eta\gamma_5} \psi \ , \qquad \overline{\j}\to \overline{\j} e^{i\eta\gamma_5} \ . \label{rot5} \end{equation} In the lattice regulated theory, the determinant of this transformation is unity, hence it provides an alternative representation of exactly the same theory. It is easily checked that this transformation leaves the $D_K$ part invariant, while the angles undergo the transformation $\th_W\to\th_W+2\eta$, $\th_m\to\th_m+2\eta$. By choosing $\eta=-\th_m/2$ we eliminate the phase of the mass term, while the phase of the Wilson term becomes $\th_F\equiv\th_W-\th_m$. With only the angle $\th_F$ left in the fermion action, and with $\th$ as the explicit vacuum angle (see Eq.~(\ref{lag})), what Ref.~\cite{SSt} claimed is that, in the continuum limit, \begin{equation} Z(\th,\th_F) = Z(\th+NT\th_F,0) \ . \label{Zlim} \end{equation} This implies that the relative $\textrm{U(1)}$ phase of the Wilson term and the mass term turns into the familiar $\th$ angle in the continuum limit. In Eq.~(\ref{Zlim}) we have written down the generalization of the result of Ref.~\cite{SSt} to $N$ Dirac fermions in an {\it irrep}\ with index $T$. In the case that a topological term with $\th\ne 0$ is already present in the gauge action, $NT\th_F$ gets added to $\th$. We pause here to note that the argument given in Ref.~\cite{SSt} is not complete as it stands, because of a subtlety related to renormalization. While it is beyond the scope of this paper to complete the proof, App.~\ref{SStconj} outlines a conjecture on the interplay of the observation of Ref.~\cite{SSt} and renormalization. 
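The angle shift $\th_{W,m}\to\th_{W,m}+2\eta$ under the transformation~(\ref{rot5}) can be verified on a finite-dimensional mock-up of Eq.~(\ref{Wth}): any block off-diagonal matrix plays the role of $D_K$ (it anticommutes with $\gamma_5$), while $W$ and $m_0$ are taken as scalars. A sketch, with stand-ins of our own:

```python
import numpy as np

rng = np.random.default_rng(2)
g5 = np.array([1.0, 1.0, -1.0, -1.0])   # gamma_5 eigenvalues in the chiral basis
G5 = np.diag(g5).astype(complex)

def exp_ig5(a):
    """e^{i a gamma_5} as a diagonal matrix."""
    return np.diag(np.exp(1j * a * g5))

# stand-in for D_K: block off-diagonal, hence it anticommutes with gamma_5
B, C = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
DK = np.block([[np.zeros((2, 2)), B], [C, np.zeros((2, 2))]]).astype(complex)
assert np.allclose(G5 @ DK, -DK @ G5)

W, m0 = 1.0, 0.25   # Wilson term and bare mass, taken as scalars here

def D_W(thW, thm):   # mock-up of Eq. (Wth)
    return DK + exp_ig5(thW) * W + exp_ig5(thm) * m0

thW, thm, eta = 0.5, 0.2, 0.23
rotated = exp_ig5(eta) @ D_W(thW, thm) @ exp_ig5(eta)   # effect of Eq. (rot5)
assert np.allclose(rotated, D_W(thW + 2 * eta, thm + 2 * eta))
```

In particular, choosing $\eta=-\th_m/2$ in this toy model removes the phase of the mass term and leaves $\th_F=\th_W-\th_m$ on the Wilson term, as in the text.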
However, this subtlety does not affect the rest of this paper. In particular, in the next subsection we provide an argument analogous to the one given here based on domain-wall fermions, where the subtlety does not arise. Next, let us work out the transition from the Dirac to the Majorana case. We start with a single Dirac fermion in a real {\it irrep}, where the Wilson fermion operator $D_W(\th_F)$ is given by Eq.~(\ref{Wth}), taking $\th_W=\th_F$ and $\th_m=0$. In the Majorana formulation, the $4\times 4$ matrix in spinor space becomes an $8\times 8$ matrix which mixes the two Majorana species. In terms of $4\times 4$ blocks, the Wilson operator in the Majorana formulation is \begin{equation} D_{\rm Maj}(\th_F) = \left( \begin{array}{cc} D_K & e^{i\th_F\gamma_5} W + m_0 \\ e^{i\th_F\gamma_5} W + m_0 & D_K \end{array} \right) \ , \label{Dmaj1} \end{equation} where we have used Eqs.~(\ref{Diracreal}) and~(\ref{majcond}). The lagrangian becomes \begin{equation} {\cal L}_F = {1\over 2} \overline{\J} D_{\rm Maj}(\th_F) \Psi \ . \label{Lmaj} \end{equation} The key feature of Eq.~(\ref{Dmaj1}) is that, because of their identical chiral properties, the Wilson and mass terms occur in the same places. Applying an $\textrm{SU}(2)$ flavor transformation, \mbox{\it i.e.}, using Eq.~(\ref{transfreal}) for $N=1$ with $h=\exp(-i\p\s_2/4)=h^*$, and using that $h^T\s_1 h = \s_3$, the Majorana Wilson operator gets rotated into \begin{equation} D_{\rm Maj}(\th_F) = \left( \begin{array}{cc} D_K + e^{i\th_F\gamma_5} W + m_0 & 0 \\ 0 & D_K - (e^{i\th_F\gamma_5} W + m_0) \end{array} \right) \ . \label{Dmaj3} \end{equation} When $\th_F=0$, the relative phase of the Wilson and mass terms is zero, for both of the Majorana species. This implies that $D_{\rm Maj}(0)$ is the Wilson operator for two Majorana fermions with the same bare mass $m_0$ (as opposed to the case where one Majorana fermion would have a mass $+m_0$ and the other $-m_0$). 
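Two small matrix facts drive this diagonalization and the step that follows: the $\textrm{SU}(2)$ rotation $h=\exp(-i\p\s_2/4)$ is real and takes $\s_1$ into $\s_3$, and conjugation by $i\gamma_5$ preserves $D_K$ while flipping the sign of anything that commutes with $\gamma_5$. A numerical sketch (block matrices and scalar $W$, $m_0$ are our stand-ins):

```python
import numpy as np

# SU(2) flavor rotation h = exp(-i pi sigma_2 / 4) = (1 - i sigma_2)/sqrt(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
h = (np.eye(2) - 1j * s2) / np.sqrt(2)
assert np.allclose(h.imag, 0)            # h = h^*, as used in Eq. (transfreal)
assert np.allclose(h.T @ s1 @ h, s3)     # sigma_1 -> sigma_3

# conjugation by i gamma_5: a block off-diagonal D_K (anticommuting with gamma_5)
# is preserved, while the Wilson-plus-mass part (commuting with gamma_5) flips sign
rng = np.random.default_rng(3)
g5 = np.diag([1.0, 1.0, -1.0, -1.0]).astype(complex)
B, C = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
DK = np.block([[np.zeros((2, 2)), B], [C, np.zeros((2, 2))]]).astype(complex)
thF, W, m0 = 0.7, 1.0, 0.25
X = np.diag(np.exp(1j * thF * np.array([1, 1, -1, -1]))) * W + m0 * np.eye(4)
assert np.allclose(g5 @ X, X @ g5)
assert np.allclose((1j * g5) @ (DK - X) @ (1j * g5), DK + X)
```

The second assertion is exactly the mechanism behind the sign flip used in the next step, Eq.~(\ref{finalWmaj}).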
We prove this assertion by applying the transformation~(\ref{rot5}) with $\eta=\p/2$ to the second Majorana fermion only.\footnote{ Note that the transformation~(\ref{rot5}) is consistent with the Majorana condition~(\ref{majcond}). } Explicitly, it reads $\Psi_2 \to i\gamma_5\Psi_2$. The Majorana--Wilson operator transforms into \begin{eqnarray} D_{\rm Maj}(\th_F) &\to& \left( \begin{array}{cc} D_K + e^{i\th_F\gamma_5} W + m_0 & 0 \\ 0 & i\gamma_5 \Big( D_K - (e^{i\th_F\gamma_5} W + m_0) \Big) i\gamma_5 \end{array} \right) \label{finalWmaj}\\ &=& \rule{0ex}{5ex} \left( \begin{array}{cc} D_K + e^{i\th_F\gamma_5} W + m_0 & 0 \\ 0 & D_K + e^{i\th_F\gamma_5} W + m_0 \end{array} \right) \ . \nonumber \end{eqnarray} The fermion operator for each Majorana fermion is now exactly the same as in the Dirac case. The corresponding basis for the Majorana fields is given in App.~\ref{bases}. It follows that the fermion measure of the two-Majorana formulation is equal to $\textrm{pf}^2(CSD_W(\th_F))$, and thus equal to the Dirac measure ${\rm det}(D_W(\th_F))$. We have proved that the fermion measure in the Majorana formulation is unchanged from the Dirac formulation. Equation~(\ref{finalWmaj}) shows that we can choose the mass matrix to be proportional to the unit matrix, instead of to $J_S$ (Eq.~(\ref{Js})) or $J_S^{\rm rot}$ (Eq.~(\ref{Js2})). Unlike in the formal continuum treatment of the previous section, no phase ambiguity, nor any ``excess'' phase of $\p/2$, arises when the transition to Majorana fermions is done in the lattice-regulated theory. \subsection{\label{DWF} Domain-wall fermions} In this subsection, we revisit the argument of the previous subsection, but now using domain-wall fermions \cite{DBK} instead of Wilson fermions. As we will see, in the case of domain-wall fermions, the argument is complete, allowing us to conclude that a lattice regularization can indeed be invoked to settle the ambiguity we found in Sec.~\ref{maj}. 
The starting point is the domain-wall fermion action \cite{YSdwf1} for a massive Dirac fermion with bare mass $m_0$ and domain-wall height $M$, \begin{eqnarray} \label{Sdwf} S&=&\sum_{s=1}^{N_5} \overline{\j}(s)(D_K+M-1-W)\psi(s)\\ && + \sum_{s=1}^{N_5-1} \left(\overline{\j}(s)P_R\psi(s+1)+\overline{\j}(s+1)P_L\psi(s)\right) \nonumber\\ && - \ m_0\left(\overline{\j}(N_5)P_R\psi(1)+\overline{\j}(1)P_L\psi(N_5)\right)\ ,\nonumber \end{eqnarray} where $\psi$ is the five-dimensional fermion field $\psi(x,s)$, $s=1,\dots,N_5$. In Eq.~(\ref{Sdwf}), only the dependence on the fifth coordinate is made explicit. The mass term couples the fields on opposite boundaries. Domain-wall fermions are not exactly massless for finite $N_5$ when $m_0=0$. The mass induced by a finite fifth direction, usually referred to as the residual mass, is reminiscent of the additive mass renormalization of Wilson fermions. However, the residual mass vanishes in the limit $N_5\to\infty$, which we will take {\em before} the continuum limit. Following this order of limits, the mass term introduced in Eq.~(\ref{Sdwf}) renormalizes multiplicatively. Thus, the complications of the additive mass renormalization of the Wilson case, that we encountered in Sec.~\ref{Wilson}, are avoided. Our aim in this subsection is to recast the argument given in Sec.~\ref{Wilson} in terms of the domain-wall formulation of the lattice regularized theory. The first step is to prove an analogous result to Eq.~(\ref{Zlim}), thus rederiving the theorem of Ref.~\cite{SSt} in terms of domain-wall fermions. For this, we need to define an axial transformation. 
We take $N_5=2K$ even, and define the axial transformation as \cite{YSdwf2} \begin{eqnarray} \label{axial} \delta\psi(s)&=&e^{i\eta}\psi(s)\ ,\qquad\ \ \!\delta\overline{\j}(s)=\overline{\j}(s)e^{-i\eta} \ ,\qquad 1\le s\le K\ ,\\ \ \delta\psi(s)&=&e^{-i\eta}\psi(s)\ ,\qquad\delta\overline{\j}(s)=\overline{\j}(s)e^{i\eta}\ ,\quad K+1\le s\le 2K\ .\nonumber \end{eqnarray} Following Ref.~\cite{YSdwf2}, we define the five-dimensional currents \begin{eqnarray} \label{currents} j_\mu(x,s)&=&{1\over 2}\left(\overline{\j}(x,s)(1+\gamma_\mu)U_\mu(x)\psi(x+\mu,s)-\overline{\j}(x+\mu,s)(1-\gamma_\mu)U^\dagger_\mu(x)\psi(x,s)\right)\ ,\nonumber\\ j_5(x,s)&=&\overline{\j}(x,s)P_R\psi(x,s+1)-\overline{\j}(x,s+1)P_L\psi(x,s)\ . \end{eqnarray} The four-dimensional axial current corresponding to the axial transformation~(\ref{axial}) is \begin{equation} \label{axialcurr} j^A_\mu(x)=-\sum_{s=1}^K j_\mu(x,s)+\sum_{s=K+1}^{2K} j_\mu(x,s)\ . \end{equation} It satisfies the Ward--Takahashi identity \begin{equation} \label{WI} \partial^-_\mu j^A_\mu=2j_5(K)+2m(\overline{\j}(2K)P_R\psi(1)-\overline{\j}(1)P_L\psi(2K))\ . \end{equation} Analogous to Eq.~(\ref{Wth}), we can now introduce two angles, through the combinations $S_W(\th_W)$ and $S_m(\th_m)$, where \begin{eqnarray} \label{DWFthth} S_W(\th_W) &=& e^{i\theta_W}\overline{\j}(K)P_R\psi(K+1)+e^{-i\theta_W}\overline{\j}(K+1)P_L\psi(K)\ , \\ S_m(\th_m) &=& -m_0\left(e^{-i\theta_m}\overline{\j}(2K)P_R\psi(1)+e^{i\theta_m}\overline{\j}(1)P_L\psi(2K)\right)\ .\nonumber \end{eqnarray} $S_W(\th_W)$ replaces the $s=K$ term on the second line of Eq.~(\ref{Sdwf}), and $S_m(\th_m)$ replaces the mass term (third line) in Eq.~(\ref{Sdwf}). Once again, under an axial transformation (Eq.~(\ref{axial})), $\th_{m,W}\to\th_{m,W}+2\eta$, and hence only the difference $\th_F=\th_W-\th_m$ is physical. Slightly generalizing the discussion of the previous subsection, here we will keep both $\th_W$ and $\th_m$ arbitrary. 
If we now differentiate the fermion partition function with respect to $\th_W$, the result is $\svev{{\tilde{j}}_5(\th_W)}$, where $\svev{\cdot}$ indicates integration over the fermion fields, and we have defined \begin{equation} \label{dSdwf} {\tilde{j}}_5(\th_W) = e^{i\th_W}\overline{\j}(K)P_R\psi(K+1) - e^{-i\th_W}\overline{\j}(K+1)P_L\psi(K)\ , \end{equation} We will prove that in the theory with a non-zero $\th_W$, the continuum limit of $\svev{{\tilde{j}}_5(\th_W)}$ yields the axial anomaly. By integrating with respect to $\th_W$, it then follows that \begin{equation} \label{Zththdwf} Z(\th,\th_W,\th_m)=Z(\th+NT\th_W,0,\th_m)\ , \end{equation} where now the path integral is defined with the domain-wall fermion action instead of the Wilson fermion action, and we have again allowed for $N$ Dirac fermions in an {\it irrep}\ with index $T$. Equation~(\ref{Zththdwf}) generalizes Eq.~(\ref{Zlim}) of the preceding subsection. The proof turns out to be quite straightforward. Let $G(\th_W,\th_m)$ be the inverse of the domain-wall Dirac operator $D(\th_W,\th_m)$, with angles $\th_W$ and $\th_m$ introduced as in Eq.~(\ref{DWFthth}). Using Eq.~(\ref{dSdwf}), and writing ${\tilde{j}}_5(\th_W)=\overline{\j} J_5(\th_W)\psi$, we have \begin{equation} \label{proof} \svev{{\tilde{j}}_5(\th_W)}=-{\rm Tr}\,\Big(J_5(\th_W)G(\th_W,\th_m)\Big)=-{\rm Tr}\,\Big(J_5(0)G(0,\th_m-\th_W)\Big)\ , \end{equation} where in the second step we used the axial transformation~(\ref{axial}) with $\eta=\th_W/2$ to move the angle $\th_W$ to the mass term. We now take the limit $K\to\infty$, in which the propagator in Eq.~(\ref{proof}) becomes translationally invariant in the fifth dimension. In particular, the propagator becomes independent of the boundaries, and thus of $m$ and $\theta_m$ (or $\th_m-\th_W$ after the axial rotation). It follows that $\svev{{\tilde{j}}_5(\th_W)}=\svev{{\tilde{j}}_5(0)}$ for any $\th_W$ and $\th_m$, and the anomaly is recovered as in Ref.~\cite{YSdwf1}. 
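The first step above, differentiating the fermion partition function with respect to $\th_W$ to produce a propagator trace, is the finite-dimensional identity $\partial_\th \log{\rm det}\,D(\th)={\rm Tr}\,(D^{-1}\partial_\th D)$; for Grassmann integration an overall sign accompanies the trace, as in Eq.~(\ref{proof}). A numerical sketch on a random matrix family (the parametrization is our toy stand-in for the $\th_W$ dependence):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
# well-conditioned random family D(theta) = D0 + e^{i theta} P
D0 = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)) + 20 * np.eye(n)
P = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

def D(theta):
    return D0 + np.exp(1j * theta) * P

theta, eps = 0.6, 1e-6
# d/dtheta log det D, via a ratio of determinants (avoids branch-cut issues)
numeric = np.log(np.linalg.det(D(theta + eps)) / np.linalg.det(D(theta - eps))) / (2 * eps)
analytic = np.trace(np.linalg.inv(D(theta)) @ (1j * np.exp(1j * theta) * P))
assert np.allclose(numeric, analytic, atol=1e-4)
```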
With the domain-wall equivalent of Eq.~(\ref{Zlim}) in hand, we now return to the equivalence between one Dirac fermion in a real {\it irrep}\ of the gauge group and two Majorana fermions, in the domain-wall regularization. As we will see, the argument follows similar steps as that for the Wilson-fermion case given in Sec.~\ref{Wilson}. We begin by mapping the action~(\ref{Sdwf}) into an action for two Majorana fermions, denoted as $\Psi_i$, $i=1,2$. We again make use of Eq.~(\ref{Diracreal}), but now with a Majorana condition adapted for domain-wall fermions. Analogous to Eq.~(\ref{majcond}), we will require that \cite{DBKMS} \begin{equation} \label{bJdwf} \overline{\J}=(R_5\Psi)^TCS\ , \end{equation} with $S$ and $C$ as in Sec.~\ref{maj}, and $R_5$ a reflection in the fifth direction: \begin{equation} \label{R} R_5\Psi(x,s)=\Psi(x,N_5-s+1)\ . \end{equation} The reason for adding the reflection is that charge conjugation (in four dimensions) interchanges left- and right-handed fermions. Here the right- and left-handed modes emerge near the boundaries $s=1$ and $s=N_5$, respectively, and they need to be explicitly interchanged to match the four-dimensional picture. 
The domain-wall fermion action~(\ref{Sdwf}) in terms of two massless Majorana fermions $\Psi_{1,2}$ defined by \begin{eqnarray} \Psi_{L,1}(s) &=& \psi_L(s) \ , \label{Psidwf}\\ \Psi_{R,1}(s) &=& R_5SC\,\overline{\j}^T_L(s)=SC\,\overline{\j}^T_L(N_5-s+1) \ , \nonumber\\ \Psi_{L,2}(s) &=& R_5SC\,\overline{\j}^T_R(s)= SC\,\overline{\j}^T_R(N_5-s+1)\ , \nonumber\\ \Psi_{R,2}(s) &=& \psi_R(s) \ , \nonumber \end{eqnarray} is then given, for $m_0=0$, by \begin{eqnarray} \label{Sdwfmaj} S_{\rm Maj}&=&{1\over 2}\sum_{s=1}^{N_5}\Psi^T(N_5-s+1)CSD_K\Psi(s)\\ && +\ {1\over 2}\sum_{s=1}^{N_5} \Psi^T(N_5-s+1)CS \s_1 (M-W-1)\Psi(s)\nonumber\\ && +\ {1\over 2}\sum_{s=1}^{N_5-1} \left(\Psi^T(N_5-s+1)CS \s_1 P_R\Psi(s+1) +\Psi^T(N_5-s)CS \s_1 P_L\Psi(s)\right)\ , \nonumber \end{eqnarray} where $\s_1$ is again the first Pauli matrix acting on the flavor index $i=1,2$ of $\Psi_i$. Using Eq.~(\ref{R}) and Eq.~(\ref{Psidwf}), the Majorana form of Eq.~(\ref{DWFthth}) is \begin{eqnarray} \label{DWFththmaj} S_W(\th_W) &=& {1\over 2}\left(e^{i\th_W}\Psi^T_R(K+1)SC\s_1\Psi_R(K+1)+e^{-i\th_W}\Psi^T_L(K)SC\s_1\Psi_L(K)\right)\,, \hspace{6ex} \\ S_m(\th_m) &=& -\frac{m_0}{2}\left(e^{-i\th_m}\Psi^T_R(1)SC\s_1\Psi_R(1)+e^{i\th_m}\Psi^T_L(2K)SC\s_1\Psi_L(2K)\right)\,.\nonumber \end{eqnarray} $S_m(\th_m)$ gets added to the massless Majorana domain-wall action~(\ref{Sdwfmaj}), while $S_W(\th_W)$ replaces the $s=K$ term on the third line of Eq.~(\ref{Sdwfmaj}). As in Sec.~\ref{Wilson}, the flavor matrix $\s_1$ in Eq.~(\ref{DWFththmaj}) can be rotated into $\s_3$. If we then perform a phase transformation\footnote{Again, this phase transformation is not anomalous on the lattice.} \begin{eqnarray} \label{Psi2phase} \Psi_2(x,s)&\to& i\Psi_2(x,s)\ ,\qquad 1\le s\le K\ ,\\ \Psi_2(x,s)&\to& -i\Psi_2(x,s)\ ,\qquad K+1\le s\le 2K\ ,\nonumber \end{eqnarray} on the Majorana field $\Psi_2$, while leaving $\Psi_1$ alone, this rotates $\s_3$ into the identity matrix in flavor space. 
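In flavor space the transformation~(\ref{Psi2phase}) acts on a bilinear $\Psi^T M \Psi$ as $M\to u^T M u$, with $u={\rm diag}(1,\pm i)$ on the two halves of the fifth dimension. For the same-site bilinears of Eq.~(\ref{DWFththmaj}) this turns $\s_3$ into the identity, while a flavor-diagonal bilinear coupling $s$ to $N_5-s+1$ (like the kinetic term) is untouched. A two-flavor sketch:

```python
import numpy as np

s3 = np.diag([1.0, -1.0]).astype(complex)
u_lo = np.diag([1.0, 1j])    # Psi_2 -> +i Psi_2, lower half 1 <= s <= K
u_hi = np.diag([1.0, -1j])   # Psi_2 -> -i Psi_2, upper half K+1 <= s <= 2K

# same-site bilinears (the Wilson- and mass-type defect terms): sigma_3 -> identity
assert np.allclose(u_lo.T @ s3 @ u_lo, np.eye(2))
assert np.allclose(u_hi.T @ s3 @ u_hi, np.eye(2))
# a flavor-diagonal bilinear coupling s to N5-s+1 (opposite halves) is invariant
assert np.allclose(u_hi.T @ np.eye(2) @ u_lo, np.eye(2))
```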
The end result is that $\s_1$ is removed from Eqs.~(\ref{Sdwfmaj}) and~(\ref{DWFththmaj}) (while leaving the kinetic term invariant), thus proving that the theory has two Majorana fermions with equal positive mass $m$ and the same $\th$ angle as the Dirac theory. Again, using that $\textrm{pf}^2({\cal A})={\rm det}({\cal A})$ for any antisymmetric ${\cal A}$, we conclude that the Majorana measure is identical to the Dirac measure. \subsection{\label{smr} Summary} We summarize the main results of this section. The starting point is a lattice-regularized theory with Wilson or domain-wall fermions, and with chiral angles $\th_m$ and $\th_W$ introduced in Eqs.~(\ref{Wth}) or~(\ref{DWFthth}), respectively.\footnote{ Or, to be more precise, in Eq.~(\ref{Diracren}) in the case of Wilson fermions.} We also allow for an explicit topological term in the gauge action, with angle $\th$ (see Eq.~(\ref{lag})). Consider first the case of $N$ identical Dirac fermions. As first observed in Ref.~\cite{SSt}, in the continuum limit an additional vacuum angle \begin{equation} \label{thind} \th_{\rm ind} = NT\th_W \ , \end{equation} is induced by the fermions. Introducing the ``shifted'' angle \begin{equation} \label{thetatot} \th_{\rm shf} = \th + \th_{\rm ind}\ , \end{equation} the operational meaning of this statement is that all observables will be reproduced in the continuum limit if we set $\th_W=0$, and, at the same time, replace $\th$ by $\th_{\rm shf}$ as the angle multiplying the explicit topological term in the (lattice) lagrangian. As for the phase of the fermion mass matrix, we trivially have $\alpha=NT\th_m$ (recall Eq.~(\ref{cm})). Substituting this into Eq.~(\ref{theff}) we conclude that, after integrating out the fermions, the effective vacuum angle in the gauge field measure is \begin{equation} \label{thefflatt} \th_{\rm eff} = \th_{\rm shf}-\alpha = \th + NT(\th_W-\th_m) \ . 
\end{equation} In the case of $N_{\rm maj}$ identical Majorana fermions, the same result holds, with $N=N_{\rm maj}/2$. The interesting case is an even number $2N$ of Majorana fermions, which we have shown to be equivalent to $N$ Dirac fermions, as it should be. This has resolved the apparent paradox described in Sec.~\ref{intro}. We conclude this section by summarizing the result in the case of a single Dirac fermion, $N=1$. The key observation is that, after the transition from a Dirac fermion to two Majorana fermions, the mass term and the Wilson term (or its domain-wall fermion counterpart) are proportional to the same matrix in flavor space. As we have shown, by a sequence of non-anomalous lattice transformations (meaning that the jacobian of each lattice transformation is equal to one), we may bring the two Majorana fermions to a diagonal form, with the same phases as for the original Dirac fermion (see, \mbox{\it e.g.}, Eq.~(\ref{finalWmaj}) for the Wilson case). Alternatively, we may elect to apply only SU(2) transformations to the Majorana fermions. These can bring the Wilson and mass terms, which originally point in the $\s_1$ direction in flavor space, first into the $\s_3$ direction, and then into the $i\gamma_5$ direction (see Eq.~(\ref{Js2})). In this situation we again obtain two identical Majorana fermions, except with new phases that are shifted by the same amount, $\th'_W=\th_W+\p/2$ and $\th'_m=\th_m+\p/2$. In the continuum limit the explicit topological phase becomes $\th_{\rm shf}' = \th + T\th'_W=\th+T(\th_W+\p/2)$. Because the difference $\th'_W-\th'_m=\th_W-\th_m$ is unchanged, when we substitute the new phases into Eq.~(\ref{thefflatt}) we see that the effective vacuum angle $\th_{\rm eff}$ is unchanged as well.
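The invariance claimed in the last step can be checked by direct substitution in Eq.~(\ref{thefflatt}): shifting both angles by $\p/2$ moves $\th_{\rm shf}$ but leaves $\th_{\rm eff}$ alone. Trivial, but a useful sanity check (the values of $N$, $T$, and the angles are illustrative):

```python
import numpy as np

N, T = 1, 2                     # e.g., one Dirac fermion in an irrep of index T = 2
th, thW, thm = 0.3, 0.5, 0.2    # arbitrary angles
shift = np.pi / 2

th_eff = th + N * T * (thW - thm)      # Eq. (thefflatt)
th_shf = th + N * T * thW              # Eq. (thetatot) with Eq. (thind)
thW2, thm2 = thW + shift, thm + shift  # the rotated phases th'_W, th'_m
assert not np.isclose(th + N * T * thW2, th_shf)        # th_shf does change...
assert np.isclose(th + N * T * (thW2 - thm2), th_eff)   # ...but th_eff does not
```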
\section{\label{dirac} Vacuum angle and the chiral condensate: complex \bf{\textit{irrep}}} Our non-perturbative study in the previous section has implications for the chiral expansion of fermions in a real {\it irrep}, and, in particular, for the interplay between the vacuum angle and the U(1) phase of the fermion mass matrix within the chiral expansion. These will be discussed in Sec.~\ref{vacreal} below. As a preparatory step, in this section we review the role of the vacuum angle in the more familiar case of fermions in a complex {\it irrep}. We first consider the chiral condensate in the underlying theory in Sec.~\ref{diracmic}, paying special attention to its U(1) phase in the light of the results of the previous section. In Sec.~\ref{dirachpt} we then discuss how the same features are reproduced in the effective theory, \mbox{\it i.e.}, in chiral perturbation theory. \subsection{\label{diracmic} Microscopic theory} We begin with a continuum derivation. Starting from Eqs.~(\ref{Z}),~(\ref{lag}) and~(\ref{cm}), the left-handed and right-handed fermion condensates are defined by \begin{subequations} \label{cond} \begin{eqnarray} \Sigma_{L,ij} &=& \svev{\overline{\j}_j P_L \psi_i} \ = \ -\frac{1}{V}\frac{\partial \log Z}{\partial {\cal M}^*_{ij}}\ , \label{condL}\\ \Sigma_{R,ij} &=& \svev{\overline{\j}_j P_R \psi_i} \ = \ -\frac{1}{V}\frac{\partial \log Z}{\partial {\cal M}_{ji}} \ , \label{condR} \end{eqnarray} \end{subequations} where $V$ is the volume, and $i,j=1,\ldots,N$ are flavor indices. Standard steps using the identity \begin{equation} \Big(\Sl{D} + m(\Omega^\dagger P_L + \Omega P_R) \Big) \Big(-\Sl{D} + m(\Omega P_L + \Omega^\dagger P_R) \Big) = -\Sl{D}^2 + m^2 \ . 
\label{Dsq} \end{equation} give rise to the expressions \begin{eqnarray} \Sigma_L &=& -(a_1-a_5) \Omega \ , \label{SigL}\\ \Sigma_R &=& -(a_1+a_5) \Omega^\dagger \ , \label{SigR} \end{eqnarray} where \begin{eqnarray} a_1 &=& \frac{m}{2V} \svev{{\rm Tr}\, \left[\Big(-\Sl{D}^2 + m^2\Big)^{-1} \right] } \ , \label{g0cond}\\ a_5 &=& \rule{0ex}{4ex} \frac{m}{2V} \svev{{\rm Tr}\, \left[\gamma_5 \Big(-\Sl{D}^2 + m^2\Big)^{-1} \right] } \ . \label{g5cond} \end{eqnarray} The ${\rm Tr}\,$ symbol indicates a trace over spacetime, color and Dirac indices.\footnote{When the Dirac operator occurs inside the ${\rm Tr}\,$ symbol, by convention it carries no flavor indices.} By applying a parity transformation we may express these quantities more explicitly as \begin{eqnarray} a_1 &=& \frac{m}{2V} \int {\cal D} A\, \tilde\m(A) \cos(\th_{\rm eff} Q) \,{\rm Tr}\,\! \left[\Big(-\Sl{D}^2 + m^2\Big)^{-1} \right] \ , \label{vala1}\\ a_5 &=& -\frac{im}{2V} \int {\cal D} A\, \tilde\m(A) \sin(\th_{\rm eff} Q) \,{\rm Tr}\,\! \left[\gamma_5\Big(-\Sl{D}^2 + m^2\Big)^{-1} \right] \ . \label{vala5} \end{eqnarray} It follows that $a_1$ is real, while $a_5$ is imaginary. Both $a_1$ and $a_5$ are functions of $\th_{\rm eff}$, defined in Eq.~(\ref{theff}). Introducing \begin{equation} z = a_1 - a_5\ , \label{z} \end{equation} we arrive at \begin{equation} \Sigma_L = \Sigma_R^\dagger = -z(\th_{\rm eff})e^{i\alpha/(NT)}\tilde\O=-\left[z(\th_{\rm eff})e^{-i\th_{\rm eff}/(NT)}\right]e^{i\th/(NT)}\tilde\O \ . \label{Sigz} \end{equation} In the special case $\th_{\rm eff}=\th-\alpha=0$, $a_5$ vanishes while $a_1=r$ is real positive. Hence, in that case, $z=r>0$, and \begin{equation} \label{Sigr} \Sigma_L = -r\Omega = -r\, e^{i\alpha/(NT)} \tilde\O=-r\, e^{i\th/(NT)} \tilde\O \ . 
\end{equation} Finally, in the limit $m\to 0$ we recover the Banks--Casher relation, \begin{equation} \label{BC} r = \frac{\p}{2}\,\r(0) \ , \end{equation} where $\r(\lambda)$ is the spectral density of the massless Dirac operator. Returning to the general case of Eq.~(\ref{Sigz}) we see that the orientation of the condensate is determined by the ``normalized'' mass matrix ${\cal M}/m$ and by $\th_{\rm eff}$. In retrospect, this pattern is a consequence of Eq.~(\ref{cond}), which defines the condensates via derivatives of the partition function with respect to the mass matrix, together with the fact that the partition function itself is invariant under non-abelian chiral rotations of the mass matrix, and depends on $\th$ (or $\th_{\rm shf}$) and $\alpha$ through their difference $\th_{\rm eff}$ only, as we proved rigorously in Sec.~\ref{latt} (see, in particular, Eq.~(\ref{thefflatt})). These are the only features of the condensate that we will need in the following. \subsection{\label{dirachpt} Effective low-energy theory} We now turn to the effective theory for the Nambu--Goldstone pions associated with the spontaneous breaking of chiral symmetry. As noted above, at this stage the discussion is restricted to QCD-like theories in which the fermions belong to a complex {\it irrep}. The dynamical effective field is \begin{equation} \Sigma(x) = \Sigma_0 U(x)\ ,\qquad U(x) = \exp(i\sqrt{2}\Pi(x)/f) \ , \qquad \Pi(x)=\sum_{a=1}^{N^2-1}\Pi_a(x)T_a \ , \label{Upion} \end{equation} where $U(x)$ is the $\textrm{SU}(N)$-valued pion field, and $\Sigma_0\in \textrm{U(1)}$ is a constant phase factor.\footnote{ Any constant $\textrm{SU}(N)$-valued part of $\Sigma$ can be absorbed into the pion field. $\Sigma_0$ may be regarded as a remnant of the $\eta'$ field (see, for instance, Refs.~\cite{EW,VV}).
} The leading-order potential is \begin{equation} \label{Vcl} V = - \frac{f^2 B}{2} \,{\rm tr}({\cal M}^\dagger \Sigma + \Sigma^\dagger {\cal M}) \ , \end{equation} where we recall that ${\cal M}=me^{i\alpha/(NT)}\tilde\O$, with $\tilde\O\in \textrm{SU}(N)$. We remind the reader that the product $Bm$ is renormalization-group invariant, and depends only on the chiral-limit value of the condensate.\footnote{ In particular, the leading-order chiral lagrangian is insensitive to the quadratic divergence proportional to $m/a^2$ that is present in the bare lattice condensate away from the chiral limit in any fermion formulation. } As we have seen in Sec.~\ref{diracmic}, the partition function of the microscopic theory depends on $\alpha$ and $\th$ only through their difference $\th_{\rm eff} = \th - \alpha$, and the same must thus be true in the effective theory: the lagrangian of the effective theory must be a function of $\th_{\rm eff}$ only, order by order in the chiral expansion, starting with the tree-level potential $V$. Evidently, $V$ will be a function of only $\th_{\rm eff}$ if we set \begin{equation} \label{Sig0} \Sigma_0 = e^{i\th/(NT)} \ . \end{equation} In App.~\ref{theffchpt} we use the power counting and the symmetries of the effective theory to prove that Eq.~(\ref{Sig0}) provides the unique solution to the requirement that the tree-level potential~(\ref{Vcl}) depends on $\alpha$ and $\th$ only through their difference $\th_{\rm eff}$. We also prove that a similar statement applies to the next-to-leading order lagrangian. In the effective theory, the tree-level condensate now takes the form \begin{equation} \label{Sigth} \Sigma_L =\frac{\partial V}{\partial {\cal M}^*} \bigg|_{U=U_0}= -\frac{f^2 B}{2}\, e^{i\th/(NT)} U_0 \ , \end{equation} where $U_0 \in \textrm{SU}(N)$ is the global minimum of the potential.
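That the choice~(\ref{Sig0}) makes the potential~(\ref{Vcl}) a function of $\th_{\rm eff}=\th-\alpha$ only is easy to confirm numerically for any fixed pion configuration $U$. A sketch (the random $\textrm{SU}(N)$ construction and parameter values are ours):

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 3, 2
f2B, m = 1.0, 0.8                 # stand-ins for f^2 B and m

def su_n(rng, N):
    """Random SU(N) matrix: unitary from QR, determinant rescaled to one."""
    Q, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
    return Q / np.linalg.det(Q) ** (1.0 / N)

Om = su_n(rng, N)                 # the SU(N) matrix tilde{Omega}
U = su_n(rng, N)                  # an arbitrary fixed pion-field configuration

def V(th, al):   # Eq. (Vcl) with M = m e^{i al/(NT)} Om and Sigma = e^{i th/(NT)} U
    M = m * np.exp(1j * al / (N * T)) * Om
    Sig = np.exp(1j * th / (N * T)) * U
    return -0.5 * f2B * np.trace(M.conj().T @ Sig + Sig.conj().T @ M).real

# two (theta, alpha) pairs with the same difference give the same potential
assert np.isclose(V(0.9, 0.4), V(0.9 + 1.7, 0.4 + 1.7))
```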
For this to be consistent with Eq.~(\ref{Sigz}), the global minimum $U_0$ must be equal to $U_n = e^{2\p in/N} \tilde\O$, for some $0\le n < N$, as we will see next. Substituting $U_n$ into Eq.~(\ref{Vcl}) gives \begin{equation} \label{Vmin} V(U_n)=-f^2 BNm \cos(\th_{\rm eff}/(NT)+2\p n/N) \ . \end{equation} In App.~\ref{proofSigLO} we prove that the global minimum is reached when $\th_{\rm eff}+2\p nT$ is closest to zero. Denoting the corresponding value of $n$ by $n(\th_{\rm eff})$, the tree-level condensate is thus \begin{equation} \label{SigLO} \Sigma_L = -\frac{f^2 B}{2} e^{i(\th/(NT)+2\p n(\th_{\rm eff})/N)} \tilde\O \ . \end{equation} This result for $\Sigma_L$ is consistent with Eq.~(\ref{Sigz}), and thus demonstrates the need to introduce the constant $\textrm{U(1)}$-valued phase $\Sigma_0$ into the effective theory. Without $\Sigma_0$, the effective theory would have led to a value for $\Sigma_L$ in $\textrm{SU}(N)$. This would have been inconsistent, as, for example, can be seen in the case $\th=\alpha\ne 0$, by comparison with Eq.~(\ref{Sigr}). We comment that in exceptional cases there is a competition between the leading- and next-to-leading order potentials \cite{Smilga,HS}. In that case the discussion leading to Eq.~(\ref{SigLO}) does not apply. But the functional form of Eq.~(\ref{SigLO}) remains valid: it must remain true that $\Sigma_L$ is oriented in the direction of $e^{i(\th/(NT)+2\p n/N)} \tilde\O$ for some $n$, where again $n$ depends on $\th_{\rm eff}$ only, as can again be seen by comparison with Eq.~(\ref{Sigz}). \section{\label{vacreal} Vacuum angle and the chiral condensate: real \textbf{\emph{irrep}}} In this section we turn to real {\it irreps}. In Sec.~\ref{real} we discuss the condensate, and elaborate on the differences between the complex case (discussed in Sec.~\ref{dirac}) and the real case.
We deal separately with a stand-alone theory of Majorana fermions, and with a theory of $2N$ Majorana fermions that was obtained by reformulating a theory of $N$ Dirac fermions, where the apparent paradox described in the introduction arises. We then discuss the implications for the chiral effective theory. In Sec.~\ref{chpt} we give a diagrammatic proof that, when $\th_{\rm eff}$ is held fixed, different orientations of the mass matrix give rise to the same physical observables. \subsection{\label{real} The condensate for a real \textbf{\emph{irrep}}} We begin with a general theory of $N_{\rm maj}$ Majorana fermions, where $N_{\rm maj}$ can be either even or odd. Allowing $N=N_{\rm maj}/2$ to be half-integer in Eq.~(\ref{transfreal}), the flavor symmetry of the massless theory is $\textrm{SU}(N_{\rm maj})$, which we will assume to be spontaneously broken to $\textrm{SO}(N_{\rm maj})$. We will consider a mass term of the general form \begin{equation} \label{genmassmaj} {1\over 2}\overline{\J}({\cal M}^\dagger P_L+{\cal M} P_R)\Psi\ , \end{equation} where now \begin{equation} \label{majmass} {\cal M} = {\cal M}^T = m\Omega = m e^{2i\alpha/(N_{\rm maj} T)} \tilde\O \ , \qquad \tilde\O \in \textrm{SU}(N_{\rm maj}) \ , \end{equation} and we assume $m>0$. Formally, the fermion path integral is a pfaffian. However, as we have seen in Sec.~\ref{ambg}, the phase of this pfaffian is ambiguous in the continuum. The rigorous solution to this problem is to define the pfaffian via a lattice regularization. For the mass matrix in Eq.~(\ref{majmass}), this gives rise to the following relations in the continuum limit \begin{equation} \label{PfF} \textrm{pf}(\Sl{D} + {\cal M}^\dagger P_L + {\cal M} P_R) = e^{i\alpha Q}\, \textrm{pf}^{N_{\rm maj}}(\Sl{D}+m) = e^{i\alpha Q}\, {\rm det}^{N_{\rm maj}/2}(\Sl{D}+m)\ . \end{equation} The second equality uses the fact that $\textrm{pf}(\Sl{D}+m)$ is strictly positive, which follows from App.~\ref{pospf}.
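For completeness, we note the elementary identity underlying the second equality in Eq.~(\ref{PfF}): for any antisymmetric matrix $A$ one has $\textrm{pf}^2(A)=\det(A)$. Applied flavor by flavor (in the notation of the text, writing $\textrm{pf}(\Sl{D}+m)$ for the pfaffian of the antisymmetrized single-flavor operator), a one-line sketch reads

```latex
% pf^2(A) = det(A) for antisymmetric A, applied flavor by flavor:
\textrm{pf}^{N_{\rm maj}}(\Sl{D}+m)
  = \left[\textrm{pf}^{2}(\Sl{D}+m)\right]^{N_{\rm maj}/2}
  = {\rm det}^{N_{\rm maj}/2}(\Sl{D}+m) \ ,
```

where for odd $N_{\rm maj}$ the half-integer power of the determinant is unambiguous precisely because $\textrm{pf}(\Sl{D}+m)>0$.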
One way to derive Eq.~(\ref{PfF}) is to start from a lattice theory of domain-wall Majorana fermions with $\th_W=0$ and $\th_m=2\alpha/(N_{\rm maj} T)$, and take the continuum limit. Defining $\Sigma_L$ and $\Sigma_R$ as in Eqs.~(\ref{condL}) and~(\ref{condR}), but replacing $\psi\to\Psi$ and $\overline{\j}\to\overline{\J}$, the rest of the discussion of Sec.~\ref{diracmic} carries over.\footnote{ The definition of parity is somewhat more subtle with Majorana fermions, see Ref.~\cite{GSHiggs2}.} We next consider the case where $N$ Dirac fermions are traded with $2N$ Majorana fermions. In the initial Dirac-fermion lattice formulation we again set $\th_W=0$. As follows from Sec.~\ref{latt}, this choice implies that $\th_{\rm shf}=\th$, and thus the angle $\th$ that multiplies the lattice-discretized topological term in the gauge action is set to the same value as in the target continuum theory. As usual, the $\textrm{U(1)}$ phase of the lattice mass matrix is the same as in the continuum, $\th_m=\alpha/(NT)$. The key point is that the values of $\th_m$ and $\th_W$ in any equivalent Majorana formulation are constrained by their values in the initial Dirac formulation, and, in particular, by the choice $\th_W=0$ we have initially made. The basic transition to Majorana fermions (using Eq.~(\ref{Diracreal}) in the Wilson case, or Eq.~(\ref{Psidwf}) in the domain-wall case) gives rise to a mass term and a (generalized) Wilson term that are both oriented in the direction of the matrix $J_S$ of Eq.~(\ref{Js}). In itself, $J_S$ has an axial $\textrm{U(1)}$ phase of $\p/2$. As a result, the phases of the mass term and the (generalized) Wilson term both get shifted by $\p/2$, becoming $\th'_m = \alpha/(NT)+\p/2$, and $\th'_W = \p/2$. In the continuum limit, the new phase of the mass matrix is $\alpha' = NT\th'_m = \alpha+NT\p/2$. The phase $\th'_W$ gets traded with an additional vacuum angle, so that the new vacuum angle is $\th'=\th_{\rm shf}=\th+NT\p/2$. 
As expected, both phases were shifted by the same amount, so that the effective vacuum angle, which is their difference, is unchanged, $\th_{\rm eff} = \th-\alpha = \th'-\alpha'$. Alternatively, we may perform an additional (non-anomalous) lattice transformation that brings back the phases to their original values, $\th_m = \alpha/(NT)$ and $\th_W=0$, so that $\th_{\rm shf}=\th$ (for the Wilson case, see Eq.~(\ref{finalWmaj})). Once again, $\th_{\rm eff}$ is unchanged. We next turn to the chiral effective theory, focusing on the case $N_{\rm maj}=2N$, with the mass matrix ${\cal M}$ of Eq.~(\ref{majmass}). The non-linear field $\Sigma$ is now an element of the coset $\textrm{SU}(2N)/\textrm{SO}(2N)$. It is symmetric, $\Sigma^T=\Sigma$, and transforms as $\Sigma\to h\Sigma h^T$ under the chiral transformation~(\ref{transfreal}), just like ${\cal M}$ (when elevated to a spurion). Instead of Eqs.~(\ref{Upion}) and~(\ref{Sig0}), which we had in the case of a complex {\it irrep}, the coset field for a real {\it irrep}\ is parametrized as \begin{equation} \label{Sigreal} \Sigma(x) = U(x)^T \Sigma_0 = \Sigma_0 U(x)\ , \end{equation} where now \begin{equation} \label{Sig0J} \Sigma_0 = e^{i\tilde\th/(NT)} J \ , \end{equation} and where $J$ is a real symmetric $\textrm{SO}(2N)$ matrix. Once again, the phase $\tilde\th$ is to be chosen so that the chiral theory is a function of $\th_{\rm eff}$ only. We will discuss examples of this shortly. Equations~(\ref{Sigreal}) and~(\ref{Sig0J}) provide a generalization of the results of Ref.~\cite{BL}, where the role of the U(1) phase was not discussed, and of Ref.~\cite{tworeps}, where the discussion was limited to $\theta=\alpha=0$, and $J={\bf 1}_{2N}$. For simplicity, in the rest of this section we again set $\tilde\O=1$ in Eq.~(\ref{majmass}).\footnote{% The generalization to arbitrary $\tilde\O$ is similar to Sec.~\ref{dirac}. 
} Let us consider the construction of the chiral theory in the case we have just discussed, where $N$ Dirac fermions get traded with $2N$ Majorana fermions. In the initial Dirac formulation we take the mass matrix to be $me^{i\alpha/(NT)}{\bf 1}_N$, and we allow for an arbitrary vacuum angle $\th$. After the transition to the Majorana formulation, the mass matrix is ${\cal M}=me^{i\alpha/(NT)} J_S$, which is equivalent to a $\textrm{U(1)}$ phase $\alpha'/(NT)=\alpha/(NT)+\p/2$. Correspondingly, the vacuum angle of the continuum-limit theory becomes $\th'=\th+NT\p/2$. A possible choice for $\Sigma_0$ is $e^{i\th'/(NT)} {\bf 1}_{2N}$. An alternative, equivalent choice, which involves the same U(1) phase, is $\Sigma_0=e^{i\th/(NT)} J_S$. For the latter choice, the factors of $J_S$ cancel out between the mass matrix and the non-linear field when the latter is expanded in terms of the pion field. Studying the classical solution as we did in Sec.~\ref{dirachpt}, we similarly find that the expectation value of the pion field $U(x)$ is a $Z_{2N}$ element which again depends only on $\th_{\rm eff}$. The situation is similar if we apply an $\textrm{SU}(2N)$ transformation that rotates the Majorana mass matrix to ${\cal M}=me^{i(\alpha/(NT)+\p/2)} {\bf 1}_{2N} \ = me^{i\alpha'/(NT)} {\bf 1}_{2N}$ (this corresponds to $J_S^{\rm rot}$ of Eq.~(\ref{Js2})). If we choose to apply the same $\textrm{SU}(2N)$ rotation to $\Sigma_0$, it becomes $\Sigma_0=e^{i(\th/(NT)+\p/2)} {\bf 1}_{2N}=e^{i\th'/(NT)} {\bf 1}_{2N}$. Finally, if in the lattice-regularized theory we have applied a further U(1) axial transformation that simultaneously brings the mass matrix to ${\cal M}=me^{i\alpha/(NT)} {\bf 1}_{2N}$, and the (shifted) vacuum angle of the continuum-limit theory back to $\th_{\rm shf}=\th$, then in the chiral theory we can correspondingly set $\Sigma_0=e^{i\th/(NT)} {\bf 1}_{2N}$. 
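The equivalent parametrizations just discussed can be summarized as follows (our own tabulation, with $\alpha'=\alpha+NT\p/2$ and $\th'=\th+NT\p/2$; in every row $\th_{\rm eff}=\th-\alpha=\th'-\alpha'$ is the same):

```latex
% summary of the equivalent Majorana parametrizations discussed above
\begin{tabular}{lll}
\hline
 & mass matrix ${\cal M}$ & $\Sigma_0$ \\
\hline
after Dirac $\to$ Majorana &
  $m e^{i\alpha/(NT)} J_S$ &
  $e^{i\th/(NT)} J_S$ \ (or $e^{i\th'/(NT)} {\bf 1}_{2N}$) \\
after $\textrm{SU}(2N)$ rotation &
  $m e^{i\alpha'/(NT)} {\bf 1}_{2N}$ &
  $e^{i\th'/(NT)} {\bf 1}_{2N}$ \\
after further $\textrm{U(1)}_A$ rotation &
  $m e^{i\alpha/(NT)} {\bf 1}_{2N}$ &
  $e^{i\th/(NT)} {\bf 1}_{2N}$ \\
\hline
\end{tabular}
```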
In all of these examples, the constant mode of the pion field $U(x)$ will be a $Z_{2N}$ element determined by $\th_{\rm eff}$ only. \subsection{\label{chpt} Chiral expansion for a real \textbf{\emph{irrep}}} In the case of a complex {\it irrep}, studied in Sec.~\ref{dirac}, we have demonstrated that the condensate can be expressed as a function of $\th$ and $\th_{\rm eff}$ via Eq.~(\ref{Sigz}). We then determined the $\th$ dependence of the chiral lagrangian by requiring that the effective theory reproduce this result. When we expand the chiral lagrangian around the classical solution in terms of the pion field, the expansion is then manifestly a function of $\th_{\rm eff}$ only, and not of $\th$ and $\alpha$ separately. It follows that physical observables, such as the decay constant and the pion mass, depend only on $\th_{\rm eff}$ as well. In the case of a real {\it irrep}, we again expect that the chiral expansion for any physical observable will depend on $\alpha$ and $\th$ only through their difference $\th_{\rm eff}$. However, establishing this result is now more subtle. Let us consider two simple examples, both of which can be parametrized as ${\cal M}=mJ$, $\Sigma_0=J$, for the same $J$. The two cases are then defined by taking $J=J_S$, for which $\alpha/(NT)=\th/(NT)=\p/2$, or $J={\bf 1}_{2N}$, for which $\alpha=\th=0$. Notice that $\th_{\rm eff}=0$ in both cases. Now, using Eq.~(\ref{Sigreal}), and noting that in both cases $J^2={\bf 1}_{2N}$, it is easy to see that $J$ drops out of the product $\Sigma^\dagger(x){\cal M}$. However, unlike in the case of a complex {\it irrep}, this does not immediately imply that the perturbative expansion is independent of the choice of $J$. The reason is the constraints imposed on the pion field: this field is hermitian, traceless, and satisfies \begin{equation} \label{constraint} \p=J\p^T J\ . 
\end{equation} Thus, even though $J$ drops out of the tree-level lagrangian, the pion field still depends on it, through the above constraint, and the pion propagator \cite{BL,tworeps} \begin{equation} \label{pionprop} \svev{\p_{ij}(x)\p_{k\ell}(y)} = \int\frac{d^4p}{(2\pi)^4}\, \frac{e^{ip(x-y)}}{p^2+M^2} \left(\frac{1}{2} \left(\delta_{i\ell}\delta_{jk} + J_{ik}J_{j\ell}\right) - \frac{1}{2N}\,\delta_{ij}\delta_{k\ell}\right)\ , \end{equation} depends on the choice of $J$ explicitly. Let us consider the case $N=1$. For $J=J_S$, and choosing a basis where $J_S=\s_3$, the constraints translate into $\p_{11}=\p_{11}^*=-\p_{22}$, and $\p_{12}=-\p_{12}^*=-\p_{21}$. For $J={\bf 1}_2$, the diagonal elements remain the same as before, whereas for the off-diagonal elements we have $\p_{12}=\p_{12}^*=\p_{21}$. Stated differently, for $J=\s_3$ the expansion of the pion field is $\p=\p_3\s_3+\p_2\s_2$, whereas for $J={\bf 1}_2$ it is $\p=\p_3\s_3+\p_1\s_1$. The tensor structure of the non-vanishing propagators is \begin{eqnarray} \label{propexample} \svev{\p_{11}(x)\p_{11}(y)}:\quad \frac{1}{2} \left(\delta_{11}\delta_{11} + J_{11}J_{11}\right) - \frac{1}{2}\,\delta_{11}\delta_{11}&=&\frac{1}{2}\ , \qquad J=\s_3,{\bf 1}_2\ ,\\ \svev{\p_{12}(x)\p_{12}(y)}:\quad \frac{1}{2} \left(\delta_{12}\delta_{12} + J_{11}J_{22}\right) - \frac{1}{2}\,\delta_{12}\delta_{12}&=& \rule{0ex}{5ex} \left\{ \begin{array}{c} -{1\over 2} \ , \qquad J=\s_3 \ , \\ {1\over 2} \ , \qquad J={\bf 1}_2 \ . \end{array} \right.\nonumber \end{eqnarray} Using a hat to distinguish the pion field for the case $J=\s_3$, we see that it will transform into the pion field of the $J={\bf 1}_2$ case if we substitute \begin{equation} \label{fieldredef} \hat\p_{11}=\p_{11}\ ,\qquad \hat\p_{12}=i\p_{12}\ , \end{equation} which corresponds to the replacement of $\s_1$ by $\s_2$ in the expansion of the pion field. 
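The sign pattern in Eq.~(\ref{propexample}) is easy to verify numerically. The following sketch (our own illustration, not part of the original analysis) evaluates the tensor structure of the propagator~(\ref{pionprop}) for $N=1$, with flavor indices running over $\{0,1\}$ instead of $\{1,2\}$:

```python
def pion_tensor(J, i, j, k, l):
    """Tensor structure of the pion propagator <pi_ij pi_kl>, cf. Eq. (pionprop):
    (1/2)(delta_il delta_jk + J_ik J_jl) - (1/(2N)) delta_ij delta_kl,
    here for N = 1, i.e. 2N = 2 flavors."""
    N = len(J) // 2
    d = lambda a, b: 1.0 if a == b else 0.0
    return 0.5 * (d(i, l) * d(j, k) + J[i][k] * J[j][l]) - d(i, j) * d(k, l) / (2 * N)

sigma3 = [[1.0, 0.0], [0.0, -1.0]]   # J = sigma_3
one2   = [[1.0, 0.0], [0.0, 1.0]]    # J = 1_2

# <pi_11 pi_11>: +1/2 for both choices of J
print(pion_tensor(sigma3, 0, 0, 0, 0), pion_tensor(one2, 0, 0, 0, 0))
# <pi_12 pi_12>: -1/2 for J = sigma_3, +1/2 for J = 1_2
print(pion_tensor(sigma3, 0, 1, 0, 1), pion_tensor(one2, 0, 1, 0, 1))
```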
Of course, non-perturbatively, the redefinition~(\ref{fieldredef}) is not allowed, but in (chiral) perturbation theory the only question is whether it leads to the same order-by-order diagrammatic expansion for any correlation function with a prescribed set of external pion legs. We will now prove that \begin{eqnarray} \label{lemma} &&\svev{\hat\p_{11}^{(1)}(x_1)\dots \hat\p_{11}^{(m)}(x_m)\hat\p_{12}^{(1)}(y_1)\dots\hat\p_{12}^{(n)}(y_n)}\\ &&\hspace{2cm}=i^n\svev{\p_{11}^{(1)}(x_1)\dots \p_{11}^{(m)}(x_m)\p_{12}^{(1)}(y_1)\dots\p_{12}^{(n)}(y_n)}\ , \nonumber \end{eqnarray} to all orders in chiral perturbation theory, for any $m$ and $n$. After the field redefinition, a vertex with $k$ $\p_{12}$ lines attached to it changes by a factor $i^k$ (note that $k$ is always even, so that taking $i$ or $-i$ does not matter). Also, for any diagram, the number of $\p_{12}$ external lines $n$, the number of $\p_{12}$ propagators $p$ and the number $v_k$ of vertices with $k$ $\p_{12}$ lines attached to it are related by \begin{equation} \label{12comb} 2p=n+\sum_k k v_k\ . \end{equation} Each $\p_{12}$ propagator flips its sign, and $p$ such propagators thus lead to a factor $(-1)^p=i^{2p}$. In addition, the diagram changes by a factor $i^{\sum_k k v_k}$ because of the $v_k$ vertices with $k$ $\p_{12}$ lines, and thus the diagram changes by a total factor $i^{2p+\sum_k k v_k}=i^n$, using Eq.~(\ref{12comb}) and the fact that all terms in the exponent are even (and, thus, $n$ is even as well). It follows that, for all diagrams, the field redefinition~(\ref{fieldredef}) indeed leads to the factor $i^n$ in Eq.~(\ref{lemma}), thus proving this result. Next, we discuss the general case of $N$ Dirac fermions in a real {\it irrep}, comparing the cases $J=J_S$, with $J_S$ in Eq.~(\ref{Js}), and $J={\bf 1}_{2N}$. The matrix $J_S$ can now be brought into a form in which $\s_3$ appears $N$ times along the diagonal.
The constraints on the pion field are now, in this basis, \begin{eqnarray} \label{constraintsgen} \p_{NN}&=&-\sum_{i=1}^{N-1}\p_{ii}\ ,\\ \p_{ij}&=&(-1)^{i+j}\p_{ji}\ .\nonumber \end{eqnarray} In addition, $\p_{ii}$ is real for all $i$, and $\p_{ij}=\p_{ji}^*$ for all $i\ne j$. A minus sign in the pion propagator $\svev{\p_{ij}(x)\p_{ij}(y)}$, {\it cf.\ } Eq.~(\ref{propexample}), occurs when $i$ is even and $j$ is odd, or the other way around, because $J_{ii}J_{jj}=-1$ only in this situation. Since minus signs in a field redefinition like Eq.~(\ref{fieldredef}) do not affect our arguments, we can choose \begin{equation} \label{fieldredefgen} \hat\p_{ij}=i^{i+j}\p_{ij}\ . \end{equation} Now let us consider a diagram with $p_{ij}$ $\hat\p_{ij}$ propagators, $n_{ij}$ $\hat\p_{ij}$ external lines, and $v_{k,ij}$ vertices with $k_{ij}$ $\hat\p_{ij}$ lines attached to it. Note that because of Eq.~(\ref{constraintsgen}) we can always take $i\le j$ (and $i\ne N$ if $i=j$, but this is not important). We have that \begin{equation} \label{combgen} 2p_{ij}=n_{ij}+\sum_{k_{ij}} k_{ij}v_{k,ij}\ . \end{equation} This relation implies that a correlation function with $n_{ij}$ external $\hat\p_{ij}$ lines equals $i^{-(i+j)n_{ij}}$ times the correlation function in terms of the unhatted meson field $\p_{ij}$, using that $i^{-2p_{ij}}=i^{2p_{ij}}$, and Eq.~(\ref{combgen}). The full correlation function changes by the product \begin{equation} \label{factor} \prod_{ij}i^{-(i+j)n_{ij}}=i^{-\sum_{ij}(i+j)n_{ij}}\ , \end{equation} where the product and sum are over all pairs $ij$ present in the correlation function. The sum in the exponent on the right-hand side of Eq.~(\ref{factor}) always has to be even, because every index has to appear an even number of times in the correlation function for it not to vanish. This means we can drop the minus sign in this exponent, and we thus find the desired result. Note that, unlike in the $N=1$ example, we do not always have that $n_{ij}$ is even. 
A simple counterexample is the correlation function $\svev{\p_{12}\p_{23}\p_{34}\p_{41}}$, which does not vanish, but has $n_{12}=n_{23}=n_{34}=n_{14}=1$. However, clearly, $(1+2)n_{12}+(2+3)n_{23}+(3+4)n_{34}+(1+4)n_{14}=20$ is even. A similar type of argument was used in Ref.~\cite{GSS} to show the equivalence of ``standard'' quenched chiral perturbation theory \cite{BG} with ``non-perturbatively correct'' quenched chiral perturbation theory. \section{\label{conc} Conclusion} In QCD-like theories it is well known that physical observables depend only on the effective vacuum angle $\th_{\rm eff}$, which is the difference between the explicit angle $\th$ multiplying the topological term in the gauge-field action, and the (properly normalized) $\textrm{U(1)}_A$ angle $\alpha$ of the fermion mass matrix. When $N$ Dirac fermions belong to a real {\it irrep}\ of the gauge group, the theory can be reformulated in terms of $2N$ Majorana fermions. The integration over a Majorana field yields a functional pfaffian. As we discussed in the introduction, the phase of this pfaffian appears to lead to a paradox: in certain cases, $\th_{\rm eff}$ changes by an odd multiple of $\p$ relative to its value in the initial Dirac theory. Tracing the origin of this phenomenon, we showed that, in the continuum, the phase of the functional pfaffian is in fact inherently ambiguous, as it depends on the choice of basis for the Majorana field. A partial solution is that, in the case of $2N$ Majorana fermions, one can fix the ambiguity by appealing to the corresponding theory of $N$ Dirac fermions in such a way that the apparent paradox is avoided. A non-perturbative lattice definition of Majorana fermions is free of the phase ambiguity: on any finite-volume lattice, the (real-{\it irrep}) Dirac operator becomes a finite-size matrix, and, moreover, the lattice automatically selects the coordinate basis to define the Dirac operator, and, hence, its pfaffian.
We reviewed the work of Ref.~\cite{SSt} who argued long ago that, if the Wilson term in the Wilson lattice action for Dirac fermions is rotated by a phase, that phase induces a topological term in the continuum limit. We observed that there is a subtlety with this argument associated with renormalization, which leads to a conjecture (first made in Ref.~\cite{JS}) on how to complete the argument of Ref.~\cite{SSt}, described in App.~\ref{SStconj}. We generalized this result to domain-wall fermions, where this subtlety does not arise, as well as to the case of Majorana fermions. This allowed us to unambiguously determine the effective vacuum angle, finding consistent results between the Dirac and Majorana formulations in all cases. As an application, we discussed how chiral perturbation theory reproduces the correct dependence on the explicit ($\th$) and effective ($\th_{\rm eff}$) vacuum angles. This behavior has long been known (even if maybe not widely known) for the effective theory for a gauge theory with Dirac fermions, but, to our knowledge, this is the first detailed study of this issue for the effective theory for a gauge theory with Majorana fermions. As such, our results fill in a lacuna in the discussion of Ref.~\cite{BL}, and resolve a question that was left open in Ref.~\cite{tworeps}. In particular, we considered the chiral expansion for $2N$ Majorana fermions in two cases that share $\th_{\rm eff}=0$, while the mass matrix is proportional to $J_S$ in one case, and to ${\bf 1}_{2N}$ in the other, giving a diagrammatic proof that all physical observables are equal in the two cases, as required by the common value of $\th_{\rm eff}$. \vspace{3ex} \noindent {\bf Acknowledgments} \vspace{2ex} \noindent We would like to thank Steve Sharpe for useful discussions. We would also like to thank Jan Smit for comments and discussion on the first version of this paper, which led to the addition of two new appendices. The work of MG is supported by the U.S.
Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-FG03-92ER40711. YS is supported by the Israel Science Foundation under grant no.~491/17.
\section{Introduction} Human matting is an important image editing task which enables accurate separation of humans from their backgrounds. It aims to estimate the per-pixel opacity of the foreground regions, making the extracted human image valuable in recomposition scenarios, including digital image and video production. One may regard this task as a semantic segmentation problem~\cite{badrinarayanan2017segnet,chen2017rethinking,long2015fully}, which achieves fine-grained inference for the enclosed objects. However, segmentation techniques focus on pixel-wise binary classification for scene understanding: although semantic information is well labelled, they cannot capture intricate details such as human hair. \begin{figure}[t] \centering \resizebox{1.00\linewidth}{!}{ \includegraphics{figures/motivation.pdf} } \caption{User-interactive methods can capture precise semantics and details under the guidance of trimaps. Without a trimap and a sufficiently large training dataset, one may get inaccurate semantic estimates, which inevitably lead to wrong matting results. Our method achieves comparable matting results by leveraging coarse annotated data, while requiring no trimap as input.} \label{fig: motivation} \end{figure} The matting problem can be formulated in a general manner. Given an input image $I$, matting is modeled as the weighted combination of foreground image $F$ and background image $B$ as follows~\cite{wang2008image}: \begin{equation} \label{eq:matting_def} I_{z} = \alpha_{z}F_{z} + (1 - \alpha_{z})B_{z}\,, \quad \alpha_{z} \in [0,1]\,, \end{equation} where $z$ represents any pixel in image $I$. The only known information in Eq.~\ref{eq:matting_def} is the three-dimensional RGB color $I_{z}$, while the RGB colors $F_{z}$ and $B_{z}$, and the alpha value $\alpha_{z}$, are unknown. Matting thus amounts to solving for 7 unknown variables from 3 known values, which is highly under-constrained.
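To make the counting concrete, a per-pixel sketch of Eq.~\ref{eq:matting_def} (our own illustration, not part of the proposed method): each pixel contributes 3 observed values ($I_z$) and 7 unknowns ($F_z$, $B_z$, $\alpha_z$).

```python
def composite(alpha, F, B):
    """Per-pixel composition of Eq. (eq:matting_def): I_z = alpha_z*F_z + (1-alpha_z)*B_z.
    alpha is the matte value in [0, 1]; F and B are RGB triples for the same pixel z."""
    assert 0.0 <= alpha <= 1.0
    return tuple(alpha * f + (1.0 - alpha) * b for f, b in zip(F, B))

# a half-transparent pixel: white foreground over a black background
print(composite(0.5, (1.0, 1.0, 1.0), (0.0, 0.0, 0.0)))  # (0.5, 0.5, 0.5)
```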
Therefore, most existing matting methods take a carefully specified trimap as a constraint to reduce the solution space. However, trimaps pose a dilemma between quality and efficiency. The key factor affecting the performance of a matting algorithm is the accuracy of the trimap. The trimap divides the image into three regions: the definite foreground, the definite background, and the unknown region. Intuitively, the smaller the unknown region around the foreground boundary, the fewer unknown variables need to be estimated, leading to a more precise alpha matte result. However, designing such an accurate trimap requires considerable human effort and is inefficient. The labeling quality should be consistent across all the data: unknown regions that are either too large or too small will degrade the final alpha matte. One possible solution to this dilemma is to adaptively learn a trimap from coarse to fine~\cite{shen2016deep,Cai_2019_ICCV}. In contrast, another solution discards the trimap from the input and employs it as an implicit constraint within a deep matting network~\cite{chen2018semantic,Zhang_2019_CVPR}. However, these methods still rely on the quality of the generated trimap, and are unable to retain both the semantic information and high quality details when the implicit trimap is inaccurate. Another limitation comes from the data for human matting. It is important to have high quality annotated data for the image matting task. Humans in natural images possess a variety of colors, poses, head positions, clothes, accessories, etc., and the semantically meaningful structures around the foreground, like human hair and fur, are the challenging regions for human matting. Annotating such accurate alpha mattes is labor intensive and requires great skill beyond that of normal users. Shen \etal~\cite{shen2016deep} proposed a human portrait dataset with 2000 images, but it imposes strict constraints on the position of the human upper body.
The widely used DIM dataset~\cite{xu2017deep} is limited in human data, with only 213 human images. Although Chen \etal~\cite{chen2018semantic} created a large human matting dataset, it is for commercial use only. Unfortunately, collecting the dataset in~\cite{chen2018semantic}, with 35,311 images, took more than 1,200 hours, which is undesirable in practice. Therefore, we argue for a solution that combines the limited fine annotated images with easily collected coarse annotated images for human matting. To address the aforementioned problems, we propose a novel framework that utilizes both coarse and fine annotated data for human matting. Our method can predict accurate alpha mattes with high quality details and sufficient semantic information, without a trimap as constraint, as shown in Figure~\ref{fig: motivation}. We achieve this goal by proposing a coupled pipeline with three subnetworks. The mask prediction network (MPN) aims to predict a low resolution coarse mask, which contains semantic human information. MPN is trained using both fine and coarse annotated data for better performance on various real images. However, the output of MPN may vary in quality and is not consistent across different input images. Therefore, a quality unification network (QUN), trained on hybrid annotated data, is introduced to rectify the quality of the MPN output to a unified level. A matting refinement network (MRN) is proposed to predict the final accurate alpha matte, taking both the original image and its unified coarse mask as input. Unlike MPN and QUN, the matting refinement network is trained using only the fine annotated data. We also constructed a hybrid annotated dataset for the human matting task. The dataset consists of both high quality (fine) annotated human images and low quality (coarse) annotated human images. We first collect 9526 image/alpha pairs with fine annotations.
In comparison with previous datasets, we diversify the distribution of human images with carefully annotated alpha mattes~\cite{shen2016deep,xu2017deep}, within a volume size that keeps the labeling effort rational~\cite{chen2018semantic}. We further collect 10597 coarse annotated images to better capture accurate semantics within our framework. We follow~\cite{xu2017deep} to composite both kinds of data onto 10 background images from MS COCO~\cite{lin2014microsoft} and Pascal VOC~\cite{everingham2010pascal} to form our dataset. Comprehensive experiments have been conducted on this dataset to demonstrate the effectiveness of our method, and our model is able to refine coarse annotated public datasets as well as the outputs of semantic segmentation methods, which further verifies the generalization ability of our method. The main contributions of this work are: \begin{itemize}[topsep=0.5pt, itemsep=0.5pt, partopsep=0.5pt] \item To our best knowledge, this is the first method that uses coarse annotated data to enhance the performance of end-to-end human matting. Previous methods either take a trimap as constraint or use sufficient fine annotated data only. \item We propose a quality unification network to rectify the mask quality during the training process so as to utilize both coarse and fine annotations, allowing accurate semantic information as well as structural details. \item The proposed method can be used to refine coarse annotated public datasets as well as the outputs of semantic segmentation methods, which makes it easy to create fine annotated data from coarse masks. \end{itemize} \begin{figure*}[ht] \centering \resizebox{1\linewidth}{!}{ \includegraphics{figures/flowchart_final.pdf} } \caption{An overview of our network architecture. The proposed method is composed of three parts. The first part is the mask prediction network (MPN), which predicts a low resolution coarse semantic mask and is trained using both coarse and fine data. The second part is the quality unification network (QUN).
QUN aims to rectify the quality of the output from the mask prediction network to the same level. The rectified coarse mask is thus unified, enabling consistent input for training the following alpha matte prediction stage. The third part is the matting refinement network (MRN), which takes the input image and the unified coarse mask as input to predict the final accurate alpha matte.} \label{fig: flowchart} \end{figure*} \section{Related Work} \noindent {\bf Natural Image Matting.} Natural image matting tries to estimate the alpha values in the unknown area of the trimap, given the known foreground and background. The traditional methods can be categorized into sampling based methods and affinity based methods~\cite{wang2008image}. The sampling based methods~\cite{chuang2001bayesian,feng2016cluster,gastal2010shared,he2011global,johnson2016sparse,karacan2015image,ruzon2000alpha} leverage the nearby known foreground and background colors to infer the alpha values of the pixels in the undefined area. These methods assume that the alpha values of two pixels are strongly correlated if the corresponding colors are similar. Following this assumption, various sampling methods have been proposed, including Bayesian matting~\cite{chuang2001bayesian}, sparse coding~\cite{feng2016cluster,johnson2016sparse}, global sampling~\cite{he2011global} and KL-divergence approaches~\cite{karacan2015image}. Compared with sampling based methods, affinity based methods~\cite{aksoy2018semantic,aksoy2017designing,bai2007geodesic,chen2013knn,grady2005random,levin2007closed,levin2008spectral,sun2004poisson} define different affinities between neighboring pixels, trying to model the matte gradient instead of the per-pixel alpha value. Deep learning based methods are able to learn a mapping between the image and the corresponding alpha matte in an end-to-end manner. Cho \etal~\cite{cho2016natural} take advantage of closed-form matting~\cite{levin2007closed} and KNN matting~\cite{chen2013knn} for alpha matte reconstruction.
Xu \etal~\cite{xu2017deep} integrate an encoder-decoder structure with a following refinement network to predict the alpha matte. Lutz \etal~\cite{lutz2018alphagan} further employ a generative adversarial network for the image matting task. Cai \etal~\cite{Cai_2019_ICCV} point out the limitation of directly estimating the alpha matte from a coarse trimap, and propose to disentangle matting into trimap adaptation and alpha estimation tasks. Compared with the above methods, our method simply uses RGB images as input, without the constraint of a designated trimap. \noindent {\bf Human Image Matting.} As a specific type of image matting, human matting aims to estimate the accurate alpha matte corresponding to the human in the input image, which involves semantically meaningful structures like hair. Recently, several deep learning based human matting methods~\cite{chen2018semantic,shen2016deep,zhu2017fast} have been proposed. Shen \etal~\cite{shen2016deep} propose a deep neural network to generate the trimap of a portrait image and add a matting layer~\cite{levin2007closed} for network optimization using the forward and backward propagation strategy. Zhu \etal~\cite{zhu2017fast} use a similar pipeline and design a light dense network for portrait segmentation and a feature block that learns the guided filter~\cite{he2010guided} for alpha matte prediction. Chen \etal~\cite{chen2018semantic} introduce an automatic human matting algorithm that requires no trimap; it combines a segmentation module with a matting module for end-to-end matting. The late fusion CNN structure in~\cite{Zhang_2019_CVPR} integrates foreground and background classification and demonstrates its capacity for human image matting. However, these models require carefully collected image/alpha pairs, and may also suffer from inaccurate semantics due to the lack of fine annotated human data. \section{Proposed Approach} We develop three subnetworks as a sequential pipeline.
The first one is the mask prediction network (MPN), which predicts coarse semantic masks using data at different annotation quality levels. The second one is the quality unification network (QUN). QUN rectifies the quality of the output coarse mask from MPN to the same level. The third one is the matting refinement network (MRN), which predicts the final accurate alpha matte. The flowchart and the network structure are displayed in Figure~\ref{fig: flowchart}. \subsection{Mask Prediction Network} As no trimap is required as input, the first stage of the proposed method predicts a coarse semantic mask. The network is an encoder-decoder structure with skip connections, and we predict the foreground mask and the background mask at the same time. At this stage we aim to estimate only a coarse mask, so the network does not need to be trained at high resolution. We resize all training data to resolution $192\times 160$ to train the mask prediction network (MPN) efficiently. In addition, MPN is trained using all training data, including both low quality and high quality annotated data. The loss function to train MPN is the $L_1$ loss, \begin{equation} \begin{aligned} \label{eq:l1_loss} \mathcal{L}_{MPN}=\lambda_L|\alpha^c_p-\alpha^c_g|_1+(1-\lambda_L)|\beta^c_p-\beta_g^c|_1\,, \end{aligned} \end{equation} where the output is a 2-channel mask, $\alpha^c_p$ denotes the first channel of the output, i.e., the predicted foreground mask, $\alpha^c_g$ denotes the ground truth foreground mask, $\beta^c_p$ denotes the second channel of the output, i.e., the predicted background mask, and $\beta_g^c$ denotes the ground truth background mask. We set $\lambda_L=0.5$ in experiments. \subsection{Quality Unification Network} Due to the high cost of annotating high quality matting data, we propose to use hybrid data from different sources. Some of the data is annotated at high quality, where even hairs are well separated from the background (Figure~\ref{fig:demo_mqun}(a)).
In contrast, the majority of the data is annotated at relatively low quality (Figure~\ref{fig:demo_mqun}(b)). The mask prediction network is trained with both finely and coarsely annotated data, so the quality of the predicted mask may vary significantly. As the alpha matte prediction network can only be trained on the high quality annotated data, the variation in coarse mask quality will inevitably lead to inconsistent matting results during inference. As illustrated in Figure~\ref{fig:mid_figure}(c), if the coarse mask is relatively accurate, the refinement network works well and outputs an accurate alpha matte. On the contrary, the refinement network fails if the coarse mask lacks important details. We propose to eliminate this data bias for training the matting refinement network by introducing a quality unification network (QUN). The quality unification network rectifies the output quality of the mask prediction network to the same level, by improving the quality of coarse masks and lowering the quality of fine masks simultaneously. The output of the mask prediction network and the original image are fed into the quality unification network to unify the quality level. The rectified coarse mask then provides consistent input for training the subsequent alpha matte prediction stage. The loss function for training QUN contains two parts, an identity loss and a consistency loss. The identity loss forces the output of QUN not to deviate much from the original input, \begin{equation} \begin{aligned} \label{identity_loss} \mathcal{L}_{identity}= |Q(x)-x|_1+|Q(x')-x'|_1\,, \end{aligned} \end{equation} where $Q(\cdot)$ represents the quality unification network, $x$ denotes the concatenation of the input image and the accurate mask, and $x'$ denotes the concatenation of the input image and the inaccurate mask. The second part is the consistency loss.
The consistency loss forces the outputs of QUN for the accurate and the inaccurate mask to be close, \begin{equation} \begin{aligned} \label{consist_loss} \mathcal{L}_{consist}=|Q(x)-Q(x')|_1\,. \end{aligned} \end{equation} The loss function for training QUN is then the weighted sum of the identity loss and the consistency loss, \begin{equation} \begin{aligned} \label{QUN_loss} \mathcal{L}_{QUN}= \lambda_1\mathcal{L}_{identity}+ \lambda_2\mathcal{L}_{consist}\,. \end{aligned} \end{equation} During training, we set $\lambda_1=0.25$ and $\lambda_2=0.5$. In Figure~\ref{fig:demo_mqun}, we illustrate the results of QUN. The fine mask (Figure~\ref{fig:demo_mqun}(a)) and the coarse mask (Figure~\ref{fig:demo_mqun}(b)) are unified by QUN into Figure~\ref{fig:demo_mqun}(d) and (e), respectively. The difference maps are also calculated. We can observe that the unified high quality mask becomes relatively coarser and the unified low quality mask becomes relatively finer. As a result, the unified masks are much closer to each other than the original fine and coarse masks. \begin{figure}[t] \centering \resizebox{0.9\linewidth}{!}{ \includegraphics{figures/demo_mqun.pdf} } \caption{Different qualities of masks are unified by QUN. (a) High quality mask. (b) Low quality mask. (c) Difference map of the high and low quality masks. (d) Unified result of the high quality mask by QUN. (e) Unified result of the low quality mask by QUN. (f) Difference map of the unified high quality mask and the low quality mask. (g) Difference map of the unified high quality mask and the original high quality mask. (h) Difference map of the unified low quality mask and the original low quality mask. (i) Input image. } \label{fig:demo_mqun} \end{figure} \subsection{Matting Refinement Network} The matting refinement network (MRN) aims to predict the accurate alpha matte. Therefore, we train MRN at a higher resolution ($768\times 640$ in all experiments). Note that the coarse mask from MPN and QUN is at low resolution ($192\times 160$).
The coarse mask is integrated into MRN as external input feature maps, at the stage where the input has been downscaled $4\times$ after several convolution operations. The output of MRN is a 4-channel map, including three foreground RGB channels and one alpha matte channel. Predicting the foreground RGB channels together with the alpha matte increases robustness, playing a similar role to the compositional loss used in~\cite{xu2017deep,chen2018semantic}. The loss function we use to train MRN is the $L_1$ loss, \begin{equation} \begin{aligned} \label{eq:hrn_loss} \mathcal{L}_{MRN}=\lambda_H|{RGB}_p-{RGB}_g|_1+(1-\lambda_H)|\alpha_p-\alpha_g|_1\,, \end{aligned} \end{equation} where ${RGB}_p$ and ${RGB}_g$ denote the predicted and ground truth RGB foreground channels respectively, and $\alpha_p$ and $\alpha_g$ denote the predicted and ground truth alpha matte respectively. We set $\lambda_H=0.5$ in experiments. \begin{figure}[t] \centering \resizebox{1.00\linewidth}{!}{ \includegraphics{figures/dataset.pdf} } \caption{Input images and the corresponding annotations in our dataset. Our dataset consists of both coarsely annotated images (a) and finely annotated images (b).} \label{fig: dataset} \end{figure} \subsection{Implementation details} We implement our method with the TensorFlow~\cite{abadi2016tensorflow} framework. We train our three networks sequentially. Before feeding images into the mask prediction network, we down-sample them to $192\times160$ resolution, for both finely and coarsely annotated data. Flipping is performed randomly on each training pair. We first train the mask prediction network for 20 epochs and fix its parameters. Then we concatenate the low resolution image and the output foreground mask as input to train the quality unification network.
When training QUN, random filters (filter size 3 or 5), binarization, and morphological operations (dilation and erosion) are applied to the finely annotated data to generate paired high and low quality mask data. After training the quality unification network, all its parameters are fixed. We finally train the matting refinement network with only the finely annotated data. The data pairs (image, alpha matte) are randomly cropped to $768\times640$. The learning rate for training all networks is $10^{-3}$. MPN and QUN are trained with batch size $16$ and MRN with batch size $1$, as MRN is trained using only high resolution data. At test time, a single feed-forward pass of our pipeline outputs the alpha matte prediction with only the image as input. The average testing time on multiple 800$\times$800 images is 0.08 seconds. \begin{table}[t] \centering\scriptsize \caption{The configurations of human matting datasets.} \resizebox{1\linewidth}{!}{ \begin{tabular}{ccccc} \hline \multirow{2}{*}{Dataset} & \multicolumn{2}{c}{Train Set} & \multicolumn{2}{c}{Test Set} \\ & Human & image & Human & image \\ \hline Shen \etal~\cite{shen2016deep} & 1700 & 1700 & 300 & 300 \\ TrimapDIM~\cite{xu2017deep} & 202 & 20200 & 11 & 220 \\ SHM~\cite{chen2018semantic} & 34493 & 34493 & 1020 & 1020 \\ Ours(coarse) & 10597 & 105970 & \multirow{2}{*}{125 (+11)} & \multirow{2}{*}{1360} \\ Ours(fine) & 9324(+202) & 95260 & & \\ \hline \end{tabular}} \label{tab:dataset} \end{table} \section{Human matting dataset} A main challenge for human matting is the lack of data. Xu \etal~\cite{xu2017deep} proposed a general matting dataset by compositing foreground objects from natural images onto different backgrounds, which has been widely used in subsequent matting works~\cite{Cai_2019_ICCV,lutz2018alphagan,Zhang_2019_CVPR}. However, the diversity of human images is severely limited, with only 202 human images in the training set and 11 in the testing set.
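The operations used in the implementation details above to synthesize paired high/low quality masks for training QUN (random filters of size 3 or 5, binarization, and dilation/erosion) can be sketched as follows. The order of operations, the $0.5$ binarization threshold, and the helper name \texttt{degrade\_mask} are assumptions for illustration only:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def degrade_mask(fine, rng):
    """Synthesize a low-quality mask from a fine alpha mask (sketch).

    Applies a random-size box filter, binarization, and a random
    morphological operation, mirroring the operations described in the
    text; order and threshold are assumptions.
    """
    k = int(rng.choice([3, 5]))          # random filter size: 3 or 5
    pad = k // 2
    win = sliding_window_view(np.pad(fine, pad, mode="edge"), (k, k))
    m = win.mean(axis=(-2, -1))          # box (smoothing) filter
    m = (m > 0.5).astype(np.float32)     # binarization
    win2 = sliding_window_view(np.pad(m, pad, mode="edge"), (k, k))
    if rng.random() < 0.5:
        m = win2.max(axis=(-2, -1))      # dilation
    else:
        m = win2.min(axis=(-2, -1))      # erosion
    return m

rng = np.random.default_rng(0)
fine = np.zeros((32, 32), dtype=np.float32)
fine[8:24, 8:24] = 1.0                   # toy "fine" alpha mask
coarse = degrade_mask(fine, rng)         # paired (fine, coarse) sample
```

Each finely annotated mask thus yields a (fine, degraded) training pair for QUN.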
For human matting, Shen \etal~\cite{shen2016deep} collected a portrait dataset with 2000 images. It assumes that the upper body appears at similar positions across human images, and the images are annotated by the closed-form~\cite{levin2007closed} and KNN~\cite{chen2013knn} methods, which can be inevitably biased. Although a large human fashion dataset was created for matting by \cite{chen2018semantic}, it is for commercial use only. To this end, we create a high-quality human matting dataset for research. We carefully collected 9449 diverse human images with simple backgrounds from the Internet (i.e., white or transparent backgrounds in PNG format); after simple processing, each human image yields a well annotated alpha matte. The human images are split into training and testing sets, with 9324 and 125 images respectively. Following Xu \etal~\cite{xu2017deep}, we first add the human images of the DIM dataset~\cite{xu2017deep} into our training/testing sets, forming a total of 9526 and 136 human foregrounds respectively. We then randomly sample 10 background images from MS COCO~\cite{lin2014microsoft} and Pascal VOC~\cite{everingham2010pascal} for each foreground and composite the human images onto these backgrounds. During composition, we ensure that the background images do not contain humans. \begin{figure*}[t!] \centering \resizebox{1.00\linewidth}{!}{ \includegraphics{figures/quality_cmp.pdf} } \caption{The qualitative comparison on our proposed dataset. The first column and the last column show the input image and the ground truth alpha matte, and the remaining columns present the estimation results of DeepLab~\cite{chen2017rethinking}, Closed-form matting~\cite{levin2007closed}, DIM~\cite{xu2017deep}, SHM~\cite{chen2018semantic}, our method trained using finely annotated data only, and our method trained using hybrid annotated data.} \label{fig:quality_cmp} \end{figure*} Another issue that should be addressed for a human matting dataset is the quality of the annotations.
The image matting task requires user designated annotations for objects, i.e., high quality alpha mattes. Moreover, user interactive methods require carefully prepared trimaps or scribbles as constraints, which is labor intensive and less scalable. Methods without user provided trimaps predict the alpha matte by first generating implicit trimaps for guidance, which leads to artifacts and loses some semantics for complex structures. We integrate coarsely annotated data to tackle this problem, as such data are much easier to obtain. We collect another 10597 human images from~\cite{wu2014early} and the Supervisely Person Dataset, and follow the above setup to generate 105970 images with coarse annotations. Table~\ref{tab:dataset} shows the configurations of existing human matting datasets. Our dataset consists of both finely and coarsely annotated data, in nearly equal amounts. Compared with user interactive methods~\cite{shen2016deep,xu2017deep}, our dataset covers diverse high quality human images, making models trained on it more robust for human matting. Although we sacrifice the number of high quality annotations compared with the automatic method~\cite{chen2018semantic}, we introduce coarsely annotated data to enhance the capacity for extracting both semantic and matting details at a lower cost. Examples of both annotation types are shown in Figure~\ref{fig: dataset}. \section{Experiments} \label{sec: Experiments} \subsection{Evaluation results.} \paragraph{Evaluation metrics.} We adopt four widely used metrics for matting evaluation, following previous works~\cite{xu2017deep,chen2018semantic}: MSE (mean squared error), SAD (sum of absolute differences), the gradient error, and the connectivity error. The gradient and connectivity errors proposed in~\cite{rhemann2009perceptually} reflect human perception of the visual quality of the alpha matte. Lower values of these metrics correspond to better estimated alpha mattes.
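Two of these metrics, MSE and SAD, admit a compact sketch; the gradient and connectivity errors~\cite{rhemann2009perceptually} involve more machinery and are omitted. Normalizing the mattes to $[0,1]$ and averaging over the entire image by pixel count follows the evaluation protocol of this section; the function name is illustrative:

```python
import numpy as np

def matting_metrics(pred, gt):
    """Pixel-averaged MSE and SAD between alpha mattes in [0, 1],
    computed over the entire image (no trimap restriction)."""
    pred = pred.astype(np.float64)
    gt = gt.astype(np.float64)
    n = pred.size
    mse = np.sum((pred - gt) ** 2) / n       # mean squared error
    sad = np.sum(np.abs(pred - gt)) / n      # SAD averaged by pixel count
    return mse, sad

pred = np.array([[0.0, 0.5], [1.0, 1.0]])
gt   = np.array([[0.0, 1.0], [1.0, 1.0]])
mse, sad = matting_metrics(pred, gt)         # mse = 0.0625, sad = 0.125
```

Note that SAD is classically reported as a plain sum; dividing by the pixel count matches the per-pixel averaging used in the evaluation here.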
We normalize the estimated alpha matte and the ground truth alpha matte to $[0, 1]$ to calculate these evaluation metrics. Since no trimap is required, we compute the metrics over the entire image and average by the number of pixels. \paragraph{Baselines.} We select the most typical method from semantic segmentation methods, traditional matting methods, user interactive methods, and automatic methods respectively as our baselines. These methods are DeepLab~\cite{chen2017rethinking}, Closed-form matting~\cite{levin2007closed}, DIM~\cite{xu2017deep} and SHM~\cite{chen2018semantic}. Note that Closed-form matting and DIM need an extra trimap as input. DIM and SHM can only be trained using the finely annotated data. DeepLab and the proposed method are trained using the proposed hybrid annotated dataset. \begin{table}[t] \centering\scriptsize \caption{The quantitative results.} \resizebox{1\linewidth}{!}{ \begin{tabular}{lcccc} \hline Method & SAD & MSE & Gradient & Connectivity \\ \hline DeepLab ~\cite{chen2017rethinking} & 0.028 & 0.023 & 0.012 & 0.028 \\ Trimap+CF ~\cite{levin2007closed} & 0.0083 & 0.0049 & 0.0035 & 0.080 \\ Trimap+DIM~\cite{xu2017deep} & \textbf{0.0045} & \textbf{0.0017} & \textbf{0.0013} & \textbf{0.0043} \\ SHM~\cite{chen2018semantic} & 0.011 & 0.0078 & 0.0032 & 0.011 \\ \hline ours(w/o coarse data) & 0.0099 & 0.0067 & 0.0029 & 0.0095 \\ ours(w/o QUN) & 0.0076 & 0.0042 & 0.0024 & 0.0072 \\ \hline ours & \textbf{0.0058} & \textbf{0.0026} & \textbf{0.0016} & \textbf{0.0054} \\ \hline \end{tabular}} \label{tab:quantitative} \end{table} \paragraph{Performance comparison.} In Table~\ref{tab:quantitative}, we list the quantitative results over the 1360 testing images. The semantic segmentation method DeepLab~\cite{chen2017rethinking} only predicts coarse masks and lacks fine details (Figure~\ref{fig:quality_cmp}(b)), resulting in the worst quantitative metrics.
SHM~\cite{chen2018semantic} does not perform well, as the volume of our high quality training data is limited, and it fails to predict accurate semantic information for some images (Figure~\ref{fig:quality_cmp}(d)). In contrast, the interactive methods Closed-form matting~\cite{levin2007closed} and DIM~\cite{xu2017deep} perform well, benefiting from the semantic information provided by the input trimaps; these two methods only need to estimate the uncertain region of the trimap. The proposed method using the hybrid training dataset outperforms most methods and is comparable with state-of-the-art methods. DIM~\cite{xu2017deep} is slightly better than the proposed method. Note, however, that the proposed method only takes images as input, whereas DIM requires highly informative trimaps as extra input. Even so, the visual quality of the proposed method (Figure~\ref{fig:quality_cmp}(g)) and DIM (Figure~\ref{fig:quality_cmp}(d)) looks very close. \begin{figure}[ht] \centering \resizebox{1.00\linewidth}{!}{ \includegraphics{figures/mid_figure.pdf} } \caption{Self-comparisons. Without the quality unification network (QUN), the quality of the coarse mask sent to the matting refinement network (MRN) may vary significantly. When the coarse mask is relatively accurate, MRN predicts the alpha matte well. When the coarse mask lacks most hair details, the estimated alpha matte is inaccurate. Equipped with QUN, the mask quality is unified before feeding into MRN. The estimated alpha matte is more consistent against different kinds of coarse masks.} \label{fig:mid_figure} \end{figure} \begin{figure} \centering \resizebox{1.00\linewidth}{!}{ \includegraphics{figures/real_image.pdf} } \caption{Real image matting results.
The collected coarsely annotated data enrich our dataset significantly and enable the proposed method to capture semantic information well and predict accurate alpha mattes for different kinds of input images.} \label{fig:real_image} \end{figure} \paragraph{Self-comparisons.} Our method achieves high quality alpha matte estimation by incorporating coarsely annotated human data, which help the network estimate semantic information accurately. To verify the importance of these data, we separately train the same network with the finely annotated dataset only. The quantitative results are listed in Table~\ref{tab:quantitative}. Without the coarse data, the performance is clearly worse. From Figure~\ref{fig:quality_cmp}(f) and (g), we can also observe that the method trained only with finely annotated data suffers from inaccurate semantic estimation and produces incomplete alpha mattes. The mask quality unification network makes it possible for the final matting refinement network to adapt to different kinds of coarse mask input. Without QUN, the inputs to the matting refinement network may vary significantly, which is hard to handle at the inference stage. We list the quantitative metrics without QUN in Table~\ref{tab:quantitative}; both the finely and coarsely annotated datasets are used in this comparison. The results are clearly worse when QUN is removed. For a better visual comparison, we display the results in Figure~\ref{fig:mid_figure}. The predicted alpha matte is good if the coarse mask is relatively accurate; when the coarse mask lacks most hair details, the estimated alpha matte is poor. With QUN, the mask quality is unified before feeding into MRN, and the estimated alpha matte is more accurate and robust to different kinds of coarse masks. \begin{figure}[t!]
\centering \resizebox{1.00\linewidth}{!}{ \includegraphics{figures/application2.pdf} } \caption{Using the proposed method to refine coarse human masks from public dataset annotations or semantic segmentation methods. We feed the coarse human mask from the Pascal (b) or COCO (e) dataset annotations, or from DeepLab (h), to our quality unification network, and then use the matting refinement network to generate an accurate human alpha matte.} \label{fig:application} \end{figure} \subsection{Applying to real images} We further apply the proposed method to real images from the Internet. Matting on real images is challenging, as the foreground is smoothly fused with the background. In Figure~\ref{fig:real_image}, we display our testing results on real images. Benefiting from sufficient training on our hybrid dataset, the proposed method captures the semantic information very well for different kinds of input images and predicts accurate alpha mattes at a detailed level. \section{Applications} The mask prediction network in the proposed method aims to capture the coarse semantic information required by the subsequent networks. The semantic mask from this network can be coarse or accurate, and the subsequent quality unification network will unify the mask quality for the final matting refinement network. Therefore, if the semantic mask is obtained in some other way, the proposed method still works seamlessly and generates an accurate alpha matte. Thus we can apply our framework to refine coarsely annotated public datasets, such as PASCAL~\cite{pascal-voc-2007} (Figure~\ref{fig:application}(a-c)) and COCO~\cite{lin2014microsoft} (Figure~\ref{fig:application}(d-f)). The annotated human masks are resized and used as input to our QUN and MRN. Even though the annotations are not accurate, especially those from the COCO dataset, the proposed method manages to generate accurate refinement results.
We can also use the proposed method to refine the outputs of semantic segmentation methods (Figure~\ref{fig:application}(g-i)). Semantic segmentation methods are usually trained on coarsely annotated public datasets, so their output masks are not precise. We feed the coarse mask obtained from DeepLab~\cite{chen2017rethinking} to our QUN and MRN. The proposed method generates surprisingly good alpha mattes: details missing from the coarse mask are well recovered, even for the very detailed hair parts. \section{Conclusion} \label{sec: Conclusion} In this paper, we propose to use coarsely annotated data coupled with finely annotated data to enhance the performance of end-to-end semantic human matting. We use MPN to estimate coarse semantic masks on the hybrid annotated dataset, and then use QUN to unify the quality of the coarse masks. The unified mask and the input image are fed into MRN to predict the final alpha matte. The collected coarsely annotated data enrich our dataset significantly and make it possible to generate high quality alpha mattes for real images. Experimental results show that the proposed method performs comparably with state-of-the-art methods. In addition, the proposed method can be used to refine coarsely annotated public datasets as well as the outputs of semantic segmentation methods, potentially offering a new way to annotate high quality human data with much less effort. \clearpage {\small \bibliographystyle{ieee_fullname}
\section{\label{sec:level1}Introduction} There has been a great deal of progress on the development of algorithms for quantum computing in recent years. Phase estimation is an important step in many quantum computing algorithms\cite{kitaev1995quantum, berry2000optimal, kitaev2002classical, o2009iterative, svore2013faster, paesani2017experimental, o2019quantum}. More recently there has been a push to move beyond simple phase estimation and instead learn more concrete models for states and processes. This necessitates learning in higher-dimensional spaces and, in turn, bringing more sophisticated inference procedures to bear on the problem of phase estimation in the presence of device imperfections, and more broadly on techniques like quantum Hamiltonian learning \cite{granade2012robust, wang2017experimental, sergeevich2011characterization, wiebe2014quantum, wiebe2014hamiltonian}. Bayesian methods have a long history of providing a logically consistent framework for solving such problems. The two biggest benefits of Bayesian methods are that adaptive inference protocols are straightforward to design in such frameworks, and that the posterior distribution naturally quantifies the uncertainty in the inference. The central challenge, however, is that direct application of Bayesian reasoning is exponentially expensive (owing to the curse of dimensionality). The two most common ways to circumvent this problem are Monte-Carlo approximations through sequential Monte-Carlo (also known as particle filter methods), and assumed density filtering, which renders the problem efficient by making assumptions about the form of the prior and posterior distributions. The approximations that underlie these approaches will generically fail at some point in the inference procedure. The most natural way of seeing this is from the fact that exact Bayesian inference is {\NP}-hard~\cite{cooper1990computational}.
This means that unless $\BPP=\NP$, we cannot expect that efficient Bayesian inference is possible on probabilistic classical computers. Regardless, there is a rich history of developing approximate learning methods that exploit structure in the data to learn efficiently for many classes of problems. Because of these complexity theoretic limitations, however, care must be taken to ensure that the assumptions that underlie each approach (such as Gaussianity or unimodality of the posterior) are met. Developing robust and cost effective methods that can deal with such problems therefore becomes critical if we intend to move beyond learning algorithms that are hand tuned by experts and towards automating such learning procedures. In this paper we propose a new approach to phase estimation in the presence of noise, and in turn to Hamiltonian learning, that combines existing particle filter approaches with adaptive grid methods for approximating the posterior probability density. We find numerically that these methods can be highly efficient and, further, do not suffer from the multi-modality issues that popular methods such as Liu-West resampling face. \subsubsection{Bayesian Phase Estimation} Bayesian phase estimation is an approach to iterative phase estimation wherein the knowledge about the eigenphases of a unitary operation is represented on a classical computer via a prior distribution over the unknown phase. In particular, let us assume that we have a unitary of the form $e^{-iH t}$ such that for an initial eigenstate, $e^{-iHt} \ket{\psi} = e^{-i\omega t}\ket{\psi}$. This holds without loss of generality for any unitary; however, for simplicity we assume here that $t\in\mathbb{R}$ rather than the discrete case where $t\in \mathbb{Z}$ (unless fractional queries are used~\cite{sheridan2009approximating,berry2012gate,gilyen2019quantum}). Bayesian phase estimation has a number of advantages.
First, it can easily be made adaptive, since it manifestly tracks the current uncertainty and gives the user ways of estimating the most informative experiments to perform given the current knowledge. Second, it provides a well motivated estimate of the uncertainty of the phase estimation procedure. Finally, the framework can be easily extended to allow inference of other parameters that may impact the estimate of the eigenvalue $\omega$, such as $T_2$ times or over-rotations. The central drawback of Bayesian inference is that the current state of knowledge is exponentially expensive to store and manipulate, which necessitates the use of approximate forms of Bayesian inference. In order to understand these tradeoffs it is necessary to briefly review the formalism of Bayesian inference. The central object behind Bayesian inference is the prior distribution. The prior distribution for the unknown phase $\omega$ is a probability density function $P(\omega)$ such that $\int_{\omega'}^{\omega'+\Delta} P(\omega) \mathrm{d}\omega$ gives the probability of the unknown phase being in the range $[\omega',\omega'+\Delta]$. The next object in Bayesian inference is the likelihood function, which gives the probability that a given outcome would be observed given a set of hypothetical parameters. For ideal Bayesian phase estimation the parameters that dictate an experiment are the evolution time $t$ and (optionally) an inversion phase $x$. The data returned is a measurement outcome on a qubit, which is either zero or one. The likelihood of measuring zero or one is \begin{align} \label{eq:likelihood1} P(0|\omega;t,x) &= \cos^2((\omega -x) t/2), \nonumber\\ P(1|\omega;t,x) &= \sin^2((\omega -x) t/2). \end{align} Once a measurement outcome is observed, Bayesian inference allows the posterior distribution over the unknown phase to be computed given the experimental result.
The expression for the updated distribution is \begin{equation} P(\omega|d;t,x) = \frac{P(\omega) P(d|\omega;t,x) }{\int P(\omega) P(d|\omega;t,x) \mathrm{d}\omega }. \end{equation} The optimal values of $t$ and $x$ can be found by optimizing the Bayes risk of an experiment, and in practice a good estimate of the optimal experiment can be found quickly using the particle guess heuristic (PGH)~\cite{wiebe2016efficient}. This gives a rule for updating a distribution, but provides little guidance about how to choose the initial distribution over the parameters. In practice, it is common to choose a uniform distribution, but any prior knowledge about the unknown phase can (and should) be reflected in the choice of distribution. Bayesian inference can be extended beyond this idealized setting to accommodate models of noise in the system. This is addressed by changing the likelihood function and, if necessary, the dimension of the prior distribution. In particular, let us assume that the system is subject to decoherence and that the decoherence time $T_2$ is not known perfectly. This can be addressed by switching to a two-dimensional prior distribution $P(\omega,T_2)$. The likelihood function in this case can be written as \begin{equation} P(d|\omega,T_2;t,x) = e^{-t/T_2} P(d|\omega;t,x) + \frac{(1-e^{-t/T_2})}{2}. \end{equation} In this case, it is important to note that the prior distribution has two parameters and as such requires quadratically more grid points to represent than the one-dimensional case. In general, if we have $D$ parameters in the prior, then the resulting distribution requires a number of points that grows as the $D$th power. This is to say that exact Bayesian inference requires a number of points that grows exponentially with the number of parameters learned, and this curse of dimensionality can only be overcome through the use of approximate methods.
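As a concrete illustration, the Bayes update above with the decohering likelihood can be sketched on a discrete $(\omega, T_2)$ grid. The grid sizes, ranges, and the simulated outcome are arbitrary choices for the example:

```python
import numpy as np

def likelihood(d, omega, T2, t, x):
    """P(d | omega, T2; t, x) with exponential decoherence,
    following the likelihood model in the text."""
    p0_ideal = np.cos((omega - x) * t / 2) ** 2
    p0 = np.exp(-t / T2) * p0_ideal + (1 - np.exp(-t / T2)) / 2
    return p0 if d == 0 else 1 - p0

# Two-dimensional grid over (omega, T2) with a uniform prior.
omegas = np.linspace(0, 2 * np.pi, 200)
T2s = np.linspace(1.0, 100.0, 50)
W, T = np.meshgrid(omegas, T2s, indexing="ij")
prior = np.ones_like(W) / W.size

# Bayes update for a single simulated outcome d = 0 at settings (t, x).
t, x = 1.0, 0.0
post = prior * likelihood(0, W, T, t, x)
post /= post.sum()   # discrete analogue of the normalizing integral
```

The quadratic blow-up in grid size with two parameters (and the $D$th-power growth in general) is visible directly in the $200\times 50$ array.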
\section{\label{sec:smc}Approximate Bayesian inference} Sequential Monte Carlo methods are often used for Bayesian inference and phase estimation~\cite{wiebe2016efficient}. The idea behind these methods is to approximate the probability density as a sum of Dirac delta functions. In particular, given an initial prior probability density $P(\omega)$, we wish to find a set of $\omega_j$ and $w_j$ that approximate $P(\omega)$ such that \begin{equation} P(\omega) \approx \sum_{j=1}^{N_{\rm part}} w_j\delta(\omega -\omega_j). \end{equation} Here the notion of approximation that is meant is that there exist some length $L$ and some $\delta$ such that \begin{equation} \max_{\omega'}\left|\int_{\omega' -L/2}^{\omega'+L/2} P(\omega) \mathrm{d} \omega - \sum_{j: \omega_j \in [\omega'-L/2,\omega'+L/2]} w_j\right| \le \delta.\label{eq:approx} \end{equation} These delta functions are traditionally called particles. If we take the $\omega_j$ to be sampled from the original probability distribution and assign uniform weights $w_j$ to each particle, then $\delta \in O(1/\sqrt{N})$, and in this sense the quality of the Monte-Carlo approximation to the prior improves as the number of particles increases. In this discrete representation, Bayes' rule for updating the prior distribution based on the observed result is straightforward. Let us assume that we observe outcome $d\in\{0,1\}$, which are the two measurements that we could observe in an iterative phase estimation algorithm. If we define the likelihood to be $P(d|\omega_j;t,x)$, then Bayes' rule can be applied to update the particle weights $w_j$ based on the measurement result via \begin{equation} w_j \mapsto \frac{w_j P(d|\omega_j; t,x)}{\sum_j w_j P(d|\omega_j ;t,x)}. \end{equation} A key observation is that Bayesian inference does not cause the locations of the particles, i.e. the $\omega_j$, to move; it only causes the weights to change.
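The particle weight update above can be sketched as follows, using the ideal likelihood of Eq.~\eqref{eq:likelihood1}; the particle count and experiment settings are arbitrary choices for the example:

```python
import numpy as np

def smc_update(omegas, weights, d, t, x):
    """One Bayes update of SMC particle weights.

    The particle locations omegas stay fixed; only the weights change,
    and they are renormalized to sum to one."""
    p0 = np.cos((omegas - x) * t / 2) ** 2   # P(0 | omega; t, x)
    like = p0 if d == 0 else 1 - p0
    w = weights * like
    return w / w.sum()

rng = np.random.default_rng(1)
omegas = rng.uniform(0, 2 * np.pi, 1000)     # particles from a uniform prior
weights = np.full(1000, 1.0 / 1000)          # uniform initial weights
weights = smc_update(omegas, weights, d=0, t=1.0, x=0.0)
```

Repeated updates of this form concentrate weight on ever fewer particles, which is exactly the degeneracy that resampling is meant to repair.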
This can prove problematic as such algorithms are repeated, because the weights tend to decrease. Resampling is a standard approach that can be used to address these problems. The idea behind resampling is to draw a new ensemble of $\omega_j$ with $w_j = 1/N$ such that the new ensemble of particles models the resampled probability distribution well. The method that has become commonplace for quantum applications is Liu-West resampling. This scheme is given below. \begin{algorithm}[H] \label{alg:LW} \SetAlgoLined Input $(\omega_i, w_i), \quad i \in \{1, \cdots, N\}$ \\ Resampling parameter $a$ \\ Output: updated particles and weights $(\omega'_i, w'_i), \quad i \in \{1, \cdots, N\}$ \\ Liu-West($(\omega_i, w_i), a$) \\ mean $\mu_{\omega} = \sum_i \omega_i w_i$ \\ $h = \sqrt{1-a^2}$ \\ covariance $\Sigma_{\omega} = h^2 \left(\sum_i \omega_i^2 w_i - \mu_{\omega}^2\right)$ \\ for $i = 1 \cdots N,$\\ \quad draw $j$ with probability $w_j$\\ \quad $\mu_{\omega_i} = a \omega_j + (1-a) \mu_{\omega} $ \\ \quad draw $\omega'_i$ from $\mathcal{N}(\mu_{\omega_i}, \Sigma_{\omega})$ \\ \quad $w'_i = 1/N$ \\ return ($(\omega'_i, w'_i)$) \caption{Liu-West resampling algorithm} \end{algorithm} The idea behind it is that the algorithm draws a new ensemble of particles such that the posterior mean and covariance are approximately preserved. This choice is motivated, in part, by the fact that the posterior mean is the estimate that minimizes the expected square error in the estimated parameters. Preserving this quantity and the covariance matrix means that both this estimate and the uncertainty in this estimate are preserved in the resampling algorithm. A major challenge, however, is that the posterior mean is the optimal estimator for the true parameters only when one is interested in the mean-square error in the inferred parameters.
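Before turning to these failure modes, the resampling step itself can be sketched compactly for a single scalar parameter; `a` is the resampling parameter, with kernel variance $h^2\Sigma_\omega = (1-a^2)\Sigma_\omega$ as in Algorithm~\ref{alg:LW}.

```python
import numpy as np

def liu_west_resample(omegas, weights, a=0.98, rng=None):
    """Draw a new, equally weighted ensemble whose mean and covariance
    approximately match those of the weighted posterior ensemble."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(omegas)
    mu = np.sum(weights * omegas)                    # posterior mean
    var = np.sum(weights * omegas ** 2) - mu ** 2    # posterior variance
    h2 = 1.0 - a ** 2                                # kernel variance factor
    parents = rng.choice(n, size=n, p=weights)       # draw parents ~ weights
    means = a * omegas[parents] + (1.0 - a) * mu     # shrink toward the mean
    return rng.normal(means, np.sqrt(h2 * var)), np.full(n, 1.0 / n)
```

The shrinkage toward the mean compensates for the variance added by the Gaussian kernel, so the first two posterior moments are approximately preserved.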
In cases such as phase estimation, where there can be degeneracies between positive and negative frequencies, the mean-square error in the frequency does not adequately reflect the estimate that minimizes the expected error in the resultant model's predictions. This can cause the resampler to fail spectacularly. Our aim is to improve on this by using a new technique called adaptive grid refinement. \subsection{\label{sec:grid}Adaptive grid refinement} In this section we introduce an adaptive grid refinement strategy for Bayesian phase estimation. The idea we employ borrows from the spirit of particle filtering. Specifically, the essential information about the posterior distribution that needs to be maintained is the probability density. However, unlike resampling methods such as Liu-West resampling, we do not assume that the salient information about the posterior distribution is encoded in its low-order moments. Instead, our goal is to adaptively build a discrete mesh that supports the probability distribution. This approach, unlike Liu-West resampling, suffers from the curse of dimensionality. Our central innovation is to show that the two methods can be used simultaneously, thereby allowing parameters that are hard to estimate with particle filters to be learned using an adaptive mesh. The aim of the mesh construction algorithm is to divide a region in parameter space into a set of boxes $C_i \subset \mathbb{R}^{D}$ such that if $x_i$ is the centroid of the box $C_i$ with volume $V_i$ then $\int_{C_i} P(x) \mathrm{d}x \approx V_i P(x_i)$. That is to say, we identify that the grid is in need of refinement when the midpoint rule for integration fails over any box in the mesh. We then resample the distribution by halving the box along each dimension, resulting in $2^D$ new sub-boxes.
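The $2^D$-way split of a box can be written down compactly. The helper below is purely illustrative (it is not part of the algorithms stated later): it returns the centroids of the sub-boxes produced by halving every dimension of a box.

```python
import itertools
import numpy as np

def split_box(center, lengths):
    """Split a D-dimensional box into 2^D sub-boxes by halving each
    dimension; returns the 2^D sub-box centroids and new side lengths."""
    center = np.asarray(center, dtype=float)
    lengths = np.asarray(lengths, dtype=float)
    # each sub-box centroid sits a quarter side-length from the old center
    offsets = np.array(list(itertools.product((-0.25, 0.25),
                                              repeat=len(center))))
    return center + offsets * lengths, lengths / 2.0
```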
In general, more advanced quadrature methods could be deployed, such as Richardson extrapolations or Runge-Kutta methods. Such approaches have been shown to have value for solutions of partial differential equations~\cite{huang2010adaptive}; here, however, we focus our explorations on refining the precision of the mesh rather than the underlying quadrature formula. \subsubsection{The midpoint rule} The simplest possible case of using an adaptive mesh to perform approximate Bayesian inference involves using the midpoint rule in one dimension to form a grid that approximates the probability distribution. One of the benefits of using the midpoint rule here, apart from its simplicity, is that it comes with an explicit error bound. We use this error bound to determine when the mesh needs to be subdivided. The midpoint-rule approximation of $M = \int_a^b f(t)\, \mathrm{d}t$ with $n$ equal grid intervals is \begin{equation} M_n = \frac{b-a}{n} (f(m_1) + f(m_2) + \cdots + f(m_n)), \end{equation} where \begin{equation} m_k = \frac{t_k+t_{k+1}}{2} = a + \frac{2k-1}{2n}(b-a). \end{equation} If $-K_2 \leq f'' \leq K_2$ on the interval $[a,b]$, then the error $E_n(M)=|M_n-\int_a^b f(t)\, \mathrm{d}t|$ is bounded by \begin{equation} \label{eq:error_bound} E_n \leq \frac{K_2(b-a)^3}{24n^2}. \end{equation} If we therefore set $n=1$, then our aim when choosing the segments is to ensure that the average error per unit length of each box is constant. That is to say, \begin{equation} \frac{E_1}{b-a} \le \frac{K_2 (b-a)^2}{24}.\label{eq:errdens} \end{equation} Therefore, ignoring the irrelevant factor of $24$, an appropriate rule for refining the grid in such a way that the error density of the probability distribution within each box is at most $\epsilon$ is to choose the segment lengths $l_i$ such that $f''(x_i) l_i^2 \le \epsilon$ for each interval $i$.
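The bound in~\eqref{eq:error_bound} is easy to verify numerically. The check below uses $f(t)=\sin t$ on $[0,\pi]$, for which $|f''| \le K_2 = 1$ and the exact integral is $2$.

```python
import numpy as np

def midpoint_rule(f, a, b, n):
    """Composite midpoint rule with n equal subintervals."""
    k = np.arange(1, n + 1)
    midpoints = a + (2 * k - 1) * (b - a) / (2 * n)
    return (b - a) / n * np.sum(f(midpoints))

# f = sin on [0, pi]: |f''| <= K2 = 1 and the exact integral is 2.
a, b, n, K2 = 0.0, np.pi, 16, 1.0
error = abs(midpoint_rule(np.sin, a, b, n) - 2.0)
bound = K2 * (b - a) ** 3 / (24 * n ** 2)
assert error <= bound  # the midpoint-rule remainder bound holds
```

Doubling $n$ reduces the error by roughly a factor of four, consistent with the $1/n^2$ scaling of the bound.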
In the slightly more complex case where we perform the analysis on a higher-dimensional mesh, the midpoint rule can be used straightforwardly in a recursive fashion on the integral over each dimension. However, since the cost of doing so grows rapidly for $D>1$, we focus below on the one-dimensional case for clarity. The idea that we present introduces an adaptive mesh into the problem. The advantage of an adaptive mesh, fundamentally, is that the number of points needed to accurately estimate the probability density within a region can be smaller than that required by Monte Carlo sampling. Indeed, under reasonable continuity assumptions about the underlying distribution these differences can be exponential~\cite{atkinson2008introduction}. However, such mesh-based methods suffer from the curse of dimensionality. Our approach will provide a way to achieve both advantages for Bayesian phase estimation without having to deal with the drawbacks of either approach. In order to construct this grid adaptively we use two procedures: grid refinement and, discussed below, grid merging. The idea behind our algorithm is to start from an elementary mesh and use the remainder estimate for the midpoint rule to decide whether the precision of the mesh within a region needs to be increased or not. All dimensions that are not on the mesh are resampled using the Liu-West resampling algorithm described earlier. \subsubsection{Grid Refinement} The first procedure that we will describe is grid refinement. The approach we take is to determine whether any cells in the adaptive mesh violate the bound on the tolerable error density according to~\eqref{eq:errdens}. In particular, we compute the error density for the $i$th grid element as $e_i=f_i'' l_i^2,$ where $f_i''$ is the second derivative of the function $f_i=\omega_i w_i$ and $l_i$ is the length of the grid element.
We use the second-order central difference method once to compute the first derivative $f'$ and twice to compute the second derivative $f''.$ That is \begin{equation}\label{eq:f_grad} f_i' = \frac{f(\omega_{i+1})-f(\omega_{i-1})}{2l_i} + O(l_i^2) \end{equation} and \begin{equation}\label{eq:f_hess} f_i'' = \frac{f'(\omega_{i+1})-f'(\omega_{i-1})}{2l_i} + O(l_i^2). \end{equation} We give pseudocode for this procedure below in Algorithm~\ref{alg:adapt_refine}. \begin{algorithm}[H] \label{alg:adapt_refine} \SetAlgoLined Input: $\{\omega_i, w_i, e_{th}\}$ \\ \quad $f_i = \omega_i w_i, \quad i \in \{1, \cdots, N\}$ \\ \quad $e_i = f_i'' l_i^2$ \\ \quad for $j = 1, \cdots N$ \\ \quad \quad if $e_{j}>e_{th},$ \\ \quad \quad \quad compute segment end points $p_j$ and $p_{j+1}$ \\ \quad \quad \quad $\omega^r_{j_1} = \frac{p_j+\omega_j}{2}$ and $\omega^r_{j_2} = \frac{\omega_j+p_{j+1}}{2}$ \\ \quad \quad \quad Set $w^r_{j_1} = w^r_{j_2} = w_j/2$ \\ \quad \quad \quad Delete particle $\omega_j$ \\ \quad \quad \quad Insert particles $\omega^r_{j_1}, \omega^r_{j_2}$ into $\{\omega_i, w_i\}$ \\ \Return $\{\omega_i, w_i\}$ \caption{Adaptive grid refinement algorithm} \end{algorithm} In the current approach each grid element meeting the refinement criterion is split only once. In general, we could employ a multi-pass version in which splitting continues until none of the resulting grid elements satisfies the refinement criterion. \subsubsection{Adaptive grid merge criterion} The following heuristic can be useful for managing the memory of our protocol. As the learning procedure progresses, certain regions of parameter space will be assigned low posterior probability and thus no longer need the level of precision prescribed by the adaptive grid protocol. Here we address this issue by providing a heuristic for merging cells in the grid when the probability in them becomes too small.
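Before giving the merge pseudocode, a minimal NumPy sketch of the refinement pass (Algorithm~\ref{alg:adapt_refine}) may be useful. It assumes a sorted, uniformly spaced grid, takes $f_i = \omega_i w_i$ as in the text, and uses the magnitude of the finite-difference second derivative in the error density.

```python
import numpy as np

def refine_grid(omegas, weights, e_th):
    """One refinement pass: split every cell whose error density
    e_i = |f_i''| l_i^2 exceeds e_th into two half-cells centred at the
    midpoints of the halves, each carrying half the weight. Assumes a
    sorted uniform grid; f_i = omega_i * w_i as in the text."""
    l = omegas[1] - omegas[0]
    f = omegas * weights
    fpp = np.gradient(np.gradient(f, omegas), omegas)  # central differences, twice
    e = np.abs(fpp) * l ** 2
    new_o, new_w = [], []
    for o, w, ei in zip(omegas, weights, e):
        if ei > e_th:
            # the cell [o - l/2, o + l/2] is split into two half-cells
            new_o += [o - l / 4.0, o + l / 4.0]
            new_w += [w / 2.0, w / 2.0]
        else:
            new_o.append(o)
            new_w.append(w)
    return np.array(new_o), np.array(new_w)
```

Splitting conserves the total weight exactly, since each refined cell passes half its weight to each child.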
We give explicit pseudocode below in Algorithm~\ref{alg:adaptive_merge} for one implementation of this merging heuristic. \begin{algorithm}[H] \label{alg:adaptive_merge} \SetAlgoLined Input: $\{\omega_i, w_i, w_{th}\}$ \\ \quad for each adjacent elements $\omega_{j}$ and $\omega_k$ \\ \quad \quad compute end points $p_j, p_{j+1}$ and $p_k, p_{k+1}$ for $\omega_j$ and $\omega_k$ \\ \quad \quad if $\max(w_j,w_k) < w_{th}$ \\ \quad \quad \quad $\omega^m_{jk} = (\max(p_{j+1},p_{k+1}) + \min(p_{j},p_{k}))/2$ \\ \quad \quad \quad $w^m_{jk} = w_j + w_k$ \\ \quad \quad \quad Delete $\omega_j, \omega_k$ \\ \quad \quad \quad Insert $\omega^m_{jk}$ into $\{\omega_i, w_i\}$ \\ \Return $\{\omega_i, w_i\}$ \caption{Adaptive grid merge algorithm} \end{algorithm} We develop an adaptive grid refinement particle filter that incorporates the adaptive grid refinement algorithm (Algorithm~\ref{alg:adapt_refine}) and the adaptive grid merge algorithm (Algorithm~\ref{alg:adaptive_merge}) for Bayesian phase estimation. We provide the pseudocode for the adaptive grid refinement particle filter in Algorithm~\ref{alg:adaptivegrid_pf}.
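Before assembling the full filter, the merging heuristic of Algorithm~\ref{alg:adaptive_merge} can be sketched as a single left-to-right pass. Cell end points are taken at the midpoints between adjacent particles, an assumption the pseudocode leaves open.

```python
import numpy as np

def merge_grid(omegas, weights, w_th):
    """One merging pass: adjacent cells are merged when both carry weight
    below w_th; the merged particle sits at the centre of the union of the
    two cells and carries their summed weight. Assumes sorted particles."""
    # cell end points: midpoints between adjacent particle locations
    edges = np.concatenate(([omegas[0]], (omegas[:-1] + omegas[1:]) / 2.0,
                            [omegas[-1]]))
    new_o, new_w = [], []
    i = 0
    while i < len(omegas):
        if i + 1 < len(omegas) and max(weights[i], weights[i + 1]) < w_th:
            new_o.append((edges[i] + edges[i + 2]) / 2.0)  # centre of merged cell
            new_w.append(weights[i] + weights[i + 1])
            i += 2
        else:
            new_o.append(omegas[i])
            new_w.append(weights[i])
            i += 1
    return np.array(new_o), np.array(new_w)
```

As with refinement, merging conserves the total weight, so repeated refine/merge passes leave the distribution normalized.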
\begin{algorithm}[h] \label{alg:adaptivegrid_pf} \SetAlgoLined Define prior grid $\omega_{grid} = \{\omega_1, \omega_2, \cdots, \omega_N\}$ \\ $k = 0, w^0_i = 1/N$,\\ for $k = 1 \cdots N_k,$\\ \quad Set expt to experimental parameters \algorithmiccomment{i.e., expt = PGH($\{\omega_i, w_i\}, k$)} \\ \quad $d_k$ = simulate-expt($\omega$, expt) \\ \quad weights $\hat{w}^k_i = w^{k-1}_{i} p(d_k|\omega_i)$\\ \quad Normalize \quad $w^k_i = \frac{\hat{w}^k_i}{\sum_{j=1}^N \hat{w}^k_j}$ \\ \quad ($\omega^r_i, w^r_i$) = Grid-Refine Algorithm~\ref{alg:adapt_refine} \\ \quad ($\omega^m_i, w^m_i$) = Grid-Merge Algorithm~\ref{alg:adaptive_merge} \\ \quad ($\omega_i, w_i$) = ($\omega^m_i, w^m_i$) \\ Estimate $\hat{\omega} = \sum_{i=1}^N \omega_i w^{N_k}_i$ \caption{Adaptive grid refinement particle filter} \end{algorithm} \subsection{\label{sec:hybrid}Hybrid grid and sampling SMC} In this section we consider using a hybrid grid- and sampling-based method to estimate the frequency $\omega$ and the dephasing parameter $T_2.$ We use the grid-based approach to estimate the frequency $\omega$ and Liu-West resampling based SMC to estimate the parameter $\theta = 1/T_2.$ The likelihood functions $P(0|\omega, T_2)$ and $P(1|\omega, T_2)$ are given by \begin{align} P(0|\omega, T_2) &= e^{-t/T_2}\cos^2(\omega t/2) + \frac{1-e^{-t/T_2}}{2}, \nonumber\\ P(1|\omega, T_2) &= 1-P(0|\omega, T_2). \end{align} The hybrid adaptive grid based and Liu-West resampling based algorithm is shown in Algorithm~\ref{alg:hybrid_grid_lw}. In this approach the prior on the parameter $\omega$ is defined on a one-dimensional grid $\omega \in [\omega_l, \omega_u]$.
For each grid value $\omega_i$ a Liu-West resampling based sequential Monte Carlo filter is used to estimate the parameter $\theta_i.$ Then the particles corresponding to the one-dimensional grid for $\omega$ and the particles for $\theta_i$ are stacked together to form a set of two-dimensional particles $\{\omega_l, \theta_l\}.$ The corresponding weights $w_l$ are obtained as the tensor product of the weights $\{w_i\}$ and $\{w_{i,j}\}$ corresponding to the particles $\{\omega_i\}$ and $\{\theta_{i,j}\}$, respectively. \begin{algorithm}[H] \label{alg:hybrid_grid_lw} \SetAlgoLined Define prior grid for $\omega$, $\omega_{grid} = \{\omega_1, \omega_2, \cdots, \omega_{N_1}\}$ \\ $k = 0, w^0_{\omega,i} = 1/N_1$,\\ for $k = 1 \cdots N_k,$\\ \quad Define uniform prior on $\theta \sim U[\theta_a, \theta_b]$ \\ \quad for $i=1 \cdots N_1$ \\ \quad \quad $\{\hat{\theta}_{i,j}, \hat{w}_{i,j}\}$ = SMC-LW($\{\theta_{i,j}, w_{i,j}\}, \omega_i$) \\ \quad Stack $\{\omega_i, w_i\}$ and $\{\hat{\theta}_{i,j}, \hat{w}_{i,j}\}$ as $(\{\omega_l,\theta_l\}, w_l)$ \\ \quad Update weights $\hat{w}^l_k = \hat{w}^l_{k-1} p(d_k|\omega^l_k, \theta^l_k)$\\ \quad Normalize $w^l_k = \frac{\hat{w}^l_k}{\sum_{j=1}^N \hat{w}^j_k}$ \\ \quad ($\omega^r_i, w^r_i$) = Grid-Refine Algorithm~\ref{alg:adapt_refine} \\ \quad ($\omega^m_i, w^m_i$) = Grid-Merge Algorithm~\ref{alg:adaptive_merge} \\ \quad ($\omega_i, w_i$) = ($\omega^m_i, w^m_i$) \\ Estimate $\hat{\omega} = \sum_{i=1}^N \omega_k^i w^i_k$ and $\hat{\theta} = \sum_{i=1}^N \theta_k^i w^i_k$ \\ \caption{Hybrid grid refinement-LW sampling algorithm} \end{algorithm} \subsection{Computational Complexity} The computational complexity of this algorithm is largely dictated by the number of calls to the likelihood function needed during an update. Each particle in SMC methods requires $O(1)$ calls to the likelihood function to perform an update~\cite{granade2012robust}.
Therefore, the number of such calls to the likelihood function is in $O(N_1N_k)$, where $N_1$ is the number of grid points that we consider and $N_k$ is the number of particles that we include in our filter at each point. The error in the mean over the particle filter is in $O(1/\sqrt{N_k})$ in the worst-case scenario where all the weight is placed on only one of the grid points. The error in the mean over the grid is $O(K_2/N_1^2)$ from~\eqref{eq:error_bound}. Assuming we begin from a uniform prior and experiments are performed with times $t_1,\ldots,t_M$, the posterior distribution for a uniform prior on the continuum is proportional to the likelihood function. Its second derivative with respect to the unknown parameter is, from the triangle inequality, at most $K_2 \in O((\sum_i |t_i|)^2)$. Thus if we desire the error in the mean to be at most $\delta$ in the max-norm, then it suffices to pick \begin{align} N_1 \in O\left(\frac{\sum_i |t_i|}{\delta} \right),~\qquad N_k \in O\left(\frac{1}{\delta^2} \right). \end{align} This implies that the cost of performing the protocol in terms of the number of likelihood evaluations is in $O(\sum_i |t_i| / \delta^3)$. If the phase estimation procedure is Heisenberg limited then $\sum_i |t_i| \in O(1/\delta)$~\cite{daryanoosh2018experimental}, which yields that the number of likelihood evaluations is in $O(1/\delta^4)$. This worst-case analysis shows that the algorithm is efficient, irrespective of the dimension of the model space orthogonal to the grid (if a constant value of $\delta$ suffices). However, this estimate radically overestimates the cost of most applications.
This is because most approaches to iterative phase estimation (the method of~\cite{svore2013faster} being a notable exception) use a mixture of short and long experiments that typically lead to a distribution that is supported only over a small fraction of the parameter space~\cite{wiebe2016efficient,granade2012robust,kimmel2015robust,dinani2019bayesian}. This means that the worst-case assumptions used above do not hold, and we anticipate that in practice the number of likelihood evaluations needed should be dominated by $N_k$ in such cases. In particular, by choosing a grid that mimics the points evaluated using a high-order integration formula we can evaluate the integral over the grid within error $\delta$ using a polylogarithmic number of likelihood evaluations~\cite{hildebrand1987introduction}. In such cases the complexity will be dominated by the SMC costs, which reduce to $\widetilde{O}(1/\delta^2)$. \subsection{Principal Kurtosis Analysis} \label{sec:pka} Given the cost of using a high-dimensional adaptive grid, a question remains about how the ideas presented here for phase estimation could be used more broadly for parameter estimation. One approach that can be used to address this is a method known as principal kurtosis analysis~\cite{pena2010eigenvectors,meng2016principal}. Kurtosis is a measure of the fourth moment of a distribution and quantifies the extent to which a distribution deviates from Gaussianity. In particular, we can define a matrix that describes the kurtosis of a random variable $x$ in a basis-independent fashion via \begin{equation} B=\mathbb{E}\left[\left((x-\mu)^{\top} \Sigma^{-1} (x-\mu)\right)(x-\mu)(x-\mu)^{\top}\right], \end{equation} where $\mu$ is the mean and $\Sigma$ is the covariance matrix of $x$.
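A sample version of this matrix, and the eigendecomposition used to rank directions, can be sketched as follows. This follows one common kurtosis-matrix convention; normalizations vary between references.

```python
import numpy as np

def kurtosis_matrix(X):
    """Sample kurtosis matrix
    B = E[((x-mu)' Sigma^{-1} (x-mu)) (x-mu)(x-mu)'] for rows of X."""
    Xc = X - X.mean(axis=0)
    Sinv = np.linalg.inv(np.cov(Xc, rowvar=False))
    m = np.einsum('ij,jk,ik->i', Xc, Sinv, Xc)  # squared Mahalanobis distances
    return (Xc * m[:, None]).T @ Xc / len(X)    # sum_i m_i x_i x_i' / N

def principal_kurtosis_directions(X):
    """Eigenpairs of B sorted by decreasing eigenvalue; the leading
    eigenvector is the direction deviating most strongly from Gaussianity."""
    vals, vecs = np.linalg.eigh(kurtosis_matrix(X))
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]
```

For instance, on data that is heavy-tailed along one axis and Gaussian along the other, the leading eigenvector aligns with the heavy-tailed axis, which is the axis one would then place the mesh on.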
The central idea behind principal kurtosis analysis is to take a data set and find the directions in which the data deviates most strongly from Gaussianity by diagonalizing the matrix $B$ and selecting the components that have the largest eigenvalues. Whenever a resampling step would be invoked, this allows us to choose the meshed axes of our posterior distribution to align with the directions of greatest kurtosis. That is, we can adaptively choose the directions to which we apply the mesh during the resampling step and apply Liu-West resampling only to the directions with low excess kurtosis. \section{\label{sec:numerical}Numerical results} In this section we present several numerical results to support the adaptive grid refinement and hybrid grid-and-sampling based particle filters for estimating the quantum phase $\omega$ and the dephasing parameter $T_2$. We show that with the adaptive grid refinement particle filter, the number of particles required can be controlled using a threshold $w_{th}$ for grid merging. We show the results for four different threshold values: $10^{-5}, 10^{-4}, 10^{-3}$ and $2\times 10^{-3}.$ The threshold for grid refinement is fixed at $10^{-10}.$ We show numerical results for three different cases, namely i) quantum phase estimation in the absence of dephasing noise, ii) quantum phase estimation in the presence of dephasing noise, and iii) joint quantum phase and dephasing parameter estimation. \subsection{Quantum phase estimation without dephasing noise} Here we consider the problem of quantum phase estimation in the absence of dephasing noise. The likelihood function for this problem is given by \eqref{eq:likelihood1}. For this problem we show quantum phase estimation results using the proposed adaptive grid refinement particle filter method and compare these results with the standard Liu-West resampling based sequential Monte Carlo method.
We consider two cases: a) $\omega \in [0, 1]$ and b) $\omega \in [-1,1].$ \begin{figure*}[t] \centering \subfloat[]{ \includegraphics[width=0.48\textwidth]{figures/with_1d_grid_median_error_1000}} \subfloat[]{ \includegraphics[width=0.48\textwidth]{figures/with_1d_grid_omega_neg_median_error_1000}} \\ \subfloat[]{ \includegraphics[width=0.48\textwidth]{figures/with_lw_median_error_1000}} \subfloat[]{ \includegraphics[width=0.48\textwidth]{figures/with_lw_omega_neg_median_error_1000}} \caption{\label{with_1d_grid_median_error} Median error for 1000 random a) $\omega \in [0,1]$, b) $\omega \in [-1,1]$ using the adaptive grid refinement algorithm. The threshold for merging is $10^{-5}, 10^{-4}, 10^{-3}, 2\times 10^{-3}$. Median of the MLE error using the LW resampling algorithm for 1000 random c) $\omega \in [0,1]$, d) $\omega \in [-1,1]$ with number of particles $N=100, 200, 300, 500.$ } \end{figure*} For each case we perform simulations of $1000$ experiments (meaning that we perform a measurement, do a Bayesian update, and adaptively choose the next experimental time $t_i$ based on the updated posterior distribution~\cite{granade2012robust}). We further repeat this for $1000$ random reference values of $\omega.$ That is, for each random reference $\omega$, we perform $1000$ experiments to estimate its value. At the end we obtain $1000$ error estimates as a function of the number of experiments, corresponding to the $1000$ randomly chosen reference values of $\omega.$ We show the convergence of the median error in the estimated phase $\omega$ with respect to the true reference $\omega.$ Due to the grid refinement and merging algorithms in the proposed method, the final number of particles needed depends on the threshold values used for refinement and merging. The error convergence also depends on the threshold values. Fig.~\ref{with_1d_grid_median_error} shows the convergence of the median error in the posterior estimate of $\omega$ for different merging threshold values.
Fig.~\ref{with_1d_grid_median_error}a shows the error in the estimate of $\omega \in [0,1]$ and Fig.~\ref{with_1d_grid_median_error}b shows the error in the estimate of $\omega \in [-1,1].$ In both cases we can clearly see that the median error decreases as a function of the number of experiments and that the convergence is faster for larger values of the merging threshold $w_{th}.$ To compare the proposed method with the standard Liu-West based sequential Monte Carlo method we show the numerical results for phase estimation in Figs.~\ref{with_1d_grid_median_error}c and \ref{with_1d_grid_median_error}d. Fig.~\ref{with_1d_grid_median_error}c shows the error in the estimate of $\omega \in [0,1]$ and Fig.~\ref{with_1d_grid_median_error}d shows the error in the estimate of $\omega \in [-1,1].$ As in the adaptive grid refinement method, the posterior mean estimate is used when $\omega \in [0,1]$ and the maximum likelihood estimate is used when $\omega \in [-1,1],$ because the posterior distribution in the latter case is bimodal, for which the posterior mean estimate is not accurate. Fig.~\ref{with_1d_grid_median_error}c shows the convergence of the median error for different choices of the number of particles ($N=10, 20, 50, 100, 200$ and $500$). Although the median error for $N=10$ and $N=20$ does not converge very well, for $N\geq 50$ the error converges to $<10^{-9}$ at around $200$-$300$ experiments. Fig.~\ref{with_1d_grid_median_error}d shows the median error convergence results when $\omega \in [-1,1].$ Although the maximum likelihood estimate is used in both cases, Liu-West resampling based SMC (Fig.~\ref{with_1d_grid_median_error}d) performs poorly compared to the proposed adaptive grid refinement method (Fig.~\ref{with_1d_grid_median_error}b). In all cases considered for adaptive grid refinement, the number of measurements needed to reach a particular level of uncertainty scales as $O(\log(1/\epsilon))$.
This means, for example, that even with $w_{\rm th} = 10^{-5}$ roughly $1500$ experiments (bits measured) will be needed to hit the limits of numerical precision that SMC with Liu-West resampling can achieve in roughly $200$ experiments. While this approach requires nearly eight times as much data, we will see that the processing time is comparable to or better than that of Liu-West resampling to hit that same threshold reliably for the case where $\omega \in [0,1]$. In the case of $\omega \in [-1,1]$, however, adaptive grid refinement is clearly the superior strategy since we see that Liu-West resampling causes the algorithm to fail. The scalings that we observe above are, in fact, nearly optimal. This can be seen from the fact that the particle guess heuristic uses an evolution time per experiment on the order of $T_j \in O(1/\epsilon_j)$, where $\epsilon_j$ is the (circular) posterior standard deviation at update $j$~\cite{wiebe2014hamiltonian}. Since the posterior standard deviation shrinks exponentially with the number of updates in the adaptive algorithm, the total evolution time needed obeys $\sum_j T_j \in O( \sum_j 1/\epsilon_j) \subseteq \sum_j\exp(O(j)) \subseteq \exp(O(\log(1/\epsilon))) = O(1/\epsilon)$, which saturates the Heisenberg limit (which coincides with the Bayesian Cram\`er-Rao bound in the absence of dephasing~\cite{berry2000optimal,granade2012robust}) up to a constant and is therefore nearly optimal. However, for the case of SMC using Liu-West resampling we only observe logarithmic scaling if the number of particles, $N$, is sufficiently large. In settings where we want to automate the learning process, the robustness of adaptive grid methods allows them to be applied without having to begin with an exhaustive hyperparameter search to find sufficient values for $N$ and the resampling threshold.
\begin{figure}[h] \includegraphics[width=0.48\textwidth]{figures/with_1d_grid_num_grid_pdf_1000_1e-05.png} \caption{\label{fig:pdf_plot} Probability density function of the number of particles (grid points) with the 2.5 and 97.5 percentiles. The threshold for merging is $10^{-5}$.} \end{figure} \begin{table}[h] \begin{center} \begin{tabular}{| p{1.5cm} | p{1.5cm} | p{1.5cm} | p{1.5cm} |} \hline \multicolumn{4}{|c|}{Percentiles of number of grid points for $\omega \in [0,1]$} \\ \hline $w_{th}$ & $q_{2.5}$ & $q_{50}$ & $q_{97.5}$ \\ \hline $10^{-5}$ & 8 & 15 & 29 \\ \hline $10^{-4}$ & 8 & 19 & 136 \\ \hline $10^{-3}$ & 38 & 147 & 237 \\ \hline $2\times 10^{-3}$ & 5 & 140 & 219 \\ \hline \end{tabular} \caption{\label{tab:table-1} Percentile (2.5, 50 and 97.5) values of the number of grid points required for estimating $\omega \in [0,1]$} \end{center} \end{table} \begin{table}[h] \begin{center} \begin{tabular}{| p{1.5cm} | p{1.5cm} | p{1.5cm} | p{1.5cm} |} \hline \multicolumn{4}{|c|}{Percentiles of number of grid points for $\omega \in [-1,1]$} \\ \hline $w_{th}$ & $q_{2.5}$ & $q_{50}$ & $q_{97.5}$ \\ \hline $10^{-5}$ & 8 & 15 & 29 \\ \hline $10^{-4}$ & 8 & 20 & 128 \\ \hline $10^{-3}$ & 20 & 141 & 238 \\ \hline $2\times 10^{-3}$ & 11 & 141 & 219 \\ \hline \end{tabular} \caption{\label{tab:table-2} Percentile (2.5, 50 and 97.5) values of the number of grid points required for estimating $\omega \in [-1,1]$} \end{center} \end{table} The probability density function (pdf) and percentiles of the number of particles needed for $1000$ random simulations are presented in Fig.~\ref{fig:pdf_plot} for threshold value $w_{th}=10^{-5}.$ The median number of particles needed for this case is $15$, whereas the $2.5$ and $97.5$ percentile values are $8$ and $29$, respectively. Although the error convergence is slower for this case, the number of particles required is quite small compared to the Liu-West resampling case.
For the other threshold values of $w_{th}$, the median and percentile values are presented in Tables~\ref{tab:table-1} (for $\omega \in [0,1]$) and \ref{tab:table-2} (for $\omega \in [-1,1]$). When $\omega \in [-1,1]$ the posterior distribution is bimodal and hence the posterior mean estimate of $\omega$ is not accurate; to alleviate this we use the maximum likelihood estimate in this case. In Appendix~\ref{sec:appendix}, Figs.~\ref{with_1d_grid_refine_steps} (without dephasing noise) and \ref{with_dephase_1d_grid_refine_steps} (with dephasing noise) show the progress of grid refinement as the number of experiments increases. We present a detailed discussion in Appendix~\ref{sec:appendix}. Although the grid refinement strategy is not efficient for higher-dimensional problems due to the exponential increase in the number of particles, it is convenient for problems where the traditional sampling-based methods fail. As described in Section~\ref{sec:hybrid}, we can combine grid-based and sampling-based methods into hybrid methods to take advantage of both. \subsection{Quantum phase estimation with dephasing noise} As shown in the previous section, quantum phase estimation in the absence of dephasing noise is easier and achieves high levels of accuracy ($\sim 10^{-12}$). However, quantum systems are not perfect, and quantum experiments are not exact due to various disturbances such as dephasing noise. In this section we show numerical results for quantum phase estimation in the presence of dephasing noise using the adaptive grid refinement method. Because of the dephasing noise, the information in the signal (the output of the quantum experiment) is corrupted and hence the accuracy of the phase estimation deteriorates.
Fig.~\ref{fig:dephase_omega_est} shows the median error in the phase estimate as a function of the number of experiments for $1000$ random choices of the true reference $\omega$ when the dephasing parameter is $T_2=50 \pi.$ As in the previous cases, the numerical experiments are performed for different choices of the merging threshold $w_{th} = 10^{-5}, 10^{-4}, 10^{-3}, 2\times 10^{-3}$. As expected, the accuracy of the algorithm in the presence of dephasing noise ($\sim 10^{-5}$) is not as good as that in the absence of dephasing noise. \begin{figure}[t!] \includegraphics[width=0.48\textwidth]{figures/with_1d_grid_invT2_0.01_pe_median_error_1000.png} \caption{\label{fig:dephase_omega_est} Adaptive grid refinement with dephasing noise. Median error for 1000 random $\omega$, $T_2=50\pi$. The threshold for merging is $10^{-5}, 10^{-4}, 10^{-3}, 2\times 10^{-3}$.} \end{figure} \subsection{Quantum phase and dephasing parameter estimation using the hybrid algorithm} \begin{figure*}[t!] \subfloat[]{ \includegraphics[width=0.48\textwidth]{figures/mixed_median_omega_median_error_1000.png}} \subfloat[]{ \includegraphics[width=0.48\textwidth]{figures/mixed_median_invT2_median_error_1000.png}} \caption{\label{fig:hybrid_omega_T2}Hybrid adaptive grid refinement and LW resampling method for a) phase ($\omega$) estimation and b) dephasing parameter ($T_2$) estimation. Median error in the $\omega$ estimate for 100 random $\omega$ and fixed true $T_2=50\pi$.
The threshold for merging is $10^{-5}, 10^{-4}, 10^{-3}, 2\times 10^{-3}$.} \end{figure*} In this section, we provide numerical results using the hybrid algorithm described in Section~\ref{sec:hybrid} for estimating the quantum phase $\omega$ and the dephasing parameter $\theta=1/T_2.$ In this hybrid approach, we combine grid-based and sampling-based approaches such that adaptive grid refinement is used to estimate the quantum phase $\omega$ and Liu-West resampling based SMC is used to estimate the dephasing parameter $\theta.$ In this method, as shown in Algorithm~\ref{alg:hybrid_grid_lw}, a one-dimensional grid on $[\omega_l,\omega_u]$ is used as a prior for $\omega$, and for each realization $\omega_i$ an LW SMC is performed on the parameter $\theta_i \sim U[0,1].$ Then all the realizations $\{\omega_i\}$ and the corresponding $\{\theta_{i,j}\}$ are stacked together to form a set of particles in two dimensions. The Bayesian update is performed on these two-dimensional particles. Although we arbitrarily chose the grid refinement method for $\omega$ and Liu-West resampling for $\theta,$ in a multidimensional setting one can use the principal kurtosis analysis method described in Section~\ref{sec:pka} to adaptively choose the directions to which the mesh is applied during the resampling step and apply Liu-West resampling to the directions with low excess kurtosis. As in the prior cases, we perform numerical experiments for different choices of the merging threshold $w_{th} = 10^{-5}, 10^{-4}, 10^{-3}, 2\times 10^{-3}$. Fig.~\ref{fig:hybrid_omega_T2}a shows the convergence of the median error in $\omega$ for $100$ random choices of the true reference $\omega$ and a given dephasing value of $T_2=50\pi$ obtained using the hybrid algorithm. Fig.~\ref{fig:hybrid_omega_T2}b shows the median error in the parameter $\theta=1/T_2$ obtained using the same algorithm.
Although the accuracy of the quantum phase estimate $\omega$ and of the dephasing parameter $T_2$ is not as good as in the previous cases, the proposed approach provides a powerful alternative when traditional sampling-based methods fail in one or more dimensions, as it brings together the advantages of both grid-based and sampling-based methods. We also observe that the effect of $w_{th}$ on the convergence rate in Figs.~\ref{fig:hybrid_omega_T2}a and \ref{fig:hybrid_omega_T2}b is reversed with respect to that in Fig.~\ref{with_1d_grid_median_error}a. We believe that each problem is different, and that the hyperparameters (e.g., $w_{th}$, $e_{th}$) need to be optimized for each case. We will investigate this in the future. \section{\label{sec:conclusions}Conclusions} In this paper we present a novel Bayesian phase estimation method based on adaptive grid refinement. We also combine grid-based and sampling-based methods into a hybrid method to simultaneously estimate the quantum phase and other parameters in the quantum Hamiltonian. We present numerical results for quantum phase estimation in three different cases. In the first case we consider phase estimation in the absence of dephasing noise. For this case we compare the error convergence of the phase estimate using the adaptive grid method and the Liu-West resampling SMC method. We show that both the adaptive grid and LW methods perform well when $\omega \in [0,1]$; however, the LW method performs poorly when $\omega \in [-1,1]$. We also provide numerical results for phase estimation using adaptive grid refinement in the presence of dephasing noise. Finally, we use the hybrid algorithm to simultaneously estimate the quantum phase and the dephasing parameter, which allows us to combine the benefits of both SMC and grid-based methods. This work opens many avenues for future research.
Although the curse of dimensionality will invariably limit the applicability of adaptive meshing schemes to low-dimensional problems, the unprecedented accuracy that adaptive meshing can provide can compensate for this. Nonetheless, while the generalization of these results to adaptive grids in more than one dimension is conceptually straightforward, optimized implementations of meshing techniques used in finite element analysis and beyond will likely be needed to make the technique computationally tractable even in three dimensions. Further, subsequent work remains in assessing the impact of rotating the grid to handle the dimensions of greatest kurtosis in order to stabilize the Liu-West particle filter in the presence of multi-modality. Such explorations are again conceptually straightforward but will require extensive numerical studies to understand the regimes in which our techniques provide an advantage. It is our hope that by providing methods that can adapt the representation of the probability distribution we may be able to make phase estimation, and Hamiltonian learning more generally, robust and fully automatable. \begin{acknowledgments} The material presented here is based upon work supported by the Pacific Northwest National Laboratory (PNNL) ``Quantum Algorithms, Software, and Architectures (QUASAR)'' initiative. We would also like to thank Dr. Sriram Krishnamoorthy from PNNL for his valuable feedback and suggestions, which have significantly improved the paper. PNNL is operated by Battelle for the DOE under Contract DE-AC05-76RL01830. \end{acknowledgments}
\section{Introduction} \subsection{Background and Preliminaries} Circle packing builds a connection between combinatorics and geometry. It was used by Thurston \cite{T1} to construct hyperbolic 3-manifolds or 3-orbifolds. Inspired by Thurston's work, Cooper and Rivin \cite{CR} studied the deformation of ball packings, which are the three dimensional analogues of circle packings. Glickenstein \cite{G1}\cite{G2} then introduced a combinatorial version of the Yamabe flow based on Euclidean triangulations coming from ball packings. Beyond their works, very little is known about the deformation of ball packings. In this paper, we shall study the deformation of ball packings. Let $M$ be a closed 3-manifold with a triangulation $\mathcal{T}=\{\mathcal{T}_0,\mathcal{T}_1,\mathcal{T}_2,\mathcal{T}_3\}$, where the symbols $\mathcal{T}_0,\mathcal{T}_1,\mathcal{T}_2,\mathcal{T}_3$ represent the sets of vertices, edges, faces and tetrahedrons respectively. Throughout the paper, we denote by $\{ij\}$, $\{ijk\}$ and $\{ijkl\}$ a particular edge, triangle and tetrahedron respectively in the triangulation, while we denote by $\{i,j,\cdots\}$ a particular set with elements $i$, $j, \cdots$. The symbol $(M, \mathcal{T})$ will be referred to as a triangulated manifold in the following. All the vertices are ordered one by one, marked by $1,\cdots,N$, where $N=|\mathcal{T}_0|$ is the number of vertices. We use $i\thicksim j$ to denote that the vertices $i$ and $j$ are adjacent, i.e. there is an edge $\{ij\}\in\mathcal{T}_1$ with $i$, $j$ as endpoints. For a triangulated three-manifold $(M,\mathcal{T})$, Cooper and Rivin \cite{CR} constructed a piecewise linear metric by ball packings.
A ball packing (also called a sphere packing in other literature) is a map $r:\mathcal{T}_0\rightarrow (0,+\infty)$ such that the length between vertices $i$ and $j$ is $l_{ij}=r_{i}+r_{j}$ for each edge $\{ij\}\in \mathcal{T}_1$, and each combinatorial tetrahedron $\{ijkl\}\in\mathcal{T}_3$ with six edge lengths $l_{ij},l_{ik},l_{il},l_{jk},l_{jl},l_{kl}$ forms an Euclidean tetrahedron. Geometrically, a ball packing $r=(r_1,\cdots,r_N)$ attaches to each vertex $i\in\mathcal{T}_0$ a ball $S_i$ with $i$ as center and $r_i$ as radius, and if $i\thicksim j$, the two balls $S_i$ and $S_j$ are externally tangent. Cooper and Rivin \cite{CR} called the Euclidean tetrahedrons generated in this way \emph{conformal} and proved that an Euclidean tetrahedron is conformal if and only if there exists a unique ball tangent to all of the edges of the tetrahedron. Moreover, the point of tangency with the edge $\{ij\}$ is at distance $r_i$ from the vertex $i$. Denote \begin{equation}\label{nondegeneracy condition} Q_{ijkl}=\left(\frac{1}{r_{i}}+\frac{1}{r_{j}}+\frac{1}{r_{k}}+\frac{1}{r_{l}}\right)^2- 2\left(\frac{1}{r_{i}^2}+\frac{1}{r_{j}^2}+\frac{1}{r_{k}^2}+\frac{1}{r_{l}^2}\right). \end{equation} The classical Descartes' circle theorem, also called the Soddy-Gossett theorem (see e.g. \cite{CR}), says that four circles in the plane of radii $r_i$, $r_j$, $r_k$, $r_l$ are mutually externally tangent if and only if $Q_{ijkl}=0$. This case is often called an Apollonian circle packing. It is surprising that Apollonian circle packings are closely related to number theory \cite{Bourgain}\cite{Bour-Kon} and hyperbolic geometry \cite{K-Oh}. Glickenstein \cite{G1} pointed out that a combinatorial tetrahedron $\{ijkl\}\in \mathcal{T}_3$ configured by four externally tangent balls with positive radii $r_{i}$, $r_{j}$, $r_{k}$ and $r_{l}$ can be realized as an Euclidean tetrahedron if and only if $Q_{ijkl}>0$.
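Numerically, condition (\ref{nondegeneracy condition}) is straightforward to evaluate. The short sketch below (our own illustration) checks the regular case $r_i=r_j=r_k=r_l$, where $Q=8$, and the Descartes/Soddy configuration of three unit balls with a fourth ball of radius $1/(3+2\sqrt{3})$ filling the gap, where $Q=0$.

```python
import math

def Q(ri, rj, rk, rl):
    # Q_{ijkl} = (sum of 1/r)^2 - 2 * (sum of 1/r^2)
    k = [1.0 / ri, 1.0 / rj, 1.0 / rk, 1.0 / rl]
    return sum(k) ** 2 - 2.0 * sum(c * c for c in k)

def is_conformal(ri, rj, rk, rl):
    # Glickenstein's criterion: the four externally tangent balls span
    # a Euclidean tetrahedron if and only if Q_{ijkl} > 0.
    return Q(ri, rj, rk, rl) > 0.0

# Descartes/Soddy radius for three mutually tangent unit circles:
# the inner circle has curvature 3 + 2*sqrt(3), i.e. Q vanishes.
r_soddy = 1.0 / (3.0 + 2.0 * math.sqrt(3.0))
```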
Denote by $\mathcal{M}_{\mathcal{T}}$ the space of all ball packings; then it can be expressed as a subspace of $\mathds{R}^N_{>0}$: \begin{equation} \mathcal{M}_{\mathcal{T}}=\left\{\;r\in\mathds{R}^N_{>0}\;\big|\;Q_{ijkl}>0, \;\forall \{ijkl\}\in \mathcal{T}_3\;\right\}. \end{equation} Cooper and Rivin proved that $ \mathcal{M}_{\mathcal{T}}$ is a simply connected open subset of $\mathds{R}^N_{>0}$. It is a cone, but not convex. To each ball packing $r$, there corresponds a combinatorial scalar curvature $K_{i}$ at each vertex $i\in\mathcal{T}_0$, which was introduced by Cooper and Rivin \cite{CR}. Denote by $\alpha_{ijkl}$ the solid angle at the vertex $i$ in the Euclidean tetrahedron $\{ijkl\}\in \mathcal{T}_3$; then the combinatorial scalar curvature at $i$ is defined as \begin{equation}\label{Def-CR curvature} K_{i}= 4\pi-\sum_{\{ijkl\}\in \mathcal{T}_3}\alpha_{ijkl}, \end{equation} where the sum is taken over all tetrahedrons in $\mathcal{T}_3$ with $i$ as one of their vertices. They also studied the deformation of ball packings and proved a local rigidity result for the combinatorial curvature $K=(K_1,\cdots,K_N)$, namely, that a conformal tetrahedron cannot be deformed while keeping the solid angles fixed. In other words, the combinatorial curvature map $$K:\mathcal{M}_{\mathcal{T}}\to \mathds{R}^N,$$ up to scaling, is locally injective. Recently, Xu \cite{Xu} showed that $K$ is globally injective. Given the triangulation $\mathcal{T}$, Cooper and Rivin considered $\mathcal{M}_{\mathcal{T}}$ as an analogue of the smooth conformal class $\{e^fg:f\in C^{\infty}(M)\}$ of a smooth Riemannian metric $g$ on $M$. Inspired by this observation, Glickenstein \cite{G1} posed the following combinatorial Yamabe problem:\\[8pt] \noindent \textbf{Combinatorial Yamabe Problem:} \emph{Is there a ball packing with constant combinatorial scalar curvature in the combinatorial conformal class $\mathcal{M}_{\mathcal{T}}$?
How to find it?}\\[8pt] To approach this problem, Glickenstein introduced the combinatorial Yamabe flow \begin{equation}\label{Def-Flow-Glickenstein} \frac{dr_i}{dt}=-K_ir_i, \end{equation} aiming to deform ball packings to one with constant (or prescribed) scalar curvature. The prototypes of (\ref{Def-Flow-Glickenstein}) are Chow and Luo's combinatorial Ricci flow \cite{CL1} and Luo's combinatorial Yamabe flow \cite{L1} on surfaces. Following Chow, Luo and Glickenstein's pioneering work, the first author of this paper and his collaborators Jiang, Xu, Zhang, Ma and Zhou introduced and studied several combinatorial curvature flows in \cite{Ge}-\cite{GZX}. We must emphasize that combinatorial curvature flows are quite different from their smooth counterparts. It is well known that the solutions to smooth normalized Yamabe (Ricci, Calabi) flows on surfaces exist for all time $t\geq0$ and converge to metrics with constant curvature. However, as numerical simulations indicate, some combinatorial versions of surface Yamabe (Ricci, Calabi) flows may collapse in finite time. Worse still, even if one extends the flows through collapsing, the extended flows may not converge and may develop singularities at infinity. We refer the reader to Luo's combinatorial Yamabe flow \cite{L1}\cite{GJ1} and to Chow-Luo's combinatorial Ricci flows with inversive distance circle packings \cite{CL1}\cite{GJ2}-\cite{GJ4}. Since all definitions of combinatorial curvature flows are based on triangulations, singularities occur exactly when some triangles or tetrahedrons collapse. Since the triangulation deeply influences the behavior of the combinatorial flows, it takes much more effort to deal with combinatorial curvature flows. In view of the trouble this causes in the study of surface combinatorial curvature flows, one can imagine the difficulties arising in the study of 3d-combinatorial curvature flows, about which very little is known until now.
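To make the right-hand side of (\ref{Def-Flow-Glickenstein}) concrete, the sketch below (our own illustration, not code from any of the cited works) computes the solid angle of a conformal tetrahedron directly from the four radii, using the law of cosines for the face angles at the vertex together with L'Huilier's theorem, and evaluates the curvature (\ref{Def-CR curvature}) on the boundary of the 4-simplex, the standard 5-vertex triangulation of $\mathbb{S}^3$.

```python
import math
from itertools import combinations

def face_angle(a, b, c):
    # Planar angle opposite side c in a triangle with sides a, b, c.
    return math.acos((a * a + b * b - c * c) / (2.0 * a * b))

def solid_angle(ri, rj, rk, rl):
    # Solid angle at the first vertex of the conformal tetrahedron with
    # edge lengths l_ab = r_a + r_b, via L'Huilier's theorem.
    a = face_angle(ri + rj, ri + rk, rj + rk)
    b = face_angle(ri + rj, ri + rl, rj + rl)
    c = face_angle(ri + rk, ri + rl, rk + rl)
    s = 0.5 * (a + b + c)
    t = (math.tan(0.5 * s) * math.tan(0.5 * (s - a))
         * math.tan(0.5 * (s - b)) * math.tan(0.5 * (s - c)))
    return 4.0 * math.atan(math.sqrt(t))

# Boundary of the 4-simplex: 5 vertices, every 4-subset is a tetrahedron.
tets = list(combinations(range(5), 4))

def K(i, r):
    # Combinatorial scalar curvature: K_i = 4*pi - sum of solid angles at i.
    total = 0.0
    for tet in tets:
        if i in tet:
            others = [r[v] for v in tet if v != i]
            total += solid_angle(r[i], *others)
    return 4.0 * math.pi - total

r = [1.0] * 5
K0 = K(0, r)  # equal radii: K_i = 4*pi - 4*arccos(23/27) at every vertex
```

With equal radii every $K_i$ equals $4\pi-4\arccos(23/27)>0$, so an Euler step of (\ref{Def-Flow-Glickenstein}) simply rescales $r$; this packing is a constant curvature fixed point up to scaling.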
In this paper, we will show the global convergence of the (extended) combinatorial Yamabe flow for regular triangulations. As far as we know, this is the first global convergence result for 3d-combinatorial Yamabe flows. \subsection{Main results} \label{section-main-result} Cooper and Rivin first introduced the ``total curvature functional'' $\mathcal{S}=\sum_{i=1}^N K_i r_i$. Glickenstein then considered the following ``average scalar curvature'' functional \begin{equation} \lambda(r)=\frac{\sum_{i=1}^NK_ir_i}{\sum_{i=1}^Nr_i}, \;r\in\mathcal{M}_{\mathcal{T}}, \end{equation} which we call the ``Cooper-Rivin-Glickenstein functional'' in this paper and abbreviate as the \emph{CRG-functional}. Let the \emph{combinatorial Yamabe invariant} $Y_{\mathcal{T}}$ be defined as (see Definition \ref{Def-alpha-normalize-Regge-functional}) \begin{equation} Y_{\mathcal{T}}=\inf_{r\in \mathcal{M}_{\mathcal{T}}} \lambda(r). \end{equation} We say $Y_{\mathcal{T}}$ is attainable if the CRG-functional $\lambda(r)$ has a minimum in $\mathcal{M}_{\mathcal{T}}$. Our first result is a combination of Theorem \ref{Thm-Q-min-iff-exist-const-curv-metric} and Corollary \ref{corollary-Q-2}: \begin{theorem}\label{Thm-main-1-yamabe-invarint} The following are all mutually equivalent. \begin{enumerate} \item The Combinatorial Yamabe Problem is solvable; \item $Y_{\mathcal{T}}$ is attainable in $ \mathcal{M}_{\mathcal{T}}$, i.e. the CRG-functional $\lambda$ has a global minimum in $ \mathcal{M}_{\mathcal{T}}$; \item The CRG-functional $\lambda$ has a local minimum in $ \mathcal{M}_{\mathcal{T}}$; \item The CRG-functional $\lambda$ has a critical point in $ \mathcal{M}_{\mathcal{T}}$. \end{enumerate} \end{theorem} One has an explicit formula for the gradient and the second derivative of $\mathcal{S}(r)$.
This is helpful from a practical point of view, because it allows one to use more powerful algorithms to minimize $\mathcal{S}(r)$ under the constraint $\sum_{i\in V}r_i=1$ and thus solve the Combinatorial Yamabe Problem. An alternative way to approach the Combinatorial Yamabe Problem is to consider the following combinatorial Yamabe flow \begin{equation}\label{Def-norm-Yamabe-Flow} \frac{dr_i}{dt}=(\lambda-K_i)r_i, \end{equation} which is a normalization of Glickenstein's combinatorial Yamabe flow (\ref{Def-Flow-Glickenstein}). We will show (see Proposition \ref{prop-converg-imply-const-exist}) that if $r(t)$, the unique solution to (\ref{Def-norm-Yamabe-Flow}), exists for all time $t\geq0$ and converges to a ball packing $\hat{r}\in\mathcal{M}_{\mathcal{T}}$, then $\hat{r}$ has constant curvature. Hence (\ref{Def-norm-Yamabe-Flow}) provides a natural way to obtain a solution of the Combinatorial Yamabe Problem. In practice, one may design algorithms to solve the flow equation (\ref{Def-norm-Yamabe-Flow}) and thereby the Combinatorial Yamabe Problem; see \cite{GeMa} for example. The solution $\{r(t)\}_{0\leq t<T}$ to (\ref{Def-norm-Yamabe-Flow}) may collapse in finite time, where $0<T\leq\infty$ is the maximal existence time of $r(t)$; here ``collapse'' means that $T$ is finite. Because $\lambda-K_i$ is not defined outside $\mathcal{M}_{\mathcal{T}}$, collapsing happens exactly when $r(t)$ touches the boundary of $\mathcal{M}_{\mathcal{T}}$. More precisely, there exist a sequence of times $t_n\rightarrow T$ and a conformal tetrahedron $\{ijkl\}$ such that $Q_{ijkl}(r(t_n))\rightarrow0$. Geometrically, the geometric tetrahedron $\{ijkl\}$ collapses. That is, the six edges of $\{ijkl\}$ with lengths $l_{ij},l_{ik},l_{il},l_{jk},l_{jl},l_{kl}$ can no longer form the edges of any Euclidean tetrahedron as $t_n\rightarrow T$.
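A basic property of the normalized flow (\ref{Def-norm-Yamabe-Flow}), used below, is that it preserves the total $l^1$-norm of the radii; the one-line computation is

```latex
\frac{d}{dt}\sum_{i=1}^{N} r_i
  \;=\; \sum_{i=1}^{N}(\lambda-K_i)\,r_i
  \;=\; \lambda\sum_{i=1}^{N} r_i \;-\; \sum_{i=1}^{N}K_i r_i
  \;=\; 0,
```

since $\lambda=\sum_i K_i r_i/\sum_i r_i$ by definition. Hence every trajectory stays in the slice $\{\|r\|_{l^1}=\|r(0)\|_{l^1}\}$.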
To prevent finite time collapsing, we introduce a topological-combinatorial invariant (see Definition \ref{Def-chi-invariant}) \begin{equation} \chi(\hat{r},\mathcal{T})=\inf\limits_{\gamma\in{\mathbb{S}}^{N-1};\|\gamma\|_{l^1}=0\;}\sup\limits_{0\leq t< a_\gamma}\lambda(\hat{r}+t\gamma), \end{equation} where $\hat{r}$ is a constant curvature ball packing, and $a_{\gamma}$ is the least upper bound of $t$ such that $\hat{r}+t\gamma\in \mathcal{M}_{\mathcal{T}}$ for all $0\le t<a_\gamma$. By controlling the initial CRG-functional $\lambda(r(0))$, we can prevent finite time collapsing. In fact, we have the following result: \begin{theorem} \label{thm-intro-small-energ-converg} Assume there exists a constant curvature ball packing $\hat{r}$ and \begin{equation} \lambda(r(0))\leq\chi(\hat{r},\mathcal{T}). \end{equation} Then $r(t)$ exists on $[0,\infty)$ and converges exponentially fast to a constant curvature packing. \end{theorem} It is remarkable that the constant curvature packing $\hat{r}$ is unique up to scaling, since the curvature map $K$ is globally injective up to scaling by \cite{CR}\cite{Xu} (that is, if $K(r)=K(r')$, then $r'=cr$ for some $c>0$). Hence in the above theorem, $r(t)$ converges to some packing $c\hat{r}$, which is a scaling of $\hat{r}$. Since $\|r(t)\|_{l^1}=\sum_ir_i(t)$ is invariant along (\ref{Def-norm-Yamabe-Flow}), $c$ can be determined by $c\|\hat{r}\|_{l^1}=\|r(0)\|_{l^1}$. The above theorem is essentially a ``small energy convergence'' result. It is based on the facts that there exists a constant curvature packing $\hat{r}$ and, moreover, that $\hat{r}$ is stable. Inspired by the extension idea introduced by Bobenko, Pinkall and Springborn \cite{Bobenko}, systematically developed by Luo \cite{L2} and Luo and Yang \cite{LuoYang}, and then widely used by Ge and Jiang \cite{GJ1}-\cite{GJ4}, Ge and Xu \cite{ge-xu} and Xu \cite{Xu}, we provide an extension approach to handle finite time collapsing.
Given four balls $S_1$, $S_2$, $S_3$ and $S_4$ with radii $r_1$, $r_2$, $r_3$ and $r_4$, let $l_{ij}=r_i+r_j$ be the length of the edge $\{ij\}$, $i,j\in\{1,2,3,4\}$. In case $Q_{1234}>0$, which means that the six edges of $\{1234\}$ with lengths $l_{12},l_{13},l_{14},l_{23},l_{24},l_{34}$ form the edges of an Euclidean tetrahedron, denote by $\tilde{\alpha}_{ijkl}$ the real solid angle at the vertex $i\in\{1,2,3,4\}$. In case $Q_{1234}\leq0$, the lengths $l_{12},l_{13},l_{14},l_{23},l_{24},l_{34}$ cannot form the edge lengths of any Euclidean tetrahedron. In this case, set $\tilde{\alpha}_{ijkl}=2\pi$ if the ball $S_i$ goes through the gap between the three mutually tangent balls $S_j$, $S_k$ and $S_l$, where $\{i,j,k,l\}=\{1,2,3,4\}$, and set $\tilde{\alpha}_{ijkl}=0$ otherwise. The construction shows that $\tilde{\alpha}_{ijkl}$ is an extension of the solid angle $\alpha_{ijkl}$, which is defined only for those $r_1$, $r_2$, $r_3$ and $r_4$ with $Q_{1234}>0$. We call $\tilde{\alpha}_{ijkl}$ the extended solid angle. It is defined on $\mathds{R}^4_{>0}$ and is continuous by Lemma \ref{lemma-xu-extension} in Section \ref{section-extend-solid-angle}. Using the extended solid angle $\tilde{\alpha}$, we naturally obtain $\widetilde{K}$, a continuous extension of the curvature $K$, by \begin{equation}\label{Def-extend-curvature} \widetilde{K}_{i}= 4\pi-\sum_{\{ijkl\}\in \mathcal{T}_3}\tilde{\alpha}_{ijkl}. \end{equation} As a consequence, the CRG-functional $\lambda$ extends naturally to $\tilde{\lambda}=\sum_i \widetilde{K}_ir_i/\sum_ir_i$, which is called the \emph{extended CRG-functional}.
\begin{theorem}\label{Thm-main-2-extend-norm-Yamabe-flow} We can extend the combinatorial Yamabe flow (\ref{Def-norm-Yamabe-Flow}) to the following \begin{equation} \label{Def-introduct-extend-norm-Yamabe-flow} \frac{dr_i}{dt}=(\tilde{\lambda}-\widetilde{K}_i)r_i \end{equation} so that any solution to the above extended flow (\ref{Def-introduct-extend-norm-Yamabe-flow}) exists for all time $t\geq 0$. \end{theorem} In the following, we call every $r\in\mathcal{M}_\mathcal{T}$ a \emph{real ball packing}, and call every $r\in\mathds{R}^N_{>0}\setminus\mathcal{M}_\mathcal{T}$ a \emph{virtual ball packing}. If we mention a ball packing in this paper, we always mean a real ball packing. It can be shown (see Theorem \ref{thm-extend-flow-converg-imply-exist-const-curv-packing}) that if $\{r(t)\}_{t\geq0}$, a solution to (\ref{Def-introduct-extend-norm-Yamabe-flow}), converges to some $r_{\infty}\in\mathds{R}^N_{>0}$, then $r_{\infty}$ has constant (extended) curvature, either real or virtual. Conversely, if we assume the triangulation $\mathcal{T}$ is \emph{regular} (also called vertex transitive), i.e. a triangulation such that the same number of tetrahedrons meet at every vertex, one can deform any packing to a real one with constant curvature along the extended flow (\ref{Def-introduct-extend-norm-Yamabe-flow}). \begin{theorem}\label{Thm-main-3-converg-to-const-curv} Assume $\mathcal{T}$ is regular. Then the solution $r(t)$ to the extended flow (\ref{Def-introduct-extend-norm-Yamabe-flow}) converges exponentially fast to a real packing with constant curvature as $t$ goes to $+\infty$. \end{theorem} Generally, we can use an extended topological-combinatorial invariant $\tilde{\chi}(\hat{r},\mathcal{T})$ to control (without assuming $\mathcal{T}$ regular) $\tilde{\lambda}(r(0))$ and further control the behavior of the extended flow (\ref{Def-introduct-extend-norm-Yamabe-flow}) so that it converges to a real packing $\hat{r}$ with constant curvature. 
See Theorem \ref{Thm-tuta-xi-invariant-imply-converg} for more details. Inspired by the proof of Theorem \ref{Thm-main-3-converg-to-const-curv}, the following conjecture seems to be true. \begin{conjecture} Let $d_i$ be the vertex degree at $i$. Assume $|d_i-d_j|\leq 10$ for each $i,j\in V$. Then there exists a real or virtual ball packing with constant curvature. \end{conjecture} However, we can prove the following theorem, which builds a deep connection between the combinatorics of $\mathcal{T}$ and the geometry of $M$. \begin{theorem} If each vertex degree is no more than $11$, there exists a real or virtual ball packing with constant curvature. \end{theorem} In a future paper \cite{Ge-hua}, the techniques of this paper will be extended to hyperbolic ball packings, the geometry of which is somewhat different from the Euclidean case. If all vertex degrees are no more than $22$, there are no hyperbolic ball packings, real or virtual, with zero curvature. However, there always exists a ball packing (possibly virtual) with zero curvature if all degrees are no less than $23$. It is remarkable that these combinatorial-geometric results can be obtained by studying a hyperbolic version of the combinatorial Yamabe flow (\ref{Def-Flow-Glickenstein}). At the end of this section, we raise the following question, which is our ultimate aim: \begin{question} Characterize the image set $K(\mathcal{M}_\mathcal{T})$ of the curvature map $K$, and $\widetilde{K}(\mathds{R}^N_{>0})$ of the extended curvature map $\widetilde{K}$, using the combinatorics of $\mathcal{T}$ and the topology of $M$. \end{question} For compact surfaces with circle packings, Thurston \cite{T1} completely solved the above question: he characterized the image set of $K$ by a class of combinatorial and topological inequalities. However, we do not know how to approach it in three dimensions. The paper is organized as follows. In Section 2, we introduce some energy functionals and the combinatorial Yamabe invariant.
In Section 3, we first prove a ``small energy convergence'' result. We then introduce a topological-combinatorial invariant to control the convergence behavior of the normalized flow. In Section 4, we extend the definition of solid angles and hence of combinatorial curvatures. Using the extended energy functional, we solve the Combinatorial Yamabe Problem. In Section 5, we study the extended flow. We prove the long-term existence of solutions and show that any solution converges to a constant curvature real packing for regular triangulations. In the Appendix, we give the Schl\"{a}fli formula and the proof of a lemma used in the paper. \section{Combinatorial functionals and invariants} \subsection{Regge's Einstein-Hilbert functional} Given a smooth Riemannian manifold $(M, g)$, let $R$ be the smooth scalar curvature; then the smooth Einstein-Hilbert functional $\mathcal{E}(g)$ is defined by $$\mathcal{E}(g)=\int_MR d\mu_g.$$ It has been extensively studied due to its relations to general relativity, the Yamabe problem and geometric curvature flows. In order to quantize gravity, Regge \cite{Re} first suggested considering a discretization of the smooth Einstein-Hilbert functional, which is called Regge's Einstein-Hilbert functional by Champion, Glickenstein and Young \cite{CGY}. Regge's Einstein-Hilbert functional is usually called the Einstein-Hilbert action in Regge calculus by physicists. It is closely related to the gravity theory for simplicial geometry; see \cite{AMM,MM,Miler} for more description. We briefly review Regge's formulation. For a compact 3-dimensional manifold $M^3$ with a triangulation $\mathcal{T}$, a piecewise flat metric is a map $l:\mathcal{T}_1\rightarrow (0,+\infty)$ such that for every tetrahedron $\tau=\{ijkl\}\in \mathcal{T}_3$, the tetrahedron $\tau$ with edge lengths $l_{ij},l_{ik},l_{il},l_{jk},l_{jl},l_{kl}$ can be realized as a geometric tetrahedron in Euclidean space.
For any Euclidean tetrahedron $\{ijkl\}\in \mathcal{T}_3$, the dihedral angle at the edge $\{ij\}$ is denoted by $\beta_{ij,kl}$. If an edge is in the interior of the triangulation, the discrete Ricci curvature at this edge is $2\pi$ minus the sum of the dihedral angles at the edge. More specifically, denoting by $R_{ij}$ the discrete Ricci curvature at an edge $\{ij\}\in \mathcal{T}_1$, we have \begin{equation} R_{ij}=2\pi-\sum_{\{ijkl\}\in\mathcal{T}_3}\beta_{ij,kl}, \end{equation} where the sum is taken over all tetrahedrons with $\{ij\}$ as one of their edges. If the edge is on the boundary of the triangulation, then the curvature is instead $R_{ij}=\pi-\sum_{\{ijkl\}\in \mathcal{T}_3}\beta_{ij,kl}$. Using the discrete Ricci curvature, Regge's Einstein-Hilbert functional can be expressed as \begin{equation} \mathcal{E}(l)=\sum_{i\thicksim j}R_{ij}l_{ij}, \end{equation} where the sum is taken over all edges $\{ij\}\in \mathcal{T}_1$. It is notable that $\{l^2\}$, the space of all admissible piecewise flat metrics parameterized by $l_{ij}^2$, is a nonempty connected open convex cone. This was proved independently by the first author of this paper, Mei and Zhou \cite{GMZ} and by Schrader \cite{Sch}. In a forthcoming paper \cite{GJ5}, we will use this observation to prove that the space of all perpendicular ball packings (any two intersecting balls intersect perpendicularly) is the whole space $\mathds{R}^N_{>0}$. By contrast, the space of tangential ball packings (i.e. the ball packings considered in this paper) $\mathcal{M}_{\mathcal{T}}$ is non-convex. In the following, we will see that the non-convexity of the set $\mathcal{M}_{\mathcal{T}}$ is the main difficulty in making a local result global.
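As a concrete sketch (our own illustration, not code from the cited works), the snippet below computes the dihedral angles $\beta_{ij,kl}$ of a conformal tetrahedron from its edge lengths $l_{ij}=r_i+r_j$ by embedding the tetrahedron in $\mathds{R}^3$, and numerically verifies the Euclidean Schl\"{a}fli identity $\sum_{ij} l_{ij}\,d\beta_{ij}=0$ along a random perturbation of the radii.

```python
import math
import numpy as np

def embed(l):
    # Place vertices 0..3 in R^3 from the symmetric edge-length matrix l.
    p = np.zeros((4, 3))
    p[1, 0] = l[0][1]
    x2 = (l[0][1]**2 + l[0][2]**2 - l[1][2]**2) / (2.0 * l[0][1])
    y2 = math.sqrt(l[0][2]**2 - x2**2)
    p[2, :2] = [x2, y2]
    x3 = (l[0][1]**2 + l[0][3]**2 - l[1][3]**2) / (2.0 * l[0][1])
    y3 = (l[0][2]**2 + l[0][3]**2 - l[2][3]**2 - 2.0 * x2 * x3) / (2.0 * y2)
    p[3] = [x3, y3, math.sqrt(l[0][3]**2 - x3**2 - y3**2)]
    return p

def dihedral(p, a, b, c, d):
    # Dihedral angle along edge {a,b}: angle between the projections of
    # the opposite vertices c, d onto the plane orthogonal to the edge.
    e = p[b] - p[a]
    u = np.cross(e, p[c] - p[a])
    v = np.cross(e, p[d] - p[a])
    return math.acos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

EDGES = [(0, 1, 2, 3), (0, 2, 1, 3), (0, 3, 1, 2),
         (1, 2, 0, 3), (1, 3, 0, 2), (2, 3, 0, 1)]

def lengths(r):
    # Conformal edge lengths l_ij = r_i + r_j.
    return [[r[i] + r[j] for j in range(4)] for i in range(4)]

def betas(r):
    p = embed(lengths(r))
    return [dihedral(p, *edge) for edge in EDGES]

# Regular tetrahedron: every dihedral angle equals arccos(1/3).
beta_reg = betas([1.0, 1.0, 1.0, 1.0])[0]

# Schlaefli check: sum l_ij * d(beta_ij) = 0 along a random direction,
# approximated by central finite differences.
rng = np.random.default_rng(0)
r0 = 1.0 + 0.3 * rng.random(4)
dr = rng.standard_normal(4)
eps = 1e-6
b_plus, b_minus = betas(r0 + eps * dr), betas(r0 - eps * dr)
l0 = lengths(r0)
schlaefli = sum(l0[e[0]][e[1]] * (bp - bm) / (2.0 * eps)
                for e, bp, bm in zip(EDGES, b_plus, b_minus))
```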
\subsection{Cooper and Rivin's ``total scalar curvature'' functional} \label{section-cooper-rivin-func} For Euclidean triangulations coming from ball packings, Cooper and Rivin \cite{CR} introduced and carefully studied the following ``total scalar curvature'' functional \begin{equation} \label{def-cooper-rivin-funct} \mathcal{S}(r)=\sum_{i=1}^N K_i r_i, \;r\in \mathcal{M}_{\mathcal{T}}. \end{equation} Using this functional, they proved that the combinatorial conformal structure cannot be deformed (except by scaling) while keeping the solid angles fixed, or equivalently, that the set of conformal structures with prescribed solid angles is discrete. They further used this result to prove that the geometry of a ball packing of the sphere $\mathbb{S}^3$ whose nerve is a triangulation $\mathcal{T}$ is rigid up to M\"obius transformations. The following result is our observation. \begin{proposition} Given a triangulated manifold $(M^3, \mathcal{T})$, each real ball packing $r\in \mathcal{M}_{\mathcal{T}}$ induces a piecewise flat metric $l$ with $l_{ij}=r_i+r_j$. Moreover, $\mathcal{E}(l)=\mathcal{S}(r)$. \end{proposition} \begin{proof} Glickenstein \cite{G4} observed that for each vertex $i\in V$, $$K_i=\sum_{j:j\thicksim i}R_{ij},$$ which can be proved by using Euler's characteristic formula for balls. It then follows that $$\sum_{i=1}^NK_ir_i=\sum_{i=1}^N\sum_{j:j\thicksim i}R_{ij}r_i=\sum_{j\thicksim i}R_{ij}(r_i+r_j)=\sum_{j\thicksim i}R_{ij}l_{ij}.$$ \end{proof} \begin{lemma}\label{Lemma-Lambda-semi-positive} (\cite{CR, Ri, G2}) Given a triangulated manifold $(M^3, \mathcal{T})$, consider Cooper and Rivin's functional $\mathcal{S}=\sum K_ir_i$. The classical Schl\"{a}fli formula says that $d\mathcal{S}=\sum K_idr_i$. This implies $\partial_{r_i}\mathcal{S}=K_i$, or \begin{equation} \nabla_r\mathcal{S}=K, \end{equation} in column vector form.
If we denote \begin{equation} \Lambda=Hess_r\mathcal{S}= \frac{\partial(K_{1},\cdots,K_{N})}{\partial(r_{1},\cdots,r_{N})}, \end{equation} then $\Lambda$ is positive semi-definite with rank $N-1$, and the kernel of $\Lambda$ is the linear space spanned by the vector $r$. \end{lemma} Glickenstein \cite{G1} calculated the entries of the matrix $\Lambda$ in detail and found a new dual structure for conformal tetrahedrons. It was also his insight to interpret $\Lambda$ as a type of combinatorial Laplace operator and to derive a discrete maximum principle for his curvature flow (\ref{Def-Flow-Glickenstein}). For each $x\in\mathds{R}^N$, denote $\|x\|_{l^1}=\sum_{i=1}^N|x_i|$. The following lemma was essentially stated by Cooper and Rivin in their pioneering study of ball packings \cite{CR}. Since the proof is elementary and independent of the rest of this paper, we postpone it to the appendix. \begin{lemma}\label{Lemma-Lambda-positive} (\cite{CR}) Cooper and Rivin's functional $\mathcal{S}$ is strictly convex on $ \mathcal{M}_{\mathcal{T}}\cap \{r\in\mathds{R}^N_{>0}:\|r\|_{l^1}=1\}$. If we restrict $\mathcal{S}$ to the hyperplane $\{x\in\mathds{R}^N:\|x\|_{l^1}=1\}$, its Hessian $\Lambda'$ is strictly positive definite (the precise meaning of $\Lambda'$ is given in Appendix \ref{appendix-2}). \end{lemma} \subsection{Glickenstein's ``average scalar curvature'' functional} If $\hat{r}\in \mathcal{M}_{\mathcal{T}}$ is a real ball packing with constant curvature, then the combinatorial scalar curvature $K_i(\hat{r})$ at each vertex $i\in V$ equals the constant $\mathcal{S}(\hat{r})/\|\hat{r}\|_{l^1}$. Inspired by this, Glickenstein suggested considering the following ``average scalar curvature'' \begin{equation}\label{Def-3d-Yamabe-functional} \lambda(r)=\frac{\mathcal{S}}{\|r\|_{l^1}}=\frac{\sum_{i=1}^NK_ir_i}{\sum_{i=1}^Nr_i}, \ \ r\in \mathcal{M}_{\mathcal{T}}.
\end{equation} Note that this functional is called the Cooper-Rivin-Glickenstein functional and abbreviated as the CRG-functional in Section \ref{section-main-result}. For a Riemannian manifold $(M, g)$, the smooth average scalar curvature is $\frac{\int_M R d\mu_g}{\int_M d\mu_g}$. For a triangulated manifold $(M^3,\mathcal{T})$, we consider $r_i$ as a volume element, which is a combinatorial analogue of $d\mu_g$. In this sense, $\|r\|_{l^1}$ is an appropriate combinatorial volume of a ball packing $r$, and Glickenstein's ``average scalar curvature'' functional (\ref{Def-3d-Yamabe-functional}) is an appropriate combinatorial analogue of the smooth average scalar curvature $\frac{\int_M R d\mu_g}{\int_M d\mu_g}$. Note that the functional $\lambda(r)=\mathcal{S}/\|r\|_{l^1}$ is also a normalization of Cooper and Rivin's functional $\mathcal{S}$ and, to some extent, looks like a combinatorial version of the smooth normalized Einstein-Hilbert functional $\frac{\int_M R d\mu_g}{(\int_M d\mu_g)^\frac{1}{3}}$. It is remarkable that the CRG-functional (\ref{Def-3d-Yamabe-functional}) can be generalized to order $\alpha$. In fact, the first author of this paper and Xu \cite{GX3} defined the $\alpha$-functional \begin{equation}\label{Def-3d-alpha-Yamabe-functional} \lambda_{\alpha}(r)=\frac{\mathcal{S}}{\|r\|_{\alpha+1}}=\frac{\sum_{i=1}^NK_ir_i}{\big(\sum_{i=1}^Nr_i^{\alpha+1}\big)^{\frac{1}{\alpha+1}}}, \ \ r\in \mathcal{M}_{\mathcal{T}}, \end{equation} for each $\alpha\in \mathds{R}$ with $\alpha\neq-1$. The CRG-functional $\lambda(r)$ is then exactly the $0$-functional defined above. The major aim of introducing the $\alpha$-functional is to study the $\alpha$-curvature $R_{\alpha,i}=K_i/r_i^{\alpha}$. The critical points of the $\alpha$-functional are exactly the ball packings with constant $\alpha$-curvature.
The first author of this paper and his collaborators carefully explained the motivation to study the $\alpha$-curvature, and particularly the $\alpha=2$ case; see \cite{GeMa}\cite{GX1}-\cite{GX4}. We will pursue the deformation of ball packings towards constant (or prescribed) $\alpha$-curvatures in subsequent studies. \subsection{The combinatorial Yamabe invariant} Inspired by the above analogy, we introduce some combinatorial invariants here. \begin{definition}\label{Def-alpha-normalize-Regge-functional} The combinatorial Yamabe invariant with respect to $\mathcal{T}$ is defined as \begin{equation}\label{def-Y-T} Y_{\mathcal{T}}=\inf_{r\in \mathcal{M}_{\mathcal{T}}} \lambda(r), \end{equation} and the combinatorial Yamabe constant of $M$ is defined as $Y_{M}=\sup\limits_{\mathcal{T}}\inf\limits_{r\in \mathcal{M}_{\mathcal{T}}} \lambda(r).$ \end{definition} For a fixed triangulation $\mathcal{T}$, all $K_i$ are uniformly bounded in terms of the topology of $M$ and the combinatorics of $\mathcal{T}$, by definition (\ref{Def-CR curvature}). Note that $|\lambda(r)|\leq\|K\|_{l^\infty}$ for every ball packing $r\in\mathcal{M}_{\mathcal{T}}$. Hence $Y_{\mathcal{T}}$ is well defined and finite; it depends on the triangulation $\mathcal{T}$ and on $M$. Moreover, $Y_M$ is also well defined, but we do not know whether it is finite. One can design algorithms to calculate $Y_{\mathcal{T}}$ by minimizing $\lambda$ in $\mathcal{M}_{\mathcal{T}}$; however, we do not know how to design algorithms to compute $Y_M$. \begin{lemma} \label{lemma-const-curv-equl-cirtical-point} Let $r\in\mathcal{M}_{\mathcal{T}}$ be a real ball packing. Then $r$ has constant combinatorial scalar curvature if and only if it is a critical point of the CRG-functional $\lambda(r)$. \end{lemma} \begin{proof} This can be shown easily from \begin{equation} \partial_{r_i}\lambda=\frac{K_i-\lambda}{\|r\|_{l^1}}.
\end{equation} \end{proof} By Lemma \ref{Lemma-Lambda-positive}, the constant curvature ball packings are isolated in $\mathcal{M}_{\mathcal{T}}\cap \{r\in\mathds{R}^N: \sum_{i=1}^Nr_i=1\}$. Equivalently, except for a scaling of the radii, one cannot deform a ball packing continuously so that its curvature remains constant. Recently, Xu \cite{Xu} proved the ``\emph{global rigidity}'' of ball packings, i.e. the curvature map $K:\mathcal{M}_{\mathcal{T}}\to \mathds{R}^N$, $r\mapsto (K_1,\cdots,K_N)$, is injective if one ignores the scalings of ball radii. Thus a ball packing is determined by its curvature up to scaling. As a consequence, the ball packing with constant curvature (if it exists) is unique up to scaling. Note that Xu's global rigidity can be derived directly from Theorem \ref{thm-extend-xu-rigid}. Until we prove Theorem \ref{thm-extend-xu-rigid}, we will not assume the global rigidity of $K$ a priori in proving the results of this paper, so as to keep the paper self-contained. Xu's global rigidity shows the uniqueness of the packing with constant curvature. The Combinatorial Yamabe Problem asks whether there exists a ball packing with constant curvature and, if so, how to find it. We give a glimpse into this problem with the help of the CRG-functional and the combinatorial Yamabe invariant. \begin{theorem}\label{Thm-Q-min-iff-exist-const-curv-metric} Consider the following four statements: \begin{description} \item[(1)] There exists a real ball packing $\hat{r}$ with constant curvature. \item[(2)] The CRG-functional $\lambda(r)$ has a local minimum in $\mathcal{M}_{\mathcal{T}}$. \item[(3)] The CRG-functional $\lambda(r)$ has a global minimum in $\mathcal{M}_{\mathcal{T}}$. \item[(4)] The combinatorial Yamabe invariant $Y_{\mathcal{T}}$ is attained by some real ball packing. \end{description} Then $(3)\Leftrightarrow(4)\Rightarrow(1)\Leftrightarrow(2)$. As a consequence, we get $\lambda(\hat{r})\geq Y_{\mathcal{T}}$ for any ball packing $\hat{r}$ with constant curvature.
\end{theorem} \begin{remark} Later we will prove ``$(2)\Rightarrow(3)$". It then follows that $\lambda(\hat{r})=Y_{\mathcal{T}}$ for any real ball packing $\hat{r}$ with constant curvature. \end{remark} \begin{proof} (3) and (4) say the same thing, and both imply (2). We prove $(1)\Leftrightarrow(2)$ below. $(1)\Rightarrow(2)$: Let $\hat{r}\in \mathcal{M}_{\mathcal{T}}$ be a real ball packing with constant curvature $c=K_i(\hat{r})$ for all $i\in V$. Since $K$ is scale invariant, we may assume $\|\hat{r}\|_{l^1}=1$. Consider the functional $$\mathcal{S}_c=\mathcal{S}-c\sum_{i=1}^Nr_i=\sum_{i=1}^N(K_i-c)r_i.$$ By the Schl\"{a}fli formula $\sum_{j\thicksim i}l_{ij}d\beta_{ij}=0$, or by Lemma \ref{Lemma-Lambda-semi-positive}, we obtain $\partial_{r_i}\mathcal{S}_c=K_i-c$. This implies that $\hat{r}$ is a critical point of the functional $\mathcal{S}_c$. Further, $Hess_r\mathcal{S}_c=\Lambda$, so $\mathcal{S}_c$ is strictly convex when restricted to the hyperplane $\{r\in\mathds{R}^N: \|r\|_{l^1}=1\}$. Hence $\hat{r}$ is a local minimum point of $\mathcal{S}_c$ on this hyperplane; since $\lambda=\mathcal{S}_c+c$ on this hyperplane and $\lambda$ is scale invariant, $\hat{r}$ is a local minimum point of $\lambda$ in $\mathcal{M}_{\mathcal{T}}$. $(2)\Rightarrow(1)$: Assume $\hat{r}\in \mathcal{M}_{\mathcal{T}}$ is a local minimum point of the CRG-functional $\lambda(r)$; then it is a critical point of $\lambda(r)$. Let $\hat{K}$ be the curvature at $\hat{r}$, and $\hat{\lambda}$ the value of the CRG-functional at $\hat{r}$. From $\partial_{r_i}\lambda=\|r\|^{-1}_{l^1}(K_i-\lambda)$, we see $\hat{K}_i=\hat{\lambda}$ for every $i\in V$. Hence $\hat{r}$ is a real ball packing with constant curvature. \end{proof} In fact, $(1)$, $(2)$, $(3)$ and $(4)$ are all equivalent; we will prove ``$(1)\Rightarrow(3)$" in Section \ref{subsection-converg-to-const} (see Corollary \ref{corollary-Q-2}). The global rigidity ($K$ is globally injective up to scaling) is a global result; however, we cannot use it directly to conclude that a local minimum point $\hat{r}$ is a global one.
The difficulty is that the combinatorial conformal class $\mathcal{M}_{\mathcal{T}}$ is not convex; this is the main obstacle to passing from a local result to a global one. Consider the procedures from Guo \cite{Guo} to Luo \cite{L2}, from Luo \cite{L1} to Bobenko, Pinkall and Springborn \cite{Bobenko}, and from Cooper and Rivin \cite{CR} to Xu \cite{Xu}, all of which pass from local to global: the former works show that the curvature map $K$, in three different settings, is locally injective, while the latter show that $K$ is globally injective. These works are all based on an extension technique, which will be formulated carefully in Section \ref{section-extend-K}. Assume the equivalence of $(1)$, $(2)$, $(3)$ and $(4)$. If any one of them holds, then the real ball packing $\hat{r}$ with constant curvature (if it exists) is the unique (up to scaling) minimum of the CRG-functional $\lambda(r)$. One can design algorithms to minimize $\lambda(r)$, or to minimize $\mathcal{S}(r)$ under the constraint $\sum_{i\in V}r_i=1$. If $\lambda$ indeed has a global minimum in $\mathcal{M}_{\mathcal{T}}$, then this minimum value is exactly $Y_{\mathcal{T}}$, and as a consequence, the Combinatorial Yamabe Problem is solvable. Otherwise, the Combinatorial Yamabe Problem has no solution. \section{A combinatorial Yamabe flow with normalization} \subsection{A normalization of Glickenstein's flow} Set $u_i=\ln r_i$, and $u=(u_1,\cdots, u_N)$. Then Glickenstein's flow (\ref{Def-Flow-Glickenstein}) can be abbreviated as the ODE $\dot{u}=-K$. A critical point (equilibrium) of this ODE is a real packing $r^*$ with $K(r^*)=0$. Thus it makes good sense to use Glickenstein's flow (\ref{Def-Flow-Glickenstein}) if one wants to deform the combinatorial scalar curvatures to zero.
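The minimization remark above can be sketched as a simple descent iteration in the coordinates $u_i=\ln r_i$, stepping against $K_i-\lambda$ (the direction of the normalized flow introduced just below). Since evaluating the genuine Cooper-Rivin curvature requires a full triangulation, the sketch below plugs in a toy scale-invariant ``curvature'' $K_i=\ln(N r_i/\|r\|_{l^1})$, a stand-in of our own and not the paper's curvature; the routine itself only assumes that the supplied callback is scale invariant.

```python
import math

def minimize_crg(curvature, r0, eta=0.2, steps=500):
    """Descent for lambda = sum(K_i r_i)/sum(r_i) in the coordinates
    u_i = log r_i, stepping along -(K_i - lambda)."""
    u = [math.log(x) for x in r0]
    for _ in range(steps):
        r = [math.exp(x) for x in u]
        K = curvature(r)
        lam = sum(k * x for k, x in zip(K, r)) / sum(r)
        u = [x - eta * (k - lam) for x, k in zip(u, K)]
    r = [math.exp(x) for x in u]
    K = curvature(r)
    return r, K, sum(k * x for k, x in zip(K, r)) / sum(r)

# Toy scale-invariant "curvature" (an assumption, standing in for the
# Cooper-Rivin curvature): K_i = log(N r_i / ||r||_1).
def toy_curvature(r):
    s, n = sum(r), len(r)
    return [math.log(n * x / s) for x in r]

r, K, lam = minimize_crg(toy_curvature, [1.0, 2.0, 3.0, 4.0])
spread = max(K) - min(K)   # both lam and spread tend to 0 for this toy energy
```

For this toy energy the iteration contracts the differences $u_i-u_j$ by the exact factor $1-\eta$ per step, so it converges to the uniform packing, where all $K_i=\lambda=0$.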
However, Glickenstein's flow (\ref{Def-Flow-Glickenstein}) is not appropriate for deforming the combinatorial scalar curvature to a general constant $\mathcal{S}/\|r\|_{l^1}$ (recall we have shown that, if $K_i\equiv c$ for a constant $c$, then $c$ equals $\mathcal{S}/\|r\|_{l^1}$). It is natural to consider the following normalization of Glickenstein's flow. \begin{definition} Given a triangulated 3-manifold $(M, \mathcal{T})$, the normalized combinatorial Yamabe flow is \begin{equation}\label{Def-r-normal-Yamabe flow} \frac{dr_i}{dt}=(\lambda-K_i)r_i. \end{equation} \end{definition} The normalized combinatorial Yamabe flow (\ref{Def-r-normal-Yamabe flow}) is due to Glickenstein; it was first introduced in his thesis \cite{G0}. With the help of the coordinate change $u_i=\ln r_i$, we rewrite (\ref{Def-r-normal-Yamabe flow}) as the autonomous ODE system \begin{equation}\label{Def-u-normal-Yamabe flow} \frac{du_i}{dt}=\lambda-K_i. \end{equation} We explain the meaning of ``normalization'': the normalized flow (\ref{Def-r-normal-Yamabe flow}) and the un-normalized flow (\ref{Def-Flow-Glickenstein}) differ only by a change of scale in space. Let $t, r, K$ denote the variables for the flow (\ref{Def-Flow-Glickenstein}), and $t, \tilde{r}, \tilde{K}$ those for the flow (\ref{Def-r-normal-Yamabe flow}). Suppose $r(t), t\in [0,T),$ is a solution of (\ref{Def-Flow-Glickenstein}). Set $\tilde{r}(t)=\varphi(t)r(t)$, where $$\varphi(t)=e^{\int_0^t\lambda(r(s))ds}.$$ Since $K$ and $\lambda$ are scale invariant, we have $\tilde{K}(t)=K(t)$ and $\tilde{\lambda}(t)=\lambda(t)$, and it follows that $\frac{d\tilde{r_i}}{dt}=(\tilde{\lambda}-\tilde{K_i}(t))\tilde{r_i}(t)$. Conversely, if $\tilde{r}(t), t\in [0,T),$ is a solution of (\ref{Def-r-normal-Yamabe flow}), set $r(t)=e^{-\int_0^t\tilde{\lambda}(\tilde{r}(s))ds}\tilde{r}(t)$. Then it is easy to check that $d r_i/dt=-K_ir_i$.
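As a concrete illustration of the flows (\ref{Def-r-normal-Yamabe flow}) and (\ref{Def-u-normal-Yamabe flow}), the following numerical sketch is our own construction; the choice of triangulation and the embedding formulas are assumptions, not taken from the paper. It runs a forward-Euler discretization on the boundary complex of the $4$-simplex, the $5$-vertex triangulation of $S^3$ whose tetrahedra are all $4$-subsets of the vertices. The curvature is computed as $K_i=4\pi$ minus the sum of the solid angles at $i$, with each tetrahedron embedded in $\mathds{R}^3$ from its edge lengths $l_{ij}=r_i+r_j$ and solid angles evaluated by the Van Oosterom--Strackee formula. Along the iteration, $\|r\|_{l^1}$ stays numerically constant, $\lambda$ decreases, and the curvatures approach a common constant.

```python
import math
from itertools import combinations

def solid_angles(r):
    """Solid angles of the Euclidean tetrahedron with edge lengths
    l_ab = r[a] + r[b] (assumes nondegeneracy, Q > 0)."""
    l = lambda a, b: r[a] + r[b]
    # embed the four vertices in R^3 from the six edge lengths
    A = (0.0, 0.0, 0.0)
    B = (l(0, 1), 0.0, 0.0)
    xC = (l(0, 1) ** 2 + l(0, 2) ** 2 - l(1, 2) ** 2) / (2 * l(0, 1))
    yC = math.sqrt(l(0, 2) ** 2 - xC ** 2)
    C = (xC, yC, 0.0)
    xD = (l(0, 1) ** 2 + l(0, 3) ** 2 - l(1, 3) ** 2) / (2 * l(0, 1))
    yD = (l(0, 3) ** 2 + l(0, 2) ** 2 - l(2, 3) ** 2 - 2 * xD * xC) / (2 * yC)
    D = (xD, yD, math.sqrt(l(0, 3) ** 2 - xD ** 2 - yD ** 2))
    P = [A, B, C, D]
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    def angle(i):
        # Van Oosterom-Strackee formula for the solid angle at vertex i
        v = [tuple(P[j][k] - P[i][k] for k in range(3)) for j in range(4) if j != i]
        n = [math.sqrt(dot(w, w)) for w in v]
        cross = (v[1][1]*v[2][2] - v[1][2]*v[2][1], v[1][2]*v[2][0] - v[1][0]*v[2][2],
                 v[1][0]*v[2][1] - v[1][1]*v[2][0])
        num = abs(dot(v[0], cross))
        den = (n[0]*n[1]*n[2] + dot(v[0], v[1])*n[2]
               + dot(v[0], v[2])*n[1] + dot(v[1], v[2])*n[0])
        return 2.0 * math.atan2(num, den)
    return [angle(i) for i in range(4)]

def curvatures(r):
    """Cooper-Rivin curvature K_i = 4*pi minus the incident solid angles,
    on the boundary complex of the 4-simplex (5 vertices, 5 tetrahedra)."""
    K = [4.0 * math.pi] * len(r)
    for tet in combinations(range(len(r)), 4):
        for v, a in zip(tet, solid_angles([r[v] for v in tet])):
            K[v] -= a
    return K

# forward-Euler discretization of du_i/dt = lambda - K_i
r = [1.0, 1.1, 0.9, 1.05, 0.95]          # perturbed packing, still nondegenerate
vol0 = sum(r)
K = curvatures(r)
lam0 = sum(k * x for k, x in zip(K, r)) / sum(r)
spread0 = max(K) - min(K)
h = 0.001
for _ in range(3000):
    K = curvatures(r)
    lam = sum(k * x for k, x in zip(K, r)) / sum(r)
    r = [x * math.exp(h * (lam - k)) for x, k in zip(r, K)]
K = curvatures(r)
lam1 = sum(k * x for k, x in zip(K, r)) / sum(r)
spread1 = max(K) - min(K)
```

For equal radii every tetrahedron is regular and, by symmetry, $K_i=8\pi-12\arccos(1/3)\approx 10.3612$ for each of the five vertices, which provides a useful correctness check for the solid-angle code.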
In the space $\mathcal{M}_{\mathcal{T}}$, the coefficient $r_i(\lambda-K_i)$, as a function of $r=(r_1,\cdots,r_N)$, is smooth and hence locally Lipschitz continuous. By the Picard-Lindel\"{o}f theorem in classical ODE theory, the normalized flow (\ref{Def-r-normal-Yamabe flow}) has a unique solution $r(t)$, $t\in[0,\epsilon)$ for some $\epsilon>0$. As a consequence, we have \begin{proposition} Given a triangulated 3-manifold $(M, \mathcal{T})$, for any initial packing $r(0)\in\mathcal{M}_{\mathcal{T}}$, the solution $\{r(t):0\leq t<T\}\subset\mathcal{M}_{\mathcal{T}}$ to the normalized flow (\ref{Def-r-normal-Yamabe flow}) uniquely exists on a maximal time interval $t\in[0, T)$ with $0<T\leq+\infty$. \end{proposition} We record some other elementary properties of the normalized flow (\ref{Def-r-normal-Yamabe flow}). \begin{proposition}\label{prop-V-descending} Along the normalized flow (\ref{Def-r-normal-Yamabe flow}), $\|r(t)\|_{l^1}$ is invariant. Both the Cooper-Rivin functional $\mathcal{S}(r)$ and the CRG-functional $\lambda(r)$ are descending. \end{proposition} \begin{proof} The conclusions follow from the direct calculations $d\|r(t)\|_{l^1}/dt=0$ and \begin{align*} \frac{d\lambda(r(t))}{dt}&=\sum_i\partial_{r_i}\lambda(r(t))\frac{dr_i(t)}{dt}\\ &=\sum_ir_i(\lambda-K_i)\|r\|_{l^1}^{-1}(K_i-\lambda)\\ &=-\|r\|_{l^1}^{-1}\sum_ir_i(K_i-\lambda)^2\leq0. \end{align*} \end{proof} \begin{proposition}\label{prop-G-bound} The CRG-functional $\lambda(r)$ is uniformly bounded by a constant depending only on the triangulation $\mathcal{T}$. The Cooper-Rivin functional $\mathcal{S}(r)$ is bounded along the normalized flow (\ref{Def-r-normal-Yamabe flow}).
\end{proposition} \begin{proof} There is a constant $c(\mathcal{T})>0$, depending only on $M$ and $\mathcal{T}$, such that for all $r\in \mathcal{M}_{\mathcal{T}}$, $$|\lambda(r)|\leq\|K\|_{l^{\infty}}\leq c(\mathcal{T}).$$ Along the normalized flow (\ref{Def-r-normal-Yamabe flow}), by Proposition \ref{prop-V-descending} we have $$|\mathcal{S}(r(t))|\leq\|K\|_{l^{\infty}}\sum_ir_i(t)\leq\|K\|_{l^{\infty}}\sum_ir_i(0).$$ \end{proof} \begin{proposition}\label{prop-converg-imply-const-exist} Let $r(t)$ be the unique solution of the normalized flow (\ref{Def-r-normal-Yamabe flow}). If $r(t)$ exists for all time $t\geq0$ and converges to a real ball packing $r_\infty\in\mathcal{M}_{\mathcal{T}}$, then $r_\infty$ has constant combinatorial scalar curvature. \end{proposition} \begin{proof} This is a standard conclusion in classical ODE theory; here we prove it directly. Since all $K_i(r)$ are continuous functions of $r$, $K_i(r(t))$ converges to $K_i(r_\infty)$ as $t$ goes to infinity. From Proposition \ref{prop-V-descending} and Proposition \ref{prop-G-bound}, the CRG-functional $\lambda(r(t))$ is descending and bounded below, hence converges to $\lambda(r_\infty)$. By the mean value theorem, there is a sequence of times $t_n\uparrow+\infty$ with $t_n\in(n,n+1)$, such that $$u_i(n+1)-u_i(n)=u_i'(t_n)=\lambda(r(t_n))-K_i(r(t_n)).$$ Since $u_i(t)$ converges, the left-hand side tends to $0$, and hence $K_i(r(t_n))\to\lambda(r_\infty)$. This leads to $K_i(r_\infty)=\lambda(r_\infty)$ for each $i\in V$. Hence $r_\infty$ has constant combinatorial scalar curvature. \end{proof} \begin{remark} If the normalized combinatorial Yamabe flow (\ref{Def-r-normal-Yamabe flow}) converges, then Proposition \ref{prop-converg-imply-const-exist} says that the Combinatorial Yamabe Problem is solvable. \end{remark} \subsection{Singularities of the solution} Let $\{r(t):0\leq t<T\}$ be the unique solution to the normalized flow (\ref{Def-r-normal-Yamabe flow}) on a right maximal time interval $[0, T)$ with $0<T\leq+\infty$.
If the solution $r(t)$ does not converge, we say that $r(t)$ develops singularities at time $T$. By Proposition \ref{prop-converg-imply-const-exist}, if there exists no ball packing with constant curvature, then $r(t)$ definitely develops singularities at $T$. Numerical simulations show that the solution $r(t)$ may develop singularities even when constant curvature ball packings exist. To study the long-term existence and convergence of the solutions of the normalized flow (\ref{Def-r-normal-Yamabe flow}), we need to classify the solutions according to the singularities they develop. Intuitively, when singularities develop, $r(t)$ touches the boundary of $\mathcal{M}_{\mathcal{T}}$ as $t\uparrow T$. Roughly speaking, the boundary of $\mathcal{M}_{\mathcal{T}}$ can be classified into three types. The first type is the ``\emph{0 boundary}'': $r(t)$ touches the ``0 boundary'' if there exists a sequence of times $t_n\uparrow T$ and a vertex $i\in V$ so that $r_i(t_n)\to 0$. The second type is the ``\emph{$+\infty$ boundary}'': $r(t)$ touches the ``$+\infty$ boundary'' if there exists $t_n\uparrow T$ and a vertex $i\in V$ so that $r_i(t_n)\to +\infty$. The last type is the ``\emph{tetrahedron collapsing boundary}'': in this case, there exists $t_n\uparrow T$ and a tetrahedron $\{ijkl\}\in \mathcal{T}$, such that the inequality $Q_{ijkl}>0$ fails in the limit as $n\to+\infty$. At first glance the limit behavior of $r(t)$ as $t\uparrow T$ may mix the three types and may be very complicated. We show that in any finite time interval, $r(t)$ touches neither the ``0 boundary'' nor the ``$+\infty$ boundary''. \begin{proposition} \label{prop-no-finite-time-I-singula} The normalized flow (\ref{Def-r-normal-Yamabe flow}) touches neither the ``0 boundary'' nor the ``$+\infty$ boundary'' in finite time. \end{proposition} \begin{proof} Note that for every vertex $i\in V$, $|\lambda-K_i|$ is uniformly bounded by a constant $c(\mathcal{T})>0$, which depends only on the triangulation.
Hence $$r_i(0)e^{-c(\mathcal{T})t}\leq r_i(t)\leq r_i(0)e^{c(\mathcal{T})t},$$ which implies that $r_i(t)$ cannot go to zero or $+\infty$ in finite time. \end{proof} Using Glickenstein's monotonicity condition \cite{G2}, which reads \begin{equation}\label{Def-monotonicity} r_i\le r_j ~\mbox{ if and only if } K_i\le K_j \end{equation} for a tetrahedron $\{ijkl\}$, we can prove the following proposition, which is essentially due to Glickenstein. \begin{proposition} \label{prop-glick-mc-condition} Consider the normalized flow (\ref{Def-r-normal-Yamabe flow}) on a given $(M^3,\mathcal{T})$, and assume the maximal existence time is $T$. If for all $t\in [0,T)$ and each tetrahedron, $r(t)$ satisfies the monotonicity condition (\ref{Def-monotonicity}), then $T=\infty$. \end{proposition} \begin{proof} We argue by contradiction. Assume $T<\infty$. By Proposition \ref{prop-no-finite-time-I-singula}, the flow (\ref{Def-r-normal-Yamabe flow}) does not touch the ``0 boundary'' or the ``$+\infty$ boundary'' in finite time. So we only need to rule out the case that $r(t)$ touches the ``tetrahedron collapsing boundary''. Following the method used in \cite{G2} and using the assumption (\ref{Def-monotonicity}), we just need to show $Q_{ijkl}>0$ for every tetrahedron $\{ijkl\}$. Write $Q=Q_{ijkl}$ for short; no confusion will arise. To show $Q_{ijkl}>0$, it suffices to show that $Q=0$ implies $\frac{dQ}{dt}> 0$. By direct calculation, \begin{equation}\label{derivative_of_Q} \frac{\partial Q}{\partial r_i}=-\frac{2}{r_i^2}\left(\frac{1}{r_j}+\frac{1}{r_k}+\frac{1}{r_l}-\frac{1}{r_i}\right).
\end{equation} Then, arguing as in \cite{G2}, we have $$2Q=-\left(\frac{\partial Q}{\partial r_i} r_i+\frac{\partial Q}{\partial r_j} r_j+\frac{\partial Q}{\partial r_k} r_k+\frac{\partial Q}{\partial r_l} r_l\right),$$ since $Q$ is homogeneous of degree $-2$ in $r$. If $Q=0$, then $$\frac{\partial Q}{\partial r_i} r_i+\frac{\partial Q}{\partial r_j} r_j+\frac{\partial Q}{\partial r_k} r_k+\frac{\partial Q}{\partial r_l} r_l=0.$$ Along the normalized flow (\ref{Def-r-normal-Yamabe flow}), we have \begin{align*} \frac{dQ}{dt}&=\frac{\partial Q}{\partial r_i} \frac{dr_i}{dt}+\frac{\partial Q}{\partial r_j} \frac{dr_j}{dt}+\frac{\partial Q}{\partial r_k} \frac{dr_k}{dt}+\frac{\partial Q}{\partial r_l} \frac{dr_l}{dt}\\ &=\frac{\partial Q}{\partial r_i}r_i(\lambda-K_i)+\frac{\partial Q}{\partial r_j}r_j(\lambda-K_j)+\frac{\partial Q}{\partial r_k} r_k(\lambda-K_k)+\frac{\partial Q}{\partial r_l}r_l(\lambda-K_l)\\ &=-\left(\frac{\partial Q}{\partial r_i} K_i r_i+\frac{\partial Q}{\partial r_j}K_j r_j+\frac{\partial Q}{\partial r_k} K_k r_k+\frac{\partial Q}{\partial r_l} K_l r_l\right)\\ &=-\left(\frac{\partial Q}{\partial r_j}(K_j-K_i) r_j+\frac{\partial Q}{\partial r_k}( K_k-K_i) r_k+\frac{\partial Q}{\partial r_l} (K_l-K_i) r_l\right). \end{align*} If $r_i$ is the minimum, then by (\ref{derivative_of_Q}), $\frac{\partial Q}{\partial r_j}< 0$ for $j\ne i$. So the assumption (\ref{Def-monotonicity}) implies that $\frac{dQ}{dt}\ge 0$ whenever $Q=0$, and $\frac{dQ}{dt}=0$ if and only if $$K_i=K_j=K_k=K_l.$$ Using the assumption (\ref{Def-monotonicity}) again, the latter forces $r_i=r_j=r_k=r_l$, but then $Q=\frac{8}{r_i^2}>0$, contradicting $Q=0$. So $\frac{dQ}{dt}>0$ whenever $Q=0$, and hence $Q$ never reaches $0$ along the flow. Thus we have $T=\infty$. \end{proof} \subsection{Small energy convergence} Let $\{r(t)\}_{t\geq0}$ be the unique solution to the normalized flow (\ref{Def-r-normal-Yamabe flow}). We call it \emph{nonsingular} if $\{r(t)\}_{t\geq0}$ is contained in a compact subset of $\mathcal{M}_{\mathcal{T}}$.
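As an aside, the algebraic facts about $Q$ used in the proof of Proposition \ref{prop-glick-mc-condition} can be spot-checked numerically. The sketch below assumes the standard explicit formula $Q_{1234}=\big(\sum_i \tfrac{1}{r_i}\big)^2-2\sum_i \tfrac{1}{r_i^2}$ (our reading of (\ref{nondegeneracy condition}), reproduced here as an assumption) and verifies the derivative formula (\ref{derivative_of_Q}), the Euler identity $2Q=-\sum_i r_i\,\partial Q/\partial r_i$ from homogeneity of degree $-2$, the value $Q=8/r_i^2$ at equal radii, and the sign claim $\partial Q/\partial r_j<0$ when $r_i$ is minimal.

```python
import math

def Q(r):
    # standard nondegeneracy quantity for four mutually tangent balls (assumed form)
    s = sum(1.0 / x for x in r)
    return s * s - 2.0 * sum(1.0 / (x * x) for x in r)

def dQ(r, i):
    """The partial derivative formula (derivative_of_Q)."""
    rest = sum(1.0 / r[j] for j in range(4) if j != i)
    return -(2.0 / r[i] ** 2) * (rest - 1.0 / r[i])

r = [0.7, 1.0, 1.3, 2.0]   # r[0] is the strict minimum

# (i) the closed form matches a centered finite difference
for i in range(4):
    eps = 1e-6
    rp = list(r); rp[i] += eps
    rm = list(r); rm[i] -= eps
    assert abs((Q(rp) - Q(rm)) / (2 * eps) - dQ(r, i)) < 1e-5

# (ii) Euler's identity for homogeneity of degree -2: 2Q = -sum_i r_i dQ/dr_i
assert abs(2 * Q(r) + sum(r[i] * dQ(r, i) for i in range(4))) < 1e-9

# (iii) equal radii give Q = 8/r^2
assert abs(Q([0.5] * 4) - 8.0 / 0.25) < 1e-12

# (iv) if r[0] is the strict minimum, then dQ/dr_j < 0 for j != 0
assert all(dQ(r, j) < 0 for j in (1, 2, 3))

# (v) the root 1/r_0 = s + 2*sqrt(p) of Q = 0 (the degenerate boundary
#     configuration studied later in the paper) indeed gives Q = 0
b, c, d = r[1], r[2], r[3]
s3 = 1 / b + 1 / c + 1 / d
p3 = 1 / (b * c) + 1 / (b * d) + 1 / (c * d)
r0 = 1.0 / (s3 + 2.0 * math.sqrt(p3))
assert abs(Q([r0, b, c, d])) < 1e-9
```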
A nonsingular solution implies that there exists a ball packing with constant curvature in $\mathcal{M}_{\mathcal{T}}$; moreover, the nonsingular solution converges to a ball packing with constant curvature. In this subsection, we shall prove this fact with the help of a stability result, which says that if the initial ball packing $r(0)$ is very close to a ball packing with constant curvature, then $r(t)$ converges to a ball packing with constant curvature. It is obvious that if two metrics are close then their energies $\lambda$ are close. Conversely, we will show that if the energy $\lambda(r(0))$ is close enough to the energy of a constant curvature metric, then the flow converges exponentially fast to a constant curvature metric; we call this ``small energy convergence'' or an ``energy gap''. In fact, we will introduce a combinatorial invariant $\chi(\hat{r},\mathcal{T})$ to give a quantitative description of this smallness; see Theorem \ref{Thm-xi-invariant-imply-converg}. \begin{lemma}\label{Lemma-ODE-asymptotic-stable} (\cite{P1}) Let $\Omega\subset \mathds{R}^n$ be an open set and $f\in C^1(\Omega,\mathds{R}^n)$. Consider the autonomous ODE system $$\dot{x}=f(x),~~~x\in\Omega.$$ Assume $x^*\in\Omega$ is a critical point of $f$, i.e. $f(x^*)=0$. If all the eigenvalues of $Df(x^*)$ have negative real part, then $x^*$ is an asymptotically stable point. More specifically, there exists a neighbourhood $U\subset \Omega$ of $x^*$, such that for any initial value $x(0)\in U$, the solution $x(t)$ to the equation $\dot{x}=f(x)$ exists for all time $t\geq0$ and converges exponentially fast to $x^*$. \end{lemma} \begin{lemma}\label{Thm-3d-isolat-const-alpha-metric} (Stability of critical metric) Given a triangulated manifold $(M^3, \mathcal{T})$, assume $\hat{r}\in\mathcal{M}_{\mathcal{T}}$ is a ball packing with constant curvature. Then $\hat{r}$ is an asymptotically stable point of the normalized flow (\ref{Def-r-normal-Yamabe flow}).
Thus if the initial real ball packing $r(0)$ deviates only slightly from $\hat{r}$, the solution $\{r(t)\}$ to the normalized flow (\ref{Def-r-normal-Yamabe flow}) exists for all time $t\geq 0$ and converges exponentially fast to the constant curvature packing $c\hat{r}$, where $c>0$ is the constant with $c\|\hat{r}\|_{l^1}=\|r(0)\|_{l^1}$. \end{lemma} \begin{proof} Write the right hand side of the flow (\ref{Def-r-normal-Yamabe flow}) as $\Gamma_i(r)=(\lambda-K_i)r_i$, $1\leq i\leq N$. Then the normalized flow (\ref{Def-r-normal-Yamabe flow}) can be written as $\dot{r}=\Gamma(r)$, which is an autonomous ODE system. Differentiating $\Gamma(r)$ at $\hat{r}$, \begin{equation*} D_r\Gamma|_{\hat{r}}= \frac{\partial(\Gamma_{1},\cdots,\Gamma_{N})}{\partial(r_{1},\cdots,r_{N})}\Bigg|_{\hat{r}}= \left( \begin{array}{ccc} {\frac{\partial \Gamma_1}{\partial r_1}}& \cdots & {\frac{\partial\Gamma_1}{\partial r_N}} \\ \vdots & \ddots & \vdots \\ {\frac{\partial \Gamma_N}{\partial r_1}}& \cdots & {\frac{\partial \Gamma_N}{\partial r_N}} \end{array} \right)_{\hat{r}}=-\left(\Sigma\Lambda\right)|_{\hat{r}}, \end{equation*} where $\Sigma=diag\{r_1,\cdots,r_N\}$. Because $$-\Sigma\Lambda=-\Sigma^{\frac{1}{2}}\big(\Sigma^{\frac{1}{2}}\Lambda \Sigma^{\frac{1}{2}}\big)\Sigma^{-\frac{1}{2}}\sim -\Sigma^{\frac{1}{2}}\Lambda \Sigma^{\frac{1}{2}},$$ $-\Sigma\Lambda$ has the eigenvalue $0$ and $N-1$ negative eigenvalues. Note the normalized flow (\ref{Def-r-normal-Yamabe flow}) is scale invariant, meaning that any scaling $cr(t)$ ($c>0$ a constant) of the solution $r(t)$ is also a solution of (\ref{Def-r-normal-Yamabe flow}) (perhaps with a different initial value). Hence, modulo this scaling direction, we may regard all the eigenvalues of $-\Sigma\Lambda$ along (\ref{Def-r-normal-Yamabe flow}) as negative.
By Lemma \ref{Lemma-ODE-asymptotic-stable}, $\hat{r}$ is an asymptotically stable point of the normalized flow (\ref{Def-r-normal-Yamabe flow}). This yields the conclusion. \end{proof} \begin{theorem}\label{Thm-nosingular-imply-converg} Given a triangulated manifold $(M^3, \mathcal{T})$, assume the normalized flow (\ref{Def-r-normal-Yamabe flow}) has a nonsingular solution $r(t)$. Then there is a ball packing with constant curvature; furthermore, $r(t)$ converges exponentially fast to a ball packing with constant curvature as $t$ goes to infinity. \end{theorem} \begin{proof} Since $r(t)$ is nonsingular, the right maximal existence time of $r(t)$ is $T=+\infty$. Set $\lambda(t)=\lambda(r(t))$; the context will make clear whether $\lambda$ is regarded as a function of $r$ or of $t$. In the proof of Proposition \ref{prop-V-descending} we derived \begin{equation}\label{S't} \lambda'(t)=-\frac{1}{\|r\|_{l^1}}\sum_ir_i(K_i-\lambda)^2\leq0. \end{equation} By Proposition \ref{prop-G-bound}, $\lambda(t)$ is bounded from below and hence converges to a number $\lambda(+\infty)$. Hence there exists a sequence $t_n\uparrow +\infty$ such that $\lambda'(t_n)\rightarrow 0$. Nonsingularity means that $\overline{\{r(t)\}}$ stays in a compact subset of $\mathcal{M}_{\mathcal{T}}$. Thus there is a ball packing $r_{\infty}\in\mathcal{M}_{\mathcal{T}}$ and a subsequence of $t_n$, still denoted by $t_n$, such that $r(t_n)\rightarrow r_{\infty}$. By (\ref{S't}), $\lambda'(t)$ is a continuous function of $r$. It follows that $\lambda'(t_n)\rightarrow \lambda'(r_{\infty})$, and then $\lambda'(r_{\infty})=0$. Substituting $\lambda'(r_{\infty})=0$ into (\ref{S't}), we see that the ball packing $r_{\infty}$ has constant curvature. Moreover, for some sufficiently large $t_{n_0}$, $r(t_{n_0})$ is very close to $r_{\infty}$.
Then by Lemma \ref{Thm-3d-isolat-const-alpha-metric}, the solution $\{r(t)\}_{t\ge t_{n_0}}$ converges exponentially fast to $r_{\infty}$. Hence the original solution $\{r(t)\}_{t\ge 0}$ also converges exponentially fast to $r_{\infty}$, which is a ball packing with constant curvature. \end{proof} \begin{definition} \label{Def-chi-invariant} Given a triangulated manifold $(M^3, \mathcal{T})$, assume there exists a real ball packing $\hat{r}$ with constant curvature. We introduce a combinatorial invariant with respect to the triangulation $\mathcal{T}$ as \begin{equation} \chi(\hat{r},\mathcal{T})=\inf\limits_{\gamma\in{\mathbb{S}}^{N-1},\;\sum_{i=1}^N\gamma_i=0\;}\sup\limits_{0\leq t< a_\gamma}\lambda(\hat{r}+t\gamma), \end{equation} where $a_{\gamma}=\sup\{a>0:\hat{r}+t\gamma\in \mathcal{M}_{\mathcal{T}} \text{ for all } 0\le t<a\}$. \end{definition} Let $\mathcal{M}_{\hat{r}}$ denote the star-shaped subset $\{\hat{r}+t\gamma \in \mathcal{M}_{\mathcal{T}}: 0\leq t<a_\gamma,\ \gamma \in \mathbb{S}^{N-1} \text{ with } \sum_{i=1}^N\gamma_i=0\}$ of the hyperplane $\{r:\|r\|_{l^1}=\|\hat{r}\|_{l^1}\}$, where $a_\gamma$ is as in the above definition. Since $\mathcal{M}_{\mathcal{T}}$ is not convex, the subset $\mathcal{M}_{\hat{r}}$ may fail to equal $\mathcal{M}_{\mathcal{T}}\cap \{r: \|r\|_{l^1}=\|\hat{r}\|_{l^1}\}$. By the convexity of the extended functional $\tilde{\lambda}$ in Theorem \ref{thm-tuta-S-C1-convex} and the scale invariance of $\lambda$, we see that \begin{equation} \label{jiang-observe} \lambda(r)\ge \chi(\hat{r},\mathcal{T}) \end{equation} for all $r\in \mathcal{M}_{\mathcal{T}}\setminus \mathcal{M}_{\hat{r}}$. (We will not use this inequality in this paper; we record it only for a better understanding of $\chi(\hat{r},\mathcal{T})$.) Let $\delta>0$ be any number so that $B(\hat{r},\delta)$ is compactly contained in $\mathcal{M}_{\mathcal{T}}$.
Consider the restricted functional $\mathcal{S}(r)$, $r\in B(\hat{r},\delta)\cap \{r: \|r\|_{l^1}=\|\hat{r}\|_{l^1}\}$, as a function of $N-1$ variables. It is strictly convex and has a unique critical point at $\hat{r}$. Hence it is strictly increasing along any segment from $\hat{r}$ to a point of $\partial B(\hat{r},\delta)\cap \{r: \|r\|_{l^1}=\|\hat{r}\|_{l^1}\}$. Let $\mathcal{S}(r')$ be the minimum of $\{\mathcal{S}(r):r\in \partial B(\hat{r},\delta), \|r\|_{l^1}=\|\hat{r}\|_{l^1}\}$, attained at some $r'\in\partial B(\hat{r},\delta)\cap \{r: \|r\|_{l^1}=\|\hat{r}\|_{l^1}\}$. Then by the analysis above and Theorem \ref{Thm-Q-min-iff-exist-const-curv-metric}, it follows that \begin{equation} \chi(\hat{r},\mathcal{T})\geq\frac{\mathcal{S}(r')}{\|r'\|_{l^1}}>\frac{\mathcal{S}(\hat{r})}{\|\hat{r}\|_{l^1}}\geq Y_{\mathcal{T}}. \end{equation} At first glance, the invariant $\chi(\hat{r},\mathcal{T})$ depends on the existence of a ball packing $\hat{r}$ with constant curvature, that is, on the geometric data of the ball packings. However, by the uniqueness of constant curvature packings and Theorem \ref{Thm-Q-min-iff-exist-const-curv-metric}, all information about $\hat{r}$ (such as its existence or non-existence, its uniqueness, and its analytical properties) is completely determined by $M$ and $\mathcal{T}$. To this extent, $\hat{r}$ is determined by the topological information of $M$ and the combinatorial information of $\mathcal{T}$. Hence the invariant $\chi(\hat{r},\mathcal{T})$ may be considered a purely combinatorial-topological invariant. Using this combinatorial-topological invariant, we give a sufficient condition guaranteeing the long time existence and convergence of the flow (\ref{Def-r-normal-Yamabe flow}). \begin{theorem}[Energy gap]\label{Thm-xi-invariant-imply-converg} Given a triangulated manifold $(M^3, \mathcal{T})$, let $\hat{r}$ be a real ball packing with constant curvature.
Let the initial packing be $r(0)\in\mathcal{M}_{\mathcal{T}}$, and assume \begin{equation} \label{ge-observe} \lambda(r(0))\leq\chi(\hat{r},\mathcal{T}). \end{equation} Then the solution $r(t)$ to (\ref{Def-r-normal-Yamabe flow}) exists for all time $t\geq 0$ and converges exponentially fast to a real packing with constant curvature. \end{theorem} \begin{remark} This theorem may be considered a ``small energy convergence'' result; moreover, we give a precise gap bound estimate for the ``small energy convergence''. One can compare our theorem with the standard small energy convergence theorems in the smooth setting, for example for the Calabi flow \cite{ToWe} and the $L^2$ curvature flow \cite{Str}. \end{remark} \begin{proof} Assume $\lambda(r(0))\leq\chi(\hat{r},\mathcal{T})$, and denote $\lambda(t)=\lambda(r(t))$. Recall that (\ref{S't}) says \begin{equation*} \lambda'(t)=-\frac{1}{\|r\|_{l^1}}\sum_ir_i(K_i-\lambda)^2\leq0. \end{equation*} If $K_i(r(0))=\lambda(0)$ for all $i\in V$, then $r(0)$ is a real packing with constant curvature, and it follows that $r(t)\equiv r(0)$ is the unique solution to the flow (\ref{Def-r-normal-Yamabe flow}); this leads to the conclusion directly. If there is a vertex $i$ with $K_i(r(0))\neq\lambda(0)$, then $\lambda'(0)<0$. Hence $\lambda(r(t))$ is strictly descending along the flow (\ref{Def-r-normal-Yamabe flow}) for at least a small time interval $t\in[0,\epsilon)$. Thus $r(t)$ never touches the boundary of $\mathcal{M}_{\mathcal{T}}$ along the flow (\ref{Def-r-normal-Yamabe flow}). By classical ODE theory, the solution $r(t)$ exists for all time $t\in[0,+\infty)$. Moreover, $\{r(t)\}_{t\geq0}\subset\subset \mathcal{M}_{\mathcal{T}}$, that is, $\{r(t)\}_{t\geq0}$ is contained in a compact subset of $\mathcal{M}_{\mathcal{T}}$. By Theorem \ref{Thm-nosingular-imply-converg}, there exists a real ball packing $r_{\infty}$ with constant curvature such that $r(t)$ converges exponentially fast to $r_{\infty}$. Thus we get the conclusion.
\end{proof} At this point we do not yet have enough tools to show that $r_{\infty}$ is actually a scaling of $\hat{r}$, unless we invoke the fact that the constant curvature packing is unique (up to scaling), which will be derived after we introduce the extension technique in Section \ref{section-extend-CR-funct}. However, by a more subtle argument, we can prove: if we further assume $\lambda(r(0))<\chi(\hat{r},\mathcal{T})$, then $r(t)$ converges exponentially fast to $r_{\infty}$, which is a scaling of $\hat{r}$ (so that $\|r_{\infty}\|_{l^1}=\|r(0)\|_{l^1}$). Note that Theorem \ref{Thm-xi-invariant-imply-converg} is established under the assumption that there exists a constant curvature ball packing $\hat{r}$ in $\mathcal{M}_{\mathcal{T}}$. Theorem \ref{Thm-nosingular-imply-converg} implies that, if there is no constant curvature ball packing, then the solution $r(t)$ to (\ref{Def-r-normal-Yamabe flow}) touches the boundary of $\mathcal{M}_{\mathcal{T}}$. More specifically, we have \begin{corollary}\label{coro-collasp-flow} Given a triangulated manifold $(M^3, \mathcal{T})$, let $\{r(t)\}_{0\leq t<T}$ be the unique maximal solution to the normalized flow (\ref{Def-r-normal-Yamabe flow}). Assume there is no constant curvature ball packing. Then there exists a time sequence $t_n\to T$ such that \begin{enumerate} \item[(1)] if $T<+\infty$, then $Q_{ijkl}(r(t_n))\to 0$ for some tetrahedron $\{ijkl\}$; \item[(2)] if $T=+\infty$, then either $Q_{ijkl}(r(t_n))\to 0$ for some tetrahedron $\{ijkl\}$, or $r_i(t_n)\to 0$ for some vertex $i$. \end{enumerate} \end{corollary} \section{The extended curvature and functionals} To get global convergence of the normalized Yamabe flow (\ref{Def-r-normal-Yamabe flow}), we need at least that its solution $r(t)$ exists for all time $t\in[0,\infty)$. However, as our numerical experiments show, $r(t)$ may collapse in finite time.
To prevent finite time collapsing, Glickenstein's monotonicity condition (\ref{Def-monotonicity}) seems useful, but it is generally too strong to be satisfied. Although (\ref{ge-observe}) guarantees the convergence of $r(t)$, it cannot handle more general cases such as $r(0)\in \mathcal{M}_{\mathcal{T}}\setminus \mathcal{M}_{\hat{r}}$, by (\ref{jiang-observe}). In this section, we provide a method to extend $r(t)$ so that it exists for all time. \subsection{Packing configurations by four tangent balls} \label{section-extend-K} In this section and in Section \ref{section-extend-solid-angle}, $r=(r_1,r_2,r_3,r_4)$ denotes a point in $\mathds{R}^4_{>0}$. Recall the definition of $Q_{ijkl}$ in (\ref{nondegeneracy condition}), from which we can derive the expression of $Q_{1234}$. Let $\tau=\{1234\}$ be a combinatorial tetrahedron, which carries only combinatorial, not geometric, information. By definition, the combinatorial information of $\tau$ consists of a vertex set $\{1,2,3,4\}$, an edge set $\{\{12\},\{13\},\{14\},\{23\},\{24\},\{34\}\}$, a face set $\{\{123\},\{124\},\{134\},\{234\}\}$ and a tetrahedron set $\{\{1234\}\}$. For any $r=(r_1,r_2,r_3,r_4)\in\mathds{R}^4_{>0}$, endow each edge $\{ij\}$ in the edge set with an edge length $l_{ij}=r_i+r_j$. If $Q_{1234}>0$, then the six edges of $\{1234\}$ with lengths $l_{12},l_{13},l_{14},l_{23},l_{24},l_{34}$ form the edges of a Euclidean tetrahedron. In this case, we call $\tau=\{1234\}$ a \emph{real tetrahedron}. Otherwise, $Q_{1234}\leq0$, and we call $\tau=\{1234\}$ a \emph{virtual tetrahedron} (in other words, $\tau$ degenerates). For a real tetrahedron $\tau=\{1234\}$, let $\alpha_{i}$ denote the solid angle at each vertex $i\in\{1,2,3,4\}$. All real tetrahedra correspond to the following proper subset of $\mathds{R}^4_{>0}$, \begin{equation} \Omega_{1234}=\left\{(r_1,r_2,r_3,r_4)\in\mathds{R}^4_{>0}:Q_{1234}>0\right\}.
\end{equation} Obviously, $\Omega^{-1}_{1234}=\{(r_1^{-1},\cdots,r_4^{-1}):(r_1,r_2,r_3,r_4)\in\Omega_{1234}\}$ is an open convex cone in $\mathds{R}^4_{>0}$. Hence $\Omega_{1234}$, the homeomorphic image of $\Omega^{-1}_{1234}$, is simply-connected with piecewise analytic boundary. Let $\{i,j,k,l\}=\{1,2,3,4\}$ and place three balls $S_j$, $S_k$ and $S_l$, externally tangent to each other on the plane, with radii $r_j$, $r_k$ and $r_l>0$. Let $S_i$ be the fourth ball with radius $r_i>0$. If $r_i$ is sufficiently close to $0$, then obviously $$\frac{1}{r_{i}}>\frac{1}{r_{j}}+\frac{1}{r_{k}}+\frac{1}{r_{l}}+ 2\sqrt{\frac{1}{r_{j}r_k}+\frac{1}{r_{j}r_l}+\frac{1}{r_{k}r_l}}.$$ Hence it follows that $$\left(\frac{1}{r_{i}}-\left(\frac{1}{r_{j}}+\frac{1}{r_{k}}+\frac{1}{r_{l}}\right)\right)^2 >4\left(\frac{1}{r_{j}r_k}+\frac{1}{r_{j}r_l}+\frac{1}{r_{k}r_l}\right)$$ and further $Q_{1234}<0$. Geometrically, the fourth ball $S_i$ goes through the gap between the other three mutually tangent balls. Let the radius $r_i$ increase gradually until $$\frac{1}{r_{i}}=\frac{1}{r_{j}}+\frac{1}{r_{k}}+\frac{1}{r_{l}}+ 2\sqrt{\frac{1}{r_{j}r_k}+\frac{1}{r_{j}r_l}+\frac{1}{r_{k}r_l}}.$$ At this point $Q_{1234}=0$. Geometrically, the fourth ball $S_i$ sits in the gap between the other three mutually tangent balls and is externally tangent to them all. Denote $$f_i(r_j,r_k,r_l)=\left(\frac{1}{r_{j}}+\frac{1}{r_{k}}+\frac{1}{r_{l}}+ 2\sqrt{\frac{1}{r_{j}r_k}+\frac{1}{r_{j}r_l}+\frac{1}{r_{k}r_l}}\,\right)^{-1}$$ and the $i$-th virtual tetrahedron space (abbreviated as ``\emph{$i$-th virtual space}") by \begin{equation} D_i=\{(r_1,r_2,r_3,r_4)\in\mathds{R}_{>0}^4:0<r_i\leq f_i(r_j,r_k,r_l)\}. \end{equation} Note that $D_i$ is contractible and hence simply connected. \begin{lemma}\label{lemma-Di-imply-ri-small} In the $i$-th virtual space $D_i$, one has $r_i<\min\{r_j,r_k,r_l\}$.
\end{lemma} \begin{proof} One can get the conclusion easily from \begin{equation*} \frac{1}{r_{i}}\geq \frac{1}{f_{i}(r_j,r_k,r_l)} =\frac{1}{r_{j}}+\frac{1}{r_{k}}+\frac{1}{r_{l}}+ 2\sqrt{\frac{1}{r_{j}r_k}+\frac{1}{r_{j}r_l}+\frac{1}{r_{k}r_l}}. \end{equation*} \end{proof} Since no two of $r_1$, $r_2$, $r_3$ and $r_4$ can be the strict minimum simultaneously, we obviously have the following corollary. \begin{corollary} The virtual spaces $D_1$, $D_2$, $D_3$ and $D_4$ are mutually disjoint. \end{corollary} \begin{lemma}\label{lemma-er-ze-yi} Assume $r_i>0$ is the minimum of $r_1$, $r_2$, $r_3$ and $r_4$. Then the inequality \begin{equation*} \frac{1}{r_{i}}\leq\frac{1}{r_{j}}+\frac{1}{r_{k}}+\frac{1}{r_{l}}- 2\sqrt{\frac{1}{r_{j}r_k}+\frac{1}{r_{j}r_l}+\frac{1}{r_{k}r_l}} \end{equation*} never holds. In other words, if the inequality holds true, then $r_i>\min\{r_j,r_k,r_l\}$. \end{lemma} \begin{proof} Suppose the above inequality holds. Since its right-hand side is symmetric in $r_j$, $r_k$ and $r_l$, we may assume $r_j\leq r_k\leq r_l$. Then from $1/r_i\geq 1/r_j$, we get $$\frac{1}{r_{k}}+\frac{1}{r_{l}}\geq 2\sqrt{\frac{1}{r_{j}r_k}+\frac{1}{r_{j}r_l}+\frac{1}{r_{k}r_l}}.$$ Squaring, and noting that $2/r_j\geq 1/r_{k}+1/r_{l}$, we see $$\frac{1}{r_{k}^2}+\frac{1}{r_{l}^2}-\frac{2}{r_{k}r_l}\geq \frac{4}{r_{j}r_k}+\frac{4}{r_{j}r_l}\geq2\left(\frac{1}{r_{k}}+\frac{1}{r_{l}}\right)^2,$$ which is a contradiction, since $(1/r_{k}-1/r_{l})^2<2(1/r_{k}+1/r_{l})^2$ always holds. \end{proof} \begin{lemma}\label{lemma-ri-small-imply-Di} If $Q_{1234}\leq0$, then $\{r_1,r_2,r_3, r_4\}$ has a strict minimum. Moreover, if $\{r_1,r_2,r_3, r_4\}$ attains its strict minimum at $r_i$ for some $i\in\{1,2,3,4\}$, then $r\in D_i$. \end{lemma} \begin{proof} We may assume $r_i\leq r_j\leq r_k\leq r_l$.
It is easy to rewrite $Q_{1234}\leq0$ as $$\frac{1}{r_{i}^2}-\frac{2}{r_i}\left(\frac{1}{r_{j}}+\frac{1}{r_{k}}+\frac{1}{r_{l}}\right)\geq \left(\frac{1}{r_{j}}+\frac{1}{r_{k}}+\frac{1}{r_{l}}\right)^2-2\left(\frac{1}{r_{j}^2}+\frac{1}{r_{k}^2}+\frac{1}{r_{l}^2}\right).$$ Solving the above inequality, we get either $$\frac{1}{r_{i}}\leq\frac{1}{r_{j}}+\frac{1}{r_{k}}+\frac{1}{r_{l}}- 2\sqrt{\frac{1}{r_{j}r_k}+\frac{1}{r_{j}r_l}+\frac{1}{r_{k}r_l}}$$ or $$\frac{1}{r_{i}}\geq\frac{1}{r_{j}}+\frac{1}{r_{k}}+\frac{1}{r_{l}}+ 2\sqrt{\frac{1}{r_{j}r_k}+\frac{1}{r_{j}r_l}+\frac{1}{r_{k}r_l}}.$$ The first case never happens by Lemma \ref{lemma-er-ze-yi}. The second case obviously implies $r_i<\min\{r_j,r_k,r_l\}$, and in this case $r\in D_i$. \end{proof} From Lemma \ref{lemma-Di-imply-ri-small} and Lemma \ref{lemma-ri-small-imply-Di}, we derive that, under the assumption $Q_{1234}\leq0$, the four radii $r_1$, $r_2$, $r_3$ and $r_4$ attain a strict minimum. Moreover, $r_i$ is the strict minimum if and only if $r$ lies in the $i$-th virtual space $D_i$. This fact leads to the following observation \begin{equation} \mathds{R}_{>0}^4-\Omega_{1234}=D_1\,\dot{\cup}\,D_2\,\dot{\cup}\,D_3\,\dot{\cup}\,D_4, \end{equation} where the symbol ``$\dot{\cup}$" means ``disjoint union". As a consequence, we can classify all (real or virtual) tetrahedra $r\in\mathds{R}_{>0}^4$ as follows: \begin{itemize} \item If $Q_{1234}(r)>0$, then $r\in \Omega_{1234}$ and $r$ makes $\{1234\}$ a real tetrahedron. \item If $Q_{1234}(r)\leq0$, then $r$ is a virtual packing.
At this time, \begin{itemize} \item either $$\frac{1}{r_{i}}\geq\frac{1}{r_{j}}+\frac{1}{r_{k}}+\frac{1}{r_{l}}+ 2\sqrt{\frac{1}{r_{j}r_k}+\frac{1}{r_{j}r_l}+\frac{1}{r_{k}r_l}}.$$ In this case, $r_i$ is the strict minimum and hence $r\in D_i$; \item or $$\frac{1}{r_{i}}\leq \frac{1}{r_{j}}+\frac{1}{r_{k}}+\frac{1}{r_{l}}- 2\sqrt{\frac{1}{r_{j}r_k}+\frac{1}{r_{j}r_l}+\frac{1}{r_{k}r_l}}.$$ In this case, $r_i>\min\{r_j,r_k,r_l\}$ by Lemma \ref{lemma-er-ze-yi}. Moreover, since the right hand side of the above inequality is positive, we further get \begin{itemize} \item either $$\frac{1}{\sqrt{r_l}}>\frac{1}{\sqrt{r_j}}+\frac{1}{\sqrt{r_k}}.$$ In this case, $r_l$ is the strict minimum and hence $r\in D_l$; \item or $$\frac{1}{\sqrt{r_l}}<\Big|\frac{1}{\sqrt{r_j}}-\frac{1}{\sqrt{r_k}}\Big|.$$ In this case, one can show $r_j\neq r_k$ and further \begin{itemize} \item if $r_j>r_k$, then $r_k$ is the strict minimum and hence $r\in D_k$; \item if $r_j<r_k$, then $r_j$ is the strict minimum and hence $r\in D_j$. \end{itemize} \end{itemize} \end{itemize} \end{itemize} \subsection{A $C^0$-extension of the solid angles} \label{section-extend-solid-angle} The solid angle is initially defined for real tetrahedra. There is a natural way to extend its definition to virtual tetrahedra as well. We explain this procedure in this section. Bobenko, Pinkall and Springborn \cite{Bobenko} seem to have been the first to introduce this extension method. If a Euclidean or hyperbolic triangle degenerates (that is, the three side lengths are still positive but no longer satisfy the triangle inequalities), the angle opposite the side that is too long is defined to be $\pi$, while the other two angles are defined to be $0$. Using this method, they established a variational principle that surprisingly connects Milnor's Lobachevsky volume function of decorated hyperbolic ideal tetrahedra with Luo's discrete conformal changes \cite{L1}.
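The triangle extension rule just described is easy to experiment with numerically. The following small Python sketch (our own illustration, not code from \cite{Bobenko}) computes the extended angles of a Euclidean triangle from its side lengths: it returns $\pi$ for the angle opposite an over-long side and $0$ for the other two, and the law of cosines otherwise.

```python
import math

def extended_angles(a, b, c):
    """Extended angles of a (possibly degenerate) Euclidean triangle with
    side lengths a, b, c, in the spirit of Bobenko-Pinkall-Springborn:
    if one side is at least as long as the other two combined, the angle
    opposite it is pi and the remaining two angles are 0."""
    sides = [a, b, c]
    for i, s in enumerate(sides):
        if s >= sum(sides) - s:          # side i violates the triangle inequality
            ang = [0.0, 0.0, 0.0]
            ang[i] = math.pi
            return tuple(ang)
    # non-degenerate triangle: law of cosines
    A = math.acos((b*b + c*c - a*a) / (2*b*c))
    B = math.acos((a*a + c*c - b*b) / (2*a*c))
    return (A, B, math.pi - A - B)
```

As $a\to b+c$ the law-of-cosines angles tend to $(\pi,0,0)$, so this extension is continuous, which parallels the $C^0$-extension of solid angles for virtual tetrahedra.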
Luo \cite{L2} systematically developed their extension idea and proved some rigidity results related to inversive distance circle packings and discrete conformal factors. See \cite{GJ1}-\cite{GJ4} for more examples. The extension of dihedral angles in $3$-dimensional decorated ideal (or hyper-ideal) hyperbolic polyhedra first appeared in Luo and Yang's work \cite{LuoYang}. They proved the rigidity of hyperbolic cone metrics on $3$-manifolds which are isometric gluings of ideal and hyper-ideal tetrahedra in hyperbolic space. As to the conformal tetrahedron configured by four ball packings, Xu \cite{Xu} gave a natural extension of the solid angles. More precisely, if $r_1$, $r_2$, $r_3$ and $r_4$ satisfy $Q_{1234}>0$, then the real tetrahedron $\{1234\}$ embeds in Euclidean space. Denote by $\alpha_i$ the solid angle at each vertex $i\in\{1,2,3,4\}$ and define $\tilde\alpha_i=\alpha_i$. If $r_1$, $r_2$, $r_3$ and $r_4$ satisfy $Q_{1234}\leq0$, then the tetrahedron $\{1234\}$ is virtual. By Lemma \ref{lemma-ri-small-imply-Di}, $\{r_1,r_2,r_3, r_4\}$ attains its strict minimum at $r_i$ for some $i\in\{1,2,3,4\}$, and $r\in D_i$. Geometrically, this is exactly the case where the ball $S_i$ goes through the gap between the three mutually tangent balls $S_j$, $S_k$ and $S_l$. Define $\tilde\alpha_i=2\pi$ and the other three solid angles to be $0$. In this way, each real solid angle $\alpha_i$ (defined on $\Omega_{1234}$) is extended to the generalized solid angle $\tilde\alpha_i$ (defined on $\mathds{R}^4_{>0}$). Xu (Lemma 2.6, \cite{Xu}) showed that this extension is continuous. The argument there relies heavily on geometric intuition. We give an alternative, more rigorous analytic proof here. \begin{lemma} \label{lemma-xu-extension} For each vertex $i\in\{1,2,3,4\}$, the extended solid angle $\tilde{\alpha}_i$, defined on $\mathds{R}^4_{>0}$, is a continuous extension of $\alpha_i$.
\end{lemma} \begin{proof} Obviously, $\tilde{\alpha}_i$ is an extension of $\alpha_i$. It is continuous (in fact, $C^{\infty}$-smooth) in $\Omega_{1234}$ since $\alpha_i$ is. It is constant, and hence continuous, in the interiors of $D_1$, $D_2$, $D_3$ and $D_4$. Fix an arbitrary point $x=(x_1,x_2,x_3,x_4)\in \partial D_i$, where the boundary is taken with respect to the topology of $\mathds{R}^4_{>0}$. By Lemma \ref{lemma-Di-imply-ri-small}, one has $x_i<\min\{x_j,x_k,x_l\}$. Choose a small open neighborhood $U_x\subset\mathds{R}^4_{>0}$ of $x$, such that $r_i<\min\{r_j,r_k,r_l\}$ for each $r\in U_x$. For any sequence $\{r^{(n)}\}\subset U_x$ with $r^{(n)}\rightarrow x$, if $r^{(n)}$ is contained in $D_i$ then $\tilde{\alpha}_i(r^{(n)})=2\pi$; if $r^{(n)}$ is not in $D_i$, then by Lemma \ref{Lemma-Glicken-bdr-converge} below, $\alpha_i(r^{(n)})$ goes to $2\pi$. Hence $\tilde{\alpha}_i(r^{(n)})\rightarrow2\pi$ always holds, which implies that $\tilde{\alpha}_i$ is continuous at $x$. Thus $\tilde{\alpha}_i$ is continuous on $\partial D_i$. Similarly, one can show that $\tilde{\alpha}_i$ is continuous on $\partial D_j$, $\partial D_k$ and $\partial D_l$. Then it follows that $\tilde{\alpha}_i$ is continuous on $\mathds{R}^4_{>0}$. \end{proof} \begin{lemma}\label{Lemma-Glicken-bdr-converge} (Glickenstein \cite{G2}, Proposition 6) If $Q_{1234}\rightarrow0$ without any of the $r_i$ going to $0$, then one solid angle goes to $2\pi$ and the others go to $0$. The solid angle $\alpha_i$ which goes to $2\pi$ corresponds to $r_i$ being the minimum. \end{lemma} The natural extension of $\alpha$ to $\tilde{\alpha}$ is only $C^0$-continuous. The following example shows that we cannot expect higher regularity, such as Lipschitz or H\"{o}lder continuity. \begin{example} \label{example-only-c-zero} Fix $r_2=r_3=r_4=1$. Then the critical case is exactly $r_1=2/\sqrt{3}-1$. Hence the point $(2/\sqrt{3}-1,1,1,1)$ lies in $\partial D_1$.
Recall Glickenstein's calculation (see formula (7) in \cite{G1}) $$\frac{\partial\alpha_i}{\partial r_j}=\frac{4r_ir_jr_k^2r_l^2}{3P_{ijk}P_{ijl}V_{ijkl}} \left(\frac{1}{r_i}\left(\frac{1}{r_j}+\frac{1}{r_k}+\frac{1}{r_l}\right)+ \frac{1}{r_j}\left(\frac{1}{r_i}+\frac{1}{r_k}+\frac{1}{r_l}\right)-\left(\frac{1}{r_k}-\frac{1}{r_l}\right)^2\right),$$ where $\{i,j,k,l\}=\{1,2,3,4\}$, $P_{ijk}=2(r_i+r_j+r_k)$, $P_{ijl}=2(r_i+r_j+r_l)$ and $V_{ijkl}$ is the volume of the tetrahedron. Because $V_{ijkl}=0$ at this point, we see $\partial\alpha_1/\partial r_2=+\infty$ there. \end{example} \subsection{A convex $C^1$-extension of the Cooper-Rivin functional} \label{section-extend-CR-funct} Now consider a triangulated manifold $(M,\mathcal{T})$, with a ball packing $r=(r_1,\cdots,r_N)\in\mathds{R}^N_{>0}$. Recall the space of all real ball packings is \begin{equation*} \mathcal{M}_{\mathcal{T}}=\left\{\;r\in\mathds{R}^N_{>0}\;:\;Q_{ijkl}>0, \;\forall \{i,j,k,l\}\in \mathcal{T}_3\;\right\}, \end{equation*} while the space of all virtual ball packings is $\mathds{R}^N_{>0}\setminus\mathcal{M}_{\mathcal{T}}$, where \begin{equation*} Q_{ijkl}=\left(\frac{1}{r_{i}}+\frac{1}{r_{j}}+\frac{1}{r_{k}}+\frac{1}{r_{l}}\right)^2- 2\left(\frac{1}{r_{i}^2}+\frac{1}{r_{j}^2}+\frac{1}{r_{k}^2}+\frac{1}{r_{l}^2}\right). \end{equation*} Recall that $\alpha_{ijkl}$ is the solid angle at the vertex $i$ of a Euclidean conformal tetrahedron $\{ijkl\}$ configured by a real ball packing $r\in\mathcal{M}_{\mathcal{T}}$. Thus $\alpha_{ijkl}(r)$ is a smooth function on the space of real ball packings. By Lemma \ref{lemma-xu-extension}, the solid angle $\alpha_{ijkl}$ extends continuously to an extended solid angle $\tilde{\alpha}_{ijkl}$, whose domain of definition extends from $\mathcal{M}_{\mathcal{T}}$ to $\mathds{R}^N_{>0}$.
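As a quick numerical sanity check of the example above (our own Python sketch, not part of the paper's formal development), one can evaluate $Q_{1234}$ and the critical radius $f_1(1,1,1)$ and confirm that $Q_{1234}$ vanishes exactly at $r_1=2/\sqrt{3}-1$:

```python
import math

def Q(r):
    # Q_{1234} = (sum 1/r_i)^2 - 2 * sum 1/r_i^2, the nondegeneracy quantity
    inv = [1.0 / x for x in r]
    return sum(inv) ** 2 - 2.0 * sum(v * v for v in inv)

def f(rj, rk, rl):
    # Critical fourth radius: the ball sitting in the gap between three
    # mutually tangent balls of radii rj, rk, rl, tangent to all of them
    s = 1/rj + 1/rk + 1/rl + 2*math.sqrt(1/(rj*rk) + 1/(rj*rl) + 1/(rk*rl))
    return 1.0 / s

r1_crit = f(1, 1, 1)          # equals 2/sqrt(3) - 1
print(Q([r1_crit, 1, 1, 1]))  # ~0: degenerate (boundary) configuration
print(Q([1.0, 1, 1, 1]))      # 8 > 0: real tetrahedron
print(Q([0.1, 1, 1, 1]))      # -37 < 0: virtual tetrahedron
```

The sign of $Q_{1234}$ thus separates real from virtual tetrahedra, with the $i$-th virtual space $D_i$ cut out by $0<r_i\leq f_i(r_j,r_k,r_l)$.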
Consequently, the combinatorial scalar curvature at a vertex $i\in V$, that is \begin{eqnarray*} K_i(r)=4\pi-\sum_{\{ijkl\}\in \mathcal{T}_3}\alpha_{ijkl}(r),\;r\in\mathcal{M}_{\mathcal{T}} \end{eqnarray*} extends continuously to the extended curvature \begin{eqnarray} \widetilde K_i(r)=4\pi-\sum_{\{ijkl\}\in \mathcal{T}_3}\tilde\alpha_{ijkl}(r),\;r\in\mathds{R}^N_{>0}, \end{eqnarray} which is defined for all $r\in\mathds{R}^N_{>0}$. Similarly, the Cooper-Rivin functional $\mathcal{S}(r)$, $r\in\mathcal{M}_{\mathcal{T}}$ in (\ref{def-cooper-rivin-funct}), can be extended naturally to the following ``\emph{extended Cooper-Rivin functional}" \begin{eqnarray} \widetilde{\mathcal{S}}(r)=\sum_{i=1}^N \widetilde K_i r_i, \;r\in\mathds{R}^N_{>0}. \end{eqnarray} Moreover, the CRG-functional $\lambda(r)$, $r\in\mathcal{M}_{\mathcal{T}}$ can be extended naturally to \begin{eqnarray} \tilde{\lambda}(r)=\frac{\sum_{i=1}^N \widetilde K_i r_i}{\sum_{i=1}^N r_i}, \;r\in\mathds{R}^N_{>0}, \end{eqnarray} which is called the \emph{extended CRG-functional} and is uniformly bounded by a constant depending only on the triangulation $\mathcal{T}$. Since every $\widetilde{K}_i$ is continuous, both $\widetilde{\mathcal{S}}(r)$ and $\tilde{\lambda}(r)$ are continuous on $\mathds{R}^N_{>0}$. We further prove that $\widetilde{\mathcal{S}}(r)$ is in fact convex and $C^1$-smooth on $\mathds{R}^N_{>0}$.\\ We follow the approach pioneered by Luo \cite{L2}. A differential $1$-form $\omega=\sum_{i=1}^n a_i(x)dx_i$ in an open set $U\subset\mathds{R}^n$ is said to be continuous if each $a_i(x)$ is a continuous function on $U$. A continuous $1$-form $\omega$ is called closed if $\int_{\partial \tau} \omega=0$ for any Euclidean triangle $\tau\subset U$. By standard approximation theory, if $\omega$ is closed and $\gamma$ is a piecewise $C^1$-smooth null-homologous loop in $U$, then $\int_\gamma \omega=0$.
If $U$ is simply connected, then the integral $F(x)=\int_a^x\omega$ is well defined (where $a\in U$ is arbitrarily chosen), independent of the choice of piecewise smooth paths in $U$ from $a$ to $x$. Moreover, the function $F(x)$ is $C^1$-smooth with $\frac{\partial F(x)}{\partial x_i}=a_i(x)$. Luo established the following fundamental convex $C^1$-extension theorem. \begin{lemma} \label{lemma-luo's-extension} (Luo's convex $C^1$-extension, \cite{L2}) Suppose $X\subset \mathds{R}^n$ is an open convex set and $A\subset X$ is an open and simply connected subset of $X$ bounded by a real analytic codimension-1 submanifold in $X$. If $\omega=\sum_{i=1}^n a_i(x)dx_i$ is a continuous closed $1$-form on $A$ so that $F(x)=\int_a^x\omega$ is locally convex on $A$ and each $a_i$ can be extended continuously to $X$ by constant functions to a function $\tilde a_i$ on $X$, then $\widetilde F(x)=\int_a^x \sum_{i=1}^n\tilde a_i(x)dx_i$ is a $C^1$-smooth convex function on $X$ extending $F$. \end{lemma} Now we come back to our setting. Since $$\frac{\partial K_i}{\partial r_j}=\frac{\partial K_j}{\partial r_i}$$ on $\mathcal{M}_\mathcal{T}$ (for example, see \cite{CR,G1}, or see Lemma \ref{Lemma-Lambda-semi-positive}), $\sum_{i=1}^N K_i dr_i$ is a closed $C^\infty$-smooth $1$-form on $\mathcal{M}_\mathcal{T}$. Note $\mathcal{M}_\mathcal{T}$ is simply connected (see \cite{CR}), hence for an arbitrarily chosen $r_0\in \mathcal{M}_{\mathcal{T}}$, the potential functional \begin{eqnarray}\label{Def-potential} F(r)=\int_{r_0}^r\sum_{i=1}^NK_idr_i,\ \ r\in \mathcal{M}_{\mathcal{T}} \end{eqnarray} is well defined. Since $\nabla_r F=K=\nabla_r\mathcal{S}$, we easily get $F(r)=\mathcal{S}(r)-\mathcal{S}(r_0)$ for each $r\in\mathcal{M}_\mathcal{T}$. By Lemma \ref{Lemma-Lambda-positive}, the potential functional (\ref{Def-potential}) is locally convex on $\mathcal{M}_\mathcal{T}$ and is strictly locally convex when restricted to the hyperplane $\{x\in\mathds{R}^N:\|x\|_{l^1}=1\}$.
For each tetrahedron $\{ijkl\}\in \mathcal{T}_3$, $\alpha_{ijkl}dr_i+\alpha_{jikl}dr_j+\alpha_{kijl}dr_k+\alpha_{lijk}dr_l$ is a smooth closed $1$-form on $\mathcal{M}_\mathcal{T}$. Hence the following integration \begin{eqnarray*} F_{ijkl}(r)=\int_{r_0}^r \alpha_{ijkl}dr_i+ \alpha_{jikl}dr_j+ \alpha_{kijl}dr_k+ \alpha_{lijk}dr_l,\ r\in\mathcal{M}_\mathcal{T} \end{eqnarray*} is well defined and is a $C^{\infty}$-smooth locally concave function on $\mathcal{M}_\mathcal{T}$. By Lemma \ref{lemma-xu-extension}, each solid angle $\alpha_{ijkl}$ can be extended continuously by constant functions to a generalized solid angle $\tilde{\alpha}_{ijkl}$. Using Luo's extension Lemma \ref{lemma-luo's-extension}, the following integration \begin{eqnarray*} \widetilde F_{ijkl}(r)=\int_{r_0}^r \tilde\alpha_{ijkl}dr_i+\tilde\alpha_{jikl}dr_j+\tilde\alpha_{kijl}dr_k+\tilde\alpha_{lijk}dr_l,\ r\in\mathds{R}^N_{>0} \end{eqnarray*} is well defined, $C^1$-smooth, and extends $F_{ijkl}$. Moreover, $\widetilde F_{ijkl}$ is concave on $\mathds{R}^N_{>0}$. By \begin{eqnarray*} \sum_{i=1}^N\widetilde K_idr_i&=&\sum_{i=1}^N\left(4\pi-\sum_{\{ijkl\}\in \mathcal{T}_3}\tilde\alpha_{ijkl}\right)dr_i\\ &=&\sum_{i=1}^N4\pi dr_i-\sum_{i=1}^N\sum_{\{ijkl\}\in \mathcal{T}_3}\tilde\alpha_{ijkl}dr_i\\ &=&\sum_{i=1}^N4\pi dr_i-\sum_{\{ijkl\}\in\mathcal{T}_3} \left(\tilde\alpha_{ijkl}dr_i+\tilde\alpha_{jikl}dr_j+\tilde\alpha_{kijl}dr_k+\tilde\alpha_{lijk}dr_l\right), \end{eqnarray*} the following integration \begin{equation} \widetilde F(r)=\int_{r_0}^r\sum_{i=1}^N\widetilde K_i dr_i,\ r\in\mathds{R}^N_{>0} \end{equation} is well defined, $C^1$-smooth, and extends the functional $F$ defined in (\ref{Def-potential}). Moreover, $\widetilde F(r)$ is convex on $\mathds{R}^N_{>0}$. We shall prove that the extended Cooper-Rivin functional $\widetilde{\mathcal{S}}(r)$ differs from $\widetilde F(r)$ by a constant. First we show that $\widetilde{\mathcal{S}}(r)$ is $C^1$-smooth.
\begin{theorem} \label{thm-tuta-S-C1-convex} The extended Cooper-Rivin functional $\widetilde{\mathcal{S}}(r)$ is convex on $\mathds{R}^N_{>0}$. Moreover, $\widetilde{\mathcal{S}}(r)\in C^{\infty}(\mathcal{M}_\mathcal{T})\cap C^{1}(\mathds{R}^N_{>0})$. As a consequence, the extended CRG-functional $\tilde{\lambda}(r)$ is $C^1$-smooth on $\mathds{R}^N_{>0}$ and is convex when restricted to the hyperplane $\{x\in\mathds{R}^N:\|x\|_{l^1}=1\}$. \end{theorem} \begin{proof} We just need to show $\widetilde{\mathcal{S}}(r)\in C^1(\mathds{R}^N_{>0})$. For each tetrahedron $\{ijkl\}\in \mathcal{T}_3$, set $$\widetilde{\mathcal{S}}_{ijkl}(r_i,r_j,r_k,r_l)=\tilde\alpha_{ijkl}r_i+\tilde\alpha_{jikl}r_j+\tilde\alpha_{kijl}r_k+\tilde\alpha_{lijk}r_l.$$ For every vertex $p\in\{i,j,k,l\}$, on the open set $\left\{(r_i,r_j,r_k,r_l)\in\mathds{R}^4_{>0}:Q_{ijkl}>0\right\}$ we get \begin{equation}\label{formula-schlaf-extend} \frac{\partial\widetilde{\mathcal{S}}_{ijkl}}{\partial r_p}=\tilde\alpha_{pqst} \end{equation} by the Schl\"{a}fli formula (see Appendix \ref{appen-schlafi}), where $q,s,t$ are the three vertices other than $p$. On the open domain $D_p$, where $\tilde\alpha_{pqst}=2\pi$, we have $\widetilde{\mathcal{S}}_{ijkl}=2\pi r_p$, and hence (\ref{formula-schlaf-extend}) is also valid. On the open domains $D_q$, $D_s$ and $D_t$, where $\tilde\alpha_{pqst}=0$, $\widetilde{\mathcal{S}}_{ijkl}$ equals $2\pi r_q$, $2\pi r_s$ or $2\pi r_t$ respectively, and hence we still have (\ref{formula-schlaf-extend}). By the classical Darboux theorem in mathematical analysis, (\ref{formula-schlaf-extend}) is also valid on $\mathds{R}^4_{>0}\cap\partial\left\{(r_i,r_j,r_k,r_l)\in\mathds{R}^4_{>0}:Q_{ijkl}>0\right\}$. Hence (\ref{formula-schlaf-extend}) is always true on $\mathds{R}^4_{>0}$. Because $\tilde\alpha_{pqst}$ is continuous, we see $\widetilde{\mathcal{S}}_{ijkl}$ is $C^1$-smooth on $\mathds{R}^4_{>0}$.
Further by \begin{equation*} \widetilde{\mathcal{S}}(r)=4\pi\sum_{i=1}^Nr_i-\sum_{\{ijkl\}\in \mathcal{T}_3}\widetilde{\mathcal{S}}_{ijkl}(r_i,r_j,r_k,r_l), \end{equation*} we get the conclusion. \end{proof} \begin{corollary}\label{coro-extend-schlafi-formula} The following extended Schl\"{a}fli formula is valid on $\mathds{R}^N_{>0}$, $$d\left(\sum_{i=1}^N\widetilde{K}_ir_i\right)=\sum_{i=1}^N\widetilde{K}_idr_i.$$ \end{corollary} Corollary \ref{coro-extend-schlafi-formula} implies that $\nabla_r\widetilde{\mathcal{S}}=\widetilde{K}$. Note that $\nabla_r\widetilde{F}=\widetilde{K}$ too, hence we obtain \begin{corollary} $\widetilde{\mathcal{S}}(r)=\widetilde F(r)+\mathcal{S}(r_0)$ on $\mathds{R}^N_{>0}$. \end{corollary} Denote by $K(\mathcal{M}_{\mathcal{T}})$ the image of the curvature map $K: \mathcal{M}_{\mathcal{T}}\rightarrow \mathds{R}^N$. Using the extended Cooper-Rivin functional, we get the following \begin{theorem}\label{thm-extend-xu-rigid} (alternativeness) For each $\bar{K}\in K(\mathcal{M}_{\mathcal{T}})$, up to scaling, $\bar{K}$ is realized by a unique (real or virtual) ball packing in $\mathds{R}^N_{>0}$. In other words, there holds \begin{equation} K(\mathcal{M}_{\mathcal{T}})\cap K(\mathds{R}^N_{>0}\setminus\mathcal{M}_{\mathcal{T}})=\emptyset. \end{equation} \end{theorem} \begin{proof} We need to show that no virtual ball packing can have curvature $\bar{K}$. Suppose otherwise that $\bar{r}'$ is a virtual ball packing with curvature $\bar{K}$. Let $\bar{r}$ be the unique (up to scaling) real ball packing with curvature $\bar{K}$. We may well suppose $\|\bar{r}\|_{l^1}=\|\bar{r}'\|_{l^1}=1$. Now we consider the functional $\mathcal{S}_p(r)=\sum_{i=1}^N(K_i-\bar{K}_i)r_i$, which has the natural extension $$\widetilde{\mathcal{S}}_p(r)=\sum_{i=1}^N(\widetilde{K}_i-\bar{K}_i)r_i.$$ Set $\varphi(t)=\widetilde{\mathcal{S}}_p(\bar{r}+t(\bar{r}'-\bar{r}))$; then we see $\varphi'(0)=\varphi'(1)=0$.
Note that $\widetilde{\mathcal{S}}_p(r)$ is convex when restricted to the hyperplane $\{r:\|r\|_{l^1}=1\}$, hence $\varphi'(t)$ is monotone increasing. This leads to $\varphi'(t)\equiv0$ for any $t\in[0,1]$. For some small $\epsilon>0$, the functional $\widetilde{\mathcal{S}}_p(r)=\mathcal{S}_p(r)$ is strictly convex when restricted to $B(\bar{r},\epsilon\|\bar{r}'-\bar{r}\|)\cap\{r:\|r\|_{l^1}=1\}$, and for any $t\in[0,\epsilon)$, the ball packing $\bar{r}+t(\bar{r}'-\bar{r})$ is real. Hence $\varphi'(t)$ is strictly monotone increasing for $t\in[0,\epsilon)$, which contradicts $\varphi'(t)\equiv0$. This proves the theorem. \end{proof} \begin{remark} \label{Lemma-xu-rigidity} Similar to the proof of Theorem \ref{thm-extend-xu-rigid}, one may prove Xu's global rigidity \cite{Xu}, i.e. a ball packing is determined by its combinatorial scalar curvature $K$ up to scaling. Consequently, the ball packing with constant curvature (if it exists) is unique up to scaling. \end{remark} Theorem \ref{thm-extend-xu-rigid} and its proof have the following interesting corollaries. \begin{corollary} There cannot exist both a real and a virtual packing with constant curvature. Moreover, the set of all constant curvature virtual packings is a convex set in $\mathds{R}^N_{>0}$. \end{corollary} The following theorem is a supplement of Theorem \ref{Thm-Q-min-iff-exist-const-curv-metric}. \begin{theorem} \label{corollary-Q-2} Assume there exists a real ball packing $\hat{r}\in \mathcal{M}_{\mathcal{T}}$ with constant curvature. Then the CRG-functional $\lambda(r)$ has a unique global minimum point in $\mathcal{M}_{\mathcal{T}}$ (up to scaling). \end{theorem} \begin{proof} We restrict our argument to the hyperplane $\{r\in \mathds{R}^N: \|r\|_{l^1}=\|\hat{r}\|_{l^1}\}$, on which $\mathcal{S}$ and $\lambda$ differ by a constant factor. Because $\hat{r}$ has constant curvature, by Lemma \ref{lemma-const-curv-equl-cirtical-point}, $\hat{r}$ is a critical point of $\lambda$.
In particular, it is a critical point of $\widetilde{\mathcal{S}}$. Theorem \ref{thm-tuta-S-C1-convex} says $\widetilde{\mathcal{S}}$ is globally convex on the above hyperplane. Moreover, $\widetilde{\mathcal{S}}$ is locally strictly convex near $\hat{r}$; thus $\hat{r}$ is the unique global minimum point of $\widetilde{\mathcal{S}}$. In particular, it is a global minimum point of $\mathcal{S}$. \end{proof} \section{The extended flow} \subsection{Longtime existence of the extended flow} \label{section-long-exit-extend-flow} In this subsection, we prove that the solution to the flow (\ref{Def-r-normal-Yamabe flow}) can always be extended to a solution that exists for all time. The basic idea is the continuous extension of $K$ to $\widetilde K$, which has appeared in the first two authors' earlier work \cite{GJ1}-\cite{GJ4}. \begin{theorem} \label{thm-yang-write} Consider the normalized combinatorial Yamabe flow (\ref{Def-r-normal-Yamabe flow}). Let $\{r(t)|t\in[0, T)\}$ be the unique maximal solution with $0<T\leq +\infty$. Then we can always extend it to a solution $\{r(t)|t\in[0,+\infty)\}$ when $T<+\infty$. In other words, for any initial real or virtual ball packing $r(0)\in\mathds{R}^N_{>0}$, the solution to the following extended flow \begin{equation} \label{Def-Flow-extended} r_i'(t)=(\tilde \lambda-\widetilde K_i)r_i \end{equation} exists for all time $t\in[0,+\infty)$. \end{theorem} \begin{proof} The proof is similar to that of Proposition \ref{prop-no-finite-time-I-singula}. Since all $\tilde \lambda-\widetilde K_i$ are continuous functions on $\mathds{R}^N_{>0}$, by Peano's existence theorem in classical ODE theory, the extended flow equation (\ref{Def-Flow-extended}) has at least one solution on some interval $[0,\varepsilon)$. By the definition of $\widetilde{K}_i$, all $|\tilde \lambda-\widetilde{K}_i|$ are uniformly bounded by a constant $c(\mathcal{T})>0$, which depends only on the triangulation.
Hence $r_i(0)e^{-c(\mathcal{T})t}\leq r_i(t)\leq r_i(0)e^{c(\mathcal{T})t}$, which implies that $r_i(t)$ cannot go to $0$ or $+\infty$ in finite time. Then by the extension theorem of solutions in ODE theory, the solution exists for all $t\geq 0$. \end{proof} \begin{remark} Set $r_i=2/\sqrt{3}-1$, and all other $r_j=1$ for $j\in V$ and $j\neq i$ in the triangulation $\mathcal{T}$. Recalling Example \ref{example-only-c-zero}, we easily get $\partial K_i/\partial r_j=-\infty$ for every vertex $j$ with $j\thicksim i$. This implies that $\widetilde K_i(r)$ is generally not Lipschitz continuous at boundary points of $\mathcal{M}_{\mathcal{T}}$. Hence we do not know whether the solution $\{r(t)\}_{t\geq0}$ to the extended flow (\ref{Def-Flow-extended}) is unique. \end{remark} Similar to Proposition \ref{prop-V-descending} and Proposition \ref{prop-G-bound}, we have the following proposition, the proof of which is omitted. \begin{proposition}\label{prop-extend-flow-descending} Along the extended Yamabe flow (\ref{Def-Flow-extended}), $\|r(t)\|_{l^1}$ is invariant. The extended Cooper-Rivin functional $\widetilde{\mathcal{S}}(r)$ is descending and bounded. Moreover, the extended CRG-functional $\tilde{\lambda}(r)$ is descending and uniformly bounded. \end{proposition} \subsection{Convergence to constant curvature: general case} \label{subsection-converg-to-const} In this section, we prove some convergence results for the extended flow (\ref{Def-Flow-extended}). The following result says that the extended Yamabe flow tends to find real or virtual packings with constant curvature. We omit its proof, since it is similar to that of Proposition \ref{prop-converg-imply-const-exist}. \begin{theorem} \label{thm-extend-flow-converg-imply-exist-const-curv-packing} If a solution $r(t)$ to the extended flow (\ref{Def-Flow-extended}) converges to some $r_{\infty}\in\mathds{R}^N_{>0}$ as $t\to +\infty$, then $r_{\infty}$ is a constant curvature packing (real or virtual).
\end{theorem} \begin{remark} $r_{\infty}$ may be a virtual packing. One cannot exclude this case in general. \end{remark} \begin{definition} \label{def-tuta-xi} Given a triangulated manifold $(M^3, \mathcal{T})$, let $\hat{r}$ be a real ball packing with constant curvature. We introduce an extended combinatorial invariant with respect to the triangulation $\mathcal{T}$ as \begin{equation} \tilde{\chi}(\hat{r},\mathcal{T})=\inf\limits_{\gamma\in{\mathbb{S}}^{N-1};\|\gamma\|_{l^1}=0\;}\sup\limits_{0\leq t< \tilde{a}_\gamma}\tilde{\lambda}(\hat{r}+t\gamma), \end{equation} where $\tilde{a}_{\gamma}$ is the least upper bound of $t$ such that $\hat{r}+t\gamma\in \mathds{R}_{>0}^N$. \end{definition} As is explained in the paragraph before Theorem \ref{Thm-xi-invariant-imply-converg}, the extended invariant $\tilde{\chi}(\hat{r},\mathcal{T})$ is also a pure combinatorial-topological invariant. Obviously, \begin{equation} \tilde{\chi}(\hat{r},\mathcal{T})\geq\chi(\hat{r},\mathcal{T}). \end{equation} Similar to Theorem \ref{Thm-xi-invariant-imply-converg}, we have \begin{theorem}\label{Thm-tuta-xi-invariant-imply-converg} Assume $\hat{r}$ is a real ball packing with constant curvature. Moreover, assume \begin{equation} \tilde{\lambda}(r(0))\leq\tilde{\chi}(\hat{r},\mathcal{T}). \end{equation} Then the solution to the extended normalized flow (\ref{Def-Flow-extended}) exists for all time $t\in[0,+\infty)$ and converges exponentially fast to a real ball packing with constant curvature. \end{theorem} \begin{proof} If $\tilde{\lambda}(r(0))\leq \chi(\hat{r},\mathcal{T})$, then Theorem \ref{Thm-xi-invariant-imply-converg} implies the conclusion directly. Hence we may assume $\tilde{\lambda}(r(0))>\chi(\hat{r},\mathcal{T})>Y_{\mathcal{T}}$. Denote $\tilde{\lambda}(t)=\tilde{\lambda}(r(t))$. It is easy to get \begin{equation} \label{for-tilde-lambda'} \tilde{\lambda}'(t)=-\frac{1}{\|r\|_{l^1}}\sum_ir_i(\widetilde{K}_i-\tilde{\lambda})^2\leq0. \end{equation} We show $\tilde{\lambda}'(0)<0$.
Otherwise $\tilde{\lambda}'(0)=0$ by (\ref{for-tilde-lambda'}), and $\widetilde{K}_i=\tilde{\lambda}$ for all $i\in V$. Hence the real or virtual packing $r(0)$ has constant curvature. By the alternativeness (Theorem \ref{thm-extend-xu-rigid}), $r(0)$ must be a real ball packing. Hence by Theorem \ref{corollary-Q-2}, we get $\tilde{\lambda}(r(0))=Y_{\mathcal{T}}$, which is a contradiction. Hence $\tilde{\lambda}'(0)<0$. Therefore, $\tilde{\lambda}(r(t))$ is strictly descending along (\ref{Def-Flow-extended}) for at least a small time interval $t\in[0,\epsilon)$. Thus $r(t)$ never touches the boundary of $\mathds{R}_{>0}^N$ along (\ref{Def-Flow-extended}). By classical ODE theory, the solution $r(t)$ exists for all time $t\in[0,+\infty)$. Moreover, $\{r(t)\}_{t\geq0}$ lies in a compact subset of $\mathds{R}_{>0}^N$. Consider the functional $\widetilde{\mathcal{S}}_Y=\sum_i(\widetilde{K}_i-Y_{\mathcal{T}})r_i$. By the extended Schl\"{a}fli formula in Corollary \ref{coro-extend-schlafi-formula}, we get $d\widetilde{\mathcal{S}}_Y=\sum_i(\widetilde{K}_i-Y_{\mathcal{T}})dr_i$. As a consequence, using $\sum_ir_i(\tilde{\lambda}-\widetilde{K}_i)=0$, along (\ref{Def-Flow-extended}) we have $$\widetilde{\mathcal{S}}'_Y(t)=-\sum_ir_i(\widetilde{K}_i-\tilde{\lambda})^2\leq0.$$ Hence $\widetilde{\mathcal{S}}_Y(t)$ is descending. Because $\widetilde{\mathcal{S}}_Y\geq0$, the limit $\widetilde{\mathcal{S}}_Y(+\infty)$ exists. Hence, by the mean value theorem, there is a time sequence $t_n\uparrow+\infty$ with $t_n\in(n,n+1)$ such that $$\widetilde{\mathcal{S}}'_Y(t_n)=\widetilde{\mathcal{S}}_Y(n+1)-\widetilde{\mathcal{S}}_Y(n)\rightarrow0.$$ Note $\tilde{\lambda}(+\infty)$ exists by (\ref{for-tilde-lambda'}) and Proposition \ref{prop-extend-flow-descending}. It follows that $$r_i(t_n)(\widetilde{K}_i(t_n)-\tilde{\lambda}(+\infty))\rightarrow0$$ at each vertex $i\in V$.
Because $\{r(t)\}_{t\geq0}$ lies in a compact subset of $\mathds{R}_{>0}^N$, we may choose a subsequence of $\{t_n\}_{n\geq1}$, still denoted by $\{t_n\}_{n\geq1}$, so that $r(t_n)\rightarrow r^*\in\mathds{R}_{>0}^N$. Because the extended curvature $\widetilde{K}_i$ is continuous, $\widetilde{K}_i(t_n)\rightarrow\widetilde{K}_i(r^*)$. This implies that $r^*$ is a real or virtual ball packing with constant curvature. By the alternative in Theorem \ref{thm-extend-xu-rigid}, $r^*$ must be a real ball packing. Because $r^*$ is asymptotically stable (see Lemma \ref{Thm-3d-isolat-const-alpha-metric}) and $r(t)$ approaches $r^*$ along the time sequence $\{t_n\}$, the solution $r(t)$ converges to $r^*$ as $t$ goes to $+\infty$. By Lemma \ref{Lemma-ODE-asymptotic-stable}, the convergence rate of $r(t)$ is exponential. \end{proof} \begin{definition} The extended combinatorial Yamabe invariant $\widetilde{Y}_{\mathcal{T}}$ (with respect to a triangulation $\mathcal{T}$) is \begin{equation} \widetilde{Y}_{\mathcal{T}}=\inf_{r\in \mathds{R}_{>0}^N} \tilde{\lambda}(r)=\inf_{r\in \mathds{R}_{>0}^N}\frac{\sum_{i=1}^N \tilde K_i r_i}{\sum_{i=1}^N r_i}. \end{equation} \end{definition} From the definition of $\widetilde{Y}_{\mathcal{T}}$, we see $\widetilde{Y}_{\mathcal{T}}\leq Y_{\mathcal{T}}$. Moreover, by Corollary \ref{corollary-Q-2} we have \begin{corollary}\label{Thm-Y-equal-tuta-Y} Assume there exists a real packing $\hat{r}\in \mathcal{M}_{\mathcal{T}}$ with constant curvature. Then \begin{equation} \widetilde{Y}_{\mathcal{T}}=Y_{\mathcal{T}}. \end{equation} \end{corollary} \begin{conjecture} If $\widetilde{Y}_{\mathcal{T}}=Y_{\mathcal{T}}$, then there exists a ball packing (real or virtual) with constant curvature. \end{conjecture} We say the extended combinatorial Yamabe invariant $\widetilde{Y}_{\mathcal{T}}$ is \emph{attainable} if the extended CRG-functional $\tilde{\lambda}(r)$ has a global minimum in $\mathds{R}^N_{>0}$.
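As a small numerical illustration of the objects involved (not part of the original argument), one can take the boundary of the $4$-simplex: a regular triangulation of $S^3$ with $5$ vertices and $5$ tetrahedra, in which every vertex lies in exactly $4$ tetrahedra. The sketch below checks that the packing with all radii equal has constant curvature and that the CRG-functional $\lambda(r)=\sum_iK_ir_i/\sum_ir_i$ equals this common curvature value there; the variable names are my own.

```python
import math

# Solid angle at a vertex of a regular Euclidean tetrahedron.
bar_alpha = 3 * math.acos(1.0 / 3.0) - math.pi

# Boundary of the 4-simplex: 5 vertices, 5 tetrahedra,
# each vertex lies in exactly 4 of them (a regular triangulation of S^3).
N, deg = 5, 4

# With all radii equal (r_i = 1), every tetrahedron is regular, so the
# combinatorial scalar curvature is K_i = 4*pi - (sum of solid angles at i).
r = [1.0] * N
K = [4 * math.pi - deg * bar_alpha for _ in range(N)]

# The CRG-functional lambda(r) = sum(K_i r_i) / sum(r_i).
lam = sum(Ki * ri for Ki, ri in zip(K, r)) / sum(r)

print(bar_alpha)        # ~0.5513, solid angle of a regular tetrahedron
print(K[0])             # the same curvature value at every vertex
print(abs(lam - K[0]))  # lambda equals the common curvature value
```

Because all $K_i$ coincide, this packing has constant curvature and realizes the infimum defining $\lambda$ restricted to its scaling ray.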
Similar to Theorem \ref{Thm-Q-min-iff-exist-const-curv-metric} and Corollary \ref{corollary-Q-2}, we have \begin{theorem}\label{Thm-tuta-yamabe-invarint} Given a triangulated manifold $(M^3, \mathcal{T})$, the following four statements are mutually equivalent. \begin{enumerate} \item There exists a real or virtual ball packing $\hat{r}$ with constant curvature. \item The extended CRG-functional $\tilde{\lambda}(r)$ has a local minimum in $\mathds{R}^N_{>0}$. \item The extended CRG-functional $\tilde{\lambda}(r)$ has a global minimum in $\mathds{R}^N_{>0}$. \item The extended Yamabe invariant $\widetilde{Y}_{\mathcal{T}}$ is attainable by a real or virtual ball packing. \end{enumerate} Moreover, if $\widetilde{Y}_{\mathcal{T}}$ is attained by a virtual packing, the set of virtual packings that realize $\widetilde{Y}_{\mathcal{T}}$ equals the set of constant curvature virtual packings, which forms a convex set in $\mathds{R}^N_{>0}$. \end{theorem} \subsection{Convergence with regular triangulation} \label{section-regular-converge} The main purpose of this section is to prove the following theorem: \begin{theorem}\label{Thm-regular-triangu-converge} Assume the triangulation $\mathcal{T}$ is regular. Then the solution $\{r(t)\}_{t\geq0}$ to the extended Yamabe flow (\ref{Def-Flow-extended}) converges exponentially fast to the unique real packing with constant curvature as $t$ goes to $+\infty$. \end{theorem} Before giving the proof of the above theorem, we need to study some deeper relations between $r$ and $\tilde{\alpha}$. We also need to compare a conformal tetrahedron with a regular tetrahedron. By definition, a regular tetrahedron is one with all four radii equal. Hence all four solid angles are equal in a regular tetrahedron. We denote this angle by $\bar{\alpha}$, i.e. \begin{equation} \bar{\alpha}=3\cos^{-1}\frac{1}{3}-\pi.
\end{equation} Let $\tau=\{1234\}$ be a conformal tetrahedron (real or virtual) configured by four mutually externally tangent balls with radii $r_1$, $r_2$, $r_3$ and $r_4$. For each vertex $i\in\{1,2,3,4\}$, let $\alpha_i$ be the solid angle at $i$. Recall $\tilde{\alpha}_i$ is the continuous extension of $\alpha_i$. \begin{lemma}\label{Lemma-Glicken-big-r-small-angle} (Glickenstein, Lemma 7 \cite{G2}) $\alpha_i\geq\alpha_j$ if and only if $r_i\leq r_j$. \end{lemma} It is easy to show that Glickenstein's Lemma \ref{Lemma-Glicken-big-r-small-angle} also holds true for virtual tetrahedra, that is, $\tilde{\alpha}_i\geq\tilde{\alpha}_j$ if and only if $r_i\leq r_j$. The following two lemmas are very important. They establish two comparison principles for the extended solid angles between a general tetrahedron (real or virtual) and a regular one. \begin{lemma}\label{Lemma-compare-regular-1} (first comparison principle) If $r_j$ is maximal, then $\tilde{\alpha}_j\leq\bar{\alpha}$. \end{lemma} \begin{proof} Let $r_j=\max\{r_1, r_2, r_3, r_4\}$ be maximal. If the tetrahedron $\tau$ is virtual, then either $\tilde{\alpha}_j=0$ or $\tilde{\alpha}_j=2\pi$. By the definition of the solid angle $\tilde{\alpha}$ (see Section \ref{section-extend-solid-angle}), $\tilde{\alpha}_j=2\pi$ implies that $\tau\in D_j$. By Lemma \ref{lemma-Di-imply-ri-small}, $r_j$ is strictly minimal, which is a contradiction. Hence we get $\tilde{\alpha}_j=0<\bar{\alpha}$. If the conformal tetrahedron $\tau$ is real, by Lemma \ref{Lemma-Glicken-big-r-small-angle}, $\alpha_j$ is the minimum of $\{\alpha_1, \alpha_2, \alpha_3, \alpha_4\}$. Denote $r=(r_1,r_2,r_3,r_4)$. Consider the functional $$\varphi(r)=\sum_{i=1}^4(\alpha_i-\bar{\alpha})r_i,\;r\in\Omega_{1234}.$$ By the Schl\"{a}fli formula, we get $d\varphi=\sum_{i=1}^4(\alpha_i-\bar{\alpha})dr_i$.
Differentiating again, we have $$\text{Hess}\varphi=\frac{\partial(\alpha_1,\alpha_2,\alpha_3,\alpha_4)}{\partial(r_1,r_2,r_3,r_4)}.$$ By Lemma \ref{Lemma-Lambda-semi-positive}, Hess$\varphi$ is negative semi-definite with rank $3$ and kernel $\{tr:t\in\mathds{R}\}$. Hence $\varphi$ is locally strictly concave when restricted to the hyperplane $\{r:\sum_{i=1}^4r_i=1\}$. Similarly (see Section \ref{section-extend-CR-funct} and Theorem \ref{thm-tuta-S-C1-convex}), the extended functional $$\tilde{\varphi}(r)=\sum_{i=1}^4(\tilde{\alpha}_i-\bar{\alpha})r_i,\;r\in\mathds{R}_{>0}^4$$ is $C^1$-smooth and is concave on $\mathds{R}_{>0}^4$. Because it equals $\varphi$ on $\Omega_{1234}$, it is $C^{\infty}$-smooth on $\Omega_{1234}$ and is strictly concave on $\Omega_{1234}\cap\{r:\sum_{i=1}^4r_i=1\}$. Noting $d\tilde{\varphi}=\sum_{i=1}^4(\tilde{\alpha}_i-\bar{\alpha})dr_i$, we see $\nabla\tilde{\varphi}=\tilde{\alpha}-\bar{\alpha}$, implying that the regular radius $\bar{r}=(1,1,1,1)$ is the unique critical point of $\tilde{\varphi}$ up to scaling. It follows that $\tilde{\varphi}$ attains its maximum at $\bar{r}$. Thus $\varphi(r)\leq\varphi(\bar{r})=0$. Since $\alpha_j=\min\{\alpha_1,\alpha_2,\alpha_3,\alpha_4\}$, the conclusion follows from $$\min\{\alpha_1,\alpha_2,\alpha_3,\alpha_4\}\leq\frac{\sum_{i=1}^4\alpha_ir_i}{\sum_{i=1}^4 r_i}\leq\bar{\alpha}.$$ \end{proof} \begin{lemma}\label{Lemma-compare-regular-2} (second comparison principle) If $r_i$ is minimal, then $\tilde{\alpha}_i\geq\bar{\alpha}$. \end{lemma} \begin{proof} Let $r_i=\min\{r_1, r_2, r_3, r_4\}$ be minimal. Let $j$, $k$ and $l$ be the other three vertices in $\{1,2,3,4\}$, different from $i$.
We first prove the following two facts: \begin{enumerate} \item If the conformal tetrahedron $\tau$ is virtual, then $\tilde{\alpha}_i=2\pi$; \item If $\tau$ is real, then $\partial\alpha_i/\partial r_j$, $\partial\alpha_i/\partial r_k$ and $\partial\alpha_i/\partial r_l$ are positive, while $\partial\alpha_i/\partial r_i$ is negative. \end{enumerate} To get fact 1, assume $\tau$ is virtual; then either $\tilde{\alpha}_i=0$ or $\tilde{\alpha}_i=2\pi$. However, $\tilde{\alpha}_i=0$ is impossible. In fact, if this happens, then $\tilde{\alpha}_p=2\pi$ for some $p\in\{1,2,3,4\}$ with $p\neq i$. By the definition of the extended solid angles $\tilde{\alpha}$ (see Section \ref{section-extend-solid-angle}), we know $\tau\in D_p$. By Lemma \ref{lemma-Di-imply-ri-small}, $r_p$ is strictly minimal. This contradicts the assumption that $r_i$ is minimal. The only possible case left is $\tilde{\alpha}_i=2\pi$, which implies the conclusion. To get fact 2, assume $\tau$ is non-degenerate, i.e. real. Glickenstein (see formula (7) in \cite{G1}) calculated $$\frac{\partial\alpha_i}{\partial r_j}=\frac{4r_ir_jr_k^2r^2_l}{3P_{ijk}P_{ijl}V_{ijkl}} \left(\frac{1}{r_i}\left(\frac{1}{r_j}+\frac{1}{r_k}+\frac{1}{r_l}\right)+\frac{1}{r_j}\left(\frac{1}{r_i}+\frac{1}{r_k}+\frac{1}{r_l}\right)- \left(\frac{1}{r_k}-\frac{1}{r_l}\right)^2\right),$$ where $P_{ijk}=2(r_i+r_j+r_k)$ is the perimeter of the triangle $\{ijk\}$ and $V_{ijkl}$ is the volume of the conformal tetrahedron $\tau$. Since $r_i$ is minimal, we get \begin{align*} &\frac{1}{r_i}\left(\frac{1}{r_j}+\frac{1}{r_k}+\frac{1}{r_l}\right)+\frac{1}{r_j}\left(\frac{1}{r_i}+\frac{1}{r_k}+\frac{1}{r_l}\right)- \left(\frac{1}{r_k}-\frac{1}{r_l}\right)^2\\[8pt] &>\frac{1}{r_ir_j}-\frac{1}{r^2_j}+\frac{1}{r_ir_l}-\frac{1}{r^2_l}=\frac{r_j-r_i}{r_ir^2_j}+\frac{r_l-r_i}{r_ir^2_l}\geq0. \end{align*} Hence $\partial\alpha_i/\partial r_j>0$.
Similarly, we have $\partial\alpha_i/\partial r_k>0$ and $\partial\alpha_i/\partial r_l>0$. From $$\frac{\partial\alpha_i}{\partial r_i}r_i+\frac{\partial\alpha_i}{\partial r_j}r_j+ \frac{\partial\alpha_i}{\partial r_k}r_k+\frac{\partial\alpha_i}{\partial r_l}r_l=0,$$ we further get $\partial\alpha_i/\partial r_i<0$. Thus we get the above two facts. Now we come back to the initial setting, where $r_i=\min\{r_1, r_2, r_3, r_4\}$ is minimal. Since the final conclusion is symmetric with respect to $r_j$, $r_k$ and $r_l$, we may suppose $$r_i\leq r_j\leq r_k\leq r_l.$$ If the initial conformal tetrahedron $\tau$ is virtual, then we get the conclusion already. If the initial conformal tetrahedron $\tau$ is real, then $\alpha_i<2\pi$ by definition. We use a continuity method to get the final conclusion, proceeding in three steps: Step 1. Let $r_i$ increase to $r_j$. From fact 2 above we get $\partial\alpha_i/\partial r_i<0$. It follows that $\alpha_i$ is descending. Moreover, the ordering $r_i\leq r_j\leq r_k\leq r_l$ is maintained along this procedure, hence degeneration never happens (otherwise $\tilde{\alpha}_i=2\pi$ by fact 1 above, contradicting the fact that $\alpha_i<2\pi$ is descending). After this step, we get $$r_i=r_j\leq r_k\leq r_l.$$ Step 2. Let $r_k$ decrease to $r_j=r_i$. From fact 2 above we get $\partial\alpha_i/\partial r_k>0$. It follows that $\alpha_i$ is descending. Moreover, the ordering $r_i=r_j\leq r_k\leq r_l$ is maintained along this procedure, hence degeneration never happens (otherwise $\tilde{\alpha}_i=2\pi$ by fact 1 above, contradicting the fact that $\alpha_i<2\pi$ is descending). After this step, we get $$r_i=r_j=r_k\leq r_l.$$ Step 3. Let $r_l$ decrease to $r_k=r_j=r_i$. Similar to Step 2, from $\partial\alpha_i/\partial r_l>0$ we see that $\alpha_i$ is descending.
Moreover, the ordering $r_i=r_j=r_k\leq r_l$ is maintained along this procedure, hence no degeneration happens (otherwise $\tilde{\alpha}_i=2\pi$, contradicting the fact that $\alpha_i<2\pi$ is descending). After this step, we finally get $$r_i=r_j=r_k=r_l.$$ Note that along these procedures, $\alpha_i$ is always descending and finally tends to $\bar{\alpha}$. Hence we have $\alpha_i\geq\bar{\alpha}$, which is the conclusion. \end{proof} Now it is time to prove Theorem \ref{Thm-regular-triangu-converge}. The following proof can be viewed as deriving a Harnack-type inequality.\\ \noindent\emph{Proof of Theorem \ref{Thm-regular-triangu-converge}.} Assume at some time $t$, $r_i(t)$ is minimal while $r_j(t)$ is maximal. By the two comparison principles Lemma \ref{Lemma-compare-regular-1} and Lemma \ref{Lemma-compare-regular-2}, we see for each tetrahedron containing the vertex $i$ that $\tilde{\alpha}_i\geq\bar{\alpha}$, and for each tetrahedron containing the vertex $j$ that $\bar \alpha\geq\tilde{\alpha}_j$. Comparing the extended flow $d\ln r_i/dt=\tilde \lambda-\widetilde{K}_i$ at $i$ with $d\ln r_j/dt=\tilde \lambda-\widetilde{K}_j$ at $j$, we get \begin{align*} \frac{d}{dt}\left(\min\limits_{p,q\in V}\Big\{\frac{r_{p}(t)}{r_{q}(t)}\Big\}\right)&=\frac{d}{dt}\left(\frac{r_i(t)}{r_j(t)}\right)\\[5pt] &=\frac{r_i(t)}{r_j(t)}\big(\widetilde{K}_j-\widetilde{K}_i\big)\\[5pt] &=\frac{r_i(t)}{r_j(t)}\left(\sum \tilde{\alpha}_i-\sum \tilde{\alpha}_j\right)\geq0. \end{align*} It follows that $\min\limits_{p,q\in V}\Big\{\cfrac{r_{p}(t)}{r_{q}(t)}\Big\}$ is non-descending along the extended flow (\ref{Def-Flow-extended}). Hence there is a constant $c>1$ so that $$c^{-1}r_p(t)\leq r_q(t)\leq c r_p(t)$$ for all $p, q\in V$ and all $t\geq0$. Since $\sum_ir_i(t)$ is invariant along (\ref{Def-Flow-extended}), this implies that the solution $\{r(t)\}_{t\geq0}$ lies in a compact subset of $\mathds{R}^N_{>0}$.
Similar to the proof of Theorem \ref{Thm-nosingular-imply-converg} and Theorem \ref{Thm-tuta-xi-invariant-imply-converg}, we can find a particular real or virtual ball packing $\hat{r}\in\mathds{R}^N_{>0}$ that has constant curvature. Note that the packing with all $r_i=1$ provides a real ball packing with constant curvature (since the triangulation $\mathcal{T}$ is regular), and the alternative in Theorem \ref{thm-extend-xu-rigid} says that a constant curvature real packing and a constant curvature virtual packing cannot exist simultaneously; hence $\hat{r}$ is a real packing. Thus we get the conclusion.\qed \begin{remark} We cannot exclude the possibility that $r(t)$ runs outside $\mathcal{M}_{\mathcal{T}}$ on the way to $\hat{r}$. Consider the extreme case that the initial packing $r(0)$ is virtual. However, any solution $r(t)$ eventually runs into $\mathcal{M}_{\mathcal{T}}$ and converges to $\hat{r}$ exponentially fast. \end{remark} We want to prove the convergence of (\ref{Def-Flow-extended}) under the sole assumption that there exists a real packing with constant curvature. Indeed, we can do so for combinatorial Yamabe (or Ricci, Calabi) flows on surfaces, see \cite{Ge}--\cite{GJ4}, \cite{GX2}--\cite{ge-xu}. In three dimensions, the trouble comes from the fact that $\widetilde{\mathcal{S}}(r)$ is not proper on the hyperplane $\{r:\sum_ir_i=1\}$. We expect that there is a suitable continuous extension of $K_i$, so that $\widetilde{\mathcal{S}}(r)$ is proper, and hence the corresponding extended flow converges. \subsection{A conjecture for degree difference $\leq10$ triangulations} Inspired by the two comparison principles Lemma \ref{Lemma-compare-regular-1} and Lemma \ref{Lemma-compare-regular-2} and the proof of Theorem \ref{Thm-regular-triangu-converge}, we pose the following \begin{conjecture} \label{conjec-differ-small-11} Let $d_i$ be the vertex degree at $i$. Assume $|d_i-d_j|\leq 10$ for each $i,j\in V$. Then there exists a real or virtual ball packing with constant curvature.
\end{conjecture} In the above conjecture, the basic assumption is \emph{combinatorial}, while the conclusion is \emph{geometric}. Thus it builds a connection between combinatorics and geometry: the combinatorics affects the geometry. It is based on the following intuitive but not so rigorous observation: If the solution $r(t)$ lies in a compact subset of $\mathds{R}^N_{>0}$, then $r(t)$ converges to some real or virtual packing $\hat{r}$ with constant curvature. This was used in the proof of Theorem \ref{Thm-regular-triangu-converge}. If $r(t)$ does not lie in any compact subset of $\mathds{R}^N_{>0}$, then it touches the boundary of $\mathds{R}^N_{>0}$. Because $\sum_ir_i(t)$ is invariant, all $r_i(t)$ are bounded above. Thus at least one coordinate of $r(t)$, say $r_i(t)$, goes to $0$ as time goes to infinity. We may assume $r_i(t)$ is minimal, while $r_p(t)$ is maximal. It seems that there is at least one tetrahedron $\{ijkl\}$ tending to degenerate. It is easy to see $\sum\tilde{\alpha}_i\geq 2\pi+(d_i-1)\bar{\alpha}$. Hence by the assumption that all degree differences are no more than $10$, there holds \begin{equation*} \sum\tilde{\alpha}_i-\sum\tilde{\alpha}_p\geq 2\pi+(d_i-1)\bar{\alpha}-d_p\bar{\alpha}\geq2\pi-11\bar{\alpha}>0. \end{equation*} Therefore $$\frac{d}{dt} \left(\frac{r_{i}(t)}{r_{p}(t)}\right)=\left(\frac{r_i(t)}{r_p(t)}\right)\big(\sum\tilde{\alpha}_i-\sum\tilde{\alpha}_p\big)>0.$$ This shows that $r_i(t)/r_p(t)$ has no tendency to descend, which contradicts the fact that $r_i(t)$ goes to $0$. From the above analysis, we see the main difficulty is how to show that at the minimum $r_i(t)$ there is a tetrahedron $\{ijkl\}$ tending to degenerate. This may be overcome by combinatorial techniques. However, we can adjust the above explanation to show \begin{theorem} If each vertex degree is no more than $11$, there exists a real or virtual ball packing with constant curvature. \end{theorem} \begin{proof} We sketch the proof by contradiction.
If there is a tetrahedron $\{ijkl\}$ that tends to degenerate at infinity, then at least one solid angle, say $\alpha_{ijkl}$, tends to $2\pi$. Hence $r_i$ is the strict minimum of $\{r_i,r_j,r_k,r_l\}$. Moreover, $r_i$ tends to $0$ at infinity, for otherwise $r(t)$ would lie in a compact subset of $\mathds{R}^N_{>0}$. Let $r_p(t)$ be maximal, then $$\frac{d}{dt} \left(\frac{r_{i}(t)}{r_{p}(t)}\right)= \left(\frac{r_i(t)}{r_p(t)}\right)\Big(\sum\tilde{\alpha}_i-\sum\tilde{\alpha}_p\Big)\geq \left(\frac{r_i(t)}{r_p(t)}\right)(2\pi-d_p\bar{\alpha})>0.$$ This leads to a contradiction. \end{proof} \subsection{A prescribed curvature problem} For any $\bar{K}=(\bar{K}_1, \cdots, \bar{K}_N)$, we want to know if it can be realized as the combinatorial scalar curvature of some real or virtual ball packing $\bar{r}$. In other words, is there a packing $\bar{r}$ so that the corresponding curvature satisfies $K(\bar{r})=\bar{K}$? Similarly, we consider the following\\[8pt] \noindent \textbf{Prescribed Curvature Problem:} \emph{Is there a real ball packing with the prescribed combinatorial scalar curvature $\bar{K}$ in the combinatorial conformal class $\mathcal{M}_{\mathcal{T}}$? How to find it?}\\[8pt] By Xu's global rigidity (see Remark \ref{Lemma-xu-rigidity}), a real ball packing $\bar{r}$ that realizes $\bar{K}$ is unique up to scaling. To study the above ``Prescribed Curvature Problem'', we introduce: \begin{definition} Any $\bar{K}\in\mathds{R}^N_{>0}$ is called a prescribed curvature. Given any prescribed curvature $\bar{K}$, the prescribed Cooper-Rivin functional is defined as $$\mathcal{S}_p(r)=\sum_{i=1}^N(K_i-\bar{K}_i)r_i.$$ The prescribed CRG-functional is defined as $\lambda_p(r)=\mathcal{S}_p(r)/\|r\|_{l^1}$. \end{definition} We summarize some results related to the ``Prescribed Curvature Problem'' as follows. We omit their proofs since they are similar to the results in previous sections. \begin{theorem}\label{Thm-Q-bar-curv} Given a triangulated manifold $(M^3, \mathcal{T})$.
The following three properties are mutually equivalent. \begin{enumerate} \item The ``Prescribed Curvature Problem'' is solvable. That is, there exists a real packing $\bar{r}$ that realizes $\bar{K}$. \item The prescribed CRG-functional $\lambda_p(r)$ has a local minimum in $\mathcal{M}_{\mathcal{T}}$. \item The prescribed CRG-functional $\lambda_p(r)$ has a global minimum in $\mathcal{M}_{\mathcal{T}}$. \end{enumerate} \end{theorem} \begin{proposition} If the prescribed combinatorial Yamabe flow \begin{equation}\label{Def-Flow-modified} \frac{dr_i}{dt}=(\bar{K}_i-K_i)r_i \end{equation} converges, then $r(+\infty)$, the solution to (\ref{Def-Flow-modified}) at infinity, solves the ``Prescribed Curvature Problem''. Conversely, if the ``Prescribed Curvature Problem'' is solvable by a real packing $\bar{r}$, and if the initial real packing $r(0)$ is sufficiently close to $\bar{r}$, then the solution $r(t)$ to (\ref{Def-Flow-modified}) exists for all time and converges exponentially fast to $\bar{r}$. \end{proposition} \begin{definition} Let $\bar{r}\in \mathcal{M}_{\mathcal{T}}$ be a real ball packing. Define a prescribed combinatorial-topological invariant (with respect to $\bar{r}$ and $\mathcal{T}$) as \begin{equation} \chi(\bar{r},\mathcal{T})=\inf\limits_{\gamma\in{\mathbb{S}}^{N-1};\|\gamma\|_{l^1}=1\;}\sup\limits_{0\leq t< a_{\bar{r},\gamma}}\lambda_p(\bar{r}+t\gamma), \end{equation} where $a_{\bar{r},\gamma}$ is the least upper bound of $t$ such that $\bar{r}+t\gamma\in \mathcal{M}_{\mathcal{T}}$. \end{definition} \begin{theorem}\label{Thm-prescrib-xi-invariant-imply-converg} Let $\bar{r}$ be a real ball packing with curvature $\bar{K}=K(\bar{r})$. Consider the prescribed flow (\ref{Def-Flow-modified}). If the initial real packing $r(0)$ satisfies \begin{equation} \lambda_p(r(0))\leq\chi(\bar{r},\mathcal{T}), \end{equation} then the solution to (\ref{Def-Flow-modified}) exists for all time $t\geq 0$ and converges exponentially fast to $\bar{r}$.
\end{theorem} \begin{theorem} Given any initial ball packing (real or virtual) $r(0)\in\mathds{R}^N_{>0}$, the following extended prescribed combinatorial Yamabe flow \begin{equation} \label{Def-Flow-modified-extended} r_i'(t)=(\bar K_i-\tilde K_i)r_i \end{equation} has a solution $r(t)$ with $t\in[0,+\infty)$. In other words, the solution to (\ref{Def-Flow-modified}) can always be extended to a solution that exists for all time $t\geq 0$. \end{theorem} One can also extend the prescribed CRG-functional $\lambda_p(r)$, $r\in\mathcal{M}_{\mathcal{T}}$ to $\tilde{\lambda}_p(r)$, $r\in\mathds{R}^N_{>0}$, and introduce a combinatorial-topological invariant (with respect to a real ball packing $\bar{r}$ and the triangulation $\mathcal{T}$) \begin{equation} \tilde{\chi}(\bar{r},\mathcal{T})=\inf\limits_{\gamma\in{\mathbb{S}}^{N-1};\|\gamma\|_{l^1}=1\;}\sup\limits_{0\leq t< \tilde{a}_{\bar{r},\gamma}}\tilde{\lambda}_p(\bar{r}+t\gamma), \end{equation} where $\tilde{a}_{\bar{r},\gamma}$ is the least upper bound of $t$ such that $\bar{r}+t\gamma\in \mathds{R}_{>0}^N$. Similar to Theorem \ref{Thm-tuta-xi-invariant-imply-converg}, we have \begin{theorem} \label{Thm-extend-prescrib-xi-invariant-imply-converg} Let $\bar{r}$ be a real ball packing with curvature $\bar{K}=K(\bar{r})$. If the initial ball packing $r(0)$ (real or virtual) satisfies \begin{equation} \tilde{\lambda}_p(r(0))\leq\tilde{\chi}(\bar{r},\mathcal{T}), \end{equation} then the solution to (\ref{Def-Flow-modified-extended}) exists for all time $t\geq 0$ and converges exponentially fast to $\bar{r}$. \end{theorem} \section{Appendix} \subsection{The Schl\"{a}fli formula} \label{appen-schlafi} Given a Euclidean tetrahedron $\tau$ with four vertices $1,2,3,4$, for each edge $\{ij\}$ denote by $l_{ij}$ the edge length of $\{ij\}$ and by $\beta_{ij}$ the dihedral angle at the edge $\{ij\}$.
The classical Schl\"{a}fli formula reads $\sum_{i\thicksim j}l_{ij}d\beta_{ij}=0$, where the sum is taken over all six edges of $\tau$. If $\tau$ is configured by four mutually externally tangent balls with radii $r_1, r_2, r_3$ and $r_4$, then on one hand, \begin{align*} d\Big(\sum_{i\thicksim j}l_{ij}\beta_{ij}\Big)=\sum_{i\thicksim j}\beta_{ij}dl_{ij}&=\sum_{i\thicksim j}\beta_{ij}\big(dr_i+dr_j\big)\\ &=\sum_i\Big(\sum_{j:j\thicksim i}\beta_{ij}\Big)dr_i=\sum_i\big(\alpha_i+\pi\big)dr_i. \end{align*} On the other hand, $$\sum\limits_{i\thicksim j}l_{ij}\beta_{ij}=\sum\limits_{i\thicksim j}\beta_{ij}(r_i+r_j)=\sum_{i}\Big(\sum\limits_{j:j\thicksim i}\beta_{ij}\Big)r_i=\sum_{i}\big(\alpha_i+\pi\big)r_i.$$ Hence we obtain $d\Big(\sum_i\alpha_ir_i\Big)=\sum_{i}\alpha_idr_i$. For a triangulated manifold $(M^3,\mathcal{T})$ with a ball packing $r$, we can further get $d\Big(\sum_iK_ir_i\Big)=\sum_{i}K_idr_i$. \subsection{The proof of Lemma \ref{Lemma-Lambda-positive}} \label{appendix-2} \begin{proof} Denote $\mathscr{U}=\{u\in\mathds{R}^N|\sum_iu_i=0\}$. Set $\alpha=(1,\cdots,1)^T/\sqrt{N}$. Choose $A\in O(N)$ such that $A\alpha=(0,\cdots,0,1)^T$; then $A$ transforms $\mathscr{U}$ to $\{\zeta\in\mathds{R}^N|\zeta_N=0\}$. It is easy to see that $A$ transforms $\{r\in\mathds{R}^N|\sum_ir_i=1\}$ to $\{\zeta\in\mathds{R}^N|\zeta_N=1/\sqrt{N}\}$. Defining $g(\zeta_1,\cdots,\zeta_{N-1})\triangleq S(A^T(\zeta_1,\cdots,\zeta_{N-1},1/\sqrt{N})^T)$, we can finish the proof by showing that $Hess_{\zeta}(g)$ is positive definite. Because $A\alpha=(0,\cdots,0,1)^T$, we get $\alpha^T=(0,\cdots,0,1)A$, which implies that we can partition $A$ into two blocks with $A^T=\left[B^{N\times(N-1)}, \alpha\right]$. By direct calculation, we get $\nabla_{\zeta}g=B^TK$ and $Hess_{\zeta}(g)=B^T\Lambda B$. Next we prove that $B^T\Lambda B$ is positive definite. Assume $x^TB^T\Lambda Bx=0$, where $x$ is a nonzero $(N-1)\times1$ vector.
From Lemma \ref{Lemma-Lambda-semi-positive}, $Bx$ lies in the kernel $\{tr:t\in\mathds{R}\}$ of $\Lambda$, so $Bx=cr$ for some $c\in\mathds{R}$; moreover $c\neq0$, since $B$ has full column rank and $x\neq0$. On the other hand, from \begin{equation*} \begin{bmatrix} I_{N-1}& 0\,\,\\ 0 & 1\,\, \end{bmatrix}=AA^T= \begin{bmatrix} B^T \\ \alpha^T \end{bmatrix} \big[B,\alpha\big]= \begin{bmatrix} B^TB & B^T\alpha \\ \alpha^TB & \alpha^T\alpha \end{bmatrix} \end{equation*} we know $\alpha^TB=0$. Then $0=\alpha^TBx=c\alpha^Tr=c/\sqrt{N}$ (recall $\sum_ir_i=1$), which is a contradiction. Hence $x^TB^T\Lambda Bx=0$ has no nonzero solution. Note that $B^T\Lambda B$ is positive semi-definite due to Lemma \ref{Lemma-Lambda-semi-positive}, thus $B^T\Lambda B$ is positive definite. \end{proof} \noindent \textbf{Acknowledgements:} The first two authors want to thank Bennett Chow for reminding us that (\ref{Def-norm-Yamabe-Flow}) was first introduced in Glickenstein's thesis \cite{G0}. The first author would like to thank Professors Feng Luo, Jie Qing and Xiaochun Rong for many helpful conversations. The first author is partially supported by the NSFC of China (No.~11501027). The second author is supported by the Fundamental Research Funds for the Central Universities (No.~2017QNA3001) and NSFC (No.~11701507).
\section{Introduction}\label{Sec:Intro} Introducing new and surprising concepts to students is a good way to get them excited about mathematics. One of these concepts is non-Euclidean geometry. Since we live on a globe, spherical geometry is quite intuitive to many, but it also offers a couple of surprises because most people have only encountered Euclidean geometry in school. An even more interesting topic is the hyperbolic geometry of surfaces with constant negative curvature. Both of these concepts can be introduced using polygon tilings. One way to approximate a sphere is to start with a pentagon and surround it with five hexagons. When continued, this pattern creates the traditional football approximation of the sphere (truncated icosahedron), as shown in Figure \ref{Fig:HyperbolicFootball}. To create an approximation of a hyperbolic plane, where every point is a saddle point, we can start with a heptagon instead of a pentagon and surround it with seven hexagons. This `hyperbolic football' model has the advantage that, while the model is curved, each polygon is flat and so you can draw straight lines on it. For more details see \cite{Henderson, Sottile}. In the model introduced by Keith Henderson, the polygons are printed on paper, cut, and then taped together. I really liked the idea and tested the model in a couple of hands-on outreach events, but noticed that many students struggled to build a sufficiently large model during the time that was reserved for the activity. \begin{figure}[H] \centering \begin{minipage}[b]{0.9\textwidth} \includegraphics[width=\textwidth]{HyperbolicFootball} \end{minipage} \caption{`Football' tiling models of the hyperbolic plane, Euclidean plane, and sphere using Curvagons.}\label{Fig:HyperbolicFootball} \label{Fig:Tiling} \end{figure} \noindent Inspired by the paper version of the hyperbolic football, I wanted to create building blocks that could be used to quickly create surfaces with negative curvature.
The building blocks should be flexible, allowing the surface to bend naturally, and the connection mechanism should be symmetric since half of the polygons have an odd number of sides. The result is Curvagons: flexible regular polygon tiles (all angles and edges identical) that are interlocked together to create a supple surface that can be bent and twisted smoothly. These tiles can be made with a cutting machine from different materials depending on the intended use. Tiles made from EVA (ethylene-vinyl acetate) craft foam can be used in a classroom to build everything from hyperbolic surfaces to dinosaurs. You can also use narrow tape to create straight lines, which allows further exploration of hyperbolic geometry, as shown in Figure \ref{Fig:curvatures}. Tear-resistant paper and paper-like materials, e.g. leather paper, can be used to create durable models that can be drawn on with a normal pencil, as shown in Figure \ref{Fig:Triangles}. \section{Mathematics with Curvagons} \subsection{Non-Euclidean Geometry} \begin{wrapfigure}{r}{5.1cm} \includegraphics[width=5.1cm]{TriangleApproximations} \caption{Tilings made from SnapPap Curvagons.} \label{Fig:Triangles} \end{wrapfigure} Consider six equilateral triangles meeting at a vertex. Since the angles of a triangle add up to $180\degree$ in Euclidean geometry, the internal angles of an equilateral triangle are $60\degree$, and six of them add up to $360\degree$ (or equivalently $2\pi$ radians), creating a flat surface, as shown in Figure \ref{Fig:Triangles}. If you instead place five triangles at a vertex, the surface will curve towards itself and, if continued, close up, creating a crude approximation of a sphere called the icosahedron. The amount by which the sum of the angles at a vertex falls short of $360\degree$ is called the angle defect. For the icosahedron, this shortfall is $60\degree$ since one triangle is missing.
We can also approximate the hyperbolic plane by putting seven equilateral triangles at a vertex, creating a $\{3,7\}$ polyhedral model. This model of the hyperbolic plane is rather wrinkly due to the $-60\degree$ angle defect, which we from now on call a $60\degree$ angle excess. The hyperbolic plane looks locally like the Euclidean plane, meaning that we can get smoother approximations of it using tessellations where the angle excess is small. One of these models is the `hyperbolic football' tessellation described in the Introduction, where the angle excess at every vertex is only $8\sfrac{4}{7}\degree$. Notice that while the plane is flat, the sphere and the hyperbolic plane curve. Looking at the model of the hyperbolic plane, we see that it creates a saddle shape, and we can find orthogonal directions of largest curving. One of these curves looks like a hill while the other one looks like a valley, as shown in Figure \ref{Fig:curvatures}. To distinguish hills from valleys we can consider one of them as having positive curvature and the other\linebreak \vspace{-4mm} \begin{figure}[H] \centering \begin{minipage}[b]{0.9\textwidth} \includegraphics[width=\textwidth]{Curvature} \end{minipage} \caption{One of the largest curves on a saddle shape looks like a hill (red) while the other one looks like a valley (blue). For a sphere both of the curves are either hills or valleys.} \label{Fig:curvatures} \end{figure} \noindent one as having negative curvature. We call these maximum and minimum values of curvature the principal curvatures. If we do the same for the sphere, we notice that both of the lines curve in the same direction and are the same size; therefore the maximum and minimum curvatures are the same. The Gaussian curvature, which describes how a surface curves at a point, is given as the product of the two principal curvatures. So the hyperbolic plane has negative Gaussian curvature while spheres have positive Gaussian curvature.
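The defect and excess values quoted above are easy to check numerically. The following sketch (illustrative only; the function names are my own) computes the angle defect at a vertex from the list of regular polygons meeting there:

```python
# Interior angle of a regular n-gon, in degrees.
def interior_angle(n):
    return (n - 2) * 180.0 / n

# Angle defect at a vertex surrounded by the given regular polygons;
# a negative defect is an angle excess.
def defect(polygons):
    return 360.0 - sum(interior_angle(n) for n in polygons)

print(defect([3] * 6))   # flat plane: 0
print(defect([3] * 5))   # icosahedron vertex: 60 degrees of defect
print(defect([3] * 7))   # {3,7} model: -60, i.e. 60 degrees of excess
print(defect([6, 6, 7])) # hyperbolic football: -60/7, i.e. 8 4/7 degrees of excess
```

The small $8\sfrac{4}{7}\degree$ excess per vertex is exactly why the hyperbolic football is so much smoother than the $\{3,7\}$ model.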
The curvature of a circle is simply given by $1/R$, where $R$ is the radius of the circle, and so the Gaussian curvature of a sphere of radius $R$ is $1/R^2$. This means that larger spheres have smaller curvature, as we would expect. Similarly, the milder the saddle shape of the hyperbolic plane, the closer the curvature is to zero. \begin{wrapfigure}{l}{6cm} \includegraphics[width=6cm]{Lines} \caption{Diverging parallels on an EVA foam Curvagon model.} \label{Fig:Lines} \end{wrapfigure} Many of the fundamental truths you learn in school are not true when you leave the Euclidean plane. For example, on a hyperbolic surface the angles of a triangle add up to less than $180\degree$, a line has infinitely many parallel lines through a given point, and the circumference of a circle is more than $2\pi R$. Also, even though a hyperbolic surface is larger than the flat plane, you cannot draw arbitrarily large triangles on it. This might seem unintuitive at first but becomes clear if you study how parallel lines behave on hyperbolic surfaces. You can draw a straight line on an EVA foam `hyperbolic football' model by gently straightening it along a line with a ruler and tracing the line using a narrow tape. You can then straighten the model along another line that does not cross the original line. You will notice that the lines are closest to each other at one point but diverge in both directions, as shown in Figure \ref{Fig:Lines}. If you start drawing a very large triangle, you will notice that this is not possible because the lines never meet. They might seem to converge at first but, after reaching a certain point where they are closest to each other, they start diverging again. The largest triangle you can draw on a hyperbolic plane is called the ideal triangle, and the sum of its angles is zero. Notice that even though the ideal triangle has finite area, its vertices stretch to infinity, so drawing one would be rather challenging.
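The impossibility of arbitrarily large hyperbolic triangles can be made quantitative: on a surface of constant curvature $-1$, the Gauss-Bonnet theorem (discussed in the next subsection) gives a geodesic triangle the area $\pi-(\alpha+\beta+\gamma)$, so the ideal triangle attains the maximal area $\pi$. A small sketch, with a function name of my own choosing:

```python
import math

# On a hyperbolic plane of constant curvature -1, the area of a geodesic
# triangle with angles a, b, c is its angle deficit: pi - (a + b + c).
def hyperbolic_triangle_area(a, b, c):
    s = a + b + c
    if not 0 <= s < math.pi:
        raise ValueError("angle sum must lie in [0, pi) on the hyperbolic plane")
    return math.pi - s

# An equilateral hyperbolic triangle with three 30-degree angles:
print(hyperbolic_triangle_area(math.pi / 6, math.pi / 6, math.pi / 6))  # pi/2

# The ideal triangle (all angles zero) attains the maximal area pi:
print(hyperbolic_triangle_area(0, 0, 0))  # pi
```

No triangle, however its vertices are placed, can exceed the ideal triangle's area $\pi$, which matches the observation that very large triangles simply cannot be drawn.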
For other models introducing hyperbolic geometry see \cite{Kekkonen}, and for a more in-depth introduction to curvature and hyperbolic geometry see e.g. \cite{Taimina1, Taimina2}. You can find other hyperbolic geometry activities for the hyperbolic football in \cite{Sottile}. \subsection{Gauss-Bonnet Theorem} \begin{wrapfigure}{r}{5cm} \includegraphics[width=5cm]{Torus1} \caption{A torus has regions with positive (red), negative (blue) and zero curvature (purple).} \label{Fig:TorusCurvature} \end{wrapfigure} Spheres and hyperbolic surfaces have the same Gaussian curvature everywhere, which is why we say that they have constant curvature. Since they look the same at every point, they can be easily approximated using simple tessellations as described above. But how about less regular shapes? We start by considering a torus. If you put your finger on the outer edge of a torus, you will notice that the surface curves away from your hand, much like a sphere does, as shown in Figure \ref{Fig:TorusCurvature}. These points on the outer edge have positive Gaussian curvature. If you put your finger on the inner circle of the torus, the surface curves away from your hand in one direction but in the other direction it curves towards it, creating a saddle shape. This region has negative Gaussian curvature. If you place the torus on a table, you can see that on the very top and bottom, between the regions of positive and negative curvature, there is a circle that lies perfectly flat and hence has zero curvature. So, to approximate a torus we need to create a tessellation with angle excess in the middle and angle defect on the outer edge. But how much excess and defect should we have? \begin{wrapfigure}{r}{5.7cm} \includegraphics[width=5.7cm]{Torus} \caption{An approximation of a torus.} \label{Fig:TorusTesselation} \end{wrapfigure} Here we can consider several more complex mathematical concepts with advanced students.
One of these is the Euler characteristic, which describes the shape or structure of a topological space regardless of the way it is bent. We first consider shapes without a boundary, like the torus and the sphere. The Euler characteristic is then given by $\chi=\text{vertices}-\text{edges}+\text{faces}$, which was later proven to equal $\chi=2-2\cdot\text{genus}$. The genus is the number of doughnut-like holes on the surface. Thus a sphere has genus zero, a torus (and a coffee cup) has genus one, and a double torus has genus two. Since a torus has genus one, its Euler characteristic is zero. The approximation of a torus in Figure \ref{Fig:TorusTesselation} consists of $2\cdot 9$ triangles, $3\cdot 9$ squares and $2\cdot 9$ heptagons, and it has $9\cdot 9$ vertices, $16\cdot 9$ edges and $7\cdot 9$ faces, which means that its Euler characteristic given by the first definition is indeed zero. We can also compute the angle defects and excesses of all the vertices of the torus in Figure \ref{Fig:TorusTesselation} and notice that they add up to zero. This means that the angle defect on the outer surface is as large as the angle excess in the middle. This is an example of Descartes' theorem on the total defect of a polyhedron, which states that the total defect is $2\pi\chi$. Descartes' theorem is a special case of the Gauss-Bonnet theorem, where the curvature is concentrated at discrete points, the vertices. The Gauss-Bonnet theorem is a fundamental result that bridges the gap between differential geometry and topology by connecting the geometrical concept of curvature with the topological concept of the Euler characteristic. It states that for closed surfaces the total curvature equals $2\pi\chi$. We can conclude that since the Euler characteristic of a torus is zero, its total curvature is also zero.
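Spelled out for the tessellated torus of Figure \ref{Fig:TorusTesselation}, the two definitions of the Euler characteristic indeed agree:
\begin{equation*}
\chi=9\cdot 9-16\cdot 9+7\cdot 9=(9-16+7)\cdot 9=0=2-2\cdot 1 .
\end{equation*}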
In general the Gauss-Bonnet theorem means that no matter how you bend a surface (without creating holes) its total curvature will stay the same, since its Euler characteristic, being a topological invariant, does not change even though the Gaussian curvature at some points does. For example, if you push a dimple into a sphere, the total curvature of the surface will stay $4\pi$ even though the Gaussian curvature changes around the dimple. Notice that Curvagons are made of solid flat pieces, so the curvature is indeed concentrated at the vertices. Another way to approximate curved shapes is to create pieces with holes, like Curvahedra \cite{Harriss}, where the curvature is spread over the open region, or specially designed pieces where the curvature is pushed into meandering edges, as in Zippergons \cite{Delp}. We can also consider hyperbolic triangles to see what extra information the Gauss-Bonnet theorem provides. Calculating the Euler characteristic is easy since all triangles have three vertices, three edges, and one face, that is, $\chi=3-3+1=1$. Notice that triangles have a boundary and so they are quite different from the torus considered above. Because of the boundary, the Gauss-Bonnet theorem becomes a bit more \linebreak \vspace{-3mm} \begin{figure}[H] \centering \begin{minipage}[b]{0.71\textwidth} \includegraphics[width=\textwidth]{TurningAngleRow} \end{minipage} \caption{Turning angles in red and the interior angles in blue. The total turning is the sum of the turning angles. For a flat triangle, this is $360\degree$. For a hyperbolic triangle, this is strictly more than $360\degree$. } \label{Fig:TotalTurning} \end{figure} \begin{wrapfigure}{l}{5.23cm} \includegraphics[width=5.23cm]{HyperbolicTriangle} \caption{A hyperbolic triangle on a Curvagon paper model.} \label{Fig:HyperbolicTriangles} \end{wrapfigure} \noindent complex, stating that $\text{total curvature within a triangle}=2\pi\chi-\text{total turning}=2\pi-\text{total turning}$.
Total turning describes how much a person following the edge of a triangle would have to turn in order to return to the point they started from, as shown in Figure \ref{Fig:TotalTurning}. It is given by the sum of the turning angles, where $\text{turning angle}=\pi-\text{interior angle}$. Hence the total turning for a triangle can be written as $3\pi-\angle\alpha-\angle\beta-\angle\gamma$, where $\angle\alpha,\angle\beta,\angle\gamma$ are the interior angles of the triangle, and so we can write $\text{total curvature}=2\pi-(3\pi-\angle\alpha-\angle\beta-\angle\gamma)=\angle\alpha+\angle\beta+\angle\gamma-\pi$. We can now use the knowledge that the Euclidean plane has zero curvature to deduce the fact that the angles of a triangle add up to $\pi$ on a flat surface. For hyperbolic triangles the total curvature within a triangle equals $\angle\alpha+\angle\beta+\angle\gamma-\pi$, the deviation of the angle sum from the Euclidean triangle angle sum $\pi$. Since the curvature of the tiling model is concentrated at the vertices, the deviation should equal $-(\text{number of vertices inside the triangle})\cdot(\text{angle excess})$. You can test this by drawing several triangles of different sizes on your model and calculating the angle sums. Figure \ref{Fig:HyperbolicTriangles} shows a triangle on a hyperbolic football model. It has eight vertices inside it, so the curvature within the triangle is $-8\cdot 8\sfrac{4}{7}\degree\approx -68.57\degree$. The interior angles measure $43\degree$, $32\degree$ and $37\degree$, and hence the deviation of the angle sum from $180\degree$ is $-68\degree$. Notice that the largest triangle you could in theory draw on an infinitely large hyperbolic football model would have $21$ vertices inside it (since the zero angles of an ideal triangle give a deviation of $-180\degree$ and $180\degree/8\sfrac{4}{7}\degree=21$) and all the angles would be zero. \subsection{Platonic and Archimedean Solids} Platonic solids are convex regular polyhedra, which means that all their faces are identical regular polygons and that the same number of faces meet at each vertex.
You can easily check that there exist only five Platonic solids. Notice that a vertex needs at least $3$ faces and an angle defect. If the angle defect is zero the regular tiling will fill the Euclidean plane. If there is angle excess you will get a saddle shape. Next we relax the above rules and allow the use of different regular polygons to create convex polyhedra called Archimedean solids, as shown in Figure \ref{Fig:Solids}. All the vertices are still expected \linebreak \vspace{-1mm} \begin{figure}[H] \centering \begin{minipage}[b]{0.98\textwidth} \includegraphics[width=\textwidth]{Solids2} \end{minipage} \caption{The $13$ Archimedean solids built from EVA foam Curvagons.} \label{Fig:Solids} \end{figure} \begin{wrapfigure}{l}{6cm} \includegraphics[width=6cm]{NotSolids} \caption{A prism (left), elongated square gyrobicupola (middle), and antiprism (right) are only locally symmetric.} \label{Fig:Prisms} \end{wrapfigure} \noindent to be identical and the shapes have to be globally symmetric. The symmetry requirement means that prisms, antiprisms and the elongated square gyrobicupola are not considered to be Archimedean solids, as shown in Figure \ref{Fig:Prisms}. There are 13 Archimedean solids (excluding the five Platonic solids) and they can be constructed from the Platonic solids by truncation, that is, by cutting away corners. Notice that all of these convex polyhedra can be considered to be approximations of the sphere. They have no holes in them, so their genus is zero and their Euler characteristic $\chi$ is two. Using Descartes' theorem we see that the total angle defect of each of the solids is $2\pi\chi=4\pi$ or $720\degree$. This can be quite surprising since the smallest solid (the tetrahedron) has only four vertices while the biggest solid (the truncated icosidodecahedron) has $120$. The result is explained by the fact that the more vertices a solid has, the smaller the angle defects become.
The angle defects of a tetrahedron are $180\degree$ while the angle defects of a truncated icosidodecahedron are only $6\degree$. Since all the vertices of Platonic and Archimedean solids are identical, the total defect is given by $\text{angle defect}\cdot\text{number of vertices}$, which means that $\text{number of vertices}=720\degree/\text{angle defect}$. For example, $720\degree/180\degree=4$ and $720\degree/6\degree=120$, matching the vertex counts above. \section{Polygon Sculpting} Approximating shapes and understanding curvature is important in many applications. For example, polygon, and especially triangle, meshes are used in 3D computer graphics. Also, to create a well-fitting dress you need to know how to cut patterns to create positive and negative curvature. Since Curvagons are flexible and can be woven together quickly, they can be used for testing new ideas and for polygon sculpting. As a simple exercise, students can consider what kind of packaging and capsules can be created using different polygons, and which of them can be opened in a way that would result in near zero-waste cutting patterns. We can also consider how to approximate more complex shapes using Curvagons. Regular polygon tessellations have long been used to create crocheted blankets, but these so-called granny squares (or, more generally, polygons) have also been used to make stuffed animals. We can use Curvagons instead of crocheted \begin{figure}[H] \centering \begin{minipage}[b]{0.86\textwidth} \includegraphics[width=\textwidth]{Dino} \end{minipage} \caption{A polygon brontosaurus made from EVA foam Curvagons.} \label{Fig:Bronto} \end{figure} \begin{wrapfigure}{r}{6.7cm} \includegraphics[width=6.7cm]{Shirt} \caption{A polygon approximation of a torso from EVA foam Curvagons.} \label{Fig:Torso} \end{wrapfigure} \noindent polygons to test different animal models. See Figure \ref{Fig:Bronto} for a Curvagon brontosaurus. Building different animals allows students to get creative while still forcing them to think about how different areas curve and how to mimic this using polygons.
Polygonal animals can also be used as an introduction to computer graphics, where smooth shapes are approximated by triangle meshes. How would you need to divide the regular polygons of your model to create a good piecewise flat approximation? Weaving has long been used to create clothing and other utility articles. One example is the Finnish \textit{virsu}, a type of bast shoe traditionally woven from strips of birch bark. The weave creates a square tiling where corners can be made by placing three squares around a vertex and a saddle shape (for the ankle) by using five squares. One can use small square Curvagons to create different versions of the \textit{virsu}, as shown in Figure \ref{Fig:Shoes}. These models can also be opened in several ways to find cutting patterns for \textit{virsu}-type slippers. Curvagon slippers could be made from leather or felt. Since the individual pieces are small, one can use discarded scrap materials which would otherwise go to waste. These types of modular techniques, where small pieces are used to create larger surfaces, have gained popularity in recent years through repurposing waste materials from fashion houses and tanneries to create sustainable fashion. Interlocking small square Curvagons creates an interesting texture on the smooth side of the model. This simple weave could be used to create pillowcases, rugs, bags and many other everyday items from different leftover materials. We can also take the idea of modular fashion a bit further and use Curvagons to study the different types of shapes needed for creating clothes in general. A well-fitting shirt has areas of positive, negative and zero curvature, as shown in Figure \ref{Fig:Torso}. A tight pencil skirt takes much less fabric but requires more modelling than a 50s-style skirt, which can be laid out to create a full circle and so could be constructed using a Euclidean tiling.
Inspiration can also flow from fashion back to mathematics, as illustrated by Zippergons \cite{Delp}. Zippergons are flexible patterned pieces that are designed to fit a given shape; they were inspired by discussions between the mathematician Bill Thurston and Dai Fujiwara, the director of design for the Issey Miyake fashion house, on how designing well-fitting clothes is related to mathematics. \begin{figure}[H] \centering \begin{minipage}[b]{0.93\textwidth} \includegraphics[width=\textwidth]{ShoesNew1} \end{minipage} \caption{A traditionally woven \textit{virsu} and the same design using Curvagons (left), a more slipper-like design showing the interesting pattern on the smooth side (middle), and a \textit{virsu} inspired slipper (right).} \label{Fig:Shoes} \end{figure} \newpage \section{Summary and Conclusions} This paper demonstrates how Curvagon tiles can be used to introduce and explore several mathematical concepts. Curvagons are flexible regular polygon building blocks that can be made from different materials, and they can be assembled into infinitely many possible shapes. They can be used to introduce mathematical concepts suitable for different educational levels, ranging from elementary school to university. Younger students can experiment with angles, different ways to tile the plane, and the Platonic and Archimedean solids. Older students can be introduced to the concept of non-Euclidean geometry and discover that many geometry facts they have learned in school are actually true only in Euclidean geometry. They can also test that all the Platonic and Archimedean solids (or any polyhedron that is homeomorphic to the sphere) have the same total angle defect. Hyperbolic geometry is also interesting for university-level students, and Curvagons can even be used to introduce fundamental results like the Gauss-Bonnet theorem through its special case, Descartes' theorem on the total angle defect.
Curvagons can also be used to construct beautiful mathematical objects like triply periodic minimal surfaces, such as Schoen's gyroid and the Schwarz D surface, as shown in Figure \ref{Fig:MinimalSurfaces}. In conclusion, Curvagons can be used to construct just about any shape you can imagine. \begin{figure}[H] \centering \begin{minipage}[b]{0.84\textwidth} \includegraphics[width=\textwidth]{MinimalSurfaces} \end{minipage} \caption{A Curvagon approximation of Schoen's gyroid (left) and the Schwarz D surface (right), which are examples of triply periodic minimal surfaces.} \label{Fig:MinimalSurfaces} \end{figure} {\setlength{\baselineskip}{12pt} \raggedright
\section{Transverse momentum spectra of open charm mesons at LHC} Recently the ALICE, LHCb and ATLAS collaborations have measured inclusive transverse momentum spectra of open charm mesons in proton-proton collisions at $\sqrt{s}=7$ TeV \cite{ALICE,LHCb,ATLAS}. These measurements are very interesting from the theoretical point of view because of the collision energy never achieved before and the unique rapidity acceptance of the detectors. In particular, the results from the forward rapidity region $2 < y < 4$ obtained by LHCb, as well as the ATLAS data from the wide pseudorapidity range $|\eta| < 2.1$, can improve our understanding of the pQCD production of heavy quarks. The inclusive production of heavy quark/antiquark pairs can be calculated in the framework of the $k_t$-factorization \cite{CCH91}. In this approach transverse momenta of the initial partons are included and the emission of gluons is encoded in the so-called unintegrated gluon (more generally, parton) distributions (UGDFs). In the leading-order (LO) approximation within the $k_t$-factorization approach the differential cross section for $Q \bar Q$ production can be written as: \begin{eqnarray} \frac{d \sigma}{d y_1 d y_2 d^2 p_{1t} d^2 p_{2t}} = \sum_{i,j} \; \int \frac{d^2 k_{1,t}}{\pi} \frac{d^2 k_{2,t}}{\pi} \frac{1}{16 \pi^2 (x_1 x_2 s)^2} \; \overline{ | {\cal M}_{ij} |^2}\\ \nonumber \times \;\; \delta^{2} \left( \vec{k}_{1,t} + \vec{k}_{2,t} - \vec{p}_{1,t} - \vec{p}_{2,t} \right) \; {\cal F}_i(x_1,k_{1,t}^2) \; {\cal F}_j(x_2,k_{2,t}^2) \; , \end{eqnarray} where ${\cal F}_i(x_1,k_{1,t}^2)$ and ${\cal F}_j(x_2,k_{2,t}^2)$ are the unintegrated gluon (parton) distribution functions. There are two types of LO $2 \to 2$ subprocesses which contribute to heavy quark production, $gg \to Q \bar Q$ and $q \bar q \to Q \bar Q$. The first mechanism dominates at large energies and the second one near the threshold. Only the $g g \to Q \bar Q$ mechanism is included here.
We use off-shell matrix elements corresponding to the off-shell kinematics, so the hard amplitude depends on the transverse momenta (virtualities) of the initial gluons. In the case of charm production at very high energies, especially at forward rapidities, rather small $x$-values become relevant. Given the wide range of $x$ necessary for the calculation, we follow the Kimber-Martin-Ryskin (KMR) \cite{KMR01} prescription for unintegrated gluon distributions. More details of the theoretical model can be found in Ref.~\cite{LMS09}. The hadronization of heavy quarks is usually done with the help of fragmentation functions. The inclusive distributions of hadrons can be obtained through a convolution of inclusive distributions of heavy quarks/antiquarks and $Q \to h$ fragmentation functions: \begin{equation} \frac{d \sigma(y_h,p_{t,h})}{d y_h d^2 p_{t,h}} \approx \int_0^1 \frac{dz}{z^2} D_{Q \to h}(z) \frac{d \sigma_{g g \to Q}^{A}(y_Q,p_{t,Q})}{d y_Q d^2 p_{t,Q}} \Bigg\vert_{y_Q = y_h \atop p_{t,Q} = p_{t,h}/z} \; , \label{Q_to_h} \end{equation} where $p_{t,Q} = \frac{p_{t,h}}{z}$ and $z$ is the fraction of the longitudinal momentum of the heavy quark carried by the meson. We make the approximation that $y_{Q}$ is unchanged in the fragmentation process. In Fig.~\ref{fig:pt-alice-D-1} we present our predictions for differential distributions in transverse momentum of open charm mesons together with the ALICE (left panel) and LHCb (right panel) experimental data. The uncertainties are obtained by changing the charm quark mass $m_c = 1.5\pm 0.3$ GeV and by varying the renormalization and factorization scales $\mu^2=\zeta m_{t}^2$, where $\zeta \in (0.5;2)$. The gray shaded bands represent both sources of uncertainty summed in quadrature. Using the KMR model of UGDFs we get a very good description of the experimental data, in both the ALICE and LHCb cases.
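As a side remark (an idealized limit used only for illustration, not in the actual fits), the structure of Eq.~(\ref{Q_to_h}) is easiest to see for a delta-like fragmentation function $D_{Q \to h}(z) = \delta(z - z_0)$, for which the convolution collapses to
\begin{equation*}
\frac{d \sigma(y_h,p_{t,h})}{d y_h d^2 p_{t,h}} \approx \frac{1}{z_0^2} \,
\frac{d \sigma_{g g \to Q}^{A}(y_h,p_{t,h}/z_0)}{d y_Q d^2 p_{t,Q}} \; ,
\end{equation*}
i.e. the meson inherits the rapidity of the quark and the fraction $z_0$ of its transverse momentum.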
Here, we also compare the central values of our LO $k_t$-factorization calculations (solid line) with NLO parton model (dashed line) and FONLL \cite{FONLL} predictions (long-dashed line). All three models are consistent and give very similar results. The only difference appears at very small meson $p_{t}$'s (below $2$ GeV), where transverse momenta of the initial gluons play a very important role. In Fig.~\ref{fig:pt-alice-D-2} we show transverse momentum (left panel) and pseudorapidity (right panel) spectra of $D^{\pm}$ mesons measured by ATLAS. The representation of theoretical results and uncertainties is the same as in Fig.~\ref{fig:pt-alice-D-1}. In contrast to the ALICE midrapidity measurements, here the experimental data points can be described only by the upper limit of our theoretical predictions. Therefore one can conclude that when covering a wider range of (pseudo)rapidities (allowing larger rapidity differences between the produced quark and antiquark) the theoretical description of the measured data becomes somewhat worse. \begin{figure}[!h] \begin{center} \begin{minipage}{0.47\textwidth} \centerline{\includegraphics[width=1.0\textwidth]{dsig_dpt_alice_D+_uncert.eps}} \end{minipage} \hspace{0.5cm} \begin{minipage}{0.47\textwidth} \centerline{\includegraphics[width=1.0\textwidth]{dsig_dpt_lhcb_D+S_uncert.eps}} \end{minipage} \caption{ \small Transverse momentum distributions of $D^{\pm}$ and $D^{\pm}_{S}$ mesons together with the ALICE (left) and LHCb (right) data at $\sqrt{s} = 7$ TeV.
Predictions of the LO $k_t$-factorization together with uncertainties are compared with NLO parton model and FONLL calculations.} \label{fig:pt-alice-D-1} \end{center} \end{figure} \begin{figure}[!h] \begin{minipage}{0.47\textwidth} \centerline{\includegraphics[width=1.0\textwidth]{dsig_dpt_atlas_D+_compar.eps}} \end{minipage} \hspace{0.5cm} \begin{minipage}{0.47\textwidth} \centerline{\includegraphics[width=1.0\textwidth]{dsig_deta_atlas_D+_compar.eps}} \end{minipage} \caption{ \small Transverse momentum (left) and pseudorapidity (right) distributions of $D^{\pm}$ mesons together with the ATLAS experimental data at $\sqrt{s} = 7$ TeV. Predictions of the LO $k_t$-factorization together with uncertainties are compared with NLO parton model and FONLL calculations.} \label{fig:pt-alice-D-2} \end{figure} \section{Double charm production via Double Parton Scattering} The mechanism of double-parton scattering (DPS) production of two pairs of heavy quark and heavy antiquark is shown in Fig.~\ref{fig:diagram} together with the corresponding mechanism of single-scattering production. Double-parton scattering was recognized and discussed already in the seventies and eighties. The activity stopped when it was realized that its contribution at the center-of-mass energies available then was negligible. Nowadays, the theory of double-parton scattering is developing quickly (see e.g. \cite{S2003,KS2004,GS2010}), partly driven by new results from the LHC. The double-parton scattering formalism in its simplest form assumes two independent single-parton scatterings. Then, in a simple probabilistic picture, the cross section for double-parton scattering can be written as: \begin{equation} \sigma^{DPS}(p p \to c \bar c c \bar c X) = \frac{1}{2 \sigma_{eff}} \sigma^{SPS}(p p \to c \bar c X_1) \cdot \sigma^{SPS}(p p \to c \bar c X_2). \label{basic_formula} \end{equation} This formula assumes that the two subprocesses are not correlated and do not interfere.
At low energies one has to include parton momentum conservation, i.e. the extra constraints $x_1+x_3 < 1$ and $x_2+x_4 < 1$, where $x_1$ and $x_3$ are longitudinal momentum fractions of gluons emitted from one proton and $x_2$ and $x_4$ their counterparts for gluons emitted from the second proton. The ``second'' emission must take into account that some momentum was already used up in the ``first'' parton collision. This effect is important at large quark or antiquark rapidities. Experimental data \cite{Tevatron} provide an estimate of $\sigma_{eff}$ in the denominator of formula (\ref{basic_formula}). In our analysis we take $\sigma_{eff}$ = 15 mb. \begin{figure}[!h] \begin{center} \includegraphics[width=4cm]{diff7.eps} \includegraphics[width=4cm]{diff1.eps} \end{center} \caption{ \small SPS (left) and DPS (right) mechanisms of $(c \bar c) (c \bar c)$ production. } \label{fig:diagram} \end{figure} A more general formula for the cross section can be written formally in terms of double-parton distributions (dPDFs), e.g. $F_{gg}$, $F_{qq}$, etc. In the case of heavy quark production at high energies: \begin{eqnarray} d \sigma^{DPS} &=& \frac{1}{2 \sigma_{eff}} F_{gg}(x_1,x_3,\mu_1^2,\mu_2^2) F_{gg}(x_2,x_4,\mu_1^2,\mu_2^2) \times \nonumber \\ &&d \sigma_{gg \to c \bar c}(x_1,x_2,\mu_1^2) d \sigma_{gg \to c \bar c}(x_3,x_4,\mu_2^2) \; dx_1 dx_2 dx_3 dx_4 \, . \label{cs_via_doublePDFs} \end{eqnarray} It is physically motivated to write the dPDFs in impact parameter space as $F_{gg}(x_1,x_2,b) = g(x_1) g(x_2) F(b)$, where the $g$ are the conventional parton distributions and $F(b)$ is an overlap of the matter distributions in the transverse plane, with $b$ the distance between the two gluons \cite{CT1999}. The effective cross section in (\ref{basic_formula}) is then given by $1/\sigma_{eff} = \int d^2b \, F^2(b)$ and in this approximation is energy independent.
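With $\sigma_{eff} = 15$ mb fixed, formula (\ref{basic_formula}) also gives a simple back-of-the-envelope estimate (a rough reading of the pocket formula, not a full calculation) of when double scattering becomes competitive: the two contributions are equal once
\begin{equation*}
\frac{\left[\sigma^{SPS}(p p \to c \bar c X)\right]^2}{2 \sigma_{eff}} = \sigma^{SPS}(p p \to c \bar c X)
\quad \Longleftrightarrow \quad
\sigma^{SPS}(p p \to c \bar c X) = 2 \sigma_{eff} = 30 \ \mathrm{mb} ,
\end{equation*}
a value that the single-pair cross section reaches only at the highest, LHC-like energies.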
In the left panel of Fig.~\ref{fig:single_vs_double_LO} we compare cross sections for single $c \bar c$ pair production as well as for single-parton and double-parton scattering $c \bar c c \bar c$ production as a function of the proton-proton center-of-mass energy. At low energies the single $c \bar c$ pair production cross section is much larger. The cross section for SPS production of the $c \bar c c \bar c$ system \cite{SS2012} is more than two orders of magnitude smaller than that for single $c \bar c$ production. For reference we show the proton-proton total cross section as a function of energy as parametrized in Ref.~\cite{DL92}. At low energy the $c \bar c$ and $c \bar c c \bar c$ cross sections are much smaller than the total cross section. At higher energies the contributions approach the total cross section, which shows that the inclusion of unitarity effects and/or saturation of parton distributions may be necessary. At LHC energies the cross sections of the two terms become comparable. This is a new situation, in which DPS gives a huge contribution to inclusive charm production. \begin{figure}[!h] \begin{minipage}{0.47\textwidth} \centerline{\includegraphics[width=1.0\textwidth]{sig_tot_LO_v2.eps}} \end{minipage} \hspace{0.5cm} \begin{minipage}{0.47\textwidth} \centerline{\includegraphics[width=1.0\textwidth]{dsig_dydiff.eps}} \end{minipage} \caption{ \small Total LO cross section for single $c \bar c$ pair and SPS and DPS $c \bar c c \bar c$ production as a function of center-of-mass energy (left panel) and differential distribution in rapidity difference (right panel) between $c$ and $\bar{c}$ quarks at $\sqrt{s}$ = 7 TeV. The cross section for DPS should in addition be multiplied by a factor of 2 when all $c$ ($\bar c$) quarks are counted. We show in addition a parametrization of the total cross section in the left panel.
} \label{fig:single_vs_double_LO} \end{figure} In the right panel of Fig.~\ref{fig:single_vs_double_LO} we present the distribution in the difference of $c$ and $\bar c$ rapidities, $y_{diff} = y_c - y_{\bar c}$. We show both terms: when $c \bar c$ are emitted in the same parton scattering ($c_1\bar c_2$ or $c_3\bar c_4$) and when they are emitted from different parton scatterings ($c_1\bar c_4$ or $c_2\bar c_3$). In the latter case we observe a long tail at large rapidity difference as well as at large invariant masses of the $c \bar c$ pair. In particular, $c c$ (or $\bar c \bar c$) pairs should be predominantly produced from two different parton scatterings, which opens the possibility to study double scattering processes. A good signature of the $c \bar c c \bar c$ final state is the production, in one physical event, of two mesons both containing a $c$ quark or two mesons both containing a $\bar c$ antiquark ($D^0 D^0$ and/or ${\bar D}^0 {\bar D}^0$). A more detailed discussion of the DPS charm production can be found in our original paper, Ref.~\cite{LMS2012}. In the present analysis we have calculated the cross section in a simple collinear leading-order approach. A better approximation would be to include multiple gluon emissions. This can be done e.g. in soft gluon resummation or in the $k_t$-factorization approach. This will be discussed in detail elsewhere \cite{MS2012}.
\section{Introduction} One of Franco Tricerri's interests was in Hermitian manifolds. In \cite{T-V} various types of Hermitian structures are discussed and conditions on the Lee form are of paramount importance. That Hermitian structures are closely connected with harmonic morphisms is shown in \cite{B-W} and \cite{W-4d}. In this paper we study this connection for more general almost Hermitian manifolds. We obtain conditions involving the Lee form under which holomorphic maps between almost Hermitian manifolds are harmonic maps or morphisms. We show that the image of certain holomorphic maps from a cosymplectic manifold is cosymplectic if and only if the map is a harmonic morphism, generalizing a result of Watson \cite{Wat}. Finally, in Theorem \ref{theo-integral} we give conditions under which a harmonic morphism into a Hermitian manifold defines an integrable Hermitian structure on its domain. \section{Harmonic morphisms} For a smooth map $\phi:(M,g)\to(N,h)$ between Riemannian manifolds its {\it tension field} $\tau(\phi)$ is the trace of the second fundamental form $\nabla d\phi$ of $\phi$: \begin{equation}\label{equa-tau} \tau(\phi)=\sum_{j}\{\nabf{e_j}{d\phi(e_j)}-d\phi(\nabm{e_j}{e_j})\} \end{equation} where $\{e_j\}$ is a local orthonormal frame for $TM$, $\nabla^{\phi^{-1}TN}$ denotes the pull-back of the Levi-Civita connection $\nabla^N$ on $N$ to the pull-back bundle $\phi^{-1}TN\to M$ and $d\phi:TM\to\phi^{-1}TN$ is the pull-back of the differential of $\phi$. The map $\phi$ is said to be {\it harmonic} if its tension field vanishes i.e. $\tau(\phi)=0$. J. Eells and J. H. Sampson proved in \cite{E-S} that any holomorphic map between K\"ahler manifolds is harmonic and this was later generalized by A. Lichnerowicz in \cite{L}. For information on harmonic maps, see \cite{E-L-1}, \cite{E-L-2}, \cite{E-L-3} and the references therein. 
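As a simple consistency check (a standard observation rather than a new result), consider a real-valued map $\phi:(M,g)\to\Bbb R$. The pull-back bundle $\phi^{-1}T\Bbb R$ is trivial and its connection flat, so (\ref{equa-tau}) reduces to
\begin{equation*}
\tau(\phi)=\sum_{j}\{e_j(e_j(\phi))-(\nabm{e_j}{e_j})(\phi)\}=\Delta^M\phi ,
\end{equation*}
the Laplace-Beltrami operator applied to $\phi$. Harmonic maps to $\Bbb R$ are therefore exactly the harmonic functions, which is the starting point for the definition of harmonic morphisms below.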
A {\it harmonic morphism} is a smooth map $\phi:(M,g)\to(N,h)$ between Riemannian manifolds which pulls back germs of real-valued harmonic functions on $N$ to germs of harmonic functions on $M$. A smooth map $\phi:M\to N$ is called {\it horizontally (weakly) conformal} if for each $x\in M$ {\it either} \begin{enumerate} \item[(i)] the rank of the differential $d\phi_x$ is $0$, {\it or} \item[(ii)] for the orthogonal decomposition $T_xM={\cal H}_x\oplus{\cal V}_x$ with ${\cal V}_x=\ker\ {d\phi}_x$ the restriction $d\phi_x|_{{\cal H}_x}$ is a conformal linear map {\it onto} $T_{\phi(x)}N$. \end{enumerate} Points of type (i) are called {\it critical points} of $\phi$ and those of type (ii) {\it regular points}. The conformal factor $\lambda(x)$ is called the {\it dilation} of $\phi$ at $x$. Setting $\lambda=0$ at the critical points gives a continuous function $\lambda:M\to [0,\infty)$ which is smooth at regular points, but whose square $\lambda^2$ is smooth on the whole of $M$. Note that at a regular point $\phi$ is a submersion. A horizontally weakly conformal map is called {\it horizontally homothetic} if $d\phi(\text{grad}(\lambda^2))=0$. B. Fuglede showed in \cite{F-2} that a horizontally homothetic harmonic morphism has no critical points. The following characterization of harmonic morphisms is due to Fuglede and T. Ishihara, see \cite{F-1}, \cite{I}: {\it A smooth map $\phi$ is a harmonic morphism if and only if it is a horizontally weakly conformal harmonic map}. More geometrically we have the following result due to P. Baird and Eells, see \cite{B-E}: \begin{theorem}\label{theo-B-E} Let $\phi:(M,g)\to(N,h)$ be a non-constant horizontally weakly conformal map. Then \begin{enumerate} \item[(i)] if $N$ is a surface, i.e. 
of real dimension $2$, then $\phi$ is a harmonic morphism if and only if its fibres are minimal at regular points, \item[(ii)] if the real dimension of $N$ is greater than $2$ then any two of the following conditions imply the third: \begin{enumerate} \item[(a)] $\phi$ is a harmonic morphism, \item[(b)] the fibres of $\phi$ are minimal at regular points, \item[(c)] $\phi$ is horizontally homothetic. \end{enumerate} \end{enumerate} \end{theorem} Harmonic morphisms exhibit many properties which are ``dual'' to those of harmonic maps. For example, whereas harmonic maps exhibit conformal invariance in a $2$-dimensional domain (cf. \cite{E-S}, Proposition p.126), harmonic morphisms exhibit conformal invariance in a $2$-dimensional codomain: {\it If $\phi:(M,g)\to(N,h)$ is a harmonic morphism to a $2$-dimensional Riemannian manifold and $\psi:(N,h)\to(\tilde N,\tilde h)$ is a weakly conformal map to another $2$-dimensional Riemannian manifold, then the composition $\psi\circ\phi$ is a harmonic morphism}. In particular the concept of a {\it harmonic morphism to a Riemann surface} is well-defined. For information on harmonic morphisms see \cite{B-W} and \cite{W-Sendai}. \section{Almost Hermitian manifolds} Let $(M^m,g,J)$ be an almost Hermitian manifold i.e. a Riemannian manifold $(M,g)$ of even real dimension $2m$ together with an almost complex structure $J:TM\to TM$ which is isometric on each tangent space and satisfies $J^2=-I$. Let $T^{\Bbb C}M$ be the complexification of the tangent bundle. We then have an orthogonal decomposition $$T^{\Bbb C}M=T^{1,0}M\oplus T^{0,1}M$$ of $T^{\Bbb C}M$ into the $\pm i$-eigenspaces of $J$, respectively. 
Each vector $X\in T^{\Bbb C}M$ can be written as $X=X^{1,0}+X^{0,1}$ with $$X^{1,0}=\frac 12(X-iJX)\in T^{1,0}M\ \ \text{and}\ \ X^{0,1}=\frac 12(X+iJX)\in T^{0,1}M,$$ and locally one can always choose an orthonormal frame $\{e_1,\dots,e_m,Je_1,\dots,Je_m\}$ for $TM$ such that $$T^{1,0}M=\text{span}_{\Bbb C} \{Z_1=\frac{e_1-iJe_1}{\sqrt 2},\dots, Z_m=\frac{e_m-iJe_m}{\sqrt 2}\},$$ $$T^{0,1}M=\text{span}_{\Bbb C} \{\bar Z_1=\frac{e_1+iJe_1}{\sqrt 2},\dots, \bar Z_m=\frac{e_m+iJe_m}{\sqrt 2}\}.$$ The set $\{Z_k|\ k=1,\dots,m\}$ is called a local {\it Hermitian frame} on $M$. As for any other $(1,1)$-tensor the {\it divergence} of $J$ is given by \begin{eqnarray*}\delta J=\text{div}(J) &=&\sum_{k=1}^{m}(\nabm{e_k}J)(e_k)+(\nabm{Je_k}J)(Je_k)\\ &=&\sum_{k=1}^{m}(\nabm{\bar Z_k}J)(Z_k)+(\nabm{Z_k}J)(\bar Z_k). \end{eqnarray*} \begin{remark} Modulo a constant, the vector field $J\delta J$ is the dual to the Lee form, see \cite{T-V}. It is called the {\it Lee vector field}. \end{remark} Following Kot\=o \cite{K} and Gray \cite{Gr-2} with alternative terminology due to Salamon \cite{S} we call an almost Hermitian manifold $(M,g,J)$ \begin{enumerate} \item[(i)] {\it quasi-K\"ahler} or {\it $(1,2)$-symplectic} if $$(\nabm XJ)Y+(\nabm{JX}J)JY=0\ \ \text{for all $X,Y\in C^\infty(TM)$},\ \ \text{and}$$ \item[(ii)] {\it semi-K\"ahler} or {\it cosymplectic} if $\delta J=0$. \end{enumerate} Note that a $(1,2)$-symplectic manifold $(M,g,J)$ is automatically cosymplectic. It is an easy exercise to prove the following two well-known results: \begin{lemma}\label{lemm-(1,2)-symplectic} Let $(M,g,J)$ be an almost Hermitian manifold. Then the following conditions are equivalent: \begin{enumerate} \item[(i)] $M$ is $(1,2)$-symplectic, \item[(ii)] $\nab{\bar Z}W\in C^\infty(T^{1,0}M)$ for all $Z,W\in C^\infty(T^{1,0}M)$. \end{enumerate} \end{lemma} \begin{lemma}\label{lemm-cosymplectic} Let $(M,g,J)$ be an almost Hermitian manifold. 
Then the following conditions are equivalent: \begin{enumerate} \item[(i)] $M$ is cosymplectic, \item[(ii)] $\sum_{k=1}^m\nab{\bar Z_k}{Z_k}\in C^\infty(T^{1,0}M)$ for any local Hermitian frame $\{Z_k\}$. \end{enumerate} \end{lemma} \begin{example}\label{exam-Cal-Eck} For $r,s\ge 0$ let $(M,g)$ be the product $S^{2r+1}\times S^{2s+1}$ of the two unit spheres in ${\Bbb C}^{r+1}$ and ${\Bbb C}^{s+1}$ equipped with their standard Euclidean metrics. The manifold $(M,g)$ has a standard almost Hermitian structure $J$ which can be described as follows (cf. \cite{Gr-Her} and \cite{Tri-Van}): Let $n_1,n_2$ be the unit normals to $S^{2r+1},S^{2s+1}$ in ${\Bbb C}^{r+1},{\Bbb C}^{s+1}$ and let ${\cal H}_1,{\cal H}_2$ be the horizontal spaces of the Hopf maps $S^{2r+1}\to\proc r$, $S^{2s+1}\to\proc s$, respectively. Then any vector tangent to $M$ has the form $$X=X_1+aJ_1n_1+X_2+bJ_2n_2$$ where $a,b\in\Bbb R$, $X_1\in{\cal H}_1$, $X_2\in{\cal H}_2$, and $J_1,J_2$ are the standard K\"ahler structures on ${\Bbb C}^{r+1}$ and ${\Bbb C}^{s+1}$, respectively. Then the almost complex structure $J$ on $M$ is given by $$J:X\mapsto J_1X_1-bJ_1n_1+J_2X_2+aJ_2n_2.$$ We calculate that (cf. \cite{CMW}): $$\delta J=-2(rJ_1n_1+sJ_2n_2).$$ The almost Hermitian manifold $(M,g,J)$ is called the {\it Calabi-Eckmann manifold}. It is cosymplectic if and only if $s=r=0$ i.e. when $M$ is the real $2$-dimensional torus in ${\Bbb C}^2$. \end{example} \begin{example}\label{exam-twistor} Any invariant metric on a $3$-symmetric space gives it a $(1,2)$-symplec\-tic structure (cf. Proposition 3.2 of \cite{Gr-3}). Such $3$-symmetric spaces occur as twistor spaces of symmetric spaces. One interesting example is the complex Grassmannian $G_n({\Bbb C}^{m+n})=\SU{m+n}/\text{\bf S}(\U m\times \U n)$ with twistor bundle the flag manifold $N=\SU{m+n}/\text{\bf S}(\U m\times \U k\times\U{n-k})$ and projection $\pi:N\to G_n({\Bbb C}^{m+n})$ induced by the inclusion map $\U k\times\U{n-k}\hookrightarrow\U n$. 
The manifold $N$ has an almost Hermitian structure usually denoted by $J^2$ such that $(N,g,J^2)$ is $(1,2)$-symplectic for any $\SU{m+n}$-invariant metric $g$ on $N$. For further details see \cite{S}. \end{example} Finally, recall that an almost Hermitian manifold is called {\it Hermitian} if its almost complex structure is integrable. A necessary and sufficient condition for this is the vanishing of the Nijenhuis tensor (cf. \cite{N-N}), or equivalently, that $T^{1,0}M$ is closed under the Lie bracket i.e. $[T^{1,0}M,T^{1,0}M]\subset T^{1,0}M$. \section{The harmonicity of holomorphic maps} Throughout this section we shall assume that $(M^m,g,J)$ and $(N^n,h,J^N)$ are almost Hermitian manifolds of complex dimensions $m$ and $n$ with Levi-Civita connections $\nabla$ and $\nabla^N$, respectively. Furthermore we suppose that the map $\phi:M\to N$ is holomorphic i.e. its differential $d\phi$ satisfies $d\phi\circ J=J^N\circ d\phi$. We are interested in studying under what additional assumptions the map $\phi$ is a harmonic map or morphism. For a local Hermitian frame $\{Z_k\}$ on $M$ we define $$A=\sum_{k=1}^m\nabf{\bar Z_k}d\phi(Z_k)\ \ \text{and}\ \ B=-\sum_{k=1}^md\phi(\nabm{\bar Z_k}{Z_k}).$$ \begin{lemma}\label{lemm-tension} Let $\phi:(M,g,J)\to(N,h,J^N)$ be a map between almost Hermitian manifolds. If $N$ is $(1,2)$-symplectic then the tension field $\tau(\phi)$ of $\phi$ is given by \begin{equation}\label{equa-tau-J} \tau(\phi)=-d\phi(J\delta J). \end{equation} \end{lemma} \begin{proof} Let $\{Z_k\}$ be a local Hermitian frame, then a simple calculation shows that $$J\delta J =\sum_{k=1}^m\{(1+iJ)\nabm{\bar Z_k}{Z_k}+(1-iJ)\nabm{Z_k}{\bar Z_k}\},$$ so that the $(0,1)$-part of $J\delta J$ is given by $$(J\delta J)^{0,1}=(1+iJ)\sum_{k=1}^m\nabm{\bar Z_k}{Z_k}.$$ The holomorphy of $\phi$ implies that $d\phi(Z_k)$ belongs to $C^\infty(\phi^{-1}T^{1,0}N)$ and the $(1,2)$-symplecticity on $N$ that $\nabf{\bar Z_k}d\phi(Z_k)\in C^\infty(\phi^{-1}T^{1,0}N)$. 
This means that $A^{0,1}=0$. {}From Equation (\ref{equa-tau}) and the symmetry of the second fundamental form $\nabla d\phi$ we deduce that $\tau(\phi)=2(A+B)$. Taking the $(0,1)$-part and using the fact that $\phi$ is holomorphic we obtain $$\tau(\phi)^{0,1}=2(A^{0,1}+B^{0,1})=2B^{0,1}=-d\phi(J\delta J)^{0,1}.$$ Since $\tau(\phi)$ and $d\phi(J\delta J)$ are both real, we deduce the result. \end{proof} \vskip .5cm The next proposition gives a criterion for harmonicity in terms of the Lee vector field. \begin{proposition}\label{prop-harmonic} Let $\phi:(M,g,J)\to(N,h,J^N)$ be a holomorphic map from an almost Hermitian manifold to a $(1,2)$-symplectic manifold. Then $\phi$ is harmonic if and only if $d\phi(J\delta J)=0$. \end{proposition} Note that since we are assuming that $\phi$ is holomorphic, $d\phi(J\delta J)=0$ is equivalent to $d\phi(\delta J)=0$. As a direct consequence of Proposition \ref{prop-harmonic} we have the following result of Lichnerowicz, see \cite{L}: \begin{corollary}\label{corr-L} Let $\phi:(M,g,J)\to(N,h,J^N)$ be a holomorphic map from a cosymplectic manifold to a $(1,2)$-symplectic one. Then $\phi$ is harmonic. \end{corollary} To deduce that $\phi$ is a harmonic morphism we must assume that $\phi$ is horizontally weakly conformal. In that situation we can say more: \begin{proposition}\label{prop-harm-morph-1} Let $\phi:(M^{m},g,J)\to(N^{n},h,J^N)$ be a surjective horizontally weakly conformal holomorphic map between almost Hermitian manifolds. Then any two of the following conditions imply the third: \begin{enumerate} \item[(i)] $\phi$ is harmonic and so a harmonic morphism, \item[(ii)] $d\phi(J\delta J)=0$, \item[(iii)] $N$ is cosymplectic. \end{enumerate} \end{proposition} \begin{proof} By taking the $(0,1)$-part of equation (\ref{equa-tau}) we obtain $$\tau(\phi)^{0,1}=2(A^{0,1}+B^{0,1}).$$ The tension field $\tau(\phi)$ is real so that the map $\phi$ is harmonic if and only if $\tau(\phi)^{0,1}=0$.
Since $2B^{0,1}=-d\phi(J\delta J)^{0,1}$ and the vector field $d\phi(J\delta J)$ is real the condition $d\phi(J\delta J)=0$ is equivalent to $B^{0,1}=0$. To complete the proof we shall now show that $A^{0,1}=0$ on $M$ if and only if $N$ is cosymplectic. Let $R$ be the open subset of regular points of $\phi$. Let $p\in R$ and $\{Z'_1,\dots,Z'_n\}$ a local Hermitian frame on an open neighbourhood $V$ of $\phi(p)\in N$. Let $Z^*_1,\dots,Z^*_n$ be the unique horizontal lifts of $Z'_1,\dots,Z'_n$ to $\phi^{-1}(V)$ and normalize by setting $Z_k=\lambda Z^*_k$ for $k=1,2,\dots,n$, where $\lambda$ is the dilation of $\phi$ defined in Section 2. Then we can, on an open neighbourhood of $p$, extend $\{Z_1,\dots,Z_n\}$ to a local Hermitian frame $\{Z_1,\dots,Z_m\}$ for $M$. We then have $$A=\sum_{k=1}^n\nabf{\bar Z_k}{(\lambda Z'_k)}= \sum_{k=1}^n\bar Z_k(\lambda)Z'_k +\lambda^2\sum_{k=1}^n\nabn{\bar Z'_k}{Z'_k}.$$ The vector field $\sum_{k=1}^n\bar Z_k(\lambda)Z'_k$ belongs to $\phi^{-1}T^{1,0}N$, so by Lemma \ref{lemm-cosymplectic}, $A^{0,1}$ vanishes on $R$ if and only if $N$ is cosymplectic at each point of $\phi(R)$. Now note that if $p$ is a critical point of $\phi$ then {\it either} $p$ is a limit point of a sequence of regular points {\it or} $p$ is contained in an open subset $W$ of critical points. In the first case, if $A^{0,1}$ vanishes on $R$ then it vanishes also at $p$ by continuity. In the second case $d\phi=0$ on $W$ so that $A^{0,1}=0$ at $p$. This means that $A^{0,1}$ vanishes on $R$ if and only if it vanishes on $M$. On the other hand, since $\phi$ is surjective it follows from Sard's theorem that $\phi(R)$ is dense in $N$. This implies that $N$ is cosymplectic at points of $\phi(R)$ if and only if $N$ is cosymplectic everywhere. Putting the above remarks together yields the proof. 
\end{proof} \vskip .5cm As a direct consequence of Proposition \ref{prop-harm-morph-1} we have the following: \begin{theorem}\label{theo-harm-morp} Let $\phi:(M,g,J)\to (N,h,J^N)$ be a surjective horizontally weakly conformal holomorphic map from a cosymplectic manifold to an almost Hermitian manifold. Then $N$ is cosymplectic if and only if $\phi$ is a harmonic morphism. \end{theorem} Combining Theorems \ref{theo-harm-morp} and \ref{theo-B-E} we then obtain: \begin{corollary}\label{corr-harm-morp-2} Let $\phi:(M,g,J)\to(N,h,J^N)$ be a surjective horizontally homothetic holomorphic map from a cosymplectic manifold to an almost Hermitian manifold. Then $N$ is cosymplectic if and only if $\phi$ has minimal fibres. \end{corollary} Corollary \ref{corr-harm-morp-2} generalizes a result of B. Watson in \cite{Wat} where it is assumed that the map $\phi$ is a Riemannian submersion. If the manifold $(M,g,J)$ is $(1,2)$-symplectic we have the following version of Theorem \ref{theo-harm-morp}: \begin{proposition}\label{prop-harm-morph-2} Let $\phi:(M,g,J)\to (N^n,h,J^N)$ be a horizontally weakly conformal holomorphic map from a $(1,2)$-symplectic manifold to a cosymplectic one. Then $\phi$ is a harmonic morphism whose fibres are minimal at regular points. If $n>1$ then $\phi$ is horizontally homothetic. \end{proposition} \begin{proof} The inclusion maps of the fibres of $\phi$ are holomorphic maps between $(1,2)$-symplectic manifolds. They are isometric immersions and, by Corollary \ref{corr-L}, harmonic so the fibres are minimal. For an alternative argument see \cite{Gr-1}. If $n>1$ then Theorem \ref{theo-B-E} implies that $\phi$ is horizontally homothetic. \end{proof} \vskip .5cm Now assume that $n=1$, then $N$ is automatically K\"ahler and therefore $(1,2)$-symplectic. Further, any holomorphic map from an almost Hermitian manifold $(M,g,J)$ to $N$ is horizontally weakly conformal. 
Hence Proposition \ref{prop-harmonic} implies the following results: \begin{corollary}\label{corr-surf-1} Let $\phi:(M,g,J)\to N$ be a holomorphic map from an almost Hermitian manifold to a Riemann surface. Then $\phi$ is a harmonic morphism if and only if $d\phi(J\delta J)=0$. \end{corollary} \begin{corollary}\label{corr-surf-2} Let $\phi:(M,g,J)\to N$ be a holomorphic map from a cosymplectic manifold to a Riemann surface. Then $\phi$ is a harmonic morphism. \end{corollary} \begin{example} For two integers $r,s\ge 0$ let $M$ be the Calabi-Eckmann manifold $(S^{2r+1}\times S^{2s+1},g,J)$ and $\phi:M\to\proc r\times\proc s$ be the product of the Hopf maps $S^{2r+1}\to\proc r$, $S^{2s+1}\to\proc s$. Then it is not difficult to see that $\phi$ is holomorphic. Further the kernel of $d\phi$ is given by $\ker d\phi=\text{span}\{J_1n_1,J_2n_2\}$. {}From Example \ref{exam-Cal-Eck} we get $d\phi(\delta J)=-2d\phi(rJ_1n_1+sJ_2n_2)=0$. Since the map $\phi$ is a Riemannian submersion we deduce by Proposition \ref{prop-harm-morph-1} that $\phi$ is a harmonic morphism. \end{example} The next result can be extended to any of the twistor spaces considered by Salamon in \cite{S}, but for clarity we state it for a particular case. \begin{proposition}\label{prop-twistor} Let $\pi:N\to G_n({\Bbb C}^{m+n})$ be the twistor fibration of Example \ref{exam-twistor} and $\phi:(M,g,J)\to N$ be a holomorphic map from an almost Hermitian manifold into the flag manifold $N$. Although $\psi=\pi\circ\phi:M\to G_n({\Bbb C}^{m+n})$ is not, in general, a holomorphic map, we have $$\tau(\psi)=-d\psi(J\delta J).$$ \end{proposition} \begin{proof} Let $\{Z_k\}$ be a local Hermitian frame on $M$. Then by using the Composition Law for the tension field and Lemmas \ref{lemm-tension} and \ref{lemm-vanish} we obtain: \begin{eqnarray*}\tau(\psi)&=&d\pi(\tau(\phi)) +\sum_{k=1}^m\nabla d\pi(d\phi(\bar Z_k),d\phi(Z_k))\\ &=&-d\pi(d\phi(J\delta J))+0\\ &=&-d\psi(J\delta J). 
\end{eqnarray*} \end{proof} \begin{lemma}\label{lemm-vanish} The twistor fibration $\pi:N\to G_n({\Bbb C}^{m+n})$ is $(1,1)$-geodesic i.e. $$\nabla d\pi(Z,W)=0$$ for all $p\in N$, $Z\in T^{1,0}_pN$ and $W\in T^{0,1}_pN$. \end{lemma} \begin{proof} Decompose $Z$ and $W$ into vertical and horizontal parts $Z=Z^{\cal V}+Z^{\cal H}$, $W=W^{\cal V}+W^{\cal H}$. Now since $\pi$ is a Riemannian submersion $\nabla d\pi(Z^{\cal H},W^{\cal H})=0$ by Lemma 1.3 of \cite{O}. Further $$\nabla d\pi(Z^{\cal V},W)=\nabfg{Z^{\cal V}}{d\pi(W)} -d\pi(\nabm{Z^{\cal V}}W).$$ The first term is zero and the second term is of type $(0,1)$ with respect to the almost Hermitian structure $J_p$ on $G_n({\Bbb C}^{m+n})$ defined by $p$, since $\nabm{Z^{\cal V}}W$ is of type $(0,1)$ and $d\pi_p:T_pM\to T_{\pi(p)}G_n({\Bbb C}^{m+n})$ intertwines $J$ and $J_p$. Similarly $\nabla d\pi(W,Z^{\cal V})$ is of type $(1,0)$, so by the symmetry of $\nabla d\pi$, $\nabla d\pi(Z^{\cal V},W)=0$. Hence $$\nabla d\pi(Z,W)=\nabla d\pi(Z^{\cal V},W)+\nabla d\pi(Z^{\cal H},W^{\cal H}) +\nabla d\pi(Z^{\cal H},W^{\cal V})=0.$$ \end{proof} \section{Superminimality} Let $\phi:(M,g)\to(N,h,J^N)$ be a horizontally conformal submersion from a Riemannian manifold to an almost Hermitian manifold. Assume that the fibres of $\phi$ are orientable and of real dimension $2$. Then we can construct an almost Hermitian structure $J$ on $(M,g)$ such that $\phi$ becomes holomorphic: make a smooth choice of an almost Hermitian structure on each fibre and lift $J^N$ to the horizontal spaces $\cal H$ using $d\phi\circ J=J^N\circ d\phi$. For an almost Hermitian manifold $(M,g,J)$ we shall call an almost complex submanifold $F$ of $M$ {\it superminimal} if $J$ is parallel along $F$ i.e. $\nabm VJ=0$ for all vector fields $V$ tangent to $F$. It is not difficult to see that any superminimal $F$ is minimal. Superminimality of surfaces in $4$-dimensional manifolds has been discussed by several authors, see for example \cite{Br}.
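The superminimality condition can be unpacked as follows; this merely restates $\nabla_V J=0$ using the definition of the covariant derivative of a $(1,1)$-tensor.

```latex
% For every V tangent to F and every W in C^\infty(TM):
(\nabla_V J)W = \nabla_V (JW) - J(\nabla_V W) = 0,
% equivalently  \nabla_V (JW) = J(\nabla_V W),
% so covariant differentiation along the fibre commutes with J and
% parallel transport along F preserves the splitting
% T^{C}M = T^{1,0}M \oplus T^{0,1}M.
```

In particular, for a vertical field $V$ of type $(1,0)$ this gives $J(\nabla_V V)=\nabla_V(JV)=i\nabla_V V$, so $\nabla_V V\in T^{1,0}M$.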
\begin{theorem}\label{theo-integral} Let $\phi:(M,g,J)\to(N,h,J^N)$ be a horizontally conformal holomorphic map from an almost Hermitian manifold to a Hermitian manifold with complex $1$-dimensional fibres. If \begin{enumerate} \item[(i)] the fibres of $\phi$ are superminimal with respect to $J$, and \item[(ii)] the horizontal distribution $\cal H$ satisfies $[{\cal H}^{1,0},{\cal H}^{1,0}]^{\cal V}\subset {\cal V}^{1,0}$, \end{enumerate} then $J$ is integrable. \end{theorem} \begin{proof} We will show that $T^{1,0}M$ is closed under the Lie bracket i.e. $[T^{1,0}M,T^{1,0}M]\subset T^{1,0}M$, or equivalently: \begin{enumerate} \item[(a)] $[{\cal V}^{1,0},{\cal V}^{1,0}]\subset T^{1,0}M$, \item[(b)] $[{\cal H}^{1,0},{\cal H}^{1,0}]\subset T^{1,0}M$, \item[(c)] $[{\cal H}^{1,0},{\cal V}^{1,0}]\subset T^{1,0}M$. \end{enumerate} The fibres are complex $1$-dimensional so $[{\cal V}^{1,0},{\cal V}^{1,0}]=0$. This proves (a). Let $Z,W$ be two vector fields on $N$ of type $(1,0)$ and let $Z^*,W^*$ be their horizontal lifts to ${\cal H}^{1,0}$. Then $d\phi[Z^*,W^*]=[Z,W]$ is of type $(1,0)$ since $J^N$ is integrable. The holomorphy of $\phi$ and assumption (ii) then imply (b). Let $\sp{}{}$ be the complex bilinear extension of $g$ to $T^{\Bbb C}M$ and $V$ be a vertical vector field of type $(1,0)$. Then $d\phi([V,Z^*])=[d\phi(V),Z]=0$, so $[V,Z^*]^{\cal H}=0$. On the other hand, \begin{eqnarray*} \sp{[V,Z^*]}V&=&\sp{\nabm V{Z^*}}V-\sp{\nabm {Z^*}V}V\\ &=&-\sp{Z^*}{\nabm VV}-\frac 12Z^*(\sp VV). \end{eqnarray*} The subspace $T^{1,0}M$ is isotropic w.r.t. $\sp{}{}$ so $\sp VV=0$. The superminimality of the fibres implies that $J(\nabm VV)=\nabm V{JV}=i\nabm VV$. Hence $\nabm VV$ is an element of $T^{1,0}M$ so $\sp{Z^*}{\nabm VV}=0$. This shows that $\sp{[V,Z^*]}V=0$ so $[V,Z^*]^{\cal V}$ belongs to $T^{1,0}M$. This completes the proof.
\end{proof} \vskip .5cm The reader should note that condition (ii) of Theorem \ref{theo-integral} is satisfied when the horizontal distribution $\cal H$ is integrable, or when $N$ is complex $1$-dimensional, since in both cases $[{\cal H}^{1,0},{\cal H}^{1,0}]^{\cal V}=0$. Another example where Theorem \ref{theo-integral} applies is the following: \begin{example} The Hopf map $\phi:{\Bbb C}^{n+1}-\{0\}\to\proc n$ is a horizontally conformal submersion with complex $1$-dimensional fibres. The horizontal distribution is non-integrable, but it is easily seen that condition (ii) is satisfied, in fact $[{\cal H}^{1,0},{\cal H}^{1,0}]^{\cal V}=0$. The K\"ahler structure on $\proc n$ lifts to two almost Hermitian structures on ${\Bbb C}^{n+1}-\{0\}$, the fibres are superminimal with respect to both of these, so by Theorem \ref{theo-integral} they are both Hermitian. In fact one is the standard K\"ahler structure; the other is not K\"ahler. \end{example} If $N$ is complex $1$-dimensional and $M$ is real $4$-dimensional then Theorem \ref{theo-integral} reduces to a result of the second author given in Proposition 3.9 of \cite{W-4d}. \begin{example} Let $(M^2,g,J)$ be an almost Hermitian manifold of complex dimension $2$ and $N$ a Riemann surface. Then we have the identities \cite{Gau} $$\nabm{\delta J}J=\nabm{J\delta J}J=0.$$ In other words, span$_{\Bbb R}\{\delta J,J\delta J\}$ is contained in $\ker\nabla J$. The condition $d\phi(\delta J)=0$ for a non-constant holomorphic map $\phi:M\to N$ is thus equivalent to the superminimality of the fibres (at regular points) so that Corollary \ref{corr-surf-1} translates into the following result of the second author, see Proposition 1.3 of \cite{W-4d}. {\it A holomorphic map from a Hermitian manifold of complex dimension $2$ to a Riemann surface is a harmonic morphism if and only if its fibres are superminimal at the regular points of $\phi$}. \end{example}
\section{INTRODUCTION}\label{intro} The analysis of the spectra of \ion{H}{ii} regions provides information about the chemical composition of the present-day interstellar medium in different kinds of star-forming galaxies and in different regions across these galaxies. The results supply fundamental input for our models of galactic chemical evolution. Oxygen, the third most abundant element, is taken as representative of the metallicity of the medium, since the oxygen abundance is the one most easily derived from the optical spectra of photoionized gas. Leaving aside the construction of photoionization models that reproduce the spectra, there are different ways to derive oxygen abundances from the observed spectra. When the spectra are deep enough to allow the measurement of the weak lines needed for the determination of electron temperatures, such as [\ion{O}{iii}]~$\lambda4363$ or [\ion{N}{ii}]~$\lambda5755$, we can use the direct method to derive the O$^+$ and O$^{++}$ abundances, and obtain the total oxygen abundance by adding these ionic abundances. On the other hand, when the temperature-sensitive lines are not detected, one must resort to alternative methods that are based on the intensities of the strongest lines, the so-called strong-line methods. These methods are calibrated using grids of photoionization models or samples of \ion{H}{ii} regions that have estimates of the electron temperature (the empirical methods). The strongest lines in the optical spectra of \ion{H}{ii} regions that are usually used by strong-line methods are [\ion{O}{ii}]~$\lambda3727$, [\ion{O}{iii}]~$\lambda\lambda4959,5007$, [\ion{N}{ii}]~$\lambda\lambda6548+84$, [\ion{S}{ii}]~$\lambda\lambda6717+31$, H$\alpha$, and H$\beta$. 
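Schematically, and with the caveat that the choice of atomic data and temperature relations varies between studies, the direct method described above chains a temperature diagnostic to ionic and then total abundances:

```latex
% Auroral-to-nebular [O III] line ratio as temperature diagnostic:
R_{[\mathrm{O\,III}]}
  = \frac{I(\lambda 4959)+I(\lambda 5007)}{I(\lambda 4363)}
  \;\longrightarrow\; T_e ,
% ionic abundances from the oxygen lines and T_e, then
\frac{\mathrm{O}}{\mathrm{H}}
  \simeq \frac{\mathrm{O^{+}}}{\mathrm{H^{+}}}
       + \frac{\mathrm{O^{2+}}}{\mathrm{H^{+}}} .
```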
Different methods use different combinations of line ratios involving these lines and, although a large variety of methods are available, it is important to consider the procedures that have been used to calibrate them, and whether the samples of observed objects or photoionization models used for the calibration cover the same physical properties as the \ion{H}{ii} regions to which the method will be applied \citep{Sta:10b}. In general, different methods and different calibrations of the same method will lead to different results. It is not easy to construct grids of photoionization models that reproduce well enough the main characteristics of the observed \ion{H}{ii} regions so that they can be used to calibrate the strong-line methods \citep{Dop:06,Sta:08}. This might explain the fact that the abundances derived with methods based on this type of calibration differ from those derived with empirical methods \citep{Kew:08a,Lo:10a}. As an example of the complications that arise when defining the input parameters of photoionization models, we do not have much information about the properties of dust grains inside \ion{H}{ii} regions \citep[see e.g.][]{Och:15} and they have important effects on the emitted spectrum \citep{vanH:04}, especially at high metallicities. Empirical methods also have their problems: it is difficult to measure the lines needed for temperature determinations in metal-rich regions, the electron temperatures estimated for these regions can introduce important biases in the abundance determinations \citep{Sta:05}, and if, as suggested by several authors, there are temperature fluctuations in \ion{H}{ii} regions which are larger than those predicted by photoionization models, they can lead to lower abundances than the real ones at any metallicity \citep[e.g.][]{Pena:12a}. 
If one excludes from the samples high-metallicity objects, and if temperature fluctuations turn out to be not much higher than the ones expected from photoionization models, it can be argued that empirical calibrations of the strong-line methods should be preferred because they are based on a smaller number of assumptions, although photoionization models can provide much insight into the explanations behind the behaviour and applicability of the strong-line methods. One important question is how well strong-line methods can be expected to perform. Grids of photoionization models can be used to show that strong-line methods work because the metallicity of most \ion{H}{ii} regions is strongly related to the effective temperature of the ionizing radiation and to the ionization parameter of the region\footnote{The number of ionizing photons per atom arriving to the inner face of the ionized region.}\citep{Dop:06,Sta:08}. This implies that strong-line methods will not work properly when applied to regions that do not follow this general relation due to variations in their star formation histories, ages, or chemical evolution histories \citep{Sta:10b}. The direct method is expected to work better since it is based on a smaller number of assumptions, and when observations of \ion{H}{ii} regions are presented in any publication, it is usually described as an achievement to detect the weak lines that allow a temperature determination. However, the measurement of the weak, temperature-sensitive lines can be affected by large uncertainties when these lines have a low signal-to-noise ratio in the nebular spectrum. When the oxygen abundances are derived with the direct method using temperature estimates based on these lines, the results will also have large uncertainties.
The calibration of strong-line methods using these oxygen abundances can be affected by the large uncertainties, but this problem can be alleviated by a careful selection of calibration samples trying to have small, randomly distributed, uncertainties, and by cleaning up the samples, excluding the outliers, since it can be assumed that they depart from the relation implied by the rest of the sample either because they have different properties or because their line intensities have large uncertainties. In principle the average behaviour of these samples could allow good calibrations of strong-line methods which might then show lower dispersions than the results of the direct method when applied to objects in the calibration sample or to objects that have the average properties of the calibration sample. In these cases, strong-line methods will be more robust than the direct method. The measurement of the intensities of the strong lines used by the strong-line methods should present fewer problems. However, there are observational effects that introduce uncertainties in all the measurements of line intensity ratios, effects that are not necessarily included in the estimated uncertainties, namely, atmospheric differential refraction leading to the measurement of different lines at different spatial positions, the incorrect extraction of 1D spectra from tilted 2D spectra, undetected absorption features beneath the emission lines, problems with the estimation of the continuum or with deblending procedures, the presence of unnoticed cosmic rays, or any bias introduced by the flux calibration or the extinction correction. Some of the line ratios used by strong-line methods will be more sensitive to these effects, making these methods less robust than others that are based on less-sensitive line ratios.
Moreover, since the line ratios used as temperature diagnostics can be very sensitive to these observational problems, the results of the direct method might be less robust than those derived with strong-line methods even when the weak temperature-sensitive lines are measured with a good signal-to-noise ratio. One way to infer the robustness of the methods used for abundance determinations in the presence of observational problems is to compare their performance when they are used to estimate metallicity gradients in galaxies. The observational problems are likely to introduce dispersions around an existing gradient that can be interpreted as azimuthal abundance variations. If any of the methods implies significantly lower dispersions, it seems reasonable to assume that azimuthal variations must be lower than the estimated dispersions, and hence that the method is behaving in a more robust way. Since spectra obtained by different authors are likely to be affected by various observational problems in different amounts, the robustness of each method to observational effects can be inferred from the dispersions around the gradient implied by the method when using spectra observed by different authors in the same galaxy. Methods that show significantly lower dispersions can then be inferred to be more robust. Here we present an analysis of the oxygen abundance gradient in M81, using this galaxy as a case study of the robustness of some of the methods used for abundance determinations in \ion{H}{ii} regions. We will explore the behaviour of methods that have been calibrated using large samples of \ion{H}{ii} regions that have temperature determinations. M81 is an ideal candidate for this study, since it is a nearby spiral galaxy, at a distance of 3.63$\pm$0.34 Mpc~\citep{Free:01a}. This galaxy belongs to an interacting group of galaxies and has well-defined spiral arms that contain a large number of \ion{H}{ii} regions. 
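The gradient comparisons rest on the usual linear parametrization in galactocentric radius; as a hedged sketch of the model whose residuals are interpreted here:

```latex
% Linear radial abundance gradient; R in kpc, slope alpha in dex/kpc:
12+\log(\mathrm{O/H})(R) = 12+\log(\mathrm{O/H})_{0} + \alpha\, R ,
% the dispersion of individual H II regions about this line is the
% quantity used to compare the robustness of the different methods.
```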
The oxygen abundance gradient of M81 has been calculated in different studies using several methods (\citealt{Stau:84a,Gar:87a,Pil:04a,Stan:10a,Patt:12a,Stan:14a}; \citealp*{Pil:14a}). These works find slopes that go from $-0.093$ to $-0.011$ dex kpc$^{-1}$, and some of them include \ion{H}{ii} regions where it is possible to measure the electron temperature and calculate the metallicity with the direct method. This paper is structured as follows: in Section~2 we describe our observations, which were obtained with the Gran Teles\-copio Canarias (GTC), the data reduction, the sample selection, the measurement of the line intensities, and the reddening corrections; in Section~3 we describe the methods we apply to calculate the physical conditions and chemical abundances of the sample of \ion{H}{ii} regions; in Section~4 we present the results of this analysis, and the implied metallicity gradients, using our data and other observations from the literature; in Section~5 we discuss the scatter around the metallicity gradient implied by the different methods; and finally, in Section~6, we summarize our results and present our conclusions. \section[Observations]{OBSERVATIONS AND DATA REDUCTION} Spectroscopic observations (programme GTC11-10AMEX, PI: DRG) were carried out using the long-slit spectrograph of the OSIRIS instrument at the 10.4-m GTC telescope in the Observatorio del Roque de los Muchachos (La Palma, Spain). We used the five slit positions listed in Table~\ref{Slits}, with a slit width of 1 arcsec and length of 8 arcmin. Table~\ref{Slits} provides the central positions of the slits, the exposure times we used, the slit position angles (P.A.), and the airmasses during the observations. We obtained three exposures of 900 s at each slit position using the R1000B grism, which allowed us to cover the spectral range 3630--7500 \AA\ with a spectral resolution of $\sim7$ \AA\ full width at half-maximum.
The observations were acquired on 2010 April 5--7 when the seeing was $\sim1$ arcsec. The detector binning by 2 pixels in the spatial dimension provided a scale of 0.25 arcsec pixel$^{-1}$. The airmasses were in the range 1.3--1.5 and, at these values, departures from the parallactic angle can introduce light losses at some wavelengths due to differential atmospheric refraction~\citep{Fil:82}. In our observations, the differences between the position angle and the parallactic angle go from 8 to 23 degrees. Although small, the differences imply that we might be losing some light in the blue, especially for the few objects with sizes around 1 arcsec observed with slit positions P1 and P2. This is one of the possible observational problems that we listed in Section~\ref{intro}, and the combined effects of these problems are explored in our analysis. The data were reduced using the tasks available in the \textsc{iraf}\footnote{\textsc{iraf} is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} software package. The reduction process included bias subtraction, flat-field and illumination correction, sky subtraction, wavelength calibration, and flux calibration using the standard star Feige 34. The final spectra result from the median of the three exposures obtained at each slit position. \begin{table} \caption{Log of the observations.} \begin{tabular}{lcccrc} \hline Slit & R.A. & Dec. 
& Exposure & P.A.& Airmass\\ ID & (J2000) & (J2000) & times (s)& ($\degr$)\\ \hline P1& 09:54:38 & +69:05:48 & $3\times900$ & 171 & 1.3\\ P2& 09:54:52 & +69:08:11 & $3\times900$ & 6 & 1.3\\ P3& 09:55:37 & +69:07:46 & $3\times900$ & 123 & 1.4\\ P4& 09:55:46 & +69:07:48 & $3\times900$ & 105 & 1.5\\ P5& 09:55:48 & +69:04:53 & $3\times900$ & 127 & 1.4\\ \hline \end{tabular} \label{Slits} \end{table} The slit positions were selected to pass through some of the brightest stellar compact clusters in the catalogue of \citet*{Santi:10a} for M81. These observations are part of a large-scale program dedicated to study the star formation in this galaxy \citep{Mayya:13}. Here we use them to study the chemical abundances and the abundance gradient provided by \ion{H}{ii} regions in M81. We extracted spectra using the task \textsc{apall} of \textsc{iraf} for each knot of ionized gas that we found along the five slits. There were two or three bright stellar clusters in each slit and we used the one closest to each ionized knot to trace the small changes of position of the stellar continuum in the CCD. We fitted polynomial functions to these traces and used them as a reference to extract the spectrum of the knots. The size of the apertures goes from 4 to 28 pixels (1 to 7 arcsec). The final sample consists of 48 \ion{H}{ii} regions located in the disc of M81. Fig.~\ref{slit-regions} shows the UV image of M81 from \textit{GALEX} (Galaxy Evolution Explorer) with our slit positions superposed. We also show boxes around the regions where we could extract spectra for several knots of ionized gas. One to eight knots were extracted in each of the boxes shown in Fig.~\ref{slit-regions}. The boxes are tagged as P$n$-$m$, where $n$ identifies the slit and $m$ the box along this slit. 
We identify the knots with numbers going from 1 to 48, starting with the first knot in box P1-1 and ending with the knots in P5-2, moving from South to North in the slits P1 and P2 and from East to West for the slits P3, P4, and P5. We also show an inset in Fig.~\ref{slit-regions} with a cut in the spatial direction along one of the columns with H$\alpha$ emission in our 2D spectra for box P3-3, illustrating the procedure we followed for selecting the ionized knots. Fig.~\ref{spectra} shows two examples of the extracted spectra, one with a high signal-to-noise ratio (region~1) and a second one with a low signal-to-noise ratio (region~22). \begin{figure*} \begin{center} \includegraphics[width=0.99\textwidth]{fig1.eps} \caption{UV image of M81 from \textit{GALEX} showing the slit positions listed in Table~\ref{Slits}. The boxes show the locations of the ionized knots in our sample. The inset shows a cut in the spatial direction along the H$\alpha$ emission line for box P3-3. We identify in the inset the knots of ionized gas whose spectra we extracted in this region.} \label{slit-regions} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.8\textwidth]{fig2.eps} \caption{Spectra of two of our observed regions. Region~1 has one of the spectra with the highest signal-to-noise ratios; region~22 has one of the lowest signal-to-noise ratios. The inset shows our detection of the temperature-sensitive line [\ion{N}{ii}] $\lambda5755$ in region~1.} \label{spectra} \end{center} \end{figure*} \subsection{Line measurements} Line intensities were measured using the \textsc{splot} routine of \textsc{iraf} by integrating the flux above the continuum defined by two points on each side of the emission lines. We fitted Gaussian profiles for those lines that appear blended.
The errors in the line intensities were calculated using the expression \citep{Tres:99a}: \begin{equation} \sigma_{I}=\sigma_{c}D\sqrt{2N_{pix}+\frac{EW}{D}}, \end{equation} where $D$ is the spectral dispersion in \AA\ per pixel, $\sigma_c$ is the mean standard deviation per pixel of the continuum on each side of the line, $N_{pix}$ is the number of pixels covered by the line and EW is the equivalent width. We corrected the Balmer line intensities for the effects of stellar absorption by assuming absorption equivalent widths of 2~\AA\ \citep{Mc:85}. The correction is small for most of our regions, with changes in the H$\alpha$ and H$\beta$ intensities below 7 and 10 per cent, respectively, but it is significant for six regions. In four of them (regions 7, 42, 43, and 44) it increases the intensity of H$\beta$ by just 12--14 per cent, but regions 6 and 14 have increments of 72 and 27 per cent, respectively. The effects of these changes on our results are described in Section~\ref{Oab}. The emission lines were corrected for extinction assuming an intrinsic line ratio of H$\alpha$/H$\beta=2.86$, suitable for $T_{\rm e}=10000$ K and $n_{\rm e}=100$ cm$^{-3}$ \citep{Os:06a}, since we find similar values for the physical conditions in our objects. We used the extinction law of \citet*{Car:89a} with a ratio of total to selective extinction in $V$ and $B-V$ of $R_V=3.1$. To correct each emission line ratio for reddening, we use the expression: \begin{equation} \frac{I({\lambda})}{I({\mbox{H}\beta})}= \frac{I_0(\lambda)}{I_0(\mbox{H}\beta)}10^{-c(\mbox{H}\beta)[f(\lambda)-1]} \end{equation} where $I(\lambda)/I(\mbox{H}\beta)$ is the observed line intensity ratio, $I_0(\lambda)/I_0(\mbox{H}\beta)$ is the reddening-corrected ratio, $c(\mbox{H}\beta)$ is the reddening coefficient, and $f(\lambda)$ is the extinction law normalized to H$\beta$.
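As an illustration, the two expressions above can be written as small helper functions. This is a minimal sketch with hypothetical argument names, not the code used in this work:

```python
import math

def line_intensity_error(sigma_c, D, N_pix, EW):
    """Tresse et al. (1999) error estimate for a measured line intensity.

    sigma_c : mean standard deviation per pixel of the continuum
    D       : spectral dispersion (angstrom per pixel)
    N_pix   : number of pixels covered by the line
    EW      : equivalent width of the line (angstrom)
    """
    return sigma_c * D * math.sqrt(2.0 * N_pix + EW / D)

def deredden(obs_ratio, c_hbeta, f_lambda):
    """Recover the reddening-corrected ratio I0(lambda)/I0(Hbeta) by
    inverting  I/I(Hbeta) = I0/I0(Hbeta) * 10**(-c(Hbeta) * (f(lambda) - 1))."""
    return obs_ratio * 10.0 ** (c_hbeta * (f_lambda - 1.0))
```

Note that for $f(\lambda)=1$, i.e. at H$\beta$, the correction reduces to the identity, while lines bluer than H$\beta$, with $f(\lambda)>1$, are boosted.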
Tables~\ref{chb} and \ref{lines}, whose full versions are available online, show the values of the extinction coefficients and the observed and reddening-corrected line ratios for each region. We also provide for each region the extinction corrected $I(\mbox{H}\beta)$ in Table~\ref{chb}. The final errors are the result of adding quadratically the uncertainties in the measured intensities, 4 per cent as our estimate of the uncertainty in the flux calibration, and the uncertainty in the reddening correction. The values we find for $c(\mbox{H}\beta)$ are in the range 0--0.51, in agreement with the values found by \citet{Patt:12a} for several \ion{H}{ii} regions in M81, $c(\mbox{H}\beta)=0.07\mbox{--}0.43$, but significantly lower than the values obtained by \citet{Stan:10a} for \ion{H}{ii} regions in this galaxy, $c(\mbox{H}\beta)=0.48\mbox{--}0.92$. \begin{table} \caption{The extinction coefficients $c(\mbox{H}\beta)$ and the reddening-corrected intensities for H$\beta$. The full table for the 48 regions is available online.} \label{chb} \begin{tabular}{cccc} \hline \multicolumn{1}{c}{Region} & \multicolumn{1}{c}{$c(\mbox{H}\beta)$}& \multicolumn{1}{c}{Error} & \multicolumn{1}{c}{$I(\mbox{H}\beta)$} \\ & & & (erg cm$^{-2}$ s$^{-1}$)\\ \hline 1 & 0.14 & 0.07 & 2.75$\times10^{-14}$ \\ 2 & 0.35 & 0.08 & 5.95$\times10^{-15}$ \\ 3 & 0.39 & 0.07 & 8.20$\times10^{-15}$ \\ 4 & 0.00 & 0.10 & 5.51$\times10^{-16}$ \\ 5 & 0.06 & 0.08 & 1.37$\times10^{-15}$ \\ 6 & 0.00 & 0.09 & 2.84$\times10^{-16}$ \\ 7 & 0.28 & 0.08 & 7.87$\times10^{-16}$ \\ 8 & 0.00 & 0.07 & 3.60$\times10^{-15}$ \\ 9 & 0.25 & 0.09 & 9.44$\times10^{-16}$ \\ 10 & 0.35 & 0.08 & 1.25$\times10^{-14}$ \\ \hline \end{tabular} \end{table} \begin{table} \caption{Some of the observed and reddening-corrected line ratios, normalized to $I(\mbox{H}\beta)=100$, for region~1. The error is expressed as a percentage of the reddening-corrected values.
The full table with the line intensities for the 48 regions is available online.} \label{lines} \begin{tabular}{cllrrc} \hline \multicolumn{1}{l}{Region} & \multicolumn{1}{l}{$\lambda$(\AA)} & ID & \multicolumn{1}{r}{$I(\lambda)$} & \multicolumn{1}{r}{$I_0(\lambda)$} & \multicolumn{1}{c}{Error (\%)} \\ \hline 1 & 3727 & $[$\ion{O}{ii}$]$ & 266 & 306 & 8 \\ 1 & 4101 & H$\delta$ & 21.4 & 23.7 & 7 \\ 1 & 4341 & H$\gamma$ & 41.2 & 44.2 & 6 \\ 1 & 4471 & \ion{He}{i} & 3.3 & 3.5 & 8 \\ 1 & 4861 & H$\beta$ & 100.0 & 100.0 & 5 \\ 1 & 4959 & $[$\ion{O}{iii}$]$ & 40.0 & 39.5 & 5 \\ 1 & 5007 & $[$\ion{O}{iii}$]$ & 120.1 & 118.1 & 5 \\ 1 & 5200 & $[$\ion{N}{i}$]$ & 2.5 & 2.5 & 8 \\ 1 & 5755 & $[$\ion{N}{ii}$]$ & 1.4 & 1.3 & 9 \\ 1 & 5876 & \ion{He}{i} & 11.8 & 10.8 & 6 \\ \hline \end{tabular} \end{table} \section{Physical conditions and oxygen abundances} \subsection{The direct method} We could measure the temperature-sensitive [\ion{N}{ii}] $\lambda5755$ line in 12 of the 48 \ion{H}{ii} regions in our sample, where it shows a well-defined profile with an S/N $\geq$ 3.6 (see e.g. Fig.~\ref{spectra}). This allows us to use the so-called direct method to derive the oxygen abundances, which is, in principle, the most reliable method. The [\ion{O}{iii}]~$\lambda4363$ auroral line was marginally detected in two regions with a noisy profile. The line can be affected by imperfect sky subtraction of the Hg~$\lambda4358$ sky line, and we decided not to use it. In order to calculate the physical conditions and the ionic oxygen abundances in these 12 \ion{H}{ii} regions, we use the tasks available in the \textsc{nebular} package of \textsc{iraf}, originally based on the calculations of \citet*{Rob:87a} and \citet{Shaw:95a}.
We adopted the following atomic data: the transition probabilities of \citet{Ze:82a} for O$^+$, \citet*{wi:96a} and \citet{Sto:00a} for O$^{++}$, \citet{wi:96a} for N$^+$ and \citet{Men:82a} for S$^+$; and the effective collision strengths of \citet{Prad:06a} for O$^+$, \citet{agg:99a} for O$^{++}$, \citet{Len:94a} for N$^+$, and \citet{Kee:96a} for S$^+$. We use the line intensity ratio [\ion{S}{ii}] $\lambda6717/\lambda6731$ to calculate the electron density, $n_{\rm e}$, and [\ion{N}{ii}] $(\lambda6548+\lambda6583)/\lambda5755$ to calculate the electron temperature. The [\ion{S}{ii}] ratio could be measured in all the regions, and we used $T_{\rm e}=10000$ K to derive $n_{\rm e}$ in those regions where the [\ion{N}{ii}] $\lambda5755$ line was not available. We obtain $n_{\rm e}\la100$ cm$^{-3}$ in most of the regions. At these densities, the [\ion{S}{ii}] diagnostic is not very sensitive to density variations \citep{Os:06a} and, in fact, some of the regions have a line ratio that lies above the range of expected values. However, all the [\ion{S}{ii}] line ratios but two are consistent within one sigma with $n_{\rm e}\la100$ cm$^{-3}$ and, since for these values of $n_{\rm e}$ the derived ionic abundances show only a slight dependence on density, we use $n_{\rm e}=100$ cm$^{-3}$ in all our calculations. On the other hand, the upper level of the [\ion{N}{ii}] $\lambda5755$ line can be populated by transitions resulting from recombination, leading to an overestimate of the electron temperature \citep{Rub:86}. We used the expression derived by \citet{Liu:00a} to estimate a correction for this contribution, but found that the effect is very small in our objects, $\la40$ K in $T_{\rm e}$, so that it is safe to ignore this correction. The values derived for $n_{\rm e}$ and $T_{\rm e}$([\ion{N}{ii}]) are listed in Table~\ref{Oxygen-abundances}, where we use `:' to identify the most uncertain values of $n_{\rm e}$.
Table~\ref{Oxygen-abundances} also gives for all the objects in our sample the number that we use for identification purposes, the coordinates of the region, the slit and box where the spectra were extracted, the angular sizes of the extracted regions, their galactocentric distances (see Section~\ref{Ograd}), and their oxygen and nitrogen abundances derived with the methods described below. We adopt a two-zone ionization structure characterized by $T_{\rm e}$([\ion{N}{ii}]) in the [\ion{O}{ii}] emitting region and by $T_{\rm e}$([\ion{O}{iii}]) in the [\ion{O}{iii}] emitting region, where the value of $T_{\rm e}$([\ion{O}{iii}]) is obtained using the relation given by \citet[see also \citealp{Gar:92a}]{Camp:86a}: \begin{equation} T_{\rm e}([\mbox{\ion{N}{ii}}])\simeq T_{\rm e}([\mbox{\ion{O}{ii}}])= 0.7\, T_{\rm e}([\mbox{\ion{O}{iii}}]) + 3000\ \mbox{K}, \label{Relation} \end{equation} which is based on the photoionization models of \citet{Sta:82}. This relation is widely used (see e.g. \citealt{Bre:11a, Patt:12a, Pil:12a}) and is similar to the one obtained from good-quality observations of \ion{H}{ii} regions \citep{Est:09a}. The ionic oxygen abundances are derived using the physical conditions described above and the intensities of [\ion{O}{ii}]~$\lambda3727$ and [\ion{O}{iii}]~$\lambda\lambda4959, 5007$ with respect to H$\beta$. The final values of the oxygen abundances can be obtained by adding the contribution of both ions: $\mbox{O}/\mbox{H}=\mbox{O}^+/\mbox{H}^++\mbox{O}^{++}/\mbox{H}^+$. The N abundance is calculated using the [\ion{N}{ii}]~$\lambda\lambda6548+84$ lines and the assumption that N/O~$\simeq\mbox{N}^+/\mbox{O}^+$. \subsection{Strong-line methods} When the emission lines needed to derive the electron temperature are too weak to be observed, it is still possible to estimate chemical abundances with the so-called strong-line methods. 
These methods are based on the intensities of lines that can be easily measured, such as [\ion{O}{ii}] $\lambda3727$, [\ion{O}{iii}] $\lambda5007$, or [\ion{N}{ii}] $\lambda6584$, and are calibrated using photoionization models or observational data of \ion{H}{ii} regions that include measurements of the electron temperature. The two approaches often lead to different results \citep[see e.g.,][]{Kew:08a}, but we will not enter here into a discussion of which one yields the better estimates; we will use the empirical methods simply because they provide the simplest approach to the problem. We have selected some of the empirical calibrations that are based on the largest numbers of \ion{H}{ii} regions: the P method of \citet{Pil:05a}, the ONS method of \citet{Pil:10a}, the C method of \citet{Pil:12a}, and the O3N2 and N2 methods calibrated by \citet{Marino:13a}. The methods use initial samples of around 100--700 \ion{H}{ii} regions that have temperature measurements, although in some cases different criteria are applied in order to select more adequate or more reliable subsamples. All these methods provide estimates of the oxygen abundance, whereas nitrogen abundances can only be obtained with the ONS and C methods. We describe the methods below. \subsubsection{The P method} Some of the most widely used strong-line methods are based on the parameter $R_{23}=I([$\ion{O}{ii}$]~\lambda3727)/I(\mbox{H}\beta)+ I([$\ion{O}{iii}$]~\lambda\lambda4959,5007)/I(\mbox{H}\beta)$, first introduced by~\citet{Pa:79a}. There are many different calibrations of this method, and they can lead to oxygen abundances up to 0.5 dex above those obtained from the direct method \citep*{Kenni:03a}. Here we use the calibration of \citet{Pil:05a}, which is based on a large sample of \ion{H}{ii} regions that have temperature measurements.
This calibration is called the P method because it uses as a second parameter in the abundance determination an estimate of the hardness of the ionizing radiation, $P=I([$\ion{O}{iii}$]~\lambda\lambda4959,5007)/(I([$\ion{O}{iii}$]~\lambda\lambda4959,5007) +I([$\ion{O}{ii}$]~\lambda3727))$, as proposed by \citet{Pil:01a,Pil:01b}. According to \citet{Pil:05a}, this method provides oxygen abundances that differ by less than 0.1 dex from the values obtained with the direct method. The main problem with the methods based on $R_{23}$ is that the relation of this parameter with $12+\log(\mbox{O}/\mbox{H})$ is double valued: the same value of $R_{23}$ can lead to two different values of the oxygen abundance, and one must find a procedure to break this degeneracy. Following \citet{Kew:08a}, we use $\log(I([$\ion{N}{ii}$]~\lambda6584)/I([$\ion{O}{ii}$]~\lambda3727))=-1.2$ as the dividing line between low- and high-metallicity objects. \subsubsection{The ONS method} The ONS method, proposed by \citet{Pil:10a}, uses the relative intensities of the lines [\ion{O}{ii}]~$\lambda3727$, [\ion{O}{iii}]~$\lambda\lambda4959,5007$, [\ion{N}{ii}]~$\lambda6548+84$, [\ion{S}{ii}]~$\lambda6717+31$, and H$\beta$. \citet{Pil:10a} classify the \ion{H}{ii} regions as cool, warm, or hot depending on the relative intensities of the [\ion{N}{ii}], [\ion{S}{ii}], and H$\beta$ lines, and provide different formulae that relate the oxygen and nitrogen abundances to several line ratios for each case. \citet{Pil:10a} find that the method shows very good agreement with the abundances they derive using the direct method, with root mean square differences of 0.075 dex for the oxygen abundance and 0.05 dex for the nitrogen abundance. \citet*{Li:13a} find similar differences with the direct method for their sample of \ion{H}{ii} regions, around 0.09 dex in the oxygen abundance.
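The quantities entering the P method described above, $R_{23}$, the hardness parameter $P$, and the branch criterion, can be sketched as follows, with line intensities normalized to $I(\mbox{H}\beta)=100$ as in Table~\ref{lines} (an illustrative sketch only, not the calibration itself):

```python
import math

def r23(oii_3727, oiii_4959_5007, hbeta=100.0):
    """R23 = (I([O II] 3727) + I([O III] 4959,5007)) / I(Hbeta)."""
    return (oii_3727 + oiii_4959_5007) / hbeta

def excitation_p(oii_3727, oiii_4959_5007):
    """Hardness parameter P = [O III] / ([O III] + [O II]) of Pilyugin (2001)."""
    return oiii_4959_5007 / (oiii_4959_5007 + oii_3727)

def upper_branch(nii_6584, oii_3727):
    """High-metallicity branch if log([N II] 6584 / [O II] 3727) >= -1.2."""
    return math.log10(nii_6584 / oii_3727) >= -1.2
```

For region~1 of Table~\ref{lines}, for example, $R_{23}=(306+39.5+118.1)/100\simeq4.6$ and $P\simeq0.34$.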
\subsubsection{The C method} The counterpart method, or C method, of \citet{Pil:12a} is based on the assumption that \ion{H}{ii} regions that have similar intensities in their strong emission lines have similar physical properties and chemical abundances. The method uses a data base of 414 reference \ion{H}{ii} regions that are considered to have good estimates of the electron temperature, and looks for objects whose values of several line ratios, involving the lines [\ion{O}{ii}]~$\lambda3727$, [\ion{O}{iii}]~$\lambda5007$, [\ion{N}{ii}]~$\lambda6584$, [\ion{S}{ii}]~$\lambda6717+31$, and H$\beta$, are similar to those observed in the \ion{H}{ii} region under study. The method then derives a relation between the oxygen or nitrogen abundance and these line intensity ratios for the selected objects, and applies it to obtain the abundance of the observed \ion{H}{ii} region. \citet{Pil:12a} estimate that if the errors in the line intensity ratios are below 10 per cent, the method leads to abundance uncertainties of less than 0.1 dex in the oxygen abundance, and 0.15 dex in the nitrogen abundance. \subsubsection{The O3N2 and N2 methods} The O3N2 and N2 methods were proposed by \citet{All:79a} and \citet{Sto:94}, respectively. They use the line ratios: \begin{equation} \mbox{O3N2}=\log\left(\frac{I([\mbox{\ion{O}{iii}}]~\lambda5007)/I(\mbox{H}\beta)} {I([\mbox{\ion{N}{ii}}]~\lambda6584)/I(\mbox{H}\alpha)}\right) \end{equation} and \begin{equation} \mbox{N2}=\log(I([\mbox{\ion{N}{ii}}]~\lambda6584)/I(\mbox{H}\alpha)). \end{equation} These methods are not sensitive to the extinction correction or flux calibration and have been widely used. However, the O3N2 method cannot be used at low metallicities, the N2 method can be affected by shocks or the presence of an AGN in nuclear \ion{H}{ii} regions \citep{Kew:02a}, and both methods are very sensitive to the degree of ionization of the observed region and to its value of N/O.
This might explain the large dispersions usually found in their calibration, although this could also be due to the selection of the calibration sample. We will use the calibrations of \citet{Marino:13a} for these two methods, which are based on \ion{H}{ii} regions with temperature measurements. The root mean square differences between the oxygen abundances derived with these methods and those derived with the direct method for the objects used by \citet{Marino:13a} are 0.16 dex (N2 method) and 0.18 dex (O3N2 method). \section{Results}\label{Ograd} \subsection{Oxygen abundances and the oxygen abundance gradient}\label{Oab} Table~\ref{Oxygen-abundances} shows the oxygen abundances derived for the 48 regions in our sample using the methods described above. The uncertainties provided for the results of the direct method are those arising from the estimated errors in the line intensities. For the results of the P and ONS methods, we have added quadratically the estimated uncertainties of the methods, 0.1 dex, to the uncertainties in the measured line ratios. In the case of the ONS method, the derived uncertainties are in the range 0.10--0.12 dex in all cases, and we decided to adopt an uncertainty of 0.12 dex for this method. For the C method we adopt an uncertainty of 0.10 dex, the value estimated by \citet{Pil:12a} for the case when the line ratios involved in the calculations have uncertainties below 10 per cent. Some of the regions have line ratios with larger uncertainties, up to 40 per cent, but our results below are consistent with uncertainties of around 0.10 dex or less for the oxygen abundances derived with this method in most of the \ion{H}{ii} regions. We assigned uncertainties of 0.16 and 0.18 dex for the N2 and O3N2 methods, respectively, the ones found in the calibration of these methods, since the errors in the line intensities do not add significantly to this result.
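For reference, the O3N2 and N2 indices defined in the previous section and the linear \citet{Marino:13a} calibrations can be sketched as follows; the calibration coefficients are quoted from that paper and should be checked against the original before use:

```python
import math

def o3n2(oiii_5007, hbeta, nii_6584, halpha):
    """O3N2 = log(([O III]5007 / Hbeta) / ([N II]6584 / Halpha))."""
    return math.log10((oiii_5007 / hbeta) / (nii_6584 / halpha))

def n2(nii_6584, halpha):
    """N2 = log([N II]6584 / Halpha)."""
    return math.log10(nii_6584 / halpha)

def oh_from_o3n2(index):
    """Marino et al. (2013): 12 + log(O/H) = 8.533 - 0.214 * O3N2."""
    return 8.533 - 0.214 * index

def oh_from_n2(index):
    """Marino et al. (2013): 12 + log(O/H) = 8.743 + 0.462 * N2."""
    return 8.743 + 0.462 * index
```

Both indices use ratios of nearby lines, which is why they are insensitive to the reddening correction and flux calibration.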
\begin{table*} \begin{minipage}{170mm} \caption{Coordinates, sizes, galactocentric distances, physical conditions and oxygen abundances for the 48 \ion{H}{ii} regions in our sample. The oxygen abundances have been derived with the direct method ($T_{\rm e}$) and five strong-line methods (P, ONS, C, O3N2, and N2).} \begin{tabular}{lccccccccccccc} \hline \multicolumn{1}{l}{ID} & \multicolumn{1}{c}{Box} & \multicolumn{1}{c}{RA} & \multicolumn{1}{c}{Dec.} & \multicolumn{1}{c}{Size} & \multicolumn{1}{c}{$R$} & \multicolumn{1}{c}{$n_{\rm e}$} & \multicolumn{1}{c}{$T_{\rm e}$([\ion{N}{ii}])} & \multicolumn{6}{c}{$12+\log(\mbox{O}/\mbox{H})$} \\ & & (J2000)& (J2000) & (arcsec) & (kpc) & (cm$^{-3}$) & (K) & ($T_{\rm e}$) & (P) & (ONS) & (C) & (O3N2) & (N2) \\ \hline 1 & P1-1 & 09:54:43 & +69:03:39 & 5.1 & 8.9 & 115$\pm$6 & 10100$^{+500}_{-400}$ & $8.13^{+0.06}_{-0.07}$ & 8.33 & 8.45 & 8.43 & 8.42 & 8.53 \\ 2 & & 09:54:43 & +69:03:33 & 2.1 & 8.9 & $-$ & & $8.09^{+0.08}_{-0.09}$ & 8.22 & 8.45 & 8.52 & 8.45 & 8.54 \\ 3 & & 09:54:43 & +69:03:31 & 2.5 & 8.9 & 27$\pm$21 & $-$ & $-$ & 8.42 & 8.48 & 8.49 & 8.39 & 8.59 \\ 4 & & 09:54:43 & +69:03:24 & 2.0 & 9.0 & $-$ & $-$ & $-$ & 8.08 & 8.40 & 8.44 & 8.43 & 8.52 \\ 5 & P1-2 & 09:54:41 & +69:04:23 & 4.6 & 8.7 & $-$ & 10900$^{+1700}_{-1000}$ & 7.96$^{+0.16}_{-0.20}$ & 8.22 & 8.46 & 8.55 & 8.49 & 8.56 \\ 6 & & 09:54:42 & +69:04:08 & 1.5 & 8.8 & 314: & $-$ & $-$ & 8.53 & 8.52 & 8.47 & 8.41 & 8.48 \\ 7 & & 09:54:42 & +69:04:06 & 2.6 & 8.8 & $-$ & $-$ & $-$ & 8.30 & 8.46 & 8.49 & 8.43 & 8.50 \\ 8 & P1-3 & 09:54:39 & +69:05:01 & 2.9 & 8.7 & 23: & 9400$^{+800}_{-600}$ & 8.17$^{+0.11}_{-0.12}$ & 8.43 & 8.49 & 8.46 & 8.45 & 8.53 \\ 9 & & 09:54:40 & +69:04:58 & 2.0 & 8.7 & $-$ & $-$ & $-$ & 8.27 & 8.57 & 8.57 & 8.58 & 8.54 \\ 10 & & 09:54:40 & +69:04:49 & 6.2 & 8.7 & 84$\pm$22 & 10400$^{+500}_{-400}$ & 8.13$\pm$0.07 & 8.33 & 8.44 & 8.40 & 8.40 & 8.55 \\ 11 & & 09:54:40 & +69:04:40 & 5.1 & 8.7 & 19: & $-$ & $-$ & 8.29 & 8.50 & 8.50 & 8.52 & 
8.57 \\ 12 & & 09:54:40 & +69:05:06 & 3.7 & 8.7 & $-$ & 10600$^{+1800}_{-1000}$ & 8.08$^{+0.16}_{-0.20}$ & 8.21 & 8.43 & 8.51 & 8.43 & 8.53 \\ 13 & & 09:54:40 & +69:05:10 & 1.0 & 8.7 & 2: & $-$ & $-$ & 8.24 & 8.59 & 8.56 & 8.58 & 8.54 \\ 14 & & 09:54:40 & +69:05:13 & 2.3 & 8.7 & 50: & $-$ & $-$ & 8.06 & 8.51 & 8.53 & 8.58 & 8.59 \\ 15 & & 09:54:40 & +69:05:24 & 5.4 & 8.7 & 3: & $-$ & $-$ & 8.28 & 8.58 & 8.56 & 8.57 & 8.52 \\ 16 & P1-4 & 09:54:38 & +69:06:38 & 4.9 & 8.5 & $-$ & $-$ & $-$ & 8.44 & 8.53 & 8.52 & 8.53 & 8.56 \\ 17 & P2-1 & 09:54:47 & +69:04:25 & 3.1 & 7.7 & $-$ & 13300$^{+1800}_{-1200}$ & 7.70$^{+0.11}_{-0.12}$ & 8.51 & 8.50 & 8.34 & 8.27 & 8.39 \\ 18 & P2-2 & 09:54:50 & +69:06:56 & 5.8 & 6.6 & 117: & $-$ & $-$ & 8.27 & 8.44 & 8.50 & 8.42 & 8.52 \\ 19 & P2-3 & 09:54:54 & +69:10:23 & 1.7 & 7.9 & $-$ & $-$ & $-$ & 8.60 & 8.37 & 8.60 & 8.56 & 8.31 \\ 20 & & 09:54:54 & +69:10:21 & 1.4 & 7.8 & 2: & $-$ & $-$ & 8.34 & 8.52 & 8.52 & 8.51 & 8.52 \\ 21 & & 09:54:54 & +69:10:19 & 5.1 & 7.8 & $-$ & $-$ & $-$ & 8.51 & 8.40 & 8.52 & 8.49 & 8.41 \\ 22 & & 09:54:54 & +69:10:17 & 1.8 & 7.8 & $-$ & $-$ & $-$ & 8.38 & 8.51 & 8.46 & 8.50 & 8.54 \\ 23 & & 09:54:54 & +69:10:23 & 6.0 & 7.9 & 14: & $-$ & $-$ & 8.24 & 8.42 & 8.37 & 8.33 & 8.57 \\ 24 & P3-1 & 09:55:44 & +69:07:19 & 1.4 & 5.4 & 18$\pm$8 & 10500$^{+900}_{-700}$ & 7.95$^{+0.11}_{-0.13}$ & 8.15 & 8.54 & 8.54 & 8.56 & 8.54 \\ 25 & & 09:55:45 & +69:07:18 & 1.6 & 5.5 & 34: & $-$ & $-$ & 8.10 & 8.50 & 8.53 & 8.53 & 8.55 \\ 26 & & 09:55:45 & +69:07:18 & 2.3 & 5.5 & 2: & $-$ & $-$ & 8.28 & 8.70 & 8.58 & 8.65 & 8.49 \\ 27 & P3-2 & 09:55:36 & +69:07:48 & 1.6 & 5.1 & 36: & $-$ & $-$ & 8.60 & 8.55 & 8.47 & 8.47 & 8.57 \\ 28 & & 09:55:36 & +69:07:47 & 2.6 & 5.1 & $-$ & $-$ & $-$ & 8.61 & 8.56 & 8.47 & 8.45 & 8.51 \\ 29 & & 09:55:35 & +69:07:50 & 1.0 & 5.1 & $-$ & $-$ & $-$ & 8.73 & 8.63 & 8.49 & 8.49 & 8.53 \\ 30 & P3-3 & 09:55:21 & +69:08:40 & 3.9 & 5.4 & $-$ & $-$ & $-$ & 8.35 & 8.50 & 8.55 & 8.50 & 8.53 \\ 31 & & 09:55:20 
& +69:08:44 & 5.3 & 5.4 & $-$ & $-$ & $-$ & 8.28 & 8.62 & 8.58 & 8.61 & 8.53 \\ 32 & & 09:55:18 & +69:08:48 & 4.4 & 5.5 & $-$ & $-$ & $-$ & 8.26 & 8.44 & 8.33 & 8.31 & 8.55 \\ 33 & & 09:55:17 & +69:08:51 & 1.5 & 5.5 & $-$ & 9000$^{+900}_{-600}$ & 8.13$^{+0.14}_{-0.17}$ & 8.35 & 8.54 & 8.56 & 8.55 & 8.55 \\ 34 & & 09:55:17 & +69:08:52 & 1.9 & 5.6 & 16: & $-$ & $-$ & 8.52 & 8.53 & 8.50 & 8.47 & 8.52 \\ 35 & & 09:55:17 & +69:08:55 & 4.1 & 5.6 & 16: & 8200$^{+700}_{-600}$ & 8.46$^{+0.15}_{-0.16}$ & 8.57 & 8.53 & 8.49 & 8.39 & 8.49 \\ 36 & & 09:55:16 & +69:08:59 & 2.0 & 5.6 & $-$ & 8400$^{+1000}_{-600}$ & 8.39$^{+0.15}_{-0.17}$ & 8.52 & 8.51 & 8.49 & 8.42 & 8.51 \\ 37 & & 09:55:15 & +69:09:01 & 5.2 & 5.7 & $-$ & $-$ & $-$ & 8.32 & 8.64 & 8.59 & 8.61 & 8.53 \\ 38 & P4-1 & 09:55:25 & +69:08:19 & 7.2 & 5.1 & 13: & $-$ & $-$ & 8.35 & 8.54 & 8.55 & 8.55 & 8.56 \\ 39 & & 09:55:26 & +69:08:17 & 2.5 & 5.1 & 18$\pm$8 & $-$ & $-$ & 8.47 & 8.56 & 8.56 & 8.54 & 8.55 \\ 40 & P4-2 & 09:55:19 & +69:08:29 & 2.5 & 5.1 & 26$\pm$6 & 10000$^{+400}_{-300}$ & 8.11$\pm0.05$ & 8.53 & 8.52 & 8.48 & 8.34 & 8.46 \\ 41 & & 09:55:17 & +69:08:31 & 3.0 & 5.2 & $-$ & $-$ & $-$ & 8.47 & 8.48 & 8.44 & 8.41 & 8.54 \\ 42 & & 09:55:14 & +69:08:34 & 2.5 & 5.2 & 6: & $-$ & $-$ & 8.45 & 8.57 & 8.56 & 8.55 & 8.55 \\ 43 & & 09:55:19 & +69:08:29 & 1.5 & 5.1 & $-$ & $-$ & $-$ & 8.60 & 8.62 & 8.57 & 8.49 & 8.40 \\ 44 & & 09:55:22 & +69:08:25 & 2.7 & 5.1 & $-$ & $-$ & $-$ & 8.16 & 8.39 & 8.37 & 8.47 & 8.69 \\ 45 & P5-1 & 09:56:05 & +69:03:44 & 3.2 & 5.4 & 3: & $-$ & $-$ & 8.52 & 8.58 & 8.55 & 8.51 & 8.50 \\ 46 & & 09:56:05 & +69:03:45 & 1.0 & 5.4 & $-$ & $-$ & $-$ & 8.31 & 8.58 & 8.57 & 8.58 & 8.55 \\ 47 & P5-2 & 09:56:01 & +69:04:00 & 2.6 & 4.8 & 16: & $-$ & $-$ & 8.56 & 8.53 & 8.53 & 8.44 & 8.51 \\ 48 & & 09:55:60 & +69:04:03 & 2.0 & 4.8 & $-$ & $-$ & $-$ & 8.39 & 8.56 & 8.57 & 8.57 & 8.57 \\ \hline \end{tabular} \label{Oxygen-abundances} \end{minipage} \end{table*} We checked for the effect of the correction for 
stellar absorption on the oxygen abundances derived for our observed \ion{H}{ii} regions. The values of $12+\log(\mbox{O}/\mbox{H})$ change by $0\mbox{--}0.04$ dex in most of our regions for the direct, ONS, C, O3N2, and N2 methods. The exceptions are region 24, where the results of the direct method increase by 0.08 dex with the correction, and region 6, the one with the largest correction, where the oxygen abundance derived with the ONS method increases by 0.13 dex. The results of the P method are more sensitive to this correction, with six regions showing increments larger than 0.10 dex: regions 7, 21, and 44, where the oxygen abundance increases by $\sim0.15$ dex, and regions 6, 14, and 26, with increments of 0.49, 0.28, and 0.24 dex, respectively. We have calculated the galactocentric distances of the observed \ion{H}{ii} regions assuming a planar geometry for M81, with a position angle of the major axis of $157\degr$, a disc inclination of $59\degr$ \citep{KO:00a}, and a distance of $3.63\pm0.34$ Mpc \citep{Free:01a}. Our 48 \ion{H}{ii} regions cover a range of galactocentric distances of 4.8--9.0 kpc. In order to increase this range, we selected from the literature other observations of \ion{H}{ii} regions in M81. This also allows us to look for observational effects on the derived abundances. The final sample is composed of 116 \ion{H}{ii} regions spanning a range of galactocentric distances of 3--33 kpc, where 48 \ion{H}{ii} regions are from this work and the remaining 68 from the works of \citet{Gar:87a}, \citet*{Bre:99a}, \citet{Stan:10a}, and \citet{Patt:12a}. We applied the same procedures explained above to derive physical conditions and oxygen abundances for the \ion{H}{ii} regions from the literature, using the line intensities reported in the original papers. We also recalculated the galactocentric distances of these \ion{H}{ii} regions using the same parameters stated above for M81. The results are presented in Tables~\ref{ON1} and \ref{ON2}.
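The deprojection described above can be sketched with a small helper function. This is a minimal sketch assuming a planar disc and small angular offsets from the galaxy centre (the centre coordinates are left as inputs); it is not the code used in this work:

```python
import math

def galactocentric_distance_kpc(ra_deg, dec_deg, ra0_deg, dec0_deg,
                                pa_deg=157.0, incl_deg=59.0, dist_mpc=3.63):
    """Deprojected galactocentric distance for a point in a planar disc.

    (ra0_deg, dec0_deg) is the galaxy centre; pa_deg is the position angle
    of the major axis, incl_deg the disc inclination, dist_mpc the distance.
    Uses a small-angle (flat-sky) approximation for the offsets.
    """
    # sky-plane offsets in degrees (RA offset scaled by cos(dec))
    dx = (ra_deg - ra0_deg) * math.cos(math.radians(dec0_deg))
    dy = dec_deg - dec0_deg
    pa = math.radians(pa_deg)
    # rotate into the galaxy frame: x along the major axis, y along the minor axis
    x = dx * math.sin(pa) + dy * math.cos(pa)
    y = -dx * math.cos(pa) + dy * math.sin(pa)
    # deproject the minor-axis offset
    y /= math.cos(math.radians(incl_deg))
    r_deg = math.hypot(x, y)
    # convert the angular distance to kpc
    return math.radians(r_deg) * dist_mpc * 1000.0
```

Only offsets along the minor axis are stretched by $1/\cos i$; a point on the major axis keeps its sky-plane separation.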
\begin{table*} \begin{minipage}{180mm} \caption{Oxygen and nitrogen abundances for the regions observed by \citet{Patt:12a} and \citet{Stan:10a}.} \begin{tabular}{lccccccccccc} \hline \multicolumn{1}{l}{ID} & \multicolumn{1}{c}{$R$} & \multicolumn{1}{c}{$T_{\rm e}$([\ion{N}{ii}])/$T_{\rm e}$([\ion{O}{iii}])} & \multicolumn{6}{c}{$12+\log(\mbox{O}/\mbox{H})$} & \multicolumn{3}{c}{$12+\log(\mbox{N}/\mbox{H})$}\\ & (kpc) & (K) & ($T_{\rm e}$) & (P) & (ONS) & (C) & (O3N2) & (N2) & ($T_{\rm e}$) & (ONS) & (C) \\ \hline \multicolumn{12}{c}{\citet{Patt:12a}}\\ 02 & 22.6 & $-$ & $-$ & 8.08 & 8.43 & 8.53 & 8.42 & 8.46 & $-$ & 7.33 & 7.40 \\ 03 & 22.2 & $-$ & $-$ & 8.25 & 8.43 & 8.40 & 8.37 & 8.46 & $-$ & 7.41 & 7.40 \\ 07 & 22.8 & $-$ & $-$ & 7.78 & 8.51 & 8.52 & 8.54 & 8.48 & $-$ & 7.32 & 7.24 \\ 14 & 14.6 & $-$ & $-$ & 8.41 & 8.57 & 8.44 & 8.29 & 8.39 & $-$ & 7.54 & 7.45 \\ 17 & 21.6 & $-$ & $-$ & 8.29 & 8.39 & 8.37 & 8.26 & 8.27 & $-$ & 7.07 & 7.03 \\ 21 & 15.9 & $14100\pm3800/11200^{+1000}_{-700}$ & $8.16^{+0.13}_{-0.10}$ & 8.27 & 8.48 & 8.33 & 8.20 & 8.34 & $7.41^{+0.17}_{-0.24}$ & 7.47 & 7.39 \\ 24 & 16.1 & $-$ & $-$ & 8.28 & 8.56 & 8.43 & 8.32 & 8.42 & $-$ & 7.45 & 7.36 \\ 25 & 15.0 & $-$ & $-$ & 8.07 & 8.49 & 8.45 & 8.50 & 8.52 & $-$ & 7.40 & 7.41 \\ 26 & 31.1 & $-$ & $-$ & 8.33 & 8.35 & 8.29 & 8.23 & 8.28 & $-$ & 7.11 & 7.05 \\ 28 & 31.4 & $-/12700^{+ 900}_{-700}$ & $8.19^{+0.06}_{-0.07}$ & 8.15 & 8.41 & 8.24 & 8.12 & 8.26 & $7.23^{+0.07}_{-0.08}$ & 7.33 & 7.24 \\ 29 & 29.2 & $-$ & $-$ & 7.99 & 8.24 & 8.47 & 8.50 & 8.36 & $-$ & 6.93 & 7.08 \\ 33 & 32.7 & $-$ & $-$ & 8.23 & 8.41 & 8.21 & 8.24 & 8.27 & $-$ & 7.21 & 6.87 \\ 35 & 24.1 & $-$ & $-$ & 7.62 & 8.33 & 8.28 & 8.38 & 8.51 & $-$ & 7.19 & 7.17 \\ 37 & 21.9 & $-$ & $-$ & 8.01 & 8.40 & 8.43 & 8.37 & 8.44 & $-$ & 7.26 & 7.27 \\ disc1 & 6.4 & $7500^{+900 }_{-500}/7300^{+ 700}_{-400}$ & $8.74^{+0.16}_{-0.20}$ & 8.21 & 8.45 & 8.49 & 8.47 & 8.56 & $7.82^{+0.18}_{-0.22}$ & 7.61 & 7.66 \\ disc2 & 11.5 & $-$ & $-$ & 8.16 
& 8.46 & 8.47 & 8.48 & 8.53 & $-$ & 7.55 & 7.55 \\ disc3 & 10.2 & $8500^{+2000}_{-900}/9500^{+1000}_{-600}$ & $8.55^{+0.20}_{-0.25}$ & 8.24 & 8.42 & 8.43 & 8.32 & 8.48 & $7.56^{+0.24}_{-0.30}$ & 7.49 & 7.51 \\ disc4 & 7.9 & $7800^{+900 }_{-600}/-$ & $8.67^{+0.16}_{-0.19}$ & 8.29 & 8.46 & 8.53 & 8.46 & 8.53 & $7.78^{+0.18}_{-0.21}$ & 7.62 & 7.68 \\ disc5 & 5.7 & $7900^{+1000}_{-600}/-$ & $8.60^{+0.17}_{-0.19}$ & 8.42 & 8.48 & 8.47 & 8.44 & 8.53 & $7.80^{+0.19}_{-0.22}$ & 7.71 & 7.72 \\ disc6 & 5.0 & $-$ & $-$ & 8.39 & 8.48 & 8.50 & 8.46 & 8.53 & $-$ & 7.70 & 7.73 \\ disc7 & 2.9 & $-$ & $-$ & 8.27 & 8.55 & 8.57 & 8.59 & 8.61 & $-$ & 7.86 & 7.88 \\ \multicolumn{12}{c}{\citet{Stan:10a}}\\ HII4 & 9.3 & $10800^{+9200 }_{-2300}/-$ & $8.12^{+0.40}_{-0.41}$ & 8.31 & 8.44 & 8.46 & 8.33 & 8.47 & $7.32^{+0.45}_{-0.52}$ & 7.50 & 7.53 \\ HII5 & 8.9 & $11100\pm300/-$ & $8.06\pm0.04$ & 7.95 & 8.44 & 8.52 & 8.49 & 8.56 & $7.29\pm0.04$ & 7.45 & 7.52 \\ HII21 & 8.7 & $-$ & $-$ & 7.63 & 8.36 & 8.48 & 8.46 & 8.60 & $-$ & 7.37 & 7.45 \\ HII31 & 8.8 & $8400^{+10300}_{-1400}/-$ & $8.59^{+0.40}_{-0.81}$ & 7.93 & 8.46 & 8.53 & 8.52 & 8.56 & $7.63^{+0.54}_{-1.02}$ & 7.46 & 7.51 \\ HII42 & 9.0 & $-$ & $-$ & 7.88 & 8.41 & 8.45 & 8.47 & 8.57 & $-$ & 7.41 & 7.46 \\ HII72 & 6.9 & $10300^{+3400}_{-1400}/-$ & $8.12^{+0.21}_{-0.26}$ & 8.32 & 8.45 & 8.47 & 8.34 & 8.43 & $7.21^{+0.25}_{-0.30}$ & 7.42 & 7.43 \\ HII78 & 9.0 & $-$ & $-$ & 7.82 & 8.52 & 8.53 & 8.61 & 8.65 & $-$ & 7.58 & 7.62 \\ HII79 & 8.3 & $9200^{+3200}_{-1200}/-$ & $8.25^{+0.27}_{-0.42}$ & 8.30 & 8.45 & 8.50 & 8.38 & 8.47 & $7.32^{+0.31}_{-0.48}$ & 7.47 & 7.51 \\ HII81 & 7.2 & $9100^{+10900}_{-1600}/-$ & $8.39^{+0.41}_{-0.86}$ & 8.07 & 8.41 & 8.48 & 8.42 & 8.52 & $7.42^{+0.52}_{-1.11}$ & 7.42 & 7.47 \\ HII123 & 7.9 & $8900^{+1000}_{-700}/-$ & $8.46^{+0.14}_{-0.17}$ & 8.14 & 8.43 & 8.54 & 8.42 & 8.48 & $7.45^{+0.16}_{-0.19}$ & 7.40 & 7.47 \\ HII133 & 6.9 & $11800^{+400}_{-300}/-$ & $7.95\pm0.04$ & 8.22 & 8.42 & 8.47 & 8.38 & 8.50 & 
$7.21\pm0.04$ & 7.47 & 7.52 \\ HII201 & 6.9 & $-/13300^{+2400}_{-1300}$ & $7.83^{+0.10}_{-0.13}$ & 8.48 & 8.51 & 8.36 & 8.28 & 8.40 & $7.18^{+0.12}_{-0.14}$ & 7.60 & 7.50 \\ HII213 & 9.7 & $-$ & $-$ & 7.76 & 8.39 & 8.50 & 8.46 & 8.56 & $-$ & 7.35 & 7.42 \\ HII228 & 10.1 & $9300\pm300/-$ & $8.36\pm0.06$ & 8.01 & 8.46 & 8.50 & 8.49 & 8.53 & $7.44^{+0.06}_{-0.07}$ & 7.44 & 7.47 \\ HII233 & 5.9 & $-$ & $-$ & 8.02 & 8.48 & 8.53 & 8.53 & 8.58 & $-$ & 7.55 & 7.61 \\ HII249 & 10.6 & $-$ & $-$ & 7.49 & 8.34 & 8.48 & 8.44 & 8.57 & $-$ & 7.27 & 7.34 \\ HII262 & 9.9 & $11500^{+1100}_{-800}/-$ & $8.19^{+0.11}_{-0.13}$ & 7.69 & 8.36 & 8.45 & 8.44 & 8.57 & $7.31^{+0.12}_{-0.14}$ & 7.32 & 7.40 \\ HII282 & 5.1 & $-$ & $-$ & 8.09 & 8.48 & 8.52 & 8.52 & 8.55 & $-$ & 7.54 & 7.57 \\ HII325 & 9.5 & $10800\pm2500/-$ & $8.14^{+0.46}_{-0.23}$ & 8.15 & 8.41 & 8.45 & 8.37 & 8.47 & $7.22^{+0.47}_{-0.36}$ & 7.36 & 7.40 \\ HII352 & 10.7 & $-$ & $-$ & 7.24 & 8.26 & 8.37 & 8.38 & 8.61 & $-$ & 7.25 & 7.32 \\ HII384 & 7.0 & $-$ & $-$ & 8.14 & 8.49 & 8.51 & 8.52 & 8.54 & $-$ & 7.56 & 7.59 \\ HII403 & 9.9 & $9400^{+2400}_{-1100}/-$ & $8.57^{+0.23}_{-0.31}$ & 7.74 & 8.35 & 8.44 & 8.41 & 8.54 & $7.49^{+0.26}_{-0.35}$ & 7.27 & 7.34 \\ \hline \end{tabular} \label{ON1} \end{minipage} \end{table*} \begin{table*} \caption{Oxygen and nitrogen abundances for the regions observed by \citet{Bre:99a} and \citet{Gar:87a}.} \begin{tabular}{lcccccccc} \hline \multicolumn{1}{l}{ID} & \multicolumn{1}{c}{$R$} & \multicolumn{5}{c}{$12+\log(\mbox{O}/\mbox{H})$} & \multicolumn{2}{c}{$12+\log(\mbox{N}/\mbox{H})$} \\ & (kpc) & (P) & (ONS) & (C) & (O3N2) & (N2) & (ONS) & (C) \\ \hline \multicolumn{9}{c}{\citet{Bre:99a}}\\ GS1 & 5.5 & 8.35 & 8.48 & 8.53 & 8.45 & 8.50 & 7.60 & 7.64 \\ GS2 & 4.8 & 8.35 & 8.47 & 8.51 & 8.43 & 8.50 & 7.57 & 7.59 \\ GS4 & 8.6 & 8.24 & 8.43 & 8.49 & 8.34 & 8.46 & 7.41 & 7.45 \\ GS7 & 9.0 & 8.18 & 8.44 & 8.49 & 8.46 & 8.54 & 7.52 & 7.56 \\ GS9 & 6.5 & 8.40 & 8.49 & 8.50 & 8.49 & 8.57 & 7.77 & 7.79 
\\ GS11 & 5.6 & 8.51 & 8.50 & 8.49 & 8.41 & 8.52 & 7.75 & 7.75 \\ GS12 & 5.0 & 8.14 & 8.47 & 8.54 & 8.49 & 8.52 & 7.50 & 7.54 \\ GS13 & 4.8 & 8.52 & 8.57 & 8.58 & 8.54 & 8.54 & 7.92 & 7.93 \\ M\"unch1 & 16.0 & 8.13 & 8.47 & 8.29 & 8.16 & 8.33 & 7.43 & 7.34 \\ M\"unch18 & 10.1 & 8.48 & 8.56 & 8.37 & 8.28 & 8.42 & 7.73 & 7.62 \\ \multicolumn{9}{c}{\citet{Gar:87a}}\\ HK105 & 9.2 & 7.99 & 8.48 & 8.44 & 8.50 & 8.50 & 7.39 & 7.36 \\ HK152 & 5.6 & 8.41 & 8.51 & 8.49 & 8.46 & 8.48 & 7.63 & 7.62 \\ HK230 & 4.8 & 8.58 & 8.57 & 8.47 & 8.51 & 8.55 & 7.98 & 7.94 \\ HK268 & 5.5 & 8.48 & 8.51 & 8.53 & 8.45 & 8.51 & 7.72 & 7.75 \\ HK305-12 & 5.1 & 8.48 & 8.50 & 8.49 & 8.41 & 8.51 & 7.73 & 7.73 \\ HK343-50 & 4.8 & 8.40 & 8.48 & 8.47 & 8.43 & 8.50 & 7.63 & 7.63 \\ HK453 & 5.0 & 8.21 & 8.48 & 8.47 & 8.49 & 8.52 & 7.54 & 7.54 \\ HK472 & 4.0 & 8.39 & 8.53 & 8.47 & 8.56 & 8.60 & 7.94 & 7.88 \\ HK500 & 5.6 & 8.57 & 8.54 & 8.39 & 8.35 & 8.49 & 7.82 & 7.78 \\ HK652 & 6.5 & 8.47 & 8.51 & 8.45 & 8.48 & 8.56 & 7.82 & 7.81 \\ HK666 & 7.0 & 8.37 & 8.46 & 8.42 & 8.37 & 8.50 & 7.62 & 7.59 \\ HK712 & 7.0 & 8.54 & 8.50 & 8.38 & 8.31 & 8.42 & 7.64 & 7.57 \\ HK741 & 9.0 & 8.29 & 8.47 & 8.49 & 8.45 & 8.53 & 7.54 & 7.56 \\ HK767 & 8.6 & 8.30 & 8.44 & 8.49 & 8.35 & 8.48 & 7.47 & 7.54 \\ M\"unch18 & 10.1 & 8.47 & 8.51 & 8.36 & 8.31 & 8.50 & 7.85 & 7.78 \\ \hline \end{tabular} \label{ON2} \end{table*} The direct method could be applied to 31 \ion{H}{ii} regions of the final sample where the electron temperature can be estimated ($T_{\rm e}$([\ion{N}{ii}]), $T_{\rm e}$([\ion{O}{iii}]), or both): 12 from this work, 13 from \citet{Stan:10a} and six from \citet{Patt:12a}. The strong-line methods were applied to all the regions in the final sample. Fig.~\ref{gradients} shows the oxygen abundances obtained with the different methods we are using as a function of galactocentric distance for the \ion{H}{ii} regions in our final sample. 
Panel (a) shows the results for the 31 \ion{H}{ii} regions with some temperature estimate that allows us to use the direct method; panels (b) to (f) show the results obtained with the strong-line methods for the 116 \ion{H}{ii} regions of the whole sample. In panel (b) we plot with open symbols the results for the 14 regions that are classified as belonging to the upper branch of the metallicity relation, but whose values of $12+\log(\mbox{O}/\mbox{H})$, derived with this relation, fall below 8.0, the region of the lower branch. \begin{figure*} \includegraphics[width=0.85\textwidth, trim=20 0 15 0, clip=yes]{fig3.eps} \caption{Oxygen abundances in \ion{H}{ii} regions of M81 as a function of their galactocentric distances and the abundance gradients resulting from our fits. Panels (a) to (f) show the results of the direct method and the methods P, ONS, C, O3N2, and N2. The different symbols indicate the references for the observational data we used, and are identified in panel (f). Panels (c) to (f) show in the lower right corner the typical uncertainty in the oxygen abundances derived with the corresponding method. In panel (b) we also plot with a discontinuous line the gradient fitted when the regions where the P method is not working (plotted as empty symbols; see text) are included in the fit. Note that all the panels are at the same scale. } \label{gradients} \end{figure*} We fitted straight lines with the least-squares method to the data in Fig.~\ref{gradients} in order to derive the abundance gradient implied by each of the methods used for the abundance determination. Weighted least-squares fits produce similar values for the parameters, but we present the non-weighted results because some of the data seem to be affected by systematic errors, and we do not think that a robust estimation is required for our purposes. 
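As a minimal sketch of this fitting step (using invented numbers rather than our measurements), the unweighted straight-line fit and the dispersion of the points about it can be computed as:

```python
import numpy as np

# Invented galactocentric distances (kpc) and 12+log(O/H) values for
# illustration only; these are not the measurements used in this paper.
R = np.array([3.0, 6.5, 9.0, 12.0, 16.0, 22.0, 31.0])
oh = np.array([8.55, 8.50, 8.49, 8.46, 8.42, 8.38, 8.33])

# Unweighted least-squares straight line: 12+log(O/H) = intercept + slope*R
slope, intercept = np.polyfit(R, oh, 1)

# Standard deviation of the points about the fitted gradient
# (ddof=2 accounts for the two fitted parameters)
sigma = (oh - (intercept + slope * R)).std(ddof=2)

print(f"intercept = {intercept:.2f}, "
      f"slope = {slope:.4f} dex/kpc, sigma = {sigma:.2f}")
```

The dispersion computed this way corresponds to the standard deviation that we report for each fit.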
The fits are plotted in Fig.~\ref{gradients}, and the parameters of the fitted gradients are listed in Table~\ref{Results}, where we list for each method the number of regions used ($N$), the intercept and the slope of the fit, and the standard deviation of the points from this fit. In the case of the P method, we excluded from the fit the regions where this method does not seem to be working properly (see above). The discontinuous line in panel (b) shows the results when these regions are included. The intercept and slope for this fit are $8.48\pm0.03$ and $-0.018\pm0.004$, respectively, with a dispersion of 0.24 dex. \begin{table} \caption{Oxygen abundance gradients and dispersions.} \begin{tabular}{lcccc} \hline \multicolumn{1}{l}{Method} & \multicolumn{1}{c}{$N$} & \multicolumn{1}{c}{12+log(O/H)$_0$} & \multicolumn{1}{c}{$\frac{\Delta(\log({\rmn O}/{\rmn H}))}{\Delta(R)}$} & \multicolumn{1}{c}{$\sigma$} \\ \multicolumn{3}{l}{} & \multicolumn{1}{c}{(dex kpc$^{-1}$)} & \multicolumn{1}{l}{} \\ \hline $T_{\rm e}$ &\ 31 & $8.26\pm0.10$ &$-0.002\pm0.010$ & 0.25 \\ P & 102 & $8.41\pm0.03$ &$-0.010\pm0.003$ & 0.15 \\ ONS & 116 & $8.53\pm0.01$ &$-0.006\pm0.001$ & 0.07 \\ C & 116 & $8.54\pm0.01$ &$-0.007\pm0.001$ & 0.06 \\ O3N2 & 116 & $8.52\pm0.02$ &$-0.008\pm0.001$ & 0.09 \\ N2 & 116 & $8.58\pm0.01$ &$-0.008\pm0.001$ & 0.06 \\ \hline \end{tabular} \label{Results} \end{table} \subsection{Nitrogen abundances and the N/O abundance gradient} The N/H and N/O abundance ratios were calculated using the direct method for 31 \ion{H}{ii} regions and the ONS and C methods for the whole sample. Tables~\ref{ON1}, \ref{ON2}, and \ref{nitrogen-abundances} show the results. 
\begin{table*} \caption{The nitrogen abundances derived with the direct method ($T_{\rm e}$) and two strong-line methods (ONS and C) for the 48 regions in our observed sample.} \begin{tabular}{lcccccc} \hline \multicolumn{1}{c}{ID} & \multicolumn{3}{c}{$12+\log(\mbox{N}/\mbox{H})$} & \multicolumn{3}{c}{$\log(\mbox{N}/\mbox{O})$} \\ & ($T_{\rm e}$) & (ONS) & (C) & ($T_{\rm e}$) & (ONS) & (C) \\ \hline 1 & $7.39^{+0.08}_{-0.09}$ & 7.58 & 7.58 & $-0.76\pm0.05$ & $-0.87$ & $-0.85$ \\ 2 & $7.35^{+0.10}_{-0.12}$ & 7.53 & 7.59 & $-0.75\pm0.06$ & $-0.91$ & $-0.93$ \\ 3 & $-$ & 7.61 & 7.62 & $-$ & $-0.87$ & $-0.87$ \\ 4 & $-$ & 7.46 & 7.50 & $-$ & $-0.94$ & $-0.94$ \\ 5 & $7.34^{+0.18}_{-0.23}$ & 7.62 & 7.69 & $-0.63^{+0.10}_{-0.09}$ & $-0.84$ & $-0.86$ \\ 6 & $-$ & 7.70 & 7.65 & $-$ & $-0.83$ & $-0.82$ \\ 7 & $-$ & 7.54 & 7.56 & $-$ & $-0.92$ & $-0.93$ \\ 8 & $7.50^{+0.13}_{-0.14}$ & 7.68 & 7.68 & $-0.68^{+0.07}_{-0.06}$ & $-0.81$ & $-0.78$ \\ 9 & $-$ & 7.73 & 7.72 & $-$ & $-0.84$ & $-0.85$ \\ 10 & $7.48^{+0.09}_{-0.10}$ & 7.65 & 7.64 & $-0.65\pm0.06$ & $-0.79$ & $-0.76$ \\ 11 & $-$ & 7.70 & 7.72 & $-$ & $-0.80$ & $-0.78$ \\ 12 & $7.34^{+0.19}_{-0.24}$ & 7.53 & 7.59 & $-0.74^{+0.11}_{-0.09}$ & $-0.90$ & $-0.92$ \\ 13 & $-$ & 7.65 & 7.64 & $-$ & $-0.94$ & $-0.92$ \\ 14 & $-$ & 7.61 & 7.63 & $-$ & $-0.90$ & $-0.90$ \\ 15 & $-$ & 7.68 & 7.67 & $-$ & $-0.91$ & $-0.89$ \\ 16 & $-$ & 7.85 & 7.85 & $-$ & $-0.68$ & $-0.67$ \\ 17 & $7.14^{+0.13}_{-0.15}$ & 7.65 & 7.56 & $-0.56\pm0.08$ & $-0.85$ & $-0.78$ \\ 18 & $-$ & 7.55 & 7.59 & $-$ & $-0.89$ & $-0.91$ \\ 19 & $-$ & 7.58 & 7.71 & $-$ & $-0.79$ & $-0.89$ \\ 20 & $-$ & 7.66 & 7.67 & $-$ & $-0.86$ & $-0.85$ \\ 21 & $-$ & 7.48 & 7.62 & $-$ & $-0.91$ & $-0.90$ \\ 22 & $-$ & 7.73 & 7.71 & $-$ & $-0.78$ & $-0.75$ \\ 23 & $-$ & 7.71 & 7.70 & $-$ & $-0.71$ & $-0.67$ \\ 24 & $7.35^{+0.14}_{-0.17}$ & 7.60 & 7.61 & $-0.68\pm0.07$ & $-0.98$ & $-0.93$ \\ 25 & $-$ & 7.52 & 7.53 & $-$ & $-0.98$ & $-1.00$ \\ 26 & $-$ & 7.74 & 7.64 & $-$ & 
$-0.96$ & $-0.94$ \\ 27 & $-$ & 7.99 & 7.95 & $-$ & $-0.56$ & $-0.52$ \\ 28 & $-$ & 7.89 & 7.84 & $-$ & $-0.68$ & $-0.63$ \\ 29 & $-$ & 8.16 & 8.07 & $-$ & $-0.48$ & $-0.42$ \\ 30 & $-$ & 7.68 & 7.72 & $-$ & $-0.82$ & $-0.83$ \\ 31 & $-$ & 7.76 & 7.73 & $-$ & $-0.86$ & $-0.85$ \\ 32 & $-$ & 7.74 & 7.73 & $-$ & $-0.70$ & $-0.60$ \\ 33 & $7.52^{+0.16}_{-0.21}$ & 7.76 & 7.77 & $-0.62\pm0.09$ & $-0.79$ & $-0.79$ \\ 34 & $-$ & 7.80 & 7.79 & $-$ & $-0.73$ & $-0.71$ \\ 35 & $7.73^{+0.17}_{-0.19}$ & 7.79 & 7.75 & $-0.75^{+0.09}_{-0.08}$ & $-0.74$ & $-0.74$ \\ 36 & $7.69^{+0.18}_{-0.21}$ & 7.76 & 7.75 & $-0.71\pm0.10$ & $-0.75$ & $-0.74$ \\ 37 & $-$ & 7.81 & 7.78 & $-$ & $-0.83$ & $-0.81$ \\ 38 & $-$ & 7.79 & 7.81 & $-$ & $-0.75$ & $-0.74$ \\ 39 & $-$ & 7.86 & 7.87 & $-$ & $-0.69$ & $-0.69$ \\ 40 & $7.47^{+0.06}_{-0.07}$ & 7.71 & 7.69 & $-0.65\pm0.04$ & $-0.81$ & $-0.79$ \\ 41 & $-$ & 7.77 & 7.76 & $-$ & $-0.72$ & $-0.68$ \\ 42 & $-$ & 7.88 & 7.88 & $-$ & $-0.70$ & $-0.68$ \\ 43 & $-$ & 7.89 & 7.74 & $-$ & $-0.73$ & $-0.83$ \\ 44 & $-$ & 7.78 & 7.82 & $-$ & $-0.60$ & $-0.55$ \\ 45 & $-$ & 7.83 & 7.81 & $-$ & $-0.75$ & $-0.74$ \\ 46 & $-$ & 7.78 & 7.78 & $-$ & $-0.80$ & $-0.79$ \\ 47 & $-$ & 7.79 & 7.81 & $-$ & $-0.74$ & $-0.72$ \\ 48 & $-$ & 7.85 & 7.87 & $-$ & $-0.71$ & $-0.70$ \\ \hline \end{tabular} \label{nitrogen-abundances} \end{table*} Fig.~\ref{N-gradient} shows the results for the N/H and N/O abundances as a function of galactocentric distance. Panels~(a) and (c) are for the abundances obtained with the direct method and panels~(b) and (d) those for the ONS method. For ease of comparison, the panels cover the same range in orders of magnitude that we used in Fig.~\ref{gradients}. We have not plotted the results of the C method, because they show a similar distribution of values to those of the ONS method. 
The least-squares fits to the data are also plotted in the figure, and in Table~\ref{Results-nitrogen} we list for each method the number of regions used in the fits, the derived intercepts and slopes, and the dispersions around the gradients. The slopes obtained with the ONS and C methods are very similar, $\sim-0.020$ dex kpc$^{-1}$, whereas the direct method implies a shallower slope, $-0.008$ dex kpc$^{-1}$. The N/H abundance ratios derived with the ONS and C methods can be assigned uncertainties of $\sim0.10\mbox{--}0.15$ dex. The methods do not provide estimates of the uncertainties in the derived N/O abundance ratios, but the dispersions around the gradients implied by these methods suggest that the random uncertainties are $\sim0.1$~dex. \begin{figure*} \includegraphics[width=0.90\textwidth, trim=10 0 10 0, clip=yes]{fig5.eps} \caption{N/H and N/O abundances in the \ion{H}{ii} regions of M81 as a function of their galactocentric distances and the abundance gradients resulting from the fits. Panels~(a) and (c) show the results of the direct method, and panels~(b) and (d) the results of the ONS method. The different symbols indicate the references for the observational data we used. 
In all panels, the vertical scale spans the same range in orders of magnitude displayed in Fig.~\ref{gradients}.} \label{N-gradient} \end{figure*} \begin{table*} \caption{N/H and N/O abundance gradients and dispersions} \begin{tabular}{lccccccc} \hline \multicolumn{1}{l}{Method} & \multicolumn{1}{c}{$N$} & \multicolumn{1}{c}{12+log(N/H)$_0$} & \multicolumn{1}{c}{$\frac{\Delta(\log({\rmn N}/{\rmn H}))}{\Delta(R)}$} & \multicolumn{1}{c}{$\sigma$} & \multicolumn{1}{c}{log(N/O)$_0$} & \multicolumn{1}{c}{$\frac{\Delta(\log({\rmn N}/{\rmn O}))}{\Delta(R)}$} & \multicolumn{1}{c}{$\sigma$} \\ \multicolumn{3}{l}{} & \multicolumn{1}{c}{(dex kpc$^{-1}$)} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{c}{(dex kpc$^{-1}$)} & \multicolumn{1}{l}{}\\ \hline $T_{\rm e}$ & 31 & 7.53$\pm$0.07 & $-$0.011$\pm$0.007 & 0.18 & $-$0.73$\pm$0.05 & $-$0.008$\pm$0.005 & 0.13 \\ ONS & 116 & 7.82$\pm$0.03 & $-$0.025$\pm$0.002 & 0.15 & $-$0.71$\pm$0.05 & $-$0.019$\pm$0.002 & 0.11 \\ C & 116 & 7.85$\pm$0.02 & $-$0.020$\pm$0.002 & 0.12 & $-$0.69$\pm$0.05 & $-$0.020$\pm$0.002 & 0.13 \\ \hline \end{tabular} \label{Results-nitrogen} \end{table*} \subsection{Comparison with other works} The values that we obtain for the slope of the metallicity gradient go from $-0.010$ to $-0.002$ dex kpc$^{-1}$, smaller in absolute values than most other determinations of the oxygen abundance gradient in M81. Table~\ref{Literature} provides a compilation of some previous results ordered chronologically, where we list the method and number of regions used in each case, the range of galactocentric distances covered by the objects and the intercept and the slope of the fits. Besides two old determinations based on the R$_{23}$ method calibrated with photoionization models by \citet{Pa:79a}, we have chosen to present the results that are based on methods similar to the ones we use. 
The most recent determination, that of \citet{Pil:14a}, is based on abundances calculated with the P and C methods slightly modified, which we label as P$^\prime$ and C$^\prime$. \citet{Pil:14a} also derived the gradient for N/H with their C$^\prime$ method for regions with galactocentric distances in the range 4--13 kpc, finding a slope of $-0.033$, steeper than the one we find with the C method for the range of 3--33 kpc, $-0.020$. \begin{table} \caption{Oxygen abundance gradients from the literature.} \begin{tabular}{lclllc} \hline \multicolumn{1}{l}{Method} & \multicolumn{1}{c}{$N$} & \multicolumn{1}{c}{$\Delta R$} & \multicolumn{1}{c}{log(O/H)$_0$} & \multicolumn{1}{c}{$\frac{\Delta(\log({\rmn O}/{\rmn H}))}{\Delta(R)}$} & Ref. \\ \multicolumn{2}{l}{} & \multicolumn{1}{l}{(kpc)} & \multicolumn{1}{l}{+12} & \multicolumn{1}{c}{(dex kpc$^{-1}$)} & \multicolumn{1}{l}{} \\ \hline R$_{23}$ & 10 & \ 4--8 & \ -- & $-$0.045 & 1 \\ R$_{23}$ & 18 & \ 3--15 & \ -- & $-$0.08 & 2 \\ P & 36 & \ 4--12 & 8.69 & $-$0.031 & 3 \\ $T_{\rm e}$ & 31 & \ 4--17 & 9.37$\pm$0.24 & $-$0.093$\pm$0.020 & 4 \\ P & 21 & \ 3--33 & 8.34$\pm$0.12 & $-$0.013$\pm$0.006 & 5 \\ P & 49 & \ 3--33 & 8.47$\pm$0.06 & $-$0.016$\pm$0.006 & 5 \\ $T_{\rm e}$ &\ 7 & \ 6--32 & 8.76$\pm$0.13 & $-$0.020$\pm$0.006 & 5 \\ $T_{\rm e}$ & 28 & \ 5--10 & 9.20$\pm$0.11 & $-$0.088$\pm$0.013 & 6 \\ P$^\prime$+C$^\prime$ & -- & \ 4--13 & 8.58$\pm$0.02 & $-$0.011$\pm$0.003 & 7 \\ \hline \end{tabular} \label{Literature} References: (1) \citet{Stau:84a}, (2) \citet{Gar:87a}, (3) \citet{Pil:04a}, (4) \citet{Stan:10a}, (5) \citet{Patt:12a}, (6) \citet{Stan:14a}, (7) \citet{Pil:14a}. \end{table} The results shown in Figs.~\ref{gradients} and \ref{N-gradient}, and Tables~\ref{Results}, \ref{Results-nitrogen}, and \ref{Literature} illustrate the well-known fact that gradient determinations are very sensitive to the method, to the number of objects used, and to the range of galactocentric distances covered by these objects. 
Our results with the P method are very similar to those obtained by \citet{Patt:12a} with this method, and this is the case where both the procedure followed in the abundance determination and the range of galactocentric distances covered agree more closely. \citet{Patt:12a} use larger error bars than we do for the results of the P method and do a weighted least-squares fit, but the main difference between their results and ours is in the abundances obtained for the \ion{H}{ii} regions observed by \citet{Stan:10a}, which they also use. The oxygen abundances that we derive for these regions are lower than the ones they find. This is clearly seen in panel (b) of Fig.~\ref{gradients} where several objects located between 8 and 11 kpc have oxygen abundances much lower than $12+\log(\mbox{O}/\mbox{H})=8.0$, whereas \citet{Patt:12a} find $12+\log(\mbox{O}/\mbox{H})>8.0$ for all these regions. These differences are partly due to the fact that \citet{Patt:12a} do not include all the \ion{H}{ii} regions of \citet{Stan:10a}, but use only those for which there is also an estimate of the electron temperature. However, we can only reproduce their results for the \ion{H}{ii} regions in common if we use the line ratios of \citet{Stan:10a} uncorrected for extinction, which \citet{Patt:12a} seem to have inadvertently done. The results we derive with the P method for the \ion{H}{ii} regions observed by \citet{Patt:12a} agree within 0.01 dex with the ones derived by these authors with the exception of four objects which belong to the upper branch of the metallicity calibration according to our classification scheme (see Section 3.2.1), but are in an ambiguous region according to the procedure followed by \citet{Patt:12a}. For these regions they calculate an average of the oxygen abundances implied by the upper and lower branch of the calibration, obtaining values that differ from the ones we calculated, using their line intensities, by 0.05--0.26 dex. 
On the other hand, there are several \ion{H}{ii} regions in our full sample which are classified as belonging to the upper branch following both our classification scheme and the one used by \citet{Patt:12a}, but whose abundances, calculated using the upper-branch relation of the P method, lie in the region that should be covered by the lower branch [all the regions with $12+\log(\mbox{O}/\mbox{H})\le8.0$ in Fig.~\ref{gradients}(b), which are plotted as empty symbols, and in the lower panel of fig. 10 of \citealt{Patt:12a}]. Our observed regions do not show this problem, but two of them, regions 14 and 44, would have the same behaviour if we had not corrected their spectra for the effects of stellar absorption: the uncorrected spectra imply values of $12+\log(\mbox{O}/\mbox{H})=7.76$ and 8.00, whereas the corrected spectra change those values to 8.06 and 8.16, respectively. Since neither \citet{Patt:12a} nor \citet{Stan:10a} correct their spectra for stellar absorption, the regions they observed where the P method has problems might also be affected in the same way. We do not consider in our fit of Table~\ref{Results} the \ion{H}{ii} regions where the P method is not working properly. If we include them, we get an intercept and slope for the gradient of $8.48\pm0.03$ and $-0.018\pm0.004$, respectively, with a dispersion of 0.24 dex. This fit is plotted with a discontinuous line in Fig.~\ref{gradients}(b). \citet{Patt:12a} did not reject from their fits the objects that had problems with the P method, and the gradients they derive with this method are intermediate between our two fits. Our results for the abundances implied by the direct method in the \ion{H}{ii} regions observed by \citet{Stan:10a} are significantly different from those derived by these authors: we get oxygen abundances that are lower by up to 0.3 dex.
The differences are mainly due to the fact that \citet{Stan:10a} calculated the neutral oxygen abundance in several objects using [\ion{O}{i}] emission and added it to the O$^+$ and O$^{++}$ abundances to get the total oxygen abundance as can be seen in their table~3, available online; see, for example, the results for their region number 5. This is not a procedure usually followed for \ion{H}{ii} regions since the ionization potentials of O$^0$ and H$^0$ are both $\simeq13.6$ eV, suggesting that [\ion{O}{i}] emission should arise in regions close to the ionization front. Besides, charge-exchange reactions between O$^0$ and H$^+$ tend to keep O$^0$ outside the ionized region \citep{Os:06a}. The O$^0$/H$^+$ abundance ratios derived by \citet{Stan:10a} are also very high, 30 to 230 times larger than the ones we estimate. Since \citet{Patt:12a} compared their results with the direct method with those reported by \citet{Stan:10a}, they found a better agreement of the two sets than the one that can be observed in Fig.~\ref{gradients}. The differences between the abundances we derive with the direct method using the line intensities of \citet{Patt:12a} and the values given by these authors are below 0.2 dex, and seem to be due to typos in their tables. For example, \citet{Patt:12a} give a value for $T_{\rm e}$([\ion{O}{iii}]) for their region 26, but no intensity is provided for the [\ion{O}{iii}]~$\lambda4363$ line for this region in their table 2. In addition, some of the values they list for the total oxygen abundance in their table~4, and plot in their figures, are transposed, namely the values of O/H given for their regions disc1, disc3, and disc4. If we add the values of O$^+$/H$^+$ and O$^{++}$/H$^+$ listed in their table~4 for each of these regions, we get the total oxygen abundance that they attribute to a different region; for example, the oxygen abundance implied by their ionic abundances in disc3 is assigned by them to region disc4. 
These differences, along with the fact that our observations lead to lower oxygen abundances for the galactocentric range in common with the other samples, explain the very different value that we obtain with the direct method for the abundance gradient, $-0.002$ dex kpc$^{-1}$ versus $-0.020$ dex kpc$^{-1}$ \citep[the result of][that covers a range of galactocentric distances similar to ours]{Patt:12a}. An inspection of Fig.~\ref{gradients}(a) shows that the inclusion of data from different works is the main reason for this difference: we would get a steeper gradient if we only used the data obtained by \citet{Patt:12a}. We have several regions in common with other authors, and Table~\ref{comparison} shows a comparison between the oxygen and nitrogen abundances we derive with different methods using the line intensities reported for each region. The apertures are different, and in two cases we extracted the spectra of two knots at the positions covered by other works, but the differences in the abundances implied by each method are of the same order as the differences that we find in Figs.~\ref{gradients} and \ref{N-gradient} for regions at similar galactocentric distances. Since these differences depend on the method and in some cases are larger than the estimated uncertainties, we think that the results in Table~\ref{comparison} and Figs.~\ref{gradients} and \ref{N-gradient} illustrate the robustness of the methods to different observational problems that are not necessarily included in the estimates of the uncertainties in the line intensities. The data obtained by different authors will be affected to differing degrees by uncertainties that are difficult to estimate, such as those introduced by atmospheric differential refraction \citep{Fil:82}, flux calibration or extraction, extinction correction, and the measurement of weak lines in spectra that are not deep enough or have poor spectral resolution.
Those methods that give consistent results when applied to different sets of observations can be considered more robust to these observational effects. \begin{table*} \begin{minipage}{150mm} \caption{Comparison of our results for the \ion{H}{ii} regions in common with other samples.} \begin{tabular}{lccccccccccc} \hline \multicolumn{1}{l}{ID} & \multicolumn{1}{c}{Ref.} & \multicolumn{1}{c}{$T_{\rm e}$([\ion{N}{ii}])} & \multicolumn{6}{c}{$12+\log(\mbox{O}/\mbox{H})$} & \multicolumn{3}{c}{$12+\log(\mbox{N}/\mbox{H})$} \\ & & (K) & ($T_{\rm e}$) & (P) & (ONS) & (C) & (O3N2) & (N2) & ($T_{\rm e}$) & (ONS) & (C) \\ \hline 1 & 1 & 10100$^{+500}_{-400}$ & $8.13^{+0.06}_{-0.07}$ & 8.33 & 8.45 & 8.43 & 8.42 & 8.53 & $7.39^{+0.08}_{-0.09}$ & 7.58 & 7.58 \\ HII31 & 2 & 8400$^{+10300}_{-1400}$ & $8.59^{+0.40}_{-0.81}$ & 7.93 & 8.46 & 8.53 & 8.52 & 8.56 & $7.63^{+0.54}_{-1.02}$ & 7.46 & 7.51 \\ GS7 & 3 & $-$ & $-$ & 8.18 & 8.44 & 8.49 & 8.46 & 8.54 & $-$ & 7.52 & 7.56 \\ HK741 & 4 & $-$ & $-$ & 8.29 & 8.47 & 8.49 & 8.45 & 8.53 & $-$ & 7.54 & 7.56 \\ \hline 15 & 1 & $-$ & $-$ & 8.28 & 8.58 & 8.56 & 8.57 & 8.52 & $-$ & 7.68 & 7.67 \\ GS4 & 3 & $-$ & $-$ & 8.24 & 8.43 & 8.49 & 8.34 & 8.46 & $-$ & 7.41 & 7.45 \\ HK767 & 4 & $-$ & $-$ & 8.30 & 8.44 & 8.49 & 8.35 & 8.48 & $-$ & 7.47 & 7.54 \\ \hline 35 & 1 & 8200$^{+700}_{-600}$ & 8.46$^{+0.15}_{-0.16}$ & 8.57 & 8.53 & 8.49 & 8.39 & 8.49 & $7.73^{+0.17}_{-0.19}$ & 7.79 & 7.75 \\ disc5 & 5 & 7900$^{+1000}_{-600}$ & 8.60$^{+0.17}_{-0.19}$ & 8.42 & 8.48 & 8.47 & 8.44 & 8.53 & $7.80^{+0.19}_{-0.22}$ & 7.71 & 7.72 \\ GS11 & 3 & $-$ & $-$ & 8.51 & 8.50 & 8.49 & 8.41 & 8.52 & $-$ & 7.75 & 7.75 \\ HK500 & 4 & $-$ & $-$ & 8.57 & 8.54 & 8.39 & 8.35 & 8.49 & $-$ & 7.82 & 7.78 \\ \hline 38 & 1 & $-$ & $-$ & 8.35 & 8.54 & 8.55 & 8.55 & 8.56 & $-$ & 7.79 & 7.81 \\ 39 & 1 &$-$ & $-$ & 8.47 & 8.56 & 8.56 & 8.54 & 8.55 & $-$ & 7.86 & 7.87 \\ GS12 & 3 & $-$ & $-$ & 8.14 & 8.47 & 8.54 & 8.49 & 8.52 & $-$ & 7.50 & 7.54 \\ HK453 & 4 & $-$ & $-$ & 8.21 & 
8.48 & 8.47 & 8.49 & 8.52 & $-$ & 7.54 & 7.54 \\ disc6 & 5 & $-$ & $-$ & 8.39 & 8.48 & 8.50 & 8.46 & 8.53 & $-$ & 7.70 & 7.73 \\ \hline 47 & 1 & $-$ & $-$ & 8.56 & 8.53 & 8.53 & 8.44 & 8.51 & $-$ & 7.79 & 7.81 \\ 48 & 1 & $-$ & $-$ & 8.39 & 8.56 & 8.57 & 8.57 & 8.57 & $-$ & 7.85 & 7.87 \\ GS13 & 3 & $-$ & $-$ & 8.52 & 8.57 & 8.58 & 8.54 & 8.54 & $-$ & 7.92 & 7.93 \\ HK230 & 4 & $-$ & $-$ & 8.58 & 8.57 & 8.47 & 8.51 & 8.55 & $-$ & 7.98 & 7.94 \\ \hline \end{tabular} \label{comparison} References for the ID and line intensities: (1) this work, (2) \citet{Stan:10a}, (3) \citet{Bre:99a}, (4) \citet{Gar:87a}, (5) \citet{Patt:12a}. \end{minipage} \end{table*} \section{Discussion} The question of whether a single straight-line fit describes well the metallicity gradient in a galaxy is often raised \citep[see e.g.][]{Patt:12a}. This does not concern us here. We have fitted straight lines in order to see the dependence of the slope on the method used for the abundance determination and to measure the dispersion of the results around these fits. We would get similar dispersions if we just measured the dispersion in abundances for regions located at similar galactocentric distances. Besides, the low dispersions around the gradient shown by the abundances derived with the ONS, C, and N2 methods suggest that straight-line fits are good first approximations to the data. The main objective of this work is to study the effectiveness of the methods in producing robust measurements of abundance variations across a galaxy. One assumption we make is that the more robust methods will produce lower dispersions around the gradient. In the presence of azimuthal variations, we do not expect that any method will imply a dispersion lower than the real one. We think that this is a reasonable assumption. Hence, we use the dispersions introduced by the different methods as a measure of their robustness or sensitivity to the observational data set used. 
Note that the robustness of a method should not be confused with its reliability. The more robust methods will not necessarily provide better results. The reliability of the direct method depends on the validity of its assumptions; the reliability of the strong-line methods depends on their calibration and on their application to objects that are well represented in the calibration samples. In what follows, we will centre our discussion on the robustness of the methods, and will assume that if a strong-line method does not provide a good estimate of the oxygen abundance, it is possible that it can be better calibrated to do so. Figs.~\ref{sens} and \ref{sens-n} illustrate the sensitivity of each method to the main line ratios involved in the calculations. We plot in these figures the changes in the O/H, N/H, and N/O abundance ratios resulting from changes of 20 per cent in the main line intensity ratios involved in the calculations for all the regions in our sample. Note that in these figures `[\ion{N}{ii}] $\lambda$5755' identifies the results of changes in the [\ion{N}{ii}]~$(\lambda6548+\lambda6583)/\lambda5755$ temperature diagnostic, `[\ion{N}{ii}]' identifies the results of changes in the [\ion{N}{ii}]~$(\lambda6548+\lambda6583)/$H$\beta$ ratio for the ONS method and the direct method, and the [\ion{N}{ii}]~$\lambda6583/$H$\beta$ ratio for the C, O3N2, and N2 methods. \begin{figure*} \includegraphics[width=0.53\textwidth, trim=10 0 10 0, clip=yes, angle=90]{fig4.eps} \caption{Changes in the oxygen abundances for our sample of \ion{H}{ii} regions introduced by changes of 20 per cent in the main line ratios used by each method. Circles, stars, squares and triangles are used to represent changes in line ratios involving lines of [\ion{O}{ii}], [\ion{O}{iii}], [\ion{N}{ii}], and [\ion{S}{ii}], respectively.
`[\ion{N}{ii}] 5755' implies changes in the [\ion{N}{ii}]~$(\lambda6548+\lambda6583)/\lambda5755$ intensity ratio, `[\ion{N}{ii}]' implies changes in the [\ion{N}{ii}]~$(\lambda6548+\lambda6583)/$H$\beta$ ratio for the ONS method, and the [\ion{N}{ii}]~$\lambda6583/$H$\beta$ ratio for the C, O3N2, and N2 methods.} \label{sens} \end{figure*} \begin{figure} \includegraphics[width=0.45\textwidth, trim=25 0 40 0, clip=yes]{fig6a.eps} \includegraphics[width=0.45\textwidth, trim=25 0 40 0, clip=yes]{fig6b.eps} \caption{Changes in the N/H and N/O abundance ratios for our sample of \ion{H}{ii} regions introduced by changes of 20 per cent in the main line ratios used by each method. Circles, stars, squares and triangles are used to represent changes in line ratios involving lines of [\ion{O}{ii}], [\ion{O}{iii}], [\ion{N}{ii}], and [\ion{S}{ii}], respectively. `[\ion{N}{ii}] 5755' implies changes in the [\ion{N}{ii}]~$(\lambda6548+\lambda6583)/\lambda5755$ ratio for the direct method, `[\ion{N}{ii}]' implies changes in the [\ion{N}{ii}]~$(\lambda6548+\lambda6583)/$H$\beta$ ratio for the direct method and the ONS method, and the [\ion{N}{ii}]~$\lambda6583/$H$\beta$ ratio for the C method.} \label{sens-n} \end{figure} As expected, the results of the direct method are very sensitive to variations in the line ratio used to derive the electron temperature. This makes the direct method vulnerable to different observational problems, especially the ones arising from the measurement of the intensity of the weak line [\ion{N}{ii}]~$\lambda5755$. The P method of \citet{Pil:05a} shows an even larger sensitivity to changes in the line ratio [\ion{O}{ii}]~$\lambda3727/$H$\beta$, making it vulnerable to problems introduced by atmospheric differential refraction and defective flux calibrations or extinction corrections. This is even clearer if we consider the dispersion from the gradient implied by this method when the regions where it has problems are included in the fit, 0.24 dex.
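To attach a concrete number to perturbations of this size in the simplest case, consider a calibration that is linear in the logarithm of a single line ratio. As an illustration we take the widely used N2 calibration of Pettini \& Pagel (2004), $12+\log(\mbox{O}/\mbox{H})=8.90+0.57\,\log([\ion{N}{ii}]~\lambda6583/\mbox{H}\alpha)$, which is not necessarily the exact N2 calibration adopted in this paper:

```python
import math

def oh_n2(ratio):
    """Linear N2 calibration of Pettini & Pagel (2004); shown for
    illustration only, not necessarily the calibration adopted here."""
    return 8.90 + 0.57 * math.log10(ratio)

ratio = 0.1  # example [N II] 6583 / H-alpha intensity ratio
delta = oh_n2(1.2 * ratio) - oh_n2(ratio)  # effect of a +20 per cent change

# For any linear-in-log calibration the shift is the same at every ratio:
# 0.57 * log10(1.2), about 0.045 dex
print(f"Delta 12+log(O/H) = {delta:.3f} dex")
```

A 20 per cent error in the line ratio thus moves such a calibration by only $\sim$0.05 dex, much less than the response of the direct method and the P method to perturbations of the same size.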
It can be argued that these two line ratios, [\ion{N}{ii}]~$(\lambda6548+\lambda6583)/\lambda5755$ and [\ion{O}{ii}]~$\lambda3727/$H$\beta$, are the ones most likely to be affected by observational problems, making the direct method and the P method the least robust methods, in agreement with our results. In fact, the dispersions around the gradients listed in Tables~\ref{Results} and \ref{Results-nitrogen} can be qualitatively understood in terms of the sensitivity of the methods to changes in these two line ratios, shown in Figs.~\ref{sens} and \ref{sens-n}. The results we obtain for N/O with the direct method, shown in Fig.~\ref{N-gradient}c, can be used to illustrate this effect, since the N/O abundances derived with this method depend mainly on the value of $T_{\rm e}$ implied by the [\ion{N}{ii}]~$(\lambda6548+\lambda6583)/\lambda5755$ intensity ratio and on the [\ion{N}{ii}]~$(\lambda6548+\lambda6583)/$[\ion{O}{ii}]~$\lambda3727$ intensity ratio. Our observed regions (the diamonds in this figure) have larger N/O ratios than most of the regions observed by \citet{Patt:12a} and \citet{Stan:10a}. The values we find for $T_{\rm e}$([\ion{N}{ii}]) in our observed regions are generally higher than those we find for the regions of \citet{Patt:12a} by an amount that can explain the differences in this abundance ratio. On the other hand, we find similar values of $T_{\rm e}$([\ion{N}{ii}]) for our regions and the regions observed by \citet{Stan:10a}. In this case the differences can be attributed to the large values of the [\ion{O}{ii}]~$\lambda3727/$H$\beta$ measured by \citet{Stan:10a} in several regions, which are higher than the ones observed by us and by \citet{Patt:12a}. 
If we compare the values of this line ratio for the regions that have temperature determinations and are located at galactocentric distances between 4 and 11 kpc, we find a range of values 149--338 for our observed objects and 229--327 for the regions observed by \citet{Patt:12a}, whereas the regions observed by \citet{Stan:10a} span a range of 180--660. This translates into an [\ion{N}{ii}] to [\ion{O}{ii}] line intensity ratio of 0.30--0.59 (this work), 0.15--0.45 \citep{Patt:12a}, and 0.16--0.29 \citep{Stan:10a}. The high values of $c(\mbox{H}\beta)$ found by \citet{Stan:10a} contribute in part to these differences, but they are already present in their observed intensities. Any work whose objective is the determination of abundances in \ion{H}{ii} regions considers the detection of the weak lines required for the calculation of the electron temperature an achievement, since temperature-based abundances are expected to be more reliable than those based on strong-line methods. Our results in Fig.~\ref{gradients} and Tables~\ref{Results} and \ref{comparison} suggest otherwise. The abundances derived with the direct method are very sensitive to the assumed temperature, which in turn is sensitive to the line intensity ratio used for the diagnostic, as illustrated in Fig.~\ref{sens}. The precision required to get a good estimate of this ratio is often underestimated. The P method, based on the intensities of strong [\ion{O}{ii}] and [\ion{O}{iii}] lines relative to H$\beta$, seems to be working slightly better in many cases, although there are regions whose abundances show large deviations from their expected values. The results shown in Fig.~\ref{sens} suggest that the spectra of these regions might have problems related to atmospheric differential refraction, flux calibration or extinction correction.
In this context, it would be useful to check whether the deviations are correlated with the airmass during the observation, but none of the papers whose spectra we use provides the airmass values of their observations. The new calibration of the P method of \citet{Pil:14a}, which we have called P$^\prime$ above, is less sensitive to the [\ion{O}{ii}]~$\lambda3727/$H$\beta$ line ratio and performs much better when used to derive the oxygen abundance gradient, implying a slope of $-0.008$ dex kpc$^{-1}$ and a dispersion around the gradient of 0.09 dex. However, the calibration sample of the P$^\prime$ method includes regions with abundances determined using the C method. Since we have centred here on methods calibrated with \ion{H}{ii} regions that have temperature measurements, we only show the results of the P method in Fig.~\ref{gradients} and Table~\ref{Results}. The other strong-line methods, especially the ONS, C, and N2 methods, seem to be working remarkably well (see the dispersions in Table~\ref{Results} and Fig.~\ref{gradients}). These methods suggest that azimuthal variations, if present, are very small. The low dispersion implied by the N2 method is especially remarkable, since it is due to a low dispersion in the values of the [\ion{N}{ii}]~$\lambda\lambda6548,6583/$H$\alpha$ intensity ratio that can only arise if N/H and the degree of ionization are both varying smoothly across the disc of M81. Since these quantities and N/O might show different variations in other environments, the N2 method will not necessarily give consistent results for O/H when applied to \ion{H}{ii} regions in other galaxies or to regions located near galactic centres. In fact, \citet{PMC:09} find that the N2 method can lead to values of O/H that differ from the ones derived with the direct method by up to an order of magnitude. 
The ONS and C methods should be preferred for this reason, although we note that any strong-line method could easily fail for \ion{H}{ii} regions whose properties are not represented in the calibration sample \citep{Sta:10b}. The best estimates of the chemical abundances in \ion{H}{ii} regions implied by forbidden lines will still be based on the measurement of electron temperatures, but we stress that they require data of high quality. This is illustrated by the work of \citet{Bre:11a}, who found that the scatter in the oxygen abundances derived with the direct method in the central part of the galaxy M33 is around 0.06 dex when using his observations, whereas the data of \citet{Roso:08a} lead to much larger variations, with a dispersion of 0.21 dex. The spectra of \citet{Bre:11a} were deeper than the ones observed by \citet{Roso:08a}, which might explain this result, although there could be other effects involved in the explanation. Another example of the low dispersion that can be found with the direct method is provided by \citet{Bre:09a} for NGC~300, where 28 \ion{H}{ii} regions covering a relatively large range of galactocentric distances show a dispersion around the gradient of only 0.05 dex. \section{Summary and Conclusions} We have used long slit spectra obtained with the GTC telescope to extract spectra for 48 \ion{H}{ii} regions in the galaxy M81. We have added to this sample the spectra of 68 \ion{H}{ii} regions in M81 observed by different authors \citep{Gar:87a,Bre:99a,Stan:10a,Patt:12a}. This sample was re-analysed using the line intensities reported in each work. We followed the same procedure that we applied in our sample to calculate physical properties, chemical abundances and galactocentric distances for these \ion{H}{ii} regions. The final sample contains 116 \ion{H}{ii} regions that cover a range of galactocentric distances of 3--33 kpc. We have used these data to derive the oxygen and nitrogen abundance gradients in M81. 
We could calculate the electron temperature and apply the direct method to 31 \ion{H}{ii} regions of the sample. We used different strong-line methods to derive oxygen and nitrogen abundances for the full sample. We have chosen strong-line methods calibrated with large samples of \ion{H}{ii} regions with temperature-based abundance determinations: the P method of \citet{Pil:05a}, the ONS method of \citet{Pil:10a}, the C method of \citet{Pil:12a}, and the O3N2 and N2 methods calibrated by \citet{Marino:13a}. We have fitted straight lines to the variation with galactocentric distance of the oxygen abundances implied by each method. We find metallicity gradients with slopes that go from $-0.010$ to $-0.002$ dex kpc$^{-1}$. The two extreme values are derived with the P method and the direct method (the shallower value). These two methods are the ones most sensitive to variations in the two line ratios most likely to be affected by observational problems, [\ion{N}{ii}]~$(\lambda6548+\lambda6583)/\lambda5755$ and [\ion{O}{ii}]~$\lambda3727/$H$\beta$, and they show the largest dispersions around the gradient, 0.25 and 0.15 dex, respectively, whereas the ONS, C, O3N2, and N2 methods imply oxygen abundance gradients in the range from $-0.008$ to $-0.006$ dex kpc$^{-1}$ and very low dispersions: 0.06 dex for the C and N2 methods, 0.07 dex for the ONS method, and 0.09 dex for the O3N2 method. Since we are using observations from five different works, which are likely to be affected by diverse observational problems by differing amounts, we argue that this implies that the ONS, C, and N2 methods are the most robust methods. Our comparison of the results implied by the different methods for several of our objects that were also observed by other authors agrees with this result. The low dispersions also imply that if there are azimuthal variations in the oxygen abundance in M81, they must be small.
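The straight-line fits and dispersions quoted above can be reproduced schematically as follows. This is an illustrative sketch only: the data are synthetic (a $-0.006$ dex kpc$^{-1}$ gradient with 0.06 dex Gaussian scatter over 3--33 kpc, roughly the numbers quoted for the C and N2 methods), not the measured abundances, and the fit is an ordinary unweighted least squares.

```python
import random

def fit_gradient(distances, abundances):
    """Least-squares straight line 12+log(O/H) = a + b * R_g, plus the
    rms dispersion of the residuals around the fitted gradient (dex)."""
    n = len(distances)
    mx = sum(distances) / n
    my = sum(abundances) / n
    b = sum((x - mx) * (y - my) for x, y in zip(distances, abundances)) / \
        sum((x - mx) ** 2 for x in distances)
    a = my - b * mx
    residuals = [y - (a + b * x) for x, y in zip(distances, abundances)]
    rms = (sum(r * r for r in residuals) / n) ** 0.5
    return a, b, rms

# Synthetic sample of 116 regions (the size of the final sample), purely
# to show the mechanics: slope -0.006 dex/kpc, scatter 0.06 dex.
random.seed(0)
rg = [3 + 30 * random.random() for _ in range(116)]
oh = [8.7 - 0.006 * x + random.gauss(0.0, 0.06) for x in rg]
a, b, rms = fit_gradient(rg, oh)
```

With this many points the fitted slope recovers the input gradient to well within $0.001$ dex kpc$^{-1}$, and `rms` estimates the dispersion around the gradient in the same sense as the tables above.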
In the case of N/H, we have used the direct method, the C method, and the ONS method, and find gradients of $-0.025$ to $-0.011$ dex kpc$^{-1}$, with the direct method providing again the shallower slope and the largest dispersion around the fit, 0.18 dex, versus 0.15 dex for the ONS method and 0.12 dex for the C method. For N/O we find slopes that go from $-0.020$ to $-0.008$ dex kpc$^{-1}$, with the latter value derived with the direct method, although for this abundance ratio the dispersions are similar for the three methods, 0.11--0.13 dex. The dispersions around the gradients obtained with the different methods for O/H, N/H, and N/O can be qualitatively accounted for by considering the sensitivity of the methods to the two critical line ratios, [\ion{N}{ii}]~$(\lambda6548+\lambda6583)/\lambda5755$ (our main temperature diagnostic in this work) and [\ion{O}{ii}]~$\lambda3727/$H$\beta$. All the robust methods use the intensity of [\ion{N}{ii}]~$\lambda6583$, and the N2 method is based only on the intensity of this line with respect to H$\alpha$. Since nitrogen and oxygen do not vary in lockstep because they are produced by different types of stars, and their relative abundances depend on the star formation history of the observed galactic region \citep[see, e.g.,][]{Mol:06}, the low dispersions around the oxygen abundance gradient found with the robust methods suggest that both N/O and the degree of ionization vary smoothly along the disc of M81. On the other hand, the different values of N/O generally found for regions with similar oxygen abundances imply that strong-line methods that use the intensities of [\ion{N}{ii}] lines will produce different oxygen abundances in regions that have similar values of O/H but different values of N/O.
The ONS and C methods, which use line ratios involving several ions and also estimate the N/H abundance ratio, can be expected to correct for this effect, at least for regions whose properties are well represented in their calibration samples, but the N2 method by itself cannot achieve this correction. Since our analysis indicates that the available observations do not allow reliable determinations of abundances through the direct method in this galaxy, and since we do not know if the more robust methods are working properly for the observed \ion{H}{ii} regions, the magnitude of the metallicity gradient in M81 remains uncertain. These issues should be further investigated using observations of \ion{H}{ii} regions in different environments that allow the determination of electron temperatures and N and O abundances through the direct method. The large dispersion in the abundances around the gradient that we find here when using the direct method implies that these observations must be of high quality in order to yield meaningful results. For the time being, we recommend the use of the ONS or C methods when no temperature determinations are possible or when the available determinations are of poor quality. \section*{Acknowledgements} We thank the anonymous referee for useful comments that helped us to improve the content of the paper. Based on observations made with the GTC, installed in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrof\'isica de Canarias, in the island of La Palma. We acknowledge support from Mexican CONACYT grants CB-2010-01-155142-G3 (PI: YDM), CB-2011-01-167281-F3 (PI: DRG) and CB-2014-240562 (PI: MR). K.Z.A.-C. acknowledges support from CONACYT grant 351585.
\section{Introduction} Let $(R, \mathfrak m, k)$ be a local ring (a commutative Noetherian ring with unique maximal ideal $\mathfrak m$) and let $M$ and $N$ be finitely generated nonzero $R$-modules. We say the pair $(M,N)$ satisfies the {\it depth formula} provided: $$\depth(M)+\depth(N)=\depth(R)+\depth(M\otimes_{R}N)$$ This useful formula is not true in general, as one can see by taking $R$ to have depth at least $1$ and $M=N=k$. The depth formula was first studied by Auslander \cite{Au} in 1961 for finitely generated modules over regular local rings. More precisely, if $R$ is a local ring, Auslander proved that $M$ and $N$ satisfy the depth formula provided $M$ has finite projective dimension and $\Tor_i^R(M,N)=0$ for all $i\geq 1$ \cite[1.2]{Au}. Three decades later Huneke and Wiegand \cite[2.5]{HW1} proved that the depth formula holds for $M$ and $N$ over {\it complete intersection} rings $R$ provided $\Tor_i^R(M,N)=0$ for all $i\geq 1$, even if $M$ does not have finite projective dimension. Recall that $R$ is said to be a \textit{complete intersection} if the defining ideal of some (equivalently every) Cohen presentation of the $\mathfrak m$-adic completion $\widehat{R}$ of $R$ can be generated by a regular sequence. If $R$ is such a ring, then $\widehat{R}$ has the form $Q/(\underline{f})$, where $\underline{f}$ is a regular sequence of $Q$ and $Q$ is a ring of formal power series over the field $k$, or over a complete discrete valuation ring with residue field $k$. There are plenty of sufficient conditions in the literature for $M$ and $N$ to satisfy the depth formula, cf., for example, \cite{BerJor}, \cite{IC}, \cite{CJ}, \cite{HW1}, \cite{Jo1} and \cite{Mi}. A common ingredient of those conditions is the vanishing of $\Tor_i^R(M,N)$ for all $i\geq 1$. 
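Spelling out the counterexample just mentioned: since $k\otimes_{R}k\cong k$ and $\depth(k)=0$, the two sides of the depth formula differ whenever $\depth(R)\geq 1$.

```latex
% Worked instance of the failure noted above: take M = N = k over a
% local ring R with depth(R) >= 1. Since k \otimes_R k is isomorphic to k,
\[
\depth(k)+\depth(k)=0
\qquad\text{whereas}\qquad
\depth(R)+\depth(k\otimes_{R}k)=\depth(R)\geq 1 .
\]
```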
In particular the following theorem, proved independently by Araya-Yoshino \cite[2.5]{ArY} and Iyengar \cite[4.3]{I}, shows that the vanishing of $\Tor_{i}^{R}(M,N)$ for modules of finite complete intersection dimension is sufficient for the depth formula (cf. Section \ref{Tor} for the definition of finite complete intersection dimension.) \begin{thm}\label{Choi}(Araya-Yoshino \cite{ArY}, Iyengar \cite{I}) Let $R$ be a local ring and let $M$ and $N$ be finitely generated $R$-modules. Assume that $M$ has finite complete intersection dimension (e.g., $R$ is a complete intersection.) If $\Tor_{i}^{R}(M,N)=0$ for all $i\geq 1$, then $(M,N)$ satisfies the depth formula. \end{thm} We note that Iyengar's result \cite[Section 4]{I} that concerns the depth formula is more general than the one stated in Theorem \ref{Choi}; it establishes the validity of the (derived) depth formula for certain complexes of modules (cf. also \cite{CJ} and \cite{Foxby}). The purpose of this note is to understand {\it necessary} conditions for the depth formula. From the known results, one obvious candidate is the vanishing of $\Tor_{i}^{R}(M,N)$ for all $i\geq 1$. In general, one cannot hope for such a phenomenon; the depth formula is satisfied by any pair of finitely generated modules $(M,N)$ such that $\depth(M) =\depth(R)$ and $N$ has finite length. It is therefore somewhat surprising that a partial converse of Theorem \ref{Choi} can be obtained for a special class of rings. Here is one of the main corollaries of our results which we will prove in Section \ref{Tor}. Recall that the embedding dimension of $R$, denoted by $\edim(R)$, is the minimal number of generators of $\mathfrak m$. \begin{thm} \label{main1} Let $R$ be a Cohen-Macaulay local ring and let $M$ and $N$ be non-zero finitely generated $R$-modules. Set $e= \edim(R)- \depth(R)$ and assume: \begin{enumerate} \item $M$ has finite complete intersection dimension. 
\item $R_{p}$ is regular for each prime ideal $p$ of $R$ of height at most $e$. \item $\Tor^{R}_{i}(M,N)=0$ for all $i=1, \dots, \depth(R)-\depth(M\otimes_{R}N)$. \end{enumerate} Then $(M,N)$ satisfies the depth formula if and only if $\Tor^{R}_{i}(M,N)=0$ for all $i\geq 1$. \end{thm} In particular, we have: \begin{cor} Let $R$ be a Cohen-Macaulay local ring and let $M$ and $N$ be non-zero finitely generated $R$-modules. Set $e=\edim(R)- \depth(R)$ and assume: \begin{enumerate} \item $M$ has finite complete intersection dimension. \item $R_{p}$ is regular for each prime ideal $p$ of $R$ of height at most $e$. \item $\depth(M) =\depth(N) =\depth(R)$. \end{enumerate} Then $(M,N)$ satisfies the depth formula if and only if $\Tor^{R}_{i}(M,N)=0$ for all $i\geq 1$. \end{cor} In Section \ref{Ext} we investigate similar conditions on the depth of $M\otimes_RN$ that force the vanishing of $\Ext$ modules. Such conditions turn out to be quite useful for various applications. Our main result in this section is Theorem \ref{t1}. A consequence of this theorem, Corollary \ref{cor1}, implies: \begin{cor} Let $(R, \mathfrak m)$ be a $d$-dimensional local Cohen-Macaulay ring ($d\geq 1$) with a canonical module $\omega_R$ and let $M$ and $N$ be maximal Cohen-Macaulay $R$-modules. Assume $R$ has an isolated singularity. Then $(M,N)$ satisfies the depth formula if and only if $\Ext^i_R(M, \Hom_R(N,\omega_R))=0$ for all $i=1,\dots ,d$. \end{cor} Recall that a local ring $(R, \mathfrak m)$ is said to have an \emph{isolated singularity} provided $R_{p}$ is regular for all prime ideals $p$ of $R$ with $p \neq \mathfrak{m}$. A nonzero finitely generated module $M$ is called a \emph{maximal Cohen-Macaulay} module in case $\depth(M)=\dim(R)$. We exploit the main result of Section \ref{Ext} in several directions.
For example, Corollary \ref{c6} is a partial extension of a theorem of Auslander \cite[3.7]{Au} that concerns the torsion-freeness of $M\otimes M^*$ ($M^{\ast}=\Hom(M,R)$) over regular local rings $R$: \begin{cor} \label{t2} Let $R$ be an even-dimensional complete intersection that has an isolated singularity and let $M$ be a maximal Cohen-Macaulay $R$-module. If $\depth(M\otimes M^*)>0$, then $M$ is free. \end{cor} We also give a short proof of a result of Huneke and Leuschke \cite{HL}; this is a special case of the Auslander-Reiten conjecture \cite{AuRe} for Gorenstein rings (cf. also \cite{Ar}). In Proposition \ref{goodDepth} and Example \ref{exDepth}, we discuss a fairly general method to construct nonfree finitely generated modules $M$ and $N$ over certain Cohen-Macaulay normal local domains $R$ such that the tensor product $M\otimes_RN$ has high depth. Our example shows that, in contrast to the well-studied cases over regular or complete intersection rings, good depth of tensor products is a less restrictive phenomenon in general. Finally, we apply Theorem \ref{t1} to show that over Veronese subrings of power series rings, the only semi-dualizing modules are free or dualizing. The study of semi-dualizing modules has attracted some attention lately, and our results contribute a new class of rings whose semi-dualizing modules are completely understood. \section{On the converse of the depth formula}\label{Tor} This section is devoted to the connection between the depth formula and the vanishing of $\Tor$ modules. We start by recording the following observation: \begin{lem} \label{l1} Let $R$ be a local ring and let $M$ and $N$ be nonzero finitely generated $R$-modules. Assume $(M,N)$ satisfies the depth formula and that $\depth(R)>\depth(M)$. Set $M'=\syz^{R}_1(M)$. If $\Tor^{R}_{1}(M,N)=0$, then $\depth(M'\otimes_{R}N)=\depth(M\otimes_{R}N)+1$ and hence $(M',N)$ satisfies the depth formula.
\end{lem} \begin{proof} As $\Tor^{R}_{1}(M,N)=0$, one has the following exact sequence $$\ses {M'\otimes_RN}{F\otimes_RN }{M\otimes_RN},$$ for some free $R$-module $F$. Notice that the assumptions imply $\depth(F\otimes_RN)>\depth(M\otimes_RN)$. Hence counting the depths of the modules in the exact sequence above gives the desired result. \end{proof} We now recall some definitions needed for the rest of the paper. If $\textbf{F}:\ldots \rightarrow F_{2}\rightarrow F_{1} \rightarrow F_{0} \rightarrow 0$ is a minimal free resolution of $M$ over $R$, then the rank of $F_{n}$, that is, the integer $\dim_{k}(\Ext^{n}_{R}(M,k))$, is the $n$th \textit{Betti} number $\beta^{R}_{n}(M)$ of $M$. This integer is well-defined for all $n$ since minimal free resolutions over $R$ are unique up to isomorphism. $M$ has \emph{complexity} $r$, written as $\cx_{R}(M)=r$, provided $r$ is the least nonnegative integer for which there exists a real number $\gamma$ such that $\beta^{R}_{n}(M)\leq \gamma \cdot n^{r-1}$ for all $n\gg 0$ \cite[3.1]{Av1}. If there are no such $r$ and $\gamma$, then one sets $\cx_{R}(M)=\infty$. The notion of complexity, which is a homological characteristic of modules, was first introduced by Alperin in \cite{Alp} to study minimal projective resolutions of modules over group algebras. It was then brought into local algebra by Avramov \cite{Av1}. The complexity of $M$ measures how the Betti sequence $\beta^{R}_{0}(M), \beta^{R}_{1}(M), \dots$ behaves with respect to polynomial growth. In general complexity may be infinite; for example, if $R=k[X,Y]/(X^{2},XY,Y^{2})$, then $\cx_{R}(M)\in \{0,\infty \}$ \cite[4.2.2]{Av2}. It follows from the definition of complexity that $M$ has finite projective dimension if and only if $\cx_{R}(M)=0$, and has bounded Betti numbers if and only if $\cx_{R}(M)\leq 1$. 
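As a purely numerical illustration of this notion (not part of the arguments of this paper), one can heuristically grade the growth of a finite prefix of a Betti sequence. The sequences below mimic $\cx_R(M)=0,1,2$ and the exponential growth $\beta_n=2^n$ of the residue field over $R=k[X,Y]/(X^{2},XY,Y^{2})$, where $\mathfrak m^{2}=0$ and the first syzygy of $k$ is $k^{2}$; the detection rules are ad hoc, since a finite prefix can never certify the complexity.

```python
# Heuristic grading of Betti-number growth from a finite prefix of the
# sequence.  Recall cx_R(M) = r means r is least with
# beta_n <= gamma * n^(r-1) for some gamma and all n >> 0.

def cx_estimate(betti):
    """Guess the complexity from a finite prefix of a Betti sequence:
    0 when the tail is zero (finite projective dimension), a positive
    integer r when the tail grows like a polynomial of degree r-1, and
    float('inf') when the tail looks exponential."""
    tail = betti[len(betti) // 2:]          # inspect the tail only
    if all(b == 0 for b in tail):
        return 0
    ratios = [tail[i + 1] / tail[i] for i in range(len(tail) - 1) if tail[i]]
    # Exponential growth: successive ratios stay at or above a constant > 1
    # (for polynomial growth the ratios decrease towards 1).
    if len(ratios) >= 2 and ratios[-1] >= ratios[0] and ratios[-1] > 1.05:
        return float('inf')
    # Polynomial growth of degree d dies after d+1 finite-difference steps.
    seq, steps = list(tail), 0
    while any(seq) and steps < len(tail):
        seq = [seq[i + 1] - seq[i] for i in range(len(seq) - 1)]
        steps += 1
    return steps  # beta_n ~ n^(steps-1), so the guess for cx is `steps`

finite_pd = [1, 2, 1, 0, 0, 0, 0, 0]      # resolution stops: cx = 0
bounded   = [1, 2, 2, 2, 2, 2, 2, 2]      # bounded Betti numbers: cx = 1
linear    = [1, 2, 3, 4, 5, 6, 7, 8]      # beta_n ~ n: cx = 2
doubling  = [2 ** n for n in range(12)]   # beta_n = 2^n: cx infinite
estimates = [cx_estimate(s) for s in (finite_pd, bounded, linear, doubling)]
# [0, 1, 2, inf]
```

This matches the dichotomy recorded above: finite projective dimension corresponds to $\cx_R(M)=0$ and bounded Betti numbers to $\cx_R(M)\leq 1$, while the last sequence illustrates why $\cx_{R}(M)\in\{0,\infty\}$ over $k[X,Y]/(X^{2},XY,Y^{2})$.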
A \textit{quasi-deformation} of $R$ \cite{AGP} is a diagram $R \rightarrow S \twoheadleftarrow P$ of local homomorphisms, where $R\rightarrow S$ is flat and $S\twoheadleftarrow P$ is surjective with kernel generated by a regular sequence of $P$ contained in the maximal ideal of $P$. $M$ is said to have finite \textit{complete intersection dimension}, denoted by $\CI_{R}(M)<\infty$, if there exists a quasi-deformation $R \rightarrow S \twoheadleftarrow P$ such that $\pd_{P}(M\otimes_{R}S)<\infty$. It follows from the definition that modules of finite projective dimension and modules over complete intersection rings have finite complete intersection dimension. There are also local rings $R$ that are not complete intersections, and finitely generated $R$-modules that do not have finite projective dimension but have finite complete intersection dimension (cf. for example \cite[\textnormal{Chapter 4}]{AGP}). A result of Avramov, Gasharov and Peeva shows that finite complete intersection dimension implies finite complexity; more precisely, if $\CI_{R}(M)<\infty$, then $\cx_{R}(M) \leq \edim(R)-\depth(R)$ \cite[5.6]{AGP}. In particular, if $R$ is a complete intersection, then $\cx_{R}(M)$ cannot exceed $\edim(R)-\dim(R)$, namely the \emph{codimension} of $R$ (cf. also \cite{Gu}). Assume that the natural map $M \rightarrow M^{\ast\ast}$ is injective. Let $\{f_{1},f_{2},\dots, f_{m}\}$ be a minimal generating set for $M^{\ast}$ and let $\displaystyle{\delta: R^{(m)} \twoheadrightarrow M^{\ast}}$ be defined by $\delta(e_{i})=f_{i}$ for $i=1,2,\dots, m$ where $\{e_{1},e_{2},\dots, e_{m}\}$ is the standard basis for $R^{(m)}$. Then, composing the natural map $M\hookrightarrow M^{\ast\ast}$ with $\delta^{\ast}$, we obtain a short exact sequence $$\;\;\textnormal{(PF)}\;\;\; 0 \rightarrow M \stackrel{u}{\rightarrow} R^{(m)} \rightarrow M_{1} \rightarrow 0$$ where $u(x)=(f_{1}(x),f_{2}(x),\dots, f_{m}(x))$ for all $x\in M$. 
Any module $M_{1}$ obtained in this way is referred to as a \emph{pushforward} of $M$ \cite[Lemma 3.4 and page 49]{EG} (cf. also \cite{HJW}). We should note that such a construction is unique, up to a non-canonical isomorphism (cf. \textnormal{page} 62 of \cite{EG}). Throughout the rest of the paper $X^{n}(R)$ denotes the set $\{p\in \text{Spec}(R): \text{depth}(R_{p}) \leq n \}$. \begin{thm} \label{p1} Let $R$ be a Cohen-Macaulay local ring and let $M$ and $N$ be nonzero finitely generated $R$-modules. Let $w$ be a nonnegative integer such that $w\leq r$ where $r=\cx_{R}(M)$. Assume: \begin{enumerate} \item $\CI_{R}(M)<\infty$. \item $\Tor_{i}^{R}(M,N)_{p}=0$ for all $i\gg 0$ and for all $p \in X^{r-w}(R)$. \item $\Tor^{R}_{i}(M,N)=0$ for all $i=1, \dots, \depth(R)-\depth(M\otimes_{R}N)+w$. \end{enumerate} Then $(M,N)$ satisfies the depth formula if and only if $\Tor^{R}_{i}(M,N)=0$ for all $i\geq 1$. \end{thm} \begin{proof} By Theorem \ref{Choi} it suffices to prove the case where $(M,N)$ satisfies the depth formula. Assume $(M,N)$ satisfies the depth formula and set $n(M,N)= \depth(R)-\depth(M\otimes_{R}N)$. We shall proceed by induction on $n(M,N)$. Since any syzygy of $M$ has finite complete intersection dimension \cite[1.9]{AGP}, in view of Lemma \ref{l1} and the induction hypothesis, it is enough to prove the case where $n(M,N)=0$. Assume now $n(M,N)=0$. Then $M$, $N$ and $M\otimes_{R}N$ are maximal Cohen-Macaulay. We may assume $r>0$; otherwise $M$ will be free by the Auslander-Buchsbaum formula. Since $\CI_{R}(M)=0$ \cite[1.4]{AGP} and $\ds{\CI_{R_{p}}(M_{p})\leq \CI_{R}(M)}$ for each prime ideal $p$ of $R$ \cite[1.6]{AGP}, we have, by (2) and \cite[4.2]{ArY}, that $\Tor_{i}^{R}(M,N)_{p}=0$ for all $i\geq 1$ and for all $p \in X^{r-w}(R)$. This shows, since $r-w\geq 0$, we may assume that $\dim(R)>0$. As $\CI_{R}(M)=0$, by the pushforward construction (cf. 
also \cite[Proposition 11]{Mas}), there are exact sequences $$(\ref{p1}.1)\;\;\; 0 \rightarrow M_{j-1} \rightarrow F_{j} \rightarrow M_{j} \rightarrow 0 $$ where $M_{0}=M$, $\CI_{R}(M_{j})=0$ and $F_{j}$ is a finitely generated free $R$-module, for each positive integer $j$. Assume now $j=1$. Then tensoring $(\ref{p1}.1)$ with $N$ and noting that $\Tor_1^R(M_1,N)$ is not supported in $X^{r-w}(R)$, we conclude that $\Tor_1^R(M_1,N)=0$. If $j\geq 2$, continuing in a similar fashion, we see $\Tor^{R}_{1}(M_{r-w+1},N)=\dots=\Tor^{R}_{r-w+1}(M_{r-w+1},N)=0$; this follows from the fact that $M_{i}\otimes_{R}N$ is torsion-free for all $i=0,1, \dots, r-w$ (cf. the proof of \cite[2.1]{HJW}). Hence, by (3), we have $\Tor^{R}_{1}(M_{r-w+1},N)=\dots=\Tor^{R}_{r+1}(M_{r-w+1},N)=0$. It now follows from \cite[2.6]{Jor} that $\Tor^{R}_{i}(M_{r-w+1},N)=0$ for all $i\geq 1$. Thus $(\ref{p1}.1)$ shows that $\Tor^{R}_{i}(M,N)=0$ for all $i\geq 1$. \end{proof} Theorem \ref{main1} advertised in the introduction now follows rather easily: \begin{proof}(of Theorem \ref{main1}) We know, by \cite[5.6]{AGP}, that $\cx_R(M)\leq e$. Therefore Proposition \ref{p1} yields the desired conclusion. \end{proof} \begin{cor} \label{corintro} Let $R$ be a complete intersection of codimension $c$ and let $M$ and $N$ be nonzero finitely generated $R$-modules. Assume: \begin{enumerate} \item $R_{p}$ is regular for each $p\in X^{c}(R)$. \item $\Tor^{R}_{i}(M,N)=0$ for all $i=1, \dots, \depth(R)-\depth(M\otimes_{R}N)$. \end{enumerate} Then $(M,N)$ satisfies the depth formula if and only if $\Tor^{R}_{i}(M,N)=0$ for all $i\geq 1$. \end{cor} It follows from Corollary \ref{corintro} and \cite[4.7 \textnormal{and} 6.1]{AvBu} that: \begin{cor} Let $R$ be a complete intersection of codimension $c$ and let $M$ and $N$ be maximal Cohen-Macaulay $R$-modules. Assume $R_{p}$ is regular for each $p\in X^{c}(R)$. Then the following are equivalent: \begin{enumerate} \item $M\otimes_{R}N$ is maximal Cohen-Macaulay. 
\item $\Tor^{R}_{i}(M,N)=0$ for all $i\geq 1$. \item $\Ext^{i}_{R}(M,N)=0$ for all $i\geq 1$. \item $\Ext^{i}_{R}(N,M)=0$ for all $i\geq 1$. \end{enumerate} \end{cor} \section{Depth of $M\otimes_{R}N$ and the vanishing of $\Ext_{R}^{i}(M,N)$}\label{Ext} In this section we investigate the connection between the depth of $M\otimes_{R}N$ and the vanishing of certain $\Ext$ modules. We illustrate this connection by giving a number of applications, including another proof of a special case of the Auslander-Reiten conjecture, which was first proved by Huneke and Leuschke in \cite{HL}. We also extend a result of Auslander (on the torsion-freeness of $M\otimes_{R}M^{\ast}$) to hypersurface singularities and discuss its possible extension to complete intersections. We finish this section by showing a method to construct non-trivial examples of modules $M$ and $N$ such that $M\otimes_RN$ has high depth. If $R$ has a canonical module $\omega_{R}$, we set $M^{\vee}=\Hom_R(M,\omega_{R})$. $M$ is said to be locally free on $U_{R}$ provided $M_{p}$ is a free $R_{p}$-module for all $p\in X^{d-1}(R)$ where $d=\dim(R)$ (Recall that $X^{n}(R)=\{p\in \text{Spec}(R): \text{depth}(R_{p}) \leq n \}$). Now we follow the proof of \cite[3.10]{Yo} and record a useful lemma: \begin{lem}\label{ExtTor} Let $R$ be a $d$-dimensional local Cohen-Macaulay ring with a canonical module $\omega_{R}$ and let $M$ and $N$ be finitely generated $R$-modules. Assume $M$ is locally free on $U_R$ and $N$ is maximal Cohen-Macaulay. Then the following isomorphism holds for all positive integers $i$: $$ \Ext_R^{d+i}(M,N^{\vee}) \cong \Ext_{R}^d(\Tor_i^R(M,N),\omega_{R})$$ \end{lem} \begin{proof} By \cite[10.62]{Roit} there is a third quadrant spectral sequence: $$\Ext^p_{R}(\Tor_q^R(M,N),\omega_{R}) \underset{p}{\Longrightarrow} \Ext_R^n(M,N^{\vee}) $$ Since $M$ is locally free on $U_R$, $\Tor_q^R(M,N)$ has finite length for all $q>0$. Therefore, unless $p=d$, $\Ext^p_{R}(\Tor_q^R(M,N),\omega_{R})=0$. 
It follows that the spectral sequence considered collapses and hence gives the desired isomorphism. \end{proof} Before we prove the main result of this section, we recall some relevant definitions and facts. An $R$-module $M$ is said to satisfy \emph{Serre's condition} $(S_{n})$ if $\depth_{R_{p}}(M_{p})\geq \text{min}\left\{n, \dim(R_{p})\right\}$ for all $p\in \Spec(R)$ \cite{EG}. If $R$ is Cohen-Macaulay, then $M$ satisfies $(S_{n})$ if and only if every $R$-regular sequence $x_{1},x_{2},\dots, x_{k}$, with $k\leq n$, is also an $M$-regular sequence \cite{Sam}. In particular, if $R$ is Cohen-Macaulay, then $M$ satisfies $(S_{1})$ if and only if it is torsion-free. Moreover, if $R$ is Gorenstein, then $M$ satisfies $(S_{2})$ if and only if it is reflexive, that is, the natural map $M \rightarrow M^{\ast\ast}$ is bijective (cf. \cite[3.6]{EG}). More generally, over a local Gorenstein ring, $M$ satisfies $(S_{n})$ if and only if it is an $n$th syzygy (cf. \cite{Mas}). We now improve a theorem of Huneke and Jorgensen \cite{HJ}; a special case of Theorem \ref{t1}, namely the case where $R$ is Gorenstein and $n=\dim(R)$, was proved in \cite[5.9]{HJ} by different techniques (cf. also \cite[4.6]{HW1}). \begin{thm} \label{t1} Let $R$ be a $d$-dimensional local Cohen-Macaulay ring with a canonical module $\omega_R$ and let $M$ and $N$ be finitely generated $R$-modules. Assume: \begin{enumerate} \item There exists an integer $n$ such that $1\leq n \leq \depth(M)$. \item $M$ is locally free on $U_R$. \item $N$ is maximal Cohen-Macaulay. \end{enumerate} Then $\depth(M\otimes_{R}N)\geq n$ if and only if $\Ext^i_R(M,N^{\vee})=0$ for all $i=d-n+1,\dots, d-1,d$. \end{thm} \begin{proof} Note that $M$ is a torsion-free $R$-module; it is locally free on $U_R$ and has positive depth. Hence it follows from \cite[1.4.1(a)]{BH} (cf. also the proof of \cite[3.5]{EG}) that the natural map $M \rightarrow M^{\ast\ast}$ is injective. Therefore, by the pushforward construction (cf.
section \ref{Tor}), there are exact sequences $$(\ref{t1}.1)\;\;\; 0 \rightarrow M_{j-1} \rightarrow F_{j} \rightarrow M_{j} \rightarrow 0 $$ where $M_{0}=M$, $F_{j}$ is a finitely generated free $R$-module and $\depth(M_j) \geq \depth(M)-j$ for all $j=1,\dots,\depth(M)$. Furthermore it is clear from the construction that each $M_j$ is also locally free on $U_R$. We now proceed by induction on $n$. As $N$ is maximal Cohen-Macaulay and $M$ is locally free on $U_R$, $\depth(M\otimes_{R}N)\geq n$ if and only if $M\otimes_{R}N$ satisfies Serre's condition $(S_{n})$. First assume $n=1$. We will prove that $M\otimes_{R}N$ is torsion-free if and only if $\Ext^d_R(M,N^{\vee})=0$. Consider the short exact sequence: $$(\ref{t1}.2)\;\;\; \ses{M}{F_1}{M_1}$$ Tensoring (\ref{t1}.2) with $N$, we see that $M\otimes_RN$ is torsion-free if and only if $\Tor_1^R(M_{1},N)=0$. Therefore, by Lemma \ref{ExtTor} and \cite[3.5.8]{BH}, $M\otimes_{R}N$ is torsion-free if and only if $\Ext_R^{d+1}(M_{1}, N^{\vee})\cong \Ext_R^{d}(M, N^{\vee})=0$. This proves, in particular, the case where $d=1$. Hence we may assume $d\geq 2$ for the rest of the proof. Now assume $n>1$. We claim that the following are equivalent: $$\text{(i) } \depth(M\otimes_RN)\geq n, \qquad \text{(ii) } \depth(M\otimes_RN)\geq 1 \text{ and } \depth(M_1\otimes_RN)\geq n-1.$$ Note that, if either (i) or (ii) holds, then $M\otimes_{R}N$ is torsion-free and hence $\Tor_1^R(M_{1},N)=0$. Thus we have the following exact sequence: $$(\ref{t1}.3)\;\;\; \ses{M\otimes_RN}{F_1\otimes_RN}{M_1\otimes_RN} $$ Now it is clear that (i) implies (ii) since the module $F_1\otimes_RN$ in (\ref{t1}.3) is maximal Cohen-Macaulay. Similarly, by counting the depths of the modules in (\ref{t1}.3), we see that (ii) implies (i). Hence, by our claim and the induction hypothesis, $\depth (M\otimes_RN)\geq n$ if and only if $\Ext_R^{d}(M, N^{\vee})=0$ \emph{and} $\Ext_R^{i}(M_1, N^{\vee})=0$ for all $i=d-n+2,\dots, d-1,d$.
Since $M$ is a first syzygy of $M_1$, we are done. \end{proof} \begin{cor}\label{cor1} Let $R$ be a $d$-dimensional local Cohen-Macaulay ring with a canonical module $\omega_R$ and let $M$ and $N$ be maximal Cohen-Macaulay $R$-modules. Assume $M$ is locally free on $U_R$ (e.g., $R$ has an isolated singularity) and $n$ is an integer such that $1\leq n\leq d$. Then $\depth(M\otimes_{R}N)\geq n$ if and only if $\Ext^i_R(M,N^{\vee})=0$ for all $i=d-n+1,\dots, d-1,d$. \end{cor} We will use Corollary \ref{cor1} and investigate the depth of $M\otimes_{R}M^{\ast}$. First we briefly review some of the related results in the literature. Auslander \cite[3.3]{Au} proved that, under mild conditions, good depth properties of $M$ and $M\otimes_{R}M^{\ast}$ force $M$ to be free. In particular the depth formula rarely holds for the pair $(M, M^{\ast})$. More precisely the following result can be deduced from the proof of \cite[3.3]{Au}: \begin{thm}\label{Aus} (Auslander \cite{Au}, cf. also \cite[5.2]{HW1}) Let $R$ be a local Cohen-Macaulay ring and let $M$ be a finitely generated torsion-free $R$-module. Assume $M_{p}$ is a free $R_{p}$-module for each $p\in X^{1}(R)$. If $M\otimes_{R}M^{\ast}$ is reflexive, then $M$ is free. \end{thm} The conclusion of Auslander's result fails if $M\otimes_RM^*$ is a torsion-free module that is \emph{not} reflexive: \begin{eg} \label{eg1}Let $R=k[[X,Y,Z]]/(XY-Z^2)$ and let $I$ be the ideal of $R$ generated by $X$ and $Y$. Then it is clear that $R$ is a two-dimensional normal hypersurface domain and $I$ is locally free on $U_R$. Consider the short exact sequence: $$(\ref{eg1}.1) \;\; \;0 \to I \to R \to R/I \to 0$$ Since $\dim(R/I)=0$, it follows from (\ref{eg1}.1) and the depth lemma that $\depth(I)=1$. Furthermore, as $R/I$ is torsion and $\Ext^{1}_{R}(R/I,R)=0$ \cite[3.3.10]{BH}, applying $\Hom(-,R)$ to (\ref{eg1}.1), we conclude that $I^{\ast}=\Hom(I,R) \cong R$.
Therefore $I\otimes_{R}I^{\ast}\cong I$ is a torsion-free module that is not reflexive. \end{eg} Example \ref{eg1} raises the question of what can be deduced if one merely assumes $M\otimes_RM^*$ is torsion-free in Theorem \ref{Aus}. Auslander studied this question and proved the following result: \begin{thm}\label{Aus2} (\cite[3.7]{Au}) Let $R$ be an \emph{even} dimensional regular local ring and let $M$ be a finitely generated $R$-module. Assume $M$ is locally free on $U_R$. Assume further that $\depth(M) =\depth(M^{\ast})$. If $\depth(M\otimes_{R}M^{\ast})>0$, then $M$ is free. \end{thm} Auslander's original result assumes that the ring considered in Theorem \ref{Aus2} is unramified but this assumption can be removed by Lichtenbaum's $\Tor$-rigidity result \cite{Li}. Auslander also showed that such a result is no longer true for odd dimensional regular local rings; if $R$ is a regular local ring of odd dimension greater than one, then there exists a non-free finitely generated $R$-module $M$ such that $M$ is locally free on $U_R$, $M\cong M^{\ast}$ and $M\otimes_{R}M^{\ast}$ is torsion-free. This is fascinating since it indicates that the parity of the dimension of $R$ may affect the homological properties of $R$-modules. Next we will prove that the conclusion of Theorem \ref{Aus2} carries over to certain types of hypersurfaces. Recall that if $R$ is a local Gorenstein ring and $M$ is a finitely generated $R$-module such that $M$ is locally free on $U_{R}$ and $\depth(M)>0$ (respectively $\depth(M)>1$), then $M$ is torsion-free (respectively reflexive). Note that the depth of the zero module is defined as $\infty$ (cf. \cite{HJW}). \begin{prop} \label{hyp} Let $R$ be an even dimensional hypersurface with an isolated singularity such that $\widehat R\cong S/(f)$ for some unramified (or equicharacteristic) regular local ring $S$ and let $M$ be a finitely generated $R$-module. Assume $M$ is locally free on $U_R$. Assume further that $\depth(M) =\depth(M^{\ast})$.
If $\depth(M\otimes_RM^{\ast})>0$, then $M$ is free. \end{prop} \begin{proof} Notice $R$ is a domain since it is normal. If $M^{\ast}=0$, then $\depth(M)=\depth(0)=\infty$ so that $M=0$. Hence we may assume $M^{\ast} \neq 0 \neq M$. Therefore, since $\depth(M)=\depth(M^{\ast})\geq 1$ and $M$ is locally free on $U_{R}$, $M$ is torsion-free. This shows that $M$ can be embedded in a free module: $$(\ref{hyp}.1)\;\;\; 0 \rightarrow M \rightarrow R^{(m)} \rightarrow M_{1} \rightarrow 0 $$ Tensoring (\ref{hyp}.1) with $M^{\ast}$, we conclude that $\Tor_{1}^{R}(M_{1},M^{\ast})=0$. It follows from \cite[4.1]{Da3} that the pair $(M_{1},M^{\ast})$ is $\Tor$-rigid. Thus $\Tor_{i}^{R}(M_{1},M^{\ast})=0=\Tor_{i}^{R}(M,M^{\ast})$ for all $i\geq 1$. Now Theorem \ref{Choi} implies that $\displaystyle{\depth(M)+\depth(M^{\ast})=\dim(R)+\depth(M\otimes_{R}M^{\ast})}$. Write $\dim(R)=2 \cdot n$ for some integer $n$. Then $\depth(M\otimes_RM^{\ast})=2 \cdot (\depth(M)-n)$ is a positive integer by assumption. This implies $\depth(M\otimes_{R}M^{\ast})\geq 2$ so that the result follows from Theorem \ref{Aus}. \end{proof} We suspect that Proposition \ref{hyp} is true for all even dimensional {\it complete intersection} rings. \begin{conj}\label{conjAu} Let $R$ be an even dimensional complete intersection with an isolated singularity and let $M$ be a finitely generated $R$-module. Assume $M$ is locally free on $U_R$. Assume further that $\depth(M) =\depth(M^{\ast})$. If $\depth(M\otimes_RM^{\ast})>0$, then $M$ is free. \end{conj} As additional supporting evidence, we prove a special case of Conjecture \ref{conjAu}, namely the case where $M$ is maximal Cohen-Macaulay: \begin{cor} \label{c6} Let $R$ be an even dimensional local Gorenstein ring and let $M$ be a maximal Cohen-Macaulay $R$-module that is locally free on $U_R$. Assume $\CI_{R}(M)<\infty$ and $\depth(M\otimes_{R}M^{\ast})>0$. Then $M$ is free.
\end{cor} \begin{proof} It follows from Corollary \ref{cor1} that $\Ext_{R}^{d}(M,M)=0$ where $d=\dim(R)$. Since $d$ is even, \cite[4.2]{AvBu} implies that $\pd_{R}(M)<\infty$. Hence $M$ is free by the Auslander-Buchsbaum formula. \end{proof} Recall that modules over complete intersections have finite complete intersection dimension. Therefore we immediately obtain Corollary \ref{t2} as advertised in the introduction: \begin{cor} Let $R$ be an even dimensional complete intersection that has an isolated singularity and let $M$ be a maximal Cohen-Macaulay $R$-module. If $\depth(M\otimes M^*)>0$, then $M$ is free. \end{cor} We now present examples showing that the conclusion of Corollary \ref{c6} fails for odd dimensional local rings; more precisely we show that, for each positive odd integer $n$, there exists an $n$-dimensional hypersurface $R$ and a \emph{nonfree} maximal Cohen-Macaulay $R$-module $M$ such that $R$ has an isolated singularity and $\depth(M\otimes_{R}M^{\ast})>0$. We will use the next result which is known as \emph{Kn\"{o}rrer's periodicity theorem} (cf. \cite[12.10]{Yo}): \begin{thm} (Kn\"{o}rrer)\label{kr} Let $R=k[[X_{1}, X_{2}, \dots, X_{n}]]/(f)$ and let $R^{\sharp\sharp}=R[[U,V]]/(f+UV)$ where $k$ is an algebraically closed field of characteristic zero. Suppose $\underline{\MCM}(R)$ denotes the stable category of maximal Cohen-Macaulay $R$-modules. Then there is an equivalence of categories: $$\Omega: \underline{\MCM}(R) \to \underline{\MCM}(R^{\sharp\sharp})$$ Here the objects of $\underline{\MCM}(R)$ are maximal Cohen-Macaulay $R$-modules and the $\Hom$ sets are defined as $\displaystyle{ \underline{\Hom}_{R}(M,N)=\frac{\Hom_{R}(M,N)}{S(M,N)} }$ where $S(M,N)$ is the $R$-submodule of $\Hom_{R}(M,N)$ that consists of $R$-homomorphisms factoring through free $R$-modules. \end{thm} As $\underline{\Hom}_{R}(M,N) \cong \Ext_{R}^{2}(M,N)$ and $\Omega(\syz^{R}_{1}(M)) \cong \syz^{R^{\sharp\sharp}}_{1}(\Omega(M))$ (cf.
\cite[Chapter 12]{Yo}), it follows from Theorem \ref{kr} that $\Ext_{R}^{j}(M,N)=0$ if and only if $\Ext_{R^{\sharp\sharp}}^{j}(\Omega(M),\Omega(N))=0$ for $j\geq 0$. \begin{eg} \label{e3} We use induction and prove that if $n$ is a positive odd integer, then there exists an $n$-dimensional hypersurface $R$ and a non-free maximal Cohen-Macaulay $R$-module $M$ such that $R$ has an isolated singularity and $\depth(M\otimes_{R}M^{\ast})>0$, that is, $M\otimes_{R}M^{\ast}$ is torsion-free. Throughout the example, $k$ denotes an algebraically closed field of characteristic zero. Assume $n=1$. Let $R=k[[X,Y]]/(f)$ for some element $f$ in $k[[X,Y]]$ such that $f$ is reducible and has no repeated factors. We pick $M=k[[X,Y]]/(g)$ for some element $g\in k[[X,Y]]$ such that $g$ divides $f$. Then it can be easily checked that $R$ has an isolated singularity (that is $R$ is reduced), $M$ is torsion-free, $M^{\ast}\cong M$ and $M\otimes_{R}M^{\ast}\cong M$. It is also easy to see that $\Ext_{R}^{2i-1}(M,M)=0$ for all $i\geq 1$. Assume now $n=2d+1$ for some positive integer $d$. Suppose there exist a hypersurface $R=k[[X_{1}, X_{2}, \dots, X_{2d}]]/(f)$ that has an isolated singularity and a non-free maximal Cohen-Macaulay $R$-module $M$ such that $\Ext^{2i-1}_{R}(M,M)=0$ for all $i\geq 1$. Then, using the notation of Theorem \ref{kr}, we set $S=R^{\sharp\sharp}$ and $N=\Omega(M)$. Hence $S$ is an $n$-dimensional hypersurface that has an isolated singularity and $N$ is a non-free maximal Cohen-Macaulay $S$-module. It follows from the discussion following Theorem \ref{kr} that $\Ext^{2i-1}_{S}(N,N)=0$ for all $i\geq 1$. Thus Theorem \ref{t1} implies that $\depth_{S}(N\otimes_{S}N^{\ast})>0$ where $N^{\ast}=\Hom_{S}(N,S)$ (indeed, as $N$ is not free, $\depth_{S}(N\otimes_{S}N^{\ast})=1$ by Theorem \ref{Aus}.)
\end{eg} \begin{rmk} It is proved in Example \ref{e3} that there are many one-dimensional hypersurfaces $R$ and non-free finitely generated $R$-modules $M$ such that $M$ and $M\otimes_{R}M^{\ast}$ are torsion-free. However it is not known whether there exist such modules over one-dimensional complete intersection \emph{domains} of codimension at least two. This was first addressed by Huneke and Wiegand in \cite[page 473]{HW1} (cf. also \cite[4.1.6]{Ce} and \cite[3.1]{HW1}). \end{rmk} The Auslander-Reiten conjecture (for commutative local rings) states that if $R$ is a local ring and $M$ is a finitely generated $R$-module such that $\Ext_R^i(M,M\oplus R)=0$ for all $i>0$, then $M$ is free \cite{AuRe}. Huneke and Leuschke \cite{HL} proved that this long-standing conjecture is true for Gorenstein rings that are complete intersections in codimension one (see also \cite[Theorem 3]{Ar} and \cite[5.9]{HJ}). As another application of Corollary \ref{cor1} we give a short proof of this result. \begin{thm}(Huneke-Leuschke \cite{HL})\label{Ar} Let $R$ be a local Gorenstein ring and let $M$ be a finitely generated $R$-module. Assume $R_{p}$ is a complete intersection for all $p\in X^{1}(R)$. If $\Ext^{i}_{R}(M,M\oplus R)=0$ for all $i>0$, then $M$ is free. \end{thm} \begin{proof}[A proof of Theorem \ref{Ar}] Since $R$ is Gorenstein and $\Ext^{i}_{R}(M,R)=0$ for all $i>0$, $M$ is maximal Cohen-Macaulay. We proceed by induction on $\dim(R)$. If $R$ has dimension at most one, then the result follows from \cite[1.8]{ADS}. Otherwise, by the induction hypothesis, $M$ is locally free on $U_R$. Hence Corollary \ref{cor1} implies that $\depth(M\otimes_{R}M^{\ast})\geq 2$. Therefore $M$ is free by Theorem \ref{Aus}. \end{proof} Next we exploit Theorem \ref{t1} and construct finitely generated modules $M$ and $N$ such that $M\otimes_RN$ has high depth.
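Before carrying out this construction, it may help to see the criterion of Corollary \ref{cor1} verified by hand in the smallest possible case; the ring and modules in the following example are our own illustrative choice and do not appear elsewhere in the text.

```latex
\begin{eg}
Let $R=k[[x,y]]/(xy)$, a one-dimensional hypersurface with an isolated
singularity; in particular $\omega_R\cong R$, so that
$N^{\vee}\cong N^{\ast}$ for every finitely generated $R$-module $N$.
Set $M=R/(x)$ and $N=R/(y)$. Both modules are maximal Cohen-Macaulay and
locally free on $U_R$. Using the periodic resolution
$\cdots \rightarrow R \xrightarrow{\,y\,} R \xrightarrow{\,x\,} R
\rightarrow M \rightarrow 0$, one computes
$N^{\vee}\cong (0:_{R}y)=xR\cong N$ and
$\Ext^{1}_{R}(M,N^{\vee})\cong N/xN\cong k\neq 0$; correspondingly
$M\otimes_{R}N\cong R/(x,y)\cong k$ has depth $0$, as Corollary
\ref{cor1} (with $n=d=1$) predicts. Taking instead $N=M$, one finds
$M^{\vee}\cong M$, $\Ext^{1}_{R}(M,M^{\vee})=0$, and
$M\otimes_{R}M\cong M$ has depth $1$, again in accordance with the
corollary.
\end{eg}
```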
Recall that the divisor class group $\Cl(R)$ of a normal domain $R$ is the group of isomorphism classes of rank one reflexive $R$-modules. If $[I]$ represents the class of an element in $\Cl(R)$, then the group law of this group can be defined via: $[\Hom_R(I,J)]=[J]-[I]$ (cf. for example \cite{BourCommChp7}). \begin{prop}\label{goodDepth} Let $R$ be a $d$-dimensional local Cohen-Macaulay normal domain with a canonical module $\omega_{R}$. Assume there exists a nonfree maximal Cohen-Macaulay $R$-module $N$ of rank one. Assume further that $N$ is locally free on $U_R$. If $d\geq 3$, then there exists a nonfree finitely generated $R$-module $M$ such that $\depth(M\otimes N) \geq d-2$. \end{prop} \begin{proof} Recall that maximal Cohen-Macaulay modules over normal domains are reflexive \cite{BH}. Therefore $[\omega_R]\in \Cl(R)$. Furthermore $N^{\vee}$ represents an element in $\Cl(R)$ (cf. for example \cite[1.5]{Vas}). Thus $[\Hom(N^*,N^{\vee})]=[\omega_R]$ and hence $\Hom(N^*,N^{\vee}) \cong \omega_R$ (recall that $(-)^{\vee}=\Hom_R(-,\omega_{R})$). This implies that $\Hom(N^*,N^{\vee})$ is maximal Cohen-Macaulay. Let $M_0=N^*$. Since $M_0$ is locally free on $U_R$ and $N^{\vee}$ is maximal Cohen-Macaulay, it follows from \cite[2.3]{Da4} that $\Ext_R^i(M_0, N^{\vee})=0$ for all $i=1, \dots ,d-2$. Applying the pushforward construction twice, we obtain a module $M$ such that $M_0$ is a second syzygy of $M$. Thus $\Ext_R^i(M,N^{\vee})=0$ for all $i=3, \dots, d$. Now Theorem \ref{t1} implies that the depth of $M\otimes_RN$ is at least $d-2$. \end{proof} \begin{rmk} \label{Veronese} Let $k$ be a field and let $S=k[[x_1,\cdots,x_d]]$ be the formal power series ring over $k$. For a given integer $n$ with $n>1$, we let $R=S^{(n)}$ be the subring of $S$ generated by monomials of degree $n$, that is, the $n$th \emph{Veronese} subring of $S$. Then $R$ is a Cohen-Macaulay local ring with a canonical module $w_{R}$ (since it is complete), and $R$ has an isolated singularity.
The class group of $R$ is well-understood (cf. for example \cite[Example 4.2]{Anu} or \cite[Example 2.3.1 and 4.2.2]{BG}). We have $\Cl(R) \cong \mathbb{Z}/n\mathbb{Z}$ and it is generated by the element $L = x_1S\cap R$. \end{rmk} Throughout the rest of the paper, $k[[x_1,\cdots,x_d]]^{(n)}$ will denote the $n$th ($n>1$) Veronese subring of the formal power series ring $k[[x_1,\cdots,x_d]]$ over a field $k$. \begin{eg}\label{exDepth} Let $R=k[[x_1,\cdots,x_d]]^{(n)}$. If $[N]$ is a nontrivial element in $\Cl(R)$, then it follows from Proposition \ref{goodDepth} and Remark \ref{Veronese} that there exists a nonfree finitely generated $R$-module $M$ such that $\depth(M\otimes_RN)\geq d-2$. \end{eg} We will finish this section with an application concerning semidualizing modules. Recall that a finitely generated module $C$ over a Noetherian ring $R$ is called \textit{semidualizing} if the natural homothety homomorphism $R \longrightarrow \Hom_{R}(C,C)$ is an isomorphism and $\Ext^{i}_{R}(C,C)=0$ for all $i\geq 1$ (cf. for example \cite{Wag1} for the basic properties of semidualizing modules). We will prove that there is no nontrivial semidualizing module over $R=k[[x_1,\cdots,x_d]]^{(n)}$, that is, if $C$ is a semidualizing module over $R$, then either $C \cong R$ or $C \cong w_{R}$, where $w_{R}$ is the canonical (dualizing) module of $R$ (cf. Corollary \ref{last corollary}). We begin with a lemma: \begin{lem} \label{lemma for semidualizing} Let $R=k[[x_1,\cdots,x_d]]^{(n)}$ and let $L$ represent the generator of $\Cl(R)=\mathbb{Z}/n\mathbb{Z}$ as in Remark \ref{Veronese}. Then $\displaystyle{\mu(L^{(i)})=\frac{(n+d-i-1)!}{(d-1)!}}$ for $i=1, \dots, n-1$, where $\mu(L^{(i)})$ denotes the minimal number of generators of $L^{(i)}$, the $i$th symbolic power of $L$, which represents the element $i[L]$ in $\Cl(R)$.
\end{lem} \begin{proof} Note that $S=R\oplus L \oplus L^{(2)} \oplus \dots \oplus L^{(n-1)}$, where $L^{(i)}$ is generated by the monomials of degree $n$ divisible by $x^{i}_{1}$. Therefore $\mu(L^{(i)})$ is the number of monomials of degree $n-i$, which is exactly $\displaystyle{\frac{(n-i+d-1)!}{(d-1)!}}$. \end{proof} \begin{prop} \label{proposition for semidualizing} Let $R=k[[x_1,\cdots,x_d]]^{(n)}$ and let $I$ and $J$ be elements in $\Cl(R)$. If $I\otimes_{R}J$ is reflexive, then $I\cong R$ or $J\cong R$. \end{prop} \begin{proof} As $I\otimes_{R}J$ is reflexive, it is torsion-free so that $I\otimes_{R}J \cong IJ$. This implies $\mu(I)\mu(J)=\mu(IJ)$. Assume $L$ is a generator of $\Cl(R)=\mathbb{Z}/n\mathbb{Z}$ so that $I=L^{(i)}$ and $J=L^{(j)}$ for some $i$ and $j$. Suppose $I$ and $J$ are not free, that is, $0<i\leq n-1$ and $0<j\leq n-1$. Suppose first $i+j<n$. Notice, if $a<b<n$, then $\mu(L^{(a)})>\mu(L^{(b)})$ by Lemma \ref{lemma for semidualizing}. Therefore the case where $i+j<n$ contradicts the fact that $\mu(I)\mu(J)=\mu(IJ)$. Next assume $i+j\geq n$, that is, $i+j=n+h$ for some nonnegative integer $h$. Then $L^{(i+j)} \cong L^{(h)}$ and hence, by Lemma \ref{lemma for semidualizing}, we obtain: $$(n-i+d-1)! \cdot (n-j+d-1)! = (n-h+d-1)! \cdot (d-1)!$$ Setting $N=n-h+2(d-1)$, we conclude: $$ {N\choose n-i+d-1} = {N\choose n-h+d-1} $$ Now, without loss of generality, we may assume $i\leq j$. Set $N_{1}=n-i+d-1$ and $N_{2}=n-h+d-1$. Then $\displaystyle{N_{2}>N_{1}\geq \frac{N}{2}}$; indeed, $N_{2}-N_{1}=i-h=n-j>0$ and $2N_{1}-N=n+h-2i=j-i\geq 0$. Hence the above equality is impossible, since the binomial coefficients ${N\choose k}$ are strictly decreasing for $\frac{N}{2}\leq k\leq N$. Therefore either $i=0$ or $j=0$, that is, either $I\cong R$ or $J\cong R$. \end{proof} \begin{cor} \label{corollary for semidualizing} Let $R=k[[x_1,\cdots,x_d]]^{(n)}$ and let $I$ and $J$ be elements in $\Cl(R)$. If $\Ext^{d-1}_{R}(I,J)$ $=\Ext_{R}^{d}(I,J)=0$, then $I \cong R$ or $J \cong w_{R}$.
\end{cor} \begin{proof} We know, by Remark \ref{Veronese}, that $I$ and $J$ are maximal Cohen-Macaulay $R$-modules both of which are locally free on $U_{R}$. Therefore it follows from Corollary \ref{cor1} that $\depth(I\otimes_{R}J^{\vee})>1$, that is, $I\otimes_{R}J^{\vee}$ is reflexive. Now Proposition \ref{proposition for semidualizing} implies either $I\cong R$ or $J^{\vee}\cong R$. Since $J^{\vee\vee} \cong J$, the desired conclusion follows. \end{proof} Sather-Wagstaff \cite[3.4]{Wag1} exhibited a natural inclusion from the set of isomorphism classes of semidualizing $R$-modules to the divisor class group of a normal domain $R$; every semidualizing module $C$ is a rank one reflexive module so that it represents an element in the class group $\Cl(R)$. Therefore, since $\Ext^{i}_{R}(C,C)=0$ for all $i>0$, it follows from Corollary \ref{corollary for semidualizing} that: \begin{cor} \label{last corollary} If $C$ is a semidualizing module over the Veronese subring $R=k[[x_1,\cdots,x_d]]^{(n)}$, then $C \cong R$ or $C\cong w_{R}$. \end{cor}
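To see the counting argument of Lemma \ref{lemma for semidualizing} and Proposition \ref{proposition for semidualizing} at work, we add the following hand-checkable instance in the smallest Veronese case; the computation is ours and is only meant as an illustration.

```latex
\begin{eg}
Let $R=k[[x_1,x_2]]^{(2)}=k[[x_1^2,\,x_1x_2,\,x_2^2]]$, so that $d=n=2$
and $\Cl(R)\cong \mathbb{Z}/2\mathbb{Z}$ is generated by
$L=x_1S\cap R=(x_1^2,x_1x_2)R$; here $\mu(L)=2$, in agreement with
Lemma \ref{lemma for semidualizing}. For $I=J=L$ we have $i=j=1$, hence
$i+j=n$ and $h=0$, and the equality in the proof of Proposition
\ref{proposition for semidualizing} would read
$2!\cdot 2!=3!\cdot 1!$, that is, $4=6$; equivalently, with
$N=n-h+2(d-1)=4$ it would read ${4\choose 2}={4\choose 3}$, that is,
$6=4$. Both fail, so $L\otimes_{R}L$ is not reflexive. Since $R$ is a
hypersurface (an $A_1$ singularity), $w_R\cong R$, and Corollary
\ref{last corollary} indeed leaves only the trivial semidualizing
module in this case.
\end{eg}
```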
\section{Introduction} Theoretical attempts to unify gauge forces necessarily lead to new particles with masses way above the electroweak scale $v=174\ensuremath{\;\mbox{GeV}}$ defined by the vacuum expectation value (vev) of the Standard-Model (SM) Higgs boson. Such heavy particles generally lead to unduly large radiative corrections to $v^2$, in conflict with the naturalness principle, which forbids fine-tuned cancellations between loop contribution and counterterm for any fundamental parameter in the Lagrangian \cite{Weinberg:1975gm,Susskind:1978ms,tHooft:1979rat, Veltman:1980mj}. The observation that in supersymmetric field theories \cite{Wess:1974tw} corrections to the electroweak scale vanish exactly \cite{Kaul:1981hi,Inami:1982xb,Deshpande:1984ke} made supersymmetric models the most popular framework for studies of beyond-Standard-Model (BSM) phenomenology. Supersymmetry breaking introduces a mass splitting between the SM particles and their superpartners. Increasing lower bounds on the masses of the latter derived from unsuccessful searches at the LEP, Tevatron, and LHC colliders brought the fine-tuning problem back: Specifically, stops heavier than $\sim 1\ensuremath{\;\mbox{TeV}}$ induce loop corrections to the Higgs potential which must be cancelled by tree-level parameters to two or more digits. Owing to this \emph{little fine-tuning problem}, low-energy supersymmetry has lost some of its appeal as a candidate for BSM physics.
Nevertheless, analyses of naturalness in supersymmetric theories, which have been under study since the pre-LEP era, still receive a lot of attention \cite{Sakai:1981gr, Ellis:1986yg,Barbieri:1987fn, Chankowski:1997zh,Chankowski:1998xv, Barbieri:1998uv, Feng:1999zg, Kitano:2005wc, Kitano:2006gv, Ellis:2007by, Hall:2011aa, Strumia:2011dv, Baer:2012mv,Fichet:2012sn,Cabrera:2012vu, Baer:2012cf, Baer:2012up,Boehm:2013gst,Balazs:2013qva,Kim:2013uxa, Casas:2014eca, Baer:2015tva, Drees:2015aeo, Baer:2015rja, Kim:2016rsd, vanBeekveld:2016hug,Cici:2016oqr,Cabrera:2016wwr,Buckley:2016kvr,Baer:2017pba, Abdughani:2017dqs,Fundira:2017vip,Baer:2018rhs,vanBeekveld:2019tqp}. In this paper we study the little fine-tuning problem for the case of a hierarchical superpartner spectrum, with gluinos several times heavier than the stops. The gluino mass is less critical for fine-tuning, because gluinos couple to Higgs fields only at the two-loop level. In such a scenario the usual fine-tuning analyses based on fixed-order perturbation theory break down. Denoting the left-chiral and right-chiral stop mass parameters by $m_{L,R}^2$ and the gluino mass by $M_3$, we identify $n$-loop corrections enhanced by $\left[ M_3^2/m_{L,R}^2\right]^{n-1}$ and resum them. These terms are not captured by renormalization-group (RG) analyses of effective Lagrangians derived by successively integrating out heavy particles at their respective mass scales, which instead target large logarithms. Our findings do not depend on details of the Higgs sector, and we exemplify our results for both the Minimal Supersymmetric Standard Model (MSSM) and its next-to-minimal variant NMSSM. The results also trivially generalise to non-supersymmetric theories with little hierarchies involving a heavy scalar field coupling to Higgs fields and a heavier fermion coupling to this scalar.
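The following schematic scaling is our shorthand summary of which terms the resummation targets (the precise expressions are derived in the next section); we write $\delta m_{h_2}^{2\,(n)}$ for the $n$-loop correction to the up-type Higgs soft mass parameter:

```latex
\begin{align*}
\delta m_{h_2}^{2\,(n)} \;\sim\; \frac{3\,\abs{y_t}^2}{16\pi^2}\; m_{L,R}^2
\left( \frac{\alpha_s}{\pi}\, \frac{M_3^2}{m_{L,R}^2} \right)^{n-1},
\qquad n\geq 1,
\end{align*}
```

up to logarithms and factors of order one. Once $\alpha_s M_3^2$ becomes comparable to $\pi\, m_{L,R}^2$, all loop orders contribute with similar magnitude, and a fixed-order estimate of the fine-tuning is unreliable.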
\section{Corrections to the Higgs mass parameters in the (N)MSSM} We consider only small or moderate values of the ratio $\tan\beta\equiv v_2/v_1$ of the vacuum expectation values (vevs) of the two Higgs doublets $H_{1}=(h_{1}^0, h_{1}^-)^T$, $H_{2}=(h_{2}^+, h_{2}^0)^T$, so that all Yukawa couplings are small except for the coupling $y_t$ of the (s)tops to $H_2$. Our (N)MSSM loop calculations involve the gluino-stop-top vertices as well as the couplings encoded in the superpotential \begin{equation} \label{eq:supo} \mathcal{W} = y_t\, \left( \tilde t_R \tilde t_L\, h_2^0 - \tilde t_R \tilde b_L \, h_2^+ \right) \end{equation} and the supersymmetry-breaking Lagrangian \begin{align} -\mathcal{L}_{\mathrm{soft}} &= A_t \, \left(\tilde t_R \tilde t_L\, h_2^0 - \tilde t_R \tilde b_L\, h_2^+ \right) + \mathrm{H.c.} \nonumber\\ &\quad + m_L^2 \left(\tilde t_L^\star \tilde t_L + \tilde b_L^\star \tilde b_L\right) + m^2_{h_2} \left(h_2^{0,\star}h_2^0 + h_2^{+,\star}h_2^+\right) \nonumber\\ &\quad + m^2_{h_1} \left(h_1^{0,\star}h_1^0 + h_1^{-,\star}h_1^-\right) + m_R^2\, \tilde t_R \tilde t_R^\star \nonumber \\ &\quad+ \frac 12\, M_3\, \overline{\psi_{\ensuremath{\tilde{g}}}} \psi_{\ensuremath{\tilde{g}}} \label{eq:lsoft} \end{align} with the stop, sbottom, and gluino fields $\tilde t_{L,R}$, $\tilde b_{L,R}$, $\psi_{\ensuremath{\tilde{g}}}$, respectively. In the notation of Ref.~\cite{Ellwanger:2009dp} the ($\mathbb{Z}_3$ symmetric) NMSSM Higgs potential reads \begin{align} V_{higgs} = &\abs{\kappa s^2-\lambda h_1^0 h_2^0}^2 + (m_{h_1}^2 + \lambda^2 \abs{s}^2) \abs{h_1^0}^2 \nonumber\\ & + (m_{h_2}^2 + \lambda^2 \abs{s}^2) \abs{h_2^0}^2 + \frac{g^2}{4} \left( \abs{h_2^0}^2-\abs{h_1^0}^2\right)^2 \nonumber\\ & + m_s^2 \abs{s}^2 + \left(\frac{1}{3} A_\kappa s^3 - A_\lambda h_1^0h_2^0s + \mathrm{H.c.}\right). \label{eq:vh} \end{align} Note that $g^2\equiv (g_1^2+g_2^2)/2$ and terms with charged fields are dropped. The singlet field $s$ acquires the vev $v_s$.
The electroweak scale is represented by the $Z$ boson mass $M_Z$. \begin{figure}[t] \includegraphics[width=.35\linewidth]{yt_loops.pdf}\hfill \includegraphics[width=.55\linewidth]{at_loops.pdf} \caption{Resummed contributions to $m_{22}^{2}$.\label{fig:sum}} \end{figure} Minimizing $V_{higgs}$ gives \begin{align} \frac{1}{2}M_Z^2 &= \frac{ m_{11}^2 \cos^2\beta - m_{22}^2 \sin^2\beta}{\sin^2\beta-\cos^2\beta} \label{eq:min} \end{align} with the tree-level contributions \begin{align} m_{11}^{2\,(0)} &= m_{h_1}^2 + \lambda^2 |v_s|^2,\quad m_{22}^{2\, (0)} = m_{h_2}^2 + \lambda^2 |v_s|^2 . \label{eq:defm11} \end{align} In the MSSM \eqsand{eq:vh}{eq:defm11} hold with the replacements $\lambda s, \lambda v_s \to \mu_h $, $\lambda, \kappa,A_\kappa \to 0 $, and $A_\lambda s, A_\lambda v_s\to B\mu_h$ with the higg\-sino mass term $\mu_h$ and the soft supersymmetry breaking term $B\mu_h$. In the following we identify $\mu_h\equiv \lambda v_s$ and $B\mu_h\equiv A_\lambda v_s$, which allows us to use the same notation for MSSM and NMSSM. Next we integrate out the heavy sparticles and thereby match the (N)MSSM onto an effective two-Higgs-doublet model. We parametrize the loop contributions as \begin{align} m_{22}^2 &= m_{22}^{2\,(0)} + m_{22}^{2\,(1)} + m_{22}^{2\,(2)} + m_{22}^{2\,(\geq 3)} \label{eq:m22exp} \end{align} with the well-known one-loop term \begin{align} m_{22}^{2,(1)} = & -\frac{3\, \abs{y_t}^2}{16\, \pi^2} \left[ m_L^2 \left(1-\mlog{m_L^2}{\mu^2}\right) + L\to R \right] \nonumber\\ &\hspace{-2em} -\frac{3\, \abs{A_t}^2}{16\, \pi^2} \frac{m_R^2 - m_R^2\mlog{m_R^2}{\mu^2} -m_L^2 + m_L^2 \mlog{m_L^2}{\mu^2}}{m_L^2-m_R^2} \label{eq:m221} \end{align} in the modified dimensional reduction ($\overline{\rm DR}$) scheme. $\mu={\cal O} (m_{L,R})$ is the renormalization scale. The corrections to other mass parameters like $m_{11}^2$ are small as long as $|A_t|,|\mu_h|$ are not too large.
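As a quick cross-check of \eq{eq:m221} (our own evaluation, not part of the original text), the second term is finite in the degenerate limit $m_R\to m_L$, where it reduces to minus the derivative of $x-x\mlog{x}{\mu^2}$ at $x=m_L^2$:

```latex
\begin{align*}
\lim_{m_R\to m_L} m_{22}^{2,(1)}
&= -\frac{3\, \abs{y_t}^2}{8\, \pi^2}\, m_L^2
   \left(1-\mlog{m_L^2}{\mu^2}\right)
 - \frac{3\, \abs{A_t}^2}{16\, \pi^2}\, \mlog{m_L^2}{\mu^2} \;.
\end{align*}
```

For $\mu=m_L=m_R$ the logarithms vanish, and with $y_t\approx 1$ and stop masses of $1\ensuremath{\;\mbox{TeV}}$ one finds $|m_{22}^{2,(1)}|\approx (0.2\ensuremath{\;\mbox{TeV}})^2$, an order of magnitude above $M_Z^2/2$; this is the cancellation against $m_{22}^{2\,(0)}$ in \eq{eq:min} that drives the one-loop fine-tuning.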
At one-loop order the fine-tuning issue only concerns the first term in $m_{22}^{2,(1)}$, which requires sizable cancellations with $m_{22}^{(0)}$ to reproduce the correct $M_Z$ in \eq{eq:min}. At $n$-loop level with $n\geq 2$ we only consider the contributions enhanced by $ \left( M_3^2/m_{L,R}^2 \right)^{n-1} $ with respect to $ m_{22}^{2\,(1)}$ stemming solely from Feynman diagrams with $n-1$ stop self-energies shown in Fig.~\ref{fig:sum}. Other multi-loop diagrams involve fewer stop propagators and do not contribute to the highest power of $M_3^2/m_{L,R}^2$. The self-energies involve a gluino-top loop and a stop mass counterterm, see Fig.~\ref{fig:se}. \begin{figure}[t] \includegraphics[width=\linewidth]{selfenergy.pdf} \caption{Stop self-energies with gluino loop and counterterm.\label{fig:se}} \end{figure} We decompose $ m_{22}^{2\,(n)}$ as \begin{align} m_{22}^{2\,(n)} &= m_{22\, I}^{2\,(n)} \, +\, m_{22\, II}^{2\,(n)} \end{align} for the two sets of diagrams in Fig.~\ref{fig:sum}. The left diagrams constituting $ m_{22\, I}^{2\,(n)}$ have $n$ stop propagators while the right ones summing to $ m_{22\, II}^{2\,(n)}$ have $n+1$ stop propagators. Inspecting the UV behaviour of the stop loop shows that only $ m_{22\, I}^{2\,(2)}$ contains a logarithm $\log (M_3/m_{L,R})$. Explicit calculation of the two-loop diagrams yields \begin{align} m_{22}^{2\,(2)} =& \frac{\alpha_s(\mu) \,\abs{y_t}^2 M_3^2}{4\,\pi^3}\biggl[ \nonumber\\ & \quad - \left(1+ \mlog{\mu^2}{M_3^2} \right) \left(1+2\mlog{\mu\,M_3}{m_L\, m_R} \right) \nonumber\\ & \qquad \qquad + \frac{\pi^2}{3} +{\cal O} \left( \frac{m_{L,R}^2}{M_3^2}\right) \biggr] \label{eq:m222} \end{align} in the $\overline{\rm DR}$ scheme. If one considers very large mass splitting between $m_{L,R} $ and $M_3$, one may choose to integrate out these sparticles at different scales and finds $\mu\sim M_3$ more appropriate than $\mu={\cal O} (m_{L,R})$ in $\alpha_s(\mu)$ and the first logarithm in \eq{eq:m222}.
$ m_{22\, II}^{2\,(2)}$ has no $\log (M_3/m_{L,R})$ and amounts to only $\sim 10\%$ of $ m_{22\, I}^{2\,(2)}$ for the numerical examples considered below. For $M_3\gg m_{L,R}$ we find for the resummed higher-order contributions: \begin{align} m_{22\, I}^{2\,(\geq 3)} &= \frac{3\,\abs{y_t}^2 }{16\pi^2} m_L^2 \sum_{k=2}^\infty \frac{\xi_L^k}{k(k-1)} + L\to R \nonumber\\ & \!\!\!\!\!\!\! = \frac{3\,\abs{y_t}^2 }{16\pi^2} m_L^2 \left[\xi_L +(1-\xi_L) \log(1-\xi_L) \right] + L\to R \label{eq:m22n1} \\ m_{22\, II}^{2\,(2)} &+ m_{22\, II}^{2\,(\geq 3)} \, = -\frac{3\, \abs{A_t}^2}{16\pi^2} \sum_{k=1}^\infty \frac{\xi_{L,R}^k}{k} \nonumber\\ &\qquad\qquad\;\, = \frac{3\, \abs{A_t}^2}{16\pi^2} \log(1-\xi_{L,R}) \label{eq:m22n2} \end{align} with \begin{align} \xi_{L,R} &\equiv - \frac{4 \alpha_s(\mu)}{3\pi} \frac{M_3^2}{m_{L,R}^2} \left[ 1+ \mlog{\mu^2}{M_3^2}\right] \, + \, \Delta \xi_{L,R}. \label{eq:xi} \end{align} $ \Delta \xi_{L,R}$ controls the renormalization scheme of the stop masses, $ \Delta \xi_{L,R}=0$ for the $\overline{\rm DR}$ scheme. For simplicity we quote the numerically less important term in \eq{eq:m22n2} for the special case $m_L=m_R$. For $M_3\sim 5\, m_{L,R}$ one finds $\xi_{L,R}\sim -1$, so that $ m_{22\, I,II}^{2\,(\geq 3)}$ is of similar size as $ m_{22\, I,II}^{2\,(1)}$. The expressions above define $m_{22, I,II}^{2\,(n)}$ at the scale $\mu\sim m_{L,R}$. We minimize the Higgs potential at the lower scale $m_t$ (denoting the top mass) where \begin{align} m_{22}^2 (\ensuremath{ m_\text{t}}) &= \Bigl(1 - \frac{6\, \abs{y_t}^2}{16\,\pi^2} \mlog{\mu}{\ensuremath{ m_\text{t}}} \Bigr)\, m_{22}^2\;(\mu) \;, \end{align} while the running of $m_{11}^2$ and $m_{12}^2\equiv B\mu_h$ is negligible. Next we switch to the on-shell (OS) scheme for the stop masses. For clarity we consider the case of small $|A_t|$ and $|\mu_h|$, so that stop mixing is negligible and $m_{L,R}^{\rm OS}$ coincide with the two mass eigenstates.
In the OS scheme the counterterm $ \Delta \xi_{L,R}$ in \eq{eq:xi} cancels the stop self-energies and renders $\xi_{L,R}=0$. Thus $ m_{22}^{2\,(\geq 3)}=m_{22,II}^{2\,(2)}=0$, while $m_{22,I}^{2\,(2)}$ is non-zero due to the different UV behavior of the stop momentum loop: \begin{align} m_{22}^{2,(2)\,\rm OS} &= \frac{\alpha_s(\mu) \,\abs{y_t}^2 M_3^2}{4\,\pi^3} \biggl[ -1 + \mlog{\mu^2}{m_L\,m_R} \nonumber\\ & \qquad + \log^2 \frac{\mu^2}{M_3^2} + \frac{\pi^2}{3} + {\cal O} \left( \frac{m_{L,R}^2}{M_3^2}\right) \biggr] \label{eq:m222os} \end{align} Thus with stop pole masses no $M_3^2/m_{L,R}^2$ enhanced terms appear beyond two loops and the resummation of the higher-order terms is implicitly contained in the shift $m_{L,R} \to m_{L,R}^{\rm OS}$, which absorbs the higher-order terms into $m_{22}^{(1)}$ and $m_{22}^{(2)}$. The $\mu$ dependence in \eq{eq:m222os} results from the stop loop integration, i.e.\ the superscript ``OS'' in \eq{eq:m222os} only refers to the definition of the stop mass, while $ m_{22}^{2}$ is still $\overline{\rm DR}$ renormalized. For the fine-tuning issue there are several important lessons: Most importantly, $m_{L,R}^{2\, \rm OS}$ is \emph{larger} than $m_{L,R}^2$ by terms $\propto \alpha_s M_3^2$, meaning that the LHC lower bound on $m_{L,R}^{\rm OS}$ permits a $\overline{\rm DR}$ mass $m_{L,R}$ closer to the electroweak scale complying with naturalness. That is, $m_{L,R}^{\rm OS}$ could well be dominated by the gluino-top self-energy. In the on-shell scheme we observe moderate fine-tuning in $m_{22}^2$ if we vary $m_{L,R}$, partly because the large radiative piece of $m_{L,R}^{\rm OS}$ depends only logarithmically on $m_{L,R}$, and partly because the effects from $m_{22}^{2\, (1)}$ and $m_{22}^{2\, (2)}$ have opposite signs and tend to cancel out. 
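The size of the scheme shift can be made explicit; the following relation is our own leading-order sketch in the large-$M_3$ limit, obtained by rearranging \eq{eq:xi}, and neglects all contributions not enhanced by $M_3^2/m_{L,R}^2$ (as well as stop mixing):

```latex
\begin{align*}
m_{L,R}^{2\,\rm OS} \;\simeq\; m_{L,R}^{2}\left(1-\xi_{L,R}\right)
= m_{L,R}^{2} + \frac{4\,\alpha_s(\mu)}{3\pi}\, M_3^2
  \left[ 1+ \mlog{\mu^2}{M_3^2}\right] ,
\end{align*}
```

with $\xi_{L,R}$ evaluated in the $\overline{\rm DR}$ scheme and $\mu\sim M_3$. For $\xi_{L,R}\sim -1$, as in the scenario $M_3\sim 5\, m_{L,R}$ discussed above, this gives $m_{L,R}^{\rm OS}\sim\sqrt{2}\, m_{L,R}$: the pole masses probed at the LHC can be substantially heavier than the $\overline{\rm DR}$ masses entering $m_{22}^{2\,(1)}$.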
This behavior can be better understood if we solely work in the $\overline{\rm DR}$ scheme: For $m_{L,R}$ close to the electroweak scale none of the infinite number of terms $m_{22}^{(n)}$ is individually so large that it calls for a fine-tuned $m_{22}^{(0)}$ in \eq{eq:m22exp}. We may instead be concerned about the fine-tuning related to a variation of $M_3$: In a perturbation series truncated at order $n$ we see a powerlike growth with terms up to $\xi_{L,R}^n$ in the sum in \eq{eq:m22n1}, with the terms of different loop orders having similar magnitude and alternating signs. However, the resummation tempers this behaviour to $m_{L,R}^2 \xi_{L,R} \sim M_3^2$. We have numerically checked that we obtain the same results for $m_{22}^{2}$ in both approaches, i.e.\ by either employing the explicit resummation in the $\overline{\rm DR}$ scheme or converting the stop masses to the OS scheme. \section{Numerical study of the fine-tuning} We use the Ellis-Barbieri-Giudice \begin{figure}[t] \includegraphics[width=.95\linewidth]{FT_2d.pdf} \caption{Fine-tuning measure $\Delta(m_L)$ for different values of the lighter on-shell stop mass (essentially equal to $m_L^{\rm OS}$ in our analysis) and $M_3$. The number gives the mean of 100 sample points that correctly reproduce $M_Z=91\ensuremath{\;\mbox{GeV}}$ and $m_h=125\ensuremath{\;\mbox{GeV}}$ \cite{Chatrchyan:2012xdj,Aad:2012tfa}. \label{fig:nmssm} } \end{figure} fine-tuning measure \cite{Ellis:1986yg,Barbieri:1987fn} \begin{equation} \label{eq:FT} \Delta(p) = \abs{ \frac{p}{\ensuremath{ M_\text{Z}}(p)}\, \frac{\partial\, \ensuremath{ M_\text{Z}}(p)}{\partial p}} \;, \end{equation} where $p$ stands for any Lagrangian parameter. Using $\overline{\rm DR}$ stop masses as input we calculate the OS masses which enter the loop-corrected Higgs potential through \eqsand{eq:m221}{eq:m222os}.
For the loop-corrected Higgs potential we determine all two-loop contributions to $m_{11}^2$, $m_{12}^2$, and $m_{22}^2$ involving $\alpha_s$, $y_t$, and $A_t$ exactly. In particular, we go beyond the large-$M_3$ limit of the previous section and calculate 205 two-loop diagrams in total. For this we have used the Mathematica packages \texttt{FeynArts} \cite{Hahn:2000kx} (with the Feynman rules of Ref.~\cite{Rosiek:1995kg}) and \texttt{Medusa} \cite{cwiegand1,cwiegand2}, which performs asymptotic expansions in small external momenta and large masses. The analytic methods involved are based on Refs.~\cite{Davydychev:1992mt,Nierste1993, Fleischer:1994ef, Davydychev:1995nq, Nierste:1995fr,Anastasiou:2006hc}. We start with the discussion of the NMSSM: With two of the three minimization conditions we trade the parameters $m_s^2$ and $A_\lambda$ for $\mu_h\equiv\lambda v_s$ and $\tan\beta$. (The third minimization condition is \eq{eq:min} yielding $M_Z$.) For the illustrative example in Fig.~\ref{fig:nmssm} we fix the parameters $\tan\beta=3$, $\lambda=0.64$, $\kappa=0.25$, $\mu_h=200\, \mathrm{GeV}$, and $m_{11}^{(0)} =600\, \mathrm{GeV}$. Then we choose $m_{22}^{2,(0)}$, $A_t$, $m_L^{\overline{\rm DR}}$, $m_R^{\overline{\rm DR}}$, $A_\kappa$ randomly subject to the constraints that the correct values of $M_Z$ and the lightest Higgs mass $m_h=125\ensuremath{\;\mbox{GeV}}$ as well as the smaller stop mass $m_{\tilde t,1}^{\rm OS}$ displayed in Fig.~\ref{fig:nmssm} are reproduced for a given value of $M_3$. We calculate $\Delta(m_L)$ for over 100 different parameter points corresponding to a given point $(m_{\tilde t,1}^{\rm OS},M_3)$; the number in the colored square is the average $\Delta(m_L)$ found for these points. For most of our parameter points $m_{\tilde t,1}^{\rm OS} \approx m_L$, but this feature is irrelevant because the formulae are symmetric under $m_L \leftrightarrow m_R$. 
By quoting the average rather than the minimum of $\Delta(m_L)$ we make sure that a small fine-tuning measure is not due to accidental cancellations. To illustrate the result of Fig.~\ref{fig:nmssm} with an example we consider the parameter point with \begin{align} m_{11}^{(0)} &= 600\, \mathrm{GeV} & m_{22}^{(0)} &= 94\, \mathrm{GeV} & M_3 & = 3\ensuremath{\;\mbox{TeV}} \nonumber\\ A_\kappa &= -6.5\, \mathrm{GeV} & A_t &= 453\, \mathrm{GeV} && \nonumber\\ m_L^{\overline{\mathrm{DR}}} &= 611\, \mathrm{GeV} & m_R^{\overline{\mathrm{DR}}} &= 902\, \mathrm{GeV} \label{eq:bp} \end{align} which yields $m_{\tilde t,1}^{\rm OS} = 1\ensuremath{\;\mbox{TeV}}$, lying substantially above $m_L$. Note that $M_3/m_L\approx 5$, while the hierarchy in the physical masses is moderate, $M_3/m_{\tilde t_1}^{\rm OS} = 3$. The fine-tuning measures for this benchmark point are $\Delta(m_L)=6.0$, $\Delta(m_R)=10.8$, $\Delta(M_3)=6.3$, $\Delta(A_t)=0.2$, and all other $\Delta(p)$ are negligibly small. Next we briefly discuss the MSSM. A recent analysis has found values of $\Delta \equiv \max_p \Delta (p) \geq 63$ for special versions of the MSSM in scans over the parameter spaces \cite{vanBeekveld:2019tqp}. Compared to the NMSSM one needs larger stop masses to accommodate $m_h=125\ensuremath{\;\mbox{GeV}}$, which then leads to larger values of $\Delta$. Yet also for the MSSM the hierarchy $M_3 \gg m_{L,R}$ with proper resummation of higher-order terms improves $\Delta$. 
We exemplify this with the parameter point \begin{align} m_{11}^{(0)} &= 1583\,\ensuremath{\;\mbox{GeV}} & m_{22}^{(0)} &= 124\,\ensuremath{\;\mbox{GeV}} \nonumber\\ \mu_h &= 400\,\ensuremath{\;\mbox{GeV}} & \tan\beta &= 5\nonumber\\ M_3 &= 4500\,\ensuremath{\;\mbox{GeV}} & A_t &= 3370\,\ensuremath{\;\mbox{GeV}} \nonumber\\ m_L &= 2787\,\ensuremath{\;\mbox{GeV}} & m_R &=1435\,\ensuremath{\;\mbox{GeV}}\nonumber \end{align} The on-shell stop masses for this point are $m_{\tilde t_1}^{\rm OS}= 2168\ensuremath{\;\mbox{GeV}}$ and $m_{\tilde t_2}^{\rm OS}= 3012\ensuremath{\;\mbox{GeV}}$. Despite these large masses the fine-tuning measures $\Delta(m_L) = 13$, $\Delta(m_R)=25$, $\Delta(M_3)=8$ have moderate values, while the fine-tuning measure $\Delta(A_t)=41$ reflects the large $A_t$ needed to accommodate $m_h=125\,\ensuremath{\;\mbox{GeV}}$. Finally we remark that low-energy observables like the \bb\ mixing\ amplitude or the branching ratios of rare meson decays (such as $b\to s \gamma$, $K\to \pi \nu \overline{\nu}$) also involve higher-order corrections enhanced by a relative factor of $M_3^2/m_{L,R}^2$, if the stop masses are renormalized in a mass-independent scheme like $\overline{\rm DR}$. This remark applies to supersymmetric theories with minimal flavor violation (MFV) in which the leading contribution is dominated by a chargino-stop loop and the gluino is relevant only at next-to-leading order and beyond. The resummation of the gluino-stop self-energies on the internal stop lines is trivially achieved by using the on-shell stop masses in the leading-order prediction, because the flavor-changing loop is UV-finite; i.e.\ we face the same situation as with $m_{22}^{2\,(\geq 3)}$. Thus low-energy observables effectively probe the same stop masses as the collider searches at high $p_T$. \section{Conclusions} We have investigated the fine-tuning of the electroweak scale in models of new physics with a heavy and hierarchical mass spectrum. 
Studying supersymmetric models with $M_Z < m_{L,R} < M_3$ we have demonstrated that the usual fine-tuning analysis employing fixed-order perturbation theory breaks down for $M_3 \sim 5\, m_{L,R}$. Resumming terms enhanced by $M_3^2/m_{L,R}^2$ tempers the fine-tuning. This behavior is transparent if the stop masses are renormalized on-shell: The resummation is then encoded in the shift from the $\overline{\rm DR}$ masses to the larger on-shell masses, and new allowed parameter ranges with small values of $m_{L,R}^2$ emerge, because large radiative corrections proportional to $\alpha_s M_3^2$ push the physical on-shell masses above the experimental lower bounds. In these scenarios the heavy stops are \emph{natural}, as their masses are larger than the (parametrically large) self-energies. As a byproduct we have found that low-energy observables probe the on-shell stop masses. \begin{acknowledgments} \paragraph{Acknowledgements.} We thank Stefan de Boer for checking the expressions \eqsand{eq:m22n1}{eq:m22n2} and for several helpful discussions, and acknowledge the support of \emph{Deutsche Forschungsgemeinschaft}\ (DFG, German Research Foundation) through RTG 1694 and grant 396021762 - TRR 257 ``Particle Physics Phenomenology after the Higgs Discovery''. \end{acknowledgments}
\section{Introduction} \label{sec1} One can argue that thin matter shells in general relativity provide the simplest class of spacetimes after vacuum spacetimes. Indeed, thin shells, besides giving instances of static and dynamic spacetimes, can be scrutinized in relation to their entropic and thermodynamic matter and gravitational properties, and from those properties one can even pick up the corresponding black hole properties. For static and rotating circularly symmetric thin shells, i.e., thin rings, in (2+1)-dimensional Ba\~{n}ados-Teitelboim-Zanelli (BTZ) spacetimes their entropic and thermodynamic properties have been worked out in general and in the limit where the ring is taken to its own gravitational, or horizon, radius, i.e., in the black hole limit \cite{quintalemosbtzshell,energycond,btzshell,extremalbtz}. For static electrically charged spherically symmetric thin shells in (3+1)-dimensional Reissner-Nordstr\"om spacetimes these properties have also been worked out in detail in general and in the black hole limit \cite{charged,extremalshell,lqzn}, see also \cite{martin} for neutral thin shells in Schwarzschild spacetimes. Related studies were those of the entropy of quasiblack holes in the case in which matter is spread over a 3-dimensional spatial region, rather than on a 2-dimensional thin shell \cite{quasi_bh1,quasi_bh2}, and of quasistatic collapse of matter \cite{pretisrvol}. These works \cite{quintalemosbtzshell,energycond,btzshell,extremalbtz,charged,extremalshell,lqzn,martin,quasi_bh1,quasi_bh2,pretisrvol} stem from the fact that the concept of entropy is originally based on quantum properties of matter, and so it is very important to study whether and how black hole thermodynamics could emerge from thermodynamics of collapsing matter, when matter is compressed within its own gravitational radius. 
Conversely, it is through black hole entropy that we can grasp the microscopic aspects of a spacetime and hence of quantum gravity, and the fact that thermodynamics of a thin shell reflects thermodynamic properties of a black hole formed after quasistatic collapse of the shell indicates some connection between matter and gravitational degrees of freedom. In this thin shell approach to black hole entropy a clear cut distinction exists between nonextremal black holes and extremal black holes. For nonextremal black holes one finds that the entropy is \begin{equation} S=\frac{A_+}{4G}\,, \label{ent1bh} \end{equation} where $A_{+}$ is the area of the event horizon and $G$ is the gravitational constant. Throughout the paper we use units such that the velocity of light, the Planck constant, and the Boltzmann constant are set to one. This result has been found for static BTZ shells \cite{quintalemosbtzshell} and for rotating BTZ shells \cite{energycond,btzshell}, as well as for Reissner-Nordstr\"om shells \cite{charged}, all in the black hole limit. The result recovers the Bekenstein-Hawking entropy formula in (2+1) dimensions \cite{btz,carlip}, and in the original works in (3+1) dimensions \cite{bek1,bch,haw}. In (2+1) dimensions $A_+$ is a perimeter, $A_+=2\pi r_+$, and in (3+1) dimensions $A_+=4\pi r_+^2$ is the usual area, with $r_+$ being the gravitational or horizon radius. For extremal black holes, the ones which we will study in this paper, the situation is more subtle in the shell approach. Extremal black holes are those whose angular momentum or electric charge is equal to the mass in some appropriate units. It has been found that the entropy of the extremal black hole depends on the way the shell approaches its own gravitational radius. This results in three cases. 
On one hand, clearly, there is a case for an originally nonextremal shell, which we call \textit{Case 1}, in which after taking the black hole limit the shell turns into an extremal shell, where one finds $S=\frac{A_+}{4G}$ as in Eq.~(\ref{ent1bh}), see also \cite{btzshell,extremalbtz} for BTZ and \cite{charged,extremalshell,lqzn} for Reissner-Nordstr\"om. On the other hand, it was further found in the Reissner-Nordstr\"om situation that there is a new case \cite{lqzn}, which we call \textit{Case 2}, in which the shell is turned extremal concomitantly with the spacetime being turned into a black hole. In this case one finds also $S=\frac{A_+}{4G}$ as in Eq.~(\ref{ent1bh}). Finally, for an ab initio extremal shell that turns into an extremal black hole, one finds that the entropy is a generic function of $A_+$, i.e., \begin{equation} S=S(A_+)\,. \label{ente11bh} \end{equation} This result, which we call \textit{Case 3}, is found both in extremal rotating BTZ \cite{extremalbtz} and in extremal electrically charged Reissner-Nordstr\"om \cite{extremalshell}. Given the result (\ref{ente11bh}) together with (\ref{ent1bh}) one is led to speculate that the entropy of an extremal black hole should obey \begin{equation} 0\leq S(A_+) \leq\frac{A_+}{4G}\,. 
\label{lowupent1bh} \end{equation} The lower limit \begin{equation} S=0\,, \label{ent1ebh} \end{equation} is indeed found through an Euclidean path integral approach to extremal black hole entropy, both in BTZ black holes \cite{ebh2} and in Reissner-Nordstr\"om black holes \cite{ebh1}, whereas, in contradiction, the Bekenstein-Hawking upper limit of Eq.~(\ref{lowupent1bh}), $S=\frac{A_+}{4G}$, see also Eq.~(\ref{ent1bh}), is found through string theory techniques in extremal cases, namely, in (2+1) dimensional extremal rotating BTZ black holes \cite{birmsacsen}, and in (3+1) dimensional extremal Reissner-Nordstr\"om black holes \cite{string11}, following the breakthrough worked out in (4+1) dimensions \cite{string1,string2}, see also \cite{ebh3,ebh4,ebh5,ebh6,ebh7,ghoshmitra,string3,string4,string5,cano1} for further studies on thermodynamics and entropy of extremal black holes. In a sense, Eq.~(\ref{lowupent1bh}) fills the gap between Euclidean path integral approaches and string theory techniques for the entropy of extremal black holes. The aim of this paper is to complete the study of extremal rotating BTZ thin shell thermodynamics \cite{quintalemosbtzshell,energycond,btzshell,extremalbtz}, in order to have a full understanding of the entropy of an extremal rotating BTZ black hole. We also follow the studies for electrically charged Reissner-Nordstr\"om shells \cite{charged,extremalshell} and in particular we adopt the unified approach devised for an electrically charged Reissner-Nordstr\"om thin shell \cite{lqzn}, and study the three different limits of a rotating thin shell in a (2+1)-dimensional rotating BTZ spacetime when it approaches both extremality and its own gravitational radius, i.e., in the extremal BTZ black hole limit. These three different limits yield the three cases, \textit{Cases 1-3}, already mentioned. 
Our analysis will point out the similarities between the rotating and the electrically charged case and will show the contributions from the various thermodynamic quantities appearing in the first law to the entropy in all three cases. The approach developed in the present work can be of interest for the generic investigation of black hole entropy in the thin shell formalism, in particular, for the Kerr black hole, at least in the slow rotation approximation, or for other more complicated (3+1) and ($n$+1)-dimensional black holes, with $n>3$. The paper is organized as follows. In Sec.~\ref{sec2}, we review the mechanics and thermodynamics of a rotating thin shell in (2+1) dimensions with a negative cosmological constant, where the exterior spacetime is BTZ. In Sec.~\ref{sec3}, we introduce the three different limits, thus establishing three different cases, when the rotating thin shell is taken into its own gravitational radius, and forms an extremal BTZ black hole. We define the appropriate variables to study these limits, and work out the geometry, the mass, and the angular momentum of the shell in the three different cases. In Sec.~\ref{sec4}, we discuss the three different cases for the pressure, the circular velocity, and the local temperature of the shell. In Sec.~\ref{sec5}, we calculate the entropy of a rotating extremal BTZ black hole in the three different cases. In Sec.~\ref{sec6}, we show, in the three different cases, which terms in the first law give the dominant contributions to the entropy. In Sec.~\ref{sec7}, we conclude. \section{Thin shell thermodynamics in a (2+1)-dimensional BTZ spacetime\label{sec2}} We consider general relativity in (2+1) dimensions with a cosmological constant $\Lambda$, where we assume that $\Lambda<0$, so that the spacetime is asymptotically AdS, with curvature scale $\ell=\sqrt{-\frac{1}{\Lambda}}$. 
In an otherwise vacuum spacetime, we introduce a timelike rotating thin shell, i.e., a timelike thin ring in the (2+1)-dimensional spacetime, with radius $R$, that divides the spacetime into the inner and outer regions. We keep $G$ explicitly in the formulas; the other physical constants are set to one. The spacetime inside the shell, $0<r<R$, where $r$ is a radial coordinate, is given by the zero mass $m=0$ BTZ-AdS solution in (2+1) dimensions. The spacetime outside the shell, $r>R$, is generically described by the rotating BTZ solution with Arnowitt-Deser-Misner (ADM) mass $m$ and angular momentum $\cal J$. Two important quantities of the outer spacetime, which are related to $m$ and ${\cal J}$, are the gravitational radius $r_+$ and the Cauchy radius $r_-$. The relations between the quantities are \cite{btz} \begin{eqnarray} \label{ml1} 8G\ell^2 m=r_+^2+ r_-^2\,, \end{eqnarray} \begin{eqnarray} \label{jl1} 4G\ell {\cal J}=r_+r_-\,. \end{eqnarray} From Eqs.~(\ref{ml1}) and~(\ref{jl1}), one clearly sees that one can trade $m$ and $\cal J$ for $r_+$ and $r_-$ and vice-versa. For a spacetime that is not overrotating, as will be the case considered here, one has that $m\geq\frac{\cal J}{\ell}$ which, in terms of $r_+$ and $r_-$, translates into $r_+\geq r_-$. This inequality is saturated in the extremal case, $r_+=r_-$, i.e., $m=\frac{{\cal J}}{\ell}$. The gravitational area $A_+$ is defined as \begin{eqnarray} A_+=2\pi r_+\,, \label{areah} \end{eqnarray} and is actually a perimeter, since there are just two space dimensions. The shell itself has radius $R$ and is quasistatic in the sense that $\frac{dR}{d\tau}=\frac{d^2R}{d\tau^2}=0$, where $\tau$ is the proper time on the shell. The area of the shell, again a perimeter, is \begin{eqnarray} A=2\pi R\,. \label{areashell} \end{eqnarray} We assume that the shell is always located outside or at the gravitational radius, \begin{eqnarray} R\geq r_+\,. 
\label{Rgeqr+} \end{eqnarray} Note that the gravitational radius in this case is a feature of the spacetime; it is not a horizon radius. It would be a horizon radius only if $R\leq r_+$. So here, since $R\geq r_+$, only in the limit $R=r_+$ do we get a horizon; in this limiting situation the shell is on the verge of becoming a black hole. Besides having a radius $R$, the shell has mass $M$ and angular momentum $J$. To find the properties of the shell and the connection to the inner and outer spacetime one has to work out the junction conditions. The junction conditions determine the energy density of the shell $\sigma$ and the angular momentum density of the shell $j$, or if one prefers, the rest mass of the shell $M\equiv2\pi R\sigma$ and the angular momentum of the shell $J\equiv2\pi R j$. One finds that $M$ and $J$ are some specific functions of the ADM spacetime mass $m$, angular momentum $\cal J$, and the shell's radius $R$, see \cite{btzshell} for details (see also \cite{energycond}). These relations can be inverted to give the ADM spacetime mass $m$ as a function of $M$, $J$ and $R$, namely, \begin{eqnarray} m(M,J,R) = \frac{R M}{\ell} -2GM^2 +\frac{2G}{R^2}J^2\,, \label{mMJ} \end{eqnarray} and the ADM spacetime angular momentum $\cal J$ also as a function of $M$, $J$ and $R$, namely, \begin{eqnarray} {\cal J}(M,J,R)=J\,. \label{JJ} \end{eqnarray} In Eqs.~(\ref{mMJ}) and~(\ref{JJ}) we have written $m$ as $m(M,J,R)$ and $\cal J$ as ${\cal J}(M,J,R)$ in order to make manifest the explicit dependence of the ADM spacetime mass $m$ and the ADM spacetime angular momentum $\cal J$ on the shell quantities, i.e., its rest mass $M$, its angular momentum $J$, and its radius $R$. This explicit dependence is also useful when we deal with the thermodynamics of the shell. The gravitational radius $r_+$ and the Cauchy radius $r_-$ can be found inverting Eqs.~(\ref{ml1}) and~(\ref{jl1}) \cite{btz}. 
The gravitational radius is \begin{eqnarray} r_+(M,J,R)= 2\ell \sqrt{ Gm +\sqrt{(Gm)^2-\frac{(G{\cal J})^2}{\ell^2}}} \,, \label{ml} \end{eqnarray} and the Cauchy radius is \begin{eqnarray} r_-(M,J,R)= 2\ell \sqrt{ Gm -\sqrt{(Gm)^2-\frac{(G{\cal J})^2}{\ell^2}} } \,, \label{jl} \end{eqnarray} with $m$ and $\cal J$ seen as functions of $M$, $J$, and $R$ through Eqs.~(\ref{mMJ}) and~(\ref{JJ}). As a thermodynamic system, the shell has a locally measured temperature $T$ and an entropy $S$. We consider that the shell is adiabatic, i.e., it does not radiate to the exterior. The entropy $S$ of a system can be expressed as a function of the independent state variables, which for the rotating shell can be chosen as the shell's locally measured proper mass $M$, the shell's angular momentum $J$, and the shell's area $A$. Thus, $S=S(M,J,A)$ and in these variables the first law of thermodynamics reads \begin{eqnarray} \label{1st} TdS=dM +p\, d A-\Omega\, dJ\,, \end{eqnarray} where $p$ is the tangential pressure at the shell, $\Omega$ is the thermodynamic angular velocity of the shell, and $T$ is the temperature of the shell. These quantities $p$, $\Omega$, and $T$ are given by equations of state, i.e., they are functions of $(M,J,A)$: $p=p(M,J,A)$, $\Omega=\Omega(M,J,A)$, and $T=T(M,J,A)$. In (2+1) dimensions, the shell's area is a perimeter, namely, $A=2\pi R$; we can thus express $S$, $p$, $\Omega$, and $T$, as functions of the shell radius $R$, instead of its area $A$. This simplifies the presentation. Thus, $S=S(M,J,R)$, $T=T(M,J,R)$, $p=p(M,J,R)$, and $\Omega=\Omega(M,J,R)$. In order to have a well-defined entropy $S$ there are integrability conditions for $T=T(M,J,R)$, $p=p(M,J,R)$, and $\Omega=\Omega(M,J,R)$, see \cite{btzshell}. The first law for the shell, Eq.~(\ref{1st}), is clearly displayed and has a clear physical meaning in the variables $M$, $J$, and $R$. As it turns out and as we will see, it is much simpler mathematically to work instead in the variables $r_+$, $r_-$, and $R$. 
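As a sanity check (not part of the original derivation), the inversion in Eqs.~(\ref{ml}) and (\ref{jl}) can be verified by a short numerical round trip against Eqs.~(\ref{ml1}) and (\ref{jl1}). The sketch below is illustrative only, with units $G=\ell=1$ and an arbitrary non-overrotating choice of $m$ and ${\cal J}$ of our own making.

```python
from math import sqrt

# Round-trip check of the BTZ relations (illustrative units G = l = 1):
# 8 G l^2 m = r+^2 + r-^2 and 4 G l J = r+ r-, versus their inversion
# r_pm = 2 l sqrt( G m +/- sqrt((G m)^2 - (G J)^2 / l^2) ).
G = ell = 1.0
m, calJ = 0.5, 0.3  # any non-overrotating choice, m >= J/l

root = sqrt((G * m)**2 - (G * calJ / ell)**2)
r_plus = 2 * ell * sqrt(G * m + root)
r_minus = 2 * ell * sqrt(G * m - root)

assert abs(8 * G * ell**2 * m - (r_plus**2 + r_minus**2)) < 1e-12
assert abs(4 * G * ell * calJ - r_plus * r_minus) < 1e-12
assert r_plus >= r_minus  # non-overrotating: m >= J/l implies r+ >= r-
```

The same round trip fails (complex square root) for an overrotating choice $m<{\cal J}/\ell$, which is precisely the excluded regime discussed above.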
Indeed, from Eqs.~(\ref{ml}) and~(\ref{jl}) together with Eqs.~(\ref{mMJ}) and~(\ref{JJ}), one can swap the variables $M$, $R$, and $J$, into $r_+$, $r_-$, and $R$. So, from now on, we express our quantities in terms of $(r_+,r_-,R)$. Inverting Eq.~(\ref{mMJ}) and using Eqs.~(\ref{ml}) and~(\ref{jl}) together with Eq.~(\ref{JJ}), one finds \begin{eqnarray} \hskip -0.4cm M(r_+,r_-,R) = \frac{R}{4 G\ell} \Big( 1-\frac{1}{R^2}\sqrt{(R^2-r_+^2)(R^2-r_-^2)} \Big) \label{sig1}. \end{eqnarray} Inverting Eq.~(\ref{JJ}) and using Eqs.~(\ref{ml}) and~(\ref{jl}) (or more simply Eq.~(\ref{jl1})), one finds \begin{eqnarray} J(r_+,r_-,R)=\frac{r_+r_-}{4G \ell }\,. \label{j1} \end{eqnarray} The tangential pressure $p$ at the shell found through the junction conditions \cite{btzshell} (see also \cite{energycond}) is \begin{eqnarray} p(r_+,r_-,R) = \frac{1}{8\pi G \ell} \Bigg( \frac{R^4-r_+^2 r_-^2}{R^2\sqrt{(R^2-r_+^2)(R^2-r_-^2)}}-1 \Bigg)\,. \nonumber \\ \label{ne1} \end{eqnarray} The angular velocity $\Omega$ and the corresponding linear or circular velocity $v=R\,\Omega$ can be found either by the junction conditions or from one integrability condition of the first law of thermodynamics Eq.~(\ref{1st}) \cite{btzshell,extremalbtz}. The integrability condition gives that the angular velocity defined thermodynamically can be expressed by $\Omega(r_+,r_-,R)=\frac{ r_+r_-}{R \sqrt{ \left(1-\frac{r_+^2}{R^2}\right) \left(1-\frac{r_-^2}{R^2}\right)}} \Big( c(r_+,r_-)-\frac{1}{R^2} \Big)$, where $c(r_+,r_-)$ is an arbitrary integration function of $r_+$ and $r_-$ (see Eq.~(59) in \cite{btzshell} and Sec.~VI in \cite{extremalbtz}). We choose $ c(r_+,r_-)=\frac{1}{r_+^2}, $ in order to have a well-defined black hole limit \cite{btzshell,extremalbtz}. In this case, one sees that $\Omega$ vanishes when the shell approaches the gravitational radius, $R\to r_+$. 
Since the circular velocity of the shell is $v=R\,\Omega$, one has, with the choice $c(r_+,r_-)=\frac{1}{r_+^2}$ and after simplifications, that \begin{eqnarray} \label{ne3} v\big(r_+,r_-,R\big)=R\,\Omega(r_+,r_-,R)= \frac{r_-}{r_+} \sqrt{\frac{R^2-r_+^2}{R^2-r_-^2}}\,. \end{eqnarray} The temperature $T$, being a pure thermodynamic quantity, is found from another integrability condition of the first law of thermodynamics Eq.~(\ref{1st}) \cite{btzshell,extremalbtz}. As found in \cite{btzshell}, the temperature can be expressed as $ T(r_+,r_-,R)= \frac{T_0(r_+,r_-)}{ \frac{R}{\ell} \sqrt{ \left(1-\frac{r_+^2}{R^2}\right) \left(1-\frac{r_-^2}{R^2}\right)} } $, where $T_0(r_+,r_-)$ is an arbitrary function of $r_+$ and $r_-$ (see also Eqs.~(C2) and~(C3) from \cite{extremalbtz}). Now, $T_0(r_+,r_-)$ is chosen to be the Hawking temperature of the BTZ black hole, i.e., $ T_0(r_+,r_-)= T_H (r_+,r_-)=\frac{1}{2\pi \ell^2}\frac{r_+^2-r_-^2}{r_+}$ \cite{btz}. Thus, we have \begin{eqnarray} T\big(r_+,r_-,R\big) =\frac{r_+^2-r_-^2}{2\pi \ell R r_+} \frac{R^2} {\sqrt{\big(R^2-r_+^2\big)\big(R^2-r_-^2\big)}}. \label{ne2} \end{eqnarray} For the outer spacetime, it is useful to define the redshift function $k$ that appears naturally in several instances, namely, $ k\big(r_+,r_-,R\big) =\frac{R}{\ell} \sqrt{ \left(1-\frac{r_+^2}{R^2}\right) \left(1-\frac{r_-^2}{R^2}\right)} $. With this quantity, the temperature $T$ assumes the familiar form $ T(r_+,r_-,R)= \frac{T_H(r_+,r_-)}{k (r_+,r_-,R)} $, and so the function $T_H(r_+,r_-)$ can be interpreted as the temperature of the shell located at the radius where $k=1$, the Hawking temperature. Seen in this fashion, the formula for $T$ then expresses the gravitational redshift of the temperature of the shell, namely, it is an instance of the Tolman temperature formula. 
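As a quick consistency check (an editorial addition, not part of the original derivation), the Tolman form $T=T_H/k$ can be compared numerically against the explicit expression in Eq.~(\ref{ne2}); the sketch below uses illustrative units $\ell=1$ and an arbitrary admissible choice $R>r_+>r_-$ of our own making.

```python
from math import pi, sqrt

# Tolman check: the explicit T(r+, r-, R) of Eq. (ne2) against T_H / k,
# with T_H the BTZ Hawking temperature and k the redshift function.
# Units l = 1; the values of (r+, r-, R) are illustrative only.
ell = 1.0
r_p, r_m, R = 1.0, 0.4, 1.7   # admissible choice: R > r+ > r-

T_direct = (r_p**2 - r_m**2) / (2 * pi * ell * R * r_p) \
    * R**2 / sqrt((R**2 - r_p**2) * (R**2 - r_m**2))
T_H = (r_p**2 - r_m**2) / (2 * pi * ell**2 * r_p)
k = (R / ell) * sqrt((1 - r_p**2 / R**2) * (1 - r_m**2 / R**2))

assert abs(T_direct - T_H / k) < 1e-12
```

Pushing $R\to r_+$ in this check makes $k\to 0$ and $T$ blow up, which is the blueshift of the local temperature at the gravitational radius discussed below.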
Note that the choices, $ c(r_+,r_-)=\frac{1}{r_+^2} $ for the velocity $v$, and $ T_0(r_+,r_-)= T_H (r_+,r_-)=\frac{1}{2\pi \ell^2}\frac{r_+^2-r_-^2}{r_+} $ for the temperature $T$, that lead to Eqs.~\eqref{ne3} and \eqref{ne2}, respectively, are essential if we want to take the black hole limit, i.e., when the shell is taken to its gravitational radius, $R\to r_+$ \cite{btzshell,extremalbtz}. So we stick to these choices. \section{The three different approaches and limits to the BTZ extremal black hole} \label{sec3} \subsection{The variables useful to define the three different approaches and limits to an extremal horizon} To study the entropy of the BTZ extremal black hole we take a unified approach, see \cite{lqzn} for an extremal electrically charged shell in (3+1) dimensions. For that, we introduce the dimensionless parameters $\varepsilon$ and $\delta$ through \begin{equation} \varepsilon ^{2}=1-\frac{r_{+}^2}{R^2}\,, \label{e} \end{equation} \begin{equation} \delta ^{2}=1-\frac{r_{-}^2}{R^2}\,. \label{d} \end{equation} From Eqs.~(\ref{e}) and (\ref{d}), we see that we can change the independent thermodynamic variables $(r_+,r_-,R)$ into the new variables $(\varepsilon,\delta,R)$. In this set of variables, for example, the redshift function $k$ defined above takes the simple form $ k(\varepsilon,\delta,R)=\frac{R}{\ell} \varepsilon \delta $. \subsection{The geometry and the three horizon limits} The three relevant limits to an extremal black hole are: \vskip 0.3cm \noindent \textit{Case 1:} $r_+\neq r_-$ and $R\to r_+$, i.e., \begin{equation} \delta ={O}(1)\,,\quad\varepsilon \to 0\,. \label{de1} \end{equation} In evaluating the entropy $S$, we then take $r_+\to r_-$, i.e., the $\delta\to0$ limit, to make the shell extremal at its own gravitational radius $R=r_+$. 
\vskip 0.3cm \noindent \textit{Case 2:} $r_{+}\rightarrow r_{-}$ and $R\rightarrow r_{+}$, i.e., \noindent \begin{equation} \delta =\frac{\varepsilon }{\lambda}\,,\quad\varepsilon \to 0\,, \label{de} \end{equation} where the constant $\lambda$ is finite, not infinitesimal, and must satisfy $\lambda < 1$ due to $r_{+}> r_{-}$. The limit $\varepsilon \rightarrow 0$ means here that simultaneously $R\rightarrow r_{+}$ and $r_{+}\rightarrow r_{-}$ in such a way that $\delta \sim \varepsilon$. In other words, extremality and black holeness are approached concomitantly. \vskip 0.3cm \noindent \textit{Case 3:} $r_+=r_-$ and $R\to r_+$, i.e., \begin{equation} \delta=\varepsilon \,,\quad \varepsilon \to 0\,. \label{de2} \end{equation} This is the case in which the shell is extremal from the very beginning and is then pushed to its own gravitational radius. \subsection{Mass and angular momentum in the three horizon limits} In the variables $\varepsilon$ and $\delta$ of Eqs.~(\ref{e}) and (\ref{d}), the shell's rest mass $M$ in Eq.~\eqref{sig1} can be written as \begin{equation} M(\varepsilon,\delta,R)= \frac{R}{4G\ell} (1-\varepsilon \delta )\,. \label{med} \end{equation} As well, from Eq.~\eqref{j1} the shell's angular momentum $J$ is now \begin{equation} J(\varepsilon,\delta,R)= \frac{R^2}{4G\ell}\sqrt{(1-\varepsilon ^{2})(1-\delta ^{2})}\,. \label{jed} \end{equation} In all \textit{Cases} 1-3, the limits defined in Eqs.~(\ref{de1})-(\ref{de2}) yield \begin{equation} M(\varepsilon,\delta,r_+)= \frac{r_+}{4G\ell}\,, \end{equation} and \begin{equation} J(\varepsilon,\delta,r_+)= \frac{r_+^2}{4G\ell} \,, \label{mqk1} \end{equation} for the shell's mass and angular momentum, respectively. Thus, the three limits, not surprisingly, yield the same extremal condition, \begin{equation} J=r_+ M\,. 
\end{equation} \section{The pressure, the circular velocity, and the local temperature: The three extremal BTZ black hole limits} \label{sec4} \subsection{Pressure in the three horizon limits} \label{pphit} In the variables $\varepsilon$ and $\delta$ of Eqs.~(\ref{e}) and (\ref{d}), the shell's pressure $p$ in Eq.~\eqref{ne1} can be written as \begin{equation} p(\varepsilon,\delta,R) =\frac{1}{8\pi G\ell} \Big( \frac{\delta}{\varepsilon}+\frac{\varepsilon}{\delta} -1-\varepsilon\delta \Big) \,. \label{pd} \end{equation} For the \textit{Cases} 1-3, the limits defined in Eqs.~(\ref{de1})-(\ref{de2}) yield from Eq.~\eqref{pd} the expressions for the pressure as below. \vskip 0.3cm \noindent \textit{Case 1:} For $\delta ={O}(1)$ and $\varepsilon \to 0$, \begin{equation} p(\varepsilon,\delta,r_+) = \frac{\delta }{8\pi G \ell\,\varepsilon }\,, \label{pdiv} \end{equation} up to leading order. Eq.~(\ref{pdiv}) means that the pressure is divergent as $1/\varepsilon$. \vskip 0.3cm \noindent \textit{Case 2:} For $\delta =\frac{\varepsilon }{\lambda}$ and $\varepsilon \to0 $, \begin{equation} p(\varepsilon,\delta,r_+) = \frac{1}{8\pi G \ell} \Big( \frac{1}{\lambda}+\lambda-1 \Big)\,, \label{p3} \end{equation} up to leading order. Eq.~(\ref{p3}) means that the pressure remains finite and nonzero, since $\lambda$ is finite and fixed with $\lambda<1$. \vskip 0.3cm \noindent \textit{Case 3:} For $\delta=\varepsilon$ and $\varepsilon \to 0$, \begin{equation} p(\varepsilon,\delta,r_+)= \frac{1}{8\pi G\ell}\,, \label{pe} \end{equation} up to leading order. Eq.~(\ref{pe}) means that the pressure remains finite and nonzero. Note the difference to the (3+1)-dimensional electrically charged extremal shell in an asymptotically flat spacetime studied in \cite{extremalshell,lqzn}, where in this same limit one found instead $p=0$. 
This difference arises from the different asymptotic behaviors of the spacetime, namely, an asymptotically flat spacetime in \cite{extremalshell,lqzn} and an asymptotically AdS spacetime here, see also \cite{extremalbtz}. \subsection{Circular velocity in the three horizon limits} With the variables $\varepsilon$ and $\delta$ defined in Eqs.~(\ref{e}) and (\ref{d}), the shell's circular velocity $v$ in Eq.~\eqref{ne3} can be written as \begin{equation} v (\varepsilon ,\delta, R)=\sqrt{\frac{1-\delta ^{2}}{1-\varepsilon ^{2}}} \,\,\frac{\varepsilon }{\delta }\,. \label{cd} \end{equation} For the \textit{Cases} 1-3, the limits defined in Eqs.~(\ref{de1})-(\ref{de2}) yield from Eq.~\eqref{cd} the expressions for the circular velocity as below. \vskip 0.3cm \noindent \textit{Case 1:} For $\delta ={O}(1)$ and $\varepsilon \to 0$, \begin{equation} v(\varepsilon,\delta,r_+)=0\,, \end{equation} up to leading order. \vskip 0.3cm \noindent \textit{Case 2:} For $\delta =\frac{\varepsilon }{\lambda}$ and $\varepsilon \to0 $, \begin{equation} \label{v1} v(\varepsilon,\delta,r_+)= \lambda \,, \end{equation} up to leading order. Equation~(\ref{v1}) means that the circular velocity is nonzero since $\lambda$ is finite and fixed with $\lambda<1$. \vskip0.3cm \noindent \textit{Case 3:} For $\delta =\varepsilon $ and $\varepsilon \rightarrow 0$, \begin{equation} v (\varepsilon ,\delta,r_{+})\leq 1\,. \label{f2} \end{equation} This result is not found directly from Eq.~(\ref{cd}). Indeed, from Eq.~(\ref{cd}) it follows that $v(\varepsilon,\delta,r_{+})=1$. However, in this case, the condition $c(r_+,r_-)=1/r_+^2$ imposed to obtain Eq.~(\ref{ne3}) is no longer valid. An independent calculation is required for an ab initio extremal shell, as shown in \cite{extremalbtz}. In this case, there is also an interesting relationship between the impossibility for a material body to reach the velocity of light $v=1$ and the unattainability of the absolute zero of temperature \cite{extremalbtz}. 
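The limits of the pressure in Eq.~(\ref{pd}) and of the circular velocity in Eq.~(\ref{cd}) can be probed numerically by evaluating the exact $(\varepsilon,\delta)$ expressions at small $\varepsilon$. The sketch below is illustrative only: units $G=\ell=1$, and the function names and the values of $\lambda$ and $\varepsilon$ are our own choices.

```python
from math import pi, sqrt

# Exact (eps, delta) expressions for the shell's pressure, Eq. (pd), and
# circular velocity, Eq. (cd), in units G = l = 1 (illustrative sketch).
def p_shell(eps, delta):
    return (delta / eps + eps / delta - 1 - eps * delta) / (8 * pi)

def v_shell(eps, delta):
    return sqrt((1 - delta**2) / (1 - eps**2)) * eps / delta

lam, eps = 0.6, 1e-5
# Case 1 (delta fixed, eps -> 0): p diverges as 1/eps while v -> 0.
assert p_shell(eps, 0.5) > 1e3
assert v_shell(eps, 0.5) < 1e-4
# Case 2 (delta = eps/lambda): both stay finite, controlled by lambda.
assert abs(p_shell(eps, eps / lam) - (1 / lam + lam - 1) / (8 * pi)) < 1e-6
assert abs(v_shell(eps, eps / lam) - lam) < 1e-6
# Case 3 (delta = eps): p -> 1/(8 pi), the finite AdS value.
assert abs(p_shell(eps, eps) - 1 / (8 * pi)) < 1e-6
```

The Case 3 velocity is deliberately not probed here: as explained above, Eq.~(\ref{cd}) naively gives $v=1$ in that case, and the correct result $v\leq1$ requires the independent calculation of \cite{extremalbtz}.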
It is also worth recalling that the property $v<1$ was found for near-horizon particle orbits in the background of near-extremal black holes for the Kerr metric \cite{72} and in \cite{nh} for a much more general case. Thus, we see an interesting analogy between the limiting behaviors of self-gravitating shells in (2+1)-dimensional spacetimes and test particles in (3+1)-dimensional spacetimes. \subsection{Temperature in the three horizon limits} In the variables $\varepsilon$ and $\delta$ of Eqs.~(\ref{e}) and (\ref{d}), the shell's local temperature $T$ in Eq.~\eqref{ne2} can be written as \begin{equation} T(\varepsilon,\delta,R) =\frac{\delta ^{2}-\varepsilon ^{2}}{ 2\pi \ell \delta \varepsilon \sqrt{1-\varepsilon ^{2}}}\,. \label{tloc} \end{equation} For the \textit{Cases} 1--3, the limits defined in Eqs.~(\ref{de1})--(\ref{de2}) yield from Eq.~\eqref{tloc} the expressions for the local temperature as below. \vskip 0.3cm \noindent \textit{Case 1:} For $\delta ={O}(1)$ and $\varepsilon \to 0$, \begin{equation} \label{tdiv} T(\varepsilon,\delta,r_+)= \frac{\delta }{2\pi \ell \varepsilon }\,, \end{equation} up to leading order. Eq.~(\ref{tdiv}) means that the temperature is divergent as $1/\varepsilon$. \vskip 0.3cm \noindent \textit{Case 2:} For $\delta =\frac{\varepsilon }{\lambda}$ and $\varepsilon \to0$, \begin{equation} T(\varepsilon,\delta,r_+) = \frac{1-\lambda ^{2}}{2\pi \ell \lambda }\,, \label{t3} \end{equation} up to leading order. Eq.~(\ref{t3}) means that the local temperature is nonzero since $\lambda$ is finite and fixed with $\lambda<1$. It is worth noting a simple formula that follows from (\ref{p3}) and (\ref{t3}) and relates the pressure and temperature in this horizon limit, namely, $ \frac{p}{T}= \frac{1}{4G}\frac{1+\lambda^2-\lambda }{1-\lambda^2} $. Thus, if we believe that the horizon of a black hole probes quantum gravity physics, we find that in this case the quantum gravity regime obeys an ideal gas law. 
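The Case 2 limits of $p$, $v$, and $T$, and the $p/T$ relation just stated, can be verified numerically from Eqs.~\eqref{pd}, \eqref{cd}, and \eqref{tloc}. The following check is ours, not part of the original derivation; units with $G=\ell=1$ and the sample value $\lambda=1/2$ are our own choices.

```python
import math

G = ell = 1.0  # units with G = l = 1 (our assumption for this check)

def p(eps, delta):  # pressure, Eq. (pd)
    return (delta/eps + eps/delta - 1 - eps*delta) / (8*math.pi*G*ell)

def v(eps, delta):  # circular velocity, Eq. (cd)
    return math.sqrt((1 - delta**2) / (1 - eps**2)) * eps / delta

def T(eps, delta):  # local temperature, Eq. (tloc)
    return (delta**2 - eps**2) / (2*math.pi*ell*delta*eps*math.sqrt(1 - eps**2))

lam = 0.5        # fixed 0 < lambda < 1
eps = 1e-8       # Case 2: delta = eps/lambda, eps -> 0
delta = eps / lam

p_lim = (1/lam + lam - 1) / (8*math.pi*G*ell)  # Eq. (p3)
T_lim = (1 - lam**2) / (2*math.pi*ell*lam)     # Eq. (t3)

assert abs(p(eps, delta) - p_lim) < 1e-6
assert abs(v(eps, delta) - lam) < 1e-6         # Eq. (v1)
assert abs(T(eps, delta) - T_lim) < 1e-6
# p/T -> (1 + lam^2 - lam) / (4 G (1 - lam^2)), the ideal-gas-like relation
assert abs(p_lim/T_lim - (1 + lam**2 - lam)/(4*G*(1 - lam**2))) < 1e-12
```

The same functions reproduce Case 1 (divergence as $1/\varepsilon$ at fixed $\delta$) and the finite Case 3 pressure $1/(8\pi G\ell)$ when the substitutions $\delta={O}(1)$ or $\delta=\varepsilon$ are made instead.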
\vskip 0.3cm \noindent \textit{Case 3:} For $\delta=\varepsilon$ and $\varepsilon \to 0$, \begin{equation} T(\varepsilon,\delta,r_+) = {\rm finite}\,. \label{tsingle} \end{equation} This was shown in \cite{extremalbtz}. Eq.~(\ref{tsingle}) does not follow directly from Eq.~(\ref{tloc}); the defining condition for $T$ has to be modified. It turns out that $T_0 $ may depend not only on $r_+$ and $r_-$, but also on $R$. As a result, it may happen that $T_0\to 0$ but the local temperature on the shell $T$ remains finite \cite{extremalbtz}. \section{Entropy: The three extremal BTZ black hole limits} \label{sec5} Having carefully studied the equations of state for $p$, $v$, and $T$, we can now calculate the entropy by integrating the first law, see Eq.~\eqref{1st}, in all three cases. \vskip 0.3cm \noindent \textit{Case 1:} $\delta ={O}(1)$ and $\varepsilon \to 0$. \noindent Here, we first use the expressions in terms of $(\varepsilon,\delta,R)$, i.e., Eqs.~\eqref{med}, \eqref{jed}, \eqref{pd}, \eqref{cd} and \eqref{tloc}. Then one finds that the first law Eq.~\eqref{1st} can be expressed in terms of the differentials $d\varepsilon$, $d\delta$, and $dR$ as $dS(\varepsilon,\delta,R) = \frac{\pi}{2G} \Big( -\frac{R \varepsilon}{\sqrt{1-\varepsilon^2}} d\varepsilon+ \sqrt{1-\varepsilon^2}dR \Big)$. Then taking $\varepsilon \to 0$, i.e., $R\to r_+$, we get $dS(\varepsilon,\delta,r_+) = \frac{\pi}{2G}\,dr_+$. Since it does not depend on $\delta$, the expression is also valid in the $\delta\to0$ case, i.e., in the extremal case $r_+\to r_-$. Then integrating with the condition $S\to 0$ as $r_+\to 0$, we get in this extremal limit \begin{equation} S =\frac{A_+}{4G} \,, \label{sbh01} \end{equation} where $A_+=2\pi r_+$ is the area, i.e., the perimeter, of the shell, i.e., the ring, see Eq.~\eqref{areah}, when it is pushed to its gravitational radius. The entropy in Eq.~\eqref{sbh01} is nothing but the Bekenstein-Hawking entropy, see Eq.~\eqref{ent1bh}. 
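The differential form for $dS$ above is exact, which is what makes the integration well defined: the potential is $S=\frac{\pi}{2G}R\sqrt{1-\varepsilon^2}$, which tends to $\pi R/(2G)=A_+/(4G)$ as $\varepsilon\to0$. A short numerical check (ours, not from the original; $G=1$ assumed) confirms this.

```python
import math

# dS = (pi/(2G)) * ( M(eps,R) d(eps) + N(eps) dR ), read off from the text
M = lambda eps, R: -R*eps/math.sqrt(1 - eps**2)
N = lambda eps: math.sqrt(1 - eps**2)

# Candidate potential S = (pi/(2G)) * R * sqrt(1 - eps^2), with G = 1
S = lambda eps, R: math.pi/2 * R * math.sqrt(1 - eps**2)

def d(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2*h)  # central finite difference

eps0, R0 = 0.3, 2.0
# Exactness: dM/dR == dN/d(eps), so the form is a total differential
assert abs(d(lambda r: M(eps0, r), R0) - d(N, eps0)) < 1e-6
# The gradient of S reproduces (pi/2) M and (pi/2) N
assert abs(d(lambda e: S(e, R0), eps0) - math.pi/2*M(eps0, R0)) < 1e-6
assert abs(d(lambda r: S(eps0, r), R0) - math.pi/2*N(eps0)) < 1e-6
# As eps -> 0 (R -> r_+): S -> (pi/2) r_+ = A_+/(4G), with A_+ = 2 pi r_+
assert abs(S(1e-9, R0) - math.pi/2*R0) < 1e-9
```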
\vskip 0.3cm \noindent \textit{Case 2:} $\delta =\frac{\varepsilon }{\lambda}$ and $\varepsilon \to0$. \noindent Here, we also have to use the expressions in terms of $(\varepsilon,\delta,R)$, i.e., Eqs.~\eqref{med}, \eqref{jed}, \eqref{pd}, \eqref{cd} and \eqref{tloc}, and then the first law Eq.~\eqref{1st} can be expressed in terms of the differentials $d\varepsilon$, $d\delta$, and $dR$ as $dS(\varepsilon,\delta,R) = \frac{\pi}{2G} \Big( -\frac{R \varepsilon}{\sqrt{1-\varepsilon^2}}d\varepsilon + \sqrt{1-\varepsilon^2}dR \Big)$, which is the same formula as in \textit{Case 1}. Then taking $\varepsilon \to 0$, i.e., $R\to r_+\to r_-$, we get $dS(\varepsilon,\delta,r_+) = \frac{\pi}{2G}\,dr_+$. This means that the entropy is independent of the parameter $\lambda$. Then integrating with the condition $S\to 0$ as $r_+\to 0$, we get in this extremal limit \begin{equation} S =\frac{A_+}{4G} \,, \label{sbh02} \end{equation} where again $A_+=2\pi r_+$, see Eq.~\eqref{areah}. The entropy in Eq.~\eqref{sbh02} is again the Bekenstein-Hawking entropy, see Eq.~\eqref{ent1bh}. This result did not appear in the former studies \cite{btzshell,extremalbtz}. \vskip 0.3cm \noindent \textit{Case 3:} $\delta =\varepsilon$ and $\varepsilon \to0$. \noindent This case is special. One takes the extremality condition $\delta=\varepsilon$ from the beginning, and thus another route to calculate the entropy has to be followed. This was performed in \cite{extremalbtz} and the result is \begin{equation} S=S(A_+)\,, \label{sbh03} \end{equation} where $S(A_+)$ is a well-behaved, but otherwise arbitrary, function of $A_+$, see also Eq.~\eqref{ente11bh}. One can argue, as was done in \cite{extremalbtz}, that the lower and upper bounds for the entropy in this case are given by the zero entropy, Eq.~\eqref{ent1ebh}, and the Bekenstein-Hawking entropy, Eq.~\eqref{ent1bh}, i.e., $ 0\leq S(r_+) \leq \frac{A_+}{4G}$, see Eq.~\eqref{lowupent1bh}. 
In addition, Eq.~\eqref{sbh03} suggests that the entropy of an extremal black hole does not take a unique value, but instead may depend on the preceding history that led to the formation of precisely that extremal black hole, see also \cite{pretisrvol}. \section{Contributions to the entropy in the three extremal horizon limits} \label{sec6} Finally, for all three different cases, we state which terms in the first law \eqref{1st} give the dominant contributions to the entropy. \vskip 0.3cm \noindent \textit{Case 1:} $\delta ={O}(1)$ and $\varepsilon \to 0$. \noindent Here the pressure term Eq.~(\ref{pdiv}) gives the entire contribution to the entropy. Taking then into account Eq.~(\ref{tdiv}), we obtain the Bekenstein-Hawking entropy (\ref{sbh01}). \vskip 0.3cm \noindent \textit{Case 2:} $\delta =\frac{\varepsilon }{\lambda}$ and $\varepsilon \to 0 $. \noindent All three terms in the first law \eqref{1st} contribute equally to the entropy. Thus, the mass, pressure, and circular velocity terms give contributions to the Bekenstein-Hawking entropy \eqref{sbh02}. \vskip 0.3cm \noindent \textit{Case 3:} $\delta=\varepsilon$ and $\varepsilon \to 0$. \noindent All three terms in the first law \eqref{1st} contribute to the entropy, see \cite{extremalbtz}. We note that, in contrast to the electrically charged case \cite{extremalshell}, the pressure does not vanish in the extremal limit and contributes to the entropy in the first law as all other terms do, see Eq.~\eqref{sbh03}. \vskip 0.3cm We summarize these results in Table~1. 
\begin{widetext} \hskip -0.5cm \begin{tabular} [c]{|l|l|l|l|l|l|}\hline Case & Pressure $p$ & Circular velocity $v$ & Local temperature $T$ & Entropy $S(A_+)$ & Contribution \\\hline 1 & Infinite & 0 & Infinite & $\frac{A_+}{4G}$~Eq.~\eqref{sbh01} & Pressure\\\hline 2 & Finite nonzero &$<1$ & Finite nonzero & $\frac{A_+}{4G}$~Eq.~\eqref{sbh02} & Mass, pressure, and circular velocity \\\hline 3 & Finite nonzero & $\leq1$ & Finite (zero or nonzero) & $0\leq S(A_+)\leq \frac{A_+}{4G}$~Eq.~\eqref{sbh03}& Mass, pressure, and circular velocity \\\hline \end{tabular} \vskip 0.2cm \noindent Table 1. The contributions of the pressure $p$, circular velocity $v$, and temperature $T$ to the entropy of the extremal black hole $S(A_+)$, according to the first law. \label{tabent} \end{widetext} \section{Conclusions\label{sec7}} We have presented a unified framework to explain how the different entropies of an extremal BTZ black hole arise from an extremal shell. \textit{Case 1} and \textit{Case 2} agree in the entropy but disagree in all other thermodynamic quantities. \textit{Case 2} and \textit{Case 3} disagree in the entropy but agree in all other thermodynamic quantities. Therefore, in this sense, \textit{Case 2} is intermediate between \textit{Case 1} and \textit{Case 3}. These results complement the former studies in a (2+1)-dimensional BTZ spacetime \cite{quintalemosbtzshell,energycond,btzshell,extremalbtz}, and have much in common with those in the (3+1)-dimensional electrically charged case \cite{charged,extremalshell}, in particular, with \cite{lqzn}. Consideration of astrophysically relevant rotating black holes in (3+1) dimensions is too complex. In this regard, using the (2+1)-dimensional rotating BTZ exact solution enables one to trace quite subtle details that are expected in the more realistic (3+1) case. Therefore, we hope that the present work can shed light on the entropy issue for (3+1)-dimensional black holes as well. 
\section*{ACKNOWLEDGEMENTS} We thank Funda\c c\~ao para a Ci\^encia e Tecnologia (FCT), Portugal, for financial support through Grant~No.~UID/FIS/00099/2013. MM thanks FCT for financial support through Grant No.~SFRH/BPD/88299/2012. JPSL thanks Coordena\c c\~ao de Aperfei\c coamento do Pessoal de N\'\i vel Superior (CAPES), Brazil, for support within the Programa CSF-PVE, Grant No.~88887.068694/2014-00. JPSL also thanks an FCT grant, No.~SFRH/BSAB/128455/2017. OBZ thanks support from SFFR, Ukraine, Project No.~32367. OBZ has also been partially supported by the Kazan Federal University through a state grant for scientific activities.
\section{Introduction} Secret sharing schemes are modifications of cooperative games to the situation when not money but information is shared. Instead of dividing a certain sum of money between participants, a secret sharing scheme divides a secret into shares---which are then distributed among participants---so that some coalitions of participants have enough information to recover the secret (authorised coalitions) and some (nonauthorised coalitions) do not. A scheme is perfect if it gives no information whatsoever to nonauthorised coalitions. A perfect scheme is most informationally efficient if the shares contain the same number of bits as the secret \cite{Karnin83}; such schemes are called ideal. The set of authorised coalitions is called the access structure. However, not all access structures can carry an ideal secret sharing scheme \cite{Stinson:1992}. Finding a description of those which can appeared to be quite difficult. A major milestone in this direction was the paper by \citeA{BD91}, who showed that all ideal secret sharing schemes can be obtained from matroids. Not all matroids, however, define ideal schemes \cite{Seymour1992}, so the problem is reduced to classifying those matroids that do. There was little further progress, if any, in this direction. Several authors attempted to classify all ideal access structures in subclasses of secret sharing schemes. These include access structures defined by graphs \cite{BD91}, weighted threshold access structures \cite{beimel:360, padro:2010}, hierarchical access structures \cite{padro:2010}, bipartite and tripartite access structures \cite{Padro:1998, PadroS04,FMP2012}. While in the classes of bipartite and tripartite access structures the ideal ones were given explicitly, for the case of weighted threshold access structures \citeA{beimel:360} suggested a new kind of description. This method uses the operation of composition of access structures \cite{martin:j:new-sss-from-old}. 
The idea is that sometimes all players can be classified into `strong' players and `weak' players, and the access structure can be decomposed into the main game that contains the strong players and the auxiliary game that contains the weak players. Under this approach the first task is obtaining a characterisation of indecomposable structures. \citeA{beimel:360} proved that every ideal indecomposable secret sharing scheme is either disjunctive hierarchical or tripartite. \citeA{padro:2010,FarrasP12} later gave a more precise classification which was complete (but some access structures that they viewed as indecomposable later appeared to be decomposable). If a composition of two weighted access structures were again a weighted structure, there would be no need to do anything else. However, we will show that this is not true. Since the composition of two weighted access structures may not be again weighted, it is not clear which indecomposable structures, and in which numbers, can be combined to obtain more complex weighted access structures. To answer this question, in this paper we undertake a thorough investigation of the operation of composition. \par\medskip Since the access structure of any secret sharing scheme is a simple game in the sense of \citeA{vNM:b:theoryofgames}, we found it more convenient to use game-theoretic methods and terminology. Section~2 of the paper gives the background in simple games. We introduce some important concepts from game theory, like Isbell's desirability relation on players, which will play an important role in this paper. We remind the reader of the concept of a complete simple game, which is a simple game for which Isbell's desirability relation is complete\footnote{In \cite{padro:2010} such games are called hierarchical.}. We introduce the technique of trading transforms and certificates of nonweightedness \cite{GS2011} for proving that a simple game is a weighted threshold game. 
In Section 3, we give the motivation for the concept of the composition $C=G\circ_g H$ of two games $G$ and $H$ over an element $g\in G$, and give the definition and examples. The essence of this construction is as follows: in the first game $G$ we choose an element $g\in G$ and replace it with the second game $H$. The winning coalitions in the new game are of two types. Firstly, every winning coalition in $G$ that does not contain $g$ remains winning in $C$. Secondly, a winning coalition in $G$ which contained $g$ needs a winning coalition of $H$ to be added to it to become winning in $C$. We prove several properties of this operation; in particular, we prove that the operation of composition of games is associative. Section~4 presents preliminary results regarding the compositions of ideal games and weighted games in general. We start by reminding the reader that the composition of two games is ideal if and only if the two games being composed are ideal \cite{beimel:360}. Then we show that if a weighted game is composed of two games, then the two composed games are also weighted. Finally, we prove the first sufficient condition for a composition to be weighted. Section~5 is devoted to compositions in the class of complete games. We prove that, with few possible exceptions, the composition of two complete games is complete if and only if the composition is over the weakest player relative to the desirability relation of the first game. We show that the composition of two weighted threshold simple games may not be weighted threshold even if we compose over the weakest player. We give some sufficient conditions for the composition of two weighted games to be weighted. In Section~6 we prove that onepartite games are indecomposable, and also prove the uniqueness of some decompositions. In Section~7 we recap the classification of indecomposable ideal weighted simple games given by \citeA{padro:2010}. 
According to it, all ideal indecomposable games are either $k$-out-of-$n$ games or belong to one of the six classes: $\bf B_1$, $\bf B_2$, $\bf B_3$, $\bf T_1$, $\bf T_2$, $\bf T_3$. We show that some of the games in their list are in fact decomposable, and hence arrive at a refined list of all indecomposable ideal weighted simple games. In Section~8 we investigate which of the games from the refined list can be composed to obtain a new ideal weighted simple game. The result is quite striking: the composition of two indecomposable weighted games is weighted only in two cases, namely when the first game is a $k$-out-of-$n$ game, or when the first game is of type $\bf B_2$ (from the Farras and Padro list) and the second game is an anti-unanimity game where all players are passers, i.e., players that can win without forming a coalition with other players. This has a major implication for the refinement of the Beimel-Tassa-Weinreb-Farras-Padro theorem. In Section~9, using the results of Section~8, we show that a game $G$ is an ideal weighted simple game if and only if it is a composition \[ G=H_1\circ \cdots \circ H_s\circ I\circ A_n, \] where $H_i$ is a $k_i$-out-of-$n_i$ game for each $i=1,2,\ldots, s$, $A_n$ is an anti-unanimity game, and $I$ is an indecomposable game of types $\bf B_1$, $\bf B_2$, $\bf B_3$, $\bf T_1$, and $\bf T_{3}$. Any of these may be absent, but $A_n$ may appear only if $I$ is of type $\bf B_2$. The main surprise in this result is that in the decomposition there may be at most one game of types $\bf B_1$, $\bf B_2$, $\bf B_3$, $\bf T_1$, $\bf T_3$. \section{Preliminaries} \iffalse \subsection{Secret Sharing Schemes} Suppose $n$ agents from a set $A=\{1,2,\ldots, n\}$ agree to share a secret in such a way that some coalitions (subsets) of $A$ are authorised to know the secret. In other words, a certain access structure to the secret is put in place. 
An {\em access structure} is any subset $W\subseteq 2^A$ such that \begin{equation}\label{mon} \text{if}\ X\in W\ \text{and}\ X\subseteq Y,\ \text{then}\ Y\in W, \end{equation} reflecting the fact that if a smaller coalition knows the secret, then a larger one will know it too. The access structure is public knowledge and all agents know it. Due to the monotonicity requirement (\ref{mon}), the access structure is completely defined by its minimal authorised coalitions.\ It is normally assumed that every agent participates in at least one minimal authorised coalition. \par\medskip Let $S_0,\row Sn$ be finite sets, where $S_0$ will be interpreted as the set of all possible secrets and $S_i$ is the set of all possible `shares of the secret' that can be given to agent $i$. Any subset \[ {\cal T}\subseteq S_0\times S_1\times\ldots\times S_n \] will be called a {\em distribution table}. If a secret $s_0\in S_0$ is to be distributed among the agents, then an $(n+1)$-tuple \[ (s_0,\row sn)\in {\cal T} \] is chosen by the dealer uniformly at random among those tuples whose first coordinate is $s_0$, and then agent $i$ gets the share $s_i\in S_i$. A {\em secret sharing scheme} is a family of triples ${\cal S}=(W,{\cal T}, f_X)_{X\in W}$, where $W$ is an access structure, ${\cal T}$ is a distribution table, and for every authorised coalition $X=\{\row ik\}\in W$ the function (algorithm) $$f_X\colon S_{i_1}\times\ldots\times S_{i_k}\to S_0$$ satisfies $ f_X(s_{i_1}, s_{i_2},\ldots, s_{i_k})=s_0 $ for every $(s_0,\row sn)\in {\cal T}$. The family $(f_X)_{X\in W}$ is said to be the {\em family of secret recovery functions}. \begin{example}\label{n_n} Let us design a secret sharing scheme with $n$ agents such that the only authorised coalition is the grand coalition, that is, the set $A=\{1,2,\ldots, n\}$. We take a sufficiently large field $F$ and set $S_0=F$. The field is large enough so that it is infeasible to try all secrets one by one. We will also have $S_i=F$ for all $i=1,\ldots, n$. 
Given a secret $s\in F$ to share, the dealer may, for example, generate $n-1$ random elements $\row s{n-1}\in F$ and calculate $s_n=s-(s_1+\ldots+s_{n-1})$. Then he may give share $s_i$ to agent $i$. The distribution table ${\cal T}$ will consist of all tuples $(s_0,\row sn)$ such that $\sum_{i=1}^ns_i=s_0$, and the secret recovery function (since we have only one authorised coalition in this case, there is only one secret recovery function) will be $f_A(\row sn)=s_1+\ldots+s_{n}$. \end{example} \begin{definition} A secret sharing scheme ${\cal S}=(W,{\cal T}, f_X)_{X\in W}$ is called {\em perfect} if for every non-authorised subset $\{\row jm\}\subset A$, for every sequence of elements $s_{j_1}, s_{j_2},\ldots, s_{j_m}$, such that $s_{j_r}\in S_{j_r}$, and for every two possible secrets $s,s'\in S_0$, the distribution table ${\cal T}$ contains as many tuples $(s,\ldots, s_{j_1},\ldots, s_{j_m}, \ldots )$ as tuples $(s',\ldots, s_{j_1},\ldots, s_{j_m}, \ldots )$. \end{definition} In a perfect scheme a non-authorised coalition has no information about the secret whatsoever. The scheme from Example~\ref{n_n} is obviously perfect. Another perfect secret sharing scheme is the famous Shamir's one. \begin{example}[Shamir, 1979]\label{k_n} Suppose that we have $n$ agents and the access structure is now $W=\{X\subseteq A\mid |X|\ge k\}$, i.e., a coalition is authorised if it contains at least $k$ agents. Let $F$ be a large finite field and let $S_i=F$ for $i=0,1,2,\ldots, n$. Let $\row an$ be distinct fixed nonzero elements of $F$. Suppose $s\in F$ is the secret to share. The dealer sets $t_0=s$ and generates randomly $\row t{k-1}\in F$. He forms the polynomial $p(x)=t_0+t_1x+\ldots+t_{k-1}x^{k-1}$ (note that $p(0)=t_0=s$). Then he gives share $s_i=p(a_i)$ to agent $i$. Let now $X=\{\row ik\}$ be a minimal authorised coalition. 
Then the secret recovery function is \[ f_X(s_{i_1},\ldots, s_{i_k})=\sum_{r=1}^k s_{i_r} \frac{(-a_{i_1})\ldots \widehat{(-a_{i_r})}\ldots (-a_{i_k}) }{(a_{i_r}-a_{i_1})\ldots \widehat{(a_{i_r}-a_{i_r})}\ldots (a_{i_r}-a_{i_k})}, \] where the hat over a term indicates that the term is omitted. This is the value at zero of the Lagrange interpolation polynomial \[ \sum_{r=1}^k p(a_{i_r}) \frac{(x-a_{i_1})\ldots \widehat{(x-a_{i_r})}\ldots (x-a_{i_k}) }{(a_{i_r}-a_{i_1})\ldots \widehat{(a_{i_r}-a_{i_r})}\ldots (a_{i_r}-a_{i_k})}, \] which is equal to $p(x)$. \end{example} The access structure from Example~\ref{k_n} is called a {\em threshold access structure} or a $k$-out-of-$n$ threshold access structure. It is not difficult to see that the scheme is perfect. It is known \cite{Benaloh1990} that for any access structure $W$ there exists a perfect secret sharing scheme which realises $W$. \citeA{Karnin83} showed that in a perfect secret sharing scheme $|S_i|\ge |S_0|$ for all $i=1,\ldots, n$; so the most informationally efficient schemes have the domain of the secrets and the domains of the shares of the same size. \begin{definition} A secret sharing scheme ${\cal S}=(W,{\cal T}, f_X)_{X\in W}$ is called {\em ideal} if it is perfect and $|S_i|= |S_0|$ for all $i=1,\ldots, n$. \end{definition} Shamir's secret sharing scheme is obviously ideal. The classification of ideal secret sharing schemes has been a central topic of this theory for some time and is far from being solved. \citeA{BD91} showed that there is a unique matroid associated with every ideal secret sharing scheme. At the same time, there are matroids that do not correspond to any ideal scheme. The problem appeared to be easier in the subclass of weighted threshold secret sharing schemes, to which this paper is devoted. At the end we will give a complete classification. \fi \subsection{Simple Games} The main motivation for this work comes from secret sharing. 
However, the access structure on the set of users is a {\em simple game} on that set, so we will use game-theoretic terminology. \begin{definition}[von Neumann \& Morgenstern, 1944] A simple game is a pair $G=(P_G,W_G)$, where $P_G$ is a set of players and $W_G\subseteq 2^{P_G}$ is a nonempty set of coalitions which satisfies the monotonicity condition: \[ \text{if $X\in W_G$ and $X\subseteq Y$, then $Y\in W_G$}. \] Coalitions from the set $W_G$ are called {\em winning} coalitions of $G$; the remaining ones are called {\em losing}. \end{definition} A typical example of a simple game is the United Nations Security Council, which consists of five permanent members and ten nonpermanent ones. The passage of a resolution requires that all five permanent members vote for it, and also at least nine members in total. The book by \citeA{tz:b:simplegames} gives many other interesting examples. A simple game will be called just a game. The set $W_G$ of winning coalitions of a game $G$ is completely determined by the set $W_G^{\text{min}} $ of its minimal winning coalitions. A player who does not belong to any minimal winning coalition is called a {\em dummy}. He can be removed from any winning coalition without making it losing. A player who is contained in every minimal winning coalition is called a {\em vetoer}. A game with a unique minimal winning coalition is called an {\em oligarchy}. In an oligarchy every player is either a vetoer or a dummy. A player who alone forms a winning coalition is called a {\em passer}. A game in which all minimal winning coalitions are singletons is called an {\em anti-oligarchy}. In an anti-oligarchy every player is either a passer or a dummy. \begin{definition} A simple game $G$ is called a {\em weighted threshold game} if there exist nonnegative weights $\row wn$ and a real number $q$, called the {\em quota}, such that \begin{equation} \label{WMG} X\in W_G \Longleftrightarrow \sum_{i\in X}w_i\ge q. \end{equation} This game is denoted $[q;\row wn]$. 
We call such a game simply {\em weighted}. \end{definition} It is easy to see that the United Nations Security Council can be defined in terms of weights as $[39; 7,\ldots,7,1,\ldots,1]$. In secret sharing, weighted threshold access structures were introduced by \citeA{shamir:1979,Blakley1979}.\par\medskip For $X \subset P$ we will denote its complement $P \setminus X$ by $X^c$. \begin{definition} Let $G=(P,W)$ be a simple game and $A\subseteq P$. Let us define the subsets \[ W_{\text{sg}}=\{X\subseteq A^c\mid X\in W\}, \quad W_{\text{rg}}=\{X\subseteq A^c\mid X\cup A\in W\}. \] Then the game $G_A=(A^c,W_\text{sg})$ is called a {\em subgame} of $G$ and $G^A=(A^c,W_\text{rg})$ is called a {\em reduced game} of $G$. \end{definition} The two main concepts of the theory of simple games that we will need here are as follows. Given a simple game $G$ on the set of players $P$, we define a relation $\succeq_G$ on $P$ by setting $i \succeq_G j$ if for every set $X\subseteq P$ not containing $i$ and~$j$ \begin{equation} \label{condition} X\cup \{j\}\in W_G \Longrightarrow X\cup \{i\} \in W_G. \end{equation} In such a case we will say that $i$ is at least as {\em desirable} (as a coalition partner) as $j$. In the United Nations Security Council every permanent member is more desirable than any nonpermanent one. This relation is reflexive and transitive but not always complete (total) (e.g., see \citeA{CF:j:complete}). The corresponding equivalence relation on $[n]$ will be denoted $\sim_{G} $ and the strict desirability relation $\succ_G$. If this can cause no confusion, we will omit the subscript $G$. \begin{definition} Any game with a complete desirability relation is called {\em complete}. \end{definition} \begin{example} Any weighted game is complete. \end{example} We note that in \eqref{condition} we can choose $X$ which is minimal with this property, in which case $X\cup\{i\}$ will be a minimal winning coalition. Hence the following is true. 
\begin{proposition} Given a simple game $G$ on the set of players $P$ and two players $i,j\in P$, the relation $i\succ_G j$ is equivalent to the existence of a minimal winning coalition $X$ which contains $i$ but not $j$ such that $(X\setminus \{i\})\cup \{j\}$ is losing. \end{proposition} We recap that a sequence of coalitions \begin{equation} \label{tradingtransform} {\cal T}=(\row Xj;\row Yj) \end{equation} is a trading transform \cite{tz:b:simplegames} if the coalitions $\row Xj$ can be converted into the coalitions $\row Yj$ by rearranging players. This latter condition can also be expressed as \[ |\{i:a\in X_i\}| = |\{i:a\in Y_i\}|\qquad \text{for all $a\in P$}. \] It is worthwhile to note that, while in (\ref{tradingtransform}) we may assume that no $X_i$ coincides with any of the $Y_k$, it is perfectly possible that the sequence $\row Xj$ has some equal terms; the sequence $\row Yj$ can also contain equal terms. \citeA{Elgot60} proved (see also \citeA{tz:b:simplegames}) the following fundamental fact. \begin{theorem} A game $G$ is a weighted threshold game if and only if for no integer $j$ there exists a trading transform \eqref{tradingtransform} such that all coalitions $\row Xj$ are winning and all $\row Yj$ are losing. \end{theorem} Due to this theorem, any trading transform \eqref{tradingtransform} where all coalitions $\row Xj$ are winning and all $\row Yj$ are losing is called a {\em certificate of nonweightedness} \cite{GS2011}. Completeness can also be characterized in terms of trading transforms \cite{tz:b:simplegames}. \begin{theorem} A game $G$ is complete if and only if no certificate of nonweightedness exists of the form \begin{equation} \label{certinc} {\cal T}=(X\cup \{x\}, Y\cup \{y\}; X\cup \{y\}, Y\cup \{x\}). \end{equation} \end{theorem} We call \eqref{certinc} a {\em certificate of incompleteness}. This theorem says that completeness is equivalent to the impossibility for two winning coalitions to swap two players and become both losing. 
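To make these characterizations concrete, one can check mechanically whether a pair of coalition sequences forms a trading transform. The following sketch is ours (the function name and the 4-player example are not from the original text):

```python
from collections import Counter

def is_trading_transform(xs, ys):
    """Check that the coalitions xs can be rearranged into ys:
    each player must occur the same number of times on both sides."""
    if len(xs) != len(ys):
        return False
    count = lambda cs: Counter(p for c in cs for p in c)
    return count(xs) == count(ys)

# The 4-player game with minimal winning coalitions {1,2} and {3,4}
# is not weighted: ({1,2},{3,4}; {1,3},{2,4}) is a trading transform
# with winning coalitions on the left and losing ones on the right,
# i.e., a certificate of nonweightedness.  It also has the form
# (X u {x}, Y u {y}; X u {y}, Y u {x}) with X={1}, Y={4}, x=2, y=3,
# so it is a certificate of incompleteness as well.
winning = [{1, 2}, {3, 4}]
losing = [{1, 3}, {2, 4}]
assert is_trading_transform(winning, losing)
```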
This latter property is also called {\em swap robustness}.\par\medskip A complete game $G=(P,W)$ can be compactly represented using multisets. All its players are split into equivalence classes of players of equal desirability. If, say, we have $m$ equivalence classes, i.e., $P=P_1\cup P_2\cup \ldots \cup P_m$ with $|P_i|=n_i$, then we can think of $P$ as the multiset \[ \{1^{n_1},2^{n_2},\ldots,m^{n_m}\}. \] A submultiset $\{1^{\ell_1},2^{\ell_2},\ldots,m^{\ell_m}\}$ will then denote the class of coalitions where $\ell_i$ players come from $P_i$, $i=1,\ldots,m$. All of them are either all winning or all losing. We may enumerate the classes so that $1\succ_G 2\succ_G \cdots \succ_G m$. The game with $m$ classes is called {\em $m$-partite}. If a game $G$ is complete, then we define {\em shift-minimal} \cite{CF:j:complete} winning coalitions as follows. By a {\em shift} we mean a replacement of a player of a coalition by a less desirable player which did not belong to it. Formally, given a coalition $X$, a player $p\in X$ and another player $q\notin X$ such that $q\prec_{G}p$, we say that the coalition $ (X\setminus \{p\})\cup \{q\} $ is obtained from $X$ by a {\em shift}. A winning coalition $X$ is {\em shift-minimal} if every coalition strictly contained in it and every coalition obtained from it by a shift are losing. A complete game is fully defined by its shift-minimal winning coalitions. \begin{example}[Onepartite games] Let $H_{n,k}$ be the game where there are $n$ players and it takes $k$ or more to win. Such games are called {\em $k$-out-of-$n$ games}. Alternatively, they can be characterised as the class of complete 1-partite games, i.e., the games with a single class of equivalent players. The game $H_{n,n}$ is special and is called the {\em unanimity game} on $n$ players. We will denote it $U_n$. The game $H_{n,1}$ does not have a name in the literature. We will call it the {\em anti-unanimity game} and denote it $A_n$. 
\end{example} \begin{example}[Bipartite games] Here we introduce two important types of bipartite games. A hierarchical disjunctive game $H_\exists ({\bf n},{\bf k})$ with ${\bf n}=(n_1,n_2)$ and ${\bf k}=(k_1,k_2)$ on a multiset $P=\{1^{n_1},2^{n_2}\}$ is defined by the set of winning coalitions \[ W_\exists = \{ \{1^{\ell_1},2^{\ell_2}\} \mid (\ell_1\ge k_1) \vee (\ell_1+\ell_2\ge k_2) \}, \] where $1\le k_1<k_2$, $k_1\le n_1$ and $k_2-k_1 < n_2$. A hierarchical conjunctive game $H_\forall ({\bf n},{\bf k})$ with ${\bf n}=(n_1,n_2)$ and ${\bf k}=(k_1,k_2)$ on a multiset $P=\{1^{n_1},2^{n_2}\}$ is defined by the set of winning coalitions \[ W_\forall = \{ \{1^{\ell_1},2^{\ell_2}\} \mid (\ell_1\ge k_1) \wedge (\ell_1+\ell_2\ge k_2) \}, \] where $1\le k_1\le k_2$, $k_1\le n_1$ and $k_2-k_1 < n_2$. In both cases, if the restrictions on ${\bf n}$ and ${\bf k}$ are not satisfied, the game becomes 1-partite \cite{gha:t:hierarchical}. \end{example} \begin{example}[Tripartite games] \label{ex} Here we introduce two types of tripartite games. Let ${\bf n}=(n_1,n_2,n_3)$ and ${\bf k}=(k_1,k_2,k_3)$, where $n_1,n_2,n_3$ and $k_1,k_2,k_3$ are positive integers. The game $\Delta_1({\bf n},{\bf k})$ is defined on the multiset $P=\{1^{n_1},2^{n_2},3^{n_3}\}$ with the set of winning coalitions \[ \{ \{1^{\ell_1},2^{\ell_2},3^{\ell_3}\} \mid (\ell_1\ge k_1) \vee [(\ell_1+\ell_2\ge k_2)\wedge (\ell_1+\ell_2+\ell_3\ge k_3)] \}, \] where \begin{equation} \label{delta_cond_1} k_1<k_3,\quad k_2<k_3,\quad n_1 \geq k_1,\quad n_2 >k_2- k_1 \quad \text{and $\quad n_3> k_3-k_2$}. \end{equation} These, in particular, imply $n_1+n_2\ge k_2$.\smallskip The game $\Delta_2({\bf n},{\bf k})$ is for the case when $n_2 \leq k_2 -k_1$, and it is defined on the multiset $P=\{1^{n_1},2^{n_2},3^{n_3}\}$ with the set of winning coalitions \[ \{ \{1^{\ell_1},2^{\ell_2},3^{\ell_3}\} \mid (\ell_1+\ell_2\ge k_2) \vee [(\ell_1\ge k_1)\wedge (\ell_1+\ell_2+\ell_3\ge k_3)] \}. 
\] where \begin{equation} \label{delta_cond_2} k_1< k_2<k_3, \quad n_1+n_2\ge k_2,\quad n_3> k_3-k_2, \quad \text{and $\quad n_2+n_3> k_3-k_1$}. \end{equation} These conditions, in particular, imply $n_1\ge k_1$ and $n_3\ge 2$. In both cases, if the restrictions on ${\bf n}$ and ${\bf k}$ are not satisfied, the game either contains dummies or becomes 2-partite or even 1-partite (see a justification of this claim in the appendix). \end{example} The games in these three examples play a crucial role in the classification of ideal weighted secret sharing schemes \cite{beimel:360,padro:2010}. \section{The Operation of Composition of Games} The most general type of composition of simple games was defined by \citeA{Shapley62}. We need only a very special case of that concept here, which, in the context of secret sharing, was introduced by \citeA{martin:j:new-sss-from-old}. \begin{definition} \label{decompo} Let $G$ and $H$ be two games defined on disjoint sets of players and $g \in P_{G}$. We define the composition game $C=G\circ_g H$ by setting $P_{C}=(P_{G}\setminus \{g\}) \cup P_{H}$ and \[ W_{C}= \{X \subseteq P_C\mid X_G \in W_{G} \text{ or $X_G \cup \{g\} \in W_{G}$ and $X_H \in W_{H}$} \}, \] where $X_G = X \cap P_{G}$ and $X_H = X \cap P_{H}$. \end{definition} This operation substitutes the game $H$ for a single player $g$ of the first game. All winning coalitions of $G$ not containing $g$ remain winning in $C$. If a winning coalition of $G$ contained $g$, then it remains winning in $C$ when $g$ is replaced with a winning coalition of $H$. One might imagine that, when a certain issue is put to a vote in $G$, the players of $H$ vote first, and the outcome of their vote is counted in the first game as the vote of player $g$. Such a situation arises, for example, when a very experienced expert resigns from a company and is replaced with a group of experts.
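Definition~\ref{decompo} is purely combinatorial and easy to experiment with on tiny games. The following Python sketch is an illustration only (the representation of a game by its set of winning coalitions and all function names are our own choices): it implements the composition $G\circ_g H$ literally from the definition and checks on the smallest possible instance that composing two unanimity games gives a unanimity game again.

```python
from itertools import combinations

def subsets(players):
    """All subsets of a player set, as frozensets."""
    players = list(players)
    return [frozenset(c) for r in range(len(players) + 1)
            for c in combinations(players, r)]

def k_out_of_n(players, k):
    """Winning coalitions of the game H_{n,k}: a coalition of size >= k wins."""
    return {X for X in subsets(players) if len(X) >= k}

def compose(P_G, W_G, g, P_H, W_H):
    """The composition C = G o_g H: the player g of G is replaced by
    the whole game H, per Definition (decompo)."""
    P_C = (set(P_G) - {g}) | set(P_H)
    W_C = set()
    for X in subsets(P_C):
        X_G, X_H = X & set(P_G), X & set(P_H)
        if X_G in W_G or ((X_G | {g}) in W_G and X_H in W_H):
            W_C.add(X)
    return P_C, W_C

# U_2 composed with U_2 over g should be (a copy of) U_3.
P_G, g = {'a', 'g'}, 'g'
W_G = k_out_of_n(P_G, 2)            # unanimity game on {a, g}
P_H = {'x', 'y'}
W_H = k_out_of_n(P_H, 2)            # unanimity game on {x, y}
P_C, W_C = compose(P_G, W_G, g, P_H, W_H)
assert W_C == k_out_of_n(P_C, 3)    # unanimity game on {a, x, y}
```

The exhaustive loop over all subsets is, of course, only feasible for very small games; it is meant as a sanity check on the definitions, not as an algorithm.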
\begin{definition} A game $G$ is said to be {\em indecomposable} if there do not exist two games $H$ and $K$ and a player $h\in P_H$ such that $\min(|H|,|K|)>1$ and $G\cong H\circ_h K$. Otherwise, it is called {\em decomposable}. \end{definition} \begin{example} \label{vetoers} Let $G=(P,W)$ be a simple game and $A\subseteq P$ be the set of all vetoers in this game. Let $|A|=m$. Then $G\cong U_{m+1}\circ_u G_A$, where $u$ is any player of $U_{m+1}$. So any game with vetoers is decomposable. \end{example} \begin{example} \label{passers} Let $G=(P,W)$ be a simple game and $A\subseteq P$ be the set of all passers in this game. Let $|A|=m$. Then $G\cong A_{m+1}\circ_a G_A$, where $a$ is any player of $A_{m+1}$. So any game with passers is decomposable. \end{example} Let $G=(P,W)$ and $G'=(P',W')$ be two games and let $\sigma\colon P\to P'$ be a bijection. We say that $\sigma$ is an isomorphism of $G$ and $G'$, and denote this as $G\cong G'$, if $X\in W$ if and only if $\sigma(X)\in W'$. It is easy to see that if $|H|=1$, then $ H\circ_h K\cong K$ and, if $|K|=1$, then $H\circ_h K\cong H$. \begin{proposition} \label{prop1} Let $G,H$ be two games defined on disjoint sets of players and $g\in P_G$. Then \[ W_{G\circ_g H}^\text{min}=\{X\mid X\in W_G^\text{min} \text{ and $g\notin X$}\}\cup \{X\cup Y \mid \text{$X\cup \{g\}\in W_G^\text{min}$ and $Y\in W_H^\text{min}$ with $g\notin X$}\}. \] \end{proposition} \begin{proof} Follows directly from the definition. \end{proof} \begin{proposition} Let $G,H,K$ be three games defined on disjoint sets of players and $g\in P_G$, $h\in P_H$. Then \[ (G\circ_g H)\circ_h K \cong G\circ_g (H\circ_h K), \] that is, the two compositions are isomorphic. \end{proposition} \begin{proof} Let us classify the minimal winning coalitions of the game $(G\circ_g H)\circ_h K$.
By Proposition~\ref{prop1} they can be of the following types: \begin{itemize} \item $X\in W_G^\text{min}$ with $g\notin X$; \item $X\cup Y$, where $X\cup \{g\}\in W_G^\text{min}$ and $Y\in W_H^\text{min}$ with $g\notin X$ and $h\notin Y$; \item $X\cup Y\cup Z$, where $X\cup \{g\}\in W_G^\text{min}$, $Y\cup \{h\}\in W_H^\text{min}$ and $Z\in W_K^\text{min}$ with $g\notin X$ and $h\notin Y$. \end{itemize} It is easy to see that the game $G\circ_g (H\circ_h K)$ has exactly the same minimal winning coalitions. \end{proof} \begin{proposition} Let $G,H$ be two games defined on disjoint sets of players. Then $G\circ_g H$ has no dummies if and only if both $G$ and $H$ have no dummies. \end{proposition} \begin{proof} Straightforward. \end{proof} \section{Decompositions of Weighted Games and Ideal Games} The following result was proved in \cite{beimel:360} and is the basis for the description developed in this paper. \begin{proposition} \label{splitprop} Let $C=G\circ_g H$ be a decomposition of a game $C$ into two games $G$ and $H$ over an element $g\in P_G$ which is not a dummy. Then $C$ is ideal if and only if $G$ and $H$ are also ideal. \end{proposition} Suppose we have a class of games ${\cal C}$ such that if a composition $G\circ_g H$ belongs to $\cal C$, then both $G$ and $H$ belong to $\cal C$. This proposition means that in any class of games $\cal C$ with the above property we may represent any ideal game as a composition of indecomposable ideal games also belonging to $\cal C$. As the following lemma shows, the class of weighted games satisfies this property. Hence, if we would like to describe the ideal games in the class of weighted games, we should look at indecomposable weighted games first. \begin{lemma} \label{splitlemma} Let $C=G\circ_g H$ be a decomposition of a game $C$ into two games $G$ and $H$ over an element $g\in P_G$ which is not a dummy. If $C$ is weighted, then $G$ and $H$ are weighted.
\end{lemma} \begin{proof} Suppose first that $C$ is weighted but $H$ is not. Then we have a certificate of nonweightedness $(\row Uj;\row Vj)$ for the game $H$. Let also $X$ be any minimal winning coalition of $G$ containing $g$ (it exists since $g$ is not a dummy). Let $X'=X\setminus \{g\}$. Then \[ (X'\cup U_1,\ldots, X'\cup U_j; X'\cup V_1,\ldots, X'\cup V_j) \] is a certificate of nonweightedness for $C$. Suppose now that $C$ is weighted but $G$ is not. Then let $(\row Xj;\row Yj)$ be a certificate of nonweightedness for $G$ and let $W$ be a fixed minimal winning coalition of $H$. Define \[ X_i'= \begin{cases} (X_i\setminus \{g\})\cup W & \text{if $g\in X_i$}\\ X_i & \text{if $g\notin X_i$} \end{cases} \] and \[ Y_i'= \begin{cases} (Y_i\setminus \{g\})\cup W & \text{if $g\in Y_i$}\\ Y_i & \text{if $g\notin Y_i$.} \end{cases} \] Then, since $|\{i\mid g\in X_i\}|=|\{i\mid g\in Y_i\}|$, the sequence \[ (X_1',\ldots, X_j'; Y_1',\ldots, Y_j') \] is a trading transform in $C$. Moreover, it is a certificate of nonweightedness for $C$, since all $X_1',\ldots, X_j'$ are winning in $C$ and all $Y_1',\ldots, Y_j'$ are losing in $C$. So both assumptions are impossible. \end{proof} \begin{corollary} Every weighted game is a composition of indecomposable weighted games.\footnote{As usual we assume that if a game $G$ is indecomposable, its decomposition into a composition of indecomposable games is $G=G$, i.e., trivial.} \end{corollary} The converse, however, is not true. As we will see in the next section, the composition $C=G\circ_g H$ of two weighted games $G$ and $H$ is seldom weighted. Thus we will pay attention to those cases where compositions are weighted. One such case, which we consider now, is when $G$ is a $k$-out-of-$n$ game. In this case all players of $G$ are equivalent, and we will often omit $g$ and write the composition as $C=G\circ H$. \begin{theorem} Let $H=H_{n,k}$ be a $k$-out-of-$n$ game and let $G$ be a weighted simple game.
Then $C=H\circ G$ is also a weighted game. \end{theorem} \begin{proof} Let $\row Xm$ be winning and $\row Ym$ be losing coalitions of $C$ such that \[ (\row Xm;\row Ym) \] is a trading transform. Without loss of generality we may assume that $\row Xm$ are minimal winning coalitions. Let $h$ be the player of $H$ over which the composition is taken and let $U_i=X_i\cap P_H$. Then either $U_i$ is winning in $H$ or $U_i\cup\{h\}$ is winning in $H$, hence $|U_i|=k$ or $|U_i|=k-1$. If for even a single $i$ we had $|U_i|=k$, then not all of the sets $\row Ym$ could be losing, since at least one of them would contain $k$ elements from $P_H$. Thus $|U_i|=k-1$ for all $i$. In this case we have $X_i=U_i\cup S_i$, where $S_i$ is winning in $G$. Let $Y_i=V_i\cup T_i$, where $V_i\subseteq P_H$ and $T_i\subseteq P_G$. Since all coalitions $\row Ym$ are losing in $C$, we get $|V_i|=k-1$, which implies that all $T_i$ are losing in $G$. But now we have obtained a trading transform $(\row Sm;\row Tm)$ in $G$ such that all $S_i$ are winning and all $T_i$ are losing. This contradicts $G$ being weighted. \end{proof} \section{Compositions of Complete Games} We will start with the following observation. It says that if $g\in P_G$ is not among the least desirable players of $G$, then the composition $G\circ_g H$ is almost never swap robust, hence almost never complete. \begin{lemma} \label{not_complete} Let $G,H$ be two games on disjoint sets of players such that $H$ is neither a unanimity game nor an anti-unanimity game. If for two elements $g,g'\in P_G$ we have $g \succ g'$ and $g'$ is not a dummy, then $G\circ_g H$ is not complete. \end{lemma} \begin{proof} As $g$ is more desirable than $g'$, there exists a coalition $X\subseteq P_G$ containing neither $g$ nor $g'$ such that $X\cup \{g\}\in W_G$ and $X\cup \{g'\}\notin W_G$. We may take $X$ to be minimal with this property; then $X\cup \{g\}$ is a minimal winning coalition of $G$. Since $g'$ is not a dummy, there exists a minimal winning coalition $Y$ containing $g'$. The coalition $Y$ may or may not contain $g$.
Firstly, assume that it does contain $g$. Since $H$ is not an oligarchy, there exist two distinct winning coalitions of $H$, say $Z_1$ and $Z_2$, where $Z_1$ can be chosen to be a minimal winning coalition. Then we can find $z\in Z_1\setminus Z_2$. The coalitions $U_1=X\cup Z_1$ and $U_2=(Y\setminus \{g\})\cup Z_2$ are winning in $G\circ_g H$, and the coalitions $V_1=(X\cup \{g'\})\cup (Z_1\setminus \{z\})$ and $V_2=(Y\setminus \{g,g'\})\cup (Z_2\cup \{z\})$ are losing in this game, since $Z_1\setminus \{z\}$ is losing in $H$ and $Y\setminus \{g'\} = (Y\setminus \{g,g'\}) \cup \{g\}$ is losing in $G$. Since $V_1$ and $V_2$ are obtained from $U_1$ and $U_2$ by swapping the players $z$ and $g'$, the sequence of sets $ (U_1,U_2;V_1,V_2) $ is a certificate of incompleteness for $G\circ_g H$. Suppose now $Y$ does not contain $g$. Let $Z$ be any minimal winning coalition of $H$ that has more than one player (it exists since $H$ is not an anti-oligarchy), and let $z\in Z$. Then \[ (X\cup Z, Y\, ;\, (X\cup\{g'\})\cup (Z\setminus \{z\}), (Y\setminus \{g'\})\cup \{z\}) \] is a certificate of incompleteness for $G\circ_g H$. \end{proof} This lemma shows that if a composition $G\circ_g H$ of two weighted games is weighted, then almost always $g$ is one of the least desirable players of $G$. The converse, as we will see in Section~\ref{inde}, is not true. If we compose two weighted games over one of the weakest players of the first game, the result will always be complete but not always weighted. \begin{theorem} \label{threecases} Let $G$ and $H$ be two complete games, and let $g\in P_G$ be one of the least desirable players in $G$ but not a dummy. Then for the game $C=G\circ_gH$ \begin{enumerate} \item[(i)] for $x,y\in P_G\setminus \{g\}$ it holds that $x\succeq_Gy$ if and only if $x\succeq_Cy$. Moreover, $x\succ_Gy$ if and only if $x\succ_Cy$; \item[(ii)] for $x,y\in P_H$ it holds that $x\succeq_Hy$ if and only if $x\succeq_Cy$.
Moreover, $x\succ_Hy$ if and only if $x\succ_Cy$; \item[(iii)] for $x\in P_G\setminus \{g\}$ and $y\in P_H$ it holds that $x\succeq_Cy$; if, moreover, $y$ is neither a passer nor a vetoer in $H$, then $x\succ_Cy$. \end{enumerate} In particular, $C$ is complete. \end{theorem} \begin{proof} (i) Suppose $x\succeq_Gy$ but not $x\succeq_Cy$. Then there exists $Z\subseteq P_C$ such that $Z\cup \{y\}\in W_C$ but $Z\cup \{x\}\notin W_C$. We can take $Z$ minimal with this property. Consider $Z'=Z\cap P_G$. Then either $Z'\cup \{y\}$ is winning in $G$, or else $Z'\cup \{y\}$ is losing in $G$ but $Z'\cup \{y\}\cup \{g\}$ is winning in $G$; in the latter case $Z\cap P_H\in W_H$. In the first case, since $x \succeq_G y$, we also have $Z'\cup \{x\}\in W_G$, which contradicts $Z\cup \{x\}\notin W_C$. Similarly, in the second case we have $Z'\cup \{x\}\cup \{g\}\in W_G$ and, since $Z\cap P_H\in W_H$, this also contradicts $Z\cup \{x\}\notin W_C$. Hence $x\succeq_Cy$.\par If $x\succ_Gy$, then there exists $S\subseteq P_G$ such that $S\cap \{x,y\}=\emptyset$ and $S\cup \{x\} \in W_G$ but $S\cup \{y\} \notin W_G$. We may assume $S$ is minimal with this property. If $S$ does not contain $g$, then $S\cup \{x\}$ is also winning in $C$ while $S\cup \{y\}$ is losing, so $x\succ_Cy$ and we are done. If $S$ contains $g$, then consider any winning coalition $K$ in $H$. Then $(S\setminus \{g\})\cup \{x\}\cup K$ is winning in $C$ while $(S\setminus \{g\})\cup \{y\}\cup K$ is losing in $C$. Hence $x\succ_Cy$. (ii) This case is similar to the previous one. (iii) We have $x\succeq_Gg$ since $g$ is from the least desirable class in $G$. Let us consider a coalition $Z\subseteq P_C$ such that $Z\cap \{x,y\}=\emptyset$, and suppose that $Z\cup \{y\}\in W_C$ but $Z\cup \{x\}\notin W_C$. Then $Z$ must be losing in $C$, and hence $Z\cap P_G$ cannot be winning in $G$, but $(Z\cap P_G)\cup \{g\}$ must be winning in $G$. However, since $x\succeq_Gg$, the coalition $(Z\cap P_G)\cup \{x\}$ is also winning in $G$. But then $Z\cup \{x\}$ is winning in $C$, a contradiction.
This shows that if $Z \cup \{y\}$ is winning in $C$, then $Z \cup \{x\}$ is also winning in $C$, meaning $x \succeq_C y$. Thus $C$ is a complete game. Moreover, suppose that $y$ is neither a passer nor a vetoer in $H$; we will show that $x \succ_C y$. Since $g$ is not a dummy, $x$ is not a dummy either. Let $X$ be a minimal winning coalition of $G$ containing $x$. If $g\notin X$, then $X$ is also winning in $C$. However, $(X\setminus \{x\})\cup \{y\}$ is losing in $C$, since $y$ is not a passer in $H$. Thus it is not true that $y \succeq_C x$ in this case. If $g\in X$, then consider a winning coalition $Y$ in $H$ not containing $y$ (this is possible since $y$ is not a vetoer in $H$). Then $(X\setminus \{g\})\cup Y\in W_C$ but \[ (X\setminus \{x,g\})\cup \{y\}\cup Y\notin W_C, \] whence it is not true that $y \succeq_C x$ in this case either. Thus $x \succ_C y$ whenever $y$ is neither a passer nor a vetoer in $H$. \end{proof} \section{Indecomposable Onepartite Games and Uniqueness of Some Decompositions} \begin{theorem} A game $H_{n,k}$ with $k\ne 1$ and $k\ne n$ is indecomposable. \end{theorem} \begin{proof} Suppose $H_{n,k}$ is decomposable as $H_{n,k} = K \circ_g L$, where $K=(P_K,W_K)$ and $L=(P_L,W_L)$ with $n_1=|P_K| \ge 2$ and $n_2=|P_L|\ge 2$. If $g$ is a passer in $K$, then it is the only passer: if there were another passer $g'$ in $K$, then $\{g'\}$ would be winning in the composition, contradicting $k \ne 1$. \par We will first show that $n_2 < k$. Suppose that $n_2 \geq k$, and choose a player $h \in P_K$ different from $g$. Consider a coalition $X$ containing $k$ players from $P_L$. Then $X$ is winning in the composition, which implies that $g$ is a passer in $K$ and that $X$ is winning in $L$; moreover, $X$ is a minimal winning coalition of $L$, since every proper subset of $X$ has fewer than $k$ players and is therefore losing in the composition. Now replace a player $x\in X$ with $h$. The resulting coalition, although it has $k$ players, is losing in the composition, because $h$ is not a passer in $K$ and the remaining $k-1$ players from $P_L$ form a losing coalition in $L$. Therefore $k > n_2$.
\par We also have $|P_K \setminus \{g\}|=n-n_2 > k -n_2>0$. Let us choose any coalition $Z\subseteq P_K \setminus \{g\}$ with $k - n_2$ players. Note that $Z$ does not win even with $g$, as $|Z \cup \{g\}|= k - n_2 + 1 < k$. This is why $Z \cup P_L$ is also losing in the composition despite having $k$ players in total, a contradiction. \end{proof} If the first component of the composition is a $k$-out-of-$n$ game, the decomposition is unique. \begin{theorem} \label{uniH} Let $H_{n_1,k_1}$ and $H_{n_2,k_2}$ be two $k$-out-of-$n$ games which are not unanimity games. If $G=H_{n_1,k_1}\circ G_1= H_{n_2,k_2}\circ G_2$, with $G_1$ and $G_2$ having no passers, then $n_1=n_2$, $k_1=k_2$ and $G_1= G_2$. If $G=U_{n_1}\circ G_1=U_{n_2}\circ G_2$ and $G_1$ and $G_2$ do not have vetoers, then $n_1=n_2$ and $G_1=G_2$. \end{theorem} \begin{proof} Suppose that we know that $G=H\circ G_1$, where $H$ is a $k$-out-of-$n$ game but not a unanimity game. Then all winning coalitions in $G$ of smallest cardinality have $k$ players, so $k$ can be recovered unambiguously. If $G_1$ does not have passers, then $n$ can also be recovered, since the set of all players that participate in winning coalitions of size $k$ has cardinality $n-1$. So there cannot exist two decompositions $G=H_{n_1,k_1}\circ G_1$ and $G=H_{n_2,k_2}\circ G_2$ of $G$ with $k_1\ne k_2$, where $k_1\ne n_1$ and $k_2\ne n_2$. Let us now consider the game $G=U\circ G_1$, where $U$ is a unanimity game. By Example~\ref{vetoers}, if $G_1$ does not have vetoers, then $U$ consists of all vetoers of $G$ and is uniquely recoverable. \end{proof} \section{Indecomposable Ideal Weighted Simple Games} \label{inde} The following theorem was proved in~\cite[p.234]{padro:2010} and will be of major importance in this paper.
\begin{theorem}[Farr\`{a}s-Padr\'{o}, 2010] \label{FP2010} Any indecomposable ideal weighted simple game belongs to one of the following seven types: \begin{description} \item[{\bf H}:] Simple majority or $k$-out-of-$n$ games. \item[\textbf{B$_1$}:] Hierarchical conjunctive games $H_\forall({\bf n},{\bf k})$ with $\textbf{n}=(n_1,n_2)$, $\textbf{k} = (k_1,k_2)$, where $k_1 < n_1$ and $k_2 - k_1 = n_2 - 1 > 0$. Such a game has a unique shift-minimal winning coalition $\{1^{k_1},2^{k_2-k_1}\}$. \item[\textbf{B$_2$}:] Hierarchical disjunctive games $H_\exists({\bf n},{\bf k})$ with $\textbf{n}=(n_1,n_2), \textbf{k}=(k_1,k_2)$, where $1 < k_1 \leq n_1$, $k_2 \leq n_2$, and $k_2=k_1+1$. The shift-minimal winning coalitions have the forms $\{1^{k_1}\}$ and $\{2^{k_2}\}$. \label{p_list_1} \item[\textbf{B$_3$}:] Hierarchical disjunctive games $H_\exists({\bf n},{\bf k})$ with $ \textbf{n}=(n_1,n_2), \textbf{k}=(k_1,k_2)$, where $k_1 \leq n_1$, $k_2 > n_2 > 2$ and $k_2=k_1+1$. The shift-minimal winning coalitions have the forms $\{1^{k_1}\}$ and $\{1^{k_2-n_2},2^{n_2}\}$. \item[\textbf{T$_{1}$}:] Tripartite games $\Delta_1({\bf n},{\bf k})$ with $k_1> 1$, $k_2 < n_2$, $k_3=k_1+1 $ and $n_3= k_3-k_2+1 > 2$. Such a game has two types of shift-minimal winning coalitions: $\{1^{k_1}\}$ and $\{2^{k_2},3^{k_3-k_2}\}$. It follows from \eqref{delta_cond_1} that $k_1\le n_1$ and $k_3-k_2\le n_3$. \item[\textbf{T$_{2}$}:] Tripartite games $\Delta_1({\bf n},{\bf k})$ with $n_3= k_3-k_2+1 > 2$ and $k_3=k_1+1$. Such a game has two types of shift-minimal winning coalitions: $\{1^{k_1}\}$ and $\{1^{k_2-n_2},2^{n_2},3^{k_3-k_2}\}$. It follows from \eqref{delta_cond_1} that $k_1\le n_1$, $k_2-n_2\le k_1$, and $k_3-k_2\le n_3$. \item[\textbf{T$_{3}$}:] Tripartite games $\Delta_2({\bf n},{\bf k})$ with $k_3-k_1 = n_2+n_3-1$, $k_3=k_2+ 1$, $k_2-n_2 > k_1$ and $n_3 > 1$.
It has two types of shift-minimal winning coalitions, $\{1^{k_2-{n_2}},2^{n_2}\}$ and $\{1^{k_1},2^{k_3-k_1-n_3},3^{n_3}\}$ (the case when $k_3-k_1=n_3$ and $n_2=1$ is not excluded). It follows from \eqref{delta_cond_2} that $k_1\le n_1$, $k_2-n_2\le n_1$, and $k_3-k_1-n_3< n_2$. \end{description} \end{theorem} \noindent Farr\`{a}s and Padr\'{o} \citeyear{FarrasP12} later wrote these families more compactly but equivalently; however, we found it more convenient to use their earlier classification. The list above contains some decomposable games, as we will now show. \begin{proposition} \label{Prop_B1} The game of type ${{\bf B}_1}$ with $k_2 - k_1 = n_2 - 1 =1$ is decomposable. \end{proposition} \begin{proof} Assume $k_2 - k_1 = n_2 - 1 = 1$, so that $n_2=2$ and $k_2=k_1+1$. Then $\textbf{k} = (k_1,k_1+1)$, $\textbf{n}=(n_1,2)$, and the only shift-minimal winning coalition is $\{1^{k_1},2\}$. Let the first game $G=(P_G,W_G)$ be 1-partite with $P_G=\{1^{n_1+1}\}$ and the unique minimal winning coalition $\{1^{k_1+1}\}$, and let the second game be $H=(P_H,W_H)$ with $P_H=\{2^{2}\}$ and minimal winning coalitions of type $\{2\}$. Then the composition $G \circ_1 H$ over a player $1 \in P_G$ has two types of minimal winning coalitions, $\{1^{k_1+1}\}$ and $\{1^{k_1},2\}$, of which only $\{1^{k_1},2\}$ is shift-minimal. Hence the composition is the given game of type ${{\bf B}_1}$, which is therefore decomposable. \end{proof} \begin{proposition} \label{Prop_UA} The unanimity game $U_n$ and the anti-unanimity game $A_n$ for $n>2$ are decomposable; $U_2$ and $A_2$ are indecomposable. \end{proposition} \begin{proof} We note that \[ U_n\circ_u U_m\cong U_{n+m-1} \] for any $u\in P_{U_n}$. In particular, the only indecomposable unanimity game is $U_2$. Similarly, \[ A_n\circ_a A_m\cong A_{n+m-1} \] for any $a\in P_{A_n}$, so the only indecomposable anti-unanimity game is $A_2$. \end{proof} \begin{proposition} \label{Prop_T2} All games of type ${\bf T}_2$ are decomposable.
\end{proposition} \begin{proof} Let $\Delta=\Delta_1({\bf n},{\bf k})$ be of type ${\bf T}_2$. Then we have the following decomposition for it. The first game $G=(P_G,W_G)$ is bipartite, with the multiset representation $\{1^{n_1},2^{n_2+1}\}$ and shift-minimal winning coalitions of types $\{1^{k_1}\}$ and $ \{1^{k_2-n_2},2^{n_2+1}\}$. The second game $H=(P_H,W_H)$ is the $(k_3-k_2)$-out-of-$n_3$ game, with the multiset representation $P_H = \{3^{n_3}\}$ and shift-minimal winning coalitions of type $\{3^{k_3-k_2}\}$. The composition is over a player $p \in P_G$ from level $2$. Then $G \circ_p H$ has shift-minimal winning coalitions of types $\{1^{k_1}\}$ and $\{1^{k_2-n_2},2^{n_2},3^{k_3-k_2}\}$, hence is exactly $\Delta$. \end{proof} We now refine the classes ${\bf H}$ and ${\bf B}_1$ as follows: \begin{description} \item[\textbf{H}:] Games of this type are $A_2$, $U_2$ and $H_{n,k}$, where $1 < k < n$. \item[\textbf{B$_1$}:] Hierarchical conjunctive games $H_\forall({\bf n},{\bf k})$ with $\textbf{n}=(n_1,n_2)$, $\textbf{k} = (k_1,k_2)$, where $k_1 < n_1$ and $k_2 - k_1 = n_2 - 1 > 1$. \end{description} The following refinement of Theorem~\ref{FP2010} is now an if-and-only-if statement. \begin{theorem} \label{list_all} A game is ideal, weighted and indecomposable if and only if it belongs to one of the following types: ${\bf H}, {\bf B}_1, {\bf B}_2, {\bf B}_3, {\bf T}_1, {\bf T}_3$. \end{theorem} \begin{proof} Due to Theorem~\ref{FP2010} and Propositions~\ref{Prop_B1}--\ref{Prop_T2}, all that remains is to show that the remaining cases are indecomposable. We leave this routine work to the reader. \end{proof} Let us compare this theorem with Theorem~\ref{FP2010}: we narrowed the class ${\bf H}$, excluded the case $n_2=2$ in ${\bf B}_1$ and removed the class ${\bf T}_2$.
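Whether a given small game is weighted can also be tested mechanically. The sketch below is illustrative only (the function names, the integer-weight search and its bound are our own choices, not part of the formal development): it brute-forces small integer weights and confirms that the $2$-out-of-$3$ game $H_{3,2}$ is weighted, while a standard textbook nonweighted example, the game with minimal winning coalitions $\{a,b\}$ and $\{c,d\}$, is not.

```python
from itertools import combinations, product

def subsets(players):
    players = list(players)
    return [frozenset(c) for r in range(len(players) + 1)
            for c in combinations(players, r)]

def is_weighted(P, W, max_w=4):
    """Search integer weights in [0, max_w] per player; the game is weighted
    iff some weighting separates every winning coalition from every losing
    one (any quota strictly between the two then works). Exponential brute
    force, sound but bounded -- only meant for tiny examples."""
    players = sorted(P)
    losing = [X for X in subsets(P) if X not in W]
    for ws in product(range(max_w + 1), repeat=len(players)):
        wt = dict(zip(players, ws))
        min_win = min(sum(wt[p] for p in X) for X in W)
        max_lose = max((sum(wt[p] for p in X) for X in losing), default=-1)
        if min_win > max_lose:
            return True
    return False

# H_{3,2}: weighted, e.g. weights (1,1,1) with quota 2.
P1 = {'a', 'b', 'c'}
W1 = {X for X in subsets(P1) if len(X) >= 2}
assert is_weighted(P1, W1)

# Minimal winning coalitions {a,b} and {c,d}: not weighted, as witnessed
# by the trading transform ({a,b},{c,d}; {a,c},{b,d}).
P2 = {'a', 'b', 'c', 'd'}
mins = [{'a', 'b'}, {'c', 'd'}]
W2 = {X for X in subsets(P2) if any(m <= X for m in mins)}
assert not is_weighted(P2, W2)
```

The second assertion succeeds for any weight bound: a separating weighting would require both $w_b>w_c$ and $w_c>w_b$ at once.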
\section{Compositions of Ideal Weighted Indecomposable Games} Suppose from now on that we have a composition $G = G_1 \circ_g G_2$, where both $G_1$ and $G_2$ are ideal and weighted, and $G_1$ is indecomposable. The plan now is to fix $G_1$ and analyse what happens when we compose it with an arbitrary ideal weighted game $G_2$. Since $G_1$ is ideal, weighted and indecomposable, it belongs to one of the six types of games listed in Theorem~\ref{list_all}. So we carry out the analysis case by case for all possibilities of $G_1$. \par The key result that will lead us to the main theorem of this paper is the following. \begin{theorem} \label{when} Let $G$ be a game with no dummies which has a nontrivial decomposition $G= G_1 \circ_g G_2$ such that $G_1$ and $G_2$ are both ideal and weighted, and $G_1$ is indecomposable. Then $G$ is ideal weighted if and only if either \begin{itemize} \item[(i)] $G_1$ is of type $\textbf{H}$, or \item[(ii)] $G_1$ is of type \textbf{B}$_2$ and $G_2$ is $A_n$, and the composition is over a player $g$ of level $2$ of~$G_1$. \end{itemize} \end{theorem} We will prove it in several steps. Firstly, we will consider all cases where $g$ is from the least desirable level of $G_1$. Secondly, in the Appendix, we will deal with the hypothetical cases where $g$ is not from the least desirable level. This is necessary because, unfortunately, Lemma~\ref{not_complete} still leaves open the possibility that for some special cases of $G_2$ the decomposition may be over a player $g$ which is not from the least desirable level of $G_1$. \subsection{The two weighted cases} \label{1A} \begin{proposition} \label{w_1} If $G_1=(P_1,W_1)$ is of type $\textbf{H}$, say $G_1=H_{n,k}$, and $G_2=(P_2,W_2)$ is weighted, then $G = G_1 \circ_{g} G_2$ is weighted. \end{proposition} \begin{proof} Assume the contrary. Then $G$ has a certificate of nonweightedness \[ (X_1,\ldots,X_m; Y_1,\ldots,Y_m), \] where $\row Xm$ are minimal winning coalitions and $\row Ym$ are losing coalitions of $G$.
Let $U_i = X_i \cap P_1$; then either $|U_i| = k$ or $|U_i| = k-1$. However, if for even a single $i$ we have $|U_i| = k$, then not all of the sets $\row Ym$ can be losing, as at least one of them would contain at least $k$ elements of $P_1$. Thus $|U_i| = k-1$ for all $i$. In this case we have $X_i = U_i \cup S_i$, where $S_i$ is winning in $G_2$. Let $Y_i = V_i \cup T_i$, where $V_i \subseteq P_1$ and $T_i \subseteq P_2$. We must have $|V_i| = k-1$ for all $i$: each $|V_i|\le k-1$ since $Y_i$ is losing, and the trading transform preserves the total number of players from $P_1$. Since all coalitions $\row Ym$ are losing in $G$, all $T_i$ are then losing in $G_2$. But now we have obtained a trading transform $(S_1,\ldots,S_m; T_1,\ldots,T_m)$ for $G_2$ such that all $S_i$ are winning and all $T_i$ are losing in $G_2$, i.e., a certificate of nonweightedness for $G_2$. This contradicts the fact that $G_2$ is weighted. \end{proof} \begin{proposition} \label{g_2} Let $G_1=(P_1,W_1)$ be a weighted simple game of type \textbf{B}$_2$, let $g$ be a player from level $2$ of $G_1$, and let $G_2$ be $A_n$. Then $G = G_1 \circ_{g} G_2$ is a weighted simple game. \end{proposition} \begin{proof} Since $g$ is a player from level $2$ of $G_1$, the game $G$ is complete by Theorem~\ref{threecases}. Also, recall that the shift-minimal winning coalitions of a game of type \textbf{B}$_2$ are $\{1^{k_1}\}$ and $\{2^{k_1+1}\}$. We shall prove weightedness of $G$ by showing that it cannot have a certificate of nonweightedness. In multiset notation, $G$ has the following shift-minimal winning coalitions: $\{1^{k_1}\}$ and $\{2^{k_1},3\}$. So all shift-minimal winning coalitions have $k_1$ players from $P_1 \setminus \{g\}$. Also, since $G_1$ has two thresholds $k_1$ and $k_2$ with $k_2=k_1+1$, any coalition containing more than $k_1$ players from $P_1 \setminus \{g\}$ is winning in $G_1$, and hence winning in $G$.
Suppose now, towards a contradiction, that $G$ has the following certificate of nonweightedness \begin{equation} \label{baba} (X_1,\ldots,X_n;Y_1,\ldots,Y_n), \end{equation} where $X_1,\ldots,X_n$ are shift-minimal winning coalitions and $Y_1,\ldots,Y_n$ are losing coalitions in $G$. Let the set of players of $A_n$ be $P_{A_n}$. It is easy to see that at least one of the coalitions $X_1,\ldots,X_n$ in~(\ref{baba}) is not of the type $\{1^{k_1}\}$, so at least one of these winning coalitions has a player from the third level, i.e., from $A_n$. But since each shift-minimal winning coalition in~(\ref{baba}) has exactly $k_1$ players from $P_1 \setminus \{g\}$, each losing coalition $Y_1,\ldots,Y_n$ in~(\ref{baba}) also has exactly $k_1$ players from $P_1 \setminus \{g\}$ (if it had more than $k_1$, it would be winning). Moreover, at least one coalition from $Y_1,\ldots,Y_n$, say $Y_1$, has at least one player from $P_{A_n}$. It follows that $(Y_1 \cap P_{1}) \cup \{g\} \in W_1$ and $Y_1 \cap P_{A_n}$ is winning in $A_n$. Hence $Y_1$ is winning in $G$, a contradiction. Therefore no such certificate can exist. \end{proof} In the next subsection we analyse the remaining compositions $G = G_1 \circ G_2$ in terms of $G_1$, where the composition is over a player from the least desirable level of $G_1$. We will show that none of them is weighted. \subsection{All other compositions are nonweighted} \label{1C} Here we will consider two cases: \begin{enumerate} \item $G_2$ has at least one minimal winning coalition of cardinality at least $2$. \item $G_2 = A_n$, where $n \geq 2$. \end{enumerate} We will start with the following general statement, which will help us to resolve the first case. \begin{definition} Let $G=(P,W)$ be a simple game and $g\in P$. We say that a coalition $X$ is $g$-winning if $g\notin X$ and $X\cup \{g\}\in W$. \end{definition} Every winning coalition not containing $g$ is of course $g$-winning, but not the other way around.
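The certificates of nonweightedness used throughout these proofs are finite objects and can be verified mechanically. The following Python sketch is an illustration only (the function names are ours): it checks that a pair of coalition sequences forms a trading transform and that it is a certificate of nonweightedness for a given game, tested on the standard nonweighted example with minimal winning coalitions $\{a,b\}$ and $\{c,d\}$.

```python
from collections import Counter
from itertools import combinations

def is_trading_transform(Xs, Ys):
    """(X_1,...,X_j; Y_1,...,Y_j): the same players occur with the same
    multiplicities on both sides of the sequence."""
    count = lambda sets: Counter(p for S in sets for p in S)
    return len(Xs) == len(Ys) and count(Xs) == count(Ys)

def is_certificate_of_nonweightedness(W, Xs, Ys):
    """A trading transform in which every X_i is winning and every Y_i
    is losing; its existence rules out any system of weights."""
    return (is_trading_transform(Xs, Ys)
            and all(frozenset(X) in W for X in Xs)
            and all(frozenset(Y) not in W for Y in Ys))

# Game on {a,b,c,d} with minimal winning coalitions {a,b} and {c,d}.
P = {'a', 'b', 'c', 'd'}
mins = [{'a', 'b'}, {'c', 'd'}]
all_coalitions = [frozenset(c) for r in range(len(P) + 1)
                  for c in combinations(sorted(P), r)]
W = {X for X in all_coalitions if any(m <= X for m in mins)}

Xs = [{'a', 'b'}, {'c', 'd'}]   # winning
Ys = [{'a', 'c'}, {'b', 'd'}]   # losing: obtained by trading b and c
assert is_certificate_of_nonweightedness(W, Xs, Ys)
```

The same checker applies verbatim to the multiset certificates in the surrounding lemmas once the multiset coalitions are expanded into concrete player sets.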
\begin{lemma} \label{keylemma} Let $G$ be a game for which there exist coalitions $X_1,X_2,Y_1,Y_2$ such that both $X_1$ and $X_2$ do not contain $g$, \begin{equation} \label{X1-Y2} (X_1,X_2\, ;\, Y_1,Y_2) \end{equation} is a trading transform, $X_1$ is winning, $X_2$ is $g$-winning, and $Y_1$ and $Y_2$ are losing in $G$. Let also $H$ be a game with a minimal winning coalition $U$ which has at least two elements. Then $C=G\circ_gH$ is not weighted. \end{lemma} \begin{proof} If $X_2$ is winning in $G$, then there is nothing to prove, since \eqref{X1-Y2} is a certificate of nonweightedness for $C$; so suppose it is not. Let $U=U_1\cup U_2$, where $U_1$ and $U_2$ are disjoint and nonempty; both are losing in $H$ by the minimality of $U$. Then it is easy to check that \[ (X_1,X_2\cup U\, ;\, Y_1\cup U_1,Y_2\cup U_2) \] is a certificate of nonweightedness for $C$. Indeed, $X_1$ and $X_2\cup U$ are both winning in $C$, and $Y_1\cup U_1$ and $Y_2\cup U_2$ are both losing. \end{proof} The only case not covered by this lemma is when every minimal winning coalition of $H$ is a singleton, i.e., when $H$ consists of passers and dummies. We will have to consider this case separately. \begin{lemma} If $G$ is of type ${\bf B_1}$, ${\bf B_2}$ or ${\bf B_3}$, $g$ is any element of level 2, and $H$ has a minimal winning coalition $X$ which has at least two elements, then $G\circ_gH$ is not weighted. \end{lemma} \begin{proof} Suppose $G$ is of type ${\bf B_1}$ and consider the following trading transform \[ (\{1^{k_1},2^{k_2-k_1}\}, \{1^{k_1},2^{k_2-k_1-1}\}\,;\, \{1^{k_1-1},2^{k_2-k_1+1}\}, \{1^{k_1+1},2^{k_2-k_1-2}\}) \] (note that $k_2-k_1+1=n_2$ and $k_1+1\le n_1$, so there is enough capacity in both equivalence classes to make all coalitions involved legitimate). It is easy to check that the first coalition in this sequence is winning, the second is $g$-winning and the remaining two are losing. By Lemma~\ref{keylemma} the result holds. Suppose now $G$ is of type ${\bf B_2}$; then $k_2=k_1+1\le n_2$. Let $k=k_1$.
Then we can apply Lemma~\ref{keylemma} to the trading transform \[ (\{1^k\},\{2^k\}\, ;\, \{1^{\lfloor \frac{k}{2}\rfloor}, 2^{\lceil\frac{k}{2}\rceil}\}, \{1^{\lceil \frac{k}{2}\rceil}, 2^{\lfloor \frac{k}{2}\rfloor}\}), \] where $\{1^k\}$ is winning, $\{2^k\}$ is $g$-winning and the remaining two coalitions are losing. If $G$ is of type ${\bf B_3}$, then $n_2<k_2=k_1+1$. We again let $k=k_1$. In this case we can apply Lemma~\ref{keylemma} to the trading transform \[ (\{1^k\}, \{1^{k-2},2^2\}\, ; \, \{1^{k-1},2\}, \{1^{k-1},2\}), \] where the first coalition is winning, the second is $g$-winning (we use $n_2\ge 3$ here) and the two remaining coalitions are losing. \end{proof} \begin{lemma} If $G$ is of type ${\bf T}_1$ or ${\bf T}_{3}$, $g$ is any element of level 3, and $H$ has a minimal winning coalition $X$ which has at least two elements, then $C=G\circ_gH$ is not weighted. \end{lemma} \begin{proof} Suppose first that $G$ is of type ${\bf T}_1$ and consider the following trading transform \[ (\{1^{k_1}\}, \{2^{k_2},3^{k_3-k_2-1}\}\, ;\, \{1^{k_1-1}, 2\}, \{1, 2^{k_2-1},3^{k_3-k_2-1}\}). \] Lemma~\ref{keylemma} is applicable to it, so $C$ is not weighted. Suppose now that $G$ is of type ${\bf T}_{3}$ and consider the following trading transform \[ (\{1^{k_2-n_2},2^{n_2}\}, \{1^{k_1},2^{n_2-1},3^{n_3-1}\}\, ;\, \{1^{k_2-n_2},2^{n_2-1},3\}, \{1^{k_1},2^{n_2},3^{n_3-2}\}). \] Since $n_3>1$, all the coalitions involved exist. Lemma~\ref{keylemma} is now applicable and shows that $C$ is not weighted. This proves the lemma. \end{proof} We will now deal with the second case. Denote the set of players of $A_n$ by $P_{A_n}$. \begin{proposition} \label{an} Let $G_1$ be an ideal weighted indecomposable simple game of type ${\bf B}_1$, ${\bf B}_3$, ${\bf T}_1$ or ${\bf T}_3$, and let $g$ be a player from the least desirable level of $G_1$. Then $G=G_1 \circ_g A_n$ is not weighted. \end{proposition} \begin{proof} Let $G_1$ be of type \textbf{B$_1$}.
The only shift-minimal winning coalition of $G_1$ is of the form $\{1^{k_1},2^{k_2-k_1}\}$, where $n_1>k_1>0$, $k_2-k_1=n_2-1>1$. Composing over a player of level $2$ of $G_1$ gives shift-minimal winning coalitions of types $\{1^{k_1},2^{k_2-k_1}\}$ and $\{1^{k_1}, 2^{k_2-k_1-1}, 3\}$. Thus the game is not weighted due to the following certificate of nonweightedness: \[ (\{1^{k_1}, 2^{k_2-k_1}\}, \{1^{k_1}, 2^{k_2-k_1-1}, 3\}; \{1^{k_1-1}, 2^{k_2-k_1+1}, 3\}, \{1^{k_1+1}, 2^{k_2-k_1-2}\}). \] Since in a game of type \textbf{B$_1$} we have $k_2-k_1+1=n_2$ and $k_1+1\le n_1$, all the coalitions in this trading transform exist. \par Now consider \textbf{B$_3$}. Its shift-minimal winning coalitions have types $\{1^{k_1}\}, \{1^{k_2-n_2},2^{n_2}\}$. Composing over a player of level $2$ of $G_1$ gives the following types of winning coalitions in $G$: $\{1^{k_1}\}$, $\{1^{k_2-n_2},2^{n_2-1}, 3\}$. The game is not weighted due to the following certificate of nonweightedness: \[ (\{1^{k_2-n_2},2^{n_2-1}, 3\}, \{1^{k_2-n_2},2^{n_2-1}, 3\}; \{1^{k_2-n_2+1},2^{n_2-2}\}, \{1^{k_2-n_2-1},2^{n_2},3^2\}). \] Note that $k_2-n_2+1<k_1 \leq n_1$ and $ n_2 > 2$ in \textbf{B$_3$}, so all the coalitions in this transform exist. \par Now consider \textbf{T$_1$}. Since its levels 2 and 3 form a subgame of type \textbf{B$_1$}, composing it with $A_n$ over a player of level 3 will, as was proved above, result in a nonweighted game.\par Let us consider ${\bf T}_{3}$, whose shift-minimal winning coalitions are $\{1^{k_2-n_2},2^{n_2}\}$, $\{1^{k_1},2^{k_3-k_1-n_3},3^{n_3}\}$. If we compose over a player of level $3$ of $G_1$, then the resulting game will have shift-minimal coalitions of the following type: $\{1^{k_1},2^{k_3-k_1-n_3},3^{n_3-1},4\}$, where now elements of $G_2=A_n$ will form level 4.
Then we can show that the composition $G_1\circ G_2$ is not weighted due to the following certificate of nonweightedness: \[ (\{1^{k_1},2^{k_3-k_1-n_3},3^{n_3-1},4\},\{1^{k_1},2^{k_3-k_1-n_3},3^{n_3-1},4\}; \] \[ \{1^{k_1+1},2^{k_3-k_1-n_3},3^{n_3-2}\}, \{1^{k_1-1},2^{k_3-k_1-n_3},3^{n_3},4^2\}). \] The coalition $\{1^{k_1+1},2^{k_3-k_1-n_3},3^{n_3-2}\}$ is losing because in ${\bf T}_{3}$ we have $k_3-k_1-n_3=n_2-1$ and also $k_2-n_2 > k_1$, meaning $(k_1+1)+(k_3-k_1-n_3)=k_1+1+n_2-1 \leq k_2-n_2+n_2-1 = k_2-1$. Also, in total it contains fewer than $k_3$ elements. The coalition $ \{1^{k_1-1},2^{k_3-k_1-n_3},3^{n_3},4^2\}$ is easily seen to be losing as well. Now all that remains for the proof of Theorem~\ref{when} is to consider the cases when $g$ is not from the least desirable level of $G_1$, which may happen only when it is of type ${\bf T}_{1}$ or ${\bf T}_{3}$. These cases are similar to those that have already been considered and we delegate them to the Appendix. \end{proof} \section{The Main Theorem} All previous results combined give us the main theorem: \begin{theorem} \label{pad2} $G$ is an ideal weighted simple game if and only if it is a composition \begin{equation} \label{magic} G = H_1 \circ \ldots \circ H_s \circ I \circ_g A_{n} \ \ (s \geq 0), \end{equation} where $H_i$ is an indecomposable game of type \textbf{H} for each $i=1,\ldots,s$. Also, $I$, which is allowed to be absent, is an indecomposable game of one of the types \textbf{B$_1$}, \textbf{B$_2$}, \textbf{B$_3$}, \textbf{T$_1$} and \textbf{T$_{3}$}, and $A_{n}$ is the anti-unanimity game on $n$ players. Moreover, $A_n$ can be present only if $I$ is either absent or of type \textbf{B$_2$}; in the latter case the composition $I \circ A_n$ is over a player $g$ of the least desirable level of $I$. Also, the above decomposition is unique. \end{theorem} \begin{proof} The following proposition will be useful to show the uniqueness of the decomposition of an ideal weighted game.
\begin{proposition} \label{uniuni} Let $H$ be a game of type \textbf{H}, $B$ be a game of type ${\bf B}_2$ with $b$ being a player from level $2$ of $B$, $G$ be an ideal weighted simple game, and $A_n$ be an anti-unanimity game. Then $H \circ G \ncong B \circ_b A_n$. \end{proposition} \begin{proof} We note that by Theorem~\ref{threecases} both compositions are complete. Recall that isomorphisms preserve Isbell's desirability relation \cite{CF:j:complete}. An isomorphism preserves completeness and maps shift-minimal winning coalitions of a complete game onto shift-minimal winning coalitions of another game. Let $H=H_{k,n}$. Consider first the composition $H \circ G$. Any minimal winning coalition in this composition will have either $k$ or $k-1$ players from the most desirable level. \par Now consider $B \circ_b A_n$. If the two types of shift-minimal winning coalitions of $B$ are of the forms $\{1^\ell \}$ and $\{2^{\ell +1}\}$, then there will be a minimal winning coalition in $B \circ_b A_n$ which has $\ell $ players from the second most desirable level and an element of level 3, with no players of level 1. The two games therefore cannot be isomorphic. \end{proof} \noindent {\it Proof of Theorem~\ref{pad2}.} This proof is now easy since the main work has been done in Theorem~\ref{when}. Either $G$ is decomposable or not. If it is not, then by Theorem~\ref{list_all} it is either of type ${\bf H}$ or one of the indecomposable games of types \textbf{B$_1$}, \textbf{B$_2$}, \textbf{B$_3$}, \textbf{T$_1$}, and \textbf{T$_{3}$}. So the theorem is trivially true. Suppose now that $G$ is decomposable, so $G = G_1 \circ G_2$. Then by Theorem~\ref{when} there are only two possibilities: \begin{itemize} \item[(i)] $G_1$ is of type $\textbf{H}$; \item[(ii)] $G_1$ is of type \textbf{B$_2$}, and also $G_2 = A_n$ such that the composition is over a player of level $2$ of $G_1$. \end{itemize} By Proposition~\ref{uniuni} these two cases are mutually exclusive.
Suppose we have case (i). By Theorem~\ref{uniH}, $G_1$ is uniquely defined and we can apply the induction hypothesis to $G_2$. It is also easy to see that in the second case $G_1$ and $G_2$ are uniquely defined. \end{proof} \section{Acknowledgments} The authors thank Carles Padro for a number of useful discussions. We are very grateful to Sascha Kurz for very useful feedback on an early draft of this paper. \bibliographystyle{apacite}
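The certificates of nonweightedness used throughout can be verified mechanically: one only needs to check that the $X$'s and $Y$'s form a trading transform (same players with the same multiplicities on both sides), that every $X$ is winning, and that every $Y$ is losing. A minimal sketch in Python; the helper name and the four-player toy game below are illustrative and not taken from the paper:

```python
from collections import Counter

def is_certificate(winning, Xs, Ys):
    """Check that (X_1,...,X_j ; Y_1,...,Y_j) is a certificate of
    nonweightedness: the X's and Y's use the same players with the same
    multiplicities (a trading transform), every X is winning and every Y
    is losing.  Summing any putative weights over both sides then gives
    equal totals, contradicting w(X_i) >= q > w(Y_i)."""
    if Counter(p for X in Xs for p in X) != Counter(p for Y in Ys for p in Y):
        return False
    return all(winning(X) for X in Xs) and not any(winning(Y) for Y in Ys)

# Toy game on players {1,2,3,4}: minimal winning coalitions {1,2} and {3,4}.
winning = lambda S: {1, 2} <= set(S) or {3, 4} <= set(S)

# ({1,2},{3,4} ; {1,3},{2,4}) trades players between the winning coalitions.
cert = is_certificate(winning, [(1, 2), (3, 4)], [(1, 3), (2, 4)])
# cert is True, so this toy game is not weighted.
```

The same check applies verbatim to the parametrized transforms appearing in the lemmas above once concrete values of $k_1,k_2,n_1,n_2$ are substituted.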
\section{Introduction} Though inflation is commonly invoked as the mechanism in the early universe that solves the horizon and flatness problems by exponentially diluting away any inhomogeneities, it has been argued that if it runs for long enough it actually does not produce such a flat, smooth spacetime. This is because it also generates quantum mechanical fluctuations that get stretched to superhorizon lengths, and if these accrue, the spacetime gets warped on very long length scales. The time it takes for this to occur is the Page time of de Sitter\footnote{We define the Page time of de Sitter as $t_{dS}\sim R_{dS}S_{dS}\sim M_p^2/H^3$, where $R_{dS}$ is the de Sitter radius and $S_{dS}$ is the de Sitter entropy.} in analogy with the black hole case, as has long been argued. Recently we showed that, for the de Sitter case, these changes can be encapsulated by a change in the quantum state of the system, and that after the Page time the perturbative description in the initial vacuum state breaks down, forcing the overlap between the initial and final states to go to zero \cite{1609.06318}. This elaborates on the findings of \cite{1005.1056,Giddings:2011ze,Giddings:2011zd} who, for example in \cite{1005.1056}, demonstrated that the circumsphere of de Sitter compactified on a torus undergoes a large variance due to infrared fluctuations after a similar time scale, and of \cite{Dvali:2013eja,Dvali:2014gua,Dvali:2017eba}\footnote{This time-scale also appeared in the discussion of the thermalization time scale of de Sitter in \cite{Danielsson:2003wb}.} who also identified this time scale in de Sitter as a point where the perturbative treatment appears to break down. At first this seems at odds with the fact that a physical observer still experiences a spacetime that locally resembles a flat, homogeneous, and isotropic one. Indeed, because this breakdown is a property of the global description of the state, it has been argued that the effect is unphysical.
Indeed, any mode with wavelength much larger than the system being observed can be locally gauged away, as a consequence of the equivalence principle, and at late times super-horizon modes cannot be measured within a single Hubble patch, as illustrated in Fig. \ref{pepsi}. Any observation of this effect would require a nonlocal measurement in the sense that it would need a comparison of the effect at (minimum) two very separate spacetime points. However, a single local observer would be able to register the effect if they were willing to wait for long enough. After a Page time, it would always be possible to construct a coordinate system corresponding to flat space; however, this would not coincide with the initial coordinate system. Thus, while our observer remains in flat space the entire time, the flat spatial slices are related to each other in a nontrivial way (a diffeomorphism at asymptotic future null infinity) as we illustrate in Fig. \ref{can}. \begin{centering} \begin{figure*}[h] \centering \includegraphics[width=6cm]{pepsi.png} \caption{The change in geometry from the emission of a long wavelength mode at late times. A local observer only has access to information contained within their horizon, and the effect of long modes can be gauged away locally. However, the difference between different causally separated local observers becomes large. To observe this effect, inflation needs to come to an end, and long wavelength modes need to re-enter the horizon.} \label{pepsi} \end{figure*} \end{centering} \begin{centering} \begin{figure*}[h] \centering \includegraphics[width=6cm]{can.png} \caption{The change in geometry from the emission of a long wavelength mode. A local observer only has access to information contained within their horizon, and the effect of long modes can be gauged away locally.
However, the difference between initial and final gauges can be kept track of, in which case even a physical observer would be able to measure the effect once it becomes large.} \label{can} \end{figure*} \end{centering} To make sure this represents a physical effect, we can imagine constructing a device that would be able to register this change. What we have in mind is a network of satellites, loosely tethered together to counteract the effect of expansion, initially arranged in a sphere. After a Page time, the initial sphere would be deformed into an ellipsoid of different size by an $\mathcal{O}(1)$ amount. This represents the memory effect associated with the asymptotic symmetry group transformations that have taken place, as explained in \cite{1411.5745}. We could even imagine observing this effect with a single telescope, if we were able to register the stresses induced by the long wavelength modes throughout its body. A measuring device is created in vacuum, and so it automatically gauges away every long mode that was emitted prior to that event. Therefore, only long modes emitted after that point will leave any effect on the system. This leads to our definition of a patient observer: \emph{A patient observer is an observer that has been in existence prior to the time when a given long mode of interest was created, and is equipped with some measurement device, necessarily not de Sitter invariant, to record its state for the entire duration.} The usual curvature perturbations of wavelength shorter than the soft mode are not patient observers, since their freeze-out dynamics, that is, the time at which they cross the horizon, is set by the background geometry that includes all soft modes that have been previously emitted. This is the reason why IR effects can be gauged away in the case of the usual spectrum of curvature perturbations in cosmology, and it prevents an accurate record of the history prior to their creation from being kept.
However, the arrangement of satellites mentioned above, as well as any isocurvature modes present, are examples of a patient observer. In section \ref{patient} we will discuss a third type of patient observer based on an Unruh detector. In section \ref{impatience} we discuss possible fundamental limitations to constructing patient observers in practice due to quantum mechanical effects. The breaking of de Sitter invariance by the observer plays a crucial role; without it, a patient observer would not be able to distinguish the state before and after the soft mode is created. This is similar to the analogous example of electromagnetic memory in the case of exploding charges \cite{Susskind:2015hpa}, where the change in the state is a pure gauge transformation in the absence of the observers (in that case the superconducting nodes). However, the presence of the superconducting nodes on a sphere surrounding the initial charges spontaneously breaks the gauge invariance in the global state, and therefore they are able to record a memory of the state before the explosion of the charges, and register the difference afterwards. The superconducting nodes are the electromagnetic analog of a patient observer. Though the departure from the initial vacuum becomes large at the Page time, giving an indication that the perturbative description breaks down on this timescale, the prognosis for describing the physics is not necessarily bleak if we can find a non-perturbative way of organizing these effects. This is the procedure we outline in this paper, where we show that it is possible to explicitly evaluate the effects of long modes on correlators to all orders. While related methods have been discussed in the literature \cite{1005.1056,Giddings:2011zd,Tanaka:2013caa,Urakawa:2010it,Tanaka:2012wi,Byrnes:2010yc,Gerstenlauer:2011ti, Frob:2013ht, Burgess:2015ajz}, our technique enables us to derive known results in a compact fashion, and also find new applications.
As an example, which also serves as a non-trivial check of our methods, we re-derive the probability distribution of the comoving curvature perturbation, and show that our result matches that of \cite{1103.5876}, where it was derived instead using the Kramers-Moyal equation of the cosmological comoving curvature perturbation. Our initial expressions rely on several approximation schemes which prevent them from being exact: namely, we work in the constant tilt approximation, and ignore interactions among the long modes themselves, focusing on their additive influence on short modes. This allows us to arrive at simple expressions for the correlators we consider, but we are actually able to step beyond these approximations and extract some quantitative features in the general case. Our main application of this technology is to write down the probability distribution of an observer measuring a given value of the power spectrum, even deep in the non-perturbative regime. We find a power law form, with the exponent dependent on the precise nature of inflation. To achieve this, we first recast the change in state of the system as a Bogoliubov transformation in section 2. This is shown to induce changes in the mode functions of the short wavelength modes, equivalent to the change in state obtained from the Noether charge we found in \cite{1609.06318}. This state is then used to compute the expected values of several common correlators in section 3, and the full distribution in section 4. We find that the averages are not very representative of a typical observation, a consequence of the system's power law behavior. Before we begin, we comment on the concept of the Page time in inflationary spacetimes, as the generalization from de Sitter space contains some subtlety.
The most primitive definition we will need is that the number of degrees of freedom emitted rivals the number of degrees of freedom of the horizon, which for pure de Sitter corresponds to a number of e-folds $N\sim M_p^2/H^2$. In slow roll spacetimes, however, the horizon size increases as well, and generically at a faster rate than long wavelength modes are emitted, so the Page time is never reached. The minimum requirement for the Page time to make sense is that the number of horizon degrees of freedom increases at a slower rate than the emission rate. Then, using the slow roll equation $dH/dN=-\epsilon H$, we find that $d A/dN= \epsilon A$, and, using $P_\gamma=H^2$, $P_\zeta=P_\gamma/(16\epsilon)$, we find that the change in horizon area (and therefore the number of holographic degrees of freedom) in one e-fold is equal to $\Delta A=1/P_\zeta$. Comparing this to the number of gravitons emitted per e-fold, $\Delta N_\gamma=2$, we see that the Page time is only possibly reached in the $P_\zeta\sim1$ regime, i.e. eternal inflation. Note that this is only a necessary condition for the Page time to be reached: this regime must still persist for long enough that the emitted degrees of freedom actually overtake those of the horizon. During inflation, however, more scalar degrees of freedom are emitted than tensors, and so a more appropriate comparison would be to $\Delta N_\zeta=1/\epsilon$. This yields the condition $r=P_\zeta$, which, as we will see, exactly corresponds to the condition that the loop corrections to the graviton two point function are of the same order of magnitude as the tree level contribution, or, in other words, the onset of the non-perturbative regime. \section{Bogoliubov}\label{bogo} In this section we show how a soft graviton or inflaton mode can be reinterpreted as inducing particle production through a Bogoliubov transformation.
We demonstrate the utility of this framework by easily computing one loop corrections to the scalar two point function, and verify that it is equivalent to a change in the state of the system, proving its validity. Generically, we will be interested in correlators between the curvature perturbation $\zeta$ and gravitons $\gamma$ of the form \begin{equation} \left\langle \prod_{i=1}^n\zeta_{\tilde{k}_i}\prod_{j=1}^m\gamma_{\tilde{k}_j}\right\rangle \, . \end{equation} We choose to work in the uniform density gauge, where the effect of the long mode is to shift the momentum $k^2 \rightarrow\tilde{k}^2=e^{2\zeta_L}\left[e^{\gamma_L}\right]_{ij} k_i k_j$. Intuitively, this will induce particle creation by altering the mode equation of the fluctuations. For any scalar field, for instance, the wave function will obey the shifted wave equation \begin{equation} \tilde\Delta\phi=\left(a^{-3}\partial_t a^3 \partial_t -\frac{1}{a^2}e^{-2\zeta_L}e^{-\gamma_L}{}_{ij}\partial_i\partial_j\right)\phi=m^2\phi \, . \end{equation} We mostly focus on the case where $\phi$ is the canonically normalized field corresponding to $\zeta$, that is, $\zeta=(H/\dot{\bar\phi})\phi$. From here we see that the long wavelength modes do not affect the separability of this wave operator, so that time and space can be analyzed individually (once the long mode is frozen out). Additional terms involving the long $\zeta$ mode would muddle the two by adding a nonzero shift vector, but these are slow roll suppressed. Furthermore, the part of this equation that depends on time derivatives is not altered at all, which means that if we expand in spatial plane waves, the mode function will depend on these through the effective wave number described above.
If there is any spatial dependence in the long modes then this procedure is approximate, as plane waves will not be exact eigenfunctions of the Laplacian, but in the limit that the wavelength of the long modes is much larger than the scales we are interested in, the long mode can be treated as a constant (anisotropic) rescaling of the coordinates. \subsection{Inflation} \label{Bogoliubov transformation} In this setting the mode functions are Hankel functions, which, when written in terms of conformal time $\eta=\int dt/a$, are \begin{equation} u_k(\eta)=c_{\nu} H (-\eta)^{3/2} H^+_\nu(-k\eta)\, , \end{equation} where $c_{\nu}=\frac{\sqrt{\pi}}{2} e^{i \frac{\pi}{2}(\nu + 1/2)}$, chosen to asymptote to positive frequency modes at early times. The index $\nu$ parameterizes the departure from exact de Sitter, which in terms of the scalar power spectrum tilt is $\nu=3/2-(n_s-1)/2$. The insight we draw is that in a shifted background the same mode function can be used except with the replacement of $k\rightarrow\tilde{k}$ \cite{Maldacena:2002vr,Seery:2008ax,1005.1056,Giddings:2011ze}. Then the Bogoliubov coefficients can be computed in the standard way. The additional subtlety that normally does not occur is that these coefficients now depend on the long modes, that is, the Bogoliubov coefficients are now \emph{field dependent}. However, if we restrict our attention to modes with wavelengths much shorter than the long mode, the operators commute and we are able to sidestep this subtlety. The field in the shifted background is related to the original mode functions through \begin{eqnarray} \label{alpha beta def} \zeta_{\tilde{k}}|0\rangle=\left(\alpha_k u_k^*+\beta_k u_k\right)a_k^\dagger|0\rangle\, . \end{eqnarray} If there are multiple modes in the desired correlator, this procedure can be used for each independently.
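As a concrete illustration of the shifted momentum $\tilde{k}^2=e^{2\zeta_L}\left[e^{\gamma_L}\right]_{ij}k_ik_j$, the rescaling can be evaluated numerically. This is a minimal sketch, not part of the derivation: the long-mode amplitudes below are illustrative values, and the long graviton is taken transverse-traceless for a long mode along $z$, so that in this frame $e^{\gamma_L}=\mathrm{diag}(e^{g},e^{-g},1)$.

```python
import numpy as np

# Illustrative long-mode amplitudes (not from the text).
zeta_L, g = 0.05, 0.1

# Transverse-traceless "+" polarization for a long mode along z:
# gamma_L = diag(g, -g, 0), so the matrix exponential is diagonal.
e_gamma = np.diag([np.exp(g), np.exp(-g), 1.0])

k = np.array([0.6, 0.0, 0.8])          # unit wavevector of the short mode

# Effective wavenumber: k^2 -> kt^2 = e^{2 zeta_L} [e^{gamma_L}]_{ij} k_i k_j
kt = np.sqrt(np.exp(2 * zeta_L) * (k @ e_gamma @ k))
# With zeta_L = g = 0 the shift disappears and kt reduces to |k| = 1.
```

For long modes that are not exactly constant over the region of interest, this constant rescaling is only the leading approximation, as noted above.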
Thus, for instance, the correction to the two point function is \begin{equation}\label{optimus} \langle \zeta_{\tilde{k}_1}\zeta_{\tilde{k}_2}\rangle=\langle \left|\alpha_k u_k^*+\beta_k u_k\right|^2 J \rangle \delta^{(3)}(k_1+k_2) \, . \end{equation} The coefficient $J$ is a Jacobian factor, obtained from $\delta^{(3)}(\tilde k)=J\delta^{(3)}(k)$. It remains to compute the quantity $\langle \left|\alpha_k u_k^*+\beta_k u_k\right|^2 J\rangle$, which is where the interesting physics comes in. From the standard formulas we find that \begin{equation}\label{grimlock} \langle \left|\alpha_k u_k^*+\beta_k u_k\right|^2 J\rangle = \left\langle e^{(3-2\nu)\zeta_L}\left(e^{\gamma_L}{}_{ij}\hat{k}_i\hat{k}_j\right)^{-\nu}\right\rangle P_k\, , \end{equation} where $\hat{k}$ is the unit vector pointing in the direction of the wavenumber at which the scalar correlator is being evaluated, and $P_k=|u_k|^2$ is the power spectrum. This quantity is somewhat tricky, as it involves nontrivial index structure and, when the matrix exponential is expanded, depends on the long graviton in a complicated way. To begin, we reproduce the one loop results, before going on to a more sophisticated analysis. \subsection{One loop} To reproduce the one loop results from the literature \cite{1005.1056} we need to expand the quantity (\ref{grimlock}) to second order in the long curvature and graviton modes. This will automatically incorporate the two diagrams of the form drawn below. \begin{centering} \begin{figure*}[h] \centering \includegraphics[width=12cm]{1loop.png} \caption{The two 1 loop diagrams. The straight line is the scalar mode, and the slinky lines are both long wavelength gravitons and scalars.} \label{alphadecays} \end{figure*} \end{centering} To do this, let us denote $\delta_m=\gamma^m_{ij}\hat{k}_i\hat{k}_j$.
Then we have \begin{equation} \left\langle\hat{\tilde{k}}^{-2\nu}\right\rangle=1+\frac{(3-2\nu)^2}{2}\langle\zeta_L^2\rangle-\frac{\nu}{2}\langle\delta_2\rangle+\frac{\nu(\nu+1)}{2}\langle\delta_1^2\rangle+\mathcal{O} \left(\gamma_L^4, \zeta_L^4, \zeta_L^2 \gamma_L^2 \right)\, . \end{equation} To proceed we need to relate the two second order correlators to the power spectrum of gravitons. Using \begin{equation} \langle\gamma(p_1)_{ij}\gamma(p_2)_{kl}\rangle=P_\gamma(p_1)P_{ijkl}(p_1)\delta^{(3)}(p_1+p_2), \end{equation} where \begin{equation} P_{ijkl}(p)=\hat\delta_{ik}\hat\delta_{jl}+\hat\delta_{il}\hat\delta_{jk}-\hat\delta_{ij}\hat\delta_{kl},\quad \hat\delta_{ij}=\delta_{ij}-\hat{p}_i\hat{p}_j \end{equation} we find that \begin{equation} \langle\delta_1^2\rangle=s^4\langle\gamma_L^2\rangle,\quad\langle\delta_2\rangle=2s^2\langle\gamma_L^2\rangle \end{equation} where $s^2=1-(\hat k\cdot\hat p)^2$. Once we do the angular average we arrive at \begin{equation} \langle\delta_1^2\rangle=\frac{8}{15}\langle\gamma_L^2\rangle,\quad\langle\delta_2\rangle=\frac{4}{3}\langle\gamma_L^2\rangle \end{equation} which gives \begin{equation} \left\langle\hat{\tilde{k}}^{-2\nu}\right\rangle=1+\frac{(1-n_s)^2}{2}\langle\zeta_L^2\rangle+\frac{(4-n_s)(1-n_s)}{15}\langle\gamma_L^2\rangle+\mathcal{O} \left(\gamma_L^4, \zeta_L^4, \zeta_L^2 \gamma_L^2 \right) \, . \end{equation} We have used the fact that to lowest order $\nu=(4-n_s)/2$, which gives perfect agreement with \cite{1005.1056}. \subsection{Charge transformation $\Leftrightarrow$ Bogoliubov transformation} To close the consistency triangle of figure \ref{triangle} we need to show that the charge associated with the soft mode derived in \cite{1609.06318} indeed corresponds to a Bogoliubov transformation with the coefficients derived in section \ref{Bogoliubov transformation}. 
\begin{centering} \begin{figure*}[h] \centering \includegraphics[width=8cm]{triangle2.pdf} \caption{Equivalence between the charge associated with the asymptotic symmetries, a coordinate transformation, and a Bogoliubov transformation.} \label{triangle} \end{figure*} \end{centering} As described in \cite{1609.06318} (and in several other works before \cite{1203.6351,1304.5527}, as well as in \cite{Kehagias:2017rpe} from the perspective of a dS/CFT correspondence) the symmetry of the action under large gauge transformations has an associated charge \begin{eqnarray} Q_\xi=\frac{1}{2} \int d^3 x \left[ \{\Pi_\zeta, \delta \zeta\} + \{\Pi_\gamma^{ij}, \delta \gamma_{ij} \} \right] \end{eqnarray} where $\Pi_{\zeta,\gamma} \equiv \delta {\cal L}/ \delta (\dot{\zeta}, \dot{\gamma}_{ij}) $ are the conjugate momenta associated with the two gravitational degrees of freedom $\zeta$ and $\gamma_{ij}$, and $\delta \zeta, \delta \gamma_{ij}$ are their transformations under a large gauge transformation. For simplicity we will only consider the charge associated with a soft graviton. We would like to verify that the charge acts on the vacuum as a Bogoliubov transformation, which has the form \begin{eqnarray} \label{BogTrans} \left| 0' \right>= \prod_{k} \frac{e^{\frac{-\beta^*}{2\alpha}a^\dagger_k a^\dagger_{-k}}}{|\alpha_k|^{1/2}} \left| 0 \right>, \end{eqnarray} where $\alpha$ and $\beta$ relate the creation and annihilation operators between the two different vacua as described in eq. \ref{alpha beta def}. We focus on the charge induced by a soft tensor, i.e.
we consider a diffeomorphism of the form $\xi_i = \left(e^{\gamma^L/2}-\delta \right)_{ij} x_j$ with associated charge\footnote{We refer the reader to \cite{1609.06318} for the derivation of the cubic part of the charge.} \begin{eqnarray} \label{charge0} Q= \frac{a^3 M_p^2}{4} \int d^3x \, \dot{\gamma}_{ij} D_L \gamma_{ij} \end{eqnarray} where $D_L \equiv \gamma^L_{ab}/2 \, x_b \partial_a$ and $\gamma^L_{ab}$ is the soft graviton appearing in the large gauge transformation. When expanding in Fourier space the charge can be written as \begin{eqnarray} \exp{iQ}= \Pi_k \exp{\left(c_+ K_+ + c_- K_- + c_3 K_3\right)} \end{eqnarray} where $K_+ = a^\dagger_k a^\dagger_{-k}/2$, $K_- = a_k a_{-k}/2$ and $K_3 = (a_k^\dagger a_{-k} + a_{k} a^\dagger_{-k})/4$, and all the $c$'s are momentum dependent functions associated with each $K$. Making use of the formulas in appendix 5 of \cite{barnrad} we can rewrite the exponential of the charge as \begin{eqnarray} \exp{iQ}= \Pi_k \exp{\left(\Gamma_+ K_+\right)} \, \exp{\left( \log (\Gamma_3) K_3\right)} \, \exp{\left(\Gamma_- K_-\right)} \end{eqnarray} where \begin{eqnarray} \Gamma_\pm &=& \frac{ 2 c_\pm \sinh \beta}{ 2\beta \cosh \beta - c_3 \sinh \beta} \\ \Gamma_3 &=& \left( \cosh \beta - \frac{c_3}{2 \beta} \sinh \beta \right)^{-2} \\ \beta^2 &=& \frac 1 4 c_3^2 - c_+ c_- . \end{eqnarray} All the $c$-functions are proportional to $\gamma_L$ and so is $\beta$. Thus, expanding the $\Gamma$s for small $\beta$ gives \begin{eqnarray} \Gamma_\pm &=& c_\pm + {\cal O }(\gamma_L^2) \\ \Gamma_3 &=& 1 + {\cal O }(\gamma_L) \end{eqnarray} This shows that to leading order in $\gamma_L$ \begin{eqnarray} \exp{iQ} \left| 0 \right> = \mathcal{N} \, \Pi_k \exp{\left(c_+ K_+\right)} \left| 0 \right>. \end{eqnarray} where $\mathcal{N}$ is a normalization factor. To close the triangle of fig. \ref{triangle} we still need to show that $c_+ = - \beta^*/\alpha$ with the $\alpha$ and $\beta$ computed in sec. 
\ref{Bogoliubov transformation}\footnote{Note that the factor of $1/2$ is already included in the definition of $K_+$.} . To do that we come back to our original expression in eq. \ref{charge0} and integrate by parts half of the integral \begin{eqnarray} Q=\frac{a^3 M_p^2}{8} \left[\int d^3 x D_L \left( \dot{\gamma}_{ij} \gamma_{ij} \right) - D_L \left( \dot{\gamma}_{ij} \right) \gamma_{ij}+ \dot{\gamma}_{ij} D_L \gamma_{ij} \right]. \end{eqnarray} The total derivative will be evaluated at the boundary. Therefore, when going to Fourier space, it will correspond to an operator associated with a single (absolute) momentum. We neglect this term in what follows given that in the large volume limit its effect on the state is negligible. Then, after Fourier transforming the previous integral and selecting the terms proportional to $K_+$, we get \begin{eqnarray} \label{charge1} \tilde{Q}&=&-\frac{a^3 M_p^2}{4} \int d^3 k \, \sum_{\sigma} \left[ \dot{\gamma}_k^* D_L \gamma_k^*- \gamma_k ^* D_L \dot{\gamma}_k^* \right] a_{k,\sigma}^\dagger a_{-k,\sigma}^\dagger \quad, \end{eqnarray} where we have used the fact that $\sum_{\sigma, \sigma'} \epsilon^{\sigma}_{ij}(k) \epsilon^{\sigma'}_{ij}(-k) = 2\delta_{\sigma, \sigma'}$, with $\sigma$ the graviton polarization. The previous equation defines $c_+$ to be \begin{eqnarray} \label{c+} c_{+,\sigma} = \frac{a^3 M_p^2}{2i} \left[ \dot{\gamma}_k^* D_L \gamma_k^*- \gamma_k ^* D_L \dot{\gamma}_k^* \right] \end{eqnarray} where the $\sigma$ index just characterizes the fact that we would have a $c_+$ for each polarization.
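The small-$\beta$ expansion of the $\Gamma$'s quoted above, $\Gamma_\pm = c_\pm + {\cal O}(\gamma_L^2)$ and $\Gamma_3 = 1 + {\cal O}(\gamma_L)$, is easy to check numerically. A minimal sketch; the values of the $c$-functions below are illustrative small numbers standing in for quantities proportional to $\gamma_L$:

```python
import numpy as np

# Illustrative c-functions, all of order eps ~ gamma_L (not from the text).
eps = 1e-4
cp, cm, c3 = 2.0 * eps, -1.5 * eps, 0.7 * eps

# beta^2 = c_3^2/4 - c_+ c_-  (complex sqrt to allow either sign).
b = np.sqrt(complex(c3**2 / 4 - cp * cm))

# Gamma_+ and Gamma_3 from the disentangling formulas above.
Gp = 2 * cp * np.sinh(b) / (2 * b * np.cosh(b) - c3 * np.sinh(b))
G3 = (np.cosh(b) - c3 / (2 * b) * np.sinh(b)) ** (-2)

assert abs(Gp - cp) < 1e-6   # Gamma_+ = c_+ + O(gamma_L^2)
assert abs(G3 - 1) < 1e-3    # Gamma_3 = 1 + O(gamma_L)
```

The residuals scale as advertised: halving `eps` reduces the first deviation by a factor of four and the second by a factor of two.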
In order to make the comparison with the Bogoliubov coefficients derived from the coordinate transformation, we use the definitions \begin{eqnarray} \label{beta} \beta &=& \frac{u'_k v_k - v'_k u_k }{W} \\ \label{alpha} \alpha &=& \frac{u^{'}_k v^*_k - u_k v_k^{'*} }{W} \end{eqnarray} where $v(k)= u(\tilde{k})$ is the mode function in the shifted coordinates with $\tilde k^2= e^{2\zeta_L} e^{\gamma_L}_{ij} k_i k_j$, and $W$ is the Wronskian. When expanding $u(\tilde{k})$ around $k$, the two expressions simplify to \begin{eqnarray} \beta &=& \frac{u_k' D_L u_k - u_k D_L u'_k}{W} + {\cal O}(\tilde{k}-k)^2 \\ \alpha &=& 1 + {\cal O}(\tilde{k}-k). \end{eqnarray} Therefore, \begin{eqnarray} -\frac{\beta^*}{\alpha} = \frac{u^{'*}_k D_L u^*_k - u^*_k D_L u^{'*}_k}{W^*} + {\cal O}(\tilde{k}-k)^2 \, . \end{eqnarray} If we now identify $u_k$ with $\gamma_k$, the Wronskian is given by $2 i/(a M_p)^2$ and the last equation matches precisely with eq. \ref{c+}. In the case of $Q_\zeta$ the derivation goes through in the same way, with one extra detail. When Fourier transforming $D_L$ there will be a term of the form $\partial_k k$, which at first glance makes the $-\beta^*/\alpha$ computed from the two approaches different. However, this term corresponds precisely to the change in the Jacobian which we also had to take into account in eq. \ref{optimus}. With this result we close the equivalence between the three vertices of the triangle: charge transformation, coordinate change and Bogoliubov transformation. \subsection{Patient Observers and Particle Production}\label{patient} In order to measure the inflaton particle production induced by the emission of a long mode in a quasi-de Sitter phase, we imagine an Unruh type detector, a two state quantum system coupled to the inflaton in such a way that the absorption of an inflaton particle will trigger a transition in the state of the detector.
Since the long scalar mode can be seen as a constant shift of the time coordinate, this will only be visible in the integrated particle production when measured according to some internal clock in the detector, which is independent of the expansion. Such a detector is not de Sitter invariant, and will detect a change in the integrated particle production measured according to the internal detector clock as the state changes from $\left|0\right> \to \left|0'\right>$. Thus, this serves as a patient observer (defined in the introduction) in quasi de Sitter space if it is around for a sufficiently long time. An isocurvature mode during inflation is a prototype of this kind of detector \cite{Geshnizjani:2003cn}. However, if we consider the production of gravitons in pure de Sitter, the issue is a bit trickier. In this case, we can think of a detector localized inside the horizon, which is coupled to a light scalar field initially in its vacuum. The detector breaks de Sitter invariance and can record the particle excitations of the light test scalar field. Using the expressions for the Bogoliubov coefficients defined in equations (\ref{beta}) and (\ref{alpha}) we obtain, in the dS limit, \begin{equation} \alpha_k =\frac{1}{2i} \left[\left(\left(\frac{k}{\tilde k}\right)^{3/2}-\left(\frac{\tilde k}{ k}\right)^{1/2}\right) \frac{1}{k\eta}-i\left(\left(\frac{k}{\tilde k}\right)^{1/2}+\left(\frac{\tilde k}{ k}\right)^{1/2}\right)\right] e^{i(\tilde k-k)\eta} ~, \end{equation} \begin{equation} \beta_k =\frac{1}{2i} \left[\left(\left(\frac{\tilde k}{ k}\right)^{1/2}-\left(\frac{k}{\tilde k}\right)^{3/2}\right) \frac{1}{k\eta}-i\left(\left(\frac{\tilde k}{ k}\right)^{1/2}-\left(\frac{k}{\tilde k}\right)^{1/2}\right)\right] e^{i(\tilde k+k)\eta} ~. 
\end{equation} It is easy to check that the normalization condition $|\alpha_k|^2-|\beta_k|^2=1$ is satisfied, and that on super-horizon scales one has indeed \begin{equation} | u_{\tilde k}|^2 = \left(\frac{k}{\tilde k}\right)^3 | u_{k}|^2~, \end{equation} which proves that the forms of $\alpha_k$ and $\beta_k$ are self-consistent. From the Bogoliubov transformation, we can compute the inflaton particle number of the state initially in the $\left| 0\right>$ vacuum, when measured with respect to the $\left| 0'\right>$ vacuum, \begin{equation} N_k= \left< |\beta_k|^2 \right>= \frac{1}{4} \left<\left[\left(\frac{k}{\tilde k}\right)^3-2\left(\frac{k}{\tilde k}\right) +\left(\frac{\tilde k}{k} \right) \right] \frac{1}{k^2\eta^2}+\left[\left(\frac{k}{\tilde k}\right)+\left(\frac{\tilde k}{ k}\right)-2\right] \right> ~, \end{equation} and so for sub-horizon modes, $-k\eta \to \infty$, we see that there is a finite piece that does not vanish \begin{equation} N_k \approx \frac{\left<\delta_1^2\right>}{16}= \frac{1}{30}\left<\gamma_L^2\right>~. \end{equation} We notice that the effect for a sub-horizon observer is tiny. Even if our sub-horizon observer waits for what corresponds to the Page time of de Sitter, until the variance of the long modes that have left the horizon becomes larger than one, $\left<\gamma_L^2\right> \gg 1$, only approximately one extra particle will have been produced per Fourier mode $k$. However, it appears as if a sub-horizon observer after the Page time would discover that it is no longer in the vacuum, but in a state with one excited particle in every sub-horizon Fourier mode. Since the one-particle state is orthogonal to the vacuum, the observer will see that the state has changed by an order one factor on all scales within the horizon \cite{1609.06318}. Since a constant long mode does not change the time dependence of the metric, it is not expected to change the particle production rate or the periodic properties of the Green function under a complex phase shift in time.
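Before moving on, the explicit coefficients above can be checked numerically. The following is a minimal sketch: the values of $k$, $\tilde k$ and $\eta$ are arbitrary illustrative choices, and the standard de Sitter mode function is assumed for $u_k$ (up to an overall normalization that cancels in the ratio).

```python
import numpy as np

# Sketch: numerical check of the Bogoliubov coefficients quoted above.
# k, kt (= k tilde) and eta are arbitrary illustrative values.
def alpha_beta(k, kt, eta):
    al = (1/2j)*(((k/kt)**1.5 - (kt/k)**0.5)/(k*eta)
                 - 1j*((k/kt)**0.5 + (kt/k)**0.5))*np.exp(1j*(kt - k)*eta)
    be = (1/2j)*(((kt/k)**0.5 - (k/kt)**1.5)/(k*eta)
                 - 1j*((kt/k)**0.5 - (k/kt)**0.5))*np.exp(1j*(kt + k)*eta)
    return al, be

k, kt = 1.0, 1.3
al, be = alpha_beta(k, kt, -0.7)
assert abs(abs(al)**2 - abs(be)**2 - 1) < 1e-12     # normalization condition

# de Sitter mode function (overall normalization cancels in the ratio):
u = lambda q, eta: np.exp(-1j*q*eta)/np.sqrt(2*q)*(1 - 1j/(q*eta))
eta = -1e-6                                          # super-horizon: |k eta| << 1
assert abs(abs(u(kt, eta))**2/abs(u(k, eta))**2 - (k/kt)**3) < 1e-6

# Sub-horizon limit -k*eta -> infinity: the finite piece of |beta|^2 survives,
# |beta|^2 -> (sqrt(kt/k) - sqrt(k/kt))^2/4 = delta_1^2/16 + ...
al, be = alpha_beta(k, kt, -1e6)
assert abs(abs(be)**2 - ((kt/k)**0.5 - (k/kt)**0.5)**2/4) < 1e-6
```

The last assertion isolates the finite sub-horizon piece of $\left<|\beta_k|^2\right>$ that produces the residual particle number discussed above.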
Therefore we expect that the new state, $\left|0'\right>$, will be thermal with the same temperature as $\left|0\right>$, and in fact a single pointlike Unruh type detector may not experience any change in the transition rates between energy levels in the detector due to absorption or emission of particles. One suspects that only when measuring the integrated particle production using an internal clock (such as an independently fluctuating isocurvature mode) can the difference in the particle production before and after adding the soft mode be seen. Therefore, one is led to think that such an effect might be accounted for as being due to some accumulated difference between some independent clock field and the adiabatic clock (the expansion) shifted by long modes \cite{Geshnizjani:2003cn} (see also \cite{Abramo:2001dd,Bonga:2015urq}), and if the effect is always measured in terms of the adiabatic clock (the expansion), then the effect is absent \cite{Unruh:1998ic,Abramo:2001dc, Geshnizjani:2002wp,Tsamis:2005bh, Losic:2005vg, Gasperini:2009wp, Frob:2017coq}. \subsection{Is Patience Fundamentally Impossible?}\label{impatience} In this subsection, we report a tentative conjecture that it may actually be impossible to build a realistic subhorizon instrument capable of acting as a patient observer. We have performed two thought experiments on how one would go about measuring the effects of long modes, and each time we have been stymied by the requirement that we build our machine out of physically realizable matter. This does not constitute a proof that a measurement device of sufficiently clever design cannot ever be envisioned, but it does give some indication that there may be a version of ``cosmic censorship" at play, preventing one from making these measurements. This situation is reminiscent of \cite{DYSON:2013jra}, where it is argued that no machine capable of observing a single graviton may ever be constructed. 
Our first attempt at a patient observer, as mentioned in the introduction, is a circular array of satellites very carefully bound together in the radial direction, keeping them attached under the expansion, but not preventing them from feeling shear effects. When long modes are added, the spatial distance between the satellites changes by \begin{equation} ds^2 = a^2 \delta_{ij}dx^idx^j \to ds'^2 = a^2 (e^{\gamma_L})_{ij}dx^i dx^j~, \label{squab} \end{equation} and so shear deformations become order one when $\left<\gamma_L^2\right> \sim 1$. However, if we compute the uncertainty in the location of the satellites due to quantum drift after a time $t$ by using $\Delta p_q = m \Delta x_q/t$, we notice from the uncertainty principle that \begin{equation} \Delta x_q \geqslant \sqrt{t/m}, \end{equation} where $m$ is the mass of the satellites. On the other hand, the effect we want to measure is a shift in the position of the satellite of order \begin{equation} \frac{\Delta x}{x} \leqslant \sqrt{\left<\gamma^2\right>} \sim \sqrt{H^3 t/M_p^2}~. \end{equation} In order for the effect to be observable, we obviously require \begin{equation} \label{dxq} \frac{\Delta x}{x} \geqslant \frac{\Delta x_q}{x}. \end{equation} Using that the Schwarzschild radius of the satellites has to be less than the horizon, $r_s= G_N m \leqslant 1/H$, we find that (\ref{dxq}) implies \begin{equation} x\geqslant \frac{1}{H}, \end{equation} so the detector must be larger than the horizon in order to measure the effect, which makes this setup a bad patient observer. Note also that trying to alleviate the problem by adding $N$ satellites, as the Gaussian quantum noise in the measurement then goes down as $1/\sqrt{N}$, does not help, since one still needs to require that the Schwarzschild radii of all $N$ satellites fit inside the horizon, $N r_s \leqslant 1/H$, and so the constraint above remains the same, independent of the number of satellites.
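The chain of inequalities above can be checked in a few lines; the following is a numeric sketch in $\hbar=c=1$ units, where the values of $H$, $M_p$ and $t$ are illustrative placeholders and the satellite mass saturates the Schwarzschild bound.

```python
import math

# Sketch of the satellite argument (hbar = c = 1; values illustrative).
H, Mp, t = 1e-5, 1.0, 1e12       # Hubble rate, Planck mass, elapsed time
m = Mp**2/H                      # heaviest allowed satellite: r_s = m/Mp^2 = 1/H

dx_q = math.sqrt(t/m)            # quantum drift after time t
signal = math.sqrt(H**3*t)/Mp    # Delta x / x ~ sqrt(<gamma_L^2>)

# Observability, Delta x/x >= Delta x_q/x, forces x >= dx_q/signal:
x_min = dx_q/signal              # = M_p/sqrt(m H^3), smallest at m = M_p^2/H
assert abs(x_min*H - 1) < 1e-9   # detector must be at least horizon-sized
```

Choosing any lighter $m$ only increases `x_min`, so saturating the Schwarzschild bound is the best case, as stated above.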
So, long wavelength gravitons cannot be measured by this method. We try again with a system of clocks, aiming to measure the shift in the expansion rate as seen by the independent clocks discussed in the previous section. Granting that the effect is present when treating the clocks and detectors classically, we check what happens when taking into account the quantum fluctuations of the detectors or clocks themselves. Here, the idea is that a shift by a long wavelength mode as in (\ref{squab}) would shift the time coordinate by $t\rightarrow t-\langle\gamma_L^2\rangle/H$. By using $\langle\gamma_L^2\rangle \sim \alpha_0 H^2/M_p^2 - \alpha_1 H^3 t/M_p^2 - \alpha_2 H^6 t^2/M_p^4 -\dots$ ($\alpha_i$ numerical loop factors), we can equally well absorb the effect in the expansion rate by taking $H\to H + \alpha_1 H^3/M_p^2+\alpha_2 H^6 t/M_p^4 +\dots \,$. One has to be careful in the physical interpretation of this kind of effect, as discussed in the previous subsection\footnote{In fact it is a controversial point whether this is a physical effect or not \cite{Garriga:2007zk,Tsamis:2008zz}.}, but for the sake of the argument let us assume that this effect is present at a classical level and see if it will ever be possible to measure it in practice when including quantum effects on the detectors or clocks. In order to measure a change in the expansion rate we imagine two freely falling satellites exchanging photons. The redshift of the photons, $z$, is related to the expansion rate, if the satellites are not too far from each other, by Hubble's law \begin{equation}\label{Hubble} z = H_0 (t_1-t_0) \end{equation} where $t_1$ is the time at which the signal is observed at the second satellite, and $t_0$ is when it was emitted at the first. If we want to measure a shift in the expansion rate of order $\Delta H_L/H \sim H^2 /M_p^2$, then we need to be able to synchronize the two clocks to a precision of $\Delta t_L/t \sim H^2/M_p^2$.
On the other hand, if we want to look at the expansion rate in a region of size $L$, the two clocks need to be separated by the same distance. Regardless of the details of how these clocks register time, a general requirement is that if we want a clock that is accurate to a precision $\Delta t$, it must have an energy uncertainty $\Delta E\gtrsim1/(\Delta t)$. As recently described in \cite{clocks}, physical clocks must interact with each other gravitationally, causing time dilation effects that will shift the time between clocks by an uncertain amount \begin{equation} \Delta t_q \sim \frac{\Delta E}{M_p^2 L} t. \end{equation} This sets a fundamental limit to how precisely clocks may measure time. To make this uncertainty as small as possible, we take $\Delta E$ to be as small as possible, and $L$ as large as possible. Whatever the nature of our clock, it needs to be localized inside the horizon, giving $\Delta E > H$ and, for the same reason, $L < 1/H$. This implies that \begin{equation} \Delta t_q \gtrsim \frac{H^2}{M_p^2} t=\Delta t_L~. \end{equation} Therefore, quantum uncertainty again makes it impossible to measure the shift in the expansion rate due to long modes. Therefore, even if we assume that the effects of \cite{Geshnizjani:2003cn, Abramo:2001dd,Tsamis:2011ep,Bonga:2015urq, Basu:2016gyg} are there to measure when neglecting quantum fluctuations of clocks and detectors, in a full quantum treatment the effect appears unobservable. One might be tempted to conjecture that this could be an indication that there is some fundamental limitation to the size of infrared effects measured by patient observers in de Sitter. This points to a tantalizing sort of quantum-geometric consistency: even though the geometry of spacetime `fuzzes out' on such scales, it does so in such a way that it is exactly below the threshold of detectability for any local observer.
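The same bookkeeping for the clock argument, again as a numeric sketch in natural units with illustrative placeholder values:

```python
import math

# Sketch of the clock-precision bound (hbar = c = 1; values illustrative).
H, Mp, t = 1e-5, 1.0, 1e12
dE, L = H, 1/H                   # best case: smallest Delta E ~ H, largest L ~ 1/H
dt_q = dE*t/(Mp**2*L)            # gravitational time-dilation noise between clocks
dt_L = H**2*t/Mp**2              # signal: clock shift induced by the long modes
assert abs(dt_q/dt_L - 1) < 1e-9 # the noise floor sits exactly at the signal size
```

Any sub-horizon clock has `dE > H` and `L < 1/H`, so `dt_q` can only be larger than this best case, which is already equal to the signal.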
Note that the analysis above is for exact de Sitter space and does not apply explicitly to inflation, as the long wavelength scalar modes are enhanced by a slow-roll factor $1/\epsilon$ in that case. It would be interesting to examine this more generally. \section{Non-perturbative Results}\label{nonpert} We are ultimately interested in the non-perturbative regime, where the contributions from many long modes accrue to alter the lowest order results. The correlators \begin{equation} \langle \zeta_L^2\rangle=\int_{k_{min}}^{k_{max}}\frac{dk}{k}P(k),\quad\quad \langle \gamma_L^2\rangle=\int_{k_{min}}^{k_{max}}\frac{dk}{k}P_\gamma(k)\label{coxrelate} \end{equation} extend over the number of modes that can be seen by an observer. Thus, even though we may be embedded in a very large universe, the value of $k_{max}$ we are able to take is limited by our current horizon size. This corresponds to, at the very most, a few dozen e-folds, and as such places us squarely in the perturbative regime. If we were willing to wait a very long time (and dark energy proves to eventually decay, placing us in asymptotic Minkowski space and giving us access to an unlimited number of e-folds of inflation), we would eventually reach the non-perturbative regime. Note that since $P(k)>P_\gamma(k)$ for the entire duration of inflation, the scalar corrections will be the more relevant of the two effects. This is fortuitous for us, as the tensor nature of the graviton corrections makes the expressions much more difficult. As such, we will focus primarily on the scalar corrections first, and display the tensor generalization in section \ref{nonpertten}. Before we launch into our analysis, let us make an honest assessment of how long it takes to reach the non-perturbative regime, when these quantities become large\footnote{As we will see, ``large'' means greater than $1/(1-n_s)^2$.}.
The CMB scales we have access to only yield around 8 e-folds, which is nowhere close to the amount necessary to see non-perturbative effects. In general, the amount needed will depend on the model of inflation, as this sets the momentum dependence of the power spectrum. For monomial models, for instance, where $V(\phi)=\lambda\phi^p$, we find that $\langle \zeta_L^2\rangle=\lambda N^{p/2+2}$, with $N$ the number of e-folds. For a linear model, which is on the cusp of being ruled out or confirmed by tensor modes, $\lambda=10^{-3}$, and this becomes large for $N_{np}>200$. Now, to have access to these scales, we would have to wait until scales of size $k_{min}=e^{-N_{np}}k_{max}$ enter our horizon, which would take $10^{103}$ years. Recall that this also assumes a scenario where dark energy eventually decays away to Minkowski space, otherwise these scales would not ever reenter the horizon at all. Models of the form $m^2\phi^2$ are even worse, taking $10^{8,000}$ years before we would see enough modes. Plateau models of the form $V_0(1-e^{\phi/\Lambda})$ are worst of all, taking $10^{80,000}$ years. It can be amusing to compare these timescales to the timescales laid out in \cite{Dyson:1979zz} for the far far future evolution of the universe. There, they find that matter behaves as a liquid on timescales of order $10^{65}$ years, with all chemical bonds tending to break apart and every object ultimately sphericalizing. Black holes even of galactic size decay after $10^{100}$ years, and all elements decay to iron after $10^{1,500}$ years. The decay of ordinary clumps of matter into black holes takes much, much longer, though. Thus, even for the most optimistic models of inflation, building a device capable of registering this effect will be a challenge.
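A rough numerical version of this estimate for the linear model is given below as a sketch: $\lambda=10^{-3}$ and $n_s=0.96$ are illustrative inputs, $\langle\zeta_L^2\rangle=\lambda N^{p/2+2}$ is taken from above with $p=1$, and the waiting time is counted crudely as one present-day Hubble time ($\sim 10^{10}$ yr) per e-fold factor. The result lands within a few orders of magnitude of the quoted $10^{103}$ years, which is all this level of rigor can promise.

```python
import math

# Sketch: when does <zeta_L^2> = lam*N^(p/2+2) exceed 1/(1-n_s)^2?
lam, p, ns = 1e-3, 1, 0.96                # illustrative: linear model, best-fit tilt
threshold = 1/(1 - ns)**2                 # "large" means greater than 1/(1-n_s)^2
N_np = (threshold/lam)**(1/(p/2 + 2))     # solve lam*N^(5/2) = threshold
assert 150 < N_np < 250                   # consistent with N_np ~ 200 quoted above

# Waiting time until k_min = e^{-N_np} k_max reenters: ~ e^{N_np} Hubble times.
t_wait_years = math.exp(N_np)*1e10        # ~10^10 yr per current Hubble time
assert 1e95 < t_wait_years < 1e110        # same ballpark as the quoted 10^103 yr
```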
Linear models would require some sort of repair mechanism that would counteract the tendency of the chemical bonds in the device to break, and the others would need to deal with parts of it collapsing into iron, releasing $\text{MeV}$ scale radiation. These problems are in principle surmountable, though the authors graciously waive claims to any patent rights for such a device. Another point of note is that dark energy is not necessarily a nuisance when measuring this effect. In fact, many of these timescales are actually longer than the Page time for our current cosmological expansion, $10^{130}$ years, meaning that dark energy is actually beneficial in trying to measure this effect. However, these timescales are ridiculously impractical from a human perspective, and thus we regard the most promising avenue to ever observe this phenomenon to be from analog systems. Like the recent success of (possibly) detecting phonon Hawking radiation from a mute hole \cite{mute}, if the analog of de Sitter space could be constructed in a condensed matter system, where the Planck scale is replaced with the $\text{eV}$ scale, this timeframe could be pushed down to years or decades. We refrain from speculating what such a system would actually be at this moment. In this section we will compute the correlators along the lines of those in (\ref{optimus}) without resorting to a loop expansion. Our expressions for the correlators will depend on the quantities (\ref{coxrelate}) to arbitrarily high orders, and in this sense will be non-perturbative. However, our results will not be exact, and hold only in several well-defined approximations, which we detail now.
\begin{centering} \begin{figure*}[h] \centering \includegraphics[width=12cm]{2loop.png} \caption{The two loop diagrams that are automatically included by expanding our result to second order.} \label{there} \end{figure*} \end{centering} We treat the long mode state as the free field vacuum, ignoring interactions of the long modes among themselves. Diagrammatically, this corresponds to resumming the infinite class of diagrams represented in Fig. \ref{there}, while excluding those in Fig. \ref{not}. This amounts to treating the vacua to be those of the free fields. So, while we include, say, fifteen loop contributions coming from the Bogoliubov coefficients, we neglect the leading corrections from self-interactions, which start at two loop order. If desired, corrections from these higher order interactions can be added systematically to our result using the in-in formalism. \begin{centering} \begin{figure*}[h] \centering \includegraphics[width=12cm]{2loopgrav.png} \caption{The two loop diagrams that are not encapsulated in our analysis. Note that they all have self-interactions for the long modes, and so require a further insertion of the interaction Hamiltonian in the in-in formalism.} \label{not} \end{figure*} \end{centering} The second approximation that we make initially in order to obtain exact expressions is to treat the spectral tilt as constant, ignoring $k$ dependence. The basic results in the non-perturbative regime will actually not be greatly affected by this approximation, and we show in section \ref{fulldist} how to easily go beyond it. \subsection{Scalar Loops} In this section, we show that it is possible to obtain non-perturbative results for the expectation value of correlators. This makes it possible to probe the regime where the number of long modes becomes large, and will allow us to obtain nontrivial statements about the statistical properties of the ultra-large scale structure of the universe. 
We focus here on the corrections coming from scalar modes, both because the effect is larger than that coming from tensors, and because the results are much simpler. In the next section we return to the tensor contribution. To illustrate our approach, we start by computing the two-point correlator for scalar fields. From (\ref{optimus}), the expression in the shifted coordinates is \begin{equation} \langle \tilde P_k\rangle=\langle e^{-2\nu \zeta_L} e^{3\zeta_L}\rangle P_k=\langle e^{(1-n_s)\zeta_L}\rangle P_k \end{equation} so that we must find the expectation value of $e^{(1-n_s)\zeta_L}$ in order to obtain the average power spectrum that is measured. To do so, we write it as a path integral: \begin{equation} \langle e^{(1-n_s)\zeta_L}\rangle=\prod dk \int d\zeta_k \frac{e^{-\frac{\zeta_k^2}{2\sigma_k^2}}}{\sqrt{2\pi\sigma_k^2}}e^{(1-n_s)\zeta_L} \, . \end{equation} Here, we have made the two aforementioned simplifications: that the path integral is performed in the free field vacuum, so that we can ignore interactions, and that the spectral tilt is constant over the field range under consideration. We will comment on, and relax, each assumption in the next section, but for now we stick to this simplistic case in order to arrive at analytically solvable results. The qualitative behavior we find extends to the more complicated cases as well. Then, we take $\zeta_L$ to be a sum of long wavelength modes, $\zeta_L=\int_{k_{min}}^{k_{max}} dk \zeta_k$. The path integral for momenta outside these modes trivially evaluates to 1, and the remainder is a simple Gaussian integration that yields \begin{equation} \langle e^{(1-n_s)\zeta_L}\rangle=e^{\frac12(1-n_s)^2\langle\zeta_L^2\rangle} \, . \end{equation} We stress that this is a non-perturbative result, valid even in the regime with a large number of modes, where the loop expansion breaks down.
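The Gaussian identity underlying this result, $\langle e^{a\zeta_L}\rangle=e^{\frac12 a^2\langle\zeta_L^2\rangle}$ with $a=1-n_s$, is easy to confirm by Monte Carlo; the following sketch uses illustrative values of $a$ and the variance.

```python
import numpy as np

# Monte Carlo check of <e^{a*zeta}> = e^{a^2 <zeta^2>/2} for Gaussian zeta.
rng = np.random.default_rng(0)
a, var = 0.5, 1.0                    # a plays the role of 1-n_s; values illustrative
zeta = rng.normal(0.0, np.sqrt(var), size=1_000_000)
mc = np.exp(a*zeta).mean()           # sample average of e^{a*zeta}
exact = np.exp(0.5*a**2*var)         # the Gaussian-integral result
assert abs(mc/exact - 1) < 1e-2
```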
Expanding the exponential in a Taylor series is equivalent to a loop expansion, and indeed it reproduces the results found in \cite{1005.1056}. In fact, this expression was already obtained in \cite{1103.5876}, using quite different methods, and therefore serves as a nice consistency check of our methods. The approach we take in this paper allows us to generate results like this in a streamlined fashion, which will allow us to make more sophisticated statements about observable quantities than appear in the literature. As a first implementation of our analysis, note that this quantity is always greater than 1, indicating that the average measured power spectrum is always greater than the bare power spectrum, especially in the non-perturbative regime. We will make more precise comments about how to interpret this in section \ref{fulldist}. We can compute the correction to the tensor correlator using the exact same procedure, which gives \begin{equation} \langle \tilde P_k^\gamma\rangle=e^{\frac12 n_t^2\langle \zeta_L^2\rangle} P^\gamma_k. \end{equation} Thus, we find the average value of the tensor-to-scalar ratio in the presence of a large number of long wavelength modes to be \begin{equation} \langle\tilde r\rangle=r\, e^{\frac12(n_t^2-(1-n_s)^2)\langle \zeta_L^2\rangle}\label{ravg} \, . \end{equation} Though we would observe that both the tensor and scalar power increase, the ratio would either go to zero or infinity, depending on the sign of the exponent. Using the standard slow-roll consistency relations, the threshold between the two different behaviors is $r=8(1-n_s)$, which, at the best-fit value for the tilt, becomes $r=0.32$. Since we can rule out tensors at this level, we conclude that, assuming constant tilt, the brand of inflation that gave rise to our universe leads to a vanishing mean tensor-to-scalar ratio on extremely large scales.
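The location of the threshold can be checked directly, assuming the standard single-field consistency relation $n_t=-r/8$; the tilt value $n_s=0.96$ below is an illustrative best-fit input.

```python
# Sign of the exponent n_t^2 - (1-n_s)^2 in the <r~> formula above,
# assuming the slow-roll consistency relation n_t = -r/8.
def exponent(r, ns):
    return (r/8)**2 - (1 - ns)**2

ns = 0.96
assert abs(exponent(8*(1 - ns), ns)) < 1e-15   # vanishes at r = 8(1-n_s) = 0.32
assert exponent(0.2, ns) < 0                   # r < 0.32: ratio driven to zero
assert exponent(0.4, ns) > 0                   # r > 0.32: ratio driven to infinity
```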
\subsection{Local non-Gaussianity} We now compute the squeezed limit of the bispectrum, to see how Maldacena's consistency relation is modified in the presence of a large number of modes. This involves a double expansion, as the short wavelength modes are in the presence of the single soft mode of intermediate wavelength, as well as the far infrared modes. Then \begin{eqnarray} \langle \zeta_{\tilde q}\zeta_{\tilde{ k_1}}\zeta_{\tilde{k_2}}\rangle=\left\langle\left\langle\zeta_{\tilde q}e^{(1-n_s)\zeta_{\tilde q}}\right\rangle \zeta_{\tilde{ k_1}}\zeta_{\tilde{k_2}}\right\rangle=-\left\langle\frac{\partial}{\partial n_s}\left\langle e^{(1-n_s)\zeta_{\tilde q}}\right\rangle \zeta_{\tilde{ k_1}}\zeta_{\tilde{k_2}}\right\rangle\\ =(1-n_s)\left\langle\left\langle\zeta_{\tilde q}^2\right\rangle e^{\frac12(1-n_s)^2\langle\zeta_{\tilde q}^2\rangle} \zeta_{\tilde{ k_1}}\zeta_{\tilde{k_2}}\right\rangle \, , \end{eqnarray} where we have used the expression for two-point functions in a shifted background to arrive at the last line. At this point we neglect the quantity in the exponential, as it will always be extremely small, even in the non-perturbative regime. Then \begin{equation} \langle \zeta_{\tilde q}\zeta_{\tilde{ k_1}}\zeta_{\tilde{k_2}}\rangle=(1-n_s)e^{\frac12(1-n_s+1-n_q)^2\langle\zeta_L^2\rangle}\langle\zeta_q^2\rangle\langle\zeta_{k_1}\zeta_{k_2}\rangle \, , \end{equation} so that the background modes always enhance the expected power in the three-point function over the standard result. Here we have denoted $n_q$ as the tilt at the scale of the long mode, even though in our analysis we have taken it to be constant, to emphasize the fact that both scales enter into the corrections.
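The exponent bookkeeping relating this result to the $f_{NL}$ formula that follows is a one-line identity: writing $a=1-n_s$, $b=1-n_q$ and $s=\langle\zeta_L^2\rangle$, dividing $e^{\frac12(a+b)^2 s}$ by the two shifted two-point functions $e^{\frac12 a^2 s}$ and $e^{\frac12 b^2 s}$ leaves exactly $e^{a b s}$. A numeric sketch with illustrative values:

```python
# Bookkeeping for the double expansion: a = 1-n_s, b = 1-n_q, s = <zeta_L^2>.
a, b, s = 0.04, 0.07, 3.0                 # illustrative values
lhs = 0.5*((a + b)**2 - a**2 - b**2)*s    # leftover exponent after dividing out
assert abs(lhs - a*b*s) < 1e-12           # equals (1-n_s)(1-n_q)<zeta_L^2>
```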
The correction to the parameter $f^\text{local}_{NL}$ is \begin{equation} \langle\tilde{f}^\text{local}_{NL}\rangle=\frac{5}{12}\frac{\langle \zeta_{\tilde q}\zeta_{\tilde{ k_1}}\zeta_{\tilde{k_2}}\rangle}{\langle\zeta_{\tilde{q}}^2\rangle\langle\zeta_{\tilde{k_1}}\zeta_{\tilde{k_2}}\rangle}=\frac{5}{12}(1-n_s)e^{(1-n_s)(1-n_q)\langle\zeta_L^2\rangle}=f_{NL}^\text{local} e^{(1-n_s)(1-n_q)\langle\zeta_L^2\rangle}\label{fnlavg} \, , \end{equation} in agreement with the one-loop result of \cite{1005.1056}, when expanding the exponential function to first order. We see that some terms in the exponential are not cancelled, as a consequence of its quadratic nature. From this we infer that the long modes generically induce larger average non-Gaussianity, and that deep in the non-perturbative regime this quantity diverges exponentially. This holds provided that the spectrum is red everywhere: from our expression, it appears that if the spectrum becomes blue for a period of inflation, the average non-Gaussianity associated with those scales will actually shrink to zero on very large scales. As we will discuss in the following section, this is an artefact of the constant tilt approximation. \subsection{Tensor Loops}\label{nonpertten} Now that we have demonstrated that our formalism can yield non-perturbative results for scalar modes, we use it to extend these results by computing the tensor contribution, using the same approximations. Though the final expression is not quite as simple for tensors, the path integral still simplifies dramatically to an integral expression that automatically resums all graviton loops to the two-point correlator. For a single long mode this becomes a single integral of special functions, and can effortlessly be expanded to any desired order, making it a compact generating expression for all higher order loop corrections. Including multiple long modes complicates the expression, but some simplifications are still possible.
General properties of this integral can be investigated, yielding non-perturbative results on the effect of infrared loops on the two-point correlator. To begin with, we consider the presence of a single long wavelength graviton, which serves to simplify our analysis. Afterwards, we generalize to the case where we consider the sum of modes, so that we can study the behavior as the number of modes becomes large. The first technical hurdle is the fact that the correlator (\ref{grimlock}), upon Taylor expansion of the exponential, involves an infinite number of distinct objects, which we labelled as $\delta_m=\gamma^m_{ij}\hat{k}_i\hat{k}_j$. Fortunately, they are not all independent: since $\gamma$ is a $3\times3$ matrix, we can use the Cayley-Hamilton theorem, which states that every matrix obeys its own characteristic equation: \begin{equation} \gamma^3-[\gamma]\gamma^2+\frac12([\gamma]^2-[\gamma^2])\gamma-\det\gamma=0 \, . \end{equation} The notation $[A]=\text{trace}(A)$ is employed. Because we are using the gauge where $\gamma$ is transverse and traceless, both the trace and determinant of $\gamma$ vanish, so this expression simplifies considerably: \begin{equation} \gamma^3=r^2\gamma \end{equation} where, because we are considering only a single mode, we have $r^2=[\gamma^2]/2=\gamma_+^2+\gamma_\times^2$. This can then be used to give a recursive relation for the different contractions, $\delta_m=r^2\delta_{m-2}$, which may be used separately for the even and odd powers of $\gamma$. Then, the expression for the matrix exponential simplifies to \begin{equation} e^{\gamma}{}_{ij}\hat k_i\hat k_j=1+\frac{\sinh r}{r}\delta_1+\frac{\cosh r-1}{r^2}\delta_2 \, . 
\label{hype} \end{equation} This can be simplified further by noting that $\delta_2=r^2s^2$, $s$ being the sine of the angle between the momentum and the direction of the long mode, and denoting $\gamma_+=r\cos\psi$, $\gamma_\times=r\sin\psi$ to yield \begin{equation} e^{\gamma}{}_{ij}\hat k_i\hat k_j=1+s^2\big(\cosh r-1+\sinh r\cos(\psi-2\phi)\big) \, . \end{equation} Here, $\phi$ is the angle of the momentum $k$, projected onto the plane perpendicular to the long mode, measured from some fixed unit vector $p$, the choice of which will drop out of our final expressions. Now we must evaluate the correlator. For a generic function of the graviton, we have \begin{equation} \langle f(\gamma_{ij})\rangle=\int \frac{\mathcal{D}\gamma_k}{\sqrt{2\pi\langle \gamma_L^2\rangle}}e^{-\frac12 \gamma(k)_{ij}D(k)_{ijkl}\gamma(k)_{kl}}f(\gamma_{ij}) \, . \end{equation} Again, we have treated the graviton as being in the free-field vacuum so that the correlator reduces to a Gaussian integral. Because we are considering only a single momentum for the moment, the majority of the integrations in the path integral become trivial, and the only surviving integration reduces to \begin{equation} \langle f(\gamma(k)_{ij})\rangle=\int d\gamma(k)_+d\gamma(k)_\times \frac{e^{-\frac{r^2}{2\langle \gamma_L^2\rangle}}}{2\pi\langle \gamma_L^2\rangle}f(\gamma_+,\gamma_\times) \, . \label{almost} \end{equation} We see that the kernel of integration only depends on the radial combination $r$, so that the only angular dependence comes from the quantity to be evaluated in the correlator. For a power of the matrix exponential, the radial and angular integrals cannot both be done explicitly, but the angular one can be, yielding a compact expression that generates the loop corrections systematically.
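The single-mode closed forms above can be verified against a brute-force matrix exponential; the following is a sketch in which the values of $r$, $\psi$, $\theta$, $\phi$ are arbitrary, with the long mode taken along the $z$ axis.

```python
import math
import numpy as np

# Sketch: numerical check of the single-mode closed forms.
r, psi = 0.8, 0.4            # gamma_+ = r cos(psi), gamma_x = r sin(psi)
theta, phi = 1.1, 2.3        # polar angles of the unit vector khat
gp, gx = r*math.cos(psi), r*math.sin(psi)
# Transverse-traceless gamma for a single long mode along the z axis:
gamma = np.array([[gp, gx, 0.0], [gx, -gp, 0.0], [0.0, 0.0, 0.0]])
khat = np.array([math.sin(theta)*math.cos(phi),
                 math.sin(theta)*math.sin(phi), math.cos(theta)])

# Brute-force matrix exponential via its Taylor series:
expg = sum(np.linalg.matrix_power(gamma, n)/math.factorial(n) for n in range(30))

lhs = khat @ expg @ khat
d1 = khat @ gamma @ khat                  # delta_1
d2 = khat @ gamma @ gamma @ khat          # delta_2
s2 = math.sin(theta)**2
assert abs(d2 - r**2*s2) < 1e-12          # delta_2 = r^2 s^2
assert abs(lhs - (1 + math.sinh(r)/r*d1
                  + (math.cosh(r) - 1)/r**2*d2)) < 1e-10
assert abs(lhs - (1 + s2*(math.cosh(r) - 1
                  + math.sinh(r)*math.cos(psi - 2*phi)))) < 1e-10
```

The second assertion checks the sinh/cosh form, the third the fully angular form in terms of $\psi-2\phi$.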
Our final expression for the correlator can be written as \begin{equation} \left\langle\hat{\tilde{k}}^{-2\nu}\right\rangle=\int_0^\infty\frac{dr r}{2\langle \gamma_L^2\rangle} e^{-\frac{r^2}{2\langle \gamma_L^2\rangle}}R_\nu(r,s)\label{full} \end{equation} where here \begin{equation} R_\nu(r,s)=T_-^{-\nu}{}_2F_1\left(\frac12,\nu,1,1-\frac{T_+}{T_-}\right)+T_+^{-\nu}{}_2F_1\left(\frac12,\nu,1,1-\frac{T_-}{T_+}\right) \end{equation} and $T_\pm=1+s^2(e^{\pm r}-1)$. Using a computer, it is trivial to expand this expression around $r=0$ to recover the loop corrections order by order. These agree with the results obtained previously in \cite{1005.1056}. To illustrate the power of this expression, we display the four loop correction to the power spectrum: \begin{eqnarray} \left\langle\hat{\tilde{k}}^{-2\nu}\right\rangle=1+\frac{(1-n_s)(4-n_s)}{15}\langle \gamma_L^2\rangle\bigg[1+\frac{1-5n_s+n_s^2}{21}\langle \gamma_L^2\rangle\nonumber\\ +\frac{311 + 175 n_s + 590 n_s^2 - 250 n_s^3 + 25 n_s^4}{15015}\langle \gamma_L^2\rangle^2\nonumber\\ +\frac{-8781 - 14595 n_s - 3331 n_s^2 - 1875 n_s^3 + 2375 n_s^4 - 525 n_s^5 + 35 n_s^6}{765765}\langle \gamma_L^2\rangle^3\bigg] \, . \end{eqnarray} This is not a very useful expression, since it is doubtful we will ever be able to observe even the one loop correction, but it demonstrates that this reformulation of the problem has something new to offer. One important thing to note, however, is that each term in this expression aside from the zeroth is proportional to $1-n_s$, indicating that in the exact de Sitter limit this expression reduces to a constant. This will be rigorously shown in the next subsection. What is more important is having the full expression (\ref{full}), which allows us to study its properties to gain insights into the effects of long wavelength modes on the power spectrum at a non-perturbative level. 
\subsection{The de Sitter Limit} While it is somewhat trivial to take the de Sitter limit of the scalar loop expression, the de Sitter limit of the tensor expression (\ref{full}) is more nontrivial. However, since we have the full expression, it also becomes possible to address what happens to loop corrections in the de Sitter limit $n_s-1\rightarrow0$. Though it may have been guessed that the effect will vanish, it is still comforting to verify this explicitly. To do so, it behooves us to do the angular average of (\ref{almost}) first, since the corrections only vanish after this procedure. Taking the average, we find \begin{eqnarray} \left\langle\hat{\tilde{k}}^{-2\nu}\right\rangle=\frac12\int_{-1}^1 d\mu\int_0^\infty drr\frac{e^{-\frac{r^2}{2\langle\gamma_L^2\rangle}}}{2\pi\langle\gamma_L^2\rangle}\int_0^{2\pi}d\psi\left(1+(1-\mu^2)A\right)^{-\nu}\nonumber\\ =\int_0^\infty drr\frac{e^{-\frac{r^2}{2\langle\gamma_L^2\rangle}}}{4\pi\langle\gamma_L^2\rangle}\int_0^{2\pi}d\psi\frac{{}_2F_1\left(1,\frac32-\nu,-\frac12,\frac{A}{1+A}\right)-(1+2(\nu-2)A){}_2F_1\left(1,\frac32-\nu,\frac12,\frac{A}{1+A}\right)}{(\nu-1)A(1+A)}\nonumber \end{eqnarray} where $A=\cosh r-1+\sinh r\cos(\psi-2\phi)$. This expression makes it easy to see how the correlator behaves in the de Sitter limit ($\nu\rightarrow3/2$). From the definition of hypergeometric functions, ${}_2F_1(a,0,c,x)=1$ for all $x$. Then the integral becomes much easier to perform: \begin{eqnarray} \left\langle\hat{\tilde{k}}^{-2\nu}\right\rangle=\int_0^\infty drr\frac{e^{-\frac{r^2}{2\langle\gamma_L^2\rangle}}}{4\pi\langle\gamma_L^2\rangle}\int_0^{2\pi}d\psi\frac{2}{\cosh r+\sinh r\cos(\psi-2\phi)}\nonumber\\ =\int_0^\infty drr\frac{e^{-\frac{r^2}{2\langle\gamma_L^2\rangle}}}{4\pi\langle\gamma_L^2\rangle}4\pi\nonumber\\ =1 \, . \end{eqnarray} \subsection{Sum of Modes} The results become much more complicated in the case where the background graviton is a sum of modes, but we can display some results.
This will be essential for probing the effects when the number of modes tends to infinity, as is the case for long inflation, for example. This scenario introduces further complications to our previous reduction of the path integral, but it is possible to still make simplifications here as well. The biggest difference is that the condition $D=\det\gamma=0$ no longer follows from the transversality condition, though $[\gamma]=0$ still holds. This makes the recursion relation for $\delta_m$ more complicated: \begin{equation} \delta_m=r^2\delta_{m-2}+D\delta_{m-3} \, . \end{equation} Unfortunately, for this generalized relation the evens and odds do not split, so that the exponential (\ref{grimlock}) we are interested in does not reduce to sinhs and coshs. Still, we can solve this equation inductively, if we set \begin{equation} \delta_m=A_m\delta_2+B_m\delta_1+C_m \, . \end{equation} Then the recursion relation can be written as \begin{equation*} \left( \begin{array}{ccc} A_{m+1} \\B_{m+1} \\C_{m+1} \end{array} \right)=\mathbf{M} \left( \begin{array}{ccc} A_{m} \\B_{m} \\C_{m} \end{array} \right)=\mathbf{M}^{m+1} \left( \begin{array}{ccc} 0 \\0 \\1 \end{array} \right),\quad\mathbf{M} = \left( \begin{array}{ccc} 0 & 1 & 0 \\ r^2 & 0 & 1 \\ D & 0 & 0 \end{array} \right) \end{equation*} and the matrix exponential can be written \begin{equation*} e^{\gamma}_{ij}\hat k_i\hat k_j=\sum_{m=0}^\infty\frac{1}{m!}\delta_m=(\delta_2, \delta_1, 1)e^{\mathbf{M}}\left( \begin{array}{ccc} 0 \\0 \\1 \end{array} \right) \, . 
\end{equation*} All that remains is to exponentiate the matrix $\mathbf{M}$, and we arrive at the form \begin{equation} e^{\gamma}_{ij}\hat k_i\hat k_j=f_0(r,D)+f_1(r,D)\delta_1+f_2(r,D)\delta_2 \end{equation} where \begin{equation} f_i(r,D)=\sum_\lambda \frac{e^\lambda p_i(\lambda)}{3\lambda^2-r^2},\quad p_0=\lambda^2-r^2,\quad p_1=\lambda,\quad p_2=1 \end{equation} and the sum is over the three characteristic roots of the matrix, defined by $\lambda^3-r^2\lambda-D=0$. These functions reduce to (\ref{hype}) when the determinant is taken to be 0. The path integral is now much harder to compute, both because the functions to be integrated are more complicated, and because with a sum over momenta the path integral does not collapse to a finite-dimensional integral as readily. \section{Full Probability Distribution}\label{fulldist} In previous sections we demonstrated the utility of our approach by calculating several average quantities in the non-perturbative regime. Now we turn to the fact that the average does not necessarily accurately capture what a typical observer would see. To see this, we can compute the variance of the power spectrum, for example, by noting that \begin{equation} \langle \tilde P^2\rangle =\langle e^{-2(1-n_s)\zeta_L}\rangle P_0^2=e^{2(1-n_s)^2\langle\zeta_L^2\rangle}P_0^2 \end{equation} so that, in the non-perturbative regime, $\sigma=e^{\frac12(1-n_s)^2\langle\zeta_L^2\rangle}\mu\gg\mu$, and the width of the distribution is very broad. In fact, this can be extended to $\langle \tilde P^m\rangle =e^{\frac{m^2}{2}(1-n_s)^2\langle\zeta_L^2\rangle}P_k^m$. Thus, it does not have the property of ``self-averaging'' that we encounter in situations like thermodynamics, where the distribution is so sharply peaked that we can characterize the behavior of a system by solely studying average quantities.
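Returning briefly to the tensor resummation, the closed form for $e^{\gamma}_{ij}\hat k_i\hat k_j$ admits a direct numerical check against the matrix exponential. The sketch below assumes the Cayley--Hamilton conventions $r^2=\tfrac12[\gamma^2]$ and $D=\det\gamma$, so that the roots solve $\lambda^3-r^2\lambda-D=0$ with $(p_0,p_1,p_2)=(\lambda^2-r^2,\lambda,1)$; the test matrix is an arbitrary symmetric traceless example, not a physical graviton configuration:

```python
import numpy as np
from scipy.linalg import expm

# Verify e^gamma_ij k_i k_j = f0 + f1*delta1 + f2*delta2 for a generic
# symmetric traceless gamma.  Conventions assumed here (Cayley-Hamilton
# for a traceless 3x3 matrix): r^2 = tr(gamma^2)/2, D = det(gamma), and
# the eigenvalues solve lambda^3 - r^2 lambda - D = 0.
g = np.array([[0.3,  0.5,  0.1],
              [0.5, -0.2,  0.4],
              [0.1,  0.4, -0.1]])        # symmetric, traceless

r2 = 0.5 * np.trace(g @ g)
D = np.linalg.det(g)
lam = np.linalg.eigvalsh(g)              # the three characteristic roots
assert np.allclose(lam**3 - r2 * lam - D, 0.0)

# p0 = lambda^2 - r^2, p1 = lambda, p2 = 1
f = [np.sum(np.exp(lam) * p / (3.0 * lam**2 - r2))
     for p in (lam**2 - r2, lam, np.ones(3))]

k = np.array([0.2, -0.7, 0.4]); k /= np.linalg.norm(k)
d1, d2 = k @ g @ k, k @ g @ g @ k
assert np.isclose(k @ expm(g) @ k, f[0] + f[1] * d1 + f[2] * d2)
```

Setting $D=0$ in this check reproduces the $\sinh$/$\cosh$ coefficients of the single-mode case.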
Fortunately, our methods allow us to go beyond computing moments, and we can recover the full probability distribution for measuring any given value of the power spectrum. This can be found by taking the expectation value of a delta function centered at a particular value of $P_k$, leading to the full distribution \begin{equation} p(\tilde P_k)=\frac{1}{\sqrt{2\pi(1-n_s)^2\langle\zeta_L^2\rangle}\tilde P_k}e^{-\frac{1}{2(1-n_s)^2\langle\zeta_L^2\rangle}\log^2\left(\frac{\tilde P_k}{P_k}\right)} \, .\label{tiltdist} \end{equation} This is the log-normal distribution, which should come as no surprise, since it comes from a product of exponentials of Gaussian variables. This produces the correct moments we computed above. Now that we've seen the full distribution, we can elucidate some of its properties as they pertain to corrections to the power spectrum. First, we'd like to know the limiting behaviors: when the quantity $(1-n_s)^2\langle\zeta_L^2\rangle$ becomes small, this distribution becomes a delta function centered at the standard result. In the opposite limit, however, it becomes a power law \begin{equation} p(\tilde P_k)\rightarrow \left\{ \begin{array}{rl} \delta(\tilde P_k-P_k) & (1-n_s)^2\langle\zeta_L^2\rangle\rightarrow 0\\ \frac{1}{\sqrt{2\pi(1-n_s)^2\langle\zeta_L^2\rangle}}\frac{1}{\tilde P_k} & (1-n_s)^2\langle\zeta_L^2\rangle\rightarrow \infty \end{array} \right. \, . \end{equation} Now it becomes clear that looking at the average quantities cannot capture the full behavior of the theory, because of the heavy tail of the distribution. In fact, the mean of the log-normal distribution lies at an unremarkable point on the tail, with no assurance that it will be very close to a typical observation. Let us now try to make the interpretation of this distribution clear: it represents the probability that, on a given time slice set in the uniform density gauge, an observer will measure a particular value of the power spectrum.
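Since (\ref{tiltdist}) is an ordinary log-normal, its normalization, its moments, and the location of its mean on the tail can all be checked directly. A sketch, where $s$ stands for $(1-n_s)^2\langle\zeta_L^2\rangle$ and the values $s=4$, $P_k=1$ are illustrative placeholders rather than numbers from the text:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# s plays the role of (1-n_s)^2 <zeta_L^2>; s = 4 is an illustrative
# non-perturbative value, P_k = 1 an arbitrary normalization.
s, Pk = 4.0, 1.0
dist = stats.lognorm(s=np.sqrt(s), scale=Pk)   # median P_k

# normalization (split at the median to help the quadrature)
norm = quad(dist.pdf, 0.0, Pk)[0] + quad(dist.pdf, Pk, np.inf)[0]
assert abs(norm - 1.0) < 1e-6

# moments <P~^m> = exp(m^2 s / 2) P_k^m, via the substitution x = P_k e^y
for m in (1, 2, 3):
    val, _ = quad(lambda y: np.exp(m * y - y**2 / (2 * s)) / np.sqrt(2 * np.pi * s),
                  m * s - 12 * np.sqrt(s), m * s + 12 * np.sqrt(s))
    assert np.isclose(val, np.exp(0.5 * m**2 * s))

# fraction of observers measuring less than the mean value: Phi(sqrt(s)/2)
print(dist.cdf(dist.mean()))   # ~ 0.84 for s = 4: the mean sits on the tail
```

The last line quantifies the statement that the mean lies at an unremarkable point on the tail: for $s=4$ roughly $84\%$ of observers would measure less than the mean value.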
The power spectrum is, after all, set by the microphysics of the potential as $P_k\propto V^3/V'^2$, and this does not change. What this encapsulates, however, is where on the potential an observer would perform the measurement at that time. The accumulation of long wavelength modes changes the scale factor locally, which in turn can be interpreted as a shift in the number of e-folds at that point. Put another way, if one thinks of the end of inflation as being altered by the accumulation of many stochastic modes, pockets will tend to develop where inflation ends slightly earlier or later than average. This offers a simple way to interpret our average results (\ref{ravg}), (\ref{fnlavg}). At a given location, the value of the physical momentum is continually jostled by long wavelength modes, resembling a Brownian process (for $\log(k)=N$). As the number of modes becomes large, the momentum will either tend toward zero or infinity. If the power spectrum has a constant tilt, at one of these endpoints it diverges, and thus the average of the two outcomes yields a mean result that is larger than the background value, no matter whether the spectrum is red or blue. The same goes for $f_{NL}$, if the signs of the tilts at the two different locations are equal. We can also understand where our result goes astray: if the spectrum is red in one portion of the spectrum and blue in another, then no matter which extreme the momentum tends toward, one of the two will go to zero, and the non-Gaussianity will be suppressed. This is unphysical, however, because the actual result would keep the ratio of the two different momentum modes fixed, which, when taking into account the scale dependence of the tilt, would at some point give them both the same sign.
This also suggests how to go beyond the constant tilt approximation: if we have full expressions for the power spectra, we can evaluate them with the probability distribution for $k$, which we can extract from (\ref{tiltdist}) by using the change of variables $P_k=P_k^0k^{n_s-1}$: \begin{equation} p(\tilde k)=\frac{1}{\sqrt{2\pi\langle\zeta_L^2\rangle}\tilde k}e^{-\frac{1}{2\langle\zeta_L^2\rangle}\log^2(\tilde k/k)} \, . \end{equation} Even though we used the constant tilt expression (\ref{tiltdist}) to derive this, it is more general. Now, if we want to transform this to a distribution for the physical observable $P_k$, we see that the prefactor is actually unchanged, as it just comes from the Jacobian of the transformation. Thus in the non-perturbative regime the power spectrum is given by a power law distribution, with the same form as indicated before. However, what does change is the quantity in the exponential, which is now interpreted as $k(P_k)$, which requires inverting the power spectrum (which may or may not be possible). This complicates the condition for the onset of the non-perturbative regime, but not the scaling behavior. The actual scaling behavior for the power spectrum can be written simply in the non-perturbative regime, and depends on the precise model of inflation, since the tilt can be written as a function of the power spectrum. Then \begin{equation} p( P_k)\rightarrow \frac{1}{\sqrt{2\pi\langle\zeta_L^2\rangle}}\frac{1}{\lvert n_s(P_k)-1\rvert P_k} = \frac{1}{\sqrt{2\pi\langle\zeta_L^2\rangle}}\bigg\lvert\frac{dN}{dP_k}\bigg\rvert \, . \end{equation} This simple expression suggests that knowledge of the $N$ dependence of the spectral tilt determines the ultra-large scale structure of the universe. This usually comes in the form of a power law $p(P_k)\rightarrow P_k^{-q}$, and we have tabulated this dependence for several representative models in Table 1. In Fig. 
\ref{distribution} we display realizations of the ultra-large scale distribution of e-folds, for different power law dependences. These look qualitatively quite different, and indicate that an accurate determination of the functional dependence on the tilt allows us to distinguish between extremely distinct possibilities for the organization of the universe as a whole. \begin{table}[h] \vskip.4cm \begin{center} {\renewcommand{\arraystretch}{1.5} \renewcommand{\tabcolsep}{0.2cm} \begin{tabular}{|c|c|c|} \hline Model & $m$ & $q$ \\ \hline $\lambda\phi^n$ & $\frac12$ & $\frac{4+3n}{4+2n}\in\left(1,\frac32\right)$\\ $V_0-\lambda\phi^n$ & $\frac{2n-2}{n-2}$ & $\frac{5n-4}{4n-4}$ \\ plateau inflation & 2 & $\frac54$\\ \hline \end{tabular}} \end{center} \caption{Representative values of the power law dependence for various models of inflation.}\label{tableone} \end{table} \begin{centering} \begin{figure*}[h] \centering \includegraphics[width=16cm]{dist5.pdf} \caption{The ultra-large scale distribution for the number of e-folds at a given time, in increasing levels of heterogeneity. Since the probability is simply proportional to the tilt, for weak dependence on the number of e-folds lots of little patches are intermingled, while strong dependence gives rise to massive domains. Here each pixel represents a horizon-sized region, and the color indicates the strength of the power spectrum measured.} \label{distribution} \end{figure*} \end{centering} Additionally, recall that we made a further approximation by omitting diagrams of the form displayed in Fig. \ref{not}. We now argue that these do not significantly alter the form of (\ref{tiltdist}), even though they are not hierarchically suppressed with respect to the diagrams we sum over. This is an initial cause for concern, especially since the log-normal distribution we have found is not one of the stable distributions guaranteed to retain its form for a suitable class of random variables.
Instead, it is known in the statistics literature \cite{tail} that correlations between the Gaussian variables do deform even the tail of the log-normal distribution. These deformations can be seen to be subleading, however, by noting that corrections away from the free-field vacuum are encapsulated by the in-in formalism in the form \begin{equation} \langle\mathcal{O}\rangle=\langle\mathcal{O}\rangle_0-i\langle[H_{int},\mathcal{O}]\rangle_0+\dots \end{equation} for any operator $\mathcal{O}$. The interaction Hamiltonian for gravity will contain an infinite number of terms of the form $\zeta^m\gamma^n$, and these will serve to act on the original result as derivatives, such as $\langle\zeta\mathcal{O}\rangle=P_\zeta\partial_\zeta\langle\mathcal{O}\rangle$, and so on for higher order corrections. However, when acting on the log-normal distribution, derivatives turn into terms that are suppressed by powers of $\log(P_k)/P_k$. Thus, the direct dependence on $P_k$ cancels, and the leading contribution is logarithmic. However, in inflation the interaction terms are suppressed by the slow-roll parameters, and so these subleading results will only become important when $\log(P_k)>1/\epsilon$. In appendix \ref{appendix} we provide more arguments for neglecting such diagrams, further showing that our results are robust. Before ending this subsection, it is important to point out that one should interpret the probability distribution that we have discussed with care. It is the probability distribution of the comoving curvature perturbation at some fixed global comoving time slice; however, a local observer can only measure according to a locally defined clock and has no access to a globally defined clock. Therefore the probability distribution, which is relevant for describing an observer like us, who can only see the last $60$ e-folds of inflation, is different from the probability distribution of the curvature perturbation in the entire inflated patch.
This point was discussed in more depth in \cite{1103.5876,Bartolo:2007ti,Salopek:1990re,Salopek:1990jq}, while here we will be more interested in the globally defined probability distribution, which, in the following subsection, we will use to find a criterion for eternal inflation. \subsection{Reheating Volume} As an application of our probability distribution, we use it to calculate the expected reheating volume for some simple inflationary scenarios. This serves as a diagnostic for eternal inflation, since if this quantity diverges in the late-time limit we can tell that inflation never ends. As in \cite{0802.1067}, the reheating volume can be expressed as \begin{equation} \langle V\rangle=V_0\int d\Delta N \,e^{3\Delta N}\,p(\Delta N) \, . \end{equation} Here, $V_0$ is some arbitrary initial volume, which we have pulled outside the integral using statistical homogeneity. Additionally, we have neglected the fact that once a portion of the universe reheats, the Brownian jitter of long wavelengths ceases, precluding that portion from spontaneously undergoing inflation again, but this was shown in \cite{0802.1067} to give a negligible contribution to the overall evolution. Then we can use our expression (\ref{tiltdist}): when expressed in terms of the number of e-folds, it simply becomes a Gaussian distribution \begin{equation} p(\Delta N)=\frac{1}{\sqrt{2\pi\langle\zeta_L^2\rangle}}e^{-\frac{\Delta N^2}{2\langle\zeta_L^2\rangle}} \, . \end{equation} To specify a model of inflation, we need the dependence of the power spectrum on time, so that we can use (\ref{coxrelate}) to find the variance of the long modes. If we take the ansatz $P_k=P_k^0(N/N_0)^m=P_k^0(1+\Delta N/N_0)^m$, then $\langle \zeta_L^2\rangle=\frac{N_0P_k^0}{m+1}(1+\Delta N/N_0)^{m+1}$. This can then be used to find the average reheating volume: \begin{equation} \langle V\rangle=V_0\int d\Delta N A(\Delta N)e^{B(\Delta N)},\quad B(\Delta N)=3\Delta N-\frac{m+1}{2N_0P_k^0}\frac{\Delta N^2}{(1+\Delta N/N_0)^{m+1}} \, .
\end{equation} Here $A(\Delta N)$ is some sub-exponential prefactor that does not determine the convergence properties of the integral. What does determine the convergence is the asymptotic behavior of the quantity inside the exponential: if $B(\Delta N)$ asymptotes to $-\infty$ as $\Delta N\rightarrow + \infty$, the integral converges. Otherwise, it does not. The conditions for this are rather simple: since the quantity is \begin{equation} B(\Delta N)\rightarrow3\Delta N-\frac{m+1}{2P_k^0}N_0^m \Delta N^{1-m} \end{equation} the integral converges for $m<0$, and diverges for $m>0$. The case $m=0$ is more subtle, despite it being the most discussed in the literature, as it corresponds to a perfectly flat power spectrum. Then, if $P_k=P_k^0$, $\langle\zeta_L^2\rangle=P_k^0\Delta N$, and \begin{equation} B(\Delta N)=\left(3-\frac{1}{2P_k}\right)\Delta N \, . \end{equation} In this case both terms are linear, and the convergence depends on the coefficient: if $P_k<1/6$, inflation proceeds classically, while if $P_k>1/6$ inflation is in the eternal regime, and quantum fluctuations dominate the evolution. This standard analysis has been used to diagnose many inflationary scenarios, for instance in \cite{Barenboim:2016mmw}, where they concluded that hilltop inflation is capable of yielding eternal inflation. What this does not take into account is that the strength of the fluctuations can vary with location on the inflationary potential, and thus time. An analysis that takes the time dependence of the power spectrum into account is therefore more realistic. For instance, in the parameterization we have chosen, $m=1/(3-2q)$ from Table 1, so that the condition for $m$ to be negative is $q>3/2$. For hilltop inflation, with $V(\phi)=V_0-\lambda\phi^n$, $m$ is negative only in the range $1<n<2$, so the reheating volume diverges for all other $n$, confirming the claims of \cite{Barenboim:2016mmw} that hilltop inflation can generically be eternal.
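The convergence criterion can be checked by evaluating $B(\Delta N)$ at large $\Delta N$. In the sketch below $N_0$ is a placeholder and $P_k^0$ is set to 1 for simplicity; the signs, not the magnitudes, carry the physics:

```python
import numpy as np

# Asymptotics of B(dN) = 3 dN - (m+1)/(2 N0 P0) * dN^2 / (1 + dN/N0)^(m+1)
# for the ansatz P_k = P0 (1 + dN/N0)^m.  N0 = 60 and P0 = 1 are
# placeholder values.
N0, P0 = 60.0, 1.0

def B(dN, m):
    return 3.0 * dN - (m + 1.0) / (2.0 * N0 * P0) * dN**2 / (1.0 + dN / N0)**(m + 1.0)

big = 1e8
print(B(big, -0.5) < 0)   # m < 0: B -> -inf, the integral converges
print(B(big, +0.5) > 0)   # m > 0: B -> +inf, eternal inflation

# m = 0: B = (3 - 1/(2 P0)) dN, with the threshold at P0 = 1/6
for P in (0.1, 1.0 / 6.0, 0.3):
    print(P, 3.0 - 1.0 / (2.0 * P))
```

The $m=0$ loop reproduces the standard threshold: the coefficient is negative for $P_k<1/6$ (classical inflation) and positive for $P_k>1/6$ (eternal regime).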
Also, plateau inflation can be seen as a kind of limiting case of hilltop inflation in this regard, giving the same distribution as the $n\rightarrow\infty$ limit. Monomial models are always eternal, independent of the power of the field that appears in the potential. One other feature of this expression to note is that the models which yield eternal inflation do so independently of the starting conditions. This may seem to contradict standard intuition that they only achieve this behavior if the initial field value is above some critical threshold. However, the quantity we are calculating is the reheating volume averaged over many realizations of the dynamics. There will always be a (potentially very small) probability of fluctuations pushing the inflaton field further up the potential than its starting point, all the way to the eternal regime. Once this happens, the volume in this realization is infinite, and because the probability does not vanish, the quantity will be dominated by this realization. Other diagnostics must be used if one is interested in the relative likelihood of eternal inflation for a given set of initial conditions, but this quantity gives us information about whether eternal inflation is possible at all for a given potential. \section{Conclusions} In this work we have shown that a coordinate transformation, a transformation generated by the charge associated with the asymptotic symmetries of de Sitter, and a Bogoliubov transformation are three equivalent ways of capturing the infrared effects on the local vacua in de Sitter. Even though the perturbative description of inflation breaks down at the Page time, we have shown that making use of the equivalence above it is possible to obtain non-perturbative results that accurately reflect the dynamics of the system.
This is done by resumming an infinite class of diagrams, which results in explicit expressions for the case of long wavelength scalars, and highly simplified expressions for the more complicated case of tensors. We used this to explicitly compute the physical quantity corresponding to the probability that a given value of the power spectrum is measured on a flat time slice. In principle, this method may be used to calculate other observables, such as the fractal dimension of the reheating surface, as in \cite{Aryal:1987vn,Vilenkin:1992uf,gr-qc/0111048}. We have also used our calculations to investigate if a physical effect can in principle be observed by a local observer confined to a single Hubble patch\footnote{Here we are not considering the possibility of the de Sitter expansion ending and modes re-entering the horizon, as discussed in \cite{Giddings:2011zd}.}, if they are willing to patiently tend to their apparatus for a long enough time. We found that the answer is non-trivial when including quantum fluctuations of the detector, which appear to obstruct the measurement of an effect even for a patient observer in de Sitter space. It would be interesting to generalize this discussion, to test if there is a new type of UV/IR cosmic censorship for patient observers in de Sitter, as we are tempted to conjecture. \subsection*{Acknowledgements} We would like to thank Paolo Creminelli, Jaume Garriga, Steve Giddings, Atsushi Higuchi, Nemanja Kaloper, Alexandros Kehagias, Antonio Riotto and Sergey Sibiryakov for interesting discussions or comments. RZF is supported by ERC Starting Grant HoloLHC-306605. MSS is supported by Villum Fonden grant 13384. CP3-Origins is partially funded by the Danish National Research Foundation, grant number DNRF90.
\section{Anisotropic shuriken model} \label{sec:Model} The shuriken lattice is made of corner-sharing triangles with 6 sites per unit cell [see Fig.~\ref{fig:UnitCell}]. As opposed to its kagome parent where all spins belong to hexagonal loops, the shuriken lattice forms two kinds of loops made of either 4 or 8 sites. As a consequence, $2/3$ of the spins in the system belong to the A-sublattice, while the remaining $1/3$ of the spins form the B-sublattice. Let us respectively define $J_{AA}$ and $J_{AB}$ as the coupling constants between A-sites on the square plaquettes, and between A- and B-sites on the octagonal plaquettes. The Hamiltonian of the model can be written as: \begin{equation} H = - J_{AA}\;\sum_{\langle i j \rangle_{AA}} \sigma^{A}_i \sigma^{A}_j - J_{AB}\;\sum_{\langle i j \rangle_{AB}} \sigma^{A}_i \sigma^{B}_j \label{eq:ham} \end{equation} where we consider Ising spins $\sigma_{i}=\pm 1$ with nearest-neighbor coupling. There is no frustration for ferromagnetic $J_{AA} = +1$ where the system undergoes a phase transition with spontaneous $\mathbb{Z}_{2}$ symmetry breaking for $J_{AB}\neq 0$. We shall thus focus on antiferromagnetic $J_{AA} = -1$, which will be our energy and temperature scale of reference. The thermodynamics will be discussed as a function of the coupling ratio~[\onlinecite{Derzhko2006},\onlinecite{Rousochatzakis2013},\onlinecite{Ralko2015}] \begin{equation} x = \dfrac{J_{AB}}{J_{AA}}, \label{eq:coupling} \end{equation} with ferro- and antiferromagnetic $J_{AB}$. \section{Phase Diagram} \label{sec:PhaseDiagram} The Hamiltonian of equation~(\ref{eq:ham}) is invariant under the transformation \begin{align} \sigma^{A}\rightarrow -\sigma^{A} \nonumber\\ J_{AB} \rightarrow - J_{AB} \label{eq:symmetricPD} \end{align} All quantities derived from the energy, and especially the specific heat $C_{h}$ and entropy $S$, are thus the same for $x$ and $-x$. Their respective magnetic phases are related by reversing all spins of the A-sublattices. 
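The mapping of equation~(\ref{eq:symmetricPD}) can be verified on arbitrary couplings: flipping all A-spins leaves the $J_{AA}$ bonds (two A-sites each) invariant and flips the sign of every $J_{AB}$ bond (one A-site each). The sketch below uses random placeholder bond lists, not the actual shuriken adjacency, since only this bond structure matters for the symmetry:

```python
import numpy as np

# Check that (sigma^A -> -sigma^A, J_AB -> -J_AB) leaves
# H = -J_AA sum sigma^A sigma^A - J_AB sum sigma^A sigma^B invariant.
# The bond lists are random placeholders, not the shuriken adjacency.
rng = np.random.default_rng(3)
nA, nB = 16, 8
sA = rng.choice([-1, 1], size=nA)
sB = rng.choice([-1, 1], size=nB)
AA = [(rng.integers(nA), rng.integers(nA)) for _ in range(30)]  # A-A bonds
AB = [(rng.integers(nA), rng.integers(nB)) for _ in range(30)]  # A-B bonds

def energy(sA, sB, Jaa, Jab):
    e = -Jaa * sum(sA[i] * sA[j] for i, j in AA)
    e += -Jab * sum(sA[i] * sB[j] for i, j in AB)
    return e

Jaa, Jab = -1.0, 0.37  # placeholder couplings
assert abs(energy(sA, sB, Jaa, Jab) - energy(-sA, sB, Jaa, -Jab)) < 1e-12
```

Because the invariance holds bond by bond, it holds for any lattice with this two-sublattice bond structure, in particular the shuriken lattice.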
\subsection{Long-range order: ${|x|>1}$} \label{sec:LRO} When the octagonal plaquettes are dominating ($x\rightarrow \pm \infty $), the shuriken lattice becomes a decorated square lattice, with A-sites sitting on the bonds between B-sites. Being bipartite, the decorated square lattice is not frustrated and orders via a phase transition of the 2D Ising Universality class~\cite{Sun06a} by spontaneous $\mathbb{Z}_{2}$ symmetry breaking. Non-universal quantities such as the transition temperature can be exactly computed by using the decoration-iteration transformation~\cite{Fisher59a,Sun06a,Strecka15b} [see appendix~\ref{subsec:exactTc}] \begin{eqnarray} T_{c}=\frac{2 J_{AB}}{\ln\left(\sqrt{2}+1 +\sqrt{2+2\sqrt{2}}\right)} \approx 1.30841 J_{AB} \label{eq:exactTc} \end{eqnarray} The low-temperature ordered phases, displayed in Fig.~\ref{fig:PhaseDiagram}.(b) and~\ref{fig:PhaseDiagram}.(c), remain the ground states of the anisotropic shuriken model for $x<-1$ and $x>1$ respectively. The persistence of the 2D Ising Universality class down to $|x|=1^{+}$ is not necessarily obvious, but is confirmed by finite-size scaling from Monte Carlo simulations [see appendix~\ref{sec:PhaseTransition}]. These two ordered phases are respectively ferromagnetic (FM, $x<-1$) and staggered ferromagnetic (SFM, $x>1$) [see Fig.~\ref{fig:PhaseDiagram}.(b,c)]. The staggering of the latter comes from all spins on square plaquettes pointing in one direction, while the remaining ones point the other way. This leads to the rather uncommon consequence that fully antiferromagnetic couplings -- both $J_{AA}$ and $J_{AB}$ are negative for $x>1$ -- induce long-range ordered (staggered) ferromagnetism, reminiscent of Lieb ferrimagnetism~\cite{Lieb89a} as pointed out in Ref.~[\onlinecite{Rousochatzakis2013}] for quantum spins. The existence of ferromagnetic states among the set of ground states of Ising antiferromagnets is not rare, with the triangular and kagome lattices being two famous examples. 
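As a quick check of equation~(\ref{eq:exactTc}), the numerical prefactor evaluates to:

```python
from math import log, sqrt

# T_c / J_AB = 2 / ln(sqrt(2) + 1 + sqrt(2 + 2 sqrt(2)))
tc = 2.0 / log(sqrt(2.0) + 1.0 + sqrt(2.0 + 2.0 * sqrt(2.0)))
print(tc)  # ~ 1.30841
```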
But such ferromagnetic states are usually part of a degenerate ensemble where no magnetic order prevails on average. Here the lattice anisotropy is able to induce ferromagnetic order in an antiferromagnetic model by lifting its ground-state degeneracy at $|x|=1$ (see below). This is interestingly quite the opposite of what happens in the spin-ice model~\cite{Harris97a}, where frustration prevents magnetic order in a ferromagnetic model by stabilizing a highly degenerate ground state. \begin{figure*}[t] \centering\includegraphics[width=1 \textwidth]{MC_HT.eps} \caption{{\bf Multiple crossovers between the paramagnetic, spin-liquids and binary regimes} as observed in the specific heat $C_h$, entropy $S$ and reduced magnetic susceptibility $\chi \, T$. The models correspond to a) $x = \pm1$, b) $x = \pm 0.9$ and c) $x = 0$. There is no phase transition for this set of parameters, which is why the Husimi tree calculations (lines) perfectly match the Monte Carlo simulations (circles) for all temperatures. The double crossover is present for $x = \pm 0.9$, with the low-temperature regime being the same as for $x=0$, as confirmed by its entropy and susceptibility. The entropy is obtained by integration of $C_{h}/T$, setting $S(T\rightarrow +\infty) = \ln 2$. The vertical dashed lines represent estimates of the crossover temperatures determined by the local specific-heat maxima. The temperature axis is on a logarithmic scale. All quantities are given per number of spins and the Boltzmann constant $k_{B}$ is set to 1. } \label{fig:MCHT} \end{figure*} \subsection{Binary paramagnet: ${|x|<1}$} The central part of the phase diagram is dominated by the square plaquettes. The ground states are the same for all $|x|<1$. A sample configuration of these ground states is given in Fig.~\ref{fig:PhaseDiagram}.(d), where antiferromagnetically ordered square-plaquettes are separated from each other via spins on sublattice B. 
The antiferromagnetic square-plaquettes locally order in two different configurations equivalent to a superspin $\Xi$ with Ising degree of freedom. \begin{eqnarray} \Xi=\sigma^{A}_{1}-\sigma^{A}_{2}-\sigma^{A}_{3}+\sigma^{A}_{4}=\pm 4, \label{eq:superspin} \end{eqnarray} where the site indices are given in Fig.~\ref{fig:UnitCell}. These superspins are the classical analogue of the tetramer objects observed in the spin$-1/2$ model~\cite{Rousochatzakis2013}. At zero temperature, the frustration of the $J_{AB}$ bonds perfectly decouples the superspins $\Xi$ from the B-sites. The system can then be seen as two interpenetrating square lattices: one made of superspins, the other one of B-sites. We shall refer to this phase as a \textit{binary} paramagnet (BPM). The perfect absence of correlations beyond square plaquettes at $T=0$ allows for a simple determination of the thermodynamics. Let $N_{uc}$ and $N=6\,N_{uc}$ be respectively the total number of unit cells and spins in the system, and $\langle X \rangle$ be the statistical average of $X$. There are $N_{uc}$ square plaquettes and $2N_{uc}$ B-sites, giving rise to an extensive ground-state entropy \begin{eqnarray} S_{\rm BPM}=k_{B}\,\ln\left(2^{N_{uc}}\,2^{2N_{uc}}\right)=\dfrac{N}{2}k_{B}\,\ln 2 \label{eq:entropyBPM} \end{eqnarray} which turns out to be half the entropy of an Ising paramagnet. As for the magnetic susceptibility $\chi$, it diverges as $T\to 0^{+}$. But the reduced susceptibility $\chi\, T$, which is nothing less than the normalized variance of the magnetization \begin{eqnarray} \chi\,T &=& \dfrac{1}{N}\left(\sum_{i,j}\langle \sigma_{i}\sigma_{j}\rangle\,-\,\langle \sigma_{i}\rangle\langle \sigma_{j}\rangle\right),\nonumber\\ &=&1+\dfrac{1}{N}\sum_{i\neq j}\langle \sigma_{i}\sigma_{j}\rangle, \label{eq:Chi} \end{eqnarray} converges to a finite value in the BPM \begin{eqnarray} \chi\,T|_{\rm BPM}&=&\dfrac{1}{3}. 
\label{eq:bulkChi} \end{eqnarray} \subsection{Classical spin liquid: ${|x|\sim 1}$} There is a sharp increase of the ground-state degeneracy at $|x|=1$, when the binary paramagnet and the (staggered) ferromagnet meet. As is common for isotropic triangle-based Ising antiferromagnets, 6 out of 8 possible configurations per triangle minimize the energy of the system. As opposed to the BPM, one does not expect a cutoff of the correlations [see section~\ref{sec:corr}], making these phases cooperative paramagnets~\cite{Villain79a}, also known as classical spin liquids. Due to the high entropy of these cooperative paramagnets, the SL$_{1,2}$ phases spread to the neighboring region of the phase diagram for $|x|\sim 1$ and $T>0$, continuously connected to the high-temperature paramagnet [see Fig.~\ref{fig:PhaseDiagram}]. Hence, for $|x|\gtrsim 1$, the anisotropic shuriken model stabilizes a cooperative paramagnet above a non-degenerate\footnote{besides the trivial time-reversal symmetry} long-range ordered phase. This is a general property of classical spin liquids when adiabatically tuned away from their high-degeneracy point, as observed for example in Heisenberg antiferromagnets on the kagome~\cite{Elhajal02a} or pyrochlore~\cite{Canals08a,Chern10b,Mcclarty14b} lattices, and possibly in the material Er$_{2}$Sn$_{2}$O$_{7}$~\cite{Yan13a}. For $|x|\lesssim 1$ on the other hand, multiple crossovers take place upon cooling, which deserve a dedicated discussion in the following section~\ref{sec:LiquidRegime}. \section{Reentrance of disorder} \label{sec:LiquidRegime} \subsection{Double crossover} \label{sec:dblecross} First of all, panels (a) and (c) of Fig.~\ref{fig:MCHT} confirm that the classical spin liquids and binary paramagnet persist down to zero temperature for $x=\pm 1$ and $x=0$ respectively, and that all models for $|x|\leqslant 1$ have extensively degenerate ground states.
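The $T\to0^{+}$ values of the binary paramagnet quoted above can be reproduced by enumerating a single unit cell, assuming, as argued from the decoupling of the superspins, that distinct plaquettes and B-sites are uncorrelated at zero temperature:

```python
import numpy as np
from itertools import product

# One unit cell at T -> 0: a superspin tau = +-1 fixing the square
# plaquette in the pattern (+,-,-,+)*tau, plus two free B-spins.
states = list(product([-1, 1], repeat=3))    # (tau, b1, b2)
N = 6                                        # spins per unit cell

# entropy per spin: ln(#ground states)/N = ln 8 / 6 = (1/2) ln 2
S = np.log(len(states)) / N
assert abs(S - 0.5 * np.log(2.0)) < 1e-12

# chi*T per spin = <M^2>/N; the A-spins cancel within each plaquette
M2 = np.mean([(tau * (1 - 1 - 1 + 1) + b1 + b2)**2 for tau, b1, b2 in states])
print(M2 / N)  # 1/3
```

The cancellation of the A-spins inside each plaquette is exactly why only the $2N_{uc}$ B-sites contribute to the variance of the magnetization, giving $\chi T = 1/3$.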
For $x=\pm 0.9$ there is a double crossover indicated by the double peaks in the specific heat $C_h$ of Fig.~\ref{fig:MCHT}.(b). These peaks are not due to phase transitions since they do not diverge with system size. The double crossover persists for $0.5\lesssim |x|<1$. Upon cooling, the system first evolves from the standard paramagnet to a spin liquid before entering the binary paramagnet. The intervening spin liquid takes the form of an entropy plateau for $|x|=0.9$ [see Fig.~\ref{fig:MCHT}.(b)], at the same value as the low-temperature regime for $|x|=1$ [see Fig.~\ref{fig:MCHT}.(a)]. All relevant thermodynamic quantities are summarized in Table~\ref{tab:zeroT}. While the mapping of equation~(\ref{eq:symmetricPD}) ensures the invariance of the energy, specific heat and entropy upon reversing $x$ to $-x$, it does not protect the magnetic susceptibility. The build-up of correlations in classical spin liquids is known to give rise to a Curie-law crossover~\cite{Jaubert13a} between two $1/T$ asymptotic regimes of the susceptibility, as observed in pyrochlore~\cite{Isakov04a,Ryzhkin05a,Conlon10a,Jaubert13a}, triangular~\cite{Isoda08a} and kagome~\cite{Isoda08a,Li10a,Macdonald11a} systems. This is also what is observed here on the anisotropic shuriken lattice for $x=\{-1,0,1\}$ [see Fig.~\ref{fig:Chi_All}]. But for intermediate models with $x=\{-0.99,-0.9,0.9,0.99\}$, the double crossover makes the reduced susceptibility non-monotonic. $\chi\, T$ first evolves towards the values of the spin liquids SL$_{1}$ (resp. SL$_{2}$) for $x<0$ (resp. $x>0$) before converging to $1/3$ in the binary paramagnet. Beyond the present problem on the shuriken lattice, this multi-step Curie-law crossover underlines the usefulness of the reduced susceptibility to spot intermediate regimes, and thus the proximity of different phases.
From the point of view of renormalization group theory, the $(x,T)=(\pm 1,0)$ coordinates of the phase diagram are fixed points which deform the renormalization flows passing in the vicinity. \newcolumntype{C}{>{}c<{}} \renewcommand{\arraystretch}{3} \begin{table} \centering \begin{tabular}{||C|C|C|C||} \hhline{|t:====:t|} $T\to0^{+}$& Monte Carlo & Husimi tree & $\quad$ exact $\quad$ \\ \hhline{||====||} $S (|x| = 1)$ & $ 0.504(1) $ & $ \dfrac{1}{6} \ln \dfrac{41}{2} \approx 0.5034$ & n/a \\ \hhline{||----||} $\chi \, T (x = 1)$ & $ 0.203(1) $ & $0.2028$ & n/a \\ \hhline{||----||} $\chi \, T (x =-1)$ & $ 1.766(1) $ & $1.771$ & n/a \\ \hhline{||====||} $S (|x| < 1)$ & $ 0.347(1) $ & $ \dfrac{1}{2} \ln 2 \approx 0.3466$ & $ \dfrac{1}{2} \ln 2 $ \\ \hhline{||----||} $\chi \, T (|x| < 1)$ & $ 0.333(1) $ & $\dfrac{1}{3}$ &$ \ \dfrac{1}{3} $ \\ \hhline{|b:====:b|} \end{tabular} \caption{{\bf Entropies $S$ and reduced susceptibilities $\chi T$ as $\mathbf{T\to0^{+}}$} for the anisotropic shuriken lattice with coupling ratios $|x|\leqslant 1$. The results are obtained from Monte Carlo simulations, Husimi tree analytics and the exact solution for the binary paramagnet. All quantities are given per number of spins and the Boltzmann constant $k_{B}$ is set to 1. } \label{tab:zeroT} \end{table} \begin{figure}[t] \centering\includegraphics[width=0.5 \textwidth]{Chi_All.eps} \caption{{\bf Reduced susceptibility $\chi\, T$} with coupling ratios of $x = \pm 1, \pm 0.99, \pm 0.9$ and $0$, obtained from Husimi-tree calculations (solid lines) and Monte Carlo simulations (circles). The Curie-law crossover of classical spin liquids is standard, \textit{i.e.} $\chi\, T$ is monotonic, for $x=\pm 1$ and 0, and takes a multi-step behavior for intermediate values of $x$, due to the double crossover. The characteristic values of the entropy and reduced susceptibility are given in Table~\ref{tab:zeroT}. 
The temperature axis is on a logarithmic scale.} \label{fig:Chi_All} \end{figure} \subsection{Decoration-iteration transformation} \label{section:checkerboard} The phase diagram of the anisotropic shuriken model and, in particular, the double crossover observed for $|x| < 1$ [see Fig.~\ref{fig:PhaseDiagram}] can be further understood using an exact mapping to an effective model on the checkerboard lattice, a method known as the decoration-iteration transformation [see Ref.~[\onlinecite{Strecka15b}] for a review]. In short, by summing over the degrees of freedom of the A-spins, one can arrive at an effective Hamiltonian involving only the B-spins, which form a checkerboard lattice. The coupling constants of the effective Hamiltonian are functions of the temperature $T$ and for $|x| < 1$ they vanish at both high and low temperatures, but are finite for an intermediate regime. This intermediate regime may be identified as the SL$_{1,2}$ cooperative paramagnets of Fig.~\ref{fig:PhaseDiagram}, whereas the low-temperature region of vanishing effective interaction corresponds to the binary paramagnet (BPM). This mapping is able to predict a non-monotonic behavior of the correlation length. In this section we give a brief sketch of the derivation of the effective model, before turning to its results. Details of the effective model are given in Appendix \ref{appendix:mapping}. To begin, consider the partition function for the system, with the Hamiltonian given by Eq.~(\ref{eq:ham}): \begin{eqnarray} Z= \sum_{\{ \sigma^{A}_i=\pm1\}} \sum_{\{ \sigma^{B}_i=\pm1\}} \exp\left[ -\beta H \right] \label{eq:partition0} \end{eqnarray} where $\beta=\frac{1}{T}$ is the inverse temperature and the sums are over all possible spin configurations. 
Since in the Hamiltonian of Eq.~(\ref{eq:ham}) the square plaquettes of the A-sites are only connected to each other via their interaction with the intervening B-sites, it is possible to directly take the sum over configurations of A-spins in Eq.~(\ref{eq:partition0}) for a fixed (but completely general) configuration of B-spins. Doing so, we arrive at \begin{eqnarray} Z= \sum_{\{ \sigma^{B}_i=\pm1\}} \prod_{\square} \mathcal{Z}_{\square} (\{ \sigma^{B}_i\}) \label{eq:partitionSq} \end{eqnarray} where the product is over all the square plaquettes of the lattice and $\mathcal{Z}_{\square} (\{ \sigma^{B}_i\})$ is a function of the four B-spins immediately neighbouring a given square plaquette. The B-spins form a checkerboard lattice, and Eq.~(\ref{eq:partitionSq}) can be exactly rewritten in terms of an effective Hamiltonian $H_{\boxtimes}$ on that lattice: \begin{eqnarray} &&Z= \sum_{\{ \sigma^{B}_i=\pm1\}} \exp(-\beta \sum_{\boxtimes} H_{\boxtimes}) \\ &&H_{\boxtimes}= -\mathcal{J}_0(T)- \mathcal{J}_1(T) \sum_{\langle ij \rangle}\sigma^{B}_i \sigma^{B}_j \nonumber \\ && \qquad - \mathcal{J}_2(T) \sum_{\langle \langle ij \rangle \rangle}\sigma^{B}_i \sigma^{B}_j - \mathcal{J}_{\sf ring}(T) \prod_{i \in \boxtimes} \sigma_i^B \label{eq:effectiveHcheck} \end{eqnarray} where $\sum_{\boxtimes}$ is a sum over checkerboard plaquettes of B-spins. The effective Hamiltonian $H_{\boxtimes}$ contains a constant term $\mathcal{J}_0$, a nearest neighbour interaction $\mathcal{J}_1$, a second nearest neighbour interaction $\mathcal{J}_2$, and a four-site ring interaction $\mathcal{J}_{\sf ring}$. All couplings are functions of temperature $\mathcal{J}_i=\mathcal{J}_i(T)$ and are invariant under the transformation $J_{AB}\longmapsto -J_{AB}$ because the degrees of freedom of the A-sites have been integrated out. Expressions for the dependence of the couplings on temperature are given in Appendix \ref{appendix:mapping}. 
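The partial trace over the A-spins of a single square plaquette can be carried out numerically by brute force. The sketch below uses an illustrative convention, not the exact form of Eq.~(\ref{eq:ham}) (whose coupling expressions are deferred to the appendix): antiferromagnetic bonds $J_{AA}$ on the four square edges and $J_{AB}$ bonds attaching each B-spin $b_k$ to the edge $(s_k,s_{k+1})$. The effective couplings are then extracted by fitting $\ln\mathcal{Z}_{\square}$ to the four symmetry-allowed invariants of the B-spins, which by symmetry is an exact fit.

```python
import numpy as np
from itertools import product

def plaquette_Z(b, J_AA, J_AB, beta):
    """Z_square({b}): sum over the 2^4 A-spin configurations of one square
    plaquette for fixed neighbouring B-spins b (tuple of +-1).
    Conventions are assumptions for illustration only."""
    Z = 0.0
    for s in product((-1, 1), repeat=4):
        e_AA = J_AA * sum(s[k] * s[(k + 1) % 4] for k in range(4))
        e_AB = J_AB * sum(b[k] * (s[k] + s[(k + 1) % 4]) for k in range(4))
        Z += np.exp(-beta * (e_AA + e_AB))
    return Z

def effective_couplings(J_AA, J_AB, beta):
    """Solve ln Z_square = beta*(J0 + J1*P1 + J2*P2 + Jring*P4), with
    P1/P2/P4 the nearest-neighbour, diagonal and four-spin products of
    the B-spins.  Returns (J0, J1, J2, Jring)."""
    rows, rhs = [], []
    for b in product((-1, 1), repeat=4):
        P1 = sum(b[k] * b[(k + 1) % 4] for k in range(4))
        P2 = b[0] * b[2] + b[1] * b[3]
        P4 = b[0] * b[1] * b[2] * b[3]
        rows.append([1.0, P1, P2, P4])
        rhs.append(np.log(plaquette_Z(b, J_AA, J_AB, beta)))
    coeffs = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
    return coeffs / beta
```

One can check that the fitted couplings are invariant under $J_{AB}\mapsto -J_{AB}$ (the A-spins have been integrated out), and that for $|x|<1$ they are strongly suppressed at low temperature, in line with the screening by the N\'eel-ordered A-plaquettes discussed below.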
The temperature dependence of the effective couplings $\mathcal{J}_i=\mathcal{J}_i(T)$ can itself give rather a lot of information about the behavior of the shuriken model. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{couplings-x09.eps}\\ \includegraphics[width=0.9\columnwidth]{couplings-x10.eps} \caption{ \textbf{Behavior of the coupling constants of the effective checkerboard lattice model as a function of temperature} [Eq.~(\ref{eq:effectiveHcheck})] for $x=-0.9$ (upper panel) and $x=-1$ (lower panel). {\it Upper panel}: All couplings vanish at both high and low temperatures with an intermediate regime at $T \sim 1$ where the effective interactions are stronger. The intermediate regime corresponds to the spin liquid region of the phase diagram Fig.~\ref{fig:PhaseDiagram}, with the high- and low-temperature regimes corresponding to the paramagnet and binary paramagnet respectively. {\it Lower panel}: For all couplings $\mathcal{J}_i$, $\beta \mathcal{J}_i$ vanishes at high temperature and tends to a finite constant of magnitude $|\beta \mathcal{J}_i (T)|\ll 1$ at low temperature. The short-range correlated spin-liquid regime thus extends all the way down to $T=0$. } \label{fig:couplings} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{corrlen--all.eps} \caption{ \textbf{Correlation lengths in the effective checkerboard model}, calculated from Eq.~(\ref{eq:corrlen}), for $x=-0.9$ and $x=-1$. The correlation length is calculated to leading order in a perturbative expansion of the effective model in powers of $\beta \mathcal{J}_i$. Such an expansion is reasonable for $|x|\leq1$ since $\beta \mathcal{J}_i\ll 1$ for all $T$ (see Fig.~\ref{fig:couplings}). For $x=-0.9$ the behavior of the correlation length is non-monotonic. The correlation length is maximal in the spin liquid regime but correlations remain short ranged at all temperatures. 
In the binary paramagnet regime, the correlation length vanishes linearly at low temperature. For $x=-1$, the correlation length enters a plateau at $T\sim 1$, and short range correlations remain down to $T=0$. } \label{fig:correlationlengths} \end{figure} First we consider the case $|x|<1$. In this regime of parameter space, all effective interactions $\mathcal{J}_1, \mathcal{J}_2, \mathcal{J}_{\sf ring}$ vanish exponentially at low temperature $T\ll 1$. For intermediate temperatures $T \sim 1$ the effective interactions in Eq.~(\ref{eq:effectiveHcheck}) become appreciable before vanishing once more at high temperatures. This is illustrated for the case $x=-0.9$ in the upper panel of Fig.~\ref{fig:couplings}. Seeing the problem in terms of these effective couplings gives some intuition into the double crossover observed in simulations. As the temperature is decreased the magnitudes $|\mathcal{J}_i|$ of the effective couplings increase and the system enters a short range correlated regime. However, as the temperature decreases further, the antiferromagnetic correlations on the square plaquettes of A-spins become close to perfect, and act to screen the effective interaction between B-spins. This is reflected in the exponential suppression of the couplings $\mathcal{J}_1, \mathcal{J}_2, \mathcal{J}_{\sf ring}$. In the case $|x|=1$, the effective interactions $\mathcal{J}_i$ no longer vanish exponentially at low temperature, but instead vanish linearly \begin{eqnarray} \mathcal{J}_1, \mathcal{J}_2, \mathcal{J}_{\sf ring} \sim T. \end{eqnarray} The ratio of effective couplings to the temperature $\beta \mathcal{J}_{i}$ thus tends to a constant below $T \sim 1$, as shown in the lower panel of Fig.~\ref{fig:couplings}. 
Thus, the zero temperature limit of the shuriken model can be mapped to a finite temperature model on the checkerboard lattice for $|x|=1$ and to an infinite temperature model for $|x|<1$.\\ \begin{figure*} \centering\includegraphics[width=0.9\textwidth]{CorrFunc.eps} \caption{{\bf Spin-spin correlations in the vicinity of the spin liquid phases} for $x=-1.05$ (a,b), $-1$ (c,d) and $-0.9$ (e,f), obtained from Monte Carlo simulations. The temperatures considered are $T=0.01$ (\textcolor{green}{$\blacklozenge$}), 1 (\textcolor{orange}{$\blacksquare$}) and 891.25 (\textcolor{blue}{\Large \textbullet}). Because of the anisotropy of the lattice, we separate correlation functions which start on A-sites (a,c,e) and B-sites (b,d,f). The radial distance is given in units of the unit-cell length. The agglomeration of data points around $C\sim 2\times 10^{-5}$ is due to finite size effects. The y-axis is on a logarithmic scale. } \label{fig:corr} \end{figure*} The behavior of the spin correlations in the shuriken model can be captured by calculating the correlation length between B-spins in the checkerboard model. Since $\beta \mathcal{J}_i$ is small for all of the interactions $\mathcal{J}_i$ at all temperatures $T$ (see Fig.~\ref{fig:couplings}), this can be estimated using a perturbative expansion in $\beta \mathcal{J}_i$. For two B-spins chosen such that the shortest path between them is along nearest neighbour $\mathcal{J}_1$ bonds, we obtain to leading order \begin{eqnarray} &&\langle \sigma^B_i \sigma^B_j \rangle= \exp\left( -\frac{r_{ij}}{\xi_{BB}} \right) \label{eq:corrfun} \\ &&\xi_{BB} \approx \frac{1}{\sqrt{2}\ln\left( \frac{T}{ \mathcal{J}_1 (T)} \right)} \label{eq:corrlen} \end{eqnarray} where we choose units of length such that the linear size of a unit cell is equal to 1. Details of the calculation are given in Appendix \ref{appendix:mapping}. 
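The origin of Eq.~(\ref{eq:corrlen}) can be sketched as follows (a leading-order estimate, assuming only the shortest path of $\mathcal{J}_1$ bonds contributes and $|\beta\mathcal{J}_1|\ll 1$):
\[
\langle \sigma^B_i \sigma^B_j \rangle
\simeq \left( \tanh \beta\mathcal{J}_1 \right)^{n_{ij}}
\simeq \left( \frac{\mathcal{J}_1}{T} \right)^{n_{ij}}
= \exp\left( -n_{ij}\ln\frac{T}{\mathcal{J}_1} \right),
\]
where $n_{ij}$ is the number of bonds along the path. With the unit-cell length set to 1, nearest-neighbour B-sites are a distance $1/\sqrt{2}$ apart, so that $n_{ij}=\sqrt{2}\,r_{ij}$, and identifying the result with $\exp(-r_{ij}/\xi_{BB})$ reproduces Eq.~(\ref{eq:corrlen}).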
The correlation length between B-spins, calculated from Eq.~(\ref{eq:corrlen}), is shown for the cases $x=-0.9$ and $x=-1$ in Fig.~\ref{fig:correlationlengths}. For $x=-0.9$ the correlation length shows a non-monotonic behavior, vanishing at both high and low temperature with a maximum at $T\sim 1$. On the other hand for $x=-1$, the correlation length enters a plateau for temperatures below $T\sim 1$ and the system remains in a short range correlated regime down to $T=0$. The extent of this plateau agrees with the low-temperature plateau of the reduced susceptibility in Fig.~\ref{fig:Chi_All}. \begin{figure*}[t] \centering\includegraphics[width=1\textwidth]{Sq_all.eps} \caption{ {\bf Static structure factors of the anisotropic shuriken lattice for} (a) $x = -1$, (b) $x = 0$ and (c) $x = 1$ at zero temperature, obtained from Monte Carlo simulations. For $x=\pm 1$, the scattering is strongly inhomogeneous (as opposed to a standard paramagnet) and non-divergent (\textit{i.e.} without long-range order), confirming the spin liquid nature of these phases. The structure factors of the $x=+1$ and $x=-1$ models are related by a $(q_{x},q_{y})=(60\pi,0)$ or $(0,60\pi)$ translation. The patterns are related to the 6-site unit cell of the shuriken lattice, as visible from Fig.~\ref{fig:UnitCell}. On the other hand for (b) $x=0$, the black background underlines the absence of correlations in the binary paramagnet beyond the size of the superspins (square plaquettes), which is responsible for the finite extension of the dots of scattering. In order to restore ergodicity, a local update flipping the four spins of square plaquettes was used in the simulations. A video showing the temperature dependence of the static structure factor for $x=0.9$ is available in the Supplementary Materials. 
} \label{fig:SQ} \end{figure*} \subsection{Correlations and Structure factors} \label{sec:corr} The non-monotonic behavior of the correlation length estimated in the previous section~\ref{section:checkerboard} can be measured in Monte Carlo simulations. Let us consider the microscopic correlations both in real ($C_{\rho}$) and Fourier ($S_{q}$) space. The function $C_{\rho}$ measures the correlation between a central spin $\sigma_{0}$ and all spins at distance $\rho$. Because of the nature of the binary paramagnet, one needs to make a distinction between central spins on the A and B sublattices. Let $D_{\rho}^{X}$ be the set of sites at distance $\rho$ from a given spin $\sigma_{0}^{X}$ on the $X=\{A,B\}$ sublattice. The correlation function is defined as \begin{eqnarray} C^X_{\rho} = \dfrac{\sum_{i\in D_{\rho}^{X}} | \langle \sigma_{0}^{X} \sigma_i \rangle |}{\left\vert D_{\rho}^{X}\right\vert} \label{eq:Crho} \end{eqnarray} where $\left\vert D_{\rho}^{X}\right\vert$ is the number of sites in $D_{\rho}^{X}$ and the absolute value accounts for the antiferromagnetic correlations. As for the static structure factor $S_{q}$, it is defined as \begin{equation} S_q= \langle \sigma_{\vec{q}} \,\sigma_{-\vec{q}} \rangle = \Big \langle \Big| \frac{1}{N_{uc}} \sum_i e^{-i \vec{q}\cdot \vec{r}_i} \sigma_i \Big|^2 \Big \rangle. \end{equation}\\ $C^{A}_{\rho}$ and $C^{B}_{\rho}$ are respectively plotted on the left and right of Fig.~\ref{fig:corr}. Let us first consider what happens in the absence of reentrant behavior. For $x=-1.05$ [see panels (a,b)], the system is ferromagnetic at low temperature with $C(\rho)\approx 1$ over long length scales. Above the phase transition, the correlations are exponentially decaying. When $x=-1$ [see panels (c,d)], the correlations remain exponentially decaying down to zero temperature. The correlation length $\xi$ reaches a maximum in the spin-liquid regime with $\xi \approx 0.3$. 
The quantitative superimposition of data for $T=0.01$ and $T=1$ is in agreement with the low-temperature plateau of the correlation length in Fig.~\ref{fig:correlationlengths}. The spin liquid remains essentially unchanged all the way up to $T\sim 1$, when defects are thermally excited. However, even though the correlations are exponential, they should not be confused with paramagnetic ones, as illustrated by their strongly inhomogeneous structure factors [see Fig.~\ref{fig:SQ} and Supplementary Materials]. Once one enters the double-crossover region [see Fig.~\ref{fig:corr}.(e,f) for $x=-0.9$], the correlation function becomes non-monotonic with temperature, as predicted from the analytics of Fig.~\ref{fig:correlationlengths}. In the binary paramagnet, the B-sites are perfectly uncorrelated, while the correlations of the A-sites have a finite cutoff set by the size of the square plaquettes (superspins). This is why $S_{q}$ takes the form of an array of dots of scattering, whose width is inversely proportional to the size of the superspins [see Fig.~\ref{fig:SQ}]. Note that the dip of the nearest-neighbour correlations in Fig.~\ref{fig:corr}.(e) occurs because half of the nearest neighbours of any A-site are on the uncorrelated B sublattice. The intervening presence of the spin liquids between the two crossovers is conceptually reminiscent of reentrant behavior~\cite{Hablutzel39a,Vaks66a,Cladis88a}, though not in the usual sense, since reentrance is usually considered to be a feature of ordered phases surrounded by disordered ones; the present scenario is a direct extension of the concept of reentrance to disordered regimes. 
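The estimator behind these structure factors is compact enough to sketch. The snippet below evaluates $S_q$ from spin snapshots on a placeholder square grid with one site per unit cell; it is an illustration only, not the shuriken-lattice geometry or the actual simulation data.

```python
import numpy as np

def structure_factor(q, positions, snapshots, n_uc):
    """S_q = <|(1/n_uc) sum_i exp(-i q.r_i) sigma_i|^2>, averaged over
    the rows of `snapshots` (shape: n_snapshots x n_sites)."""
    phases = np.exp(-1j * positions @ q)        # (n_sites,)
    amplitudes = snapshots @ phases / n_uc      # (n_snapshots,)
    return np.mean(np.abs(amplitudes) ** 2)

# Placeholder lattice and two reference configurations.
L = 16
xs, ys = np.meshgrid(np.arange(L), np.arange(L))
positions = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
ferro = np.ones((1, L * L))                    # fully polarised snapshot
neel = ((-1.0) ** (xs + ys)).ravel()[None, :]  # staggered snapshot
```

On such a toy grid a polarised snapshot gives a Bragg peak $S_{q=0}=1$ and a staggered one a peak at $q=(\pi,\pi)$; on the shuriken lattice the six-site unit cell instead spreads the weight into the inhomogeneous, non-divergent patterns of Fig.~\ref{fig:SQ}.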
This reentrance is quantitatively characterized at the macroscopic level by the double-peak in the specific heat, the entropy plateau and the multi-step Curie-law crossover of Fig.~\ref{fig:MCHT}.(b), and microscopically by the non-monotonic evolution of the correlations [see Figs.~\ref{fig:correlationlengths}, \ref{fig:corr} and \ref{fig:SQ}]. As such, it provides an interesting mechanism to stabilize a gas-like phase ``below'' a spin liquid, where (a fraction of) the spins form fully correlated clusters which i) can then fluctuate independently of the other degrees-of-freedom while ii) lowering the entropy of the gas-like phase below that of the spin liquids. \section{The shuriken lattice in experiments?} \label{sec:exp} Finally, we would like to briefly address the experimental situation. Unfortunately, we are not aware of an experimental realization of the present model, but several directions are possible, each of them with their advantages and drawbacks.\\ The shuriken topology has been observed, albeit quite hidden, in the dysprosium aluminium garnet (DAG)~\cite{Landau71a,Wolf72a} [see Ref.~[\onlinecite{Wolf00a}] for a recent review]. The DAG material attracted its share of attention in the 1970s, but its microscopic Hamiltonian does not respect the geometry of the shuriken lattice -- it is actually not frustrated -- and is thus quite different from the model presented in equation~(\ref{eq:ham}). However, it shows that the shuriken topology can exist in solid state physics.\\ Cold atoms might offer an alternative. Indeed, the necessary experimental setup for an optical shuriken lattice has been proposed in Ref.~[\onlinecite{Glaetzle14a}]. The idea was developed in the context of spin-ice physics, \textit{i.e.} assuming an emergent Coulomb gauge theory whose intrinsic Ising degrees of freedom are somewhat different from the present model. 
Nonetheless, optical lattices are promising, especially if one considers that the inclusion of ``proper'' Ising spins might be available thanks to artificial gauge fields~\cite{Struck13a}.\\ But the most promising possibility might be artificial frustrated lattices, where ferromagnetic nano-islands effectively behave like Ising degrees-of-freedom. Since the early days of artificial spin ice~\cite{Wang06a}, many technological and fundamental advances have been made~\cite{Nisoli13a}. In particular, while the thermalization of the Ising-like nano-islands had been a long-standing issue, this problem is now on the way to being solved~\cite{kapaklis12a,farhan13a,farhan13b,Morgan13a,Marrows13a,Anghinolfi15a,Arnalds16a}. Furthermore, since the geometry of the nano-array can be engineered lithographically, a rich diversity of lattices is available, and the shuriken geometry should not be an issue. Concerning the Ising nature of the degrees-of-freedom, nano-islands have recently been grown with a magnetization axis $\vec z$ perpendicular to the lattice~\cite{Zhang12a,Chioar14a}. To compute their interaction~\cite{Zhang12a,Chioar14a}, let us define the Ising magnetic moments of two different nano-islands: $\vec S = \sigma \vec z$ and $\vec S' = \sigma' \vec z$. The interaction between them is dipolar of the form \begin{eqnarray} D\left(\frac{\vec S\cdot \vec S'}{r^{3}}\,-\,3\frac{(\vec S\cdot \vec r)(\vec S'\cdot \vec r)}{r^{5}}\right)= \frac{D}{r^{3}}\; \sigma\,\sigma' \label{eq:dipo} \end{eqnarray} where $D$ is the strength of the dipolar interaction and $\vec r$ is the vector separating the two moments. Since the moments are perpendicular to the lattice plane, $\vec S\cdot \vec r=\vec S'\cdot \vec r=0$ and $\vec S\cdot \vec S'=\sigma\,\sigma'$, so only the first term survives. The resulting coupling is thus antiferromagnetic and quickly decays with distance. Hence, at the nearest-neighbour level, a physical distortion of the shuriken geometry -- by elongating or shortening the distance between A and B sites -- would precisely reproduce the anisotropy of equation~(\ref{eq:ham}) for $x>0$. 
However, the influence of interactions beyond nearest-neighbours has successively been found to be experimentally negligible~\cite{Zhang12a} and relevant~\cite{Chioar14a} on the kagome geometry. Thus the phase diagram of Fig.~\ref{fig:PhaseDiagram}.(a) could possibly be observed at finite temperature, but would likely be influenced by longer-range interactions at relatively low temperature. \section{Conclusion} \label{sec:conclusion} The anisotropic shuriken lattice with classical Ising spins supports a variety of different phases as a function of the anisotropy parameter $x=J_{AB}/J_{AA}$: two long-range ordered ones for $|x|>1$ (ferromagnet and staggered ferromagnet) and three disordered ones [see Fig.~\ref{fig:PhaseDiagram}]. Among the latter ones, we make the distinction, at zero temperature, between two cooperative paramagnets SL$_{1,2}$ for $x=\pm 1$, and a phase that we name a binary paramagnet (BPM) for $|x|<1$. The BPM is composed of locally ordered square plaquettes separated by completely uncorrelated single spins on the B-sublattice [see Fig.~\ref{fig:PhaseDiagram}.(d)]. At finite temperature, the classical spin liquids SL$_{1,2}$ spread beyond the singular points $x=\pm 1$, giving rise to a double crossover from paramagnet to spin liquid to binary paramagnet, which can be considered as a reentrant behavior between disordered regimes. This competition is quantitatively defined by a double-peak feature in the specific heat, an entropy plateau, a multi-step Curie-law crossover and a non-monotonic evolution of the spin-spin correlation, illustrated by an inhomogeneous structure factor [see Figs.~\ref{fig:MCHT}, \ref{fig:Chi_All},\ref{fig:correlationlengths}, \ref{fig:corr} and \ref{fig:SQ}]. 
The reentrance can also be precisely defined by the resurgence of the couplings in the effective checkerboard model [see Fig.~\ref{fig:couplings}].\\ Beyond the physics of the shuriken lattice, the present work, and especially Fig.~\ref{fig:MCHT}, confirms the Husimi-tree approach as a versatile analytical method to investigate disordered phases such as spin liquids. Regarding classical spin liquids, Fig.~\ref{fig:Chi_All} illustrates the usefulness of the reduced susceptibility $\chi\, T$~[\onlinecite{Jaubert13a}], whose temperature evolution quantitatively describes the successive crossovers between disordered regimes. Last but not least, we hope to bring to light an interesting facet of distorted frustrated magnets, where extended regions of magnetic disorder can be stabilized by anisotropy, such as on the Cairo~\cite{Rousochatzakis12a,Rojas12a}, kagome~\cite{Li10a,Apel11a} and pyrochlore~\cite{Benton15c} lattices. Such a connection is particularly promising since it expands the possibilities of experimental realizations, for example in the Volborthite kagome compound~\cite{Hiroi01a} or in breathing pyrochlores~\cite{Okamoto13a,Kimura14a}.\\ Possible extensions of the present work can take different directions. Motivated by the counter-intuitive emergence of valence-bond-crystals made of resonating loops of size 6~[\onlinecite{Ralko2015}], the combined influence of quantum dynamics, lattice anisotropy $x$~\cite{Rousochatzakis2013,Ralko2015} and the entropy selection presented here should give rise to a plethora of new phases and reentrant phenomena. As an intermediate step, classical Heisenberg spins also present an extensive degeneracy at $x=1$~\cite{Richter2009,Rousochatzakis2013}, where thermal order-by-disorder is expected to play an important role in a similar way as for the parent kagome lattice, especially when tuned by the anisotropy $x$. 
The addition of an external magnetic field~\cite{Derzhko2006,Nakano2015} would provide a direct tool to break the invariance under the transformation of equation~(\ref{eq:symmetricPD}), making the phase diagram of Fig.~\ref{fig:PhaseDiagram}.(a) asymmetric. Furthermore, the diversity of spin textures presented here offers a promising framework to be probed by itinerant electrons coupled to localized spins via double-exchange. \begin{acknowledgments} We are thankful to John Chalker, Arnaud Ralko, Nic Shannon and Mathieu Taillefumier for fruitful discussions and suggestions. This work was supported by the Theory of Quantum Matter Unit of the Okinawa Institute of Science and Technology Graduate University. \end{acknowledgments}
\section{Introduction} One of the major contributions by Stone to Measure Theory is the representation of pre-integrals $I\left( f\right) $ as integrals $\int fd\mu$ for some measure $\mu$ (see \cite{S48}). He found in particular that the natural framework to obtain such a representation is that of function Riesz spaces $E$ closed under \textsl{truncation} with the constant function $1$, i.e., \[ 1\wedge f\in E\text{, for all }f\in E. \] Since then, such spaces, called by now \textsl{Stone vector lattices}, were found to be fundamental to analysis, and their importance has steadily grown \cite{AC07,D04,HP84,K97}. For instance, Lebesgue integration extends quite smoothly to any Stone vector lattice, while function Riesz spaces lacking the above \textsl{Stone condition} may have pre-integrals which cannot be represented by any measure (see \cite{JF87}). In \cite{F74,F06}, Fremlin calls Stone vector lattices of functions \textsl{truncated Riesz spaces}. In the present paper, we shall rather adopt the Fremlin terminology. This is what Ball did in his papers \cite{B14,B14-0}, where he gave an appropriate axiomatization of this concept. Rephrasing his first axiom, we call a \textsl{truncation} on an arbitrary Riesz space $E$ any unary operation $\ast$ on $E^{+}$ such that\[ f^{\ast}\wedge g=f\wedge g^{\ast}\text{, for all }f,g\in E^{+}. \] A Riesz space (also called a vector lattice) $E$ along with a truncation is called a \textsl{truncated Riesz space}. A positive element $f$ in a truncated Riesz space $E$ is said to be $^{\ast}$-\textsl{infinitely small} if\[ \left( \varepsilon f\right) ^{\ast}=\varepsilon f\text{, for all }\varepsilon\in\left( 0,\infty\right) . \] Furthermore, if $f\in E^{+}$ and $f^{\ast}=0$ imply $f=0$, we call $E$ a \textsl{weakly truncated Riesz space}. 
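A quick numerical illustration may help fix ideas. In the motivating Stone example, $E$ is a Riesz space of real-valued functions and $f^{\ast}=1\wedge f$; modelling positive ``functions'' as nonnegative arrays under the pointwise order, one can check the truncation axiom directly (an illustration only, with names of our own choosing):

```python
import numpy as np

# Stone's motivating truncation: f* = 1 ∧ f on the positive cone of a
# Riesz space of real-valued functions, modelled here as nonnegative
# arrays ordered pointwise.

def trunc(f):
    """Truncation with the constant function 1: f* = 1 ∧ f."""
    return np.minimum(1.0, f)

rng = np.random.default_rng(0)
f = rng.uniform(0.0, 3.0, size=10_000)
g = rng.uniform(0.0, 3.0, size=10_000)

# Ball's axiom: f* ∧ g = f ∧ g* (both sides equal 1 ∧ f ∧ g pointwise).
axiom_holds = np.array_equal(np.minimum(trunc(f), g), np.minimum(f, trunc(g)))

# The inequality |f* - g*| <= |f - g|*, proved for general truncations
# in Section 2 below.
contraction_holds = bool(np.all(np.abs(trunc(f) - trunc(g)) <= trunc(np.abs(f - g))))
```

Both checks succeed, as they must: pointwise, $\min(1,f)\wedge g$ and $f\wedge\min(1,g)$ both reduce to $\min(1,f,g)$.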
Using a different terminology, Ball proved that for any weakly truncated Riesz space $E$ with no nonzero $^{\ast}$-infinitely small elements, there exists a compact Hausdorff space $K$ such that $E$ can be represented as a truncated Riesz space of continuous extended real-valued functions on $K$ (see \cite{L79} for continuous extended real-valued functions). Actually, this is a direct generalization of the classical Yosida Representation Theorem for Archimedean Riesz spaces with a weak unit \cite{LZ71}. Ball's result prompted us to investigate the extent to which the classical Kakutani Representation Theorem for Archimedean Riesz spaces with a strong unit can be generalized to a wider class of truncated Riesz spaces. Let us outline how the problem is addressed. A truncation $\ast$ on a Riesz space $E$ is said to be \textsl{strong} if, for every $f\in E^{+}$, the equality $\left( \varepsilon f\right) ^{\ast }=\varepsilon f$ holds for some $\varepsilon\in\left( 0,\infty\right) $. A Riesz space $E$ with a strong truncation is called a \textsl{strongly truncated Riesz space}. The set of all real-valued Riesz homomorphisms $u$ on the truncated Riesz space $E$ such that\[ u\left( f^{\ast}\right) =\min\left\{ 1,u\left( f\right) \right\} \text{, for all }f\in E^{+}\] is denoted by $\eta E$ and called the \textsl{spectrum} of $E$. We show that $\eta E$, under its topology inherited from the product topology on $\mathbb{R}^{E}$, is a locally compact Hausdorff space. Moreover, if $E$ is a strongly truncated Riesz space with no nonzero $^{\ast}$-infinitely small elements, then $\eta E$ is large enough to separate the points of $E$ and allow representation by continuous functions. 
Relying on a lattice version of the Stone-Weierstrass theorem for locally compact Hausdorff spaces (which could not be found explicitly in the literature), we prove that any strongly truncated Riesz space $E$ with no nonzero $^{\ast}$-infinitely small elements is Riesz isomorphic to a uniformly dense truncated Riesz subspace of $C_{0}\left( \eta E\right) $. Here, $C_{0}\left( \eta E\right) $ denotes the Banach lattice of all continuous real-valued functions on $\eta E$ vanishing at infinity. We obtain the classical Kakutani Representation Theorem as a special case of our result. Indeed, if $E$ is an Archimedean Riesz space with a strong unit $e>0$, then $E$ is a strongly truncated Riesz space with no nonzero $^{\ast}$-infinitely small elements under the truncation given by\[ f^{\ast}=e\wedge f\text{, for all }f\in E^{+}. \] We show that, in this situation, $\eta E$ is a compact Hausdorff space and so $C_{0}\left( \eta E\right) $ coincides with the Banach lattice $C\left( \eta E\right) $ of all continuous real-valued functions on $\eta E$. In summary, $E$ has a uniformly dense copy in $C\left( \eta E\right) $. Our central result will also turn out to be a considerable generalization of a lesser-known representation theorem due to Fremlin (see \cite[83L (d)]{F74}). Some details seem in order. Let $E$ be a Riesz space with a Fatou $M$-norm $\left\Vert .\right\Vert $ and assume that the supremum\[ \sup\left\{ g\in E^{+}:g\leq f\text{ and }\left\Vert g\right\Vert \leq r\right\} \] exists in $E$ for every $f\in E^{+}$ and $r\in\left( 0,\infty\right) $. Fremlin proved that $E$ is isomorphic, as a normed Riesz space, to a truncated Riesz subspace of the Riesz space $\ell^{\infty}\left( X\right) $ of all bounded real-valued functions on some nonvoid set $X$. As a matter of fact, we shall use our main theorem to improve this result by showing that $E$ is isomorphic, as a normed Riesz space, to a uniformly dense truncated Riesz subspace of $C_{0}\left( \eta E\right) $. 
Finally, we point out that in each section we summarize enough background material to keep this paper reasonably self-contained. Throughout, the classical book \cite{LZ71} by Luxemburg and Zaanen is adopted as the unique source of unexplained terminology and notation. \section{Infinitely small elements with respect to a truncation} Recall that a \textsl{truncation} on a Riesz space $E$ is a unary operation $\ast$ on the positive cone $E^{+}$ of $E$ such that\[ f^{\ast}\wedge g=f\wedge g^{\ast}\text{, for all }f,g\in E^{+}. \] A Riesz space $E$ along with a truncation $\ast$ is called a \textsl{truncated Riesz space.} The truncation on any truncated Riesz space will be denoted by $\ast$. The set of all fixed points of the truncation on a Riesz space $E$ is denoted by $P_{\ast}\left( E\right) $, i.e., \[ P_{\ast}\left( E\right) =\left\{ f\in E^{+}:f^{\ast}=f\right\} . \] We gather some elementary properties in the next lemma. Some of them can be found in \cite{BE17}; we give detailed proofs for the sake of convenience. \begin{lemma} \label{elem}Let $E$ be a truncated Riesz space and $f,g\in E^{+}$. Then the following hold. \begin{enumerate} \item[\emph{(i)}] $f^{\ast}\leq f$ and $f^{\ast}\in P_{\ast}\left( E\right) $. \item[\emph{(ii)}] $f\leq g$ implies $f^{\ast}\leq g^{\ast}$. \item[\emph{(iii)}] If $f\leq g$ and $g\in P_{\ast}\left( E\right) $, then $f\in P_{\ast}\left( E\right) $. \item[\emph{(iv)}] $\left( f\wedge g\right) ^{\ast}=f^{\ast}\wedge g^{\ast}$ and $\left( f\vee g\right) ^{\ast}=f^{\ast}\vee g^{\ast}$ \emph{(}in particular, $P_{\ast}\left( E\right) $ is a lattice\emph{)}. \item[\emph{(v)}] $\left\vert f^{\ast}-g^{\ast}\right\vert \leq\left\vert f-g\right\vert ^{\ast}$. \end{enumerate} \end{lemma} \begin{proof} $\mathrm{(i)}$ We have\[ f^{\ast}=f^{\ast}\wedge f^{\ast}=f\wedge\left( f^{\ast}\right) ^{\ast}\leq f. 
\] In particular, $\left( f^{\ast}\right) ^{\ast}\leq f^{\ast}\leq f$ and so\[ \left( f^{\ast}\right) ^{\ast}=\left( f^{\ast}\right) ^{\ast}\wedge f=f^{\ast}\wedge f^{\ast}=f^{\ast}. \] This shows $\mathrm{(i)}$. $\mathrm{(ii)}$ Since $f^{\ast}\leq f\leq g$, we get\[ f^{\ast}=f^{\ast}\wedge g=f\wedge g^{\ast}\leq g^{\ast}, \] which gives the desired inequality. $\mathrm{(iii)}$ As in the proof of $\mathrm{(ii)}$, from $f^{\ast}\leq f\leq g$ it follows that\[ f^{\ast}=f^{\ast}\wedge g=f\wedge g^{\ast}=f. \] This means that $f$ is a fixed point of $\ast$. $\mathrm{(iv)}$ Using $\mathrm{(ii)}$, we have $\left( f\wedge g\right) ^{\ast}\leq f^{\ast}$ and $\left( f\wedge g\right) ^{\ast}\leq g^{\ast}$. Thus,\[ \left( f\wedge g\right) ^{\ast}\leq f^{\ast}\wedge g^{\ast}=f\wedge g\wedge g^{\ast}=\left( f\wedge g\right) ^{\ast}\wedge g\leq\left( f\wedge g\right) ^{\ast}. \] This shows the first equality. Now, by $\mathrm{(i)}$, we have\begin{align*} f^{\ast}\vee g^{\ast} & =\left( f\vee g\right) \wedge\left( f^{\ast}\vee g^{\ast}\right) \\ & =\left( \left( f\vee g\right) \wedge f^{\ast}\right) \vee\left( \left( f\vee g\right) \wedge g^{\ast}\right) \\ & =\left( \left( f\vee g\right) ^{\ast}\wedge f\right) \vee\left( \left( f\vee g\right) ^{\ast}\wedge g\right) \\ & =\left( f\vee g\right) ^{\ast}\wedge\left( f\vee g\right) =\left( f\vee g\right) ^{\ast}, \end{align*} and the second equality follows. $\mathrm{(v)}$ From $\mathrm{(i)}$ it follows that $f^{\ast}\leq f\leq f+g$ and $g^{\ast}\leq g\leq f+g$. Hence, using the classical Birkhoff inequality (see \cite[Theorem 1.9. 
(b)]{AB06}), we obtain \begin{align*} \left\vert f^{\ast}-g^{\ast}\right\vert & =\left\vert f^{\ast}\wedge\left( f+g\right) -g^{\ast}\wedge\left( f+g\right) \right\vert =\left\vert f\wedge\left( f+g\right) ^{\ast}-g\wedge\left( f+g\right) ^{\ast }\right\vert \\ & \leq\left\vert f-g\right\vert \wedge\left( f+g\right) ^{\ast}=\left\vert f-g\right\vert ^{\ast}\wedge\left( f+g\right) \leq\left\vert f-g\right\vert ^{\ast}. \end{align*} The proof of the lemma is now complete. \end{proof} An element $f$ in a truncated Riesz space $E$ is said to be $^{\ast}$\textsl{-infinitely small} if\[ \varepsilon\left\vert f\right\vert \in P_{\ast}\left( E\right) \text{, for all }\varepsilon\in\left( 0,\infty\right) . \] The set $E_{\ast}$ of all $^{\ast}$-infinitely small elements in $E$ enjoys an interesting algebraic property. \begin{lemma} Let $E$ be a truncated Riesz space. Then $E_{\ast}$ is an ideal in $E$. \end{lemma} \begin{proof} Let $f,g\in E_{\ast}$ and $\alpha\in\mathbb{R}$. Pick $\varepsilon\in\left( 0,\infty\right) $ and observe that\begin{align*} \varepsilon\left\vert f+\alpha g\right\vert & \leq\left( 1+\left\vert \alpha\right\vert \right) \left( \frac{1}{1+\left\vert \alpha\right\vert }\varepsilon\left\vert f\right\vert +\frac{\left\vert \alpha\right\vert }{1+\left\vert \alpha\right\vert }\varepsilon\left\vert g\right\vert \right) \\ & \leq\left( 1+\left\vert \alpha\right\vert \right) \varepsilon\left\vert f\right\vert \vee\left( 1+\left\vert \alpha\right\vert \right) \varepsilon\left\vert g\right\vert . 
\end{align*} However, using Lemma \ref{elem} $\mathrm{(iv)}$, we find \begin{align*} \left( \left( 1+\left\vert \alpha\right\vert \right) \varepsilon\left\vert f\right\vert \vee\left( 1+\left\vert \alpha\right\vert \right) \varepsilon\left\vert g\right\vert \right) ^{\ast} & =\left( \left( 1+\left\vert \alpha\right\vert \right) \varepsilon\left\vert f\right\vert \right) ^{\ast}\vee\left( \left( 1+\left\vert \alpha\right\vert \right) \varepsilon\left\vert g\right\vert \right) ^{\ast}\\ & =\left( 1+\left\vert \alpha\right\vert \right) \varepsilon\left\vert f\right\vert \vee\left( 1+\left\vert \alpha\right\vert \right) \varepsilon\left\vert g\right\vert . \end{align*} It follows that \[ \left( 1+\left\vert \alpha\right\vert \right) \varepsilon\left\vert f\right\vert \vee\left( 1+\left\vert \alpha\right\vert \right) \varepsilon\left\vert g\right\vert \in P_{\ast}\left( E\right) , \] and so, by Lemma \ref{elem} $\mathrm{(iii)}$, $\varepsilon\left\vert f+\alpha g\right\vert \in P_{\ast}\left( E\right) $. But then $f+\alpha g\in E_{\ast}$ because $\varepsilon$ is arbitrary in $\left( 0,\infty\right) $. Accordingly, $E_{\ast}$ is a vector subspace of $E$. Now, let $f,g\in E$ be such that $\left\vert f\right\vert \leq\left\vert g\right\vert $ and $g\in E_{\ast}$. If $\varepsilon\in\left( 0,\infty\right) $ then $\varepsilon\left\vert f\right\vert \leq\varepsilon\left\vert g\right\vert \in P_{\ast}\left( E\right) $. Using once again Lemma \ref{elem} $\mathrm{(iii)}$, we conclude that $E_{\ast}$ is solid in $E$, and the lemma follows. \end{proof} Throughout this paper, if $I$ is an ideal in a Riesz space $E$, then the equivalence class in the quotient Riesz space $E/I$ of an element $f\in E$ will be denoted by $I\left( f\right) $. In other words, the canonical surjection from $E$ onto $E/I$, which is a Riesz homomorphism, will be denoted by $I$ as well (we refer the reader to \cite[Section 18]{LZ71} for quotient Riesz spaces).
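For a concrete illustration of these notions, consider the lexicographically ordered plane $E=\mathbb{R}^{2}$ and observe that, exactly as for the examples given later in the paper, the meet with the fixed positive element $\left( 1,0\right) $ yields a truncation: \[ \left( x,y\right) ^{\ast}=\left( x,y\right) \wedge\left( 1,0\right) \text{, for all }\left( x,y\right) \in E^{+}. \] If $\left( x,y\right) \in E^{+}$ with $x>0$, then $\varepsilon\left( x,y\right) \wedge\left( 1,0\right) =\left( 1,0\right) \neq\varepsilon\left( x,y\right) $ as soon as $\varepsilon x>1$, while $\varepsilon\left( 0,\left\vert y\right\vert \right) \leq\left( 1,0\right) $ for all $\varepsilon\in\left( 0,\infty\right) $. Hence \[ E_{\ast}=\left\{ 0\right\} \times\mathbb{R}, \] which is indeed an ideal in $E$, and the quotient $E/E_{\ast}$ is Riesz isomorphic to $\mathbb{R}$ via $E_{\ast}\left( \left( x,y\right) \right) \mapsto x$.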
In what follows, we show that the quotient Riesz space $E/E_{\ast}$ can be equipped with a truncation in a natural way. \begin{proposition} \label{quotient}Let $E$ be a truncated Riesz space. Then, the unary operation $E_{\ast}\left( f\right) \mapsto E_{\ast}\left( f\right) ^{\ast}$ on $\left( E/E_{\ast}\right) ^{+}$ given by \[ E_{\ast}\left( f\right) ^{\ast}=E_{\ast}\left( f^{\ast}\right) \text{, for all }f\in E^{+}, \] is a truncation on $E/E_{\ast}$. Moreover, the truncated Riesz space $E/E_{\ast}$ has no nonzero $^{\ast}$-infinitely small elements. \end{proposition} \begin{proof} Let $f,g\in E$ be such that $E_{\ast}\left( f\right) =E_{\ast}\left( g\right) $. Hence, $f-g\in E_{\ast}$ and so $\left\vert f-g\right\vert ^{\ast}\in E_{\ast}$ because $E_{\ast}$ is an ideal and $\left\vert f-g\right\vert ^{\ast}\leq\left\vert f-g\right\vert $. Using Lemma \ref{elem} $\mathrm{(v)}$, we get $\left\vert f^{\ast}-g^{\ast}\right\vert \in E_{\ast}$ and thus $E_{\ast}\left( f^{\ast}\right) =E_{\ast}\left( g^{\ast}\right) $. Moreover, if $f,g\in E^{+}$ then \[ E_{\ast}\left( f^{\ast}\right) \wedge E_{\ast}\left( g\right) =E_{\ast}\left( f^{\ast}\wedge g\right) =E_{\ast}\left( f\wedge g^{\ast}\right) =E_{\ast}\left( f\right) \wedge E_{\ast}\left( g^{\ast}\right) . \] It follows that the equality \[ E_{\ast}\left( f\right) ^{\ast}=E_{\ast}\left( f^{\ast}\right) \text{, for all }f\in E^{+}, \] defines a truncation on $E/E_{\ast}$. Furthermore, pick $f\in E^{+}$ and assume that \[ \varepsilon E_{\ast}\left( f\right) \in P_{\ast}\left( E/E_{\ast}\right) \text{, for all }\varepsilon\in\left( 0,\infty\right) . \] For every $\varepsilon\in\left( 0,\infty\right) $, we can write \[ E_{\ast}\left( \varepsilon f\right) =\varepsilon E_{\ast}\left( f\right) =\left( \varepsilon E_{\ast}\left( f\right) \right) ^{\ast}=\left( E_{\ast}\left( \varepsilon f\right) \right) ^{\ast}=E_{\ast}\left( \left( \varepsilon f\right) ^{\ast}\right) .
\] Therefore, \[ \varepsilon f-\left( \varepsilon f\right) ^{\ast}\in E_{\ast}\text{, for all }\varepsilon\in\left( 0,\infty\right) . \] In particular, \[ \varepsilon f-\left( \varepsilon f\right) ^{\ast}\in P_{\ast}\left( E\right) \text{, for all }\varepsilon\in\left( 0,\infty\right) . \] Accordingly, if $\varepsilon\in\left( 0,\infty\right) $ then \begin{align*} \varepsilon f & =\varepsilon f-\left( \varepsilon f\right) ^{\ast}+\left( \varepsilon f\right) ^{\ast}=\left( \varepsilon f-\left( \varepsilon f\right) ^{\ast}\right) ^{\ast}+\left( \varepsilon f\right) ^{\ast}\\ & \leq2\left[ \left( \varepsilon f-\left( \varepsilon f\right) ^{\ast}\right) ^{\ast}\vee\left( \varepsilon f\right) ^{\ast}\right] =2\left[ \left( \varepsilon f-\left( \varepsilon f\right) ^{\ast}\right) \vee\varepsilon f\right] ^{\ast}. \end{align*} It follows that \[ 0\leq\frac{\varepsilon}{2}f\leq\left[ \left( \varepsilon f-\left( \varepsilon f\right) ^{\ast}\right) \vee\varepsilon f\right] ^{\ast}\in P_{\ast}\left( E\right) \text{, for all }\varepsilon\in\left( 0,\infty\right) . \] But then $f\in E_{\ast}$, so $E_{\ast}\left( f\right) =0$. Consequently, the truncated Riesz space $E/E_{\ast}$ has no nonzero $^{\ast}$-infinitely small elements, as desired. \end{proof} \section{Spectrum of a truncated Riesz space} Let $E,F$ be two truncated Riesz spaces. A Riesz homomorphism $T:E\rightarrow F$ is called a $^{\ast}$-\textsl{homomorphism} if $T$ preserves truncations, i.e., \[ T\left( f^{\ast}\right) =T\left( f\right) ^{\ast}\text{, for all }f\in E^{+}. \] For instance, it follows directly from Proposition \ref{quotient} that the canonical surjection $E_{\ast}:E\rightarrow E/E_{\ast}$ is a $^{\ast}$-homomorphism. Clearly, if $T:E\rightarrow F$ is a bijective $^{\ast}$-homomorphism, then its inverse $T^{-1}:F\rightarrow E$ is a $^{\ast}$-homomorphism. In this situation, $T$ is called a $^{\ast}$\textsl{-isomorphism} and $E,F$ are said to be $^{\ast}$\textsl{-isomorphic}.
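Here is a simple way to produce $^{\ast}$-homomorphisms. Assume that the truncations of $E$ and $F$ are given by meets with fixed positive elements, say $f^{\ast}=f\wedge u$ and $g^{\ast}=g\wedge v$ for some $u\in E^{+}$ and $v\in F^{+}$ (truncations of this form will appear among the examples below). Then any Riesz homomorphism $T:E\rightarrow F$ with $T\left( u\right) =v$ is a $^{\ast}$-homomorphism, since Riesz homomorphisms preserve finite meets: \[ T\left( f^{\ast}\right) =T\left( f\wedge u\right) =T\left( f\right) \wedge T\left( u\right) =T\left( f\right) \wedge v=T\left( f\right) ^{\ast}\text{, for all }f\in E^{+}. \]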
Now, we define the \textsl{canonical truncation} $\ast$ on the Riesz space $\mathbb{R}$ of all real numbers by putting \[ r^{\ast}=\min\left\{ 1,r\right\} \text{, for all }r\in\left[ 0,\infty\right) . \] The set of all nonzero $^{\ast}$-homomorphisms from $E$ onto $\mathbb{R}$ is called the \textsl{spectrum} of $E$ and denoted by $\eta E$. \begin{quote} \textsl{From now on, }$\eta E$ \textsl{will be equipped with the topology inherited from the product topology on the Tychonoff space} $\mathbb{R}^{E}$. \end{quote} \noindent Recall here that the product topology on $\mathbb{R}^{E}$ is the coarsest topology on $\mathbb{R}^{E}$ for which all projections $\pi_{f}$ $\left( f\in E\right) $ are continuous, where $\pi_{f}:\mathbb{R}^{E}\rightarrow\mathbb{R}$ is defined by \[ \pi_{f}\left( \phi\right) =\phi\left( f\right) \text{, for all }\phi\in\mathbb{R}^{E}. \] A truncation on the Riesz space $E$ is said to be \textsl{strong} if, for each $f\in E$, there exists $\varepsilon\in\left( 0,\infty\right) $ such that $\varepsilon\left\vert f\right\vert \in P_{\ast}\left( E\right) $. A Riesz space along with a strong truncation is called a \textsl{strongly truncated Riesz space}. It turns out that the spectrum of a strongly truncated Riesz space has an interesting topological property. \begin{lemma} \label{loc}The spectrum $\eta E$ of a strongly truncated Riesz space $E$ is locally compact. \end{lemma} \begin{proof} If $f\in E$, then there exists $\varepsilon_{f}\in\left( 0,\infty\right) $ such that $\varepsilon_{f}\left\vert f\right\vert \in P_{\ast}\left( E\right) $. So, for each $u\in\eta E$, we have \[ \left\vert u\left( f\right) \right\vert =\frac{1}{\varepsilon_{f}}u\left( \left\vert \varepsilon_{f}f\right\vert ^{\ast}\right) =\frac{1}{\varepsilon_{f}}\min\left\{ 1,u\left( \left\vert \varepsilon_{f}f\right\vert \right) \right\} \leq\frac{1}{\varepsilon_{f}}.
\] Accordingly, \[ \eta E\cup\left\{ 0\right\} \subset{\displaystyle\prod\limits_{f\in E}}\left[ -\frac{1}{\varepsilon_{f}},\frac{1}{\varepsilon_{f}}\right] . \] On the other hand, a moment's thought shows that $\eta E\cup\left\{ 0\right\} $ is the intersection of the closed sets \[ \begin{gathered} {\displaystyle\bigcap\limits_{f,g\in E,\,\lambda\in\mathbb{R}}}\left( \pi_{f+\lambda g}-\pi_{f}-\lambda\pi_{g}\right) ^{-1}\left( \left\{ 0\right\} \right) \text{,}\\ {\displaystyle\bigcap\limits_{f\in E}}\left( \left\vert \pi_{f}\right\vert -\pi_{\left\vert f\right\vert }\right) ^{-1}\left( \left\{ 0\right\} \right) \text{, and }{\displaystyle\bigcap\limits_{f\in E^{+}}}\left( \pi_{f^{\ast}}-1\wedge\pi_{f}\right) ^{-1}\left( \left\{ 0\right\} \right) . \end{gathered} \] It follows that $\eta E\cup\left\{ 0\right\} $ is again a closed set in $\mathbb{R}^{E}$. In summary, $\eta E\cup\left\{ 0\right\} $ is a closed subset of a compact set and so it is compact. But then $\eta E$ is locally compact since it is an open subset of a compact Hausdorff space. \end{proof} The following simple lemma is needed to establish the next theorem. \begin{lemma} \label{weak}Let $E$ be a strongly truncated Riesz space. If $f\in E^{+}$ and $f^{\ast}=0$ then $f=0$. \end{lemma} \begin{proof} Choose $\varepsilon\in\left( 0,\infty\right) $ such that $\varepsilon f\in P_{\ast}\left( E\right) $. If $\varepsilon\leq1$ then \[ 0\leq\varepsilon f=\left( \varepsilon f\right) ^{\ast}\leq f^{\ast}=0 \] (where we use Lemma \ref{elem} $\mathrm{(ii)}$). Suppose now that $\varepsilon>1$. We get \[ 0\leq f=f\wedge\varepsilon f=f\wedge\left( \varepsilon f\right) ^{\ast}=f^{\ast}\wedge\varepsilon f=0. \] This completes the proof of the lemma. \end{proof} In fact, Lemma \ref{weak} tells us that any strong truncation on a Riesz space is a weak truncation. Now, the kernel of any $^{\ast}$-homomorphism $u\in\eta E$ is denoted by $\ker u$.
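To keep a concrete picture in mind, anticipating the notation of the next section, let $X$ be a locally compact Hausdorff space and equip the Banach lattice $C_{0}\left( X\right) $ with the strong truncation $f^{\ast}=1\wedge f$. For each $x\in X$, the point evaluation $\delta_{x}:C_{0}\left( X\right) \rightarrow\mathbb{R}$ given by $\delta_{x}\left( f\right) =f\left( x\right) $ is a nonzero Riesz homomorphism and \[ \delta_{x}\left( f^{\ast}\right) =\min\left\{ 1,f\left( x\right) \right\} =\delta_{x}\left( f\right) ^{\ast}\text{, for all }f\in C_{0}\left( X\right) ^{+}, \] so that $\delta_{x}\in\eta C_{0}\left( X\right) $ with $\ker\delta_{x}=\left\{ f\in C_{0}\left( X\right) :f\left( x\right) =0\right\} $. In particular, ${\displaystyle\bigcap\limits_{x\in X}}\ker\delta_{x}=\left\{ 0\right\} $.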
The following theorem will turn out to be crucial for later purposes. Actually, we are indebted to Professor Richard Ball for his significant help. Indeed, while trying to prove the result we ran into a serious difficulty, which was overcome only thanks to his assistance. \begin{theorem} \label{spectrum}Let $E$ be a strongly truncated Riesz space. Then \[ {\displaystyle\bigcap\limits_{u\in\eta E}}\ker u=E_{\ast}. \] \end{theorem} \begin{proof} First, we assume that $E_{\ast}=\left\{ 0\right\} $. We shall prove that if $0<f\in E$, then $u\left( f\right) >0$ for some $u\in\eta E$. Choose $\varepsilon\in\left( 0,\infty\right) $ for which $\varepsilon f\notin P_{\ast}\left( E\right) $. By replacing $f$ by $\varepsilon f$ if needed, we may suppose that $f^{\ast}<f$, so $\left( f-f^{\ast}\right) ^{\ast}>0$ (see Lemma \ref{weak}). Let $P$ be a prime ideal in $E$ such that $\left( f-f^{\ast}\right) ^{\ast}\notin P$ (such an ideal exists by \cite[Theorem 33.5]{LZ71}). If $g\in E^{+}$ then \[ 0<\left( f-f^{\ast}\right) ^{\ast}\leq f-f^{\ast}\leq f-\left( f^{\ast}\wedge g\right) =f-\left( f\wedge g^{\ast}\right) =\left( f-g^{\ast}\right) ^{+}. \] Accordingly, $\left( f-g^{\ast}\right) ^{+}\notin P$ and, as $P$ is prime, \[ g^{\ast}-g^{\ast}\wedge f=\left( g^{\ast}-f\right) ^{+}=\left( f-g^{\ast}\right) ^{-}\in P. \] On the other hand, by Theorem 33.5 in \cite{LZ71}, there exists a prime ideal $Q$, containing $P$, which is maximal with respect to the property of not containing $f^{\ast}$. Such a prime ideal $Q$ indeed exists because $f^{\ast}\notin P$ (observe that $\left( f-f^{\ast}\right) ^{\ast}\leq f^{\ast}$ by Lemma \ref{elem} $\mathrm{(ii)}$, so $f^{\ast}\in P$ would force $\left( f-f^{\ast}\right) ^{\ast}\in P$). Then, from \[ g^{\ast}-g\wedge f^{\ast}\in P\subset Q\text{, for all }g\in E^{+}, \] it follows that the (in)equalities \[ Q\left( g^{\ast}\right) =Q\left( g\wedge f^{\ast}\right) =Q\left( g\right) \wedge Q\left( f^{\ast}\right) \leq Q\left( f^{\ast}\right) \] hold in the quotient Riesz space $E/Q$ for all $g\in E^{+}$.
Let $g\in E^{+}$ and let $\varepsilon\in\left( 0,\infty\right) $ be such that $\varepsilon g\in P_{\ast}\left( E\right) $. So, \[ Q\left( g\right) =\frac{1}{\varepsilon}Q\left( \varepsilon g\right) =\frac{1}{\varepsilon}Q\left( \left( \varepsilon g\right) ^{\ast}\right) \leq\frac{1}{\varepsilon}Q\left( f^{\ast}\right) . \] We derive that $Q\left( f^{\ast}\right) $ is a strong unit in $E/Q$. We claim that $E/Q$ is Riesz isomorphic to $\mathbb{R}$. To this end, let $I$ be a proper ideal in $E/Q$ and set \[ Q^{-1}\left( I\right) =\left\{ g\in E:Q\left( g\right) \in I\right\} . \] Then, $Q^{-1}\left( I\right) $ is an ideal in $E$ containing $Q$. If $f^{\ast}\in Q^{-1}\left( I\right) $ then $Q\left( f^{\ast}\right) \in I$ and so $I=E/Q$ (as $Q\left( f^{\ast}\right) $ is a strong unit in $E/Q$), a contradiction. Accordingly, $f^{\ast}\notin Q^{-1}\left( I\right) $ and so, by maximality, $Q^{-1}\left( I\right) =Q$. We derive straightforwardly that $I=\left\{ 0\right\} $ and, in view of \cite[Theorem 27.1]{LZ71}, there exists a Riesz isomorphism $\varphi:E/Q\rightarrow\mathbb{R}$ with $\varphi\left( Q\left( f^{\ast}\right) \right) =1$. Put $u=\varphi\circ Q$ and notice that $u$ is a Riesz homomorphism. Moreover, if $g\in E^{+}$ then \begin{align*} u\left( g^{\ast}\right) & =\left( \varphi\circ Q\right) \left( g^{\ast}\right) =\varphi\left( Q\left( g^{\ast}\right) \right) =\varphi\left( Q\left( g\right) \wedge Q\left( f^{\ast}\right) \right) \\ & =\varphi\left( Q\left( g\right) \right) \wedge\varphi\left( Q\left( f^{\ast}\right) \right) =\min\left\{ 1,u\left( g\right) \right\} . \end{align*} This yields that $u\in\eta E$. Furthermore, \[ u\left( f\right) \geq u\left( f^{\ast}\right) =\varphi\left( Q\left( f^{\ast}\right) \right) =1>0. \] Consequently, \[ {\displaystyle\bigcap\limits_{u\in\eta E}}\ker u=\left\{ 0\right\} =E_{\ast}. \] Let us now turn to the general case. Pick $f\in E$ such that \[ u\left( f\right) =0\text{, for all }u\in\eta E. \] We claim that $f\in E_{\ast}$.
To this end, choose $\phi\in\eta\left( E/E_{\ast}\right) $ and observe that $\phi\circ E_{\ast}\in\eta E$ (where we use Proposition \ref{quotient}). This means that \[ \phi\left( E_{\ast}\left( f\right) \right) =\left( \phi\circ E_{\ast}\right) \left( f\right) =0. \] By the first case, $E_{\ast}\left( f\right) =0$ and thus $f\in E_{\ast}$. We get the inclusion \[ {\displaystyle\bigcap\limits_{u\in\eta E}}\ker u\subset E_{\ast}. \] The converse inclusion is routine. \end{proof} \section{The representation theorem} This section contains the central result of this paper, viz., a representation theorem for strongly truncated Riesz spaces with no nonzero $^{\ast}$-infinitely small elements. As with most classical representation theorems, our result is based upon a Stone-Weierstrass type theorem, which is presumably well-known. Unfortunately, we have not been able to locate a precise reference for it. We have therefore chosen to provide a detailed proof, which is an adjustment of the proof of the \textquotedblleft algebra version\textquotedblright\ of the theorem (see, for instance, Corollary 4.3.5 in \cite{P80}). In this regard, we need further prerequisites. Let $X$ be a locally compact Hausdorff space. The Riesz space of all real-valued continuous functions on $X$ is denoted by $C\left( X\right) $, as usual. A function $f\in C\left( X\right) $ is said to \textsl{vanish at infinity} if, for every $\varepsilon\in\left( 0,\infty\right) $, the set \[ K\left( f,\varepsilon\right) =\left\{ x\in X:\left\vert f\left( x\right) \right\vert \geq\varepsilon\right\} \] is compact. The collection $C_{0}\left( X\right) $ of such functions is a Riesz subspace of $C\left( X\right) $. Actually, $C_{0}\left( X\right) $ is a Banach lattice (more precisely, an $AM$-space \cite{S74}) under the uniform norm given by \[ \left\Vert f\right\Vert _{\infty}=\sup\left\{ \left\vert f\left( x\right) \right\vert :x\in X\right\} \text{, for all }f\in C_{0}\left( X\right) .
\] If $f\in C_{0}\left( X\right) $, then the real-valued function $f_{\infty}$ defined on the one-point compactification $X_{\infty}=X\cup\left\{ \infty\right\} $ of $X$ by \[ f_{\infty}\left( \infty\right) =0\quad\text{and}\quad f_{\infty}\left( x\right) =f\left( x\right) \text{ if }x\in X \] is the unique extension of $f$ in $C\left( X_{\infty}\right) $ (see, e.g., \cite{E89}). Here too, $C\left( X_{\infty}\right) $ is endowed with its uniform norm defined by \[ \left\Vert f\right\Vert _{\infty}=\sup\left\{ \left\vert f\left( x\right) \right\vert :x\in X_{\infty}\right\} \text{, for all }f\in C\left( X_{\infty}\right) . \] It is an easy exercise to verify that the map $S:C_{0}\left( X\right) \rightarrow C\left( X_{\infty}\right) $ defined by \[ S\left( f\right) =f_{\infty}\text{, for all }f\in C_{0}\left( X\right) \] is an isometry and, simultaneously, a Riesz isomorphism. In what follows, $C_{0}\left( X\right) $ will be identified with the range of $S$, which is a uniformly closed Riesz subspace of $C\left( X_{\infty}\right) $. On the other hand, a subset $D$ of $C_{0}\left( X\right) $ is said to \textsl{vanish nowhere} if for every $x\in X$, there is some $f\in D$ such that $f\left( x\right) \neq0$. Moreover, $D$ is said to \textsl{separate the points} of $X$ if for every $x,y\in X$ with $x\neq y$, we can find some $f\in D$ such that $f\left( x\right) \neq f\left( y\right) $. Following Fremlin in \cite{F74}, we call a \textsl{truncated Riesz subspace} of $C_{0}\left( X\right) $ any Riesz subspace $E$ of $C_{0}\left( X\right) $ for which \[ 1\wedge f\in E\text{, for all }f\in E. \] We are now in a position to prove the suitable version of the Stone-Weierstrass theorem announced above. \begin{lemma} \label{SW}Let $X$ be a locally compact Hausdorff space and $E$ be a truncated Riesz subspace of $C_{0}\left( X\right) $. Then $E$ is uniformly dense in $C_{0}\left( X\right) $ if and only if $E$ vanishes nowhere and separates the points of $X$.
\end{lemma} \begin{proof} We prove the `\textit{if}' part. So, assume that $E$ vanishes nowhere and separates the points of $X$. It is an easy task to check that $E$ separates the points of $X_{\infty}$. Consider the direct sum \[ E\oplus\mathbb{R}=\left\{ f+r:f\in E\text{ and }r\in\mathbb{R}\right\} . \] Clearly, $E\oplus\mathbb{R}$ is a vector subspace of $C\left( X_{\infty}\right) $ containing the constant functions on $X_{\infty}$ and separating the points of $X_{\infty}$. Moreover, an easy calculation reveals that the positive part $\left( f+r\right) ^{+}$ in $C\left( X_{\infty}\right) $ of a function $f+r\in E\oplus\mathbb{R}$ is given by \[ \left( f+r\right) ^{+}=\left\{ \begin{array}[c]{l} f^{+}-r\left( 1\wedge\dfrac{1}{r}f^{-}\right) +r\text{ if }r>0\\ \\ f^{+}+r\left( 1\wedge\dfrac{-1}{r}f^{+}\right) \text{ if }r<0\\ \\ f^{+}\text{ if }r=0\text{.} \end{array} \right. \] Since $E$ is a truncated Riesz subspace of $C_{0}\left( X\right) $, the direct sum $E\oplus\mathbb{R}$ is a Riesz subspace of $C\left( X_{\infty}\right) $. Using the classical Stone-Weierstrass theorem for compact Hausdorff spaces (see, for instance, Theorem 2.1.1 in \cite{M91}), we derive that $E\oplus\mathbb{R}$ is uniformly dense in $C\left( X_{\infty}\right) $. Accordingly, if $f\in C_{0}\left( X\right) $ then there exist sequences $\left( f_{n}\right) $ in $E$ and $\left( r_{n}\right) $ in $\mathbb{R}$ such that $\lim\left( f_{n}+r_{n}\right) =f$ uniformly. Hence, for $n\in\left\{ 1,2,...\right\} $, we have \[ \left\vert r_{n}\right\vert =\left\vert f_{n}\left( \infty\right) +r_{n}-f\left( \infty\right) \right\vert \leq\left\Vert f_{n}+r_{n}-f\right\Vert _{\infty}. \] Thus, $\lim r_{n}=0$ and so $\lim f_{n}=f$. This means that $E$ is uniformly dense in $C_{0}\left( X\right) $. We now focus on the `\textit{only if}' part. Suppose that $E$ is uniformly dense in $C_{0}\left( X\right) $. Obviously, $C_{0}\left( X\right) $ vanishes nowhere and so does $E$.
We claim that $E$ separates the points of $X$. To this end, pick $x,y\in X$ with $x\neq y$. Since $C_{0}\left( X\right) $ separates the points of $X$, there exists $f\in C_{0}\left( X\right) $ such that $f\left( x\right) \neq f\left( y\right) $. Using the density condition, there exists $g\in E$ such that $\left\Vert f-g\right\Vert _{\infty}<\left\vert f\left( x\right) -f\left( y\right) \right\vert /2$. Consequently, \begin{align*} \left\vert g\left( x\right) -g\left( y\right) \right\vert & =\left\vert g\left( x\right) -f\left( x\right) +f\left( x\right) -f\left( y\right) +f\left( y\right) -g\left( y\right) \right\vert \\ & \geq\left\vert f\left( x\right) -f\left( y\right) \right\vert -2\left\Vert f-g\right\Vert _{\infty}>0. \end{align*} We get $g\left( x\right) \neq g\left( y\right) $, which completes the proof of the lemma. \end{proof} We have gathered at this point all the ingredients for the main result of this paper. \begin{theorem} \label{main}Let $E$ be a strongly truncated Riesz space with no nonzero $^{\ast}$-infinitely small elements. Then the map $T:E\rightarrow C_{0}\left( \eta E\right) $ defined by \[ T\left( f\right) \left( u\right) =u\left( f\right) \text{, for all }f\in E\text{ and }u\in\eta E \] is an injective $^{\ast}$-homomorphism with uniformly dense range. \end{theorem} \begin{proof} Pick $f\in E$ and define the evaluation $\delta_{f}:\eta E\rightarrow\mathbb{R}$ by putting \[ \delta_{f}\left( u\right) =u\left( f\right) \text{, for all }u\in\eta E. \] We claim that $\delta_{f}\in C_{0}\left( \eta E\right) $. To this end, observe that $\delta_{f}$ is continuous since it is the restriction of the projection $\pi_{f}$ to $\eta E$. Moreover, if $\varepsilon\in\left( 0,\infty\right) $ then \[ K\left( \delta_{f},\varepsilon\right) =\left\{ u\in\eta E:\left\vert \delta_{f}\left( u\right) \right\vert \geq\varepsilon\right\} \subset\eta E\cup\left\{ 0\right\} .
\] As $\eta E\cup\left\{ 0\right\} $ is compact (see the proof of Lemma \ref{loc}), so is $K\left( \delta_{f},\varepsilon\right) $ (notice that $K\left( \delta_{f},\varepsilon\right) $ is a closed set in $\eta E\cup\left\{ 0\right\} $). It follows that $\delta_{f}$ vanishes at infinity, as desired. Accordingly, the map $T:E\rightarrow C_{0}\left( \eta E\right) $ given by \[ T\left( f\right) =\delta_{f}\text{, for all }f\in E \] is well-defined. It is an easy task to show that $T$ is a Riesz homomorphism. Moreover, since $E$ has no nonzero $^{\ast}$-infinitely small elements, Theorem \ref{spectrum} yields directly that $T$ is one-to-one. Furthermore, if $f\in E^{+}$ and $u\in\eta E$, then \begin{align*} \left( 1\wedge T\left( f\right) \right) \left( u\right) & =\left( 1\wedge\delta_{f}\right) \left( u\right) =\min\left\{ 1,\delta_{f}\left( u\right) \right\} \\ & =\min\left\{ 1,u\left( f\right) \right\} =u\left( f^{\ast}\right) =T\left( f^{\ast}\right) \left( u\right) . \end{align*} Since $u$ is arbitrary in $\eta E$, we get \[ T\left( f^{\ast}\right) =1\wedge T\left( f\right) \text{, for all }f\in E^{+}. \] This means that $T$ is a $^{\ast}$-homomorphism. It remains to show that the range $\operatorname{Im}\left( T\right) $ of $T$ is uniformly dense in $C_{0}\left( \eta E\right) $. For, let $u,v\in\eta E$ with $u\neq v$. Hence, $u\left( f\right) \neq v\left( f\right) $ for some $f\in E$. Therefore, \[ \delta_{f}\left( u\right) =u\left( f\right) \neq v\left( f\right) =\delta_{f}\left( v\right) , \] from which it follows that $\operatorname{Im}\left( T\right) $ separates the points of $\eta E$. Moreover, since $\eta E$ does not contain the zero homomorphism, $\operatorname{Im}\left( T\right) $ vanishes nowhere. This together with Lemma \ref{loc} and Lemma \ref{SW} completes the proof. \end{proof} The following remark deserves to be emphasized.
\begin{remark} \label{Rq}\emph{Under the conditions of Theorem \ref{main}, we may consider }$E$\emph{ as a normed subspace of }$C_{0}\left( \eta E\right) $\emph{. In this situation, it is readily checked that the closed unit ball }$B_{\infty}$\emph{ of }$E$\emph{ coincides with the set of all }$f\in E$\emph{ such that }$\left\vert f\right\vert \in P_{\ast}\left( E\right) $\emph{, i.e.,} \[ B_{\infty}=\left\{ f\in E:\left\Vert f\right\Vert _{\infty}\leq1\right\} =\left\{ f\in E:\left\vert f\right\vert \in P_{\ast}\left( E\right) \right\} . \] \end{remark} We end this section with the following observation. From Theorem \ref{main} it follows directly that any strongly truncated Riesz space $E$ with no nonzero $^{\ast}$-infinitely small elements is Archimedean. It is plausible therefore to think that any truncated Riesz space with no nonzero $^{\ast}$-infinitely small elements is Archimedean. However, the next example shows that this is not true. \begin{example} Assume that the Euclidean plane $E=\mathbb{R}^{2}$ is furnished with its lexicographic ordering. We know that $E$ is a non-Archimedean Riesz space. Clearly, the formula \[ \left( x,y\right) ^{\ast}=\left( x,y\right) \wedge\left( 0,1\right) \text{, for all }\left( x,y\right) \in E^{+}, \] defines a truncation on $E$. Let $\left( x,y\right) \in E^{+}$ be such that $\varepsilon\left( x,y\right) \in P_{\ast}\left( E\right) $ for all $\varepsilon\in\left( 0,\infty\right) $. We have \[ \left( \varepsilon x,\varepsilon y\right) =\varepsilon\left( x,y\right) =\left[ \varepsilon\left( x,y\right) \right] ^{\ast}=\left( \varepsilon x,\varepsilon y\right) \wedge\left( 0,1\right) , \] which means that $\left( \varepsilon x,\varepsilon y\right) \leq\left( 0,1\right) $ for all $\varepsilon\in\left( 0,\infty\right) $. Assume that $\varepsilon x<0$ for some $\varepsilon\in\left( 0,\infty\right) $. Then $x<0$ which is impossible since $\left( x,y\right) \geq\left( 0,0\right) $.
Thus, $\varepsilon x=0$ and $\varepsilon y\leq1$ for all $\varepsilon\in\left( 0,\infty\right) $. Therefore, $x=0$ and $y\leq0$. But then $x=y=0$ because $\left( x,y\right) \in E^{+}$. Accordingly, $E$ is a non-Archimedean truncated Riesz space with no nonzero $^{\ast}$-infinitely small elements. \end{example} Notice finally that the truncation in the above example is not strong. \section{Uniform completeness with respect to a truncation} In this section, our purpose is to find a necessary and sufficient condition on the strongly truncated Riesz space $E$ with no nonzero $^{\ast}$-infinitely small elements for $T$ in Theorem \ref{main} to be a $^{\ast}$-isomorphism. As could be expected, what we need is a certain completeness condition. We proceed to the details. Let $E$ be a truncated Riesz space. A sequence $\left( f_{n}\right) $ in $E$ is said to $^{\ast}$-\textsl{converge} (or to be $^{\ast}$-\textsl{convergent}) in $E$ if there exists $f\in E$ such that, for every $\varepsilon\in\left( 0,\infty\right) $ there is $n_{\varepsilon}\in\left\{ 1,2,...\right\} $ for which \[ \varepsilon\left\vert f_{n}-f\right\vert \in P_{\ast}\left( E\right) \text{, for all }n\in\left\{ n_{\varepsilon},n_{\varepsilon}+1,...\right\} . \] Such an element $f$ is called a $^{\ast}$-\textsl{limit} of the sequence $\left( f_{n}\right) $ in $E$. As we shall see next, $^{\ast}$-limits are unique, provided $E$ has no nonzero $^{\ast}$-infinitely small elements. \begin{proposition} Let $E$ be a truncated Riesz space. Then any sequence in $E$ has at most one $^{\ast}$-limit if and only if $E$ has no nonzero $^{\ast}$-infinitely small elements. \end{proposition} \begin{proof} \textsl{Sufficiency.} Choose a sequence $\left( f_{n}\right) $ in $E$ with two $^{\ast}$-limits $f$ and $g$ in $E$.
Let $\varepsilon\in\left( 0,\infty\right) $ and $n_{1},n_{2}\in\left\{ 1,2,...\right\} $ be such that \[ 2\varepsilon\left\vert f_{n}-f\right\vert \in P_{\ast}\left( E\right) \text{, for all }n\in\left\{ n_{1},n_{1}+1,...\right\} \] and \[ 2\varepsilon\left\vert f_{n}-g\right\vert \in P_{\ast}\left( E\right) \text{, for all }n\in\left\{ n_{2},n_{2}+1,...\right\} . \] Put $n_{0}=\max\left\{ n_{1},n_{2}\right\} $ and observe that $2\varepsilon\left\vert f_{n_{0}}-f\right\vert \in P_{\ast}\left( E\right) $ and $2\varepsilon\left\vert f_{n_{0}}-g\right\vert \in P_{\ast}\left( E\right) $. This together with Lemma \ref{elem} $\mathrm{(iv)}$ yields that \[ \varepsilon\left\vert f-g\right\vert \leq\varepsilon\left( \left\vert f_{n_{0}}-f\right\vert +\left\vert f_{n_{0}}-g\right\vert \right) \leq2\varepsilon\left\vert f_{n_{0}}-f\right\vert \vee2\varepsilon\left\vert f_{n_{0}}-g\right\vert \in P_{\ast}\left( E\right) . \] Hence, $\varepsilon\left\vert f-g\right\vert \in P_{\ast}\left( E\right) $ (where we use Lemma \ref{elem} $\mathrm{(iii)}$). As $\varepsilon$ is arbitrary in $\left( 0,\infty\right) $, we get $f=g$ and the sufficiency follows. \textsl{Necessity.} Pick a $^{\ast}$-infinitely small element $f\in E$ and put \[ f_{n}=\frac{1}{n}f\text{, for all }n\in\left\{ 1,2,...\right\} . \] If $\varepsilon\in\left( 0,\infty\right) $ and $n\in\left\{ 1,2,...\right\} $, then \[ \varepsilon\left\vert f_{n}\right\vert =\frac{\varepsilon}{n}\left\vert f\right\vert \in P_{\ast}\left( E\right) . \] This yields that $0$ is a $^{\ast}$-limit of $\left( f_{n}\right) $ in $E$. Analogously, for $\varepsilon\in\left( 0,\infty\right) $ and $n\in\left\{ 1,2,...\right\} $, we have \[ \varepsilon\left\vert f_{n}-f\right\vert =\varepsilon\left( 1-\frac{1}{n}\right) \left\vert f\right\vert \in P_{\ast}\left( E\right) . \] This shows that $f$ is a $^{\ast}$-limit of $\left( f_{n}\right) $ in $E$. By uniqueness of $^{\ast}$-limits, we conclude $f=0$ and the proposition follows.
\end{proof} A sequence $\left( f_{n}\right) $ in the truncated Riesz space $E$ is called a $^{\ast}$-\textsl{Cauchy sequence} if, for every $\varepsilon\in\left( 0,\infty\right) $, there exists $n_{\varepsilon}\in\left\{ 1,2,...\right\} $ such that \[ \varepsilon\left\vert f_{m}-f_{n}\right\vert \in P_{\ast}\left( E\right) \text{, for all }m,n\in\left\{ n_{\varepsilon},n_{\varepsilon}+1,...\right\} . \] The space $E$ is said to be $^{\ast}$-\textsl{uniformly complete} if any $^{\ast}$-Cauchy sequence in $E$ is $^{\ast}$-convergent in $E$. Let us give a simple example. \begin{example} \label{exp}Let $X$ be a locally compact Hausdorff space and assume that the Banach lattice $E=C_{0}\left( X\right) $ is equipped with its strong truncation given by \[ f^{\ast}=1\wedge f\text{, for all }f\in E. \] Clearly, $E$ has no nonzero $^{\ast}$-infinitely small elements. Let $\left( f_{n}\right) $ be a $^{\ast}$-Cauchy sequence in $E$. For $\varepsilon\in\left( 0,\infty\right) $, we can find $n_{\varepsilon}\in\left\{ 1,2,...\right\} $ such that \[ \varepsilon\left\vert f_{m}-f_{n}\right\vert \in P_{\ast}\left( E\right) \text{, for all }m,n\in\left\{ n_{\varepsilon},n_{\varepsilon}+1,...\right\} . \] Hence, if $m,n\in\left\{ n_{\varepsilon},n_{\varepsilon}+1,...\right\} $ then \[ \varepsilon\left\vert f_{m}\left( x\right) -f_{n}\left( x\right) \right\vert \leq1\text{, for all }x\in X. \] But then $\varepsilon\left\Vert f_{m}-f_{n}\right\Vert _{\infty}\leq1$ and so $\left( f_{n}\right) $ is a norm Cauchy sequence in $E$. As $E$ is norm complete, we derive that $\left( f_{n}\right) $ has a norm limit $f$ in $E$. Therefore, if $n\in\left\{ n_{\varepsilon},n_{\varepsilon}+1,...\right\} $ and $x\in X$ then \[ \varepsilon\left\vert f_{n}\left( x\right) -f\left( x\right) \right\vert \leq\varepsilon\left\Vert f_{n}-f\right\Vert _{\infty}\leq1.
\] This yields directly that \[ \varepsilon\left\vert f_{n}-f\right\vert \in P_{\ast}\left( E\right) \text{, for all }n\in\left\{ n_{\varepsilon},n_{\varepsilon}+1,...\right\} . \] It follows that $\left( f_{n}\right) $ is $^{\ast}$-convergent to $f$ in $E$, proving that $E$ is $^{\ast}$-uniformly complete. \end{example} The following is the main result of this section. \begin{theorem} \label{complete}Let $E$ be a $^{\ast}$-uniformly complete strongly truncated Riesz space with no nonzero $^{\ast}$-infinitely small elements. Then the map $T:E\rightarrow C_{0}\left( \eta E\right) $ defined by \[ T\left( f\right) \left( u\right) =u\left( f\right) \text{, for all }f\in E\text{ and }u\in\eta E \] is a $^{\ast}$-isomorphism. \end{theorem} \begin{proof} In view of Theorem \ref{main}, the proof would be complete once we show that $T\left( E\right) $ is uniformly closed in $C_{0}\left( \eta E\right) $. Hence, let $\left( f_{n}\right) $ be a sequence in $E$ such that $\left( T\left( f_{n}\right) \right) $ converges uniformly to $g\in C_{0}\left( \eta E\right) $. It follows that $\left( T\left( f_{n}\right) \right) $ is a uniformly Cauchy sequence in $C_{0}\left( \eta E\right) $. So, if $\varepsilon\in\left( 0,\infty\right) $ then there exists $n_{\varepsilon}\in\left\{ 1,2,...\right\} $ such that, whenever $m,n\in\left\{ n_{\varepsilon},n_{\varepsilon}+1,...\right\} $, we have $\left\Vert T\left( f_{m}\right) -T\left( f_{n}\right) \right\Vert _{\infty}<1/\varepsilon$. Choose $u\in\eta E$ and $m,n\in\left\{ n_{\varepsilon},n_{\varepsilon}+1,...\right\} $. We derive that \[ \left\vert T\left( f_{m}\right) \left( u\right) -T\left( f_{n}\right) \left( u\right) \right\vert \leq\left\Vert T\left( f_{m}\right) -T\left( f_{n}\right) \right\Vert _{\infty}<1/\varepsilon. \] Therefore, \[ u\left( \left\vert f_{m}-f_{n}\right\vert \right) =\left\vert T\left( f_{m}\right) \left( u\right) -T\left( f_{n}\right) \left( u\right) \right\vert <1/\varepsilon.
\] So \[ u\left( \left( \varepsilon\left\vert f_{m}-f_{n}\right\vert \right) ^{\ast}\right) =\min\left\{ 1,u\left( \varepsilon\left\vert f_{m}-f_{n}\right\vert \right) \right\} =u\left( \varepsilon\left\vert f_{m}-f_{n}\right\vert \right) . \] Since $u$ is arbitrary in $\eta E$, Lemma \ref{spectrum} yields that $\varepsilon\left\vert f_{m}-f_{n}\right\vert \in P_{\ast}\left( E\right) $, meaning that $\left( f_{n}\right) $ is a $^{\ast}$-Cauchy sequence in $E$. This together with the $^{\ast}$-uniform completeness of $E$ yields that $\left( f_{n}\right) $ is $^{\ast}$-convergent to some $f\in E$. Hence, we get \[ \varepsilon\left\vert f_{n}-f\right\vert \in P_{\ast}\left( E\right) \text{, for all }n\in\left\{ n_{\varepsilon},n_{\varepsilon}+1,...\right\} . \] So, if $n\in\left\{ n_{\varepsilon},n_{\varepsilon}+1,...\right\} $ and $u\in\eta E$ then \begin{align*} \left\vert T\left( f_{n}\right) \left( u\right) -T\left( f\right) \left( u\right) \right\vert & =\frac{1}{\varepsilon}u\left( \varepsilon\left\vert f_{n}-f\right\vert \right) \\ & =\frac{1}{\varepsilon}u\left( \left( \varepsilon\left\vert f_{n}-f\right\vert \right) ^{\ast}\right) \leq\frac{1}{\varepsilon}. \end{align*} We quickly get $\left\Vert T\left( f_{n}\right) -T\left( f\right) \right\Vert _{\infty}\leq1/\varepsilon$, from which it follows that $\left( T\left( f_{n}\right) \right) $ converges uniformly to $T\left( f\right) $. By uniqueness, we get $g=T\left( f\right) \in T\left( E\right) $. Consequently, $T\left( E\right) $ is closed in $C_{0}\left( \eta E\right) $, which is the desired result. \end{proof} As for the remark at the end of the previous section, it follows quite easily from Theorem \ref{complete} that any $^{\ast}$-uniformly complete strongly truncated Riesz space with no nonzero $^{\ast}$-infinitely small elements is relatively uniformly complete (in the usual sense \cite[Pages 19,20]{L79} or \cite[Page 248]{LZ71}).
Indeed, it is well known that any Banach lattice (and so $C_{0}\left( X\right) $) is relatively uniformly complete (see, for instance, Theorem 15.3 in \cite{Z97}). Nevertheless, a $^{\ast}$-uniformly complete truncated Riesz space with no nonzero $^{\ast}$-infinitely small elements need not be relatively uniformly complete. An example in this direction is provided next. \begin{example} Let $X=\left\{ 0\right\} \cup\left\{ 1/2\right\} \cup\left[ 1,2\right] $ with its usual topology. It is routine to show that the set $E$ of all piecewise polynomial functions $f$ in $C\left( X\right) $ with $f\left( 1/2\right) =0$ is a Riesz subspace of $C\left( X\right) $. Clearly, the formula \[ f^{\ast}=f\wedge\mathcal{X}_{\left\{ 0,\frac{1}{2}\right\} }\text{, for all }f\in E \] defines a \emph{(}non strong\emph{)} truncation on $E$, where $\mathcal{X}_{\left\{ 0,\frac{1}{2}\right\} }$ is the characteristic function of the pair $\left\{ 0,1/2\right\} $. A short moment's thought reveals that $E$ has no nontrivial $^{\ast}$-infinitely small elements. Consider now a $^{\ast}$-Cauchy sequence $\left( f_{n}\right) $ in $E$. Given $\varepsilon\in\left( 0,\infty\right) $, there is $n_{0}\in\left\{ 1,2,...\right\} $ for which \[ \left\vert f_{n}-f_{n_{0}}\right\vert \leq\varepsilon\mathcal{X}_{\left\{ 0,\frac{1}{2}\right\} }\text{, for all }n\in\left\{ n_{0},n_{0}+1,...\right\} . \] Thus, if $n\in\left\{ n_{0},n_{0}+1,...\right\} $ and $x\in\left\{ 1/2\right\} \cup\left[ 1,2\right] $ then \[ f_{n}\left( x\right) =f_{n_{0}}\left( x\right) \text{\quad and\quad }\left\vert f_{n}\left( 0\right) -f_{n_{0}}\left( 0\right) \right\vert \leq\varepsilon. \] In particular, $\left( f_{n}\left( 0\right) \right) $ is a Cauchy sequence in $\mathbb{R}$, from which it follows that $\left( f_{n}\left( 0\right) \right) $ converges to some real number $a$.
This yields quickly that $\left( f_{n}\right) $ is $^{\ast}$-convergent in $E$ to the function $f\in E$ given by \[ f\left( 0\right) =a\text{\quad and\quad}f\left( x\right) =f_{n_{0}}\left( x\right) \text{ for all }x\in\left\{ 1/2\right\} \cup\left[ 1,2\right] . \] We conclude that $E$ is $^{\ast}$-uniformly complete. At this point, we show that $E$ is not relatively uniformly complete. Indeed, by the Weierstrass Approximation Theorem, there exists a polynomial sequence $\left( p_{n}\right) $ which converges uniformly on $\left[ 1,2\right] $ to the function $f$ defined by \[ f\left( x\right) =\sqrt{x}\text{, for all }x\in\left[ 0,\infty\right) . \] Define a sequence $\left( q_{n}\right) $ in $E$ by \[ q_{n}=p_{n}\chi_{\left[ 1,2\right] }\text{, for all }n\in\left\{ 1,2,...\right\} . \] The uniform limit of $\left( q_{n}\right) $ in $C\left( X\right) $ is the function $f\chi_{\left[ 1,2\right] }$. Assume that $\left( q_{n}\right) $ converges relatively uniformly in $E$ to a function $g\in E$. So, there exists $h\in E$ such that, for every $\varepsilon\in\left( 0,\infty\right) $ there is $n_{\varepsilon}\in\left\{ 1,2,...\right\} $ for which \[ \left\vert q_{n}-g\right\vert \leq\varepsilon h\text{, for all }n\in\left\{ n_{\varepsilon},n_{\varepsilon}+1,...\right\} . \] This leads straightforwardly to the contradiction $f\chi_{\left[ 1,2\right] }=g\in E$ (although $\sqrt{x}$ is not a piecewise polynomial on $\left[ 1,2\right] $), meaning that $E$ is not relatively uniformly complete. \end{example} \section{Applications} The main purpose of this section is to show how we can apply our central result (Theorem \ref{main}) to derive representation theorems from the existing literature. We will be interested first in the classical Kakutani Representation Theorem (see, for instance, Theorem 45.3 in \cite{LZ71}), namely, for any Archimedean Riesz space $E$ with a strong unit $e$, there exists a compact Hausdorff space $K$ such that $E$ and $C\left( K\right) $ are Riesz isomorphic in such a way that $e$ is identified with the constant function $1$ on $K$.
Recall here that a positive element $e$ in a Riesz space $E$ is called a \textsl{strong unit} if, for every $f\in E$, there exists $\varepsilon\in\left( 0,\infty\right) $ such that $\left\vert f\right\vert \leq\varepsilon e$. The Kakutani Representation Theorem follows from our main theorem, as we shall now see. \begin{corollary} Let $E$ be an Archimedean Riesz space with a strong unit $e>0$. Then $\eta E$ is compact and $E$ is Riesz isomorphic to a uniformly dense Riesz subspace of $C\left( \eta E\right) $ in such a way that $e$ is identified with the constant function $1$ on $\eta E$. \end{corollary} \begin{proof} It is readily checked that $E$ is a strongly truncated Riesz space under the truncation given by \[ f^{\ast}=e\wedge f\text{, for all }f\in E^{+}. \] Moreover, since $E$ is Archimedean and $e$ is a strong unit, we can easily verify that $E$ has no nonzero $^{\ast}$-infinitely small elements. Now, let $u\in\mathbb{R}^{E}$. We claim that $u\in\eta E$ if and only if $u$ is a Riesz homomorphism with $u\left( e\right) =1$. To this end, assume that $u$ is a Riesz homomorphism with $u\left( e\right) =1$. So, if $f\in E^{+}$ then \[ u\left( f^{\ast}\right) =u\left( e\wedge f\right) =\min\left\{ u\left( e\right) ,u\left( f\right) \right\} =\min\left\{ 1,u\left( f\right) \right\} . \] We derive that $u\in\eta E$. Conversely, suppose that $u\in\eta E$. We have to show that $u\left( e\right) =1$. Observe that \[ u\left( e\right) =u\left( e\wedge e\right) =u\left( e^{\ast}\right) =\min\left\{ 1,u\left( e\right) \right\} \leq1. \] Moreover, we have $u\neq0$ and so $u\left( f\right) \neq0$ for some $f\in E^{+}$. By dividing by $u\left( f\right) $ if necessary, we can assume that $u\left( f\right) =1$. Thus, \begin{align*} 1 & =\min\left\{ 1,u\left( f\right) \right\} =u\left( f^{\ast}\right) =u\left( e\wedge f\right) \\ & =\min\left\{ u\left( e\right) ,u\left( f\right) \right\} =\min\left\{ u\left( e\right) ,1\right\} .
\end{align*} Therefore, $u\left( e\right) \geq1$ and thus $u\left( e\right) =1$. It follows therefore that $\eta E$ is a closed set in the compact space $\eta E\cup\left\{ 0\right\} $ (see the proof of Lemma \ref{loc}) and so $\eta E$ is compact. We derive in particular that $C\left( \eta E\right) =C_{0}\left( \eta E\right) $. This together with Theorem \ref{main} yields that $E$ is Riesz isomorphic to a dense truncated Riesz subspace of $C\left( \eta E\right) $ \textit{via} the map $T:E\rightarrow C\left( \eta E\right) $ defined by \[ T\left( f\right) \left( u\right) =u\left( f\right) \text{, for all }u\in\eta E\text{ and }f\in E. \] Hence, if $u\in\eta E$ then \[ T\left( e\right) \left( u\right) =u\left( e\right) =1. \] This completes the proof of the corollary. \end{proof} In the next paragraph, we discuss another representation theorem, obtained by Fremlin in \cite[83L (d)]{F74}. A norm $\left\Vert .\right\Vert $ on a Riesz space $E$ is called a \textsl{Riesz} (or, \textsl{lattice}) \textsl{norm} whenever $\left\vert f\right\vert \leq\left\vert g\right\vert $ in $E$ implies $\left\Vert f\right\Vert \leq\left\Vert g\right\Vert $. The Riesz norm on $E$ is called a \textsl{Fatou norm} if for every increasing net $\left( f_{a}\right) _{a\in A}$ in $E$ with supremum $f\in E$ it follows that \[ \left\Vert f\right\Vert =\sup\left\{ \left\Vert f_{a}\right\Vert :a\in A\right\} . \] Furthermore, the Riesz norm on $E$ is called an $M$-\textsl{norm} if \[ \left\Vert f\vee g\right\Vert =\max\left\{ \left\Vert f\right\Vert ,\left\Vert g\right\Vert \right\} \text{, for all }f,g\in E^{+}. \] Fremlin proved that if $E$ is a Riesz space with a Fatou $M$-norm such that the supremum \[ \sup\left\{ g\in E:0\leq g\leq f\text{ and }\left\Vert g\right\Vert \leq\alpha\right\} \] exists in $E$ for every $f\in E^{+}$ and $\alpha\in\left( 0,\infty\right) $, then $E$ is isomorphic, as a normed Riesz space, to a truncated Riesz subspace of $\ell^{\infty}\left( X\right) $ for some nonvoid set $X$.
Here, $\ell^{\infty}\left( X\right) $ denotes the Riesz space of all bounded real-valued functions on $X$. As we shall see in our last result, our main theorem allows us to make the conclusion by Fremlin more precise by showing that, actually, $E$ is uniformly dense in a $C_{0}\left( X\right) $-type space. \begin{corollary} Let $E$ be a Riesz space with a Fatou $M$-norm such that the supremum \[ \sup\left\{ g\in E:0\leq g\leq f\text{ and }\left\Vert g\right\Vert \leq1\right\} \] exists in $E$ for every $f\in E^{+}$. Then $E$ is isomorphic, as a normed Riesz space, to a uniformly dense truncated Riesz subspace of $C_{0}\left( \eta E\right) $. \end{corollary} \begin{proof} Put \[ f^{\ast}=\sup\left\{ g\in E:0\leq g\leq f\text{ and }\left\Vert g\right\Vert \leq1\right\} \text{, for all }f\in E^{+}. \] It turns out that this equality gives rise to a truncation on $E$. Indeed, let $f,g\in E^{+}$ and put \[ U=\left\{ f\wedge h:0\leq h\leq g\text{ and }\left\Vert h\right\Vert \leq1\right\} \] and \[ V=\left\{ g\wedge h:0\leq h\leq f\text{ and }\left\Vert h\right\Vert \leq1\right\} . \] If $k$ is an upper bound of $U$ and $h\in\left[ 0,f\right] $ with $\left\Vert h\right\Vert \leq1$, then \[ 0\leq h\wedge g\leq g\text{\quad and\quad}\left\Vert h\wedge g\right\Vert \leq\left\Vert h\right\Vert \leq1. \] Hence, \[ k\geq\left( h\wedge g\right) \wedge f=\left( h\wedge f\right) \wedge g=h\wedge g. \] This means that $k$ is an upper bound of $V$. We derive quickly that $U$ and $V$ have the same upper bounds, and so the same supremum (which exists in $E$). Therefore, $f^{\ast}\wedge g=f\wedge g^{\ast}$, meaning that $\ast$ is a truncation on $E$. We claim that $E$ has no nonzero $^{\ast}$-infinitely small elements. To this end, take $f\in E^{+}$ with $\left( \varepsilon f\right) ^{\ast}=\varepsilon f$ for all $\varepsilon\in\left( 0,\infty\right) $.
Since $E$ has a Fatou $M$-norm, we can write, for every $\varepsilon\in\left( 0,\infty\right) $, \[ \varepsilon\left\Vert f\right\Vert =\left\Vert \left( \varepsilon f\right) ^{\ast}\right\Vert =\sup\left\{ \left\Vert g\right\Vert :0\leq g\leq \varepsilon f\text{ and }\left\Vert g\right\Vert \leq1\right\} \leq1, \] so $f=0$, as desired. Now, we prove that the truncation $\ast$ is strong. If $f\in E^{+}$ with $f\neq0$ then \begin{align*} \left( \frac{f}{\left\Vert f\right\Vert }\right) ^{\ast} & =\sup\left\{ h:0\leq h\leq\frac{f}{\left\Vert f\right\Vert }\text{ and }\left\Vert h\right\Vert \leq1\right\} \\ & =\sup\left\{ h:0\leq h\leq\frac{f}{\left\Vert f\right\Vert }\right\} =\frac{f}{\left\Vert f\right\Vert }. \end{align*} This gives the required fact. Consequently, using Theorem \ref{main}, we infer that $E$ is isomorphic as a truncated Riesz space to a uniformly dense truncated Riesz subspace of $C_{0}\left( \eta E\right) $. In the next lines, we shall identify $E$ with its isomorphic copy in $C_{0}\left( \eta E\right) $. So, it remains to prove that the norm of $E$ coincides with the uniform norm. To see this, it suffices to show that \[ B_{E}=\left\{ f\in E:\left\Vert f\right\Vert \leq1\right\} =B_{\infty}=\left\{ f\in E:\left\Vert f\right\Vert _{\infty}\leq1\right\} . \] Pick $0\leq f\in B_{E}$ and observe that \[ f=\sup\left\{ g\in E:0\leq g\leq f\text{ and }g\in B_{E}\right\} =f^{\ast}. \] However, \[ \left\Vert f^{\ast}\right\Vert _{\infty}=\left\Vert 1\wedge f\right\Vert _{\infty}\leq1. \] It follows that $f\in B_{\infty}$, from which we derive that $B_{\infty}$ contains $B_{E}$. Conversely, if $0\leq f\in B_{\infty}$ then, by Remark \ref{Rq}, $f\in P_{\ast}\left( E\right) $. But then \[ f=\sup\left\{ g\in E:0\leq g\leq f\text{ and }g\in B_{E}\right\} . \] This yields easily that $\left\Vert f\right\Vert \leq1$, completing the proof of the corollary. \end{proof} \medskip \noindent\textbf{Acknowledgment. 
}This research is supported by Research Laboratory LATAO Grant LR11ES12.
\section{Introduction} Discrete time quantum walks, or quantum walks (QWs) in short, are space-time discrete unitary dynamical systems which first appeared as the space-time discretization of the $1+1$ dimensional Dirac equation related to the Feynman path integral \cite{FH10Book}. From the end of the 20th century, QWs started to attract the interest of researchers in many fields of mathematics and physics; see \cite{Konno08LNM,Portugal13Book} and references therein. Also, for interesting discussions about the generalization of quantum walks, see \cite{SS1802.01837, Sako1902.02479}. The purpose of this note is to combine the study of QWs as dispersive equations \cite{MSSSSdis,MSSSS18DCDS,MSSSS19JPC} and the study of the continuous limit of QWs; see \cite{MS20RMP} and references therein. In particular, we study the Strichartz estimate of QWs in the continuous limit setting and obtain ``uniform'' Strichartz estimates, where ``uniform'' means that the inequality is independent of the lattice size $\delta>0$. To state our result precisely, we prepare several notations. First, the Pauli matrices are given by \begin{align}\label{pauli} \sigma_0:=\begin{pmatrix}1&0\\0&1\end{pmatrix},\ \sigma_1:=\begin{pmatrix}0 &1\\1&0\end{pmatrix},\ \sigma_2:=\begin{pmatrix}0&-\im \\ \im &0\end{pmatrix},\ \sigma_3:=\begin{pmatrix}1 &0\\0&-1\end{pmatrix}. \end{align} The ``shift operator'' $\mathcal{S}_\delta$ is given by \begin{align*} \(\mathcal{S}_\delta u\)(x):=\begin{pmatrix} u_1(x-\delta)\\ u_2(x+\delta) \end{pmatrix}, \ \text{for}\ u=\begin{pmatrix} u_1\\ u_2 \end{pmatrix}, \end{align*} and the ``coin operator'' $\mathcal{C}_\delta $ is given by \begin{align*} \(\mathcal{C}_\delta u\)(x) := \begin{pmatrix} \cos (\delta m ) & -\im \sin(\delta m )\\ -\im \sin(\delta m ) & \cos (\delta m ) \end{pmatrix} u(x) = e^{-\im \delta m \sigma_1}u(x), \end{align*} where $ m \in \R$ is a constant and $u={}^t\!(u_1\ u_2):\delta \Z\to \C^2$. 
Given the shift and coin operators, we define \begin{align}\label{def:qwg} \mathcal{U}_\delta&:=\mathcal{S}_\delta\mathcal{C}_\delta,\quad U_\delta(t)u:=\mathcal{U}_\delta^{t/\delta}u,\ t\in \delta\Z. \end{align} \begin{remark} We can consider more general families of coin operators. However, for simplicity of exposition, we chose to consider only the above family of coin operators. \end{remark} \begin{remark} The definition of $U_\delta(t)$, in particular the reason we take $t\in \delta\Z$ and $U_\delta(t):=\mathcal{U}_\delta^{t/\delta}$ instead of taking $t\in \Z$ and $U_\delta(t):=\mathcal{U}_\delta^{t}$, is inspired by the continuous limit. Indeed, since formally $\mathcal{S}_\delta \sim 1-\delta \sigma_3 \partial_x$ and $\mathcal{C}_\delta \sim 1 -\im \delta m \sigma_1$, we see $\mathcal{U}_\delta\sim 1-\delta(\sigma_3 \partial_x + \im m \sigma_1)$. Thus, we have \begin{align*} \im \partial_t u(t)\sim \im \delta^{-1}(u(t+\delta)-u(t))=\im\delta^{-1}(\mathcal{U}_\delta u(t)-u(t))\sim -\im \sigma_3 \partial_x u(t)+ m \sigma_1 u(t), \end{align*} which is a $1+1$ dimensional (free) Dirac equation up to $O(\delta)$ error. \end{remark} We further define several notations to state our result. \begin{itemize} \item $\<a\>:=(1+|a|^2)^{1/2}$. \item For $u={}^t\!(u_1\ u_2)\in \C^2$, we set $\|u\|_{\C^2}:=\(|u_1|^2+|u_2|^2\)^{1/2}$. \item For $p,q\in [1,\infty]$, we set \begin{align*} l^p_\delta&:=\left\{u:\delta\Z\to \C^2\ |\ \|u\|_{l^p_\delta}:=\(\delta \sum_{x\in \delta \Z} \|u(x)\|_{\C^2}^p\)^{1/p}<\infty\right\},\\ l^p_\delta l^q_\delta&:=\left\{f:\delta \Z\times \delta\Z \to \C^2\ |\ \|f\|_{l^p_\delta l^q_\delta}:=\(\sum_{t\in \delta \Z} \delta \(\sum_{x\in \delta \Z}\delta \|f(t,x)\|_{\C^2}^q\)^{p/q}\)^{1/p}<\infty \right\}, \end{align*} with the standard modification for the cases $p=\infty$ or $q=\infty$. 
\item We define the Fourier transform $\mathcal{F}_\delta $ and the inverse Fourier transform by \begin{align*} \mathcal{F}_\delta u = \frac{\delta}{\sqrt{2\pi}}\sum_{x\in \delta \Z}e^{-\im x \xi}u(x),\quad \mathcal{F}_\delta^{-1} v = \frac{1}{\sqrt{2\pi}}\int_{-\pi/\delta}^{\pi/\delta}e^{\im x \xi}v(\xi)\,d\xi. \end{align*} \item Given $p:[-\pi/\delta,\pi/\delta] \to \R$, we define $p(\mathcal{D}_\delta)$ by $$ \(p(\mathcal{D}_\delta)u\)(x):=\mathcal{F}_\delta^{-1} \(p(\xi) \(\mathcal{F}_\delta u\)(\xi)\)(x). $$ \item We say $(p,q)\in [2,\infty]\times [2,\infty]$ is an admissible pair if $3p^{-1}+q^{-1}=2^{-1}$. We say $(p,q)\in [2,\infty]\times [2,\infty]$ is a continuous admissible pair if $2p^{-1}+q^{-1}=2^{-1}$. \item For $p\in [1,\infty]$, we denote by $p'$ the H\"older conjugate of $p$, i.e.\ $\frac{1}{p}+\frac{1}{p'}=1$. \item We write $a\lesssim b$ to mean $a\leq C b$, where $C>0$ is a constant independent of the quantities under consideration. In particular, in this note, the implicit constant never depends on $\delta$. If $a\lesssim b$ and $b\lesssim a$, then we write $a\sim b$. \end{itemize} We are now in the position to state our main result. \begin{theorem}\label{thm:main} Let $\delta \in (0,1]$ and $u: \delta \Z \to \C^2$, $f:\delta\Z\times \delta \Z \to \C^2$. Let $(p,q)$ and $(\tilde{p},\tilde{q})$ be admissible pairs. Then, we have \begin{align} \| U_\delta(t)u\|_{l^p_\delta l^q_\delta} &\lesssim \| |\mathcal{D}_\delta|^{1/p}\<\mathcal{D}_\delta\>^{3/p}u\|_{l^2_\delta},\label{eq:homest}\\ \| \sum_{s\in [0,t]\cap \delta\Z}U_\delta(t-s) f \|_{l^p_\delta l^q_\delta} & \lesssim \| |\mathcal{D}_\delta|^{1/p+1/\tilde{p}}\<\mathcal{D}_\delta\>^{3/p+3/\tilde{p}}f\|_{l^{\tilde{p}'}_\delta l^{\tilde{q}'}_\delta} \label{eq:inhomest}. \end{align} \end{theorem} We compare our result with the known results by Hong-Yang \cite{HY19DCDS}, who proved similar results for discrete Schr\"odinger equations and discrete Klein-Gordon equations. 
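As a quick numerical sanity check of the setup above (independent of the estimates that follow), the unitarity of $U_\delta(t)$, which later enters as the trivial $l^2_\delta$ conservation, can be verified directly. The following Python sketch is our own illustration, not code from this note: it implements one step $\mathcal{U}_\delta=\mathcal{S}_\delta\mathcal{C}_\delta$ on a periodic lattice (a finite stand-in for $\delta\Z$) with an arbitrarily chosen Gaussian initial datum.

```python
import numpy as np

def qw_step(u, delta, m):
    """One quantum-walk step U_delta = S_delta C_delta.

    u is an (N, 2) complex array; u[j] is the C^2 value at x = j * delta.
    """
    # Coin: multiply each C^2 value by exp(-i * delta * m * sigma_1).
    c, s = np.cos(delta * m), np.sin(delta * m)
    coin = np.array([[c, -1j * s], [-1j * s, c]])
    v = u @ coin.T
    # Shift: u_1(x) <- u_1(x - delta), u_2(x) <- u_2(x + delta)
    # (periodic boundary conditions stand in for the infinite lattice).
    return np.stack([np.roll(v[:, 0], 1), np.roll(v[:, 1], -1)], axis=1)

def l2_norm(u, delta):
    # ||u||_{l^2_delta} = (delta * sum_x ||u(x)||_{C^2}^2)^{1/2}
    return np.sqrt(delta * np.sum(np.abs(u) ** 2))

# Arbitrarily chosen parameters and Gaussian initial datum (illustrative only).
delta, m, N = 0.1, 1.0, 256
x = delta * (np.arange(N) - N // 2)
u = np.zeros((N, 2), dtype=complex)
u[:, 0] = np.exp(-x ** 2)

norm0 = l2_norm(u, delta)
for _ in range(50):          # evolve to time t = 50 * delta
    u = qw_step(u, delta, m)
assert abs(l2_norm(u, delta) - norm0) < 1e-9   # l^2_delta norm is conserved
```

Since the shift is a permutation of lattice sites and the coin is a unitary $2\times2$ matrix applied pointwise, the conservation holds exactly up to floating-point rounding.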
Here, we briefly recall the results for discrete Schr\"odinger equations. First, the discrete Schr\"odinger equation on the 1D lattice $\delta\Z$ is given by \begin{align*} \im \partial_t u(x) = -\Delta_\delta u(x):=\delta^{-2}(2u(x)-u(x-\delta)-u(x+\delta)),\ u:\R\times \delta\Z\to \C. \end{align*} For fixed $\delta$, the Strichartz estimate \begin{align*} \|e^{\im t \Delta_\delta} u\|_{L^p l^q_\delta}\leq C_\delta \|u_0\|_{l^2_\delta},\ (p,q)\text{ is admissible}, \end{align*} was proved by Stefanov-Kevrekidis \cite{SK05N}, where the constant $C_\delta=C\delta^{-1/p}$ blows up as $\delta\to 0$. In general, one expects that solutions of a discretized equation converge to the solutions of the original equation. This is true for the Schr\"odinger equation on a fixed time interval. However, as we saw above, the Strichartz estimate for the discrete Schr\"odinger equation, which controls the global behavior of solutions, does not converge to the Strichartz estimate of the continuous Schr\"odinger equation. This is because of the lattice resonance, which occurs because the dispersion relation of a lattice equation always has degeneracies. Indeed, since the dispersion relation for an equation on $\delta \Z$ is a function $p_\delta(\xi)$ on a torus, if $p_\delta$ is smooth, then there exists a point such that $p_\delta''(\xi)=0$. Also, we can observe the difference between the discrete and continuous Strichartz estimates from the difference of the admissible pairs. Hong-Yang observed that the difference of scaling can be absorbed by the fractional difference operator $|\mathcal{D}_\delta|^{1/p}$ and obtained the result: \begin{align*} \|e^{\im t \Delta_\delta} u\|_{L^p l^q_\delta}\lesssim \||\mathcal{D}_\delta|^{1/p}u_0\|_{l^2_\delta},\ (p,q)\text{ is admissible}. \end{align*} This estimate is compatible with the estimate for the continuous Schr\"odinger equation. Hong-Yang also proved similar estimates for discrete Klein-Gordon equations. We now discuss QWs. 
In \cite{MSSSS18DCDS}, the Strichartz estimate for QWs was given by \begin{align}\label{eq:Stz1} \|U_\delta(t)u\|_{l^p_\delta l^q_\delta} \lesssim C_\delta \|u\|_{l^2_\delta},\quad \ (p,q)\text{ is admissible}, \end{align} where $C_\delta\to \infty$ as $\delta\to 0$. Since QWs are space-time discrete Dirac equations, we compare \eqref{eq:Stz1} with the Strichartz estimates of the spacetime continuous Dirac equation: \begin{align*} \|e^{-\im t H}u\|_{L^pL^q}\lesssim \| \<\partial_x \>^{3/p}u\|_{L^2},\quad (p,q)\text{ is continuous admissible}, \end{align*} where $H=-\im\sigma_3\partial_x + m \sigma_1$ (for the proof, see \cite{NS11Book} and references therein). To make $(p,q)$ admissible, using the Sobolev inequality we have \begin{align*} \|e^{-\im t H}u\|_{L^pL^q}\lesssim \| |\partial_x|^{1/p}\<\partial_x \>^{3/p}u\|_{L^2},\quad \ (p,q)\text{ is admissible}. \end{align*} We now see that estimate \eqref{eq:homest} is compatible with the above estimate for the Dirac equation. For the inhomogeneous estimate \eqref{eq:inhomest} there is also a similar correspondence. In the next section, we prove Theorem \ref{thm:main} following Hong-Yang \cite{HY19DCDS}. There are two differences between the discrete Schr\"odinger equation treated in \cite{HY19DCDS} and QWs. The first is that discrete Schr\"odinger/Klein-Gordon equations are discretized only in space, while QWs are discretized in spacetime. The second is that QWs have a more complicated dispersion relation than discrete Schr\"odinger/Klein-Gordon equations. For the first problem we follow \cite{MSSSS18DCDS}. For the second problem, we carefully estimate the dispersion relation in the low and high frequency regions separately to get the optimal result. \section{Proof of Theorem \ref{thm:main}} Let $\phi \in C_0^\infty$ satisfy $\chi_{[-1,1]}(x)\leq \phi(x)\leq \chi_{[-2,2]}(x)$ for all $x\in\R$, where $\chi_A$ is the characteristic function of $A\subset \R$. 
For $\lambda>0$, we set $\psi_{\delta,\lambda}\in C^\infty([-\pi/\delta,\pi/\delta])$ by $\psi_{\delta,\lambda}(\xi):=\psi(\xi/\lambda)$, where $\psi(x):=\phi(x)-\phi(2x)$. We note that since $\mathrm{supp}\psi_{\delta,\lambda} \subset \([-2\lambda,-\lambda/2]\cup [\lambda/2,2\lambda]\)\cap [-\pi/\delta,\pi/\delta]$, we have $\psi_{\delta,\lambda}= 0$ a.e.\ if $\lambda \geq 2\pi/\delta$, and for $\lambda < 2\pi/\delta$, we have \begin{align*} \xi \in \mathrm{supp}\psi_{\delta,\lambda}\ \Rightarrow \ |\xi| \sim \lambda. \end{align*} Using this $\psi_{\delta,\lambda}$, we define the Littlewood-Paley projection operators by \begin{align}\label{eq:def:LW} P_\lambda:=P_{\delta,\lambda}:=\psi_{\delta,\lambda}(\mathcal{D}_\delta). \end{align} We further set $\tilde \psi \in C_0^\infty$ to satisfy $0\not\in \mathrm{supp}\tilde{\psi}$ and $\tilde \psi(\xi)=1$ for $\xi \in \mathrm{supp}\psi$, and define $\tilde \psi_{\delta,\lambda}$ and $\tilde P_\lambda$ by $\tilde{\psi}_{\delta,\lambda}(\xi):=\tilde{\psi}(\xi/\lambda)$ and as in \eqref{eq:def:LW}. In particular, we have $P_\lambda = P_\lambda \tilde{P}_\lambda$. The main ingredient of the proof of Theorem \ref{thm:main} is the following proposition. \begin{proposition}\label{prop:main} Let $\delta\in (0,1]$ and $\lambda\in (0,2\pi/\delta )$. Then, we have \begin{align}\label{eq:mainest} \|U_\delta(t)P_\lambda u\|_{l^\infty_\delta} \lesssim \lambda^{1/3} \<\lambda\> t^{-1/3}\|u\|_{l^1_\delta}. 
\end{align} \end{proposition} \begin{proof} As in \cite{MSSSS18DCDS}, by Fourier transformation, we have \begin{align} U_\delta(t) P_\lambda u(x)=\sum_{\mathfrak{s}\in \{\pm\}}\(I_{\delta,\lambda,\mathfrak{s}}*u\)(x),\label{eq:Uconv} \end{align} where $I*u(x):=\delta\sum_{y\in \delta\Z} I(x-y)u(y)$ and \begin{align} I_{\delta,\lambda ,\mathfrak{s}}&:=\frac{1}{2\pi}\int_{-\pi/\delta}^{\pi/\delta}e^{\im\(\mathfrak{s} p_\delta(\xi) t + x\xi\)}Q_{\delta,\mathfrak{s}}(\xi)\psi_{\delta,\lambda }(\xi)\,d\xi,\label{eq:I}\\ p_\delta(\xi)&:=\delta^{-1}\mathrm{arccos}\(\cos(\delta m )\cos(\delta \xi)\), \end{align} where $Q_{\delta,\pm}$ are $2\times 2$ matrices, which can be computed explicitly. \begin{remark} Notice that $ p_\delta(\xi)=\sqrt{ m ^2+\xi^2}+O(\delta) $ for fixed $\xi$, which tells us that QWs have a dispersion relation similar to that of Dirac equations (and Klein-Gordon equations) in the continuous limit. \end{remark} For $\mathfrak{s}=\pm$ and $i,j=1,2$, since $Q_{\delta,\pm}$ depends on $\xi$ only through $\delta\xi$, by $\|f(\delta \xi)\|_{L^\infty([-\pi/\delta,\pi/\delta])}=\|f\|_{L^\infty([-\pi,\pi])}$ and $\| \(f(\delta \xi)\)'\|_{L^1([-\pi/\delta,\pi/\delta])}=\|\delta f'(\delta \xi)\|_{L^1([-\pi/\delta,\pi/\delta])}=\|f'\|_{L^1([-\pi,\pi])} $, we see that \begin{align}\label{eq:estQ} \| Q_{\delta,\mathfrak{s},i,j}\|_{L^\infty[-\pi/\delta,\pi/\delta]}+\| Q_{\delta,\mathfrak{s},i,j}'\|_{L^1[-\pi/\delta,\pi/\delta]}\lesssim 1, \end{align} where $Q_{\delta,\mathfrak{s},i,j}$ is the $(i,j)$ component of $Q_{\delta,\mathfrak{s}}$. 
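The expansion in the remark, $p_\delta(\xi)=\sqrt{m^2+\xi^2}+O(\delta)$ for fixed $\xi$, is easy to sanity-check numerically. The sketch below is our own illustration (the value of $m$, the frequency window, and the sequence of $\delta$'s are arbitrary choices): it evaluates $p_\delta$ and compares it with the Dirac dispersion relation as $\delta$ decreases.

```python
import numpy as np

def p(delta, m, xi):
    # p_delta(xi) = arccos(cos(delta*m) * cos(delta*xi)) / delta
    return np.arccos(np.cos(delta * m) * np.cos(delta * xi)) / delta

m = 1.0
xi = np.linspace(-3.0, 3.0, 61)          # fixed frequency window
dirac = np.sqrt(m ** 2 + xi ** 2)        # continuous Dirac dispersion relation

# Maximum deviation over the window for a decreasing sequence of delta's.
err = [np.max(np.abs(p(d, m, xi) - dirac)) for d in (0.1, 0.05, 0.025)]
assert err[0] > err[1] > err[2]          # the error shrinks as delta -> 0
assert err[2] < 1e-2
```

This only probes the pointwise continuous limit at fixed $\xi$; the whole point of the case analysis that follows is that near $\xi=\pm\pi/(2\delta)$ and $\pm\pi/\delta$ the lattice dispersion relation behaves very differently from the continuous one.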
Since we intend to use the van der Corput lemma, we record the derivatives of $p_\delta$: \begin{align} p_\delta'(\xi)&=\frac{\cos(\delta m )\sin(\delta \xi)}{(1-\cos^2(\delta m )\cos^2(\delta \xi))^{1/2}}, \label{eq:p'}\\ p_\delta''(\xi)&=\frac{\delta\cos(\delta m )\sin^2(\delta m )\cos(\delta \xi)}{(1-\cos^2(\delta m )\cos^2(\delta \xi))^{3/2}}, \label{eq:p''}\\ p_\delta'''(\xi)&=\frac{\delta^2\cos(\delta m )\sin^2(\delta m )(1+2\cos^2(\delta m )\cos^2(\delta \xi))\sin(\delta \xi)}{(1-\cos^2(\delta m )\cos^2(\delta \xi))^{5/2}}. \label{eq:p'''} \end{align} To prove \eqref{eq:mainest}, we consider the two cases $\lambda \leq 1/(2\delta)$ and $1/(2\delta)<\lambda$. First, if $\lambda \leq 1/(2\delta)$, then by the elementary inequality $\sqrt{1-a^2}\leq \cos a$ for $|a|\leq 1$, we have \begin{align*} 1-\cos^2(\delta m )\cos^2(\delta \xi) \leq \delta^2 m ^2+\delta^2\xi^2\lesssim \delta^2\<\lambda\>^2, \end{align*} because $|\delta\xi|\leq 1$ for $ \xi \in \mathrm{supp}\psi_{\delta,\lambda}$ due to the constraint on $\lambda$. Thus, we have $p_\delta''(\xi)\gtrsim \<\lambda\>^{-3}$ for $\xi \in \mathrm{supp}\psi_{\delta,\lambda }$, and by Young's convolution inequality, the van der Corput lemma and the estimate \eqref{eq:estQ} combined with the expressions \eqref{eq:Uconv} and \eqref{eq:I}, we have \begin{align}\label{eq:est:first1} \|U_\delta(t)P_\lambda u\|_{l_\delta^\infty}\lesssim \<\lambda\>^{3/2}t^{-1/2}\|u\|_{l^1_\delta}. \end{align} Combining \eqref{eq:est:first1} with the trivial $l^2$ conservation $ \|U_\delta(t)P_\lambda u\|_{l_\delta^2}=\|P_\lambda u\|_{l^2_\delta}\lesssim \|u\|_{l^2_\delta}, $ we have by Riesz-Thorin interpolation (see e.g. \cite{BLBook}) \begin{align}\label{eq:qq'est} \|U_\delta(t)P_\lambda u\|_{l_\delta^q}\lesssim \<\lambda\>^{3(1/2-1/q)}t^{-(1/2-1/q)}\|u\|_{l_\delta^{q'}}, \end{align} for $q\geq 2$. 
Thus, by Bernstein's inequality (see Lemma 2.3 of \cite{HY19DCDS}) and \eqref{eq:qq'est}, we have \begin{align} \|U_\delta(t)P_\lambda u\|_{l^\infty_\delta}&=\|\tilde{P}_\lambda U_\delta(t)P_\lambda \tilde P_\lambda u\|_{l^\infty_\delta}\lesssim \lambda^{1/6}\|U_\delta(t)P_\lambda \tilde{P}_\lambda u\|_{l^6_\delta}\lesssim \lambda^{1/6}\<\lambda\> t^{-1/3}\|\tilde{P}_\lambda u\|_{l_\delta^{6/5}} \nonumber \\& \lesssim \lambda^{1/3}\<\lambda\> t^{-1/3}\|u\|_{l^1_{\delta}}.\label{eq:rep} \end{align} This gives \eqref{eq:mainest} in the first case. Next, we consider the case $1/(2\delta)< \lambda <2\pi/\delta$. In this case, since $\lambda\sim \delta^{-1}$, it suffices to prove \eqref{eq:mainest} with $\lambda$ replaced by $\delta^{-1}$. Since $\mathrm{supp}\, \psi_{\delta,\lambda}$ can contain both $\pm \pi/(2\delta)$ (where $p_\delta''$ degenerates) and $\pm \pi/\delta$ (where $p_\delta'''$ degenerates), we split the integral \eqref{eq:I} by a smooth cutoff into two regions, each containing only one of $\pm \pi/\delta$ and $\pm \pi/(2\delta)$. In the region which does not contain $\pm \pi/\delta$ (and which, because of the constraint on $\lambda$, does not contain $0$ either), we have $p_\delta'''(\xi)\gtrsim \delta^4$. Thus, we have \begin{align}\label{eq:est:second1} \|U_\delta(t)P_\lambda u\|_{l_\delta^\infty}\lesssim \(\delta^4 t\)^{-1/3}\|u\|_{l^1_\delta}, \end{align} by the van der Corput lemma, which gives us \eqref{eq:mainest} in this case. In the region which contains $\pm \pi/\delta$ but not $\pm\pi/(2\delta)$, we have $p_\delta''(\xi)\gtrsim \delta^3$, so again by the van der Corput lemma, we have \begin{align*} \|U_\delta(t)P_\lambda u\|_{l_\delta^\infty}\lesssim \(\delta^3 t\)^{-1/2}\|u\|_{l^1_\delta}. \end{align*} Repeating the argument of \eqref{eq:qq'est} and \eqref{eq:rep} with $\<\lambda\>$ replaced by $\delta^{-1}$, we also obtain \eqref{eq:est:second1}. This finishes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:main}] By the standard $T T^*$ argument (see e.g. 
\cite{KT98AJM} for general cases and \cite{MSSSS18DCDS} for the discrete setting), we obtain \begin{align*} \| U_\delta(t) P_\lambda u\|_{l^p_\delta l^q_\delta}\lesssim \<\lambda\>^{3/p} \lambda^{1/p}\|u\|_{l_\delta^2},\quad \| \sum_{s\in [0,t]\cap \delta\Z} U_\delta(t-s)P_\lambda f(s) \|_{l^p_\delta l^q_\delta} \lesssim \<\lambda\>^{3/p +3/\tilde{p}}\lambda^{1/p+1/\tilde{p}}\|f\|_{l_\delta^{\tilde{p}'}l_\delta^{\tilde{q}'}}, \end{align*} for any admissible pairs $(p,q)$ and $(\tilde{p},\tilde{q})$. Here, we note that the implicit constants are independent of $\lambda$ and $\delta$. Finally, arguing as in the proof of Theorem 1.3 of \cite{HY19DCDS}, we obtain the desired estimates. \end{proof} \section*{Acknowledgments} M.M. was supported by the JSPS KAKENHI Grant Numbers 19K03579, G19KK0066A, JP17H02851 and JP17H02853.
\section{Introduction} Elastic hadron scattering in the Regge regime (high center-of-mass energy and small scattering angle) has long been understood to have particularly interesting features. Written in terms of the Mandelstam variables $s$ and $t$, the scattering amplitude scales as $s^{\alpha(t)}$, where $\alpha(t)$ is a linear function known as a Regge trajectory. For positive values of $t$, the amplitudes have regularly spaced poles associated to mesons of masses $m^2 = t$ and spins $J = \alpha(m^2)$. The interpretation of this is that the full scattering amplitude can be thought of as an infinite sum of amplitudes associated with the exchanges of the mesons lying along the trajectory \cite{earlyRegge}. Analysis of this type of behavior in baryon and meson scattering is tied to the earliest work in string theory: the Veneziano amplitude, originally written down to model pion scattering, was later shown to arise naturally from the scattering of open strings. In general, string amplitudes are known to have the same scaling behaviors in the Regge regime, and the same dependence on a linear Regge trajectory. This Regge trajectory also represents the linear relationship between mass squared and spin for the string states. However, string amplitudes can only (thus far) be calculated for simple cases such as 26-dimensional flat-space bosonic string theory. For these cases, the string states and Regge trajectory parameters do not correspond well to physical mesons. More than a decade ago now, the idea of a relationship between QCD and string theory gained new traction with the proposal that QCD might be dual to string theory living in a curved, 5-dimensional space. In this scenario we have mesons mapping onto open strings, glueballs mapping onto closed strings, and baryons mapping onto D-branes. 
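The pole structure described above can be made concrete with a short sketch. This is our own illustration, not a calculation from this paper; the intercept and slope below are round numbers of the order of the commonly quoted $\rho$-meson trajectory, used only to show that a linear $\alpha(t)$ places evenly spaced poles in $m^2$.

```python
# Hypothetical linear Regge trajectory alpha(t) = alpha0 + alphap * t.
# Illustrative values roughly of the size of the rho-meson trajectory;
# they are not parameters taken from this paper.
alpha0 = 0.5    # intercept
alphap = 0.9    # slope, in GeV^-2

def spin(m2):
    """Spin J = alpha(m^2) of the state sitting at mass-squared m2."""
    return alpha0 + alphap * m2

def mass_squared(J):
    """Invert the trajectory: pole position t = m^2 with alpha(m^2) = J."""
    return (J - alpha0) / alphap

# Poles for successive spins are separated by Delta m^2 = 1 / alphap,
# i.e. the poles in the amplitude are regularly spaced in t = m^2.
m2 = [mass_squared(J) for J in (1, 2, 3)]
spacings = [b - a for a, b in zip(m2, m2[1:])]
assert all(abs(s - 1 / alphap) < 1e-12 for s in spacings)
```

The same two numbers control both sides of the duality discussed below: the amplitude's $s^{\alpha(t)}$ growth at fixed $t<0$ and the mass-spin spectrum at $t>0$.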
Various toy models for such a duality have been proposed, including the Sakai-Sugimoto model in which stacks of D-branes are placed in a background space to generate curvature, as well as soft-wall and hard-wall models \cite{Sakai:2004cn}. Although such models have limitations, they typically agree reasonably well with masses and coupling constants coming from both experimental results and lattice QCD calculations. At very high center-of-mass energies, proton-proton scattering and proton-antiproton scattering data suggest a trajectory not consistent with that of any known mesons: possessing a higher intercept and consisting of particles of even spin and vacuum quantum numbers (so that proton-proton data is identical to proton-antiproton data). We will adopt the common interpretation of this, that these processes are mediated by a single trajectory of glueballs known as the Pomeron \cite{CGM}. However, direct experimental confirmation of the glueballs involved does not exist, though they are observed in lattice QCD calculations \cite{lattice}. In addition, it is known that at the highest energies, total cross sections must obey the Froissart-Martin bound and grow no faster than $(\ln s)^2$, implying something more complicated than single Pomeron exchange. Whether scattering data existing today (up to energies reached at the LHC) is at high enough energies to be affected by this is still an open question, though we will assume that it is not. Interpretation of the Pomeron trajectory is complicated by the fact that it has a significantly different behavior in the hard scattering regime $|t| > \Lambda^2_{\mathrm{QCD}}$, where it ought to be associated to a sum over perturbative QCD processes involving gluon exchange, than it does in the soft scattering regime $0 < |t| < \Lambda^2_{\mathrm{QCD}}$, where the exchange of bound glueball states makes more sense.
How a single trajectory could have these different behaviors can be understood within the string dual picture: in \cite{bpstetc}, it was shown that the radius of curvature of the 5th dimension generates an energy scale that can be mapped onto $\Lambda_{\mathrm{QCD}}$, such that the closed string trajectory would have different behaviors for low and high energies as compared to this scale. Work has also been done analyzing the structure of the Pomeron trajectory in soft- or hard-wall models for holographic QCD, and in analyzing the Pomeron trajectory in backgrounds with an arbitrary number of dimensions \cite{topdown}. In this paper we will restrict our attention to the soft Pomeron regime, and assume the trajectory is linear. Building a string-dual model to explain proton-proton scattering via Pomeron exchange in the Regge regime is a project of real interest in this context. However, calculations within toy dual models are generally restricted to the supergravity limit, corresponding to low energy QCD processes, so additional tools are necessary to extend the usefulness of AdS/QCD outside this regime. In \cite{DHM}, it was suggested that the main structures of string scattering amplitudes in flat space (in this case the Virasoro-Shapiro amplitude) would also apply to the scattering amplitudes in weakly curved spacetimes, but with the defining parameters of the Regge trajectory modified by the curvature. Furthermore, it was proposed that the coupling constants appearing in these amplitudes would be the same as those calculable in the low-energy limit. This leads to a hybrid approach for modeling scattering processes in the Regge regime: coupling constants are determined in the supergravity limit using a toy model, and low energy scattering cross sections are determined using them.
Then, the propagators in these cross sections are ``Reggeized'' using a modified version of a string scattering amplitude, with the Regge trajectory parameters chosen to agree with the physical trajectory. This basic procedure was extended in \cite{ADHM, ADM, IRS} to apply to central-production processes, with the Reggeization based on 5-string amplitudes. In general, this method agrees reasonably well with experimental results in some respects but has significant discrepancies in others \cite{ADM}. It is possible these discrepancies arise due to the limitations of using a toy dual model to generate the low energy coupling constants; recent work has suggested for example that the Sakai-Sugimoto model systematically underestimates coupling constant values \cite{low}. However, it is also possible that the Reggeization procedure used is not ideal: perhaps the assumptions made about string amplitudes in a weakly curved background are not correct. In this paper we seek to examine this latter concern by revisiting the Reggeization procedure for elastic proton-proton scattering via Pomeron exchange and attempting to introduce generalizations where possible, while still maintaining the phenomenologically desirable features of the amplitude's behavior. In particular, the Reggeization procedure of \cite{DHM} introduces a parameter $\chi$ that arises from the mass-shell condition for the external particles and that depends on the Pomeron trajectory. However, we will show that our generalizations allow for other values of $\chi$. We will also examine the issue by comparing the generalized model with real data, and allowing $\chi$ to be a fitting parameter. The results of the fitting procedure are ambiguous, but they suggest that the value of $\chi$ used previously may not be the best choice for agreement with the data. 
In section \ref{PomeronReview}, we will review what is known about proton-proton scattering via the exchange of a single Pomeron trajectory, and we will identify the key desirable features to look for in a string amplitude designed to model this process. In section \ref{mod1}, we will review the Reggeization procedure of \cite{DHM}, and introduce generalizations consistent with these features. We will show that these generalizations amount to allowing the value of the mass-shell parameter $\chi$ to change. In section \ref{mod2}, we will show a second, related Reggeization procedure, leading again to a different choice of $\chi$. In section \ref{fitting}, we compare the model to scattering data, allowing the value of $\chi$ to be a fitting parameter. In section \ref{Conclusion}, we offer discussion and conclusions. \section{\label{PomeronReview} Reviewing Pomeron Exchange in Proton-Proton Scattering} In this section we will (briefly) review some of the essentials of Regge theory. There are no new results presented here; our goal is to establish what the phenomenological requirements should be for an amplitude designed to model Pomeron exchange in proton-proton scattering. Consider elastic proton-proton scattering or proton-antiproton scattering expressed in terms of the standard Mandelstam variables $s$ and $t$, in the Regge limit where $s \gg |t|$. We can describe both the differential and total cross sections in terms of an amplitude $\mathcal{A}(s, t)$ as \begin{equation} \sigma_{\mathrm{tot}} = \frac{1}{s} \, \Im\mathcal{A}(s, 0), \hspace{1in} \frac{d\sigma}{dt} = \frac{1}{16\pi s^2}\left|\mathcal{A}(s, t)\right|^2 \, . \end{equation} This scattering process occurs via the exchanges of families of particles that lie along Regge trajectories \cite{earlyRegge}.
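These two relations are easy to exercise numerically. The following sketch is an illustration only (the amplitude is a toy, purely imaginary Regge-pole form; the values of $\sigma_0$ and $b$ are placeholders, not fitted numbers): for $\mathcal{A} = i\sigma_0\, s\, e^{bt}$, the formulas above return $\sigma_{\mathrm{tot}} = \sigma_0$ and the standard optical-point relation $\left.\frac{d\sigma}{dt}\right|_{t=0} = \sigma_{\mathrm{tot}}^2/16\pi$, which holds whenever the amplitude is purely imaginary.

```python
import math

def sigma_tot(amp, s):
    """Optical theorem as quoted above: sigma_tot = Im A(s, 0) / s."""
    return amp(s, 0.0).imag / s

def dsigma_dt(amp, s, t):
    """Differential cross section: d(sigma)/dt = |A(s, t)|^2 / (16 pi s^2)."""
    return abs(amp(s, t)) ** 2 / (16.0 * math.pi * s * s)

# Toy, purely imaginary amplitude (sigma0 and b are placeholder values):
sigma0, b = 40.0, 6.0
amp = lambda s, t: 1j * sigma0 * s * math.exp(b * t)   # defined for t <= 0

s = 1.0e4
st = sigma_tot(amp, s)          # = sigma0 for this toy amplitude
dp0 = dsigma_dt(amp, s, 0.0)    # optical point, = sigma0^2 / (16 pi)
```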
Mesons, baryons, and glueballs form patterns where there is a linear relationship between the spin of a ``family member'' and its mass squared: \begin{equation} J = \alpha_0 + \alpha' m_J^2 \, , \end{equation} so that we can define the linear Regge trajectory function as \begin{equation} \alpha(x) = \alpha_0 + \alpha' x, \hspace{1in} J = \alpha(m_J^2) \, . \end{equation} For example, consider the $\rho$ and $a$ mesons, shown in FIG. \ref{mesontraj}. There is a leading linear trajectory of particles with the smallest mass for a given spin, with $\alpha_0 \approx 0.53$ and $\alpha' \approx 0.88 \ \mathrm{GeV}^{-2}$. \begin{figure} \begin{center} \resizebox{4in}{!}{\includegraphics{mesontraj.pdf}} \caption{\label{mesontraj} A Regge plot of $\rho$ and $a$ mesons, showing the leading trajectory \cite{PDG}.} \end{center} \end{figure} The scattering amplitude, differential cross section, and total cross section for the exchange of this leading trajectory (which will dominate over the ``daughter trajectories'' in the Regge limit) are known to take the generic form \begin{equation} \mathcal{A}(s, t) = \beta(t) \left(\alpha' s\right)^{\alpha(t)}, \hspace{.5in} \frac{d\sigma}{dt} = \frac{\alpha'^2 |\beta(t)|^2}{16\pi}\left(\alpha' s\right)^{2\alpha(t) - 2}, \hspace{.5in} \sigma_{\mathrm{tot}} = \alpha' \, \Im \beta(0) \, \left(\alpha' s\right)^{\alpha_0 - 1} \, . \end{equation} Both the pole structure associated with the exchange of a Regge trajectory of particles and the characteristic Regge limit scaling behavior are consistent with the Veneziano amplitude, which can be written in its original form as \begin{equation} \mathcal{A}^{\mathrm{Ven}}_{\{n, m, p\}}(s, t) = \frac{\Gamma[n - \alpha(s)]\Gamma[m - \alpha(t)]}{\Gamma[p - \alpha(s) - \alpha(t)]} \, .
\end{equation} This (famously) is also a structure that arises in the scattering of open strings in 26-D flat spacetime, where the amplitude for the scattering of four tachyonic scalar string states would take a crossing-symmetric form generated by a sum of three such amplitudes, with $n = m = p = 0$. However, the actual Regge trajectories arising do not compare well with the known masses and spins of vector mesons. (For example, there is no tachyonic vector meson.) At very high center-of-mass energies, there is significant evidence that both proton-proton and proton-antiproton scattering are dominated by what is known as the Pomeron trajectory. Consider for example the total cross sections of proton-proton scattering and proton-antiproton scattering, which are plotted as a function of $s$ in FIG. \ref{totalcross}. We can see that this data is well fit by assuming that two separate Regge trajectories with two separate intercepts contribute to the scattering process. At lower energies the process is controlled by a term that corresponds well with the known parameters of the $\rho-a$ trajectory. Note that both proton-proton and proton-antiproton scattering have such a contribution, but that the scale of this trajectory's contribution to proton-proton scattering is somewhat smaller, since the alternating even and odd spins have partially canceling effects. For very high values of $s$, the leading term suggests a Regge trajectory with intercept around $1.08$, which is larger than that of any known meson trajectory. Furthermore, the contribution from this term is equal for both proton-proton scattering and proton-antiproton scattering. This suggests the trajectory is made up of particles with vacuum quantum numbers, and in particular that only even-spin particles appear on it. We have taken the common point of view that this is associated with a single glueball trajectory: the Pomeron \cite{CGM}. 
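As a quick aside, the quoted $\rho$--$a$ trajectory parameters can be checked against rounded PDG meson masses (the state list and masses below are assumptions of this sketch, not data taken from the paper): with $\alpha_0 \approx 0.53$ and $\alpha' \approx 0.88\ \mathrm{GeV}^{-2}$, the predicted $\alpha(m^2)$ lands within about $0.1$ of each physical spin.

```python
alpha0, alpha_prime = 0.53, 0.88   # GeV^-2, the rho-a values quoted in the text

def alpha_traj(m2):
    """Linear Regge trajectory J = alpha(m^2) = alpha0 + alpha' * m^2."""
    return alpha0 + alpha_prime * m2

# (state, approximate mass in GeV, physical spin) on the leading trajectory;
# masses are rounded PDG values, used here purely for illustration.
states = [("rho(770)", 0.775, 1), ("a_2(1320)", 1.318, 2), ("rho_3(1690)", 1.689, 3)]
predicted = [alpha_traj(m * m) for (_, m, _) in states]
deviations = [abs(p - J) for p, (_, _, J) in zip(predicted, states)]
```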
\begin{figure} \begin{center} \resizebox{4in}{!}{\includegraphics{totalcross.pdf}} \caption{\label{totalcross} The total cross sections for proton-proton scattering and proton-antiproton scattering \cite{PDG}.} \end{center} \end{figure} Based on this evidence, we can say that a phenomenologically valid Reggeization procedure for Pomeron exchange should begin with an amplitude that is fully crossing symmetric, since we know Pomeron exchange treats particles and anti-particles identically. Also, the pole structure should correspond to the exchanges of (even spin) glueballs on the Pomeron trajectory: if we assume \begin{equation} J = \alpha_{g0} + \alpha_{g}'m_{g, J}^2 = \alpha_g(m_{g, J}^2) \, , \end{equation} is the trajectory of glueballs, with lowest lying state having spin $2$, we should have a pole at every $\alpha_g(t) = 2, 4, 6, 8, \dots$, with the structure \begin{equation} \mathcal{A}_{t \approx m_{g, J}^2} \approx \frac{P_{J}(s)}{t - m_{g, J}^2} \, , \end{equation} where $P_J(s)$ is a polynomial of degree $J$ in $s$. Finally, it should have the correct Regge behavior: \begin{equation} \mathcal{A}_{\mathrm{Regge}} \approx \beta(t) \left(\alpha_g's\right)^{\alpha_g(t)} \, . \end{equation} Glueballs ought to be dual to closed strings in some hyperbolically curved background. The Reggeization method of \cite{DHM}, further elaborated on in \cite{ADHM} and \cite{ADM}, takes the approach of staying as close to the flat-space string theory amplitude (the Virasoro-Shapiro amplitude) as possible, making only the minimal modifications necessary to meet all of the requirements above. However, as we will see in the next two sections, other modifications are possible. It is then possible to use fitting to data to determine which modification scheme is actually in the best agreement with reality, and in particular how the original ``minimal modification'' scheme fares in comparison to others. 
\section{\label{mod1} A Variation on the Known Method of Modifying the Virasoro-Shapiro Amplitude} The standard approach to determining the Reggeization procedure for a propagator, first presented in \cite{DHM}, is to begin with the Virasoro-Shapiro amplitude written in a form with manifest crossing symmetry, and then modify both the Regge trajectory parameters and the mass shell condition such that the amplitude has the correct pole structure, before taking the Regge limit. The Reggeization procedure can then be read off from a comparison between the pole expansion and the Regge limit. Here we will review this procedure while allowing for some generalization, in order to examine how unique the result is. To begin, consider closed Bosonic strings in 26-D flat space. Here, the string states lying on the leading Regge trajectory have even spins $J$ and masses $m_J$ satisfying \begin{equation} J = 2 + \frac{\alpha'}{2}m_J^2 = a_c(m_J^2) \, , \end{equation} where $a_c(x)$ is the Regge trajectory. The lowest lying states are scalar tachyons with masses satisfying $m_T^2 = -\frac{4}{\alpha'}$. The tree-level scattering amplitude for four external tachyonic states gives us the well-known Virasoro-Shapiro amplitude, which can be written as \begin{equation} \label{eqn:VSoriginal} \mathcal{A}_{c}(s, t, u) = \frac{C \, \pi \, \Gamma\left(-\frac{a_c(s)}{2}\right) \, \Gamma\left(-\frac{a_c(t)}{2}\right) \, \Gamma\left(-\frac{a_c(u)}{2}\right)}{\Gamma\left(-\frac{a_c(s)}{2} - \frac{a_c(t)}{2}\right) \, \Gamma\left(-\frac{a_c(t)}{2} - \frac{a_c(u)}{2}\right) \, \Gamma\left(-\frac{a_c(s)}{2} - \frac{a_c(u)}{2}\right)} \, , \end{equation} where the mass-shell condition on Mandelstam variables is \begin{equation} \label{eqn:BSMS} s + t + u = -\frac{16}{\alpha'} \, , \hspace{1in} a_c(s) + a_c(t) + a_c(u) = -2 \, . \end{equation} Note that in this form the crossing symmetry of the amplitude is manifest.
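Both the manifest crossing symmetry and the trajectory identity $a_c(s) + a_c(t) + a_c(u) = -2$ are easy to confirm numerically. The sketch below is illustrative only, with $\alpha' = 1$ and $C = 1$ chosen for convenience (\texttt{math.gamma} accepts the negative non-integer arguments that occur at these kinematics):

```python
from math import gamma, pi
from itertools import permutations

alpha_prime = 1.0                              # alpha' = 1 for illustration
a_c = lambda x: 2.0 + 0.5 * alpha_prime * x    # closed-string trajectory

def A_VS(s, t, u, C=1.0):
    """Virasoro-Shapiro amplitude in the manifestly crossing-symmetric form."""
    num = gamma(-a_c(s) / 2) * gamma(-a_c(t) / 2) * gamma(-a_c(u) / 2)
    den = (gamma(-(a_c(s) + a_c(t)) / 2) * gamma(-(a_c(t) + a_c(u)) / 2)
           * gamma(-(a_c(s) + a_c(u)) / 2))
    return C * pi * num / den

s, t = -1.3, -1.7
u = -16.0 / alpha_prime - s - t                # mass-shell condition
traj_sum = a_c(s) + a_c(t) + a_c(u)            # should equal -2
values = [A_VS(*p) for p in permutations((s, t, u))]   # all equal by symmetry
```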
If we rewrite this in terms of just the independent variables $s$ and $t$ we obtain \begin{equation} \label{eqn:BSVSts} \mathcal{A}_{c}(s, t) = \frac{C \, \pi \, \Gamma\left(-\frac{a_c(s)}{2}\right) \, \Gamma\left(-\frac{a_c(t)}{2}\right) \, \Gamma\left(1 + \frac{a_c(s)}{2} + \frac{a_c(t)}{2}\right)}{\Gamma\left(-\frac{a_c(t)}{2} - \frac{a_c(s)}{2}\right) \, \Gamma\left(1 + \frac{a_c(s)}{2}\right) \, \Gamma\left(1 + \frac{a_c(t)}{2}\right)} \, . \end{equation} This amplitude has a pole for each $t = m_J^2$. If we expand around a pole corresponding to the exchange of a particle of spin $J$, we obtain \begin{equation} \mathcal{A}_{c, \, t \approx m_J^2}(s, t) \approx \frac{4C\pi \, e^{-i\pi\left(\frac{J}{2} + 1\right)} \, P_J\left(\frac{\alpha' s}{4}\right)}{\alpha'\left[\Gamma\left(\frac{J}{2} + 1\right)\right]^2 \, (t - m_J^2) } \, , \end{equation} where $P_J\left(\frac{\alpha' s}{4}\right)$ is a polynomial in $s$ of degree $J$ such that $P_J\left(\frac{\alpha' s}{4}\right) = \left(\frac{\alpha' s}{4}\right)^J + \cdots$. On the other hand, if we examine the Regge limit of this amplitude, we obtain \begin{equation} \mathcal{A}_{c, \, \mathrm{Regge}}(s, t) \approx \frac{C\pi \, \Gamma\left(-\frac{a_c(t)}{2}\right)}{\Gamma\left(1 + \frac{a_c(t)}{2}\right)} \, e^{-\frac{i\pi a_c(t)}{2}}\left(\frac{\alpha' s}{4}\right)^{a_c(t)} \, . \end{equation} All of these behaviors are just as expected: the poles are in the correct locations and have the right residues to correspond to exchanges of particles on the trajectory of closed string states, and in the Regge limit we have correct scaling behavior with $s$, also associated with the trajectory.
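These statements can be spot-checked numerically (an illustration only, again with $\alpha' = 1$ and $C = 1$): eliminating $u$ has not spoiled crossing, in that evaluating the same expression with $s$ replaced by $u = -16/\alpha' - s - t$ returns the same value, and the amplitude blows up as $t$ approaches the tachyon pole $t = -4/\alpha'$.

```python
from math import gamma

alpha_prime = 1.0
a_c = lambda x: 2.0 + 0.5 * alpha_prime * x

def A_VS_st(s, t, C=1.0):
    """The (s, t) form of the Virasoro-Shapiro amplitude written above."""
    num = gamma(-a_c(s) / 2) * gamma(-a_c(t) / 2) * gamma(1 + a_c(s) / 2 + a_c(t) / 2)
    den = (gamma(-a_c(t) / 2 - a_c(s) / 2) * gamma(1 + a_c(s) / 2)
           * gamma(1 + a_c(t) / 2))
    return C * num / den

s, t = -1.3, -1.7
u = -16.0 / alpha_prime - s - t
pair = (A_VS_st(s, t), A_VS_st(u, t))      # s <-> u crossing survives
# approach the tachyon pole at t = -4/alpha':
near = abs(A_VS_st(s, -4.0 + 1e-2))
nearer = abs(A_VS_st(s, -4.0 + 1e-4))
```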
The Reggeization procedure assumes we are working in a low energy limit with a scattering process involving the exchange of the lowest lying particle on a trajectory (in this case, a tachyon), and that we can use that amplitude or cross section for the entire trajectory of exchanged particles in the Regge limit, by simply replacing the propagator with a Reggeized version. In this case, that replacement rule would be \begin{equation} \frac{1}{t - m_T^2} \hspace{.25in} \rightarrow \hspace{.25in} -\frac{\alpha' \Gamma\left(-\frac{a_c(t)}{2}\right)}{4\Gamma\left(1 + \frac{a_c(t)}{2}\right)} \, e^{-\frac{i\pi a_c(t)}{2}} \, \left(\frac{\alpha' s}{4}\right)^{a_c(t)} \, . \end{equation} However, as we know, this rule lacks some essential features: the particles being scattered are not protons, and those being exchanged are not glueballs. We therefore want to make alterations to this rule so that it corresponds to the scattering of physical particles, while retaining the many desirable features it already has. The procedure of \cite{DHM} involves replacing the dependence on the closed string Regge trajectory with an unknown linear function $A(x)$ (such that the crossing symmetry is maintained), and then relating this back to the physical glueball trajectory by requiring that the new amplitude has the correct pole structure. However, from a phenomenological point of view, we can actually introduce two separate unknown linear functions, which we will call $A(x)$ and $\tilde{A}(x)$, while still maintaining the desired crossing symmetry. We also multiply by an unknown kinematic factor $F(s, t, u)$. In the method of \cite{DHM}, this is eventually chosen to be $F(s, t, u) = s^2 + t^2 + u^2$, which allows for the residues of the poles to have the correct scaling with $s$. Again, we will leave this arbitrary for now. 
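Before turning to that generalized amplitude, the flat-space replacement rule above can be sketched in a few lines; the sketch is an illustration with $\alpha' = 1$, and it simply confirms that the modulus of the Reggeized propagator scales as $s^{a_c(t)}$ at fixed spacelike $t$:

```python
import cmath
from math import gamma, pi

alpha_prime = 1.0
a_c = lambda x: 2.0 + 0.5 * alpha_prime * x

def reggeized(t, s):
    """Flat-space Reggeized replacement for 1/(t - m_T^2), as written above."""
    a = a_c(t)
    prefactor = -alpha_prime * gamma(-a / 2) / (4.0 * gamma(1 + a / 2))
    return prefactor * cmath.exp(-1j * pi * a / 2) * (alpha_prime * s / 4.0) ** a

t = -1.0                                      # fixed spacelike momentum transfer
ratio = abs(reggeized(t, 1.0e4)) / abs(reggeized(t, 1.0e3))
expected = 10.0 ** a_c(t)                     # Regge scaling |A| ~ s^{a_c(t)}
```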
Thus our starting place is \begin{equation} \label{eqn:VSgxs} \mathcal{A}_g(s, t, u) = \frac{C \, \pi \, \Gamma\left(-A(s)\right)\Gamma\left(-A(t)\right)\Gamma\left(-A(u)\right)}{\Gamma\left(-\tilde{A}(s) - \tilde{A}(t)\right)\Gamma\left(-\tilde{A}(t) - \tilde{A}(u)\right)\Gamma\left(-\tilde{A}(s) - \tilde{A}(u)\right)} \, F(s, t, u) \, . \end{equation} Next we want to examine pole structure and Regge limit behavior, but for both of these we need to rewrite our amplitude in terms of just the independent Mandelstam variables $t$ and $s$. Doing so introduces new parameters, \begin{equation} A(s) + A(t) + A(u) = \chi, \hspace{1in} \tilde{A}(s) + \tilde{A}(t) + \tilde{A}(u) = \tilde{\chi} \, . \end{equation} Note that these must be constants, assuming both $A(x)$ and $\tilde{A}(x)$ are linear, and are determined by the mass shell condition we impose in assuming the external particles are protons, which is \begin{equation} \label{eqn:msprotons} s + t + u = 4m_p^2 \, . \end{equation} This gives \begin{equation} \label{eqn:VSgts} \mathcal{A}_g(s, t) = \frac{C \, \pi \, \Gamma\left(-A(s)\right)\Gamma\left(-A(t)\right)\Gamma\left(A(s) + A(t) - \chi\right)}{\Gamma\left(-\tilde{A}(s) - \tilde{A}(t)\right)\Gamma\left(\tilde{A}(s) - \tilde{\chi}\right)\Gamma\left(\tilde{A}(t) - \tilde{\chi}\right)} \, F(s, t, 4m_p^2 - s - t) \, . \end{equation} This amplitude has a pole at every value $A(t) = n$, where $n$ is a non-negative integer, which we want to correspond to the masses of the physical glueballs. (At this stage we need to require that the function $F$ neither cancels any of these poles nor introduces new ones into the amplitude.) Suppose we define $\alpha_g(x)$ as the Pomeron trajectory, with \begin{equation} \alpha_g(x) = \alpha_{g0} + \alpha_g'x \, . \end{equation} In order to agree with what we believe about the physical Pomeron, we want the lowest lying particle on this trajectory to have spin $2$, the next spin $4$, and so on.
This implies \begin{equation} A(x) = \frac{\alpha_g(x)}{2} - 1 \, . \end{equation} Note that if we initially allowed for further generalization by making $A(x)$ an arbitrary function instead of requiring that it be linear, we would be led to the same place at this stage. Furthermore, our definition of $\chi$ now agrees with that in \cite{DHM}\footnote{We could also convert this to the notation of \cite{ADM}, which uses $\alpha_g(s) + \alpha_g(t) + \alpha_g(u) = \chi_g$, so that $\chi = \frac{\chi_g}{2} - 3$.}. Next we examine the expansion near one of these poles, to ensure that we obtain the correct scaling of the residue with $s$. This will give \begin{equation} \mathcal{A}_{g, t \approx m_{g, J}^2} \approx \end{equation} \nopagebreak $$ \frac{2C\pi e^{-\frac{i\pi J}{2}}}{\alpha_g'\Gamma\left(\frac{J}{2}\right)\Gamma\left(\tilde{A}(m_{g, J}^2) - \tilde{\chi}\right)(t - m_{g, J}^2)}\left[\frac{\Gamma\left(1 - \frac{\alpha_g(s)}{2}\right)}{\Gamma\left(-\tilde{A}(s) - \tilde{A}(m_{g, J}^2)\right)}\right]\left[\frac{\Gamma\left(\frac{\alpha_g(s)}{2} + \frac{J}{2} - 2 - \chi\right)}{\Gamma\left(\tilde{A}(s) - \tilde{\chi}\right)}\right]F(s, m_{g, J}^2, 4m_p^2 - s - m_{g, J}^2) \, . $$ In the usual scheme, $A(x) = \tilde{A}(x)$, and each of the ratios of Gamma functions above yields a polynomial in $s$. Since this is an essential part of the residue structure, we would like to maintain this behavior. However, we see that we can do so provided $A(x)$ and $\tilde{A}(x)$ differ by an integer multiple of one half. We will therefore be content with the looser requirement that \begin{equation} \tilde{A}(x) = A(x) + \frac{k}{2}, \hspace{.75in} \tilde{\chi} = \chi + \frac{3k}{2}, \hspace{.75in} k \in \mathbb{Z} \, .
\end{equation} This then gives \begin{equation} \mathcal{A}_{g, t \approx m_{g, J}^2} \approx \frac{2C\pi e^{-\frac{i\pi J}{2}}P_{J + 2k - 2}\left(\frac{\alpha_g' s}{2}\right)}{\alpha_g'\Gamma\left(\frac{J}{2}\right)\Gamma\left(\frac{J}{2} - 1 - k - \chi\right)(t - m_{g, J}^2)} F(s, m_{g, J}^2, 4m_p^2 - s - m_{g, J}^2) \, . \end{equation} What we need is for the residue of this pole to be a polynomial whose leading term in $s$ has degree $J$, but the polynomial that arises from the ratios of Gamma functions doesn't quite do this, so we need the function $F$ to compensate. The simplest way to allow this to happen is to make the crossing-symmetric form of $F(s, t, u)$ be \begin{equation} F(s, t, u) = \left(\frac{\alpha'_g s}{2}\right)^{2 - 2k} + \left(\frac{\alpha'_g t}{2}\right)^{2 - 2k} + \left(\frac{\alpha'_g u}{2}\right)^{2 - 2k} \, , \end{equation} and require $k \le 1$ (so that this doesn't introduce additional poles into our amplitude). Other, more exotic choices for $F$ clearly exist at this stage. However, choosing $F$ to be polynomial in the Mandelstam variables is most consistent with the underlying idea that the basic form of closed string scattering is maintained even in a curved background; $F$ is then just a kinematic pre-factor, such as we know arises when we change the spins of the external string states even in bosonic string theory in 26-D flat space. On the other hand, in the Regge limit our amplitude becomes \begin{equation} \mathcal{A}_{g, \mathrm{Regge}} \approx \frac{C\pi \, e^{-i\pi\left(k - 1 + \frac{\alpha_g(t)}{2}\right)}\Gamma\left(1 - \frac{\alpha_g(t)}{2}\right)}{\Gamma\left(\frac{\alpha_g(t)}{2} - 1 - k - \chi\right)} \, \left(\frac{\alpha'_g s}{2}\right)^{\alpha_g(t)} \, , \end{equation} which has exactly the scaling behavior we require.
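One can also confirm numerically, directly from this Regge-limit expression, that the modulus depends on $k$ only through the combination $\chi + k$; the leftover $k$-dependence sits in the phase, where it generates at most a sign. The sketch below uses assumed, illustrative numbers for the Pomeron trajectory and for $t$, $s$, and $\chi$:

```python
import cmath
from math import gamma, pi

alpha_g0, alpha_gp = 1.08, 0.25              # illustrative soft-Pomeron values
alpha_g = lambda x: alpha_g0 + alpha_gp * x

def A_regge(t, s, k, chi, C=1.0):
    """The generalized Regge-limit amplitude derived above."""
    a = alpha_g(t)
    prefactor = C * pi * gamma(1 - a / 2) / gamma(a / 2 - 1 - k - chi)
    phase = cmath.exp(-1j * pi * (k - 1 + a / 2))
    return prefactor * phase * (alpha_gp * s / 2.0) ** a

t, s, chi = -0.3, 1.0e4, -0.9
m1 = abs(A_regge(t, s, k=1, chi=chi))        # shift k by one ...
m0 = abs(A_regge(t, s, k=0, chi=chi + 1.0))  # ... or shift chi by the same amount
```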
This then gives the Reggeization prescription \begin{equation} \frac{1}{t - m_{g, 2}^2} \hspace{.25in} \rightarrow \hspace{.25in} \frac{\alpha_g' e^{-i\pi \left(k + \frac{\alpha_g(t)}{2}\right)} \, \Gamma\left(-k - \chi\right) \, \Gamma\left(1 - \frac{\alpha_g(t)}{2}\right)}{2 \, \Gamma\left(\frac{\alpha_g(t)}{2} - 1 - k - \chi\right)} \, \left(\frac{\alpha'_g s}{2}\right)^{\alpha_g(t) - 2} \, . \end{equation} Recall that in the standard prescription of \cite{DHM}, we have $k = 0$. The integer $k$ appears in the above expression three times, but its presence in the phase factor is fairly trivial, generating at most a minus sign. In the other two locations, we could interpret it as simply shifting the value of $\chi$ to $\chi + k$. Usually we consider the value of $\chi$ fixed by the trajectory parameters and the mass of the proton, but this suggests that, from a purely phenomenological point of view, one might interpret $\chi$ as an unknown parameter, to be determined via a data fitting scheme. That being said, the choice $k = 0$ would still be the most consistent with the ideas that glueballs are dual to closed strings living in some curved spacetime background, and that the closed string amplitude in this background would retain most of the structure it has in flat space, with only the Regge trajectories changed. \section{\label{mod2} A Second Possible Modification Scheme} In order to further examine the role that $\chi$ plays in the Reggeization procedure, it is worth stepping back to the worldsheet integral that the Virasoro-Shapiro amplitude is derived from. Let us first recall the argument in standard, 26-D flat space string theory. We begin with four vertices, associated to tachyonic external string states, on a sphere. The locations of three of the vertices can be fixed using conformal symmetry, leaving an integral over the fourth vertex location, which becomes an integral over the complex plane.
This is written \begin{equation} \label{eqn:wsintegral} \mathcal{A}_{c} = C\int_{\mathbb{C}} d^2 z_4 \, |z_{12}|^2|z_{13}|^2|z_{23}|^2 \, \prod_{i < j} |z_{ij}|^{-\alpha' k_i\cdot k_j} \, , \end{equation} where $\{z_1, z_2, z_3\}$ are the fixed locations of the first three vertices, $z_4$ is the location of the fourth, and $z_{ij} = z_i - z_j$. It can be shown explicitly that the integral doesn't actually depend on the values of $\{z_1, z_2, z_3\}$. Since rearranging the momenta $k_i$ is equivalent to rearranging the vertices $z_i$, this is how crossing symmetry manifests itself in this expression. The traditional method for solving this integral is to choose the values $\{0, 1, \infty\}$ for the first three vertex locations, giving \begin{equation} \mathcal{A}_{c} = C\int_{\mathbb{C}} d^2 z_4 \, |z_4|^{-4 - \frac{\alpha' u}{2}}|1 - z_4|^{-4 - \frac{\alpha' t}{2}} \, , \end{equation} (where we have also rewritten the momentum dot products in terms of Mandelstam variables). This temporarily suppresses the crossing symmetry, but simplifies the integral so that it can be done in closed form, taking advantage of analytic continuation. This gives the result \begin{equation} \mathcal{A}_{c} = \frac{C \, \pi \, \Gamma\left(-1 - \frac{\alpha' t}{4}\right) \, \Gamma\left(-1 - \frac{\alpha' u}{4}\right) \, \Gamma\left(3 + \frac{\alpha' t}{4} + \frac{\alpha' u}{4}\right)}{\Gamma\left(2 + \frac{\alpha' t}{4}\right) \, \Gamma\left(2 + \frac{\alpha' u}{4}\right) \, \Gamma\left(-2 - \frac{\alpha' t}{4} - \frac{\alpha' u}{4}\right)} \, , \end{equation} which can then be rewritten in a form where the crossing symmetry is manifest, using the mass-shell condition in equation \ref{eqn:BSMS}. This results in the traditional form for the Virasoro-Shapiro amplitude, given in equation \ref{eqn:VSoriginal}. Suppose instead of making modifications to the closed-form Virasoro-Shapiro amplitude, we go back to equation \ref{eqn:wsintegral}, and attempt to modify this.
This is subtle because the lack of dependence on the values $\{z_1, z_2, z_3\}$ relies on the conformal symmetry of the worldsheet, which is broken if we attempt to modify the mass shell condition and the Regge trajectory.\footnote{Presumably the true solution to this problem lies in properly quantizing strings on a curved background, to produce an exact dual to QCD.} However, we note that if we choose $\{z_1, z_2, z_3\}$ to form an equilateral triangle, we retain explicit crossing symmetry. Specifically, we choose \begin{equation} z_1 = e^{i\pi/3}, \hspace{.75in} z_2 = 0, \hspace{.75in} z_3 = 1 \, . \end{equation} Any translational shift or rotation of this triangle can be absorbed into a redefinition of the variable of integration, and any dilation of this triangle can be absorbed into a redefinition of the constant $C$; this is therefore a unique choice. We then obtain \begin{equation} \mathcal{A}_{c} = C\int_{\mathbb{C}} d^2 z_4 \, |z_4|^{-4 - \frac{\alpha' t}{2}} \, |1 - z_4|^{-4 - \frac{\alpha' s}{2}} \, \left|e^{i\pi/3} - z_4\right|^{-4 - \frac{\alpha' u}{2}} \, . \end{equation} Now we replace the exponents $-4 - \frac{\alpha' x}{2}$ with an arbitrary linear function $B(x) = B_0 + B'x$, which we will later relate to the true Regge trajectory, and we allow for multiplying by an arbitrary function $\tilde{F}(s, t, u)$, yielding \begin{equation} \tilde{\mathcal{A}}_{g} = C\tilde{F}(s, t, u)\int_{\mathbb{C}} d^2 z_4 \, |z_4|^{-B(t)} \, |1 - z_4|^{-B(s)} \, \left|e^{i\pi/3} - z_4\right|^{-B(u)} \, . \end{equation} This amplitude is manifestly crossing symmetric, and we are assuming that, with an appropriate choice of $B_0$ and $B'$, it will have the correct pole structure and Regge limit to meet our requirements. Following the traditional procedure, we should now compute this integral, confirm what the correct linear parameters are by examining the pole structure, and then take the Regge limit.
However, this integral is substantially more difficult, so we must work from the integral itself in examining both poles and the Regge limit. We begin by using the physical mass-shell condition to rewrite our amplitude in terms of just $s$ and $t$, as \begin{equation} \label{eqn:stint} \tilde{\mathcal{A}}_{g} = C\tilde{F}(s, t, 4m_p^2 - t - s)\int_{\mathbb{C}} d^2 z_4 \, |z_4|^{-B(t)} \, |1 - z_4|^{-B(s)} \, \left|e^{i\pi/3} - z_4\right|^{B(s) + B(t) - \chi_B} \, , \end{equation} where $B(s) + B(t) + B(u) = \chi_B$. Our inability to perform this integral in closed form prevents us from examining the full pole structure at low energies, since this structure must arise from analytic continuation away from the region where the integral converges. However, we can expand around the first pole $t \approx m_{g, 2}^2$, where $m_{g,2}$ is the mass of the spin-2 glueball. We do this by noting that near the first pole, the integral must be dominated by the region where $z_4$ is small. Using $z_4 = r e^{i\theta}$, and $\delta \ll 1$, this gives \begin{equation} \tilde{\mathcal{A}}_{g, t \approx m_{g, 2}^2} \approx 2\pi C\tilde{F}(s, m_{g, 2}^2, 4m_p^2 - m_{g, 2}^2 - s)\int_{0}^{\delta} r^{1 - B(t)} \, dr \approx \frac{2\pi C \delta^{2 - B(t)} \, \tilde{F}(s, m_{g, 2}^2, 4m_p^2 - m_{g, 2}^2 - s)}{2 - B(t)} \, , \end{equation} which implies the first pole is at $B(t) = 2$. That suggests we choose simply $B(t) = \alpha_g(t)$ (we will see that this is also supported by the Regge limit behavior), and this then gives \begin{equation} \tilde{\mathcal{A}}_{g, t \approx m_{g, 2}^2} \approx -\frac{2\pi C\tilde{F}(s, m_{g, 2}^2, 4m_p^2 - m_{g, 2}^2 - s)}{\alpha_g'(t - m_{g, 2}^2)} \, . \end{equation} Giving this pole the correct residue would then require that $\tilde{F}$ be quadratic in $s$. Thus we choose \begin{equation} \tilde{F}(s, t, u) = \left(\frac{\alpha'_g s}{2}\right)^{2} + \left(\frac{\alpha'_g t}{2}\right)^{2} + \left(\frac{\alpha'_g u}{2}\right)^{2} \, . 
\end{equation} Next we apply the Regge limit directly to equation \ref{eqn:stint}. This integral does not converge for large real values of $s$, so in order to perform it in the Regge limit, we will allow $s$ to have a large imaginary part, and analytically continue back to physical values of $s$ after integration. With $s$ large and complex, the integrand is largest for $z_4 \sim \frac{1}{s}$, close to the origin. We can therefore write \begin{equation} |1 - z_4|^{-B(s)} \approx e^{\frac{B's}{2}(z_4 + \bar{z}_4)}, \hspace{1in} \left|e^{i\pi/3} - z_4\right|^{B(s) + B(t) - \chi_B} \approx e^{-\frac{B's}{2}\left(z_4 e^{-i\pi/3} + \bar{z}_4 e^{i\pi/3}\right)} \, , \end{equation} so that our integral becomes \begin{equation} \tilde{\mathcal{A}}_{g, \mathrm{Regge}} \approx C\left(\frac{\alpha'_g s}{2}\right)^{2}\int_{\mathbb{C}} d^2 z_4 \, |z_4|^{-B(t)} \, e^{\frac{B's}{2}\left(z_4 - z_4e^{-i\pi/3} + \bar{z}_4 - \bar{z}_4 e^{i\pi/3}\right)} \, , \end{equation} which yields \begin{equation} \tilde{\mathcal{A}}_{g, \mathrm{Regge}} \approx \frac{C\pi \, e^{-i\pi\left(\frac{B(t)}{2} - 1\right)} \, \Gamma\left(1 - \frac{B(t)}{2}\right)}{\Gamma\left(\frac{B(t)}{2}\right)} \, \left(\frac{B's}{2}\right)^{B(t) - 2}\left(\frac{\alpha'_g s}{2}\right)^{2} \, . \end{equation} Again we see that in order for this to have the correct poles and scaling behavior, we must choose $B(t) = \alpha_g(t)$, which then gives \begin{equation} \tilde{\mathcal{A}}_{g, \mathrm{Regge}} \approx \frac{C\pi \, e^{-i\pi\left(\frac{\alpha_g(t)}{2} - 1\right)} \, \Gamma\left(1 - \frac{\alpha_g(t)}{2}\right)}{\Gamma\left(\frac{\alpha_g(t)}{2}\right)} \, \left(\frac{\alpha'_gs}{2}\right)^{\alpha_g(t)} \, . 
\end{equation} Comparison between this result and the pole expansion then leads to the Reggeization prescription \begin{equation} \frac{1}{t - m_{g, 2}^2} \hspace{.25in} \rightarrow \hspace{.25in} \frac{\alpha_g' \, e^{-\frac{i\pi\alpha_g(t)}{2}} \, \Gamma\left(1 - \frac{\alpha_g(t)}{2}\right)}{2\Gamma\left(\frac{\alpha_g(t)}{2}\right)} \, \left(\frac{\alpha'_gs}{2}\right)^{\alpha_g(t) - 2} \, . \end{equation} This is very similar to the solution found in \cite{DHM}, but no parameter $\chi$ appears in the final result. Equivalently, one could say it takes the same form as the original solution but with $\chi = -1$. This reinforces the conclusion of the previous section: choosing a Reggeization procedure inspired by the structure of closed string scattering in 26-D flat space is not completely unique. We will therefore assume a generic form \begin{equation} \frac{1}{t - m_{g, 2}^2} \hspace{.25in} \rightarrow \hspace{.25in} \frac{\alpha_g' \, e^{-\frac{i\pi\alpha_g(t)}{2}} \, \Gamma(-\chi)\Gamma\left(1 - \frac{\alpha_g(t)}{2}\right)}{2\Gamma\left(\frac{\alpha_g(t)}{2} - 1 - \chi\right)} \, \left(\frac{\alpha'_gs}{2}\right)^{\alpha_g(t) - 2} \, , \end{equation} with $\chi$ undetermined, and we will use this in fitting elastic proton-proton scattering. We can then examine what value of $\chi$ agrees best with the data, and use this as a guide in evaluating which modification scheme ought to be used. \section{\label{fitting} Fitting to Proton-Proton Scattering with an Additional Free Parameter} \subsection{\label{fitfunction} The Differential Cross Section} \begin{figure} \begin{center} \resizebox{2in}{!}{\includegraphics{feyn.pdf}} \caption{\label{feyn} The Feynman diagram for proton-proton scattering via tree-level glueball exchange in the $t$-channel.} \end{center} \end{figure} In order to use data to evaluate how well different values of $\chi$ work, we must use this Reggeization procedure to model proton-proton scattering via Pomeron exchange.
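As a quick consistency check, the generic prescription above must reduce to the ordinary propagator $1/(t - m_{g,2}^2)$ as $\alpha_g(t) \to 2$, independently of $\chi$ (for generic $\chi$ away from the gamma-function poles), since $\Gamma(-\chi)/\Gamma(\alpha_g/2 - 1 - \chi) \to 1$ there. A minimal numerical sketch of this limit; the parameter values below are illustrative assumptions, not fitted values:

```python
# Numerical check: near the spin-2 pole alpha_g(t) -> 2, the generic
# Reggeized propagator should reduce to 1/(t - m_{g,2}^2).
# alpha_p, s, chi, eps are illustrative choices, not values from the fit.
import cmath
import math

alpha_p = 0.3      # Pomeron slope alpha_g' (illustrative)
s = 100.0          # Mandelstam s (illustrative)
chi = 0.5          # generic mass-shell parameter (illustrative)
eps = -1e-6        # t - m_{g,2}^2, small, so alpha_g(t) = 2 + alpha_p*eps

alpha = 2.0 + alpha_p * eps
regge = (alpha_p
         * cmath.exp(-1j * math.pi * alpha / 2)
         * math.gamma(-chi) * math.gamma(1 - alpha / 2)
         / (2 * math.gamma(alpha / 2 - 1 - chi))
         * (alpha_p * s / 2) ** (alpha - 2))

pole = 1.0 / eps
rel_err = abs(regge.real - pole) / abs(pole)
print(rel_err)  # tiny: the real part reproduces the simple pole
```

The residual imaginary part, coming from the phase $e^{-i\pi\alpha_g(t)/2}$, vanishes linearly with $t - m_{g,2}^2$.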
This begins by calculating the amplitude and differential cross section for the scattering process via the exchange of a spin-2 massive glueball in the Regge limit, for which the Feynman diagram is shown in FIG. \ref{feyn}. The propagator for a massive spin-2 particle is \begin{equation} D_{\mu\rho\nu\sigma} = \frac{\frac{1}{2}(\eta_{\mu\nu}\eta_{\rho\sigma} + \eta_{\mu\sigma}\eta_{\rho\nu}) + \cdots}{t - m_{g,2}^2} \, , \end{equation} where the terms not written will either vanish when contracted into the vertices or be suppressed in the Regge limit \cite{prop}. The vertex structures we will use are \begin{equation} \Gamma^{\mu\rho}(P_1) = \frac{i\lambda A(t)}{2}\left(\gamma^{\mu}P_1^{\rho} + \gamma^{\rho}P_1^{\mu}\right) \ + \dots \, , \end{equation} where $P_1 = \frac{p_1 + p_3}{2}$. (We will similarly define $P_2 = \frac{p_2 + p_4}{2}$.) This vertex structure is based on assuming the glueballs couple to the protons predominantly via the QCD stress tensor, and we have again ignored terms that will not contribute significantly in the Regge limit \cite{Domokos:2010ma}. Finally, $A(t)$ is a form factor that should be well approximated by a dipole form \begin{equation} A(t) = \frac{1}{\left(1 - \frac{t}{M_d^2}\right)^2} \, , \end{equation} for the values of $t$ we are considering \cite{Hong:2007dq}. Putting these pieces together, the amplitude for the process is \begin{equation} \mathcal{A} = \Big[\bar{u}_3\Gamma^{\mu\rho}(P_1)u_1\Big]D_{\mu\rho\nu\sigma}(k)\Big[\bar{u}_4\Gamma^{\nu\sigma}(P_2)u_2\Big] \, , \end{equation} which in the Regge limit leads to \begin{equation} \frac{1}{4}\sum_{\mathrm{spins}} |\mathcal{A}|^2 = \frac{\lambda^4 \, A^4(t) \, s^4}{(t - m_{g, 2}^2)^2} \, .
\end{equation} If we replace the propagators with our Reggeized propagators and use this expression to find the differential cross section, we obtain \begin{equation} \label{model} \frac{d\sigma}{dt} = \frac{\lambda^4 A^4(t) \Gamma^2(-\chi)\Gamma^2\left(1 - \frac{\alpha_g(t)}{2}\right)}{16\pi \Gamma^2\left(\frac{\alpha_g(t)}{2} - 1 - \chi\right)} \,\left(\frac{\alpha_g' s}{2}\right)^{2\alpha_g(t) - 2} \, . \end{equation} \subsection{The Data Fitting Results} We now want to fit this model to existing proton-proton and proton-antiproton scattering data. We will restrict our attention to scattering processes where single soft Pomeron exchange is the (presumed) dominant contributor. Based on FIG. \ref{totalcross}, we will consider only data with $\sqrt{s} > 500$ GeV, where the contribution from Reggeon exchange is less than 1\%. We also restrict ourselves to the range $0.01 < |t| < 0.6 \ \mathrm{GeV}^2$; below $|t| = 0.01 \ \mathrm{GeV}^2$ there are significant Coulomb interactions, and above $|t| = 0.6 \ \mathrm{GeV}^2$ we are in the hard Pomeron regime. This leaves us with three available center-of-mass energies: $\sqrt{s} = 546$ GeV and $\sqrt{s} = 1800$ GeV, from the E710 and CDF experiments at the Tevatron, and $\sqrt{s} = 7$ TeV, from the TOTEM experiment at the LHC. These data were taken from the High Energy Physics Data Repository (https://hepdata.net).
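Equation \ref{model} is straightforward to evaluate numerically. The sketch below does so for a few values of $t$; the parameter values are illustrative choices of roughly the size found in the fits reported below, not authoritative results, and no conversion from natural units ($\mathrm{GeV}^{-4}$) to mb is performed:

```python
# A sketch evaluating the model differential cross section of equation (model).
# Parameter values (lam, alpha0, alpha', Md, chi) are illustrative assumptions.
import math

def dsigma_dt(t, s, lam, alpha0, alpha_p, Md, chi):
    """Soft-Pomeron differential cross section, in GeV^-4 (natural units)."""
    alpha = alpha0 + alpha_p * t          # linear Pomeron trajectory alpha_g(t)
    A = 1.0 / (1.0 - t / Md**2) ** 2      # dipole form factor A(t)
    prefac = (lam**4 * A**4
              * math.gamma(-chi)**2 * math.gamma(1 - alpha / 2)**2
              / (16 * math.pi * math.gamma(alpha / 2 - 1 - chi)**2))
    return prefac * (alpha_p * s / 2) ** (2 * alpha - 2)

# illustrative parameters, roughly of the size quoted in the fit tables
pars = dict(lam=4.6, alpha0=1.097, alpha_p=0.58, Md=5.7, chi=-2.9)
vals = [dsigma_dt(-absT, s=7000.0**2, **pars) for absT in (0.05, 0.2, 0.5)]
print(vals)  # falls steeply with |t|
```

The steep fall-off with $|t|$ is driven mainly by the power $(\alpha_g' s/2)^{2\alpha_g(t) - 2}$, with milder contributions from the form factor and gamma functions.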
\begin{figure} \begin{center} \resizebox{4in}{!}{\includegraphics{xivschi.pdf}} \caption{\label{xivschi} A map of the minimum value of $\frac{\tilde{\chi}^2}{\mathrm{d.o.f}}$ with respect to the other fitting parameters, for each value of $\chi$} \end{center} \end{figure} We will treat equation \ref{model} as our model, with five fitting parameters $\{\lambda, \alpha_{g0}, \alpha_{g}', M_d, \chi\}$, and use a standard weighted fit in which we minimize the quantity \begin{equation} \frac{\tilde{\chi}^2}{\mathrm{d.o.f}} = \frac{1}{\mathrm{d.o.f}}\sum_{\mathrm{data \ points}} \frac{1}{\tilde{\sigma}^2_{\mathrm{exp}}}\left[\left(\frac{d\sigma}{dt}\right)_{\mathrm{exp}} - \left(\frac{d\sigma}{dt}\right)_{\mathrm{model}}\right]^2 \end{equation} where $\mathrm{d.o.f}$ is the number of degrees of freedom in the fit, $\tilde{\sigma}_{\mathrm{exp}}$ is the experimental uncertainty in each data point, $\left(\frac{d\sigma}{dt}\right)_{\mathrm{exp}}$ is the experimental value of the differential cross section at each data point, and $\left(\frac{d\sigma}{dt}\right)_{\mathrm{model}}$ is the value of the differential cross section that the model provides at each data point.\footnote{Unfortunately, the traditional notation for discussing data fitting overlaps with the notation used elsewhere in this discussion, so the tildes are added for clarity.} Although the value of $\frac{\tilde{\chi}^2}{\mathrm{d.o.f}}$ will be our primary measure for evaluating the fit, and our primary interest is in exploring various choices of the parameter $\chi$, it is also helpful to keep in mind information we have about the values of the other fitting parameters. As was discussed in section \ref{PomeronReview}, analysis of total cross sections for different center-of-mass energies suggests $\alpha_{g0} \approx 1.085$. The slope of the Pomeron trajectory based on similar analyses is usually given somewhere around $\alpha_{g}' \approx 0.3 \ \mathrm{GeV}^{-2}$ \cite{CGM}. 
The values of $M_d$ and $\lambda$ are less well established, but they can be computed in AdS/QCD dual models; a Skyrme model for the proton generates $M_d = 1.17 \ \mathrm{GeV}$ and $\lambda = 9.02 \ \mathrm{GeV}^{-1}$ \cite{DHM, Domokos:2010ma}. A simple automated approach to this fitting problem encounters issues associated with the dependence on $\chi$. The parameter appears as an argument of the gamma function, and the regularly spaced poles of this function produce many possible fit values for $\chi$, because the quantity $\frac{\tilde{\chi}^2}{\mathrm{d.o.f}}$ has a series of local minima in the variable $\chi$. This complicates the fit, so our approach is to fix values of $\chi$ and fit with respect to the other parameters, then extract the values of $\frac{\tilde{\chi}^2}{\mathrm{d.o.f}}$ for each. The result is a map such as that shown in FIG. \ref{xivschi}. \begin{figure} \begin{center} \resizebox{4in}{!}{\includegraphics{datadiscrepancy.pdf}} \caption{\label{datadiscrepancy} The discrepancy between the CDF and E710 data at $\sqrt{s} = 1800$ GeV} \end{center} \end{figure} There is a known discrepancy at the energy 1800 GeV between the data produced by the E710 experiment and that produced by the CDF experiment, as shown in FIG. \ref{datadiscrepancy}, and discussed in \cite{DHM}. In that work, removing either data set from the analysis resulted in a better fit, with a somewhat better result when just the CDF data remained. FIG. \ref{xivschi} includes maps of the best fit value of $\frac{\tilde{\chi}^2}{\mathrm{d.o.f}}$ as functions of $\chi$, for data including each experiment as well as including both. Interestingly, here we see the opposite of what was previously found: the fit is markedly better if we only include the data from the E710 experiment. In fact, having both data sets involved is still better than just involving the CDF data.
Since this is true systematically, regardless of the value of $\chi$, we conclude that (at least within our model) the E710 experimental data at 1800 GeV are more consistent with the data at the other center-of-mass energies, and we will therefore continue our analysis excluding the CDF data set at 1800 GeV. Further considering FIG. \ref{xivschi}, there is a series of (close to) regularly spaced local minima mostly in the region of positive $\chi$. These minima get slightly lower as $\chi$ increases, but all give very similar values of $\frac{\tilde{\chi}^2}{\mathrm{d.o.f}} \approx 1.34$. They are also all associated with similar values for the other fitting parameters. Since none of the theoretical models suggest values of $\chi$ in the range $\chi > 1$, we will focus on the first two local minima. These fits are shown in table \ref{fittable}. Note that the values of the other fitting parameters differ significantly from their predicted values. Furthermore, these local minima are associated with values of $\frac{\tilde{\chi}^2}{\mathrm{d.o.f}}$ that are somewhat higher than the typical value of $\frac{\tilde{\chi}^2}{\mathrm{d.o.f}}$ for negative $\chi$.
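The fixed-$\chi$ profiling strategy described above can be sketched in a few lines. The toy example below is a hypothetical stand-in, not our actual fit: it fixes $\chi$ on a grid, minimizes the weighted figure of merit over the remaining parameter, and records the profiled minimum for each $\chi$, producing a map analogous to FIG. \ref{xivschi}; only the procedure mirrors the text.

```python
# Toy sketch (assumed stand-in model, not the paper's code) of the fitting
# strategy: fix chi on a grid, minimize tilde-chi^2/d.o.f over the remaining
# parameter, and record the profiled minimum for each chi.
import math

data = [(0.5, 1.00, 0.05), (1.0, 0.50, 0.03), (1.5, 0.26, 0.02)]  # (x, y, sigma)

def chi2_per_dof(chi, a, n_params=2):
    # the gamma function introduces chi-dependence, as in the text's model
    def model(x):
        return a * math.exp(-x) / abs(math.gamma(0.45 - chi)) ** 0.1
    chi2 = sum(((y - model(x)) / sig) ** 2 for x, y, sig in data)
    return chi2 / (len(data) - n_params)

def profiled(chi, a_grid):
    # crude inner minimization over the nuisance parameter a
    return min(chi2_per_dof(chi, a) for a in a_grid)

a_grid = [0.5 + 0.01 * i for i in range(301)]      # a in [0.5, 3.5]
chi_grid = [-5.02 + 0.1 * i for i in range(101)]   # chi roughly in [-5, 5]
chi_map = [(c, profiled(c, a_grid)) for c in chi_grid]
best_chi, best_val = min(chi_map, key=lambda p: p[1])
print(best_chi, best_val)
```

In the actual analysis the inner minimization runs over four parameters rather than one, but the resulting $(\chi, \tilde{\chi}^2/\mathrm{d.o.f})$ map is used in the same way.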
\begin{table} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \ \ fit parameters \ \ & \ \ first local minimum \ \ & \ \ second local minimum \ \ & $\chi = -1$ \\ \hline \hline $\lambda$ & $1.608 \pm 0.001$ & $1.881 \pm 0.001$ & $6.729 \pm 0.004$ \\ $\alpha_{g0}$ & $1.0964 \pm 0.0001$ & $1.0961 \pm 0.0001$ & \ \ $1.09472 \pm 0.00006$ \ \ \\ $\alpha_{g}'$ & $0.570 \pm 0.001$ & $0.566 \pm 0.001$ & $0.5475 \pm 0.0009$ \\ $M_d$ & $3.87 \pm 0.10$ & $2.58 \pm 0.03$ & $1671 \pm 91000$ \\ $\chi$ & $-0.02585 \pm 0.00003$ & $0.94977 \pm 0.00006$ & -1 \\ \hline \hline $\frac{\tilde{\chi}^2}{\mathrm{d.o.f}}$ & 1.344 & 1.343 & 1.336 \\ \hline \end{tabular} \caption{\label{fittable} A table of fitting results associated with fixed values of $\chi$.} \end{center} \end{table} For most of the negative regime the map is close to flat, with a lower value of $\frac{\tilde{\chi}^2}{\mathrm{d.o.f}}$, but the fit shows a slight preference for increasingly negative values. In the limit that $\chi$ is large and negative, we are effectively working with a different, simpler model: \begin{equation} \frac{d\sigma}{dt}\Bigg|_{\chi \ll 0} \approx \frac{\lambda^4 A^4(t)}{16\pi}\Gamma^2\left(1 - \frac{\alpha_g(t)}{2}\right) \, (-\chi)^{2 - \alpha_g(t)} \, \left(\frac{\alpha_g' s}{2}\right)^{2\alpha_g(t) - 2} \, . \end{equation} One might conclude that this model is a better fit for the data. However, the values of $\frac{\tilde{\chi}^2}{\mathrm{d.o.f}}$ that result from this fit are not significantly lower than those for more moderate negative values of $\chi$. If we decide based on FIG. \ref{xivschi} that the best fit is for some negative value of $\chi$, we still cannot really argue that any particular negative value should be chosen.
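The $\chi \ll 0$ simplification rests on the standard asymptotics of the gamma-function ratio, $\Gamma(-\chi)/\Gamma(\alpha_g/2 - 1 - \chi) \to (-\chi)^{1 - \alpha_g(t)/2}$ as $\chi \to -\infty$, which enters the cross section squared. A quick numerical check of this ratio; the value of $\alpha_g(t)$ is illustrative:

```python
# Numerical check of the gamma-ratio asymptotics used in the chi << 0 limit:
# Gamma(-chi)/Gamma(alpha/2 - 1 - chi) -> (-chi)^(1 - alpha/2) as chi -> -inf.
# Uses lgamma to avoid overflow; the alpha value is an illustrative choice.
import math

alpha = 1.08   # representative alpha_g(t)
ratios = []
for chi in (-10.0, -100.0, -1000.0):
    exact = math.exp(math.lgamma(-chi) - math.lgamma(alpha / 2 - 1 - chi))
    asymptotic = (-chi) ** (1 - alpha / 2)
    ratios.append(exact / asymptotic)
print(ratios)  # tends to 1 as chi becomes more negative
```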
\begin{figure} \begin{center} \resizebox{4in}{!}{\includegraphics{chivsmd.pdf}} \caption{\label{chivsmd} A map of the fit value of $M_d$, for each value of $\chi$} \end{center} \end{figure} That being said, for a region of moderate negative values of $\chi \in [-1.942, -0.615]$, the fit attempts result in ``runaway'' values of the fitting parameter $M_d$: the fit chooses an extremely large value of $M_d$, with an even larger uncertainty, which suggests that this range of $\chi$ values should be ruled out. See for example a plot of the fit dipole masses associated with each value of $\chi$, as shown in FIG. \ref{chivsmd}. This behavior is exhibited in particular by the value $\chi = -1$, the value suggested by the modification scheme developed in section \ref{mod2}. We have included these fitting results in table \ref{fittable}. Although the general data fitting is not pointing to any particular value of $\chi$, we can still examine specifically the family of possible choices of $\chi$ suggested in section \ref{mod1}, which is \begin{equation} \label{chik} \chi = \frac{3\alpha_{g0}}{2} - 3 + 2\alpha_g' M_p^2 + k, \hspace{1in} k = 1, 0, -1, -2, \dots \, . \end{equation} The first several choices of $k$ are shown in table \ref{fittable2}. Of primary interest is the choice $k = 0$, which corresponds to the original value of $\chi$ used in \cite{DHM}. This generates a fit with a value of $\frac{\tilde{\chi}^2}{\mathrm{d.o.f}}$ that is around twice that associated with the lowest values we have. However, the values of the other fitting parameters are much closer to their predicted values in this case than for any other choices we are considering. The choices $k = \pm 1$ generate fits with the runaway behavior in the fitting parameter $M_d$. However, choices $k = -2, -3, -4, \cdots$ work well. 
These generate fits consistent with the generic behavior for negative $\chi$: they all have $\frac{\tilde{\chi}^2}{\mathrm{d.o.f}} \approx 1.2$, with a slight improvement in the fit as $k$ decreases. On the other hand, the other fitting parameters are then significantly different from their predicted values. \begin{table} \begin{center} \begin{tabular}{c|c|c|c|c|} fit parameters & $k = 1$ & $k = 0$ & $k = -1$ & $k = -2$ \\ \hline $\lambda$ & $0.05981 \pm 0.00004$ & $10.930 \pm 0.007$ & $5.703 \pm 0.004$ & $4.637 \pm 0.003$ \\ $\alpha_{g0}$ & $1.12531375 \pm 6 \times 10^{-8}$ & $1.0834 \pm 0.0001$ & $1.09626 \pm 0.00009$ & $1.09734 \pm 0.00009$ \\ $\alpha_{g}'$ & $0.74564453 \pm 5 \times 10^{-8}$ & $0.4173 \pm 0.0001$ & $0.5665 \pm 0.0005$ & $0.5798 \pm 0.0007$ \\ $M_d$ & $79390 \pm 3 \times 10^{7}$ & $1.91 \pm 0.01$ & $103876 \pm 6 \times 10^8$ & $5.7 \pm 0.3$ \\ \hline $\frac{\tilde{\chi}^2}{\mathrm{d.o.f}}$ & 24.5 & 2.63 & 1.269 & 1.246 \\ \end{tabular} \caption{\label{fittable2} A table of fitting results assuming the family of choices for $\chi$ discussed in section \ref{mod1}.} \end{center} \end{table} \begin{figure} \begin{center} \resizebox{4in}{!}{\includegraphics{fitresult.pdf}} \caption{\label{fitresult} The data (displayed on a log plot) together with the model for the fitting results using equation \ref{chik} with $k = -2$.} \end{center} \end{figure} The data and the fit are shown together for the choice $k = -2$ in FIG. \ref{fitresult}. We notice that the fit is best for small values of $|t|$, with deviations mostly in the range $0.5 \ \mathrm{GeV}^2 < |t| < 0.6 \ \mathrm{GeV}^2$. These deviations are most pronounced for the $\sqrt{s} = 7$ TeV data, though they are also somewhat apparent for the lower energy data. This might indicate that the transition between hard Pomeron and soft Pomeron behaviors occurs at $t = -0.5 \ \mathrm{GeV}^2$ rather than $t = -0.6 \ \mathrm{GeV}^2$.
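For orientation, equation \ref{chik} can be evaluated directly. The sketch below uses the reference trajectory values quoted in section \ref{fitting} ($\alpha_{g0} \approx 1.085$, $\alpha_g' \approx 0.3 \ \mathrm{GeV}^{-2}$) and takes $M_p = 0.938$ GeV to be the proton mass; the fitted trajectory parameters in table \ref{fittable2} would shift these numbers somewhat:

```python
# Evaluating equation (chik) for the first few k. The trajectory values are
# the reference values quoted in the text (not the fitted ones), and M_p is
# taken to be the proton mass.
alpha_g0, alpha_gp, Mp = 1.085, 0.3, 0.938

def chi_of_k(k):
    return 1.5 * alpha_g0 - 3.0 + 2.0 * alpha_gp * Mp**2 + k

chis = {k: chi_of_k(k) for k in (1, 0, -1, -2)}
print(chis)  # consecutive k differ by exactly 1
```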
In summary, it seems that there are two possible conclusions based on our analysis. The best way to minimize $\frac{\tilde{\chi}^2}{\mathrm{d.o.f}}$ is to choose a value of $\chi < -1.942$. However, no particular value is strongly better than any other, and the other fitting parameters do not agree with our predictions. On the other hand, we could choose the traditional value of $\chi$ first given in \cite{DHM}. This results in a significantly weaker fit, but fitting parameters (other than $\chi$) that are more reasonable. \section{\label{Conclusion} Conclusions and Future Directions} In this work we have explored the role of the mass-shell parameter $\chi$ in the Reggeization procedure modeling proton-proton scattering via Pomeron exchange. The essential idea behind our model is to start with the cross section for proton-proton scattering via the lowest-lying particle on the Pomeron trajectory, the spin-2 glueball. Then, we replace the propagator with a ``Reggeized'' propagator that should take into account the exchanges of all the particles in the trajectory. This replacement rule is based on analyzing the Virasoro-Shapiro amplitude for the scattering of four closed strings and modifying the amplitude to depend on the physical Pomeron trajectory. The original method of Reggeization was developed in \cite{DHM}, and involved the introduction of the parameter $\chi$, based on the mass-shell condition for the protons. However, in sections \ref{mod1} and \ref{mod2} we showed that this method could be generalized and modified while still satisfying the basic phenomenological requirements of proton-proton scattering via Pomeron exchange (reviewed in section \ref{PomeronReview}). These modifications effectively change the value of $\chi$.
In order to better inform the choice of $\chi$ and compare the effectiveness of the original scheme with the generalized one, we fit the model to proton-proton scattering data in section \ref{fitting}, allowing $\chi$ to be a fitting parameter. The fitting procedure was complicated by the role the parameter $\chi$ plays in the model; its appearance inside a gamma function leads to a landscape of fitting results with multiple locally ``best fit'' choices. We therefore performed the fit by choosing values of $\chi$ and fitting with respect to the other parameters, thus creating a map of the best possible $\frac{\tilde{\chi}^2}{\mathrm{d.o.f}}$ for values of $\chi$ in the range $[-10, 10]$. We also analyzed the specific choices of $\chi$ suggested in sections \ref{mod1} and \ref{mod2}. The results of this analysis were inconclusive: the smallest values of $\frac{\tilde{\chi}^2}{\mathrm{d.o.f}}$ are achieved for $\chi < -1.942$. However, the landscape of $\frac{\tilde{\chi}^2}{\mathrm{d.o.f}}$ is too flat to effectively narrow down the choice of $\chi$ beyond that, and the other fitting parameters in this region seem inconsistent with what we know about them. On the other hand, the original choice of $\chi$ given in \cite{DHM} provides a substantially higher value of $\frac{\tilde{\chi}^2}{\mathrm{d.o.f}}$, but generates values for the other parameters that are a better fit to previous work. We \emph{were} able to rule out the modification from section \ref{mod2} with reasonable certainty. One insight that we gained involved the experimental data at center-of-mass energy $1800 \ \mathrm{GeV}$. As has been previously established, there is some discrepancy between the data sets obtained by the experiments E710 and CDF at the Tevatron. Our model systematically fits the E710 data better than the CDF data for any choice of $\chi$; in fact, the CDF data alone generates a worse fit than even both data sets together.
This result is different from that found in \cite{DHM}; it seems likely that the inclusion in our fitting of the 7 TeV data from the TOTEM experiment at the LHC is the source of this change: our model suggests that the E710 data is more consistent with the new 7 TeV data. While this might be model-specific, the most important factor in how the data sets at different values of $\sqrt{s}$ relate to each other is the scaling $\frac{d\sigma}{dt} \propto s^{2\alpha_g(t) - 2}$, which is a common feature of any Pomeron exchange model. The fitting results also showed the greatest discrepancies with the model in the range $0.5 \ \mathrm{GeV}^2 < |t| < 0.6 \ \mathrm{GeV}^2$, which might suggest that the transition from soft Pomeron to hard Pomeron behavior occurs at a different location than previously thought: $t = -0.5 \ \mathrm{GeV}^2$. In the future, it might be interesting to repeat this analysis over a larger range of values of $\sqrt{s}$. We could incorporate lower energy scattering data effectively if we modified our model to include Reggeon exchange as well as Pomeron exchange. To include higher energy data we would need to wait for the LHC to provide results at $14 \ \mathrm{TeV}$. Such data might give us additional clarity on the issue of the discrepancy between the two 1800 GeV data sets, and on the deviations between the model and the data for $0.5 \ \mathrm{GeV}^2 < |t| < 0.6 \ \mathrm{GeV}^2$. \begin{acknowledgements} Z. Hu and N. Mann would like to acknowledge the support of the Union College Summer Research Fellowship program. \end{acknowledgements}
\section{Introduction} The heterotic $E_8 \times E_8$ and the heterotic $SO(32)$ string theories hold a special place when it comes to relating string vacua to experimental phenomena. In this note, we speculate about the fundamental role played by the $SO(8)$ group representations that necessarily arise in models constructed following the free fermionic methodology. These representations display a triality structure, which is the four-dimensional manifestation of the twisted generation of gauge groups already noticed in the ten-dimensional case, and they are remnants of the higher-dimensional triality algebra $$\Tri({\mathbb{O}}) = \mathfrak{so}(8).$$ \section{{The Free-Fermionic Methodology}} For each consistent heterotic string model, there exists a partition function defined by a set of vectors of boundary conditions and a set of coefficients associated with each pair of these vectors. It will be shown that, for any model realized in the free fermionic formalism, the basis vectors of boundary conditions and the associated coefficients must satisfy a set of general rules. These rules, originally derived by Antoniadis, Bachas, and Kounnas in \cite{fff}, are known as the ABK rules.\footnote{These rules were also developed with a different formalism by Kawai, Lewellen and Tye in \cite{fff1}.} For further convenience, the vectors containing the boundary conditions used to define a model are called the basis vectors, and the associated coefficients are called the one-loop phases that appear in the partition function. \subsection{{The ABK Rules}} One of the key elements is the set of basis vectors that defines $\Xi$, the space of all sectors. For each sector $\beta \in \Xi$ there is a corresponding Hilbert space of states.
Each basis vector $b_{i}$ consists of a set of boundary conditions for each fermion, denoted by $$b_{i}=\{\alpha(\psi^{\mu}_{1,2}), ...,\alpha(\omega^{6})|\alpha(\overline y^{1}),...,\alpha(\overline{\phi}^{8})\}$$ where $\alpha(f)$ is defined by $$f \rightarrow - e^{\ i \pi \alpha(f)}f.$$ The $b_{i}$ have to form an additive Abelian group and satisfy the following constraints. If $N_i$ is the smallest positive integer for which $N_{i}b_i =0$, and $N_{ij}$ is the least common multiple of $N_i$ and $N_j$, then the rules for the basis vectors, known popularly as the ABK rules, are given as \begin{align} &(1) \quad \sum m_i \cdot b_i = 0 \iff m_{i}=0 \mod N_{i}\,\,\forall i \\ &(2) \quad N_{ij} \cdot b_i \cdot b_j = 0 \mbox{ mod } 4 \\ &(3) \quad N_{i} \cdot b_i \cdot b_i = 0 \mbox{ mod } 8 \\ &(4) \quad b_1 = \mathbf{1}\iff \mathbf{1}\in \Xi \\ &(5) \quad \mbox{Even number of real fermions} \end{align} where $$ b_i \cdot b_j = \left ( \frac{1}{2} \sum_{\mbox{left real}} + \sum_{\mbox{left complex}} - \frac{1}{2} \sum_{\mbox{right real}} - \sum_{\mbox{right complex}} \right ) b_i (f) \times b_j(f). $$ \subsection{{Rules for the One-Loop Phases}} The rules for the one-loop phases are \begin{eqnarray} C \binom{b_{i}}{b_{j}} &=& \delta_{b_j} e^{\frac{2i \pi}{N_j}n} = \delta_{b_i} e^{\frac{2i \pi}{N_i}m} e^{i \pi \frac{b_i \cdot b_j}{N_j}n} \\ C \binom{b_{i}}{b_{i} } &=& -e^{\frac{i \pi}{4} b_i \cdot b_i} C \binom{b_{i}}{1}\\ C \binom{b_{i}}{b_{j}} &=& e^{\frac{i \pi}{2} b_i \cdot b_j} C \binom{b_{j}}{b_{i}}^* \\ C \binom{b_{i}}{b_{j} + b_{k}} &=& \delta_{b_i} C \binom{b_{i}}{b_{j}} C \binom{b_{i}}{b_{k}} \end{eqnarray} where the spin-statistics index is defined as \[ \delta_{\alpha} = e^{i \alpha(\psi^\mu) \pi} = \begin{cases} \begin{array}{cc} \,\,\,\,1, \quad \quad &\alpha(\psi_{1,2}) = 0 \\ -1, \quad \quad &\alpha(\psi_{1,2}) = 1 \\ \end{array}\end{cases}.
\] \subsection{{The GGSO Projections}} To complete this construction, we have to impose another set of constraints on the physical states, called the GGSO projections. The GGSO projection selects the states ${|S\rangle}_\alpha$ belonging to the $\alpha$ sector satisfying \begin{equation} e^{i \pi b_i \cdot F_\alpha} {|S\rangle}_\alpha = \delta_\alpha C \binom{\alpha}{b_{i}}^* {|S\rangle}_\alpha \quad \quad \ \quad \quad \forall\,\, b_i \end{equation} where \begin{equation} b_i \cdot F_\alpha = \left ( \frac{1}{2} \sum_{\mbox{left real}} + \sum_{\mbox{left complex}} - \frac{1}{2} \sum_{\mbox{right real}} - \sum_{\mbox{right complex}} \right ) b_i (f)\times F_\alpha(f) \end{equation} and $F_\alpha(f)$ is the fermion number operator given by \[ F_\alpha(f) = \begin{cases} \begin{array}{cc} +1, \quad \quad & \mbox{if } f \mbox{ is a fermionic oscillator,}\\ -1, \quad \quad &\mbox{if } f \mbox{ is the complex conjugate.} \end{array}\end{cases} \] \subsection{{The Massless String Spectrum}} As we are concerned with low-energy physics, we are only interested in the massless states.
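The Lorentzian product and the ABK constraints above can be checked mechanically. The sketch below encodes the standard fermion content of the four-dimensional heterotic models discussed in the following sections (the labels and the grouping into real and complex fermions are our own bookkeeping assumptions) and verifies rules (2) and (3) for the vectors $\mathbf{1}$ and $S$, both of which have only $0$ and $1$ entries so that $N_i = 2$:

```python
# Bookkeeping sketch (assumed labels, standard 4D heterotic fermion content):
# checks ABK rules (2) and (3) for the basis vectors 1 and S with N_i = 2.
LEFT_REAL = [f"y{i}" for i in range(1, 7)] + [f"w{i}" for i in range(1, 7)]
LEFT_CPLX = ["psi_mu", "chi12", "chi34", "chi56"]
RIGHT_REAL = [f"yb{i}" for i in range(1, 7)] + [f"wb{i}" for i in range(1, 7)]
RIGHT_CPLX = ([f"psib{i}" for i in range(1, 6)]
              + [f"etab{i}" for i in range(1, 4)]
              + [f"phib{i}" for i in range(1, 9)])

def dot(b1, b2):
    """Lorentzian product b_i . b_j defined with the ABK rules."""
    part = lambda fs, w: w * sum(b1.get(f, 0) * b2.get(f, 0) for f in fs)
    return (part(LEFT_REAL, 0.5) + part(LEFT_CPLX, 1.0)
            - part(RIGHT_REAL, 0.5) - part(RIGHT_CPLX, 1.0))

one = {f: 1 for f in LEFT_REAL + LEFT_CPLX + RIGHT_REAL + RIGHT_CPLX}
S = {f: 1 for f in ("psi_mu", "chi12", "chi34", "chi56")}

print(dot(one, one), dot(S, S), dot(one, S))  # -12.0 4.0 4.0
```

With $N_1 = N_S = N_{1S} = 2$, one finds $2\,(\mathbf{1}\cdot\mathbf{1}) = -24 \equiv 0 \bmod 8$ and $2\,(S\cdot S) = 8 \equiv 0 \bmod 8$, so rule (3) holds, and $2\,(\mathbf{1}\cdot S) = 8 \equiv 0 \bmod 4$ satisfies rule (2).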
The physical states in the string spectrum satisfy the level matching condition \beq M_L^2=-{1\over 2}+{{{\alpha_L}\cdot{\alpha_L}}\over 8}+N_L=-1+ {{{\alpha_R}\cdot{\alpha_R}}\over 8}+N_R=M_R^2 \label{virasorocond} \eeq where $\alpha=(\alpha_L;\alpha_R)\in\Xi$ is a sector in the additive group, and \beq N_L=\sum_f ({\nu_L}) ;\hskip 3cm N_R=\sum_f ({\nu_R}); \label{nlnr} \eeq The boundary conditions determine the transformation properties of the fermions, $$f \rightarrow - e^{\ i \pi \alpha(f)}f, \qquad f^{\ast}\rightarrow - e^{-i \pi \alpha(f)}f^{\ast},$$ and the corresponding frequencies of the fermionic oscillators are given by $$\nu_{f, f^{\ast}} = \frac{1\pm \alpha(f)}{2}.$$ Each complex right--moving fermion $f$ generates a $U(1)$ current whose charge with respect to the unbroken Cartan generators of the four-dimensional gauge group is given by \begin{eqnarray*} Q_{\nu}(f)&=&\nu -\frac{1}{2}\\ &=& \frac{\alpha(f)}{2}+F \end{eqnarray*} \subsection{{The Enhancements}} Extra space-time vector bosons may be obtained from the sectors satisfying the conditions: $$\alpha_{L}^{2}=0,\qquad\qquad \alpha_{R}^{2}\neq0.$$ There are three possible types of enhancements: \begin{itemize} \item Observable, for example $x$, \item Hidden, for example $z_{1}+z_{2}$, \item Mixed, for example $\alpha$. \end{itemize} \section{{The Free-Fermionic $4D$ Models}} The phenomenological free fermionic heterotic string models were constructed following two main routes; the first consists of the so--called NAHE--based models. This set of models utilises a set of eight or nine boundary condition basis vectors. The first five consist of the so--called NAHE set \cite{nahe} and are common in all these models. The basis vectors underlying the NAHE--based models therefore differ by the additional three or four basis vectors that extend the NAHE set.
\noindent The second route follows from the classification methodology that was developed in \cite{gkr} for the classification of type II free fermionic superstrings and adopted in \cite{fknr, fkr, acfkr, frs} for the classification of free fermionic heterotic string vacua with $SO(10)$ GUT symmetry and its Pati--Salam \cite{acfkr} and Flipped $SU(5)$ \cite{frs} subgroups. The main difference between the two classes of models is that while the NAHE--based models allow for asymmetric boundary conditions with respect to the set of internal fermions $\{ y, \omega\vert {\bar y}, {\bar\omega}\}$, the classification method only utilises symmetric boundary conditions. This distinction affects the moduli spaces of the models \cite{moduli}, which can be entirely fixed in the former case \cite{cleaver} but not in the latter. On the other hand, the classification method enables the systematic scan of spaces of the order of $10^{12}$ vacua, and led to the discovery of spinor--vector duality \cite{fkr, svduality} and exophobic heterotic string vacua \cite{acfkr}.
\subsection{The Classification Methodology} A subset of basis vectors that respect the $SO(10)$ symmetry is given by the set of 12 boundary condition basis vectors $V=\{v_1,v_2,\dots,v_{12}\} $ \begin{eqnarray*} v_1=&1&=\{\psi_{\mu}^{1,2}, \chi^{1,...,6}, y^{1,...,6}, \omega^{1,...,6}|\bar{y}^{1,...,6}, \bar{\omega}^{1,...,6}, \bar{\psi}^{1,...,5}, \bar{\eta}^{1,2,3}, \bar{\phi}^{1,...,8}\}, \\ v_2=&S&=\{\psi^\mu,\chi^{12},\chi^{34},\chi^{56}\},\nonumber\\ v_{2+i}=&e_i&=\{y^{i},\omega^{i}|\overline{y}^i,\overline{\omega}^i\}, \ i=1,\dots,6,\nonumber\\ v_{9}=&b_{1}&=\{\chi^{34},\chi^{56},y^{34},y^{56}|\bar{y}^{34}, \overline{y}^{56},\overline{\eta}^1,\overline{\psi}^{1,\dots,5}\},\label{basis}\\ v_{10}=&b_{2}&=\{\chi^{12},\chi^{56},y^{12},y^{56}|\overline{y}^{12}, \overline{y}^{56},\overline{\eta}^2,\overline{\psi}^{1,\dots,5}\},\nonumber\\ v_{11}=&z_1 &=\{ \overline{\phi}^{1,...,4}\},\nonumber\\ v_{12}=&z_2&= \{ \overline{\phi}^{5,...,8}\}\nonumber\\ \end{eqnarray*} where the basis vectors ${{1}}$ and ${{S}}$ generate a model with the $SO(44)$ gauge symmetry and ${N} = 4$ space--time SUSY, with the tachyons being projected out of the massless spectrum. The next six basis vectors $e_{1},...,e_{6}$ all correspond to the possible symmetric shifts of the six internal coordinates, thus breaking the $SO(44)$ gauge group to $SO(32)\times U(1)^{6}$ but keeping the ${N}=4$ SUSY intact. The vectors $b_{i}$ for ${{i}}=1,2$ correspond to the ${\mathbb{Z}}_2 \times {\mathbb{Z}}_2$ orbifold twists. The vectors ${b_{1}}$ and ${b_{2}}$ play the role of breaking the ${N}=4$ SUSY down to ${N}=1$ whilst reducing the gauge group to $SO(10)\times U(1)^{2}\times SO(18)$. The states coming from the hidden sector are produced by ${z_{1}}$ and ${z_{2}}$, which are left untouched by the action of the previous basis vectors.
These vectors, together with the others, generate space-time vector bosons in the adjoint representation of the gauge symmetry $SO(10)\times U(1)^{3}\times SO(8)\times SO(8)$, where $SO(10)\times U(1)^{3}$ is the observable gauge group, which gives rise to matter states from the twisted sectors charged under the $U(1)$s, while $SO(8)\times SO(8)$ is the hidden gauge group, which gives rise to matter states that are neutral under the $U(1)$s. \subsection{{The Various $SO(10)$ Subgroups}} The $SO(10)$ GUT symmetry of the models generated can be broken to one of its subgroups by the boundary condition assignment on the complex fermion $\overline{\psi}^{1,...,5}$. For the Pati-Salam and the Flipped $SU(5)$ case, one additional basis vector is required to break the $SO(10)$ GUT symmetry. However, in order to construct the $SU(4)\times SU(2)\times U(1)$, the Standard-Like models and the Left-Right Symmetric models, the Pati-Salam breaking is required along with an additional $SO(10)$ breaking basis vector. The following boundary condition basis vectors can be used to construct the necessary gauge groups: \subsubsection{{The Pati-Salam Subgroup}} $$v_{13}=\alpha=\{\overline{\psi}^{4,5}, \overline{\phi}^{1,2}\}$$ \subsubsection{{The Flipped $SU(5)$ Subgroup}} $$v_{13}=\alpha=\{\overline{\eta}^{1,2,3}=\frac{1}{2}, \overline{\psi}^{1,...,5}=\frac{1}{2}, \overline{\phi}^{1,...,4}=\frac{1}{2}, \overline{\phi}^{5}\}$$ \subsubsection{{The $SU(4)\times SU(2)\times U(1)$ Subgroup}} \begin{eqnarray*} v_{13}&=\alpha&= \{\overline{\psi}^{4,5}, \overline{\phi}^{1,2}\}\\ v_{14}&=\beta&= \{\overline{\psi}^{4,5}=\frac{1}{2}, \overline{\phi}^{1,...,6}=\frac{1}{2}\} \end{eqnarray*} \subsubsection{{The Left-Right Symmetric Subgroup}} \begin{eqnarray*} v_{13}=\alpha &=& \{ \overline{\psi}^{4,5}, \overline{\phi}^{1,2}\},\nonumber\\ v_{14}=\beta &=& \{ \overline{\eta}^{1,2,3}=\textstyle\frac{1}{2},\overline{\psi}^{1,...,3}=\frac{1}{2},\overline{\phi}^{1,2}=\frac{1}{2},\overline{\phi}^{3,4}\}\nonumber\\ \end{eqnarray*}
\subsubsection{{The Standard-Like Model Subgroup}} \begin{eqnarray*} v_{13}&=\alpha&= \{\overline{\psi}^{4,5}, \overline{\phi}^{1,2}\}\\ v_{14}&=\beta&= \{\overline{\eta}^{1,2,3}=\frac{1}{2}, \overline{\psi}^{1,...,5}=\frac{1}{2}, \overline{\phi}^{1,...,4}=\frac{1}{2}, \overline{\phi}^{5}\} \end{eqnarray*} \section{The $SU421$ And LRS Models} In \cite{Faraggi:2015iaa} it was highlighted that the LRS and $SU421$ models are not viable, as these models circumvent the $E_6 \rightarrow SO(10)\times U(1)_{\zeta}$ symmetry breaking pattern at the price that the $U(1)_{\zeta}$ charges of the SM states do not satisfy the $E_6$ embedding necessary for the unified gauge couplings to agree with the low energy values of $\sin^2 \theta_W (M_{Z})$ and $\alpha_{s} (M_Z)$ \cite{Faraggi:2011xu, Faraggi:2013nia}. While the statement is true for the $SU421$ models \cite{Cleaver:2002ps, Faraggi:2014vma}, we examine LRS models constructed in the free fermionic formalism in order to check whether the $U(1)_{\zeta}$ charges of the SM states satisfy the $E_6$ embedding. We begin by assuming that the $U(1)_\zeta$ charges admit the $E_6$ embedding. In this case the heavy Higgs states consist of the pair ${\cal N}\left({\bf1},{\bf\frac{3}{2}},{\bf1},{\bf2},{\bf\frac{1}{2}}\right),~ {\bar{\cal N}}\left({\bf1},-{\bf\frac{3}{2}},{\bf1},{\bf2}, -{\bf\frac{1}{2}}\right). $ The VEV along the electrically neutral component leaves unbroken the SM gauge group and the $U(1)_{Z^\prime}$ combination \beq U(1)_{{Z}^\prime} ~=~ {1\over {2}} U(1)_{B-L} -{2\over3} U(1)_{T_{3_R}} - {5\over3}U(1)_\zeta ~\notin~ SO(10)\nonumber \eeq where $U(1)_{\zeta}=\sum_{i=1}^{3}U(1)_i$ is anomaly free and may remain unbroken down to low scales. We remark, however, that in the NAHE-based free fermionic LRS models \cite{lrs} the $U(1)_\zeta$ charges do not admit the $E_6$ embedding, and we go on to show that the same is true for free fermionic models constructed by utilizing the classification methodology \cite{gkr}.
\subsection{{The Non-Viable $SU(4)\times SU(2)\times U(1)$}} In this section, we briefly consider the model presented in \cite{Faraggi:2014vma}, which was obtained using the classification methodology. The set of basis vectors that generate the $SU(4)\times SU(2)\times U(1)$ heterotic string model is given by \begin{eqnarray*} v_1=&1&=\{\psi_{\mu}^{1,2}, \chi^{1,...,6}, y^{1,...,6}, \omega^{1,...,6}|\bar{y}^{1,...,6}, \bar{\omega}^{1,...,6}, \bar{\psi}^{1,...,5}, \bar{\eta}^{1,2,3}, \bar{\phi}^{1,...,8}\}, \\ v_2=&S&=\{\psi^\mu,\chi^{12},\chi^{34},\chi^{56}\},\nonumber\\ v_{2+i}=&e_i&=\{y^{i},\omega^{i}|\overline{y}^i,\overline{\omega}^i\}, \ i=1,\dots,6,\nonumber\\ v_{9}=&b_{1}&=\{\chi^{34},\chi^{56},y^{34},y^{56}|\bar{y}^{34}, \overline{y}^{56},\overline{\eta}^1,\overline{\psi}^{1,\dots,5}\},\label{basis}\\ v_{10}=&b_{2}&=\{\chi^{12},\chi^{56},y^{12},y^{56}|\overline{y}^{12}, \overline{y}^{56},\overline{\eta}^2,\overline{\psi}^{1,\dots,5}\},\nonumber\\ v_{11}=&z_1 &=\{ \overline{\phi}^{1,...,4}\},\nonumber\\ v_{12}=&z_2&= \{ \overline{\phi}^{5,...,8}\}\nonumber\\ v_{13}=&\alpha&= \{\overline{\psi}^{4,5}, \overline{\phi}^{1,2}\},\\ v_{14}=&\beta&= \{\overline{\psi}^{4,5}=\frac{1}{2}, \overline{\phi}^{1,...,6}=\frac{1}{2}\} \end{eqnarray*} where the space-time vector bosons are obtained solely from the untwisted sector and generate the observable and hidden gauge symmetries, given by: \beqn {\rm observable} ~: &~~~~~~~~SU(4)\times SU(2)\times U(1)\times U(1)^3 \nonumber\\ {\rm hidden} ~: &SU(2)\times U(1)\times SU(2)\times U(1)\times SU(2)\times U(1)\times SO(4)\nonumber \eeqn In order to preserve the aforementioned observable and hidden gauge groups, all additional space-time vector bosons, which can arise as enhancements from the following $36$ sectors, need to be projected out: $$\begin{Bmatrix} z_{1},&z_{1}+\beta,&z_{1}+2\beta,\\ z_{1}+\alpha,&z_{1}+\alpha+\beta,&z_{1}+\alpha+2\beta,\\ z_{2},&z_{2}+\beta,&z_{2}+2\beta,\\
z_{2}+\alpha,&z_{2}+\alpha+\beta,&z_{2}+\alpha+2\beta,\\ z_{1}+z_{2},&z_{1}+z_{2}+\beta,&z_{1}+z_{2}+2\beta,\\ z_{1}+z_{2}+\alpha,&z_{1}+z_{2}+\alpha+\beta,&z_{1}+z_{2}+\alpha+2\beta,\\ \beta,& 2\beta, & \alpha,\\ \alpha+\beta, &\alpha+2\beta, & x\\ z_{1}+x+\beta,&z_{1}+x+2\beta, &z_{1}+x+\alpha,\\ z_{1}+x+\alpha+\beta,&z_{2}+x+\beta,&z_{2}+x+\alpha+\beta,\\ z_{1}+z_{2}+x+\beta, &z_{1}+z_{2}+x+2\beta,& z_{1}+z_{2}+x+\alpha+\beta\\ x+\beta, & x+\alpha, &x+\alpha+\beta,\\ \end{Bmatrix}$$ where $x=\{\overline{\psi}^{1,...,5},\overline{\eta}^{1,2,3}\}$. The conclusion was reached that the $SU421$ class of models is the only class that is excluded in vacua with symmetric internal boundary conditions. \subsection{The Free Fermionic LRSz Model Gauge Group}\label{1z} In this section, we present the LRS model constructed using the free-fermionic construction with one $z$ basis vector. This model is generated by the following set of basis vectors: \begin{eqnarray*} v_1=S&=&\{\psi^\mu,\chi^{12},\chi^{34},\chi^{56}\},\nonumber\\ v_{1+i}=e_i&=&\{y^{i},\omega^{i}|\overline{y}^i,\overline{\omega}^i\}, \ i=1,\dots,6,\nonumber\\ v_{8}=b_1&=&\{\chi^{34},\chi^{56},y^{34},y^{56}|\bar{y}^{34}, \overline{y}^{56},\overline{\eta}^1,\overline{\psi}^{1,\dots,5}\},\label{basis}\\ v_{9}=b_2&=&\{\chi^{12},\chi^{56},y^{12},y^{56}|\overline{y}^{12}, \overline{y}^{56},\overline{\eta}^2,\overline{\psi}^{1,\dots,5}\},\nonumber\\ v_{10}=z &=& \{ \overline{\phi}^{1,...,8}\},\nonumber\\ v_{11}=\alpha &=& \{ \overline{\psi}^{4,5}, \overline{\phi}^{1,2}\},\nonumber\\ v_{12}=\beta &=& \{ \overline{\eta}^{1,2,3}=\textstyle\frac{1}{2},\overline{\psi}^{1,...,3}=\frac{1}{2},\overline{\phi}^{1,2}=\frac{1}{2},\overline{\phi}^{3,4}\}\nonumber\\ \end{eqnarray*} where \begin{eqnarray*} {{1}} &=& S + \sum_{i=1}^{6}e_i +\alpha + 2\beta + z,\\ x&=&\alpha+2\beta,\\ b_3 &=& b_1 +b_2 +x.\\ \end{eqnarray*} The space-time vector bosons are obtained solely from the untwisted sector and generate the following observable 
gauge symmetries, given by: \beqn {\rm observable} ~: &~~~~~~~~SU(3)\times SU(2)_L\times SU(2)_R\times U(1)\times U(1)^3 \nonumber\\ {\rm hidden} ~: &~~~~~~~~SU(2)\times U(1)\times SO(4)\times SO(8)\nonumber\eeqn \subsection{{The Free Fermionic LRS2z Model Gauge Group}}\label{2z} In this section, we present the LRS model constructed using the free-fermionic construction where $z_i$ basis vectors are utilized for $i=1,2$. This model is generated by the following set of basis vectors: \begin{eqnarray*} v_1=&1&=\{\psi_{\mu}^{1,2}, \chi^{1,...,6}, y^{1,...,6}, \omega^{1,...,6}|\bar{y}^{1,...,6}, \bar{\omega}^{1,...,6}, \bar{\psi}^{1,...,5}, \bar{\eta}^{1,2,3}, \bar{\phi}^{1,...,8}\}, \\ v_2=&S&=\{\psi^\mu,\chi^{12},\chi^{34},\chi^{56}\},\nonumber\\ v_{2+i}=&e_i&=\{y^{i},\omega^{i}|\overline{y}^i,\overline{\omega}^i\}, \ i=1,\dots,6,\nonumber\\ v_{9}=&b_{1}&=\{\chi^{34},\chi^{56},y^{34},y^{56}|\bar{y}^{34}, \overline{y}^{56},\overline{\eta}^1,\overline{\psi}^{1,\dots,5}\},\label{basis}\\ v_{10}=&b_{2}&=\{\chi^{12},\chi^{56},y^{12},y^{56}|\overline{y}^{12}, \overline{y}^{56},\overline{\eta}^2,\overline{\psi}^{1,\dots,5}\},\nonumber\\ v_{11}=&z_1 &=\{ \overline{\phi}^{1,...,4}\},\nonumber\\ v_{12}=&z_2&= \{ \overline{\phi}^{5,...,8}\}\nonumber,\\ v_{13}=&\alpha &= \{ \overline{\psi}^{4,5}, \overline{\phi}^{1,2}\},\nonumber\\ v_{14}=&\beta &= \{ \overline{\eta}^{1,2,3}=\textstyle\frac{1}{2},\overline{\psi}^{1,...,3}=\frac{1}{2},\overline{\phi}^{1,2}=\frac{1}{2},\overline{\phi}^{3,4}\}\nonumber\\ \end{eqnarray*} The space-time vector bosons are obtained solely from the untwisted sector and generate the following observable gauge symmetries, given by: \beqn {\rm observable} ~: &~~~~~~~~SU(3)\times SU(2)_L\times SU(2)_R\times U(1)\times U(1)^3 \nonumber\\ {\rm hidden} ~: &~~~~~~~~SU(2)\times U(1)^{3}\times SO(8)\nonumber\eeqn \section{Descending To {\bf{D=2}}} In this section, compactifying the heterotic–string to two dimensions, we find that the two dimensional free fermions 
in the light-cone gauge are the real left-moving fermions $$\chi^{i}, y^{i}, \omega^{i},\qquad i=1,...,8,$$ the real right-moving fermions $$\overline{y}^{i},\overline{\omega}^{i},\qquad i=1,...,8 $$ and the complex right-moving fermions $$\overline{\psi}^{A},\quad A=1,...,4,\quad \overline{\eta}^{B},\quad B=0,...,3,\quad \overline{\phi}^{\alpha},\quad \alpha=1,...,8.$$ The class of models we consider will be generated by a maximal set of $7$ basis vectors defined as \begin{eqnarray*} v_{1}=& 1 &= \{\chi^{i}, y^{i}, \omega^{i}|\overline{y}^{i},\overline{\omega}^{i},\overline{\psi}^{A},\overline{\eta}^{B},\overline{\phi}^{\alpha}\}, \\ v_{2}=& H_{L} &=\{\chi^{i}, y^{i}, \omega^{i}\}, \\ v_{3}=& z_1 &=\{\overline{\phi}^{1,...,4}\},\\ v_{4}=& z_2& =\{\overline{\phi}^{5,...,8}\}, \\ v_{5}=& z_3 &=\{\overline{\psi}^{A}\}, \\ v_{6}=& z_4 &=\{\overline{\eta}^{B}\}, \\ v_{7}=& z_5& =\{ \overline{y}^{1,...,4}, \overline{\omega}^{1,...,4}\}\\ \end{eqnarray*} where $$z_{6} = 1+H_L+\sum_{i=1}^{5}z_{i} = \{\overline{y}^{5,...,8}, \overline{\omega}^{5,...,8}\} = \{\overline{\rho}^{5,...,8}\} .$$ The set of GGSO phases is given by \begin{center} $ \bordermatrix{~ & 1 & H_L &z_{1}&z_{2}&z_{3}&z_{4}&z_{5} \cr 1&-1&-1&+1&+1&+1&+1&+1\cr H_L &-1&-1&-1&-1&-1&-1&-1\cr z_{1}&+1&-1&+1&+1&+1&+1&+1\cr z_{2}&+1&-1&+1&+1&+1&+1&+1\cr z_{3}&+1&-1&+1&+1&+1&+1&+1\cr z_{4}&+1&-1&+1&+1&+1&+1&+1\cr z_{5}&+1&-1&+1&+1&+1&+1&+1\cr} $ \end{center} or simply $$-C\binom{1}{1} = -C\binom{1}{H_L}=C\binom{1}{z_{i}} =-C\binom{H_L}{H_L}=-C\binom{H_L}{z_{i}}= C\binom{z_i}{z_{i}}=C\binom{z_i}{z_{j}}=1 $$ yielding the untwisted symmetry $$SO(8)_1\times SO(8)_2\times SO(8)_3\times SO(8)_4\times SO(8)_5 \times SO(8)_6.$$ Here our focus was on the $SO(48)$ and the dedicated GGSO phases were chosen appropriately as the following table highlights: \begin{table}[H] \begin{center} \begin{tabular}{|c|c|c|} \hline &&\\ $C\binom{H_{L}}{z_{i}}$&$C\binom{z_{i}}{z_{i}}$&Gauge Group\\ &&\\ \hline $-$&$+$& $SO(48)$\\ \hline 
\end{tabular} \caption{\label{tableb} \small The configuration of the symmetry groups.} \end{center} \end{table} \section{Normed Division Algebras} In this section, we briefly discuss the normed division algebras. An algebra $A$ is a vector space equipped with a bilinear multiplication rule and a unit element. We call $A$ a division algebra if $xy = 0$ implies $x = 0$ or $y = 0$. A normed division algebra is an algebra $A$ equipped with a positive-definite norm satisfying the condition $$||xy|| =||x||\,\,||y||,$$ which also implies that $A$ is a division algebra. There is a remarkable theorem due to Hurwitz \cite{Hur}, which states that there are only four normed division algebras: the real numbers $\mathbb{R}$, the complex numbers $\mathbb{C}$, the quaternions $\mathbb{H}$ and the octonions $\mathbb{O}$. The algebras have dimensions $n= 1,2,4$ and $8$, respectively. They can be constructed, one by one, by use of the Cayley-Dickson doubling method, starting with the reals: the complex numbers are pairs of real numbers equipped with a particular multiplication rule, a quaternion is a pair of complex numbers, and an octonion is a pair of quaternions. There is a Lie algebra associated with the division algebras \cite{Anastasiou:2013cya}, known as the triality algebra of $A$, defined as $$\Tri(A) = \{(A,B,C)|A(xy)=B(x)y+xC(y)\},\qquad A,B,C \in {\mathfrak{so }}(A),\quad x,y \in A,$$ where $ {\mathfrak{so }}(A)$ is the norm-preserving algebra, isomorphic to $ {\mathfrak{so }}(n)$ with $n=\dim A$. We are interested primarily in the case $$\Tri({\mathbb{O}}) = {\mathfrak{so }} (8).$$ The division algebras can subsequently be used to describe field theory in Minkowski space using the Lie algebra isomorphism $${\mathfrak{so}}(1,1+n)\cong {\mathfrak{sl}}(2,A),$$ in particular $${\mathfrak{so}}(1,9)\cong {\mathfrak{sl}}(2,\mathbb{O}).$$ \begin{figure}[ht!]
\begin{center} \begin{tikzpicture}[scale=.8] \draw (5,0) node[anchor=west] {}; \foreach \x in {3,...,4} \draw[xshift=\x cm,thick] (\x cm,0) circle (.3cm); \draw[xshift=8 cm,thick] (30: 17 mm) circle (.3cm); \draw[xshift=8 cm,thick] (-30: 17 mm) circle (.3cm); \foreach \y in {3.15,...,3.15} \draw[xshift=\y cm,thick] (\y cm,0) -- +(1.4 cm,0); \draw[xshift=8 cm,thick] (30: 3 mm) -- (30: 14 mm); \draw[xshift=8 cm,thick] (-30: 3 mm) -- (-30: 14 mm); \end{tikzpicture} \end{center}\caption{The Dynkin Diagram of $D_{4}$.}\end{figure} \section{Discussion} In the free fermionic methodology the equivalence of the $8_V$, $8_{S}$ and $8_C$ representations of $SO(8)$ is referred to as the triality structure. This equivalence then enables twisted constructions of the $E_8\times E_8$ or $SO(32)$ gauge groups. The root lattice of $SO(8)$ has a quaternionic description given by the set $$V = \bigg\{\pm1,\pm e_{1},\pm e_{2},\pm e_{3},\frac{1}{2}(\pm 1\pm e_{1}\pm e_{2}\pm e_{3})\bigg\},$$ which gives the required $24$ roots. Alternatively, the root lattice of $SO(8)$ could have been composed from $SU(2)^{4}$. On the other hand, the decomposition of the adjoint representation of $E_8$ under $SO(8)\times SO(8)$ is given by $${\bf{248}}= ({\bf{28}},{\bf{1}})+({\bf{1}},{\bf{28}})+({\bf{8}}_v,{\bf{8}}_v)+({\bf{8}}_s,{\bf{8}}_c)+({\bf{8}}_c,{\bf{8}}_s).$$ The weights of the vectorial representation ${\bf{8}}_v$ are $$V_{1} = \bigg\{\frac{1}{2}(\pm1\pm e_1),\frac{1}{2}(\pm e_2\pm e_3)\bigg\},$$ the weights of the conjugate spinor representation ${\bf{8}}_c$ are $$V_{2} = \bigg\{\frac{1}{2}(\pm1\pm e_2),\frac{1}{2}(\pm e_3\pm e_1)\bigg\},$$ and the weights of the spinor representation ${\bf{8}}_s$ are $$V_{3} = \bigg\{\frac{1}{2}(\pm1\pm e_3),\frac{1}{2}(\pm e_1\pm e_2)\bigg\}.$$ This description makes the triality of $SO(8)$ manifest. It can easily be seen that cyclic permutations of the three imaginary elements $e_1$, $e_2$ and $e_3$ map the representations $V_1\rightarrow V_2\rightarrow V_3$.
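The permutation property of the three weight sets can be verified directly. A small sketch (coordinates are taken over the basis $(1,e_1,e_2,e_3)$; purely illustrative) checking that the cyclic permutation $e_1\to e_2\to e_3\to e_1$ maps $V_1\to V_2\to V_3\to V_1$:

```python
from itertools import product

def pair_weights(i, j):
    """All eight weights (±x_i ± x_j)/2, over the basis (1, e1, e2, e3)."""
    out = set()
    for si, sj in product((1, -1), repeat=2):
        w = [0, 0, 0, 0]
        w[i], w[j] = si, sj
        out.add(tuple(x / 2 for x in w))
    return out

# The three SO(8) representations 8_v, 8_c, 8_s as weight sets.
V1 = pair_weights(0, 1) | pair_weights(2, 3)   # 8_v: (±1±e1)/2, (±e2±e3)/2
V2 = pair_weights(0, 2) | pair_weights(3, 1)   # 8_c: (±1±e2)/2, (±e3±e1)/2
V3 = pair_weights(0, 3) | pair_weights(1, 2)   # 8_s: (±1±e3)/2, (±e1±e2)/2

def sigma(w):
    """Cyclic permutation e1 -> e2 -> e3 -> e1; the real unit is fixed."""
    return (w[0], w[3], w[1], w[2])

perm = lambda V: {sigma(w) for w in V}
assert perm(V1) == V2 and perm(V2) == V3 and perm(V3) == V1
assert len(V1) == len(V2) == len(V3) == 8
```

Each set contains eight weights, and the single cyclic permutation cycles the vectorial, conjugate spinor and spinor weight sets into one another, which is the triality made explicit.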
In \cite{Evans:1987tm} an explicit correspondence between simple super-Yang-Mills theories and classical superstrings in dimensions $3$, $4$, $6$, $10$ and the division algebras $\mathbb{R}$, $\mathbb{C}$, $\mathbb{H}$, $\mathbb{O}$ was established. Here, we have speculated that the $SO(8)$ group representations displaying the triality structure, which necessarily arise in models constructed with the free fermionic methodology, play a fundamental role as remnants of the higher-dimensional triality algebra, namely $$\Tri({\mathbb{O}}) = {\mathfrak{so }} (8).$$ \section{Acknowledgements} J. M. A. would like to thank the University of Kent and the University of Oxford for their warm hospitality.
\section{Introduction} In order to study qualitative effects of magnetism in plasma dynamics a very simple model is introduced. Two overlapping homogeneous spheres of equal radii, and of equal but oppositely signed charge densities, are assumed to move, relative to each other, with negligible dissipation (resistivity) under the influence of electric and magnetic interaction. The neglect of dissipation is motivated at the end. We first treat relative translation (oscillation) and then relative rotation. Many years ago Tonks and Langmuir \cite{tonks} carefully derived an equation of motion for a collective translational motion of the electrons relative to the positive ions. Their derivation seems to indicate that there should be a universal frequency for this mode, \begin{equation}\label{eq.plasma.freq} \omega_{\rm p} =\sqrt{\frac{n e^2}{m}}, \end{equation} the plasma frequency, depending only on the electron number density $n$. If a large number of electrons move relative to the positive ions one gets a large current and thus it seems as if magnetic effects should affect the result. Bohm and Pines \cite{bohm} studied the influence of magnetic interaction on plasma modes but they did not come up with any explicit correction to $\omega_{\rm p}$. In the textbook by Goldston and Rutherford \cite{goldston} the absence of magnetic effects is said to be due to a displacement current that compensates for the electron current. In this article the problem is approached from a very fundamental starting point: the relevant Lagrangian density. The conclusion is that the frequency is lowered by the large inductive inertia [see Eq. (\ref{eq.freq.sq})]. We then study the relative rotation of the two charged spheres. If magnetic interaction is neglected the kinetic energy is simply determined by angular momentum and moment of inertia. When magnetic interaction is included the kinetic energy for a given angular momentum is much smaller. 
The reason for this is again that the effective moment of inertia will be dominated by inductive inertia. By adding an external magnetic field to the model we can calculate the response of our model plasma and it turns out to be diamagnetic. \section{Separation of overall translation and rotation} The kinetic energy of any system of particles \begin{equation}\label{eq.kin.energy.part} T = \sum_i \half m_i \vecv_i^2 \end{equation} can be written \begin{equation}\label{eq.sep.collect.transl.rot} T=\half M \vecV^2 +\half \vecOmega {\sf J}\vecOmega +T' \end{equation} where $M$ is the total mass, ${\sf J}$ is the (instantaneous) inertia tensor, $\vecV$ is the center of mass velocity, and $\vecOmega$ is a well defined average angular velocity \cite{jellinek,essen93}. $\vecV$ is chosen so that $\vecp=M\vecV$ is the total momentum of the system and $\vecOmega$ so that $\vecL={\sf J}\vecOmega$ is the total (center of mass) angular momentum. $T'$ is the kinetic energy of the particles relative to the system that moves with the center of mass velocity and rotates with the average angular velocity. We call this the co-moving system. One can introduce generalized coordinates so that there are six degrees of freedom describing center of mass position and average angular orientation, while $T'$ depends on the remaining $3N-6$ generalized coordinates. Now consider a blob of plasma that consists of two spheres of particles, one of positively and one of negatively charged particles. For spheres the inertia tensor ${\sf J}$ can be replaced by a single moment of inertia $J$. The total kinetic energy $T$ is the sum of the kinetic energy, $T_1$, of the positive particles, and of the kinetic energy, $T_2$, of the negative charges. We first perform the transformation above to the co-moving systems separately for the positive and the negative particles.
This gives, \begin{equation}\label{eq.pos.and.neg.collective} T =\half M_1 \vecV_1^2 +\half M_2 \vecV_2^2 +\half J_1 \vecOmega_1^2 +\half J_2 \vecOmega_2^2 +T'_1 +T'_2 , \end{equation} for the total kinetic energy, see Fig.\ \ref{FIG0}. In a second step we then introduce the co-moving system for the total system. We thus introduce, \begin{eqnarray} M=M_1+M_2 , & \mbox{\hskip 0.5cm} & J = J_1 +J_2 ,\\ \mu =M_1 M_2/M , & \mbox{\hskip 0.5cm} & I = J_1 J_2/J , \end{eqnarray} total mass and reduced mass as well as total moment of inertia and reduced moment of inertia. In terms of these one finds, \begin{eqnarray} \label{eq.v.transf.1} \vecV = (M_1\vecV_1 +M_2\vecV_2)/M & \mbox{\hskip 0.5cm} & \vecV_1 = \vecV +M_2 \vecv /M \\ \label{eq.v.transf.2} \vecv =\vecV_1 -\vecV_2 & \mbox{\hskip 0.5cm} & \vecV_2 = \vecV -M_1 \vecv /M , \end{eqnarray} for the total center of mass velocity $\vecV$, and relative velocity $\vecv$, of the two spheres. Finally, \begin{eqnarray} \vecOmega = (J_1\vecOmega_1 +J_2\vecOmega_2)/J & \mbox{\hskip 0.5cm} & \vecOmega_1 = \vecOmega +J_2 \vecomega /J \\ \vecomega =\vecOmega_1 -\vecOmega_2 & \mbox{\hskip 0.5cm} & \vecOmega_2 = \vecOmega - J_1 \vecomega /J , \end{eqnarray} gives the total average angular velocity $\vecOmega$, and the relative angular velocity $\vecomega$, of the two oppositely charged spheres. In terms of these quantities we get the expression, \begin{equation}\label{eq.collective.transformed} T =\half M \vecV^2 +\half \mu \vecv^2 +\half J \vecOmega^2 +\half I \vecomega^2 +T', \end{equation} for the total kinetic energy of our two-sphere system. The degrees of freedom in $T'$ are assumed to be random and not to produce any net charge or current density. They will be ignored henceforth. \section{Lagrangian including magnetic interactions} Maxwell's equations and the equations of motion for the charged particles with the Lorentz force can all be derived from a single Lagrangian via the variational principle \cite{landau2}.
The Lagrangian has three parts, particle, interaction, and field contributions. If radiation is neglected the field does not have independent degrees of freedom, but is determined by particle positions and velocities. Using the non-relativistic form for the kinetic energy one then gets, \begin{equation} \label{eq.LtotNoRad2} L = \sum_i \left(\frac{1}{2} m_i\vecv_i^2 + \frac{q_i}{2c} \vecv_i \cdot \vecA(\vecr_i) - \frac{q_i}{2}\phi(\vecr_i) \right) , \end{equation} where, \begin{equation} \label{eq.coul.pot} \phi(\vecr,t) = \sum_i \frac{q_i}{|\vecr -\vecr_i|}, \end{equation} and \begin{equation} \label{eq.darwin.A.ito.velocity} \vecA(\vecr,t) = \sum_i\frac{q_i [\vecv_i + (\vecv_i\cdot\vece_i) \vece_i] }{2c|\vecr-\vecr_i|} . \end{equation} Here the position and velocity vectors of the particles are $\vecr_i$ and $\vecv_i$ respectively, $m_i$ and $q_i$ their masses and charges, and $\vece_i = (\vecr-\vecr_i)/|\vecr-\vecr_i|$ (Darwin \cite{darwin}, Jackson \cite{jackson3}, Schwinger {\it et al.} \cite{schwingeretal}, Ess\'en \cite{essen96,essen99}). The vector potential here is in the Coulomb gauge, and this essentially means that all velocity dependence of the interaction appears in the magnetic part, leaving the Coulomb interaction energy in its static form. When the expressions (\ref{eq.coul.pot}) and (\ref{eq.darwin.A.ito.velocity}) are inserted into equation (\ref{eq.LtotNoRad2}) one finds infinite contributions from self-interactions. When these are discarded, so that each particle only interacts with the field from the others, one obtains \begin{equation} \label{eq.LtotNoRad.darwin} L = \sum_i \frac{1}{2} m_i\vecv_i^2+ \sum_{i<j}\frac{q_i q_j}{r_{ij}} \frac{ [\vecv_i \cdot\vecv_j +(\vecv_i\cdot\vece_{ij})(\vecv_j\cdot\vece_{ij})]}{2c^2} - \sum_{i<j}\frac{q_i q_j}{r_{ij}}, \end{equation} where now $r_{ij}$ is the distance between particles $i$ and $j$ and $\vece_{ij}$ is the unit vector pointing from $i$ to $j$. 
This is the so-called Darwin Lagrangian \cite{darwin} for the system. We can write it, \begin{equation}\label{eq.parts.L.darwin} L = T + L_{\rm mag} -\Phi, \end{equation} where $T$ is the kinetic energy and $\Phi$ the Coulomb electric interaction energy. The magnetic part can also be written \begin{equation}\label{eq.Lmag.in.darwin} L_{\rm mag} =\sum_i \frac{q_i}{2c} \vecv_i \cdot \vecA(\vecr_i) = \frac{1}{2c}\int \vecj(r)\cdot \vecA(r) \,\dfd V . \end{equation} Here it is important that the vector potential is divergence free ($\nabla\cdot\vecA=0$, Coulomb gauge). The Darwin Lagrangian thus includes both electric and magnetic interactions between the particles and is valid when radiation can be neglected. \section{Relative translational motion} The Coulomb interaction, $\Phi$, between two overlapping charged spheres is calculated in Appendix \ref{app.coul}. The magnetic interaction between two charged spheres in relative translational motion is calculated in Appendix \ref{app.mag.transl} for the case of a small displacement of the centers of the spheres ($r \ll R$). Keeping only the quadratic term, one finds, \begin{equation}\label{eq.lagr.transl} L_{\rm rel}= \half \mu \vecv^2+ \frac{4 Q^2}{10 R c^2} \vecv^2 -\half \frac{Q^2}{R^3} \vecr^2. \end{equation} Here $\vecr$ is the vector from the center of the negative sphere to the center of the positive sphere, so that $\dot{\vecr}=\vecv$. The center of mass motion decouples, and we assume that the random motions decouple. This is then the relevant Lagrangian for the relative collective translation. It can be written, \begin{equation}\label{eq.lagr.transl.rel} L_{\rm rel}= \half {\cal M} \vecv^2 -\half {\cal K} \vecr^2 , \end{equation} where, \begin{equation}\label{eq.eff.inert.mass} {\cal M}= \mu + \frac{4 Q^2}{5 c^2 R} \approx N m\left(1+\frac{4}{5}\frac{Nr_{\rm e}}{R} \right), \end{equation} and ${\cal K}=Q^2/R^3$.
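With ${\cal M}$ and ${\cal K}$ in hand, the oscillation frequency $\omega_0=\sqrt{{\cal K}/{\cal M}}$ can be evaluated numerically. The following sketch (Gaussian units; the sample values of $N$ and $R$ are arbitrary) exhibits the two regimes of the dimensionless combination $N r_{\rm e}/R$: for small values one recovers the Langmuir-type scaling $\omega_0^2\approx Ne^2/(R^3 m)$, while for large values the inductive inertia dominates and the frequency saturates:

```python
import math

c = 2.99792458e10      # speed of light [cm/s], Gaussian units
r_e = 2.8179403e-13    # classical electron radius e^2/(m c^2) [cm]

def omega0_sq(N, R):
    """omega_0^2 = K/M with K = Q^2/R^3 and M ~ N m (1 + 4 nu/5),
    rewritten as (nu / (1 + 4 nu/5)) c^2 / R^2 with nu = N r_e / R."""
    nu = N * r_e / R
    return (nu / (1.0 + 0.8 * nu)) * c**2 / R**2

R = 1.0  # sphere radius [cm]

# Few-particle regime (nu << 1): essentially the Langmuir value nu c^2/R^2.
N = 1.0e8
nu = N * r_e / R
assert abs(omega0_sq(N, R) / (nu * c**2 / R**2) - 1.0) < 1e-4

# Many-particle regime (nu >> 1): omega_0^2 -> (5/4) c^2/R^2, which
# depends on the radius of the sphere but not on the density.
N = 1.0e20
assert abs(omega0_sq(N, R) / (1.25 * c**2 / R**2) - 1.0) < 1e-6
```

The saturation value in the second regime is the size-dependent, density-independent limit derived below.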
If we assume a proton-electron plasma we get $M_1 = N m_{\rm p}$, $M_2 = N m$, $\mu = N m_{\rm p}m/(m_{\rm p}+m) \approx N m$, and $Q^2 = N^2 e^2$. On the right hand side we have introduced, $r_{\rm e}\equiv\frac{e^2}{mc^2}$, the classical electron radius. Note that when $N r_{\rm e}/R \gg 1$ the effective mass ${\cal M}$ is entirely due to inductive inertia. Clearly the Lagrangian (\ref{eq.lagr.transl.rel}) corresponds to an oscillating system with angular frequency $\omega_0 =\sqrt{{\cal K}/{\cal M}}$. For this frequency we get explicitly, \begin{equation}\label{eq.freq.sq} \omega_0^2 =\frac{\frac{Ne^2}{R^3 m}}{1+\frac{4}{5}\frac{N r_{\rm e}}{R}}. \end{equation} If we introduce the dimensionless number, \begin{equation}\label{eq.NreoverR} \nu \equiv N r_{\rm e}/R, \end{equation} we see that for $\nu \ll 1$, one obtains, essentially, the Langmuir plasma frequency, $\omega^2_{\rm p}\propto n e^2/m$, see Eq.\ (\ref{eq.plasma.freq}). If we reexpress the plasma frequency in terms of the classical electron radius \cite{hershberger}, Eq.\ (\ref{eq.freq.sq}) can be written in the form, \begin{equation} \label{eq.re.omega0} \omega_0^2 = \frac{\nu}{1+\frac{4}{5}\nu} \frac{c^2}{R^2}. \end{equation} Thus, when the number of particles is large enough so that $\nu \gg 1$, this gives, \begin{equation}\label{eq.largeN.freq} \omega_0^2= \frac{5 c^2}{4 R^2}. \end{equation} For this case the frequency turns out to depend on the size (radius) of the sphere, but not on the density. \section{Relative rotational motion} We now study pure rotational motion of the two charged spheres, about their coinciding centers of mass, but we include interaction, \begin{equation}\label{eq.Le.ext.mag.field} L_{\rm e} =\frac{1}{c}\int \vecj(\vecr)\cdot \vecA_{\rm e}(\vecr) \,\dfd V, \end{equation} with a constant external magnetic field $\vecB=\nabla\times\vecA_{\rm e}$. Here $\vecA_{\rm e}=\half \vecB\times\vecr$ is the vector potential of the external field. 
Starting from (\ref{eq.collective.transformed}) and (\ref{eq.parts.L.darwin}) we find that, \begin{equation}\label{eq.lagr.rot} L_{\rm rot}= \half I \vecomega^2 +L_{\rm mag} +L_{\rm e}, \end{equation} is the relevant Lagrangian for collective relative rotation. The explicit calculations are sketched in Appendix \ref{app.mag.int.rot}. One finds that $L_{\rm mag}\sim \vecomega^2$ and that this term therefore contributes to the effective moment of inertia, just as it contributed to the effective mass, ${\cal M}$, in the translational case. The result can be written, \begin{equation}\label{eq.rot.lagrange.expl} L_{\rm rot}=\half {\cal I}\vecomega^2 + \frac{QR^2}{10 c}\vecomega\cdot\vecB, \end{equation} where, \begin{equation}\label{eq.eff.mom.of.inert} {\cal I}=I+\frac{2}{35}\frac{Q^2 R}{c^2} = \frac{2}{5}NmR^2\left(1+\frac{1}{7}\frac{Nr_{\rm e}}{R}\right). \end{equation} We see that when $\nu =N r_{\rm e}/R \gg 1$ we can neglect the contribution from mass to the effective moment of inertia ${\cal I}$. In this limit therefore, \begin{equation}\label{eq.eff.mom.of.inert.induc} {\cal I}\approx \frac{2}{35}\frac{Q^2 R}{c^2}, \end{equation} and there is essentially only inductive moment of inertia. We assume this below. \section{Plasma energy and diamagnetism} In order to investigate the equation of motion we put $\vecomega=\dot\varphi\vece_z$ and $\vecB=B(\sin\theta\vece_x+\cos\theta\vece_z)$. The Lagrangian then becomes \begin{equation}\label{eq.rot.lagrange.expl.z} L_{\rm rot}=\half {\cal I}\dot\varphi^2 + \frac{QR^2}{10 c}\dot\varphi\, B\cos\theta. \end{equation} In general when, $\partial L/\partial \varphi =0$, one finds that, $\dot p_{\varphi} =(\dfd/\dfd t)(\partial L/\partial \dot\varphi)=0$. 
In our case this gives, \begin{equation}\label{eq.pphi.const} p_{\varphi} =\frac{\partial L_{\rm rot}}{\partial \dot\varphi} ={\cal I}\dot\varphi+\frac{QR^2}{10 c} B \cos\theta=\mbox{const.} \end{equation} If we assume that $\dot\varphi(t=0) =0$ when $B(t=0) = 0$ we find that the constant is zero: $p_{\varphi}=0$. At all times we then find the relation, \begin{equation}\label{eq.ang.vel.and.B.rel} \dot\varphi(t)= -\frac{7Rc}{4Q} B(t) \cos\theta, \end{equation} between the angular velocity and the magnetic field. To get the energy from $L(\varphi,\dot\varphi)$ one calculates the Hamiltonian $H=p_{\varphi}\dot\varphi - L$. For the $L_{\rm rot}$ of Eq.\ (\ref{eq.rot.lagrange.expl.z}) one finds, \begin{equation}\label{eq.hamilt.pphi.B} H=\frac{1}{2 {\cal I}} \left(p_{\varphi} - \frac{QR^2}{10 c} B\cos\theta \right)^2. \end{equation} Let us consider two special cases of this phase space energy of the plasma. We first assume that the external field is zero ($B=0$). The energy is then given by $E = H = p^2_{\varphi} /2{\cal I}$. Here $p_{\varphi}$ is the angular momentum of relative rotation. For a given value of this angular momentum the energy is thus much smaller when $\nu \gg 1$ than otherwise. This reflects the fact, repeatedly stressed by the author \cite{essen96,essen99,essen97,essennordmark}, that for given momenta the phase space energy of a plasma is lower when there is net current than in the absence of net current. Now consider instead the case $p_{\varphi}=0$. For simplicity we also assume $\theta=0$. One then finds that \begin{equation}\label{eq.energy.of.B} H(p_{\varphi}=0)=\frac{1}{2 {\cal I}} \left( \frac{QR^2}{10 c} B \right)^2 \end{equation} or, equivalently, using (\ref{eq.eff.mom.of.inert.induc}), that the energy as a function of $B$ is given by, \begin{equation}\label{eq.energy.of.B.2} E(B)= \frac{7}{80} R^3 B^2 =\frac{21}{40} \left(\frac{4\pi R^3}{3}\right) \frac{B^2}{8\pi}.
\end{equation} Note that here, $B^2/8\pi$, is the energy density of the field $B$ in our (gaussian) units. The energy is thus seen to grow quadratically with the applied magnetic field and our plasma spheres are strongly diamagnetic. Based on more detailed studies Cole \cite{cole} has also concluded that plasmas are diamagnetic. In our model plasma diamagnetism is seen to be closely related to the diamagnetism of superconductors, as discussed by Ess\'en \cite{essen05}: the external field induces a current that screens the external field and reduces it inside. In the absence of resistance this screening current persists. \section{Plasma resistivity} Resistivity is completely neglected in the present model. It has been pointed out by Kulsrud \cite{kulsrud}, in his book on astrophysical plasmas, that the negligible resistivity of such plasmas is in fact closely connected with magnetic induction. In the present treatment magnetic induction appears in the form of an inductive inertia that appears naturally as the main physical parameter in the present model. As early as 1933 Frenkel \cite{frenkel} suggested that superconductivity is due to inductive inertia. Frenkel also conjectured that inductive inertia can lower the energy and cause a phase transition. He did not, however, make his ideas quantitative. The present model system gives Frenkel's ideas some quantitative backing. We note that for a collective momentum $p$ involving $N$ particles the kinetic energy will be $T = p^2/(2mN)$, when magnetic interaction is neglected, see Eqs.\ (\ref{eq.lagr.transl.rel} -\ref{eq.eff.inert.mass}). When the effect of magnetic interaction is included this becomes $T+E_{\rm mag} = p^2/(2Nm[1+4\nu/5])$ and we find that $T+E_{\rm mag} \ll T$ when $\nu \gg 1$, assuming that $p$ remains constant. Collective modes are thus much more favorable thermodynamically when there is net current. Plasma resistivity is normally treated by studying the scattering of individual charged particles. 
Even with this type of treatment, fast electrons become so-called runaway electrons and experience no resistance \cite{alfven}. The present model indicates that resistivity cannot be treated as resulting from the scattering of individual particles, since the collective motion of many charges leads to a large inductive non-local effect. All this points in the same direction, namely that plasmas need not be resistive, in agreement with our model treatment. If resistivity had to be included the translational oscillation would become a damped oscillation and any circulating current would eventually cease, thereby making the diamagnetic response temporary. A more immediate limitation of our model is probably the fact that a current would cause pinching and this would lead to instabilities that deform the spherical shape. Lynden-Bell \cite{lyndenbell} has studied the relativistically spinning charged sphere and finds that charge concentrates near the equator (as a result of pinching). \section{Conclusions} The model treated in this article is not particularly realistic. Instead it can be motivated as the simplest possible model within which one can study plasma phenomena associated with current, induction, and magnetic interaction energy, in a meaningful way. Hopefully it also has some novelty. In the literature one can find a fair amount of work on the radially oscillating plasma sphere (see e.g. Barnes and Nebel \cite{barnes}, Park {\it et al.} \cite{park}), but not the modes treated here. A numerical study of a rotating convective plasma sphere, modelling a star, by Dobler {\it et al.} \cite{dobler} shows how complicated more realistic models necessarily become. The two-sphere model studied here is therefore valuable as a device for gaining insight into some very basic plasma phenomena.
As we have seen, the most basic of these is the dominance of inductive inertia in the effective mass ${\cal M}$ of Eq.\ (\ref{eq.eff.inert.mass}) and the effective moment of inertia ${\cal I}$ of Eq.\ (\ref{eq.eff.mom.of.inert}) when the number $N$ of participating charged particles is large enough. One notes that $\nu =Nr_{\rm e}/R \approx 118$ for a typical laboratory plasma of density $n=10^{20}\,{\rm m}^{-3}$ and of radius $R= 1\,$cm, assuming that all particles contribute to the collective mode. Finally, the model also indicates how this large inductive inertia influences the energy of the plasma and how a plasma responds diamagnetically to an external magnetic field. \newpage \noindent{\Large\bf Appendices}
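As a numerical cross-check of the estimate $\nu \approx 118$ quoted above, the short sketch below (an added illustration, not part of the original analysis; the numerical value of the classical electron radius is an assumed standard value) recomputes $\nu = Nr_{\rm e}/R$ and the resulting suppression factor in $T+E_{\rm mag} = p^2/(2Nm[1+4\nu/5])$ at fixed collective momentum $p$:

```python
import math

# Spot check of nu = N*r_e/R ~ 118 for n = 1e20 m^-3 and R = 1 cm (values from the text).
r_e = 2.8179403e-15      # classical electron radius [m] (assumed standard value)
n = 1e20                 # plasma density [m^-3]
R = 1e-2                 # sphere radius [m]

N = n * (4.0 / 3.0) * math.pi * R**3   # number of particles in the sphere
nu = N * r_e / R                        # dimensionless inductive-inertia parameter

# With magnetic interaction included, T + E_mag = p^2 / (2*N*m*[1 + 4*nu/5]),
# so at fixed collective momentum p the energy is suppressed by this factor:
suppression = 1.0 / (1.0 + 4.0 * nu / 5.0)
```

With these inputs one indeed finds $\nu$ close to 118, and the energy of a collective mode at fixed momentum is reduced by roughly two orders of magnitude, in line with the statement that collective modes with net current are thermodynamically favorable.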
\section{Introduction} Within the framework of grand unified theories, as a result of vacuum symmetry breaking phase transitions, different types of topological defects could be produced in the early universe \cite{Kibb80,Vile94}. In particular, cosmic strings have attracted considerable attention. Although the recent observational data on the cosmic microwave background have ruled out cosmic strings as the primary source for large scale structure formation, they are still candidates for a variety of interesting physical phenomena such as the generation of gravitational waves \cite{Damo00}, high energy cosmic rays \cite{Bhat00}, and gamma ray bursts \cite{Bere01}. Recently, cosmic strings have attracted renewed interest, partly because a variant of their formation mechanism is proposed in the framework of brane inflation \cite{Sara02}. In the simplest theoretical model describing the infinite straight cosmic string, the spacetime is locally flat except on the string, where it has a delta shaped curvature tensor. The corresponding nontrivial topology gives rise to a number of interesting physical effects. One of these concerns the effect of a string on the properties of the quantum vacuum. Explicit calculations for vacuum polarization effects in the vicinity of a string have been done for various fields \cite{Hell86}-\cite{BezeKh06}. Vacuum polarization effects by the cosmic string carrying a magnetic flux are considered in Ref. \cite{Guim94}. Another type of topological quantum effect appears in models with compact spatial dimensions. The presence of compact dimensions is a key feature of most high energy theories of fundamental physics, including supergravity and superstring theories. An interesting application of field theoretical models with compact dimensions recently appeared in nanophysics.
The long-wavelength description of the electronic states in graphene can be formulated in terms of the Dirac-like theory in three-dimensional spacetime with the Fermi velocity playing the role of the speed of light (see, e.g., \cite{Cast09}). Single-walled carbon nanotubes are generated by rolling up a graphene sheet to form a cylinder, and the background spacetime for the corresponding Dirac-like theory has topology $R^{2}\times S^{1}$. The compactification of spatial dimensions serves to alter vacuum fluctuations of a quantum field and leads to Casimir-type contributions in the vacuum expectation values of physical observables (see Refs. \cite{Most97,Eliz94} for the topological Casimir effect and its role in cosmology). In Kaluza-Klein-type models, the topological Casimir effect induced by the compactification has been used as a stabilization mechanism for the size of extra dimensions. The Casimir energy can also serve as a model for the dark energy needed for the explanation of the present accelerated expansion of the universe. The influence of extra compact dimensions on the Casimir effect in the classical configuration of two parallel plates has been recently discussed for the case of a scalar field \cite{Chen06}, for the electromagnetic field with perfectly conducting boundary conditions \cite{Popp04}, and for a fermionic field with bag boundary conditions \cite{Bell09}. In this paper we study a configuration with both types of sources for the vacuum polarization, namely, a generalized cosmic string spacetime with a compact spatial dimension along its axis (for combined effects of topology and boundaries on the quantum vacuum in the geometry of a cosmic string see \cite{Brev95,Beze06,Beze07}). For a massive scalar field with an arbitrary curvature coupling parameter, we evaluate the Wightman function, the vacuum expectation values of the field squared, and the energy-momentum tensor.
These expectation values are among the most important quantities characterizing the vacuum state. Though the corresponding operators are local, due to the global nature of the vacuum, the vacuum expectation values carry important information about the global properties of the bulk. In addition, the vacuum expectation value of the energy-momentum tensor acts as the source of gravity in the semiclassical Einstein equations. It therefore plays an important role in modelling a self-consistent dynamics involving the gravitational field. The problem under consideration is also of separate interest as an example with two different kinds of topological quantum effects, where all calculations can be performed in closed form. We have organized the paper as follows. The next section is devoted to the evaluation of the Wightman function for a massive scalar field in a generalized cosmic string spacetime with a compact dimension. A quasiperiodic boundary condition with an arbitrary phase is assumed along the compact dimension. By using the formula for the Wightman function, in section \ref{Sec:phi2} we evaluate the vacuum expectation value of the field squared. This expectation value is decomposed into two parts: the first one corresponding to the geometry of a cosmic string without compactification and the second one being induced by the compactification. The vacuum expectation value of the energy-momentum tensor is discussed in section \ref{Sec:EMT}. Section \ref{Sec:VacEn} is devoted to the investigation of the part of the topological vacuum energy induced by the planar angle deficit. Finally, the results are summarized and discussed in section \ref{sec:Conc}. \section{Wightman function} \label{sec:WightFunc} We consider a $(D+1)$-dimensional generalized cosmic string spacetime.
Considering the generalized cylindrical coordinates $(x^{1},x^{2},\ldots ,x^{D})=(r,\phi ,z,x^{4},\ldots ,x^{D})$ with the string on the $(D-2)$-dimensional hypersurface $r=0$, the corresponding geometry is described by the line element \begin{equation} ds^{2}=g_{ik}dx^{i}dx^{k}=dt^{2}-dr^{2}-r^{2}d\phi ^{2}-dz^{2}-\sum_{l=4}^{D}(dx^{l})^{2}. \label{ds21} \end{equation} The coordinates take values in the ranges $r\geqslant 0$, $0\leqslant \phi \leqslant \phi _{0}$, $-\infty <x^{l}<+\infty $ for $l=4,\ldots ,D$, and the spatial points $(r,\phi ,z,x^{4},\ldots ,x^{D})$ and $(r,\phi +\phi _{0},z,x^{4},\ldots ,x^{D})$ are to be identified. Additionally, we shall assume that the direction along the $z$-axis is compactified to a circle with the length $L$: $0\leqslant z\leqslant L$ (about the generalization of the model to the case of an arbitrary number of compact dimensions along the axis of the string see below). In the standard $D=3$ cosmic string case with $-\infty <z<+\infty $, the planar angle deficit is related to the mass per unit length of the string $\mu $ by $2\pi -\phi _{0}=8\pi G\mu $, where $G$ is the Newton gravitational constant. It is interesting to note that the effective metric produced in superfluid $^{3}\mathrm{He-A}$ by a radial disgyration is described by the $D=3$ line element (\ref{ds21}) with a negative planar angle deficit \cite{Volo98}. In this condensed matter system the role of the Planck energy scale is played by the gap amplitude. Graphitic cones are another class of condensed matter systems, described in the long wavelength approximation by the metric (\ref{ds21}) with $D=2$. Graphitic cones are obtained from the graphene sheet if one or more sectors are excised. The opening angle of the cone is related to the number of sectors removed, $N_{c}$, by the formula $2\pi (1-N_{c}/6)$, with $N_{c}=1,2,\ldots ,5$. All these angles have been observed in experiments \cite{Kris97}.
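For the graphitic cones just mentioned, the correspondence between the number of removed sectors and the conical geometry is easy to tabulate. The short sketch below (an added illustration; `cone_parameters` is a hypothetical helper name) lists the opening angle $\phi _{0}=2\pi (1-N_{c}/6)$ together with the ratio $2\pi /\phi _{0}$, which plays the role of the parameter $q$ in the cosmic string analogy:

```python
import math

def cone_parameters(n_c):
    # opening angle phi_0 = 2*pi*(1 - N_c/6) for N_c = 1,...,5 removed sectors,
    # and the ratio q = 2*pi/phi_0 appearing in the cosmic string analogy
    phi0 = 2.0 * math.pi * (1.0 - n_c / 6.0)
    return phi0, 2.0 * math.pi / phi0

# N_c = 1,...,5 gives q = 6/(6 - N_c), i.e. 1.2, 1.5, 2, 3, 6
qs = [cone_parameters(n)[1] for n in range(1, 6)]
```

In particular, only the values $N_{c}=3$, $4$, $5$ correspond to integer $q$, for which the image-sum representations derived below take their simplest form.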
Induced fermionic current and fermionic condensate in a $(2+1)$-dimensional conical spacetime in the presence of a circular boundary and a magnetic flux have been recently investigated in Ref. \cite{Beze10}. In this paper we are interested in the calculation of one-loop quantum vacuum effects for a scalar quantum field $\varphi (x)$, induced by the non-trivial topology of the $z$-direction in the geometry (\ref{ds21}). For a massive field with curvature coupling parameter $\xi $ the field equation has the form \begin{equation} \left( \nabla ^{i}\nabla _{i}+m^{2}+\xi R\right) \varphi (x)=0, \label{fieldeq} \end{equation} where $\nabla _{i}$ is the covariant derivative operator and $R$ is the scalar curvature for the background spacetime. In the geometry under consideration $R=0$ for $r\neq 0$. The values of the curvature coupling parameter $\xi =0$ and $\xi =\xi _{D}\equiv (D-1)/4D$ correspond to the most important special cases of minimally and conformally coupled scalars, respectively. We assume that along the compact dimension the field obeys the quasiperiodicity condition \begin{equation} \varphi (t,r,\phi ,z+L,x^{4},\ldots ,x^{D})=e^{2\pi i\beta }\varphi (t,r,\phi ,z,x^{4},\ldots ,x^{D}), \label{Period} \end{equation} with a constant phase $\beta $, $0\leqslant \beta \leqslant 1$. The special cases $\beta =0$ and $\beta =1/2$ correspond to untwisted and twisted fields, respectively, along the $z$-direction. We could also consider a quasiperiodicity condition with respect to $\phi \rightarrow \phi +\phi _{0}$. This would correspond to a cosmic string which carries an internal magnetic flux. Though the corresponding generalization is straightforward, for simplicity we shall consider a string without a magnetic flux. In quantum field theory, the imposition of the condition (\ref{Period}) changes the spectrum of the vacuum fluctuations compared to the case with an uncompactified dimension.
As a consequence, the vacuum expectation values (VEVs) of physical observables are changed. The properties of the vacuum state are described by the corresponding positive frequency Wightman function, $W(x,x^{\prime })=\langle 0|\varphi (x)\varphi (x^{\prime })|0\rangle $, where $|0\rangle $ stands for the vacuum state. In particular, having this function we can evaluate the VEVs of the field squared and the energy-momentum tensor. In addition, the response of particle detectors in an arbitrary state of motion is determined by this function (see, for instance, \cite{Birr82,Taga86}). For the evaluation of the Wightman function we use the mode sum formula \begin{equation} W(x,x^{\prime })=\sum_{\mathbf{\alpha }}\varphi _{\mathbf{\alpha }}(x)\varphi _{\mathbf{\alpha }}^{\ast }(x^{\prime }), \label{vevWf} \end{equation} where $\{\varphi _{\mathbf{\alpha }}(x),\varphi _{\mathbf{\alpha }}^{\ast }(x)\}$ is a complete set of normalized mode functions satisfying the periodicity condition (\ref{Period}) and $\alpha $ is a collective notation for the quantum numbers specifying the solution. In the problem under consideration, the mode functions are specified by the set of quantum numbers $\alpha =(n,\gamma ,k_{z},\mathbf{k})$, with the values in the ranges $n=0,\pm 1,\pm 2,\ldots $, $\mathbf{k}=(k_{4},\ldots ,k_{D})$, $-\infty <k_{l}<\infty $. The mode functions have the form \begin{equation} \varphi _{\alpha }(x)=\left( \frac{q\gamma }{2(2\pi )^{D-2}\omega L}\right) ^{1/2}J_{q\left\vert n\right\vert }(\gamma r)\exp \left( iqn\phi +ik_{z}z+i\mathbf{kr}_{\parallel }-i\omega t\right) , \label{Eigfunccirc} \end{equation} where $J_{\nu }(z)$ is the Bessel function, $\mathbf{r}_{\parallel }=(x^{4},\ldots ,x^{D})$, and \begin{equation} \omega =\sqrt{\gamma ^{2}+k_{z}^{2}+k^{2}+m^{2}},\quad q=2\pi /\phi _{0}. \label{qu} \end{equation} From the periodicity condition (\ref{Period}), for the eigenvalues of the quantum number $k_{z}$ one finds \begin{equation} k_{z}=2\pi (l+\beta )/L,\;l=0,\pm 1,\pm 2,\ldots .
\label{Eigkz} \end{equation} Substituting the mode functions (\ref{Eigfunccirc}) into the sum (\ref{vevWf}), for the positive frequency Wightman function one finds \begin{eqnarray} W(x,x^{\prime }) &=&\frac{q}{(2\pi )^{D-2}L}\sideset{}{'}{\sum}_{n=0}^{\infty }\cos (qn\Delta \phi )\int d\mathbf{k}\,e^{i\mathbf{k}\Delta \mathbf{r}_{\parallel }} \notag \\ &&\times \int_{0}^{\infty }d\gamma \,\gamma J_{qn}(\gamma r)J_{qn}(\gamma r^{\prime })\sum_{l=-\infty }^{\infty }\frac{e^{-i\omega \Delta t+ik_{z}\Delta z}}{\omega }, \label{WF1} \end{eqnarray} where $\Delta \phi =\phi -\phi ^{\prime }$, $\Delta \mathbf{r}_{\parallel }=\mathbf{r}_{\parallel }-\mathbf{r}_{\parallel }^{\prime }$, $\Delta t=t-t^{\prime }$, $\Delta z=z-z^{\prime }$, and the prime on the sum over $n$ means that the summand with $n=0$ should be taken with the weight 1/2. For the further evaluation, we apply to the series over $l$ the Abel-Plana summation formula in the form \cite{Bell10} (for generalizations of the Abel-Plana formula and their applications in quantum field theory see \cite{Most97,SahaBook,Saha06Sum}) \begin{eqnarray} &&\sum_{l=-\infty }^{\infty }g(l+\beta )f(|l+\beta |)=\int_{0}^{\infty }du\,\left[ g(u)+g(-u)\right] f(u) \notag \\ &&\qquad +i\int_{0}^{\infty }du\left[ f(iu)-f(-iu)\right] \sum_{\lambda =\pm 1}\frac{g(i\lambda u)}{e^{2\pi (u+i\lambda \beta )}-1}. \label{sumform} \end{eqnarray} In the special case of $g(y)=1$ and $\beta =0$ this formula reduces to the standard Abel-Plana formula. Taking in Eq.
(\ref{sumform}) \begin{equation} g(y)=e^{2\pi iy\Delta z/L},\;f(y)=\frac{e^{-i\Delta t\sqrt{(2\pi y/L)^{2}+\gamma ^{2}+k^{2}+m^{2}}}}{\sqrt{(2\pi y/L)^{2}+\gamma ^{2}+k^{2}+m^{2}}}, \label{fg} \end{equation} we present the Wightman function in the decomposed form \begin{eqnarray} &&W(x,x^{\prime })=W_{\text{s}}(x,x^{\prime })+\frac{2q}{(2\pi )^{D-1}}\sideset{}{'}{\sum}_{n=0}^{\infty }\cos (qn\Delta \phi )\int d\mathbf{k}\,e^{i\mathbf{k}\Delta \mathbf{r}_{\parallel }}\int_{0}^{\infty }d\gamma \,\gamma J_{qn}(\gamma r) \notag \\ &&\qquad \times J_{qn}(\gamma r^{\prime })\int_{\sqrt{\gamma ^{2}+k^{2}+m^{2}}}^{\infty }dy\frac{\cosh (\Delta t\sqrt{y^{2}-\gamma ^{2}-k^{2}-m^{2}})}{\sqrt{y^{2}-\gamma ^{2}-k^{2}-m^{2}}}\sum_{\lambda =\pm 1}\frac{e^{-\lambda y\Delta z}}{e^{Ly+2\pi i\lambda \beta }-1}, \label{WF1b} \end{eqnarray} where $W_{\text{s}}(x,x^{\prime })$ is the Wightman function in the geometry of a cosmic string without compactification. The latter corresponds to the first term on the right hand side of (\ref{sumform}). The integration over the angular part of $\mathbf{k}$ is done with the help of the formula \begin{equation} \int d\mathbf{k}\,e^{i\mathbf{k}\Delta \mathbf{r}_{\parallel }}F(k)=\frac{(2\pi )^{(D-3)/2}}{|\Delta \mathbf{r}_{\parallel }|^{(D-5)/2}}\int_{0}^{\infty }dk\,k^{(D-3)/2}J_{(D-5)/2}(k|\Delta \mathbf{r}_{\parallel }|)F(k), \label{IntForm1} \end{equation} for a given function $F(k)$. By using the expansion $[e^{Ly+2\pi i\lambda \beta }-1]^{-1}=\sum_{l=1}^{\infty }e^{-lLy-2\pi il\lambda \beta }$, the further integrations over $k$ and $y$ are done by making use of formulas from \cite{Prud86}. Similar transformations are done for the part $W_{\text{s}}(x,x^{\prime })$.
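The summation formula (\ref{sumform}) can be spot-checked numerically. The sketch below (an added illustration, not part of the original derivation) uses the simple test pair $g(y)=1$, $f(y)=e^{-y}$, for which $i[f(iu)-f(-iu)]=2\sin u$ and the left hand side sums to a geometric series, and verifies the identity for a nonzero phase $\beta$:

```python
import cmath
import math

def simpson(f, a, b, n):
    # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2.0 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3.0

def lhs(beta, nmax=200):
    # left hand side of the summation formula for g(y) = 1, f(y) = exp(-y)
    return sum(math.exp(-abs(l + beta)) for l in range(-nmax, nmax + 1))

def rhs(beta):
    # first term: integral of [g(u) + g(-u)] f(u) = 2 exp(-u)
    first = simpson(lambda u: 2.0 * math.exp(-u), 0.0, 40.0, 20000)
    # second term: i[f(iu) - f(-iu)] = 2 sin(u), summed over lambda = +1, -1
    def boundary(u):
        s = sum((1.0 / (cmath.exp(2.0 * math.pi * (u + 1j * lam * beta)) - 1.0)).real
                for lam in (1, -1))
        return 2.0 * math.sin(u) * s
    return first + simpson(boundary, 0.0, 5.0, 10000)
```

For $0<\beta <1$ the two sides agree to high accuracy; the closed form of the left hand side for this test function is $(e^{-\beta }+e^{-(1-\beta )})/(1-e^{-1})$.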
As a result we find the following expression: \begin{eqnarray} W(x,x^{\prime }) &=&\frac{2q}{(2\pi )^{(D+1)/2}}\sideset{}{'}{\sum}_{n=0}^{\infty }\cos (qn\Delta \phi )\int_{0}^{\infty }d\gamma \,\gamma J_{qn}(\gamma r)J_{qn}(\gamma r^{\prime }) \notag \\ &&\times \sum_{l=-\infty }^{\infty }e^{-2\pi li\beta }\left( \gamma ^{2}+m^{2}\right) ^{(D-3)/2}f_{(D-3)/2}(w_{l}\sqrt{\gamma ^{2}+m^{2}}), \label{WF2} \end{eqnarray} where \begin{equation} w_{l}^{2}=\left( \Delta z+lL\right) ^{2}+|\Delta \mathbf{r}_{\parallel }|^{2}-(\Delta t)^{2}, \label{wl} \end{equation} and we use the notation \begin{equation} f_{\nu }(x)=K_{\nu }(x)/x^{\nu }. \label{fnu} \end{equation} The term $l=0$ in (\ref{WF2}) corresponds to the function $W_{\text{s}}(x,x^{\prime })$. For the further transformation of the expression (\ref{WF2}) we employ the integral representation of the modified Bessel function \cite{Wats44} \begin{equation} K_{\nu }(y)=\frac{1}{2^{\nu +1}y^{\nu }}\int_{0}^{\infty }d\tau \frac{e^{-\tau y^{2}-1/(4\tau )}}{\tau ^{\nu +1}}. \label{Kint} \end{equation} Substituting this representation in (\ref{WF2}), the integration over $\gamma $ is done explicitly. Introducing an integration variable $u=1/(2\tau )$ and changing $l\rightarrow -l$, one finds \begin{equation} W(x,x^{\prime })=\frac{q}{(2\pi )^{(D+1)/2}}\sum_{l=-\infty }^{\infty }e^{2\pi li\beta }\int_{0}^{\infty }du\,u^{(D-3)/2}e^{-u_{l}^{2}u/2-m^{2}/(2u)}\sideset{}{'}{\sum}_{n=0}^{\infty }\cos (qn\Delta \phi )I_{qn}(urr^{\prime }), \label{WF3} \end{equation} where \begin{equation} u_{l}^{2}=r^{2}+r^{\prime 2}+\left( \Delta z-lL\right) ^{2}+|\Delta \mathbf{r}_{\parallel }|^{2}-(\Delta t)^{2}.
\label{ul} \end{equation} The expression for the Wightman function may be further simplified by using the formula \begin{equation} \sideset{}{'}{\sum}_{m=0}^{\infty }\cos (qm\Delta \phi )I_{qm}\left( w\right) =\frac{1}{2q}\sum_{k}e^{w\cos (2k\pi /q-\Delta \phi )}-\frac{1}{4\pi }\sum_{j=\pm 1}\int_{0}^{\infty }dy\frac{\sin (q\pi +jq\Delta \phi )e^{-w\cosh y}}{\cosh (qy)-\cos (q\pi +jq\Delta \phi )}, \label{sumform2} \end{equation} where the summation in the first term on the right hand side goes under the condition \begin{equation} -q/2+q\Delta \phi /(2\pi )\leqslant k\leqslant q/2+q\Delta \phi /(2\pi ). \label{SumCond} \end{equation} The formula (\ref{sumform2}) is obtained by making use of the integral representation 9.6.20 from \cite{Abra72} for the modified Bessel function and changing the order of the summation and integrations. Note that, for integer values of $q$, formula (\ref{sumform2}) reduces to the well-known result \cite{Prud86,Spin08} \begin{equation} \sideset{}{'}{\sum}_{m=0}^{\infty }\cos (qm\Delta \phi )I_{qm}\left( w\right) =\frac{1}{2q}\sum_{k=0}^{q-1}e^{w\cos (2k\pi /q-\Delta \phi )}. \label{SumFormSp} \end{equation} Substituting (\ref{sumform2}) with $w=urr^{\prime }$ into (\ref{WF3}), the integration over $u$ is performed explicitly in terms of the modified Bessel function and one finds \begin{eqnarray} W(x,x^{\prime }) &=&\frac{m^{D-1}}{(2\pi )^{(D+1)/2}}\sum_{l=-\infty }^{\infty }e^{2\pi li\beta }\Bigg[\sum_{k}f_{(D-1)/2}(m\sqrt{u_{l}^{2}-2rr^{\prime }\cos (2\pi k/q-\Delta \phi )}) \notag \\ &&-\frac{q}{2\pi }\sum_{j=\pm 1}\sin (q\pi +jq\Delta \phi )\int_{0}^{\infty }dy\frac{f_{(D-1)/2}(m\sqrt{u_{l}^{2}+2rr^{\prime }\cosh y})}{\cosh (qy)-\cos (q\pi +jq\Delta \phi )}\Bigg], \label{WF4} \end{eqnarray} with the notation from (\ref{fnu}). This is the final expression of the Wightman function for the evaluation of the VEVs in the following sections.
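For integer $q$, the image-sum formula (\ref{SumFormSp}) lends itself to a direct numerical check. The sketch below (an added illustration; $I_{\nu }$ is computed from its power series) compares the two sides:

```python
import math

def bessel_i(nu, w, terms=40):
    # modified Bessel function I_nu(w) from its power series (integer nu >= 0)
    return sum((w / 2.0) ** (2 * j + nu) / (math.factorial(j) * math.factorial(j + nu))
               for j in range(terms))

def lhs(q, dphi, w, mmax=20):
    # primed sum over m: the m = 0 term enters with weight 1/2
    s = 0.5 * bessel_i(0, w)
    for m in range(1, mmax + 1):
        s += math.cos(q * m * dphi) * bessel_i(q * m, w)
    return s

def rhs(q, dphi, w):
    # sum over the q images
    return sum(math.exp(w * math.cos(2.0 * math.pi * k / q - dphi))
               for k in range(q)) / (2.0 * q)
```

For $q=1$ this is the classical generating function $e^{w\cos \theta }=I_{0}(w)+2\sum_{m\geqslant 1}I_{m}(w)\cos (m\theta )$; the formula with general integer $q$ picks out every $q$-th term of that expansion.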
It allows us to present the VEVs of the field squared and the energy-momentum tensor for a massive scalar field in closed form for general values of $q$. In the special case of integer $q$, the general formula is reduced to \begin{equation} W(x,x^{\prime })=\frac{m^{D-1}}{(2\pi )^{(D+1)/2}}\sum_{l=-\infty }^{\infty }e^{2\pi li\beta }\sum_{k=0}^{q-1}f_{(D-1)/2}(m\sqrt{u_{l}^{2}-2rr^{\prime }\cos (2\pi k/q-\Delta \phi )}). \label{WF4Sp} \end{equation} In this case the Wightman function is expressed in terms of the $q$ images of the Minkowski spacetime function with a compactified dimension along the axis $z$. For a massless field, from (\ref{WF4}) one finds \begin{eqnarray} W(x,x^{\prime }) &=&\frac{\Gamma ((D-1)/2)}{4\pi ^{(D+1)/2}}\sum_{l=-\infty }^{\infty }e^{2\pi li\beta }\bigg[ \sum_{k}(u_{l}^{2}-2rr^{\prime }\cos (2\pi k/q-\Delta \phi ))^{(1-D)/2} \notag \\ && -\frac{q}{2\pi }\sum_{j=\pm 1}\sin (q\pi +jq\Delta \phi )\int_{0}^{\infty }dy\frac{(u_{l}^{2}+2rr^{\prime }\cosh y)^{(1-D)/2}}{\cosh (qy)-\cos (q\pi +jq\Delta \phi )}\bigg] . \label{WF4m0} \end{eqnarray} The $l=0$ term in the expressions above corresponds to the Wightman function in the geometry of a cosmic string without compactification: \begin{eqnarray} W_{\text{s}}(x,x^{\prime }) &=&\frac{m^{D-1}}{(2\pi )^{(D+1)/2}}\bigg[ \sum_{k}f_{(D-1)/2}(m\sqrt{u_{0}^{2}-2rr^{\prime }\cos (2\pi k/q-\Delta \phi )}) \notag \\ && -\frac{q}{2\pi }\sum_{j=\pm 1}\sin (q\pi +jq\Delta \phi )\int_{0}^{\infty }dy\frac{f_{(D-1)/2}(m\sqrt{u_{0}^{2}+2rr^{\prime }\cosh y})}{\cosh (qy)-\cos (q\pi +jq\Delta \phi )}\bigg] , \label{WF4Unc} \end{eqnarray} where $u_{0}^{2}$ is given by (\ref{ul}) with $l=0$. The formulas given above can be generalized for a charged scalar field $\varphi (x)$ in the presence of a gauge field with the vector potential $A_{l}=\text{const}$ and $A_{l}=0$ for $l=0,1,2$.
Though the corresponding magnetic field strength vanishes, the nontrivial topology of the background spacetime leads to Aharonov-Bohm-like effects for the VEVs. By the gauge transformation $A_{l}=A_{l}^{\prime }+\partial _{l}\Lambda (x)$, $\varphi (x)=\varphi ^{\prime }(x)e^{-ie\Lambda (x)}$, with the function $\Lambda (x)=A_{l}x^{l}$, we can see that the new field $\varphi ^{\prime }(x)$ satisfies the field equation with $A_{l}^{\prime }=0$ and a quasiperiodicity condition similar to~(\ref{Period}): $\varphi ^{\prime }(t,r,\phi ,z+L,x^{4},\ldots ,x^{D})=e^{2\pi i\beta ^{\prime }}\varphi ^{\prime }(t,r,\phi ,z,x^{4},\ldots ,x^{D})$, with \begin{equation} \beta ^{\prime }=\beta +eA_{3}L/(2\pi ). \label{beta} \end{equation} Hence, for a charged scalar field the corresponding expression for the Wightman function is obtained from (\ref{WF4}) by the replacement $\beta \rightarrow \beta ^{\prime }$. In this case the VEVs are periodic functions of the component of the vector potential along the compact dimension. We can consider a more general class of compactifications having the spatial topology $(S^{1})^{p}$ with compact dimensions $(x^{3}=z,x^{4},\ldots ,x^{p})$, $p\leqslant D$. The phases in the quasiperiodicity conditions along separate dimensions can be different. For the eigenvalues of the quantum numbers $k_{i}$, $i=3,\ldots ,p$, one has $2\pi (l_{i}+\beta _{i})/L_{i}$, $l_{i}=0,\pm 1,\ldots $, with $L_{i}$ being the length of the compact dimension along the axis $x^{i}$. The mode sum for the corresponding Wightman function contains the summation over $l_{i}$, $i=3,\ldots ,p$, and the integration over $k_{i}$ with $i=p+1,\ldots ,D$. We apply to the series over $l_{p}$ the formula (\ref{sumform}).
The term in the expression of the Wightman function which corresponds to the first integral on the right of (\ref{sumform}) is the Wightman function for the topology $(S^{1})^{p-1}$ with compact dimensions $(x^{3}=z,x^{4},\ldots ,x^{p-1})$, and the second term gives the part induced by the compactness of the direction $x^{p}$. As a result, a recurrence formula is obtained which relates the Wightman functions for the topologies $(S^{1})^{p}$ and $(S^{1})^{p-1}$. The formulas for the Wightman function given in this section can be used to study the response of an Unruh-DeWitt type particle detector (see \cite{Birr82,Taga86}) moving in the region outside the string. This response in the standard geometry of a $D=3$ cosmic string with integer values of the parameter $q$ has been investigated in \cite{Davi88}. Our main interest in this paper is in the VEVs of the field squared and the energy-momentum tensor, and we turn to the evaluation of these quantities. \section{VEV of the field squared} \label{Sec:phi2} The VEV of the field squared is obtained from the Wightman function by taking the coincidence limit of the arguments. It is presented in the decomposed form \begin{equation} \langle \varphi ^{2}\rangle =\langle \varphi ^{2}\rangle _{\text{s}}+\langle \varphi ^{2}\rangle _{\text{t}}, \label{VEV2dec} \end{equation} where $\langle \varphi ^{2}\rangle _{\text{s}}$ is the corresponding VEV in the geometry of a string without compact dimensions and $\langle \varphi ^{2}\rangle _{\text{t}}$ is the topological part induced by the compactification of the $z$-direction. Because the compactification does not change the local geometry of the cosmic string spacetime, the divergences in the coincidence limit are contained in the term $\langle \varphi ^{2}\rangle _{\text{s}}$ only and the topological part is finite.
For $r\neq 0$ the renormalization of $\langle \varphi ^{2}\rangle _{\text{s}}$ is reduced to the subtraction of the corresponding quantity in the Minkowski spacetime: \begin{equation} \langle \varphi ^{2}\rangle _{\text{s}}=\lim_{x^{\prime }\rightarrow x}[W_{\text{s}}(x,x^{\prime })-W_{\text{M}}(x,x^{\prime })], \label{ph2sren} \end{equation} with $W_{\text{M}}(x,x^{\prime })$ being the Wightman function in the Minkowski spacetime. The latter coincides with the $k=0$ term in the square brackets of (\ref{WF4Unc}). The subtraction of the Minkowskian Wightman function in (\ref{ph2sren}) removes the pole. As a result, one finds \begin{equation} \langle \varphi ^{2}\rangle _{\text{s}}=\frac{2m^{D-1}}{(2\pi )^{(D+1)/2}}\left[ \sum_{k=1}^{[q/2]}f_{(D-1)/2}(2mrs_{k})-\frac{q}{\pi }\sin (q\pi )\int_{0}^{\infty }dy\frac{f_{(D-1)/2}(2mr\cosh (y))}{\cosh (2qy)-\cos (q\pi )}\right] , \label{phi2s} \end{equation} where $[q/2]$ means the integer part of $q/2$ and we have defined \begin{equation} s_{k}=\sin (\pi k/q). \label{sk} \end{equation} For $1\leqslant q<2$ the first term in the square brackets is absent. The VEV given by (\ref{phi2s}) is positive for $q>1$. For a massless field, from (\ref{phi2s}) we obtain the expression below: \begin{equation} \langle \varphi ^{2}\rangle _{\text{s}}=\frac{2\Gamma ((D-1)/2)}{(4\pi )^{(D+1)/2}r^{D-1}}g_{D}(q), \label{phi2sm0} \end{equation} with the function $g_{D}(q)$ defined as \begin{equation} g_{D}(q)=\sum_{k=1}^{[q/2]}s_{k}^{1-D}-\frac{q}{\pi }\sin (q\pi )\int_{0}^{\infty }dy\frac{\cosh ^{1-D}(y)}{\cosh (2qy)-\cos (q\pi )}. \label{gD} \end{equation} The latter is a monotonically increasing positive function of $q$ for $q>1$. For large values of $q$, the dominant contribution to $g_{D}(q)$ comes from the first term on the right hand side and one finds \begin{equation} g_{D}(q)\approx \zeta (D-1)(q/\pi )^{D-1},\;q\gg 1, \label{gDlargeq} \end{equation} with $\zeta (x)$ being the Riemann zeta function.
Simple expressions for $g_{D}(q)$ can be found for odd values of the spatial dimension. In particular, for $D=3,5$ one has \begin{equation} g_{3}(q)=\frac{q^{2}-1}{6},\;g_{5}(q)=\frac{\left( q^{2}-1\right) \left( q^{2}+11\right) }{90}. \label{g35} \end{equation} The expressions for higher odd values of $D$ can be obtained by using the recurrence scheme described in \cite{Beze06}. From (\ref{phi2sm0}) we obtain the results previously derived in \cite{Sour92} for the case $D=2$ and in \cite{Line87,Smit89} for $D=3$. For a massive field, the leading term in the asymptotic expansion of the field squared for points near the string, $mr\ll 1$, coincides with (\ref{phi2sm0}). At large distances from the string, $mr\gg 1$, the VEV (\ref{phi2s}) is exponentially suppressed. For the topological part, from (\ref{WF4}) we directly obtain \begin{eqnarray} \langle \varphi ^{2}\rangle _{\text{t}} &=&\frac{4m^{D-1}}{(2\pi )^{(D+1)/2}}\sum_{l=1}^{\infty }\cos (2\pi l\beta )\left[ \sideset{}{'}{\sum}_{k=0}^{[q/2]}f_{(D-1)/2}(m\sqrt{4r^{2}s_{k}^{2}+\left( lL\right) ^{2}})\right. \notag \\ &&\left. -\frac{q}{\pi }\sin (q\pi )\int_{0}^{\infty }dy\frac{f_{(D-1)/2}(m\sqrt{4r^{2}\cosh ^{2}(y)+\left( lL\right) ^{2}})}{\cosh (2qy)-\cos (q\pi )}\right] , \label{phi2t} \end{eqnarray} where the prime on the sign of the summation means that the $k=0$ term should be halved. Note that the topological part is not changed under the replacement $\beta \rightarrow 1-\beta $. An alternative form of the VEV is obtained from (\ref{WF3}): \begin{equation} \langle \varphi ^{2}\rangle _{\text{t}}=\frac{2qr^{1-D}}{(2\pi )^{(D+1)/2}}\sum_{l=1}^{\infty }\cos (2\pi l\beta )\int_{0}^{\infty }dy\,y^{(D-3)/2}e^{-(2y+\left( lL/r\right) ^{2}y+m^{2}r^{2}/y)/2}\sideset{}{'}{\sum}_{n=0}^{\infty }I_{qn}(y). \label{phi2tb} \end{equation} For a massless field the integral in this formula is expressed in terms of the associated Legendre function.
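The closed forms (\ref{g35}) can be checked against the definition (\ref{gD}). The sketch below (an added illustration; the integral is truncated and evaluated by Simpson's rule) confirms $g_{3}(q)$ and $g_{5}(q)$ for non-integer $q$; note also that, via $\zeta (2)=\pi ^{2}/6$, the $D=3$ closed form reproduces the leading large-$q$ asymptotic (\ref{gDlargeq}) exactly:

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule, n even
    h = (b - a) / n
    return (f(a) + f(b)
            + 4.0 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
            + 2.0 * sum(f(a + 2 * i * h) for i in range(1, n // 2))) * h / 3.0

def g_D(q, D):
    # definition (gD): sum over k = 1..[q/2] of s_k^{1-D} minus the integral term
    s = sum(math.sin(math.pi * k / q) ** (1 - D) for k in range(1, int(q / 2) + 1))
    integrand = lambda y: (math.cosh(y) ** (1 - D)
                           / (math.cosh(2.0 * q * y) - math.cos(q * math.pi)))
    return s - (q / math.pi) * math.sin(q * math.pi) * simpson(integrand, 0.0, 12.0, 40000)
```

For non-integer $q$ both the image sum and the integral term contribute; at integer $q$ the integral term drops out because $\sin (q\pi )=0$.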
In particular, from (\ref{phi2tb}) it follows that the topological part is always positive for an untwisted scalar ($\beta =0$) and is always negative for a twisted scalar ($\beta =1/2$). In both cases, $|\langle \varphi ^{2}\rangle _{\text{t}}|$ is a monotonically decreasing function of the field mass. In the special case of integer $q$, the general formula (\ref{phi2t}) is reduced to \begin{equation} \langle \varphi ^{2}\rangle _{\text{t}}=\frac{2m^{D-1}}{(2\pi )^{(D+1)/2}}\sum_{k=0}^{q-1}\sum_{l=1}^{\infty }\cos (2\pi l\beta )f_{(D-1)/2}(m\sqrt{4r^{2}s_{k}^{2}+\left( lL\right) ^{2}}). \label{phi2tsp} \end{equation} In the discussion below, we shall be mainly concerned with the topological part. In the presence of a constant gauge field the corresponding expressions for the VEV of the field squared are obtained by the replacement $\beta \rightarrow \beta ^{\prime }$ with $\beta ^{\prime }$ defined by (\ref{beta}). Unlike the pure string part, $\langle \varphi ^{2}\rangle _{\text{s}}$, the topological part is finite on the string. Putting $r=0$ in (\ref{phi2t}) and using the relation \begin{equation} \int_{0}^{\infty }dy\frac{\sin (q\pi )}{\cosh (qy)-\cos (q\pi )}=\left\{ \begin{array}{cc} \pi (1-\delta )/q, & q=2p_{0}+\delta , \\ \pi /q, & q=2p_{0}, \end{array}\right. \label{IntForm} \end{equation} with $\delta $ defined in accordance with $q=2p_{0}+\delta $, $0\leqslant \delta <2$, and $p_{0}$ being an integer, one finds \begin{equation} \langle \varphi ^{2}\rangle _{\text{t},r=0}=q\langle \varphi ^{2}\rangle _{\text{t}}^{\text{(M)}}=\frac{2qm^{D-1}}{(2\pi )^{(D+1)/2}}\sum_{l=1}^{\infty }\cos (2\pi l\beta )f_{(D-1)/2}(lmL). \label{phi2r0} \end{equation} Here $\langle \varphi ^{2}\rangle _{\text{t}}^{\text{(M)}}$ is the VEV of the field squared in the Minkowski spacetime with a compact dimension of length $L$. Note that in the case of $\delta =0$ the left hand side of (\ref{IntForm}) is understood in the sense of the limit $\delta \rightarrow 0$.
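The integral relation (\ref{IntForm}) can be spot-checked numerically. The sketch below (an added illustration) compares a truncated quadrature of the left hand side with the closed form, taking one value of $q$ in each of the branches $0<\delta <1$ and $1<\delta <2$:

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule, n even
    h = (b - a) / n
    return (f(a) + f(b)
            + 4.0 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
            + 2.0 * sum(f(a + 2 * i * h) for i in range(1, n // 2))) * h / 3.0

def lhs_integral(q):
    # int_0^infty dy sin(q pi) / (cosh(q y) - cos(q pi)), truncated at y = 60/q
    f = lambda y: math.sin(q * math.pi) / (math.cosh(q * y) - math.cos(q * math.pi))
    return simpson(f, 0.0, 60.0 / q, 40000)

def rhs_closed(q):
    # pi (1 - delta)/q with q = 2 p0 + delta, 0 <= delta < 2;
    # the delta -> 0 limit gives pi/q, consistent with the second branch
    p0 = int(q / 2.0)
    delta = q - 2 * p0
    return math.pi * (1.0 - delta) / q
```

For $q=2.5$, for instance, $\cos (q\pi )=0$ and the integral reduces to $\int_{0}^{\infty }\mathrm{sech}(qy)\,dy=\pi /(2q)=\pi /5$, in agreement with the closed form.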
For points near the string, $mr\ll 1$, the pure stringy part behaves as $1/r^{D-1}$ and for $r\ll L$ it dominates in the total VEV. For a massless field one finds \begin{eqnarray} \langle \varphi ^{2}\rangle _{\text{t}} &=&\frac{\Gamma ((D-1)/2)}{\pi ^{(D+1)/2}L^{D-1}}\sum_{l=1}^{\infty }\cos (2\pi l\beta )\bigg[\sideset{}{'}{\sum}_{k=0}^{[q/2]}[(2rs_{k}/L)^{2}+l^{2}]^{(1-D)/2} \notag \\ &&-\frac{q}{\pi }\sin (q\pi )\int_{0}^{\infty }dy\frac{[(2r/L)^{2}\cosh ^{2}(y)+l^{2}]^{(1-D)/2}}{\cosh (2qy)-\cos (q\pi )}\bigg]. \label{phi2tm0} \end{eqnarray} At small distances from the string, $r\ll L$, in the leading order we have $\langle \varphi ^{2}\rangle _{\text{t}}\approx q\langle \varphi ^{2}\rangle _{\text{t}}^{\text{(M)}}$. At large distances, $r\gg L$, the dominant contribution to (\ref{phi2tm0}) comes from the $k=0$ term and one has $\langle \varphi ^{2}\rangle _{\text{t}}\approx \langle \varphi ^{2}\rangle _{\text{t}}^{\text{(M)}}$. As can be seen from (\ref{phi2t}), we have the same asymptotics in the case of a massive field as well. Numerical examples below are given for the simplest 5-dimensional Kaluza-Klein-type model ($D=4$) with a single extra dimension. In the left panel of figure \ref{fig1} we plot the topological part in the VEV of the field squared as a function of $mr$ for $mL=1$ and for various values of the parameter $q$ (figures near the curves). For $q=1$ the cosmic string is absent and the VEV of the field squared is uniform. The full/dashed curves correspond to untwisted/twisted fields ($\beta =0$ and $\beta =1/2$, respectively). For a twisted field the complete VEV $\langle \varphi ^{2}\rangle $ is positive for points near the string and negative at large distances. Hence, for some intermediate value of the radial coordinate it vanishes. As is seen, the presence of the cosmic string enhances the vacuum polarization effects induced by the compactification of spatial dimensions.
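The limiting behavior just described can be reproduced numerically from (\ref{phi2tsp}). For $D=4$ one has $f_{3/2}(x)=K_{3/2}(x)/x^{3/2}$ with the elementary form $K_{3/2}(x)=\sqrt{\pi /(2x)}\,e^{-x}(1+1/x)$. The sketch below (an added illustration for integer $q$) evaluates the topological part and checks that it interpolates between $q\langle \varphi ^{2}\rangle _{\text{t}}^{\text{(M)}}$ on the string and $\langle \varphi ^{2}\rangle _{\text{t}}^{\text{(M)}}$ far from it:

```python
import math

def f32(x):
    # f_{3/2}(x) = K_{3/2}(x)/x^{3/2}, with K_{3/2}(x) = sqrt(pi/(2x)) e^{-x} (1 + 1/x)
    return math.sqrt(math.pi / 2.0) * math.exp(-x) * (1.0 + 1.0 / x) / x**2

PREF = 2.0 / (2.0 * math.pi) ** 2.5   # prefactor 2/(2 pi)^{(D+1)/2} for D = 4

def phi2_top(mr, mL, q, beta, lmax=200):
    # topological part <phi^2>_t / m^3 for D = 4 and integer q, Eq. (phi2tsp)
    s = 0.0
    for k in range(q):
        sk = math.sin(math.pi * k / q)
        for l in range(1, lmax + 1):
            s += math.cos(2.0 * math.pi * l * beta) * f32(
                math.sqrt(4.0 * mr**2 * sk**2 + (l * mL) ** 2))
    return PREF * s

def phi2_minkowski(mL, beta, lmax=200):
    # compactified-Minkowski value <phi^2>_t^{(M)} / m^3 (no string, D = 4)
    return PREF * sum(math.cos(2.0 * math.pi * l * beta) * f32(l * mL)
                      for l in range(1, lmax + 1))
```

For $q=2$, $\beta =0$, $mL=1$ the value at $mr\rightarrow 0$ is twice the compactified-Minkowski one, while for $mr\gg 1$ the string contribution dies out exponentially, as stated above.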
In the right panel of figure \ref{fig1}, we plot the topological part in the VEV of the field squared versus the parameter $\beta $ for $mL=1$ and $mr=0.25$. As before, the figures near the curves correspond to the values of the parameter $q$. As has already been mentioned, the topological part is symmetric with respect to $\beta =1/2$. Note that the intersection point of the graphs for different $q$ depends on the values of $mL$ and $mr$. For example, in the case $mL=0.25$ and $mr=0.25$ at the intersection point we have $\beta \approx 0.19$ and $\langle \varphi ^{2}\rangle _{\text{t}}/m^{3}\approx 0.39$.
\begin{figure}[tbph]
\begin{center}
\begin{tabular}{cc}
\epsfig{figure=Sahafig1a.eps, width=7.cm, height=6.cm} & \quad
\epsfig{figure=Sahafig1b.eps, width=7.cm, height=6.cm}
\end{tabular}
\end{center}
\caption{The topological part of the VEV of the field squared, $\langle \protect\varphi ^{2}\rangle _{\text{t}}/m^{D-1}$, for $D=4$ as a function of $mr$ for the fixed value $mL=1$ (left panel) and as a function of $\protect\beta $ for the fixed values $mL=1$, $mr=0.25$ (right panel). The full and dashed curves on the left panel correspond to untwisted and twisted fields, respectively. The figures near the curves are the corresponding values of $q$.}
\label{fig1}
\end{figure}

\section{Energy-momentum tensor}
\label{Sec:EMT}

Another important characteristic of the vacuum state is the VEV of the energy-momentum tensor. For the evaluation of this quantity we use the formula \cite{Saha04}
\begin{equation}
\langle T_{ik}\rangle =\lim_{x^{\prime }\rightarrow x}\partial _{i^{\prime }}\partial _{k}W(x,x^{\prime })+\left[ \left( \xi -{1}/{4}\right) g_{ik}\nabla _{l}\nabla ^{l}-\xi \nabla _{i}\nabla _{k}-\xi R_{ik}\right] \langle \varphi ^{2}\rangle \ ,  \label{mvevEMT}
\end{equation}
where, for the spacetime under consideration, the Ricci tensor $R_{ik}$ vanishes for points outside the string.
The expression for the energy-momentum tensor in (\ref{mvevEMT}) differs from the standard one, given, for example, in \cite{Birr82}, by a term which vanishes on the mass shell. Taking into account the expressions for the Wightman function and the VEV of the field squared, it can be seen that the vacuum energy-momentum tensor is diagonal. Moreover, similarly to the field squared, it can be presented in the decomposed form
\begin{equation}
\langle T_{i}^{j}\rangle =\langle T_{i}^{j}\rangle _{\text{s}}+\langle T_{i}^{j}\rangle _{\text{t}},  \label{EMTDec}
\end{equation}
where $\langle T_{i}^{j}\rangle _{\text{s}}$ is the corresponding VEV in the geometry of a string without compactification and the part $\langle T_{i}^{j}\rangle _{\text{t}}$ is induced by the nontrivial topology of the $z$-direction. The topological part is finite and the renormalization is reduced to that for the pure string part. The topological part in the VEV of the energy-momentum tensor is found from (\ref{mvevEMT}), by making use of the expressions for the corresponding parts in the Wightman function and the VEV of the field squared.
After long but straightforward calculations, for the topological part one finds
\begin{equation}
\langle T_{i}^{j}\rangle _{\text{t}}=\frac{4m^{D+1}}{(2\pi )^{(D+1)/2}}\sum_{l=1}^{\infty }\cos (2\pi l\beta )\left[ \sideset{}{'}{\sum}_{k=0}^{[q/2]}F_{i,l}^{j}(2mr,s_{k})-\frac{q\sin (q\pi )}{\pi }\int_{0}^{\infty }dy\frac{F_{i,l}^{j}(2mr,\cosh (y))}{\cosh (2qy)-\cos (q\pi )}\right] ,  \label{EMTt}
\end{equation}
where the functions for the separate components are given by the expressions
\begin{eqnarray}
F_{0,l}^{0}(u,v) &=&\left( 1-4\xi \right) u^{2}v^{4}f_{(D+3)/2}(w)-\left[ 2\left( 1-4\xi \right) v^{2}+1\right] f_{(D+1)/2}(w),  \notag \\
F_{1,l}^{1}(u,v) &=&\left( 4\xi v^{2}-1\right) f_{(D+1)/2}(w),  \notag \\
F_{2,l}^{2}(u,v) &=&\left( 1-4\xi v^{2}\right) \left[ u^{2}v^{2}f_{(D+3)/2}(w)-f_{(D+1)/2}(w)\right] ,  \notag \\
F_{3,l}^{3}(u,v) &=&F_{0,l}^{0}(u,v)+\left( mlL\right) ^{2}f_{(D+3)/2}(w),  \label{Fij}
\end{eqnarray}
with the function $f_{\nu }(x)$ defined by Eq. (\ref{fnu}), and we use the notation
\begin{equation}
w=\sqrt{u^{2}v^{2}+\left( lmL\right) ^{2}}.  \label{yl}
\end{equation}
For the components with $i>3$ one has (no summation) $F_{i,l}^{i}(u,v)=F_{0,l}^{0}(u,v)$. This relation is a direct consequence of the invariance of the problem with respect to boosts along the directions $x^{i}$, $i=4,\ldots ,D$. The topological part is symmetric with respect to $\beta =1/2$. In the presence of a constant gauge field the expression for the VEV of the energy-momentum tensor is obtained from (\ref{EMTt}) by the replacement $\beta \rightarrow \beta ^{\prime }$, with $\beta ^{\prime }$ given by (\ref{beta}). The topological part is a periodic function of the component of the gauge field along the compact dimension.
In order to compare the contributions of the separate terms in (\ref{EMTDec}), here we also give the expression for the pure string part:
\begin{equation}
\langle T_{i}^{j}\rangle _{\text{s}}=\frac{2m^{D+1}}{(2\pi )^{(D+1)/2}}\left[ \sum_{k=1}^{[q/2]}F_{i,0}^{j}(2mr,s_{k})-\frac{q\sin (q\pi )}{\pi }\int_{0}^{\infty }dy\frac{F_{i,0}^{j}(2mr,\cosh (y))}{\cosh (2qy)-\cos (q\pi )}\right] ,  \label{EMTs}
\end{equation}
where the functions $F_{i,0}^{j}(2mr,s_{k})$ are given by (\ref{Fij}) with $l=0$. For integer values of $q$, formula (\ref{EMTs}) reduces to the one given in \cite{Beze06}. The VEVs corresponding to (\ref{EMTs}) diverge on the string as $1/r^{D+1}$. A procedure to cure this divergence is to consider the string as having a nontrivial inner structure. In fact, from a realistic point of view, the string has a characteristic core radius determined by the energy scale at which the symmetry of the system is spontaneously broken. Various special cases of formula (\ref{EMTs}) can be found in the literature. In particular, for a massless scalar field from (\ref{EMTs}) we find
\begin{eqnarray}
\langle T_{i}^{j}\rangle _{\text{s}} &=&\frac{\Gamma ((D+1)/2)}{(4\pi )^{(D+1)/2}r^{D+1}}\left\{ \left[ \frac{D-1}{D}g_{D}(q)-g_{D+2}(q)\right] \text{diag}(1,1,-D,1,\ldots ,1)\right.   \notag \\
&&\left. -4(D-1)\left( \xi -\xi _{D}\right) g_{D}(q)\,\text{diag}(1,\frac{-1}{D-1},\frac{D}{D-1},1,\ldots ,1)\right\} ,  \label{EMTsm0}
\end{eqnarray}
where the function $g_{D}(q)$ is defined by (\ref{gD}). In accordance with the asymptotic estimate (\ref{gDlargeq}), for large values of $q$ the expression on the right-hand side of (\ref{EMTsm0}) is dominated by the term with $g_{D+2}(q)$. The leading term does not depend on the curvature coupling parameter and the corresponding energy density is always negative. The energy density $\langle T_{0}^{0}\rangle _{\text{s}}$ for a massless field in an arbitrary number of dimensions has been discussed previously in \cite{Dowk87}.
In the special case $D=3$, the expression (\ref{EMTsm0}) reduces to the one given in \cite{Frol87} (see also \cite{Hell86} for the case of conformal coupling and \cite{Guim94} for a string which carries an internal magnetic flux). In this case and for a conformally coupled field the corresponding energy density is always negative. For a minimally coupled field the energy density is positive for $q^{2}<19$ and negative for $q^{2}>19$. In figure \ref{fig2} we plot the energy density corresponding to (\ref{EMTsm0}) as a function of $q$ for $D=3,4$ (figures near the curves) for minimally (full curves) and conformally (dashed curves) coupled massless fields. In the discussion below we shall be mainly concerned with the topological part.
\begin{figure}[tbph]
\begin{center}
\epsfig{figure=Sahafig2.eps, width=7.cm, height=6.cm}
\end{center}
\caption{The VEV of the energy density in the uncompactified string geometry, $r^{D+1}\langle T_{0}^{0}\rangle _{\text{s}}$, as a function of $q$ for minimally (full curves) and conformally (dashed curves) coupled massless scalar fields. The figures near the curves correspond to the values of $D$.}
\label{fig2}
\end{figure}
It can easily be checked that the topological part satisfies the covariant conservation equation for the energy-momentum tensor: $\nabla _{j}\langle T_{i}^{j}\rangle _{\text{t}}=0$. In the geometry under consideration, the latter is reduced to the equation $\langle T_{2}^{2}\rangle _{\text{t}}=\partial _{r}\left( r\langle T_{1}^{1}\rangle _{\text{t}}\right) $. In addition, the topological part obeys the trace relation
\begin{equation}
\langle T_{i}^{i}\rangle _{\text{t}}=D(\xi -\xi _{D})\nabla _{i}\nabla ^{i}\langle \varphi ^{2}\rangle _{\text{t}}+m^{2}\langle \varphi ^{2}\rangle _{\text{t}}.  \label{trace}
\end{equation}
In particular, it is traceless for a conformally coupled massless field. Let us consider special cases of the general formula (\ref{EMTt}).
For integer values of $q$ one finds
\begin{equation}
\langle T_{i}^{j}\rangle _{\text{t}}=\frac{2m^{D+1}}{(2\pi )^{(D+1)/2}}\sum_{k=0}^{q-1}\sum_{l=1}^{\infty }\cos (2\pi l\beta )F_{i,l}^{j}(2mr,s_{k}).  \label{EMTtsp}
\end{equation}
In the case of a massless field and for general values of $q$, the formula (\ref{EMTt}) reduces to
\begin{eqnarray}
\langle T_{i}^{j}\rangle _{\text{t}} &=&\frac{2\Gamma ((D+1)/2)}{\pi ^{(D+1)/2}L^{D+1}}\sum_{l=1}^{\infty }\frac{\cos (2\pi l\beta )}{l^{D+1}}\bigg[\sideset{}{'}{\sum}_{k=0}^{[q/2]}\frac{F_{i}^{(0)j}(2r/lL,s_{k})}{[(2rs_{k}/lL)^{2}+1]^{(D+3)/2}}  \notag \\
&&-\frac{q\sin (q\pi )}{\pi }\int_{0}^{\infty }dy\frac{F_{i}^{(0)j}(2r/lL,\cosh (y))}{\cosh (2qy)-\cos (q\pi )}[(2r/lL)^{2}\cosh ^{2}(y)+1]^{-(D+3)/2}\bigg],  \label{EMTtm0}
\end{eqnarray}
with the notation
\begin{eqnarray}
F_{0}^{(0)0}(u,v) &=&\left( 1-4\xi \right) v^{2}\left[ (D-1)v^{2}u^{2}-2\right] -u^{2}v^{2}-1,  \notag \\
F_{1}^{(0)1}(u,v) &=&\left( 4\xi v^{2}-1\right) \left( u^{2}v^{2}+1\right) ,  \notag \\
F_{2}^{(0)2}(u,v) &=&\left( 1-4\xi v^{2}\right) \left( Du^{2}v^{2}-1\right) ,  \notag \\
F_{3}^{(0)3}(u,v) &=&F_{0}^{(0)0}(u,v)+D+1,  \label{Fijl}
\end{eqnarray}
and (no summation) $F_{i}^{(0)i}(u,v)=F_{0}^{(0)0}(u,v)$ for $i>3$. Now we consider the asymptotics of the VEV of the energy-momentum tensor. When $r\gg L$, the dominant contribution to (\ref{EMTt}) comes from the term with $k=0$:
\begin{eqnarray}
\langle T_{i}^{j}\rangle _{\text{t}} &\approx &\langle T_{i}^{j}\rangle _{\text{t}}^{\text{(M)}}=-\frac{2m^{D+1}}{(2\pi )^{(D+1)/2}}\sum_{l=1}^{\infty }\cos (2\pi l\beta )f_{(D+1)/2}(lmL)  \notag \\
&&\times \text{diag}(1,1,1,-D+\frac{f_{(D-1)/2}(lmL)}{f_{(D+1)/2}(lmL)},1,\ldots ,1),  \label{EMTtlr}
\end{eqnarray}
where $\langle T_{i}^{j}\rangle _{\text{t}}^{\text{(M)}}$ is the VEV in the Minkowski spacetime with a compact dimension of length $L$. Note that the latter does not depend on the curvature coupling parameter.
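The massless functions (\ref{Fijl}) can be cross-checked numerically against the small-mass limit of the massive functions (\ref{Fij}). The sketch below assumes $f_\nu(x)=K_\nu(x)/x^\nu$ (the definition (\ref{fnu}) lies outside this excerpt, but this choice is consistent with all the formulas above), computes $K_\nu$ from its integral representation, and verifies that for $mlL\ll 1$ the energy-density function $F_{0,l}^{0}$ approaches $2^{(D-1)/2}\Gamma((D+1)/2)(mlL)^{-(D+1)}F_{0}^{(0)0}(2r/lL,v)/[(2rv/lL)^{2}+1]^{(D+3)/2}$, which is precisely the structure of the $l$-th summand in (\ref{EMTtm0}); the parameter values are arbitrary illustrative choices.

```python
import math

def besselk(nu, x, t_max=30.0, n=30000):
    """K_nu(x) = int_0^inf exp(-x cosh t) cosh(nu t) dt, trapezoidal rule."""
    h = t_max / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        val = math.exp(-x * math.cosh(t)) * math.cosh(nu * t)
        total += val if 0 < i < n else 0.5 * val
    return total * h

def f(nu, x):
    # f_nu(x) = K_nu(x) / x^nu  (assumed form of (fnu))
    return besselk(nu, x) / x ** nu

def F00_massive(u, v, w, D, xi):
    # F^0_{0,l}(u,v) from (Fij)
    return ((1 - 4 * xi) * u ** 2 * v ** 4 * f((D + 3) / 2, w)
            - (2 * (1 - 4 * xi) * v ** 2 + 1) * f((D + 1) / 2, w))

def F00_massless(u, v, D, xi):
    # F^{(0)0}_0(u,v) from (Fijl), with u = 2r/(lL)
    return (1 - 4 * xi) * v ** 2 * ((D - 1) * u ** 2 * v ** 2 - 2) - u ** 2 * v ** 2 - 1

D, xi, r, L, l, v, m = 4, 0.2, 0.7, 1.0, 2, 0.8, 1e-3  # illustrative values
u = 2 * m * r
w = math.sqrt((u * v) ** 2 + (l * m * L) ** 2)
lhs = F00_massive(u, v, w, D, xi)

ut = 2 * r / (l * L)
rho2 = (ut * v) ** 2 + 1
rhs = (2 ** ((D - 1) / 2) * math.gamma((D + 1) / 2) * (m * l * L) ** (-(D + 1))
       * F00_massless(ut, v, D, xi) / rho2 ** ((D + 3) / 2))
print(lhs / rhs)  # -> close to 1
```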
From (\ref{EMTtlr}) it follows that at large distances from the string the topological part of the energy density is negative/positive for untwisted/twisted scalar fields. At large distances the topological part dominates, and the same is the case for the total VEV. On the string we have
\begin{eqnarray}
\langle T_{i}^{j}\rangle _{\text{t},r=0} &=&q\langle T_{i}^{j}\rangle _{\text{t}}^{\text{(M)}}+\frac{4m^{D+1}F_{i}^{j}}{(2\pi )^{(D+1)/2}}\sum_{l=1}^{\infty }\cos (2\pi l\beta )f_{(D+1)/2}(lmL)  \notag \\
&&\times \left[ \sideset{}{'}{\sum}_{k=0}^{[q/2]}s_{k}^{2}-\frac{q\sin (q\pi )}{\pi }\int_{0}^{\infty }dy\frac{\cosh ^{2}(y)}{\cosh (2qy)-\cos (q\pi )}\right] ,  \label{EMTtaxis}
\end{eqnarray}
where the notation is as follows (no summation):
\begin{equation}
F_{i}^{i}=2\left( 4\xi -1\right) ,\;i=0,3,4,\ldots ,D,\;F_{1}^{1}=F_{2}^{2}=4\xi .  \label{Fijaxis}
\end{equation}
For both conformally and minimally coupled fields the energy density corresponding to (\ref{EMTtaxis}) is negative/positive for untwisted/twisted fields. For integer values of $q$, the expression in the square brackets in (\ref{EMTtaxis}) is equal to $q/4$. The pure string part of the VEV diverges on the string as $1/r^{D+1}$ and, hence, it dominates for points near the string, $r\ll L$. Combining these features, we see that for a minimally coupled untwisted scalar field the vacuum energy is negative at large distances from the string and positive near the string for $q^{2}<19$ and $D\geqslant 3$. In figure \ref{fig3} we plot the topological part of the vacuum energy density, $\langle T_{0}^{0}\rangle _{\text{t}}/m^{D+1}$, as a function of the distance from the string and of the length of the compact dimension, for an untwisted scalar field ($\beta =0$) in the $D=4$ cosmic string spacetime with $q=3$. For a twisted scalar field the corresponding energy density is positive.
\begin{figure}[tbph]
\begin{center}
\epsfig{figure=Sahafig3.eps, width=7.5cm, height=6.5cm}
\end{center}
\caption{The topological part in the VEV of the energy density, $\langle T_{0}^{0}\rangle _{\text{t}}/m^{D+1}$, for a minimally coupled untwisted scalar field in the 5-dimensional cosmic string spacetime with $q=3$, as a function of $mr$ and $mL$.}
\label{fig3}
\end{figure}
For a massless field the quantity $L^{D+1}\langle T_{i}^{j}\rangle _{\text{t}}$ is a function of the ratio $r/L$. In figure \ref{fig4} we present the corresponding energy density (left panel) and the stress along the compact dimension (right panel) in the case of a $D=4$ minimally coupled scalar field for various values of the parameter $q$ (figures near the curves). The full/dashed curves correspond to untwisted/twisted scalar fields. In the case $q=1$ the cosmic string is absent and the corresponding VEVs are uniform. The graphs for a conformally coupled field are similar to those given in figure \ref{fig4}. Note that for the topological part in the vacuum effective pressure along the $j$-th direction we have (no summation) $p_{\text{t},j}=-\langle T_{j}^{j}\rangle _{\text{t}}$, and, hence, for the example corresponding to figure \ref{fig4} both the energy density and the pressure along the compact dimension are negative/positive for untwisted/twisted fields.
\begin{figure}[tbph]
\begin{tabular}{cc}
\epsfig{figure=Sahafig4a.eps, width=7.cm, height=6.cm} & \quad
\epsfig{figure=Sahafig4b.eps, width=7.cm, height=6.cm}
\end{tabular}
\caption{The topological parts in the VEV of the energy density (left panel) and the stress along the compact dimension (right panel), as functions of $r/L$ for a minimally coupled massless scalar field. The full and dashed curves correspond to untwisted and twisted fields, respectively.
The figures near the curves are the values of the parameter $q$.}
\label{fig4}
\end{figure}
The topological parts in the radial and azimuthal stresses are plotted in figure \ref{fig5} versus $r/L$ for minimally (full curves) and conformally (dashed curves) coupled untwisted fields in the geometry of a $D=4$ cosmic string with $q=2$. Note that the azimuthal stress is not a monotonic function in both cases. As can be seen, the corresponding effective pressures are positive. For a twisted field the graphs have a similar structure with the signs changed.
\begin{figure}[tbph]
\begin{center}
\epsfig{figure=Sahafig5.eps, width=7.cm, height=6.cm}
\end{center}
\caption{The topological parts in the radial and azimuthal stresses for a minimally coupled massless field as functions of the ratio $r/L$ in the geometry of a $D=4$ cosmic string with $q=2$.}
\label{fig5}
\end{figure}
The dependence of the energy density on the parameter $\beta $ in the quasiperiodicity condition along the compact dimension is presented in figure \ref{fig6} for a minimally coupled massless scalar field in $D=4$. The graphs are plotted for $r/L=0.3$ and for various values of $q$ (numbers near the curves). They are symmetric with respect to $\beta =1/2$.
\begin{figure}[tbph]
\begin{center}
\epsfig{figure=Sahafig6.eps, width=7.cm, height=6.cm}
\end{center}
\caption{The topological part in the VEV of the energy density for a $D=4$ minimally coupled massless field as a function of the phase parameter $\protect\beta $ for $r/L=0.3$. The figures near the curves correspond to the values of $q$.}
\label{fig6}
\end{figure}

\section{Vacuum energy}
\label{Sec:VacEn}

As we have seen before, the energy density corresponding to the pure string part diverges on the string as $1/r^{D+1}$. Consequently, the corresponding total vacuum energy is divergent.
We can evaluate the total vacuum energy in the region $r\geqslant r_{0}>0$, per unit volume in the subspace $(x^{3},x^{4},\ldots ,x^{D})$, defined as $E_{0,r\geqslant r_{0}}^{\text{(s)}}=\phi _{0}\int_{r_{0}}^{\infty }dr\,r\langle T_{0}^{0}\rangle _{\text{s}}$. By using the recurrence relation for the modified Bessel function and the formula (see \cite{Prud86}) $\int_{a}^{\infty }dx\,xf_{\nu }(x)=f_{\nu -1}(a)$, with the function $f_{\nu }(x)$ defined in (\ref{fnu}), one finds
\begin{equation}
E_{0,r\geqslant r_{0}}^{\text{(s)}}=\frac{m^{D-1}\phi _{0}}{2(2\pi )^{(D+1)/2}}\left[ \sum_{k=1}^{[q/2]}F(2mr_{0}s_{k},s_{k})-\frac{q\sin (q\pi )}{\pi }\int_{0}^{\infty }dy\frac{F(2mr_{0}\cosh (y),\cosh (y))}{\cosh (2qy)-\cos (q\pi )}\right] ,  \label{Es0}
\end{equation}
with the notation
\begin{equation}
F(u,v)=\left( 1-4\xi \right) u^{2}f_{(D+1)/2}(u)-f_{(D-1)/2}(u)/v^{2}.  \label{Fuv}
\end{equation}
For a massless field this formula reduces to
\begin{equation}
E_{0,r\geqslant r_{0}}^{\text{(s)}}=\frac{\Gamma ((D-1)/2)}{4(4\pi )^{(D-1)/2}qr_{0}^{D-1}}\left[ \left( 1-4\xi \right) (D-1)g_{D}(q)-g_{D+2}(q)\right] .  \label{Es0m0}
\end{equation}
Of course, this result could also be obtained directly from (\ref{EMTsm0}). For $mr_{0}\gg 1$ and for a fixed value of $q$, the dominant contribution to (\ref{Es0}) comes from the first term on the right-hand side of Eq. (\ref{Fuv}) and the vacuum energy is positive for $\xi <1/4$. In particular, this is the case for both minimally and conformally coupled fields. In the case $mr_{0}\ll 1$, the leading term in the asymptotic expansion of the vacuum energy is given by (\ref{Es0m0}). For large values of $q$ the second term in the square brackets of (\ref{Es0m0}) dominates and the vacuum energy is negative, independently of the curvature coupling parameter. For $q\gtrsim 1$, the vacuum energy given by (\ref{Es0m0}) remains negative for a conformally coupled field and becomes positive for a minimally coupled field.
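The integration formula quoted from \cite{Prud86} is easy to confirm numerically. The sketch below again assumes $f_\nu(x)=K_\nu(x)/x^\nu$ (the definition (\ref{fnu}) is outside this excerpt) and checks $\int_a^\infty dx\,x f_\nu(x)=f_{\nu-1}(a)$ by quadrature, including the elementary case $\nu=1/2$, where $K_{1/2}(x)=\sqrt{\pi/(2x)}\,e^{-x}$ gives the closed form $f_{-1/2}(a)=\sqrt{\pi/2}\,e^{-a}$.

```python
import math

def besselk(nu, x, t_max=14.0, n=2000):
    """K_nu(x) = int_0^inf exp(-x cosh t) cosh(nu t) dt, trapezoidal rule.

    Adequate for x >~ 1, where the integrand is negligible beyond t ~ 14.
    """
    h = t_max / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        val = math.exp(-x * math.cosh(t)) * math.cosh(nu * t)
        total += val if 0 < i < n else 0.5 * val
    return total * h

def f(nu, x):
    # f_nu(x) = K_nu(x) / x^nu  (assumed form of (fnu))
    return besselk(nu, x) / x ** nu

def lhs(nu, a, x_max=40.0, n=800):
    """Quadrature of int_a^inf x f_nu(x) dx; the tail decays like exp(-x)."""
    h = (x_max - a) / n
    total = 0.0
    for i in range(n + 1):
        x = a + i * h
        val = x * f(nu, x)
        total += val if 0 < i < n else 0.5 * val
    return total * h

a = 1.2  # arbitrary lower limit for the check
val1, ref1 = lhs(2.0, a), f(1.0, a)                             # general nu
val2, ref2 = lhs(0.5, a), math.sqrt(math.pi / 2) * math.exp(-a)  # nu = 1/2
print(val1, ref1)
print(val2, ref2)
```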
Now we turn to the topological part of the vacuum energy. The corresponding energy-momentum tensor can be further decomposed as
\begin{equation}
\langle T_{i}^{j}\rangle _{\text{t}}=\langle T_{i}^{j}\rangle _{\text{t}}^{\text{(M)}}+\langle T_{i}^{j}\rangle _{\text{t}}^{\text{(s)}},  \label{DecTop}
\end{equation}
where the second term on the right-hand side is the correction due to the presence of the string. The expression for $\langle T_{i}^{j}\rangle _{\text{t}}^{\text{(s)}}$ is obtained from (\ref{EMTt}) by subtracting the part corresponding to the $k=0$ term (the latter coincides with $\langle T_{i}^{j}\rangle _{\text{t}}^{\text{(M)}}$). For the correction in the topological part of the vacuum energy per unit volume in the subspace $(x^{4},\ldots ,x^{D})$, induced by the string, we have
\begin{equation}
E_{\text{t}}^{\text{(s)}}=\int_{0}^{\infty }dr\,r\int_{0}^{\phi _{0}}d\phi \int_{0}^{L}dz\,\langle T_{0}^{0}\rangle _{\text{t}}^{\text{(s)}}.  \label{EST}
\end{equation}
Taking into account that $\langle T_{0}^{0}\rangle _{\text{t}}^{\text{(s)}}$ is given by the expression (\ref{EMTt}) with the $k=0$ term omitted, the integral over $r$ is evaluated by using the formula \cite{Prud86}
\begin{equation}
\int_{a}^{\infty }dx\,x(x^{2}-a^{2})^{\beta -1}f_{\nu }(cx)=2^{\beta -1}c^{-2\beta }f_{\nu -\beta }(ac),  \label{IntForm2}
\end{equation}
with the function $f_{\nu }(x)$ defined in Eq. (\ref{fnu}). As a result, the dependence on the parameter $q$ is factorized in the form $g_{3}(q)/q$, where the function $g_{D}(q)$ is given by (\ref{gD}). Taking into account the expression (\ref{g35}) for $g_{3}(q)$, we find the final expression for the vacuum energy
\begin{equation}
E_{\text{t}}^{\text{(s)}}=-\frac{q^{2}-1}{6q}\frac{m^{D-1}L}{(2\pi )^{(D-1)/2}}\sum_{l=1}^{\infty }\cos (2\pi l\beta )f_{(D-1)/2}(lmL).  \label{EST2}
\end{equation}
As can be seen, the total energy does not depend on the curvature coupling parameter. It is negative/positive for untwisted/twisted scalar fields.
For a massless field we find
\begin{equation}
E_{\text{t}}^{\text{(s)}}=\frac{q^{2}-1}{qL^{D-2}}h_{D}(\beta ),  \label{ESTm0}
\end{equation}
where
\begin{equation}
h_{D}(\beta )=-\frac{\Gamma ((D-1)/2)}{12\pi ^{(D-1)/2}}\sum_{l=1}^{\infty }\frac{\cos (2\pi l\beta )}{l^{D-1}}.  \label{hD}
\end{equation}
For a massive field, the expression (\ref{ESTm0}) gives the leading term in the asymptotic expansion for $mL\ll 1$. For odd values of $D$, the series in (\ref{hD}) is given in terms of the Bernoulli polynomials:
\begin{equation}
h_{D}(\beta )=\frac{(-1)^{(D-1)/2}\pi ^{D/2}}{12(D-1)\Gamma (D/2)}B_{D-1}(\beta ).  \label{ESTm0odd}
\end{equation}
In figure \ref{fig7} we plot the function $h_{D}(\beta )$ for $D=3,4,5$ (figures near the curves). As in the case of the vacuum densities, this function is symmetric with respect to $\beta =1/2$.
\begin{figure}[tbph]
\begin{center}
\epsfig{figure=Sahafig7.eps, width=7.cm, height=6.cm}
\end{center}
\caption{The function $h_{D}(\protect\beta )$ in (\protect\ref{ESTm0}) for $D=3,4,5$ (figures near the curves).}
\label{fig7}
\end{figure}

\section{Conclusion}
\label{sec:Conc}

In the present paper we have investigated the one-loop quantum effects for a massive scalar field with a general curvature coupling parameter, induced by the compactification of spatial dimensions in a generalized cosmic string spacetime. It is assumed that along the compact dimension the field obeys a quasiperiodicity condition with an arbitrary phase. As the first step in the investigation of the vacuum densities we evaluate the positive frequency Wightman function. This function gives comprehensive insight into vacuum fluctuations and determines the response of a particle detector of the Unruh-DeWitt type in a given state of motion. For a massive field and for a general value of the planar angle deficit, the Wightman function is given by formula (\ref{WF4}).
The $l=0$ term in this expression corresponds to the Wightman function in the geometry of a cosmic string without compactification and, hence, the topological part is explicitly extracted. As the compactification under consideration does not change the local geometry, in this way the renormalization of the VEVs in the coincidence limit is reduced to that for the standard cosmic string geometry without compactification. For integer values of the parameter $q$, the Wightman function is expressed as an image sum of the corresponding function in the Minkowski spacetime with a compact dimension and is given by Eq. (\ref{WF4Sp}). The VEV of the field squared is decomposed into the sum of the pure string part and the correction due to the compactification. For a massive field the string part is given by (\ref{phi2s}). Since the geometry is locally flat, this part does not depend on the curvature coupling parameter. It is positive for $q>1$ and diverges on the string like $1/r^{D-1}$. This divergence may be regularized by considering a more realistic model of the string with a nontrivial inner structure. The topological part in the VEV of the field squared is given by the expression (\ref{phi2t}) and is finite everywhere, including the points on the string. Depending on the value of the phase $\beta $ in the quasiperiodicity condition, this part can be either positive or negative. In particular, the topological part is positive for an untwisted scalar and negative for a twisted scalar. At distances from the string larger than the length of the compact dimension, the topological part in the VEV of the field squared approaches the corresponding quantity in the Minkowski spacetime with a compact dimension. For points near the string we have the simple asymptotic relation $\langle \varphi ^{2}\rangle _{\text{t}}\approx q\langle \varphi ^{2}\rangle _{\text{t}}^{\text{(M)}}$. The VEV of the energy-momentum tensor is investigated in section \ref{Sec:EMT}.
This VEV is diagonal and, similarly to the case of the field squared, it is decomposed into the pure string and topological parts, given by the expressions (\ref{EMTs}) and (\ref{EMTt}), respectively. We have explicitly checked that the topological part satisfies the covariant conservation equation and that its trace is related to the VEV of the field squared by the standard formula. For a massive field and for a general value of the parameter $q$, we give a closed expression for the pure string part in the VEV of the energy-momentum tensor in an arbitrary number of dimensions. The latter generalizes various special cases previously discussed in the literature. At large distances from the string, the topological part coincides with the corresponding result in the Minkowski spacetime and it dominates in the total VEV. In this limit the VEV of the energy-momentum tensor does not depend on the curvature coupling parameter and the corresponding energy density is negative/positive for untwisted/twisted scalar fields. The topological part is finite on the string and for points near the string the leading term in the corresponding asymptotic expansion is given by (\ref{EMTtaxis}). The pure string part in the VEV of the energy density diverges on the string as $1/r^{D+1}$ and near the string it dominates in the total VEV. For a conformally coupled scalar field the corresponding energy density is negative, whereas for a minimally coupled field the energy density is positive for small values of the parameter $q$ and becomes negative for large values of $q$. The numerical examples are given for the simplest Kaluza-Klein-type model with a single extra dimension. They show that the nontrivial topology due to the cosmic string enhances the vacuum polarization effects induced by the compactness of spatial dimensions, for both the field squared and the vacuum energy density.
For a charged scalar field in the presence of a constant gauge field, the expressions for the topological parts are obtained from the formulas given above by the replacement $\beta \rightarrow \beta ^{\prime }$, with $\beta ^{\prime }$ defined by (\ref{beta}). In this case the topological parts are periodic functions of the component of the gauge field along the compact dimension. This is an analog of the Aharonov-Bohm effect. As a result of the non-integrable divergence of the energy density of the pure string part on the string, the corresponding total vacuum energy is divergent. In section \ref{Sec:VacEn} we give a closed expression, Eq. (\ref{Es0}), for the vacuum energy in the region $r\geqslant r_{0}>0$. The topological part in the VEV of the energy-momentum tensor can be further decomposed into Minkowskian and string-induced parts. The latter is finite on the string and vanishes at large distances from the string. As a result, the total vacuum energy corresponding to this part is finite. This energy is given by the expressions (\ref{EST2}) and (\ref{ESTm0}) for massive and massless fields, respectively. In these expressions the dependence on the parameter $q$ is simply factorized. The string-induced part of the topological energy does not depend on the curvature coupling parameter and it is negative/positive for untwisted/twisted scalars. In a way similar to that described above, one can consider the topological Casimir effect for a cosmic string in de Sitter spacetime with compact spatial dimensions. The vacuum polarization effects induced by the presence of the string in uncompactified de Sitter spacetime have been recently discussed in Ref. \cite{Beze09}. The topological Casimir densities in de Sitter spacetime with toroidally compactified spatial dimensions are investigated in Ref. \cite{Saha08}.
It has been shown that the curvature of the background spacetime decisively influences the behavior of the topological parts in the VEVs of the field squared and the energy density for lengths of the compact dimensions larger than the curvature scale of the spacetime. In this paper the string geometry is taken as a static, given classical background for quantum matter fields. This approach follows the main part of the papers where the influence of the string on quantum matter is investigated (see references \cite{Hell86}-\cite{Guim94}). Of course, in a more complete approach the dynamics of the cosmic string should be taken into account. In the simplest model, the cosmic string dynamics can be described by the Nambu action (see, for instance, \cite{Vile94}). If the scalar field under consideration interacts with the Higgs field inside the string core, then, within this model, the total action will also contain a term describing the interaction of the scalar field with the vibrational modes of the string. This would be a further development of the model under discussion. The results obtained in the present paper are the first step toward this more general problem. Another development would be the investigation of the back-reaction effects of the quantum energy-momentum tensor on the gravitational field of the cosmic string. For the geometry of an infinitely thin straight cosmic string the back-reaction for conformal fields has been discussed in \cite{Iell97,Hisc87} by using the linearized semiclassical Einstein equations. It would also be interesting to generalize the vacuum polarization calculations of the present paper to models with a nontrivial string core. For a general cylindrically symmetric static model of the string core with finite support this can be done in a way similar to that used in \cite{Beze06} for the geometry of a straight cosmic string. \section*{Acknowledgments} AAS was supported by CAPES Program.
ERBM thanks Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'{o}gico (CNPq) for partial financial support.
\section{Introduction}
\subsection{Background and Motivation}
The optimal control of distributed parameter systems has important applications in various scientific areas, such as physics, chemistry, engineering, medicine, and finance. We refer to, e.g., \cite{glowinski1994exact, glowinski1995exact, glowinski2008exact, lions1971optimal, troltzsch2010optimal,zuazua2006} for a few references. In a typical mathematical model of a controlled distributed parameter system, either boundary or internal locally distributed controls are usually used; these controls have localized support and are called additive controls because they arise in the model equations as additive terms. Optimal control problems with additive controls have received significant attention in past decades following the pioneering work of J. L. Lions \cite{lions1971optimal}, and many mathematical and computational tools have been developed; see, e.g., \cite{glowinski1994exact,glowinski1995exact,glowinski2008exact,lions1988,zuazua2005,zuazua2007}. However, it is worth noting that additive controls describe the effect of external added sources or forces and they do not change the principal intrinsic properties of the controlled system. Hence, they are not suitable for processes whose principal intrinsic properties should be changed by some control actions. For instance, if we aim at changing the reaction rate in some chain reaction-type processes from biomedical, nuclear, and chemical applications, additive controls amount to controlling the chain reaction by adding or withdrawing a certain amount of the reactants, which is not realistic. To address this issue, a natural idea is to use certain catalysts or smart materials to control the systems, which can be mathematically modeled by optimal control problems with bilinear controls. We refer to \cite{khapalov2010} for more detailed discussions.
Bilinear controls, also known as multiplicative controls, enter the model as coefficients of the corresponding partial differential equations (PDEs). These bilinear controls can change some main physical characteristics of the system under investigation, such as a natural frequency response of a beam or the rate of a chemical reaction. In the literature, bilinear controls of distributed parameter systems have become an increasingly popular topic and bilinear optimal control problems constrained by various PDEs, such as elliptic equations \cite{kroner2009}, convection-diffusion equations \cite{borzi2015}, parabolic equations \cite{khapalov2003}, the Schr{\"o}dinger equation \cite{kunisch2007} and the Fokker-Planck equation \cite{fleig2017}, have been widely studied both mathematically and computationally. In particular, bilinear controls play a crucial role in optimal control problems modeled by advection-reaction-diffusion systems. On one hand, the control can be the coefficient of the diffusion or the reaction term. For instance, a system controlled by the so-called catalysts that can accelerate or slow down various chemical or biological reactions can be modeled by a bilinear optimal control problem for an advection-reaction-diffusion equation where the control arises as the coefficient of the reaction term \cite{khapalov2003}; this kind of bilinear optimal control problems have been studied in e.g., \cite{borzi2015,cannarsa2017,khapalov2003,khapalov2010}. On the other hand, the systems can also be controlled by the velocity field in the advection term, which captures important applications in e.g., bioremediation \cite{lenhart1998}, environmental remediation process \cite{lenhart1995}, and mixing enhancement of different fluids \cite{liu2008}. 
We note that very limited research has been done on bilinear optimal control problems controlled by the velocity field; only some special one-dimensional cases have been studied in \cite{lenhart1998,joshi2005,lenhart1995} regarding the existence of an optimal control and the derivation of first-order optimality conditions. To the best of our knowledge, no work has been done yet to develop efficient numerical methods for solving multi-dimensional bilinear optimal control problems controlled by the velocity field in the advection term. All these facts motivate us to study bilinear optimal control problems constrained by an advection-reaction-diffusion equation, where the control enters into the model as the velocity field in the advection term. Actually, investigating this kind of problem was suggested to one of us (R. Glowinski), in the late 1990's, by J. L. Lions (1928-2001). \subsection{Model} Let $\Omega$ be a bounded domain of $\mathbb{R}^d$ with $d\geq 1$ and let $\Gamma$ be its boundary. We consider the following bilinear optimal control problem: \begin{flalign}\tag{BCP} & \left\{ \begin{aligned} & \bm{u}\in \mathcal{U}, \\ &J(\bm{u})\leq J(\bm{v}), \forall \bm{v}\in \mathcal{U}, \end{aligned} \right. \end{flalign} with the objective functional $J$ defined by \begin{equation}\label{objective_functional} J(\bm{v})=\frac{1}{2}\iint_Q|\bm{v}|^2dxdt+\frac{\alpha_1}{2}\iint_Q|y-y_d|^2dxdt+\frac{\alpha_2}{2}\int_\Omega|y(T)-y_T|^2dx, \end{equation} and $y=y(t;\bm{v})$ the solution of the following advection-reaction-diffusion equation \begin{flalign}\label{state_equation} & \left\{ \begin{aligned} \frac{\partial y}{\partial t}-\nu \nabla^2y+\bm{v}\cdot \nabla y+a_0y&=f\quad \text{in}\quad Q, \\ y&=g\quad \text{on}\quad \Sigma,\\ y(0)&=\phi. \end{aligned} \right.
\end{flalign} Above and below, $Q=\Omega\times (0,T)$ and $\Sigma=\Gamma\times (0,T)$ with $0<T<+\infty$; $\alpha_1\geq 0, \alpha_2\geq 0, \alpha_1+\alpha_2>0$; the target functions $y_d$ and $y_T$ are given in $L^2(Q)$ and $L^2(\Omega)$, respectively; the diffusion coefficient $\nu>0$ and the reaction coefficient $a_0$ are assumed to be constants; the functions $f\in L^2(Q)$, $g\in L^2(0,T;H^{1/2}(\Gamma))$ and $\phi\in L^2(\Omega)$. The set $\mathcal{U}$ of the admissible controls is defined by \begin{equation*} \mathcal{U}:=\{\bm{v}|\bm{v}\in [L^2(Q)]^d, \nabla\cdot\bm{v}=0\}. \end{equation*} Clearly, the control variable $\bm{v}$ arises in (BCP) as a flow velocity field in the advection term of (\ref{state_equation}), and the divergence-free constraint $\nabla\cdot\bm{v}=0$ implies that the flow is incompressible. One can control the system by changing the flow velocity $\bm{v}$ in order that $y$ and $y(T)$ are good approximations to $y_d$ and $y_T$, respectively. \subsection{Difficulties and Goals} In this paper, we intend to study the bilinear optimal control problem (BCP) in the general case of $d\geq 2$ both mathematically and computationally. Precisely, we first study the well-posedness of (\ref{state_equation}), the existence of an optimal control $\bm{u}$, and its first-order optimality condition. Then, computationally, we propose an efficient and relatively easy to implement numerical method to solve (BCP). For this purpose, we advocate combining a conjugate gradient (CG) method with a finite difference method (for the time discretization) and a finite element method (for the space discretization) for the numerical solution of (BCP). Although these numerical approaches have been well developed in the literature, it is nontrivial to implement them to solve (BCP) as discussed below, due to the complicated problem settings. 
\subsubsection{Difficulties in Algorithmic Design} Conceptually, a CG method for solving (BCP) can be easily derived following \cite{glowinski2008exact}. However, CG algorithms are challenging to implement numerically for the following reasons: (1) the state $y$ depends nonlinearly on the control $\bm{v}$, despite the fact that the state equation (\ref{state_equation}) is linear; (2) the additional divergence-free constraint on the control $\bm{v}$, i.e., $\nabla\cdot\bm{v}=0$, is coupled with the state equation (\ref{state_equation}). To be more precise, the fact that the state $y$ is a nonlinear function of the control $\bm{v}$ makes the optimality system a nonlinear problem. Hence, seeking a suitable stepsize in each CG iteration requires solving an optimization problem, and it cannot be computed as easily as in the linear case \cite{glowinski2008exact}. Note that commonly used line search strategies are too expensive to employ in our setting because they require evaluating the objective functional value $J(\bm{v})$ repeatedly, and every evaluation of $J(\bm{v})$ entails solving the state equation (\ref{state_equation}). The same concern about computational cost also applies when the Newton method is employed to solve the corresponding optimization problem for finding a stepsize. To tackle this issue, we propose an efficient inexact stepsize strategy that requires solving only one additional linear parabolic problem and is cheap to implement, as shown in Section \ref{se:cg}. Furthermore, due to the divergence-free constraint $\nabla\cdot\bm{v}=0$, an extra projection onto the admissible set $\mathcal{U}$ is required to compute the first-order differential of $J$ at each CG iteration so that all iterates of the CG method are feasible. Generally, this projection subproblem has no closed-form solution and has to be solved iteratively.
Here, we introduce a Lagrange multiplier associated with the constraint $\nabla\cdot\bm{v}=0$; then the computation of the first-order differential $DJ(\bm{v})$ of $J$ at $\bm{v}$ is equivalent to solving a Stokes type problem. Inspired by \cite{glowinski2003}, we advocate employing a preconditioned CG method, which operates on the space of the Lagrange multiplier, to solve the resulting Stokes type problem. With an appropriately chosen preconditioner, fast convergence of the resulting preconditioned CG method can be expected in practice (and indeed, has been observed). \subsubsection{Difficulties in Numerical Discretization} For the numerical discretization of (BCP), we note that if an implicit finite difference scheme is used for the time discretization of the state equation (\ref{state_equation}), a stationary advection-reaction-diffusion equation has to be solved at each time step. To solve this stationary advection-reaction-diffusion equation, it is well known that standard finite element techniques may lead to strongly oscillatory solutions unless the mesh size is sufficiently small with respect to the ratio between $\nu$ and $\|\bm{v}\|$. In the context of optimal control problems, different stabilized finite element methods have been proposed and analyzed to overcome such difficulties; see, e.g., \cite{BV07,DQ05}. Different from the above references, we implement the time discretization by a semi-implicit finite difference method for simplicity; namely, we treat the advection and reaction terms explicitly and the diffusion term implicitly. Consequently, only a simple linear elliptic equation needs to be solved at each time step. We then implement the space discretization of the resulting elliptic equation at each time step by a standard piecewise linear finite element method, and the resulting linear system is very easy to solve.
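To make the semi-implicit splitting concrete, the following is a minimal one-dimensional finite difference sketch of a single time step: the advection and reaction terms are evaluated explicitly at the previous time level, while the diffusion term is treated implicitly, so only one linear (tridiagonal) system is solved per step. This is only an illustration under our own simplifications (1D, homogeneous Dirichlet data, centered differences, dense solve); the paper's actual scheme uses piecewise linear finite elements for the space discretization.

```python
import numpy as np

def semi_implicit_step(y, v, dt, dx, nu, a0, f):
    """One step of the semi-implicit scheme for
        y_t - nu*y_xx + v*y_x + a0*y = f
    on the interior nodes of a 1D grid with y = 0 on the boundary.
    Advection and reaction are explicit; diffusion is implicit."""
    m = y.size
    # explicit terms: centered advection (ghost zeros encode y = 0 on the boundary)
    y_ext = np.concatenate(([0.0], y, [0.0]))
    y_x = (y_ext[2:] - y_ext[:-2]) / (2.0 * dx)
    rhs = y + dt * (f - v * y_x - a0 * y)
    # implicit diffusion: (I + dt*nu*L) y_new = rhs, L = tridiag(-1, 2, -1)/dx^2
    r = dt * nu / dx**2
    A = (1.0 + 2.0 * r) * np.eye(m)
    A -= r * (np.eye(m, k=1) + np.eye(m, k=-1))
    return np.linalg.solve(A, rhs)
```

Since the diffusion solve is backward Euler, the step is unconditionally stable with respect to $\nu$; the explicit advection term still imposes a CFL-type restriction on the time step.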
Moreover, we recall that the divergence-free constraint $\nabla\cdot \bm{v}=0$ leads to a projection subproblem, which is equivalent to a Stokes type problem, at each iteration of the CG algorithm. As discussed in \cite{glowinski1992}, to discretize a Stokes type problem, direct applications of standard finite element methods always lead to an ill-posed discrete problem. To overcome this difficulty, one can use different types of element approximations for pressure and velocity. Inspired by \cite{glowinski1992,glowinski2003}, we employ the Bercovier-Pironneau finite element pair \cite{BP79} (also known as the $P_1$-$P_1$ iso $P_2$ finite element) to approximate the control $\bm{v}$ and the Lagrange multiplier associated with the divergence-free constraint. More concretely, we approximate the Lagrange multiplier by a piecewise linear finite element space whose mesh is twice as coarse as the one for the control $\bm{v}$. In this way, the discrete problem is well-posed and can be solved by a preconditioned CG method. As a byproduct of the above discretization, the total number of degrees of freedom of the discrete Lagrange multiplier is only $\frac{1}{d2^d}$ of the number of the discrete control. Hence, the inner preconditioned CG method operates in a lower-dimensional space than that of the state equation (\ref{state_equation}), implying a reduction of the computational cost. With the above-mentioned discretization schemes, we can relatively easily obtain the fully discrete version of (BCP) and derive the discrete analogue of our proposed nested CG method. \subsection{Organization} An outline of this paper is as follows. In Section \ref{se:existence and oc}, we prove the existence of optimal controls for (BCP) and derive the associated first-order optimality conditions. An easily implementable nested CG method is proposed in Section \ref{se:cg} for solving (BCP) numerically.
In Section \ref{se:discretization}, we discuss the numerical discretization of (BCP) by finite difference and finite element methods. Some preliminary numerical results are reported in Section \ref{se:numerical} to validate the efficiency of our proposed numerical approach. Finally, some conclusions are drawn in Section \ref{se:conclusion}. \section{Existence of optimal controls and first-order optimality conditions}\label{se:existence and oc} In this section, first we present some notation and known results from the literature that will be used in later analysis. Then, we prove the existence of optimal controls for (BCP) and derive the associated first-order optimality conditions. Without loss of generality, we assume that $f=0$ and $g=0$ in (\ref{state_equation}) for convenience. \subsection{Preliminaries} Throughout, we denote by $L^s(\Omega)$ and $H^s(\Omega)$ the usual Sobolev spaces for any $s>0$. The space $H_0^s(\Omega)$ denotes the completion of $C_0^{\infty}(\Omega)$ in $H^s(\Omega)$, where $C_0^{\infty}(\Omega)$ denotes the space of all infinitely differentiable functions over $\Omega$ with a compact support in $\Omega$. In addition, we shall also use the following vector-valued function spaces: \begin{eqnarray*} &&\bm{L}^2(\Omega):=[L^2(\Omega)]^d,\\ &&\bm{L}_{div}^2(\Omega):=\{\bm{v}\in \bm{L}^2(\Omega),\nabla\cdot\bm{v}=0~\text{in}~\Omega\}. \end{eqnarray*} Let $X$ be a Banach space with a norm $\|\cdot\|_X$, then the space $L^2(0, T;X)$ consists of all measurable functions $z:(0,T)\rightarrow X$ satisfying $$ \|z\|_{L^2(0, T;X)}:=\left(\int_0^T\|z(t)\|_X^2dt \right)^{\frac{1}{2}}<+\infty. $$ With the above notation, it is clear that the admissible set $\mathcal{U}$ can be denoted as $\mathcal{U}:=L^2(0,T; \bm{L}_{div}^2(\Omega))$. Moreover, the space $W(0,T)$ consists of all functions $z\in L^2(0, T; H_0^1(\Omega))$ such that $\frac{\partial z}{\partial t}\in L^2(0, T; H^{-1}(\Omega))$ exists in a weak sense, i.e. 
$$ W(0,T):=\{z|z\in L^2(0,T; H_0^1(\Omega)), \frac{\partial z}{\partial t}\in L^2(0,T; H^{-1}(\Omega))\}, $$ where $H^{-1}(\Omega)(=H_0^1(\Omega)^\prime)$ is the dual space of $H_0^1(\Omega)$. Next, we summarize some known results for the advection-reaction-diffusion equation (\ref{state_equation}) from the literature for the convenience of further analysis. The variational formulation of the state equation (\ref{state_equation}) reads: find $y\in W(0,T)$ such that $y(0)=\phi$ and $\forall z\in L^2(0,T;H_0^1(\Omega))$, \begin{equation}\label{weak_form} \int_0^T\left\langle\frac{\partial y}{\partial t}, z \right\rangle_{H^{-1}(\Omega),H_0^1(\Omega)} dt+\nu\iint_{Q} \nabla y\cdot\nabla z dxdt+\iint_{Q}\bm{v}\cdot\nabla yzdxdt+a_0\iint_{Q} yzdxdt=0, \end{equation} where $\left\langle\cdot,\cdot\right\rangle_{H^{-1}(\Omega),H_0^1(\Omega)}$ denotes the duality pairing between $H^{-1}(\Omega)$ and $H_0^1(\Omega)$. The existence and uniqueness of the solution $y\in W(0,T)$ to problem (\ref{weak_form}) can be proved by standard arguments relying on the Lax-Milgram theorem; we refer to \cite{lions1971optimal} for the details. Moreover, we can define the control-to-state operator $S:\mathcal{U}\rightarrow W(0,T)$, which maps $\bm{v}$ to $y=S(\bm{v})$. Then, the objective functional $J$ in (BCP) can be reformulated as \begin{equation*} J(\bm{v})=\frac{1}{2}\iint_Q|\bm{v}|^2dxdt+\frac{\alpha_1}{2}\iint_Q|S(\bm{v})-y_d|^2dxdt+\frac{\alpha_2}{2}\int_\Omega|S(\bm{v})(T)-y_T|^2dx, \end{equation*} and the nonlinearity of the solution operator $S$ implies that (BCP) is nonconvex. For the solution $y\in W(0,T)$, we have the following estimate. \begin{lemma} Let $\bm{v}\in L^2(0,T; \bm{L}^2_{div}(\Omega))$; then the solution $y\in W(0,T)$ of the state equation (\ref{state_equation}) satisfies the following estimate: \begin{equation}\label{est_y} \|y(t)\|_{L^2(\Omega)}^2+2\nu\int_0^t\|\nabla y(s)\|_{L^2(\Omega)}^2ds+2a_0\int_0^t\| y(s)\|_{L^2(\Omega)}^2ds=\|\phi\|_{L^2(\Omega)}^2.
\end{equation} \end{lemma} \begin{proof} We first multiply the state equation (\ref{state_equation}) by $y(t)$; then applying Green's formula in space yields \begin{equation}\label{e1} \frac{1}{2}\frac{d}{dt}\|y(t)\|_{L^2(\Omega)}^2=-\nu\|\nabla y(t)\|_{L^2(\Omega)}^2-a_0\|y(t)\|_{L^2(\Omega)}^2. \end{equation} The desired result (\ref{est_y}) can be directly obtained by integrating (\ref{e1}) over $[0,t]$. \end{proof} The above estimate implies that \begin{equation}\label{bd_y} y~\text{is bounded in}~L^2(0,T; H_0^1(\Omega)). \end{equation} On the other hand, $$ \frac{\partial y}{\partial t}=\nu \nabla^2y-\bm{v}\cdot \nabla y-a_0y, $$ and the right hand side is bounded in $L^2(0,T; H^{-1}(\Omega))$. Hence, \begin{equation}\label{bd_yt} \frac{\partial y}{\partial t}~\text{is bounded in}~ L^2(0,T; H^{-1}(\Omega)). \end{equation} Furthermore, since $\nabla\cdot\bm{v}=0$, it is clear that $$\iint_Q\bm{v}\cdot\nabla yzdxdt=\iint_Q\nabla y\cdot (\bm{v}z)dxdt=-\iint_Q y\nabla\cdot(\bm{v}z)dxdt=-\iint_Q y(\bm{v}\cdot\nabla z)dxdt,\forall z\in L^2(0,T;H_0^1(\Omega)).$$ Hence, the variational formulation (\ref{weak_form}) can be equivalently written as: find $y\in W(0,T)$ such that $y(0)=\phi$ and $\forall z\in L^2(0,T;H_0^1(\Omega))$, \begin{equation*} \int_0^T\left\langle\frac{\partial y}{\partial t}, z \right\rangle_{H^{-1}(\Omega),H_0^1(\Omega)} dt+\nu\iint_{Q} \nabla y\cdot\nabla z dxdt-\iint_Q(\bm{v}\cdot\nabla z) ydxdt+a_0\iint_{Q} yzdxdt=0. \end{equation*} \subsection{Existence of Optimal Controls} With the above preparations, we prove in this subsection the existence of optimal controls for (BCP). For this purpose, we first show that the objective functional $J$ is weakly lower semi-continuous. \begin{lemma}\label{wlsc} The objective functional $J$ given by (\ref{objective_functional}) is weakly lower semi-continuous.
That is, if a sequence $\{\bm{v}_n\}$ converges weakly to $\bar{\bm{v}}$ in $L^2(0,T; \bm{L}^2_{div}(\Omega))$, we have $$ J(\bar{\bm{v}})\leq \underset{n\rightarrow \infty}{\lim\inf} J(\bm{v}_n). $$ \end{lemma} \begin{proof} Let $\{\bm{v}_n\}$ be a sequence that converges weakly to $\bar{\bm{v}}$ in $L^2(0,T;\bm{L}^2_{div}(\Omega))$ and $y_n:=y(x,t;\bm{v}_n)$ the solution of the following variational problem: find $y_n\in W(0,T)$ such that $y_n(0)=\phi$ and $\forall z\in L^2(0,T;H_0^1(\Omega))$, \begin{equation}\label{seq_state} \int_0^T\left\langle\frac{\partial y_n}{\partial t}, z \right\rangle_{H^{-1}(\Omega),H_0^1(\Omega)} dt+\nu\iint_{Q} \nabla y_n\cdot\nabla z dxdt-\iint_Q(\bm{v}_n\cdot\nabla z) y_ndxdt+a_0\iint_{Q} y_nzdxdt=0. \end{equation} Moreover, it follows from (\ref{bd_y}) and (\ref{bd_yt}) that there exists a subsequence of $\{y_n\}$, still denoted by $\{y_n\}$ for convenience, such that $$y_n\rightarrow\bar{y}~\text{weakly in}~ L^2(0,T; H_0^1(\Omega)),$$ and $$\frac{\partial y_n}{\partial t}\rightarrow\frac{\partial \bar{y}}{\partial t} ~\text{weakly in}~L^2(0,T; H^{-1}(\Omega)).$$ Since $\Omega$ is bounded, it follows directly from the compactness property (also known as Rellich's Theorem) that $$y_n\rightarrow\bar{y}~\text{strongly in}~ L^2(0,T; L^2(\Omega)).$$ Taking $\bm{v}_n\rightarrow \bar{\bm{v}}$ weakly in $L^2(0,T; \bm{L}_{div}^2(\Omega))$ into account, we can pass the limit in (\ref{seq_state}) and derive that $\bar{y}(0)=\phi$ and $\forall z\in L^2(0,T;H_0^1(\Omega))$, \begin{equation*} \int_0^T\left\langle\frac{\partial \bar{y}}{\partial t}, z \right\rangle_{H^{-1}(\Omega),H_0^1(\Omega)} dt+\nu\iint_{Q} \nabla \bar{y}\cdot\nabla z dxdt-\iint_Q(\bar{\bm{v}}\cdot\nabla z) \bar{y}dxdt+a_0\iint_{Q} \bar{y}zdxdt=0, \end{equation*} which implies that $\bar{y}$ is the solution of the state equation (\ref{state_equation}) associated with $\bar{\bm{v}}$. 
Since any norm of a Banach space is weakly lower semi-continuous, we have that \begin{equation*} \begin{aligned} &\underset{n\rightarrow \infty}{\lim\inf} J(\bm{v}_n)\\ = &\underset{n\rightarrow \infty}{\lim\inf}\left( \frac{1}{2}\iint_Q|\bm{v}_n|^2dxdt+\frac{\alpha_1}{2}\iint_Q|y_n-y_d|^2dxdt+\frac{\alpha_2}{2}\int_\Omega|y_n(T)-y_T|^2dx\right)\\ \geq& \frac{1}{2}\iint_Q|\bar{\bm{v}}|^2dxdt+\frac{\alpha_1}{2}\iint_Q|\bar{y}-y_d|^2dxdt+\frac{\alpha_2}{2}\int_\Omega|\bar{y}(T)-y_T|^2dx\\ =& J(\bar{\bm{v}}). \end{aligned} \end{equation*} We thus obtain that the objective functional $J$ is weakly lower semi-continuous and complete the proof. \end{proof} Now, we are in a position to prove the existence of an optimal solution $\bm{u}$ to (BCP). \begin{theorem}\label{thm_existence} There exists at least one optimal control $\bm{u}\in \mathcal{U}=L^2(0,T; \bm{L}_{div}^2(\Omega))$ such that $J(\bm{u})\leq J(\bm{v}),\forall\bm{v}\in \mathcal{U}$. \end{theorem} \begin{proof} We first observe that $J(\bm{v})\geq 0,\forall\bm{v}\in \mathcal{U}$, then the infimum of $J(\bm{v})$ exists and we denote it as $$ j=\inf_{\bm{v}\in\mathcal{U}}J(\bm{v}), $$ and there is a minimizing sequence $\{\bm{v}_n\}\subset\mathcal{U}$ such that $$ \lim_{n\rightarrow \infty}J(\bm{v}_n)=j. $$ This fact, together with $\frac{1}{2}\|\bm{v}_n\|^2_{L^2(0,T; \bm{L}^2_{div}(\Omega))}\leq J(\bm{v}_n)$, implies that $\{\bm{v}_n\}$ is bounded in $L^2(0,T;\bm{L}^2_{div}(\Omega))$. Hence, there exists a subsequence, still denoted by $\{\bm{v}_n\}$, that converges weakly to $\bm{u}$ in $L^2(0,T; \bm{L}^2_{div}(\Omega))$. It follows from Lemma \ref{wlsc} that $J$ is weakly lower semi-continuous and we thus have $$ J(\bm{u})\leq \underset{n\rightarrow \infty}{\lim\inf} J(\bm{v}_n)=j. $$ Since $\bm{u}\in\mathcal{U}$, we must have $J(\bm{u})=j$, and $\bm{u}$ is therefore an optimal control. 
\end{proof} We note that the uniqueness of the optimal control $\bm{u}$ cannot be guaranteed and only a local optimal solution can be pursued, because the objective functional $J$ is nonconvex due to the nonlinear relationship between the state $y$ and the control $\bm{v}$. \subsection{First-order Optimality Conditions} Let $DJ(\bm{v})$ be the first-order differential of $J$ at $\bm{v}$ and $\bm{u}$ an optimal control of (BCP). It is clear that the first-order optimality condition of (BCP) reads \begin{equation*} DJ(\bm{u})=0. \end{equation*} In the remainder of this subsection, we discuss the computation of $DJ(\bm{v})$, which will play an important role in subsequent sections. To compute $DJ(\bm{v})$, we employ a formal perturbation analysis as in \cite{glowinski2008exact}. First, let $\delta \bm{v}\in \mathcal{U}$ be a perturbation of $\bm{v}\in \mathcal{U}$; we clearly have \begin{equation}\label{Dj and delta j} \delta J(\bm{v})=\iint_{Q}DJ(\bm{v})\cdot\delta \bm{v} dxdt, \end{equation} and also \begin{eqnarray}{\label{def_delta_j}} \begin{aligned} &\delta J(\bm{v})=\iint_{Q}\bm{v}\cdot\delta \bm{v} dxdt+\alpha_1\iint_{Q}(y-y_d)\delta y dxdt+\alpha_2\int_\Omega(y(T)-y_T)\delta y(T)dx, \end{aligned} \end{eqnarray} in which $\delta y$ is the solution of \begin{flalign}\label{perturbation_state_eqn} &\left\{ \begin{aligned} \frac{\partial \delta y}{\partial t}-\nu \nabla^2\delta y+\delta \bm{v}\cdot \nabla y+\bm{v}\cdot\nabla\delta y+a_0\delta y&=0\quad \text{in}\quad Q, \\ \delta y&=0\quad \text{on}\quad \Sigma,\\ \delta y(0)&=0. \end{aligned} \right. \end{flalign} Consider now a function $p$ defined over $\overline{Q}$ (the closure of $Q$), and assume that $p$ is a differentiable function of $x$ and $t$.
Multiplying both sides of the first equation in (\ref{perturbation_state_eqn}) by $p$ and integrating over $Q$, we obtain \begin{equation*} \iint_{Q}p\frac{\partial }{\partial t}\delta ydxdt-\nu\iint_{Q}p \nabla^2\delta ydxdt+\iint_Q\delta \bm{v}\cdot \nabla ypdxdt+\iint_Q\bm{v}\cdot\nabla\delta ypdxdt+a_0\iint_{Q}p\delta ydxdt=0. \end{equation*} Integration by parts in time and application of Green's formula in space yield \begin{eqnarray}{\label{weakform_p}} \begin{aligned} \int_\Omega p(T)\delta y(T)dx-\int_\Omega p(0)\delta y(0)dx+\iint_{Q}\Big[ -\frac{\partial p}{\partial t} -\nu\nabla^2p-\bm{v}\cdot\nabla p+a_0p\Big]\delta ydxdt\\ +\iint_Q\delta \bm{v}\cdot \nabla ypdxdt-\nu\iint_{\Sigma}(\frac{\partial\delta y}{\partial \bm{n}}p-\frac{\partial p}{\partial \bm{n}}\delta y)dxdt+\iint_\Sigma p\delta y\bm{v}\cdot \bm{n}dxdt=0, \end{aligned} \end{eqnarray} where $\bm{n}$ is the unit outward normal vector at $\Gamma$. Next, let us assume that the function $p$ is the solution to the following adjoint system \begin{flalign}\label{adjoint_equation} &\qquad \left\{ \begin{aligned} -\frac{\partial p}{\partial t} -\nu\nabla^2p-\bm{v}\cdot\nabla p +a_0p&=\alpha_1(y-y_d)~ \text{in}~ Q, \\ p&=0~\qquad\quad\quad\text{on}~ \Sigma,\\ p(T)&=\alpha_2(y(T)-y_T). \end{aligned} \right. \end{flalign} It follows from (\ref{def_delta_j}), (\ref{perturbation_state_eqn}), (\ref{weakform_p}) and (\ref{adjoint_equation}) that \begin{equation*} \delta J(\bm{v})=\iint_{Q}(\bm{v}-p\nabla y)\cdot\delta \bm{v} dxdt, \end{equation*} which, together with (\ref{Dj and delta j}), implies that \begin{equation}\label{gradient} \left\{ \begin{aligned} &DJ(\bm{v})\in \mathcal{U},\\ &\iint_QDJ(\bm{v})\cdot \bm{z}dxdt=\iint_Q(\bm{v}-p\nabla y)\cdot \bm{z}dxdt,\forall \bm{z}\in \mathcal{U}. \end{aligned} \right. \end{equation} From the discussion above, the first-order optimality condition of (BCP) can be summarized as follows. \begin{theorem} Let $\bm{u}\in \mathcal{U}$ be a solution of (BCP).
Then, it satisfies the following optimality condition \begin{equation*} \iint_Q(\bm{u}-p\nabla y)\cdot \bm{z}dxdt=0,\forall \bm{z}\in \mathcal{U}, \end{equation*} where $y$ and $p$ are obtained from $\bm{u}$ via the solutions of the following two parabolic equations: \begin{flalign*}\tag{state equation} &\quad\qquad\qquad\qquad\left\{ \begin{aligned} \frac{\partial y}{\partial t}-\nu \nabla^2y+\bm{u}\cdot \nabla y+a_0y&=f\quad \text{in}~ Q, \\ y&=g\quad \text{on}~\Sigma,\\ y(0)&=\phi, \end{aligned} \right.& \end{flalign*} and \begin{flalign*}\tag{adjoint equation} &\qquad\qquad\qquad\left\{ \begin{aligned} -\frac{\partial p}{\partial t} -\nu\nabla^2p-\bm{u}\cdot\nabla p +a_0p&=\alpha_1(y-y_d)\quad \text{in}~ Q, \\ p&=0 \quad\qquad\qquad\text{on}~\Sigma,\\ p(T)&=\alpha_2(y(T)-y_T). \end{aligned} \right.& \end{flalign*} \end{theorem} \section{An Implementable Nested Conjugate Gradient Method}\label{se:cg} In this section, we discuss the application of a CG strategy to solve (BCP). In particular, we elaborate on the computation of the gradient and the stepsize at each CG iteration, and thus obtain an easily implementable algorithm. \subsection{A Generic Conjugate Gradient Method for (BCP)} Conceptually, applying the CG method to (BCP), we readily obtain the following algorithm: \begin{enumerate} \item[\textbf{(a)}] Given $\bm{u}^0\in \mathcal{U}$. \item [\textbf{(b)}] Compute $\bm{g}^0=DJ(\bm{u}^0)$. If $DJ(\bm{u}^0)=0$, then $\bm{u}=\bm{u}^0$; otherwise set $\bm{w}^0=\bm{g}^0$. \item[]\noindent For $k\geq 0$, $\bm{u}^k,\bm{g}^k$ and $\bm{w}^k$ being known, the last two different from $\bm{0}$, one computes $\bm{u}^{k+1}, \bm{g}^{k+1}$ and $\bm{w}^{k+1}$ as follows: \item[\textbf{(c)}] Compute the stepsize $\rho_k$ by solving the following optimization problem \begin{flalign}\label{op_step} &\left\{ \begin{aligned} & \rho_k\in \mathbb{R}, \\ &J(\bm{u}^k-\rho_k\bm{w}^k)\leq J(\bm{u}^k-\rho \bm{w}^k), \forall \rho\in \mathbb{R}. \end{aligned} \right.
\end{flalign} \item[\textbf{(d)}] Update $\bm{u}^{k+1}$ and $\bm{g}^{k+1}$, respectively, by $$\bm{u}^{k+1}=\bm{u}^k-\rho_k \bm{w}^k,$$ and $$\bm{g}^{k+1}=DJ(\bm{u}^{k+1}).$$ \item[] If $DJ(\bm{u}^{k+1})=0$, take $\bm{u}=\bm{u}^{k+1}$; otherwise, \item[\textbf{(e)}] Compute $$\beta_k=\frac{\iint_{Q}|\bm{g}^{k+1}|^2dxdt}{\iint_{Q}|\bm{g}^k|^2dxdt},$$ and then update $$\bm{w}^{k+1}=\bm{g}^{k+1}+\beta_k \bm{w}^k.$$ \item[] Do $k+1\rightarrow k$ and return to (\textbf{c}). \end{enumerate} The above iterative method looks very simple, but practically, the implementation of the CG method (\textbf{a})--(\textbf{e}) for the solution of (BCP) is nontrivial. In particular, it is numerically challenging to compute $DJ(\bm{v})$, $\forall \bm{v}\in\mathcal{U}$ and $\rho_k$ as illustrated below. We shall discuss how to address these two issues in the following part of this section. \subsection{Computation of $DJ(\bm{v})$}\label{com_gra} It is clear that the implementation of the generic CG method (\textbf{a})--(\textbf{e}) for the solution of (BCP) requires the knowledge of $DJ(\bm{v})$ for various $\bm{v}\in \mathcal{U}$, and this has been conceptually provided in (\ref{gradient}). However, it is numerically challenging to compute $DJ(\bm{v})$ by (\ref{gradient}) due to the restriction $\nabla\cdot DJ(\bm{v})=0$ which ensures that all iterates $\bm{u}^k$ of the CG method meet the additional divergence-free constraint $\nabla\cdot \bm{u}^k=0$. In this subsection, we show that equation (\ref{gradient}) can be reformulated as a saddle point problem by introducing a Lagrange multiplier associated with the constraint $\nabla\cdot DJ(\bm{v})=0$. Then, a preconditioned CG method is proposed to solve this saddle point problem. 
We first note that equation (\ref{gradient}) can be equivalently reformulated as \begin{equation}\label{gradient_e} \left\{ \begin{aligned} &DJ(\bm{v})(t)\in \mathbb{S},\\ &\int_\Omega DJ(\bm{v})(t)\cdot \bm{z}dx=\int_\Omega(\bm{v}(t)-p(t)\nabla y(t))\cdot \bm{z}dx,\forall \bm{z}\in \mathbb{S}, \end{aligned} \right. \end{equation} where \begin{equation*} \mathbb{S}=\{\bm{z}|\bm{z}\in [L^2(\Omega)]^d, \nabla\cdot\bm{z}=0\}. \end{equation*} Clearly, problem (\ref{gradient_e}) is a particular case of \begin{equation}\label{gradient_e2} \left\{ \begin{aligned} &\bm{g}\in \mathbb{S},\\ &\int_\Omega \bm{g}\cdot \bm{z}dx=\int_\Omega\bm{f}\cdot \bm{z}dx,\forall \bm{z}\in \mathbb{S}, \end{aligned} \right. \end{equation} with $\bm{f}$ given in $[L^2(\Omega)]^d$. Introducing a Lagrange multiplier $\lambda\in H_0^1(\Omega)$ associated with the constraint $\nabla\cdot\bm{z}=0$, it is clear that problem (\ref{gradient_e2}) is equivalent to the following saddle point problem \begin{equation}\label{gradient_e3} \left\{ \begin{aligned} &(\bm{g},\lambda)\in [L^2(\Omega)]^d\times H_0^1(\Omega),\\ &\int_\Omega \bm{g}\cdot \bm{z}dx=\int_\Omega\bm{f}\cdot \bm{z}dx+\int_\Omega\lambda\nabla\cdot \bm{z}dx,\forall \bm{z}\in [L^2(\Omega)]^d,\\ &\int_\Omega\nabla\cdot \bm{g}qdx=0,\forall q\in H_0^1(\Omega), \end{aligned} \right. \end{equation} which is actually a Stokes type problem. In order to solve problem (\ref{gradient_e3}), we advocate a CG method inspired from \cite{glowinski2003, glowinski2015}. For this purpose, one has to specify the inner product to be used over $H_0^1(\Omega)$. As discussed in \cite{glowinski2003}, the usual $L^2$-inner product, namely, $\{q,q'\}\rightarrow\int_\Omega qq'dx$ leads to a CG method with poor convergence properties. 
Indeed, using some arguments similar to those in \cite{glowinski1992,glowinski2003}, we can show that the saddle point problem (\ref{gradient_e3}) can be reformulated as a linear variational problem in terms of the Lagrange multiplier $\lambda$. The corresponding coefficient matrix after space discretization with mesh size $h$ has a condition number of the order of $h^{-2}$; the discrete problem is thus ill-conditioned, especially for small $h$, which makes the CG method converge fairly slowly. Hence, preconditioning is necessary for solving problem (\ref{gradient_e3}). As suggested in \cite{glowinski2003}, we choose $-\nabla\cdot\nabla$ as a preconditioner for problem (\ref{gradient_e3}), and the corresponding preconditioned CG method operates in the space $H_0^1(\Omega)$ equipped with the inner product $\{q,q'\}\rightarrow\int_\Omega\nabla q\cdot\nabla q'dx$ and the associated norm $\|q\|_{H_0^1(\Omega)}=(\int_\Omega|\nabla q|^2dx)^{1/2}, \forall q,q'\in H_0^1(\Omega)$. The resulting algorithm reads as follows: \begin{enumerate} \item [\textbf{G1}] Choose $\lambda^0\in H_0^1(\Omega)$. \item [\textbf{G2}] Solve \begin{equation*} \left\{ \begin{aligned} &\bm{g}^0\in [L^2(\Omega)]^d,\\ &\int_\Omega \bm{g}^0\cdot \bm{z}dx=\int_\Omega\bm{f}\cdot \bm{z}dx+\int_\Omega\lambda^0\nabla\cdot \bm{z}dx,\forall \bm{z}\in [L^2(\Omega)]^d, \end{aligned} \right. \end{equation*} and \begin{equation*} \left\{ \begin{aligned} &r^0\in H_0^1(\Omega),\\ &\int_\Omega \nabla r^0\cdot \nabla qdx=\int_\Omega\nabla\cdot \bm{g}^0qdx,\forall q\in H_0^1(\Omega). \end{aligned} \right. \end{equation*} \smallskip If $\frac{\int_\Omega|\nabla r^0|^2dx}{\max\{1,\int_\Omega|\nabla \lambda^0|^2dx\}}\leq tol_1$, take $\lambda=\lambda^0$ and $\bm{g}=\bm{g}^0$; otherwise set $w^0=r^0$.
For $k\geq 0$, $\lambda^k,\bm{g}^k, r^k$ and $w^k$ being known with the last two different from 0, we compute $\lambda^{k+1},\bm{g}^{k+1}, r^{k+1}$ and if necessary $w^{k+1}$, as follows: \smallskip \item[\textbf{G3}] Solve \begin{equation*} \left\{ \begin{aligned} &\bar{\bm{g}}^k\in [L^2(\Omega)]^d,\\ &\int_\Omega \bar{\bm{g}}^k\cdot \bm{z}dx=\int_\Omega w^k\nabla\cdot \bm{z}dx,\forall \bm{z}\in [L^2(\Omega)]^d, \end{aligned} \right. \end{equation*} and \begin{equation*} \left\{ \begin{aligned} &\bar{r}^k\in H_0^1(\Omega),\\ &\int_\Omega \nabla \bar{r}^k\cdot \nabla qdx=\int_\Omega\nabla\cdot \bar{\bm{g}}^kqdx,\forall q\in H_0^1(\Omega), \end{aligned} \right. \end{equation*} and compute the stepsize via $$ \eta_k=\frac{\int_\Omega|\nabla r^k|^2dx}{\int_\Omega\nabla\bar{r}^k\cdot\nabla w^kdx}. $$ \item[\textbf{G4}] Update $\lambda^k, \bm{g}^k$ and $r^k$ via $$\lambda^{k+1}=\lambda^k-\eta_kw^k,\bm{g}^{k+1}=\bm{g}^k-\eta_k \bar{\bm{g}}^k,~\text{and}~r^{k+1}=r^k-\eta_k \bar{r}^k.$$ \smallskip If $\frac{\int_\Omega|\nabla r^{k+1}|^2dx}{\max\{1,\int_\Omega|\nabla r^0|^2dx\}}\leq tol_1$, take $\lambda=\lambda^{k+1}$ and $\bm{g}=\bm{g}^{k+1}$; otherwise, \item[\textbf{G5}] Compute $$\gamma_k=\frac{\int_\Omega|\nabla r^{k+1}|^2dx}{\int_\Omega|\nabla r^k|^2dx},$$ and update $w^k$ via $$w^{k+1}=r^{k+1}+\gamma_k w^{k}.$$ Do $k+1\rightarrow k$ and return to \textbf{G3}. \end{enumerate} Clearly, one only needs to solve two simple linear equations at each iteration of the preconditioned CG algorithm (\textbf{G1})-(\textbf{G5}), which implies that the algorithm is easy and cheap to implement. Moreover, due to the well-chosen preconditioner $-\nabla\cdot\nabla$, one can expect the above preconditioned CG algorithm to have a fast convergence; this will be validated by the numerical experiments reported in Section \ref{se:numerical}. 
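Algebraically, steps (\textbf{G1})--(\textbf{G5}) follow the standard preconditioned CG template. A generic finite-dimensional sketch, with callables `A` and `M_inv` standing in for the discretized Stokes-type operator and the inverse of the preconditioner $-\nabla\cdot\nabla$, might read as follows (our own simplified illustration, not the paper's implementation):

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxit=200):
    """Preconditioned conjugate gradient for an SPD operator.
    r is the residual (cf. r^k), w the search direction (cf. w^k),
    and M_inv applies the preconditioner."""
    x = np.zeros_like(b)
    r = b - A(x)
    z = M_inv(r)
    w = z.copy()
    rz = r @ z
    for k in range(maxit):
        Aw = A(w)
        eta = rz / (w @ Aw)          # stepsize, cf. eta_k in G3
        x += eta * w
        r -= eta * Aw
        z = M_inv(r)
        rz_new = r @ z
        if rz_new < tol**2:          # stopping test kept simple here
            return x, k + 1
        gamma = rz_new / rz          # cf. gamma_k in G5
        w = z + gamma * w
        rz = rz_new
    return x, maxit
```

With a good preconditioner the preconditioned residual norm `rz` decays rapidly, which is the finite-dimensional counterpart of the fast convergence expected from the choice $-\nabla\cdot\nabla$.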
\subsection{Computation of the Stepsize $\rho_k$}\label{com_step} Another crucial step in implementing the CG method \textbf{(a)}--\textbf{(e)} is the computation of the stepsize $\rho_k$. It is the solution of the optimization problem (\ref{op_step}), which is numerically expensive to solve exactly or to high accuracy. For instance, to solve (\ref{op_step}), one may consider the Newton method applied to the solution of $$ H_k'(\rho_k)=0, $$ where $$H_k(\rho)=J(\bm{u}^k-\rho\bm{w}^k).$$ The Newton method requires the second-order derivative $H_k''(\rho)$, which can be computed via an iterated adjoint technique requiring the solution of \emph{four} parabolic problems per Newton iteration. Hence, the implementation of the Newton method is numerically expensive. The high computational load of solving (\ref{op_step}) motivates us to implement a certain stepsize rule to determine an approximation of $\rho_k$. Here, we advocate the following procedure to compute an approximate stepsize $\hat{\rho}_k$. For a given $\bm{w}^k\in\mathcal{U}$, we replace the state $y=S(\bm{u}^k-\rho\bm{w}^k)$ in the objective functional $J(\bm{u}^k-\rho\bm{w}^k)$ by $$ S(\bm{u}^k)-\rho S'(\bm{u}^k)\bm{w}^k, $$ which is indeed the linearization of the mapping $\rho \mapsto S(\bm{u}^k - \rho \bm{w}^k)$ at $\rho= 0$.
We thus obtain the following quadratic approximation of $H_k(\rho)$: \begin{equation}\label{q_rho} Q_k(\rho):=\frac{1}{2}\iint_Q|\bm{u}^k-\rho \bm{w}^k|^2dxdt+\frac{\alpha_1}{2}\iint_Q|y^k-\rho z^k-y_d|^2dxdt+\frac{\alpha_2}{2}\int_\Omega|y^k(T)-\rho z^k(T)-y_T|^2dx, \end{equation} where $y^k=S(\bm{u}^k)$ is the solution of the state equation (\ref{state_equation}) associated with $\bm{u}^k$, and $z^k=S'(\bm{u}^k)\bm{w}^k$ satisfies the following linear parabolic problem \begin{flalign}\label{linear_state} &\left\{ \begin{aligned} \frac{\partial z^k}{\partial t}-\nu \nabla^2 z^k+\bm{w}^k\cdot \nabla y^k +\bm{u}^k\cdot\nabla z^k+a_0 z^k&=0\quad \text{in}\quad Q, \\ z^k&=0\quad \text{on}\quad \Sigma,\\ z^k(0)&=0. \end{aligned} \right. \end{flalign} Then, it is easy to show that the equation $ Q_k'(\rho)=0 $ admits the unique solution \begin{equation}\label{step_size} \hat{\rho}_k =\frac{\iint_Q\bm{g}^k\cdot \bm{w}^k dxdt}{\iint_Q|\bm{w}^k|^2dxdt+ \alpha_1\iint_Q|z^k|^2dxdt+\alpha_2\int_\Omega|z^k(T)|^2dx}, \end{equation} and we take $\hat{\rho}_k$, which is clearly an approximation of $\rho_k$, as the stepsize in each CG iteration. Altogether, with the stepsize given by (\ref{step_size}), every iteration of the resulting CG algorithm requires solving only \emph{three} parabolic problems, namely, the state equation (\ref{state_equation}) forward in time and the associated adjoint equation (\ref{adjoint_equation}) backward in time for the computation of $\bm{g}^k$, and the linearized parabolic equation (\ref{linear_state}) forward in time for the computation of the stepsize $\hat{\rho}_k$. For comparison, if the Newton method is employed to compute the stepsize $\rho_k$ by solving (\ref{op_step}), at least \emph{six} parabolic problems must be solved at each iteration of the CG method, which is much more expensive numerically.
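To make the construction above concrete, the sketch below builds the quadratic model $Q_k$ with randomly generated stand-ins for the discretized fields (all integrals become weighted sums; the arrays, weights, and penalty values are hypothetical) and checks that the closed-form stepsize is indeed its minimizer. By the duality between (\ref{linear_state}) and (\ref{adjoint_equation}), the numerator below coincides with $\iint_Q\bm{g}^k\cdot\bm{w}^k\,dxdt$ in (\ref{step_size}):

```python
import numpy as np

rng = np.random.default_rng(1)
n, nT = 50, 10                 # hypothetical numbers of space-time / terminal samples
dxdt, dx, a1, a2 = 0.01, 0.1, 1e2, 1.0   # toy quadrature weights and penalty parameters
u, w, y, z, yd = (rng.standard_normal(n) for _ in range(5))
yT, zT, ydT = (rng.standard_normal(nT) for _ in range(3))

def Q(rho):
    """Quadratic model Q_k(rho), with every integral replaced by a weighted sum."""
    return (0.5 * dxdt * np.sum((u - rho * w) ** 2)
            + 0.5 * a1 * dxdt * np.sum((y - rho * z - yd) ** 2)
            + 0.5 * a2 * dx * np.sum((yT - rho * zT - ydT) ** 2))

# closed-form minimizer of Q: the discrete analogue of the stepsize formula
num = (dxdt * np.sum(u * w) + a1 * dxdt * np.sum((y - yd) * z)
       + a2 * dx * np.sum((yT - ydT) * zT))
den = dxdt * np.sum(w ** 2) + a1 * dxdt * np.sum(z ** 2) + a2 * dx * np.sum(zT ** 2)
rho_hat = num / den
```

Since $Q_k$ is a strictly convex quadratic in $\rho$, $\hat{\rho}_k$ can be verified numerically by checking that perturbing $\rho$ in either direction increases $Q_k$.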
\begin{remark} To find an appropriate stepsize, a natural idea is to employ a line search strategy, such as the backtracking strategy based on the Armijo--Goldstein condition or the Wolfe condition; see e.g., \cite{nocedal2006}. It is worth noting that these line search strategies require evaluating $J(\bm{v})$ repeatedly, which is numerically expensive because every evaluation of $J(\bm{v})$ for a given $\bm{v}$ requires solving the state equation (\ref{state_equation}). Moreover, we have implemented the CG method for solving (BCP) with various line search strategies and observed from the numerical results that line search strategies always lead to tiny stepsizes, making the convergence of the CG method extremely slow. \end{remark} \subsection{A Nested CG Method for Solving (BCP)} Following Sections \ref{com_gra} and \ref{com_step}, we advocate the following nested CG method for solving (BCP): \begin{enumerate} \item[\textbf{I.}] Given $\bm{u}^0\in \mathcal{U}$. \item[\textbf{II.}] Compute $y^0$ and $p^0$ by solving the state equation (\ref{state_equation}) and the adjoint equation (\ref{adjoint_equation}) corresponding to $\bm{u}^0$. Then, for a.e. $t \in(0, T)$, solve \begin{equation*} \left\{ \begin{aligned} &\bm{g}^0(t)\in \mathbb{S},\\ &\int_\Omega \bm{g}^0(t)\cdot \bm{z}dx=\int_\Omega(\bm{u}^0(t)-p^0(t)\nabla y^0(t))\cdot \bm{z}dx,\forall \bm{z}\in \mathbb{S}, \end{aligned} \right. \end{equation*} by the preconditioned CG algorithm (\textbf{G1})--(\textbf{G5}); and set $\bm{w}^0=\bm{g}^0.$ \medskip \noindent For $k\geq 0$, $\bm{u}^k, \bm{g}^k$ and $\bm{w}^k$ being known, the last two different from $\bm{0}$, one computes $\bm{u}^{k+1}, \bm{g}^{k+1}$ and $\bm{w}^{k+1}$ as follows: \medskip \item[\textbf{III.}] Compute the stepsize $\hat{\rho}_k$ by (\ref{step_size}).
\item[\textbf{IV.}] Update $\bm{u}^{k+1}$ by $$\bm{u}^{k+1}=\bm{u}^k-\hat{\rho}_k\bm{w}^k.$$ Compute $y^{k+1}$ and $p^{k+1}$ by solving the state equation (\ref{state_equation}) and the adjoint equation (\ref{adjoint_equation}) corresponding to $\bm{u}^{k+1}$; and for a.e. $t \in(0, T)$, solve \begin{equation*} \left\{ \begin{aligned} &\bm{g}^{k+1}(t)\in \mathbb{S},\\ &\int_\Omega \bm{g}^{k+1}(t)\cdot \bm{z}dx=\int_\Omega(\bm{u}^{k+1}(t)-p^{k+1}(t)\nabla y^{k+1}(t))\cdot \bm{z}dx,\forall \bm{z}\in \mathbb{S}, \end{aligned} \right. \end{equation*} by the preconditioned CG algorithm (\textbf{G1})--(\textbf{G5}). \medskip \noindent If $\frac{\iint_Q|\bm{g}^{k+1}|^2dxdt}{\iint_Q|\bm{g}^{0}|^2dxdt}\leq tol$, take $\bm{u} = \bm{u}^{k+1}$; else \medskip \item[\textbf{V.}] Compute $$\beta_k=\frac{\iint_Q|\bm{g}^{k+1}|^2dxdt}{\iint_Q|\bm{g}^{k}|^2dxdt},~\text{and}~\bm{w}^{k+1} = \bm{g}^{k+1} + \beta_k\bm{w}^k.$$ Do $k+1\rightarrow k$ and return to \textbf{III}. \end{enumerate} \section{Space and time discretizations}\label{se:discretization} In this section, we discuss first the numerical discretization of the bilinear optimal control problem (BCP). We achieve the time discretization by a semi-implicit finite difference method and the space discretization by a piecewise linear finite element method. Then, we discuss an implementable nested CG method for solving the fully discrete bilinear optimal control problem. \subsection{Time Discretization of (BCP)} First, we define a time discretization step $\Delta t$ by $\Delta t= T/N$, with $N$ a positive integer. 
We approximate the control space $\mathcal{U}=L^2(0, T;\mathbb{S})$ by $ \mathcal{U}^{\Delta t}:=(\mathbb{S})^N; $ and equip $\mathcal{U}^{\Delta t}$ with the following inner product $$ (\bm{v},\bm{w})_{\Delta t} = \Delta t \sum^N_{n=1}\int_\Omega \bm{v}_n\cdot \bm{w}_ndx, \quad\forall \bm{v}= \{\bm{v}_n\}^N_{n=1}, \bm{w} = \{\bm{w}_n\}^N_{n=1} \in\mathcal{U}^{\Delta t}, $$ and the norm $$ \|\bm{v}\|_{\Delta t} = \left(\Delta t \sum^N_{n=1}\int_\Omega |\bm{v}_n|^2dx\right)^{\frac{1}{2}}, \quad\forall \bm{v}= \{\bm{v}_n\}^N_{n=1} \in\mathcal{U}^{\Delta t}. $$ Then, (BCP) is approximated by the following semi-discrete bilinear control problem (BCP)$^{\Delta t}$: \begin{flalign*} &\hspace{-4.5cm}\text{(BCP)}^{\Delta t}\qquad\qquad\qquad\qquad\left\{ \begin{aligned} & \bm{u}^{\Delta t}\in \mathcal{U}^{\Delta t}, \\ &J^{\Delta t}(\bm{u}^{\Delta t})\leq J^{\Delta t}(\bm{v}),\forall \bm{v}=\{\bm{v}_n\}_{n=1}^N\in\mathcal{U}^{\Delta t}, \end{aligned} \right. \end{flalign*} where the cost functional $J^{\Delta t}$ is defined by \begin{equation*} J^{\Delta t}(\bm{v})=\frac{1}{2}\Delta t \sum^N_{n=1}\int_\Omega |\bm{v}_n|^2dx+\frac{\alpha_1}{2}\Delta t \sum^N_{n=1}\int_\Omega |y_n-y_d^n|^2dx+\frac{\alpha_2}{2}\int_\Omega|y_N-y_T|^2dx, \end{equation*} with $\{y_n\}^N_{n=1}$ the solution of the following semi-discrete state equation: $y_0=\phi$; then for $n=1,\ldots,N$, with $y_{n-1}$ being known, we obtain $y_n$ from the solution of the following linear elliptic problem: \begin{flalign}\label{state_semidis} &\left\{ \begin{aligned} \frac{{y}_n-{y}_{n-1}}{\Delta t}-\nu \nabla^2{y}_n+\bm{v}_n\cdot\nabla{y}_{n-1}+a_0{y}_{n-1}&= f_n\quad \text{in}\quad \Omega, \\ y_n&=g_n\quad \text{on}\quad \Gamma. \end{aligned} \right. \end{flalign} \begin{remark} For simplicity, we have chosen a one-step semi-explicit scheme to discretize system (\ref{state_equation}). This scheme is first-order accurate and reasonably robust, once combined with an appropriate space discretization.
The application of second-order accurate time discretization schemes to optimal control problems has been discussed in e.g., \cite{carthelglowinski1994}. \end{remark} \begin{remark} At each step of scheme (\ref{state_semidis}), we only need to solve a simple linear elliptic problem to obtain $y_n$ from $y_{n-1}$, and there is no particular difficulty in solving such a problem. \end{remark} The existence of a solution to the semi-discrete bilinear optimal control problem (BCP)$^{\Delta t}$ can be proved in a similar way as in the continuous case. Let $\bm{u}^{\Delta t}$ be a solution of (BCP)$^{\Delta t}$; then it satisfies the following first-order optimality condition: \begin{equation*} DJ^{\Delta t}(\bm{u}^{\Delta t}) = 0, \end{equation*} where $DJ^{\Delta t}(\bm{v})$ is the first-order differential of the functional $J^{\Delta t}$ at $\bm{v}\in\mathcal{U}^{\Delta t}$. Proceeding as in the continuous case, we can show that $DJ^{\Delta t}(\bm{v})=\{\bm{g}_n\}_{n=1}^N\in\mathcal{U}^{\Delta t}$ where \begin{equation*} \left\{ \begin{aligned} &\bm{g}_n\in \mathbb{S},\\ &\int_\Omega \bm{g}_n\cdot \bm{w}dx=\int_\Omega(\bm{v}_n-p_n\nabla y_{n-1})\cdot \bm{w}dx,\forall\bm{w}\in \mathbb{S}, \end{aligned} \right. \end{equation*} and the vector-valued function $\{p_n\}^N_{n=1}$ is the solution of the semi-discrete adjoint system below: \begin{equation*} {p}_{N+1}=\alpha_2({y}_N-y_T); \end{equation*} for $n=N$, solve \begin{flalign*} \qquad \left\{ \begin{aligned} \frac{{p}_N-{p}_{N+1}}{\Delta t}-\nu \nabla^2{p}_N&= \alpha_1({y}_N-y_d^N)&\quad \text{in}\quad \Omega, \\ p_N&=0&\quad \text{on}\quad \Gamma, \end{aligned} \right. \end{flalign*} and for $n=N-1,\cdots,1$, solve \begin{flalign*} \qquad \left\{ \begin{aligned} \frac{{p}_n-{p}_{n+1}}{\Delta t}-\nu\nabla^2{p}_n-\bm{v}_{n+1}\cdot\nabla{p}_{n+1}+a_0{p}_{n+1}&= \alpha_1({y}_n-y_d^n)&\quad \text{in}\quad \Omega, \\ p_n&=0&\quad \text{on}\quad \Gamma. \end{aligned} \right.
\end{flalign*} \subsection{Space Discretization of (BCP)$^{\Delta t}$} In this subsection, we discuss the space discretization of (BCP)$^{\Delta t}$, obtaining thus a full space-time discretization of (BCP). For simplicity, we suppose from now on that $\Omega$ is a polygonal domain of $\mathbb{R}^2$ (or has been approximated by a family of such domains). Let $\mathcal{T}_H$ be a classical triangulation of $\Omega$, with $H$ the largest length of the edges of the triangles of $\mathcal{T}_H$. From $\mathcal{T}_{H}$ we construct $\mathcal{T}_{h}$ with $h=H/2$ by joining the mid-points of the edges of the triangles of $\mathcal{T}_{H}$. We first consider the finite element space $V_h$ defined by \begin{equation*} V_h = \{\varphi_h| \varphi_h\in C^0(\bar{\Omega}); { \varphi_h\mid}_{\mathbb{T}}\in P_1, \forall\, {\mathbb{T}}\in\mathcal{T}_h\} \end{equation*} with $P_1$ the space of the polynomials of two variables of degree $\leq 1$. Two useful sub-spaces of $V_h$ are \begin{equation*} V_{0h} =\{\varphi_h| \varphi_h\in V_h, \varphi_h\mid_{\Gamma}=0\}:=V_h\cap H_0^1(\Omega), \end{equation*} and (assuming that $g(t)\in C^0(\Gamma)$) \begin{eqnarray*} V_{gh}(t) =\{\varphi_h| \varphi_h\in V_h, \varphi_h(Q)=g(Q,t), \forall\, Q ~\text{vertex of} ~\mathcal{T}_h~\text{located on}~\Gamma \}. \end{eqnarray*} In order to construct the discrete control space, we introduce first \begin{equation*} \Lambda_H = \{\varphi_H| \varphi_H\in C^0(\bar{\Omega}); { \varphi_H\mid}_{\mathbb{T}}\in P_1, \forall\, {\mathbb{T}}\in\mathcal{T}_H\},~\text{and}~\Lambda_{0H} =\{\varphi_H| \varphi_H\in \Lambda_H, \varphi_H\mid_{\Gamma}=0\}. \end{equation*} Then, the discrete control space $\mathcal{U}_h^{\Delta t}$ is defined by \begin{equation*} \mathcal{U}_h^{\Delta t}=(\mathbb{S}_h)^N,~\text{with}~\mathbb{S}_h=\{\bm{v}_h|\bm{v}_h\in V_h\times V_h,\int_\Omega \nabla\cdot\bm{v}_hq_Hdx\left(=-\int_\Omega\bm{v}_h\cdot\nabla q_Hdx\right)=0,\forall q_H\in \Lambda_{0H}\}. 
\end{equation*} With the above finite element spaces, we approximate (BCP) and (BCP)$^{\Delta t}$ by (BCP)$_h^{\Delta t}$ defined by \begin{flalign*} &\hspace{-4.2cm}\text{(BCP)}_h^{\Delta t}\qquad\qquad\qquad\qquad\qquad\qquad\left\{ \begin{aligned} & \bm{u}_h^{\Delta t}\in \mathcal{U}_h^{\Delta t}, \\ &J_h^{\Delta t}(\bm{u}_h^{\Delta t})\leq J_h^{\Delta t}(\bm{v}_h^{\Delta t}),\forall \bm{v}_h^{\Delta t}\in\mathcal{U}_h^{\Delta t}, \end{aligned} \right. \end{flalign*} where the fully discrete cost functional $J_h^{\Delta t}$ is defined by \begin{equation}\label{obj_fuldis} J_h^{\Delta t}(\bm{v}_h^{\Delta t})=\frac{1}{2}\Delta t \sum^N_{n=1}\int_\Omega |\bm{v}_{n,h}|^2dx+\frac{\alpha_1}{2}\Delta t \sum^N_{n=1}\int_\Omega |y_{n,h}-y_d^n|^2dx+\frac{\alpha_2}{2}\int_\Omega|y_{N,h}-y_T|^2dx \end{equation} with $\{y_{n,h}\}^N_{n=1}$ the solution of the following fully discrete state equation: $y_{0,h}=\phi_h\in V_h$, where $\phi_h$ verifies $$ \phi_h\in V_h, \forall\, h>0,~\text{and}~\lim_{h\rightarrow 0}\phi_h=\phi,~\text{in}~L^2(\Omega), $$ then, for $n=1,\ldots,N$, with $y_{n-1,h}$ being known, we obtain $y_{n,h}\in V_{gh}(n\Delta t)$ from the solution of the following linear variational problem: \begin{equation}\label{state_fuldis} \int_\Omega\frac{{y}_{n,h}-{y}_{n-1,h}}{\Delta t}\varphi dx+\nu \int_\Omega\nabla{y}_{n,h}\cdot\nabla\varphi dx+\int_\Omega\bm{v}_n\cdot\nabla{y}_{n-1,h}\varphi dx+\int_\Omega a_0{y}_{n-1,h}\varphi dx= \int_\Omega f_{n}\varphi dx,\forall \varphi\in V_{0h}. \end{equation} In the following discussion, the subscript $h$ in all variables will be omitted for simplicity. 
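To illustrate the marching structure of the scheme (\ref{state_fuldis}) (and of its semi-discrete counterpart (\ref{state_semidis})), here is a minimal one-dimensional finite-difference sketch on $(0,1)$ with homogeneous Dirichlet data: the diffusion term is treated implicitly and the advection and reaction terms explicitly, so each time step reduces to one linear solve with a fixed matrix. The finite-difference setting and all names are illustrative stand-ins for the finite element discretization used in the paper:

```python
import numpy as np

def march_state(phi, v, f, nu, a0, dt, h):
    """One-step scheme: (y_n - y_{n-1})/dt - nu*y_n'' + v_n*y_{n-1}' + a0*y_{n-1} = f_n
    on (0,1) with y = 0 on the boundary. phi holds the M interior initial values;
    v and f have one row per time step."""
    M = phi.size
    I = np.eye(M)
    # standard centered finite-difference operators on the interior nodes
    D2 = (np.diag(-2.0 * np.ones(M)) + np.diag(np.ones(M - 1), 1)
          + np.diag(np.ones(M - 1), -1)) / h ** 2
    D1 = (np.diag(np.ones(M - 1), 1) - np.diag(np.ones(M - 1), -1)) / (2.0 * h)
    A = I / dt - nu * D2            # implicit part; factor it once in a real code
    y = phi.copy()
    for n in range(v.shape[0]):
        rhs = y / dt - v[n] * (D1 @ y) - a0 * y + f[n]   # explicit part
        y = np.linalg.solve(A, rhs)
    return y
```

With $v=f=0$, $a_0=0$ and $\phi=\sin(\pi x)$, the computed solution decays like $e^{-\nu\pi^2 t}\sin(\pi x)$, which gives a quick consistency check of the first-order accuracy of the scheme.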
Proceeding as in the continuous case, one can show that the first-order differential of $J_h^{\Delta t}$ at $\bm{v}\in\mathcal{U}_h^{\Delta t}$ is $DJ_h^{\Delta t}(\bm{v})=\{\bm{g}_n\}_{n=1}^N\in (\mathbb{S}_h)^N$ where \begin{equation}\label{gradient_ful} \left\{ \begin{aligned} &\bm{g}_n\in \mathbb{S}_h,\\ &\int_\Omega \bm{g}_n\cdot \bm{z}dx=\int_\Omega(\bm{v}_n-p_n\nabla y_{n-1})\cdot \bm{z}dx,\forall\bm{z}\in \mathbb{S}_h, \end{aligned} \right. \end{equation} and the vector-valued function $\{p_n\}^N_{n=1}$ is the solution of the following fully discrete adjoint system: \begin{equation}\label{ful_adjoint_1} {p}_{N+1}=\alpha_2({y}_N-y_T); \end{equation} for $n=N$, solve \begin{flalign}\label{ful_adjoint_2} \qquad \left\{ \begin{aligned} &p_N\in V_{0h},\\ &\int_\Omega\frac{{p}_N-{p}_{N+1}}{\Delta t}\varphi dx+\nu\int_\Omega \nabla{p}_N\cdot\nabla \varphi dx= \int_\Omega\alpha_1({y}_N-y_d^N)\varphi dx,\forall \varphi\in V_{0h}, \end{aligned} \right. \end{flalign} then, for $n=N-1,\cdots,1$, solve \begin{flalign}\label{ful_adjoint_3} \qquad \left\{ \begin{aligned} &p_n\in V_{0h},\\ &\int_\Omega\frac{{p}_n-{p}_{n+1}}{\Delta t}\varphi dx+\nu\int_\Omega\nabla{p}_n\cdot\nabla\varphi dx-\int_\Omega\bm{v}_{n+1}\cdot\nabla{p}_{n+1}\varphi dx\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad+a_0\int_\Omega{p}_{n+1}\varphi dx=\int_\Omega \alpha_1({y}_n-y_d^n)\varphi dx,\forall \varphi\in V_{0h}. \end{aligned} \right. \end{flalign} It is worth mentioning that the so-called discretize-then-optimize strategy is employed here: we first discretize (BCP), and then, to compute the gradient in the discrete setting, derive the fully discrete adjoint equation (\ref{ful_adjoint_1})--(\ref{ful_adjoint_3}) from the fully discrete cost functional (\ref{obj_fuldis}) and the fully discrete state equation (\ref{state_fuldis}).
This implies that the fully discrete state equation (\ref{state_fuldis}) and the fully discrete adjoint equation (\ref{ful_adjoint_1})--(\ref{ful_adjoint_3}) are strictly in duality. This fact guarantees that $-DJ_h^{\Delta t}(\bm{v})$ is a descent direction for the fully discrete bilinear optimal control problem (BCP)$_h^{\Delta t}$. \begin{remark} A natural alternative has been advocated in the literature: (i) Derive the adjoint equation to compute the first-order differential of the cost functional in a continuous setting; (ii) Discretize the state and adjoint state equations by certain numerical schemes; (iii) Use the resulting discrete analogs of $y$ and $p$ to compute a discretization of the differential of the cost functional. The main problem with this optimize-then-discretize approach is that it may not preserve a strict duality between the discrete state equation and the discrete adjoint equation. This fact implies in turn that the resulting discretization of the continuous gradient may not be a gradient of the discrete optimal control problem. As a consequence, the resulting algorithm is not a descent algorithm and divergence may take place (see \cite{GH1998} for a related discussion). \end{remark} \subsection{A Nested CG Method for Solving the Fully Discrete Problem (BCP)$_h^{\Delta t}$}\label{DCG} In this subsection, we propose a nested CG method for solving the fully discrete problem (BCP)$_h^{\Delta t}$. As discussed in Section \ref{se:cg}, the implementation of CG requires the knowledge of $DJ_h^{\Delta t}(\bm{v})$ and an appropriate stepsize. In the following discussion, we address these two issues by extending the results for the continuous case in Sections \ref{com_gra} and \ref{com_step} to the fully discrete setting, and we derive the corresponding CG algorithm. First, it is clear that one can compute $DJ_h^{\Delta t}(\bm{v})$ via the solution of the $N$ linear variational problems encountered in (\ref{gradient_ful}).
For this purpose, we introduce a Lagrange multiplier $\lambda\in \Lambda_{0H}$ associated with the divergence-free constraint, then problem (\ref{gradient_ful}) is equivalent to the following saddle point system \begin{equation}\label{fulgradient_e} \left\{ \begin{aligned} &(\bm{g}_n,\lambda)\in (V_h\times V_h)\times \Lambda_{0H},\\ &\int_\Omega \bm{g}_n\cdot \bm{z}dx=\int_\Omega (\bm{v}_n-p_n\nabla y_{n-1})\cdot \bm{z}dx+\int_\Omega\lambda\nabla\cdot \bm{z}dx,\forall \bm{z}\in V_h\times V_h,\\ &\int_\Omega\nabla\cdot\bm{g}_nqdx=0,\forall q\in \Lambda_{0H}. \end{aligned} \right. \end{equation} As discussed in Section \ref{com_gra}, problem (\ref{fulgradient_e}) can be solved by the following preconditioned CG algorithm, which is actually a discrete analogue of (\textbf{G1})--(\textbf{G5}). \begin{enumerate} \item [\textbf{DG1}] Choose $\lambda^0\in \Lambda_{0H}$. \item [\textbf{DG2}] Solve \begin{equation*} \left\{ \begin{aligned} &\bm{g}_n^0\in V_h\times V_h,\\ &\int_\Omega\bm{g}_n^0\cdot \bm{z}dx=\int_\Omega(\bm{v}_n-p_n\nabla y_{n-1})\cdot \bm{z}dx+\int_\Omega\lambda^0\nabla\cdot \bm{z}dx,\forall \bm{z}\in V_h\times V_h, \end{aligned} \right. \end{equation*} and \begin{equation*} \left\{ \begin{aligned} &r^0\in \Lambda_{0H},\\ &\int_\Omega \nabla r^0\cdot \nabla qdx=\int_\Omega\nabla\cdot \bm{g}_n^0qdx,\forall q\in \Lambda_{0H}. \end{aligned} \right. \end{equation*} \smallskip If $\frac{\int_\Omega|\nabla r^0|^2dx}{\max\{1,\int_\Omega|\nabla \lambda^0|^2dx\}}\leq tol_1$, take $\lambda=\lambda^0$ and $\bm{g}_n=\bm{g}_n^0$; otherwise set $w^0=r^0$. 
For $k\geq 0$, $\lambda^k,\bm{g}_n^k, r^k$ and $w^k$ being known with the last two different from 0, we define $\lambda^{k+1},\bm{g}_n^{k+1}, r^{k+1}$ and if necessary $w^{k+1}$, as follows: \smallskip \item[\textbf{DG3}] Solve \begin{equation*} \left\{ \begin{aligned} &\bar{\bm{g}}_n^k\in V_h\times V_h,\\ &\int_\Omega \bar{\bm{g}}_n^k\cdot \bm{z}dx=\int_\Omega w^k\nabla\cdot \bm{z}dx,\forall \bm{z}\in V_h\times V_h, \end{aligned} \right. \end{equation*} and \begin{equation*} \left\{ \begin{aligned} &\bar{r}^k\in \Lambda_{0H},\\ &\int_\Omega \nabla \bar{r}^k\cdot \nabla qdx=\int_\Omega\nabla\cdot \bar{\bm{g}}_n^kqdx,\forall q\in\Lambda_{0H}, \end{aligned} \right. \end{equation*} and compute $$ \eta_k=\frac{\int_\Omega|\nabla r^k|^2dx}{\int_\Omega\nabla\bar{r}^k\cdot\nabla w^kdx}. $$ \item[\textbf{DG4}] Update $\lambda^k,\bm{g}_n^k$ and $r^k$ via $$\lambda^{k+1}=\lambda^k-\eta_kw^k,\bm{g}_n^{k+1}=\bm{g}_n^k-\eta_k\bar{\bm{g}}_n^k,~\text{and}~r^{k+1}=r^k-\eta_k \bar{r}^k.$$ \smallskip If $\frac{\int_\Omega|\nabla r^{k+1}|^2dx}{\max\{1,\int_\Omega|\nabla r^0|^2dx\}}\leq tol_1$, take $\lambda=\lambda^{k+1}$ and $\bm{g}_n=\bm{g}_n^{k+1}$; otherwise, \item[\textbf{DG5}] Compute $$\gamma_k=\frac{\int_\Omega|\nabla r^{k+1}|^2dx}{\int_\Omega|\nabla r^k|^2dx},$$ and update $w^k$ via $$w^{k+1}=r^{k+1}+\gamma_k w^{k}.$$ Do $k+1\rightarrow k$ and return to \textbf{DG3}. 
\end{enumerate} To find an appropriate stepsize in the CG iteration for the solution of (BCP)$_h^{\Delta t}$, we note that, for any $\{\bm{w}_n\}_{n=1}^N\in (\mathbb{S}_h)^N$, the fully discrete analogue of $Q_k(\rho)$ in (\ref{q_rho}) reads as $$ Q_h^{\Delta t}(\rho)=\frac{1}{2}\Delta t \sum^N_{n=1}\int_\Omega |\bm{u}_n-\rho\bm{w}_n|^2dx+\frac{\alpha_1}{2}\Delta t \sum^N_{n=1}\int_\Omega |y_{n}-\rho z_{n}-y_d^n|^2dx+\frac{\alpha_2}{2}\int_\Omega|y_{N}-\rho z_{N}-y_T|^2dx, $$ where the vector-valued function $\{z_n\}^N_{n=1}$ is obtained as follows: $z_0=0$; then for $n=1,\ldots,N$, with $z_{n-1}$ being known, $z_n$ is obtained from the solution of the linear variational problem $$ \left\{ \begin{aligned} &z_n\in V_{0h},\\ &\int_\Omega\frac{{z}_n-{z}_{n-1}}{\Delta t}\varphi dx+\nu\int_\Omega \nabla{z}_n\cdot\nabla\varphi dx+\int_\Omega\bm{w}_n\cdot\nabla y_{n-1}\varphi dx\\ &\qquad\qquad\qquad\qquad\qquad+\int_\Omega\bm{u}_n\cdot\nabla{z}_{n-1}\varphi dx+a_0\int_\Omega{z}_{n-1}\varphi dx= 0,\forall\varphi\in V_{0h}. \end{aligned} \right. $$ As discussed in Section \ref{com_step} for the continuous case, we take the unique solution of ${Q_h^{\Delta t}}'(\rho)=0$ as the stepsize in each CG iteration, that is \begin{equation}\label{step_ful} \hat{\rho}_h^{\Delta t} =\frac{\Delta t\sum_{n=1}^{N}\int_\Omega\bm{g}_n\cdot \bm{w}_n dx}{\Delta t\sum_{n=1}^{N}\int_\Omega|\bm{w}_n|^2dx+ \alpha_1\Delta t\sum_{n=1}^{N}\int_\Omega|z_n|^2dx+\alpha_2\int_\Omega|z_N|^2dx}. \end{equation} Finally, with the above preparations, we propose the following nested CG algorithm for the solution of the fully discrete control problem (BCP)$_h^{\Delta t}$. \begin{enumerate} \item[\textbf{DI.}] Given $\bm{u}^0:=\{\bm{u}_n^0\}_{n=1}^N\in (\mathbb{S}_h)^N$.
\item[\textbf{DII.}] Compute $\{y_n^0\}_{n=0}^N$ and $\{p^0_n\}_{n=1}^{N+1}$ by solving the fully discrete state equation (\ref{state_fuldis}) and the fully discrete adjoint equation (\ref{ful_adjoint_1})--(\ref{ful_adjoint_3}) corresponding to $\bm{u}^0$. Then, for $n=1,\cdots, N$ solve \begin{equation*} \left\{ \begin{aligned} &\bm{g}_n^0\in \mathbb{S}_h,\\ &\int_\Omega \bm{g}_n^0\cdot \bm{z}dx=\int_\Omega(\bm{u}_n^0-p_n^0\nabla y_{n-1}^0)\cdot \bm{z}dx,\forall \bm{z}\in \mathbb{S}_h, \end{aligned} \right. \end{equation*} by the preconditioned CG algorithm (\textbf{DG1})--(\textbf{DG5}), and set $\bm{w}^0_n=\bm{g}_n^0.$ \medskip \noindent For $k\geq 0$, $\bm{u}^k, \bm{g}^k$ and $\bm{w}^k$ being known, the last two different from $\bm{0}$, one computes $\bm{u}^{k+1}, \bm{g}^{k+1}$ and $\bm{w}^{k+1}$ as follows: \medskip \item[\textbf{DIII.}] Compute the stepsize $\hat{\rho}_k$ by (\ref{step_ful}). \item[\textbf{DIV.}] Update $\bm{u}^{k+1}$ by $$\bm{u}^{k+1}=\bm{u}^k-\hat{\rho}_k\bm{w}^k.$$ Compute $\{y_n^{k+1}\}_{n=0}^N$ and $\{p_n^{k+1}\}_{n=1}^{N+1}$ by solving the fully discrete state equation (\ref{state_fuldis}) and the fully discrete adjoint equation (\ref{ful_adjoint_1})--(\ref{ful_adjoint_3}) corresponding to $\bm{u}^{k+1}$. Then, for $n=1,\cdots,N$, solve \begin{equation}\label{dis_gradient} \left\{ \begin{aligned} &\bm{g}_n^{k+1}\in \mathbb{S}_h,\\ &\int_\Omega \bm{g}_n^{k+1}\cdot \bm{z}dx=\int_\Omega(\bm{u}_n^{k+1}-p_n^{k+1}\nabla y_{n-1}^{k+1})\cdot \bm{z}dx,\forall \bm{z}\in \mathbb{S}_h, \end{aligned} \right. \end{equation} by the preconditioned CG algorithm (\textbf{DG1})--(\textbf{DG5}). 
\medskip \noindent If $\frac{\Delta t\sum_{n=1}^N\int_\Omega|\bm{g}_n^{k+1}|^2dx}{\Delta t\sum_{n=1}^N\int_\Omega|\bm{g}_n^{0}|^2dx}\leq tol$, take $\bm{u} = \bm{u}^{k+1}$; else \medskip \item[\textbf{DV.}] Compute $$\beta_k=\frac{\Delta t\sum_{n=1}^N\int_\Omega|\bm{g}_n^{k+1}|^2dx}{\Delta t\sum_{n=1}^N\int_\Omega|\bm{g}_n^{k}|^2dx},~\text{and}~\bm{w}^{k+1} = \bm{g}^{k+1} + \beta_k\bm{w}^k.$$ Do $k+1\rightarrow k$ and return to \textbf{DIII}. \end{enumerate} Despite its apparent complexity, the CG algorithm (\textbf{DI})--(\textbf{DV}) is easy to implement. Actually, one of the main computational difficulties in the implementation of the above algorithm is the solution of the $N$ linear systems (\ref{dis_gradient}), which is time-consuming. However, it is worth noting that the linear systems (\ref{dis_gradient}) are separable with respect to different $n$ and can be solved in parallel. As a consequence, one can compute the gradient $\{\bm{g}^{k}_n\}_{n=1}^N$ simultaneously and the computation time can be reduced significantly. Moreover, it is clear that the computation of $\{\bm{g}^{k}_n\}_{n=1}^N$ requires the storage of the solutions of (\ref{state_fuldis}) and (\ref{ful_adjoint_1})--(\ref{ful_adjoint_3}) at all points in space and time. For large scale problems, especially in three space dimensions, it can be very memory-demanding, and maybe even impossible, to store the full sets $\{y_n^k\}_{n=0}^N$ and $\{p_n^k\}_{n=1}^{N+1}$ simultaneously. To tackle this issue, one can employ the strategy described in e.g., \cite[Section 1.12]{glowinski2008exact} that can drastically reduce the storage requirements at the expense of a small increase in CPU time. \section{Numerical Experiments}\label{se:numerical} In this section, we report some preliminary numerical results validating the efficiency of the proposed CG algorithm (\textbf{DI})--(\textbf{DV}) for (BCP).
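The separability of the systems (\ref{dis_gradient}) across time levels mentioned above is straightforward to exploit. A minimal sketch (with a hypothetical shared projection matrix \texttt{M} and one right-hand side per time level; in a real code \texttt{M} would be factored once and reused):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def gradient_blocks(M, B):
    """Solve M g_n = b_n for every row b_n of B. The N systems are mutually
    independent, so they can be dispatched in parallel; LAPACK releases the
    GIL, so a thread pool already gives concurrency here."""
    with ThreadPoolExecutor() as pool:
        return np.array(list(pool.map(lambda b: np.linalg.solve(M, b), B)))
```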
All codes were written in MATLAB R2016b and numerical experiments were conducted on a Surface Pro 5 laptop with 64-bit Windows 10.0 operating system, Intel(R) Core(TM) i7-7660U CPU (2.50 GHz), and 16 GB RAM. \medskip \noindent\textbf{Example 1.} We consider the bilinear optimal control problem (BCP) on the domain $Q=\Omega\times(0,T)$ with $\Omega=(0,1)^2$ and $T=1$. In particular, we take the control $\bm{v}(x,t)$ in a finite-dimensional space, i.e., $\bm{v}\in L^2(0,T;\mathbb{R}^2)$. In addition, we set $\alpha_2=0$ in (\ref{objective_functional}) and consider the following tracking-type bilinear optimal control problem: \begin{equation}\label{model_ex1} \min_{\bm{v}\in L^2(0,T;\mathbb{R}^2)}J(\bm{v})=\frac{1}{2}\int_0^T|\bm{v}(t)|^2dt+\frac{\alpha_1}{2}\iint_Q|y-y_d|^2dxdt, \end{equation} where $|\bm{v}(t)|=\sqrt{\bm{v}_1(t)^2+\bm{v}_2(t)^2}$ is the canonical 2-norm, and $y$ is obtained from $\bm{v}$ via the solution of the state equation (\ref{state_equation}). Since the control $\bm{v}$ is considered in a finite-dimensional space, the divergence-free constraint $\nabla\cdot\bm{v}=0$ is automatically satisfied. As a consequence, the first-order differential $DJ(\bm{v})$ can be easily computed. Indeed, it is easy to show that \begin{equation}\label{oc_finite} DJ(\bm{v})=\left\{\bm{v}_i(t)+\int_\Omega y(t)\frac{\partial p(t)}{\partial x_i}dx\right \}_{i=1}^2,~\text{a.e.~on}~(0,T),\forall \bm{v}\in L^2(0,T;\mathbb{R}^2), \end{equation} where $p(t)$ is the solution of the adjoint equation (\ref{adjoint_equation}). The inner preconditioned CG algorithm (\textbf{DG1})--(\textbf{DG5}) for the computation of the gradient $\{\bm{g}_n\}_{n=1}^N$ is thus avoided. In order to examine the efficiency of the proposed CG algorithm (\textbf{DI})--(\textbf{DV}), we construct an example with a known exact solution.
To this end, we set $\nu=1$ and $a_0=1$ in (\ref{state_equation}), and $$ y=e^t(-3\sin(2\pi x_1)\sin(\pi x_2)+1.5\sin(\pi x_1)\sin(2\pi x_2)),\quad p=(T-t)\sin \pi x_1 \sin \pi x_2. $$ Substituting these two functions into the optimality condition $DJ(\bm{u}(t))=0$, we have $$ \bm{u}=(\bm{u}_1,\bm{u}_2)^\top=(2e^t(T-t),-e^t(T-t))^\top. $$ We further set \begin{eqnarray*} &&f=\frac{\partial y}{\partial t}-\nabla^2y+{\bm{u}}\cdot \nabla y+y, \quad\phi=-3\sin(2\pi x_1)\sin(\pi x_2)+1.5\sin(\pi x_1)\sin(2\pi x_2),\\ &&y_d=y-\frac{1}{\alpha_1}\left(-\frac{\partial p}{\partial t} -\nabla^2p-\bm{u}\cdot\nabla p +p\right),\quad g=0. \end{eqnarray*} Then, it is easy to verify that $\bm{u}$ is a solution point of the problem (\ref{model_ex1}). We display the solution $\bm{u}$ and the target function $y_d$ at different instants of time in Figure \ref{exactU_ex1} and Figure \ref{target_ex1}, respectively. \begin{figure}[htpb] \centering{ \includegraphics[width=0.43\textwidth]{exact_u.pdf} } \caption{The exact optimal control $\bm{u}$ for Example 1.} \label{exactU_ex1} \end{figure} \begin{figure}[htpb] \centering{ \includegraphics[width=0.3\textwidth]{target25.pdf} \includegraphics[width=0.3\textwidth]{target50.pdf} \includegraphics[width=0.3\textwidth]{target75.pdf} } \caption{The target function $y_d$ at $t=0.25, 0.5$ and $0.75$ (from left to right) for Example 1.} \label{target_ex1} \end{figure} The stopping criterion of the CG algorithm (\textbf{DI})--(\textbf{DV}) is set as $$ \frac{\Delta t\sum_{n=1}^N|\bm{g}^{k+1}_n|^2}{\Delta t\sum_{n=1}^N|\bm{g}^{0}_n|^2}\leq 10^{-5}. $$ The initial value is chosen as $\bm{u}^0=(0,0)^\top$; and we denote by $\bm{u}^{\Delta t}$ and $y_h^{\Delta t}$ the computed control and state, respectively. First, we take $h=\frac{1}{2^i}, i=5,6,7,8$, $\Delta t=\frac{h}{2}$ and $\alpha_1=10^6$, and implement the proposed CG algorithm (\textbf{DI})--(\textbf{DV}) for solving the problem (\ref{model_ex1}). 
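As a sanity check of this manufactured solution, one can evaluate (\ref{oc_finite}) by quadrature: $\int_\Omega y(t)\,\partial p(t)/\partial x_i\,dx$ should reproduce $-\bm{u}_i(t)$ for every $t$. A short sketch of this check at a sample time (midpoint-rule quadrature; the grid size is arbitrary):

```python
import numpy as np

T, t = 1.0, 0.3
m = 400                                      # quadrature points per direction
s = (np.arange(m) + 0.5) / m                 # midpoint rule on (0,1)
x1, x2 = np.meshgrid(s, s, indexing="ij")

# manufactured state y and the gradient of the manufactured adjoint p
y = np.exp(t) * (-3.0 * np.sin(2 * np.pi * x1) * np.sin(np.pi * x2)
                 + 1.5 * np.sin(np.pi * x1) * np.sin(2 * np.pi * x2))
dp_dx1 = (T - t) * np.pi * np.cos(np.pi * x1) * np.sin(np.pi * x2)
dp_dx2 = (T - t) * np.pi * np.sin(np.pi * x1) * np.cos(np.pi * x2)

u1, u2 = 2.0 * np.exp(t) * (T - t), -np.exp(t) * (T - t)
g1 = u1 + np.mean(y * dp_dx1)                # components of DJ(u)(t) in (oc_finite);
g2 = u2 + np.mean(y * dp_dx2)                # np.mean = midpoint rule on the unit square
```

Both components of the computed differential vanish up to quadrature error, confirming the optimality of $\bm{u}$ at the sampled time.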
The numerical results reported in Table \ref{tab:mesh_EX1} show that the CG algorithm converges fairly fast and is robust with respect to different mesh sizes. We also observe that the target function $y_d$ has been reached with good accuracy. Similar comments hold for the approximation of the optimal control $\bm{u}$ and of the state $y$ of problem (\ref{model_ex1}). By taking $h=\frac{1}{2^7}$ and $\Delta t=\frac{1}{2^8}$, the computed state $y_h^{\Delta t}$ and $y_h^{\Delta t}-y_d$ at $t=0.25,0.5$ and $0.75$ are reported in Figures \ref{stateEx1_1}, \ref{stateEx1_2} and \ref{stateEx1_3}, respectively; and the computed control $\bm{u}^{\Delta t}$ and error $\bm{u}^{\Delta t}-\bm{u}$ are visualized in Figure \ref{controlEx1}. \begin{table}[htpb] {\small\centering \caption{Results of the CG algorithm (\textbf{DI})--(\textbf{DV}) with different $h$ and $\Delta t$ for Example 1.} \label{tab:mesh_EX1} \begin{tabular}{|c|c|c|c|c|} \hline Mesh sizes &$Iter$& $\|\bm{u}^{\Delta t}-\bm{u}\|_{L^2(0,T;\mathbb{R}^2)}$&$\|y_h^{\Delta t}-y\|_{L^2(Q)}$& ${\|y_h^{\Delta t}-y_d\|_{L^2(Q)}}/{\|y_d\|_{{L^2(Q)}}}$ \\ \hline $h=1/2^5,\Delta t=1/2^6$ & 117 &2.8820$\times 10^{-2}$ &1.1569$\times 10^{-2}$&3.8433$\times 10^{-3}$ \\ \hline $h=1/2^6,\Delta t=1/2^7$ &48&1.3912$\times 10^{-2}$& 2.5739$\times 10^{-3}$&8.5623$\times 10^{-4}$ \\ \hline $h=1/2^7,\Delta t=1/2^8$ &48&6.9095$\times 10^{-3}$& 4.8574$\times 10^{-4}$ &1.6516$\times 10^{-4}$ \\ \hline $h=1/2^8,\Delta t=1/2^9$& 31 &3.4845$\times 10^{-3}$ &6.6231$\times 10^{-5}$ &2.2196$\times 10^{-5}$ \\ \hline \end{tabular}} \end{table} \begin{figure}[htpb] \centering{ \includegraphics[width=0.3\textwidth]{soln_y25.pdf} \includegraphics[width=0.3\textwidth]{err_y25.pdf} \includegraphics[width=0.3\textwidth]{dis_y_25.pdf}} \caption{Computed state $y^{\Delta t}_h$, error $y^{\Delta t}_h-y$ and $y^{\Delta t}_h-y_d$ (from left to right) at $t=0.25$ for Example 1.} \label{stateEx1_1} \end{figure} \begin{figure}[htpb] \centering{
\includegraphics[width=0.3\textwidth]{soln_y50.pdf} \includegraphics[width=0.3\textwidth]{err_y.pdf} \includegraphics[width=0.3\textwidth]{dis_y_50.pdf}} \caption{Computed state $y^{\Delta t}_h$, error $y^{\Delta t}_h-y$ and $y^{\Delta t}_h-y_d$ (from left to right) at $t=0.5$ for Example 1.} \label{stateEx1_2} \end{figure} \begin{figure}[htpb] \centering{ \includegraphics[width=0.3\textwidth]{soln_y75.pdf} \includegraphics[width=0.3\textwidth]{err_y75.pdf} \includegraphics[width=0.3\textwidth]{dis_y_75.pdf}} \caption{Computed state $y^{\Delta t}_h$, error $y^{\Delta t}_h-y$ and $y^{\Delta t}_h-y_d$ (from left to right) at $t=0.75$ for Example 1.} \label{stateEx1_3} \end{figure} \begin{figure}[htpb] \centering{ \includegraphics[width=0.45\textwidth]{soln_u.pdf} \includegraphics[width=0.45\textwidth]{err_u.pdf} } \caption{Computed optimal control $\bm{u}^{\Delta t}$ and error $\bm{u}^{\Delta t}-\bm{u}$ for Example 1.} \label{controlEx1} \end{figure} Furthermore, we tested the proposed CG algorithm (\textbf{DI})--(\textbf{DV}) with $h=\frac{1}{2^6}$ and $\Delta t=\frac{1}{2^7}$ for different values of the penalty parameter $\alpha_1$. The results reported in Table \ref{reg_EX1} show that the performance of the proposed CG algorithm is robust with respect to the penalty parameter, at least for the example being considered. We also observe that as $\alpha_1$ increases, the value of $\frac{\|y_h^{\Delta t}-y_d\|_{L^2(Q)}}{\|y_d\|_{{L^2(Q)}}}$ decreases. This implies that, as expected, the computed state $y_h^{\Delta t}$ is closer to the target function $y_d$ when the penalty parameter gets larger.
\begin{table}[htpb] {\small \centering \caption{Results of the CG algorithm (\textbf{DI})--(\textbf{DV}) with different $\alpha_1$ for Example 1.} \begin{tabular}{|c|c|c|c|c|c|} \hline $\alpha_1$ &$Iter$& $CPU(s)$&$\|\bm{u}^{\Delta t}-\bm{u}\|_{L^2(0,T;\mathbb{R}^2)}$&$\|y_h^{\Delta t}-y\|_{L^2(Q)}$& $\frac{\|y_h^{\Delta t}-y_d\|_{L^2(Q)}}{\|y_d\|_{{L^2(Q)}}}$ \\ \hline $10^4$ & 46 & 126.0666&1.3872$\times 10^{-2}$ &2.5739$\times 10^{-3}$ & 8.7666$\times 10^{-4}$ \\ \hline $10^5$ & 48 & 126.4185 &1.3908$\times 10^{-2}$ &2.5739$\times 10^{-3}$ &8.6596$\times 10^{-4}$ \\ \hline $10^6$ &48&128.2346 &1.3912$\times 10^{-2}$ & 2.5739$\times 10^{-3}$ &8.5623$\times 10^{-4}$ \\ \hline $10^7$ &48 & 127.1858&1.3912$\times 10^{-2}$&2.5739$\times 10^{-3}$ &8.5612$\times 10^{-4}$ \\ \hline $10^8$& 48 & 124.1160&1.3912$\times 10^{-2}$&2.5739$\times 10^{-3}$ &8.5610$\times 10^{-4}$ \\ \hline \end{tabular} \label{reg_EX1} } \end{table} \medskip \noindent\textbf{Example 2.} As in Example 1, we consider the bilinear optimal control problem (BCP) on the domain $Q=\Omega\times(0,T)$ with $\Omega=(0,1)^2$ and $T=1$. Now, we take the control $\bm{v}(x,t)$ in the infinite-dimensional space $\mathcal{U}=\{\bm{v}|\bm{v}\in [L^2(Q)]^2, \nabla\cdot\bm{v}=0\}.$ We set $\alpha_2=0$ in (\ref{objective_functional}), $\nu=1$ and $a_0=1$ in (\ref{state_equation}), and consider the following tracking-type bilinear optimal control problem: \begin{equation}\label{model_ex2} \min_{\bm{v}\in\mathcal{U}}J(\bm{v})=\frac{1}{2}\iint_Q|\bm{v}|^2dxdt+\frac{\alpha_1}{2}\iint_Q|y-y_d|^2dxdt, \end{equation} where $y$ is obtained from $\bm{v}$ via the solution of the state equation (\ref{state_equation}). First, we let \begin{eqnarray*} &&y=e^t(-3\sin(2\pi x_1)\sin(\pi x_2)+1.5\sin(\pi x_1)\sin(2\pi x_2)),\\ &&p=(T-t)\sin \pi x_1 \sin \pi x_2, ~ \text{and} ~\bm{u}=P_{\mathcal{U}}(p\nabla y), \end{eqnarray*} where $P_{\mathcal{U}}(\cdot)$ is the projection onto the set $\mathcal{U}$. 
We further set \begin{eqnarray*} &&f=\frac{\partial y}{\partial t}-\nabla^2y+{\bm{u}}\cdot \nabla y+y, \quad\phi=-3\sin(2\pi x_1)\sin(\pi x_2)+1.5\sin(\pi x_1)\sin(2\pi x_2),\\ &&y_d=y-\frac{1}{\alpha_1}\left(-\frac{\partial p}{\partial t} -\nabla^2p-\bm{u}\cdot\nabla p +p\right),\quad g=0. \end{eqnarray*} Then, it is easy to show that $\bm{u}$ is a solution point of the problem (\ref{model_ex2}). We note that $\bm{u}=P_{\mathcal{U}}(p\nabla y)$ has no analytical expression and can only be computed numerically. Here, we solve $\bm{u}=P_{\mathcal{U}}(p\nabla y)$ by the preconditioned CG algorithm (\textbf{DG1})--(\textbf{DG5}) with $h=\frac{1}{2^9}$ and $\Delta t=\frac{1}{2^{10}}$, and use the resulting control $\bm{u}$ as a reference solution for this example. \begin{figure}[htpb] \centering{ \includegraphics[width=0.3\textwidth]{ex2_target25.pdf} \includegraphics[width=0.3\textwidth]{ex2_target50.pdf} \includegraphics[width=0.3\textwidth]{ex2_target75.pdf} } \caption{The target function $y_d$ with $h=\frac{1}{2^7}$ and $\Delta t=\frac{1}{2^8}$ at $t=0.25, 0.5$ and $0.75$ (from left to right) for Example 2.} \label{target_ex2} \end{figure} The stopping criteria of the outer CG algorithm (\textbf{DI})--(\textbf{DV}) and the inner preconditioned CG algorithm (\textbf{DG1})--(\textbf{DG5}) are respectively set as $$ \frac{\Delta t\sum_{n=1}^N\int_\Omega|\bm{g}_n^{k+1}|^2dx}{\Delta t\sum_{n=1}^N\int_\Omega|\bm{g}_n^{0}|^2dx}\leq 5\times10^{-8}, ~\text{and}~\frac{\int_\Omega|\nabla r^{k+1}|^2dx}{\max\{1,\int_\Omega|\nabla r^0|^2dx\}}\leq 10^{-8}. $$ The initial values are chosen as $\bm{u}^0=(0,0)^\top$ and $\lambda^0=0$; and we denote by $\bm{u}_h^{\Delta t}$ and $y_h^{\Delta t}$ the computed control and state, respectively. First, we take $h=\frac{1}{2^i}, i=6,7,8$, $\Delta t=\frac{h}{2}$, $\alpha_1=10^6$, and implement the proposed nested CG algorithm (\textbf{DI})--(\textbf{DV}) for solving the problem (\ref{model_ex2}).
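The projection $P_{\mathcal{U}}$ used to build this reference control is computed in the paper through a Stokes-type problem solved by the preconditioned CG algorithm (\textbf{DG1})--(\textbf{DG5}). On a periodic domain the same projection onto divergence-free fields has a closed spectral form (the Helmholtz--Leray decomposition), which gives a cheap, independent way to sanity-check a projector. The sketch below is only this periodic analogue, not the finite element solver used in the paper:

```python
import numpy as np

def leray_project(v1, v2, length=2.0 * np.pi):
    """Spectral analogue of P_U on a periodic square: subtract the
    gradient part of the Helmholtz decomposition v = w + grad(phi),
    where lap(phi) = div(v), so that div(w) = 0."""
    n = v1.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)
    k1, k2 = np.meshgrid(k, k, indexing="ij")
    ksq = k1**2 + k2**2
    ksq[0, 0] = 1.0  # the mean mode has zero divergence anyway
    V1, V2 = np.fft.fft2(v1), np.fft.fft2(v2)
    div_hat = 1j * k1 * V1 + 1j * k2 * V2
    phi_hat = -div_hat / ksq   # solves lap(phi) = div(v) in Fourier space
    # w = v - grad(phi) is divergence-free
    W1 = V1 - 1j * k1 * phi_hat
    W2 = V2 - 1j * k2 * phi_hat
    return np.real(np.fft.ifft2(W1)), np.real(np.fft.ifft2(W2))
```

Applying it to a field with a known gradient component removes exactly that component, while a divergence-free field is left unchanged up to round-off.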
The numerical results reported in Table \ref{tab:mesh_EX2} show that the CG algorithm converges fast and is robust with respect to different mesh sizes. In addition, the preconditioned CG algorithm (\textbf{DG1})--(\textbf{DG5}) converges within 10 iterations for all cases and thus is efficient for computing the gradient $\{\bm{g}_n\}_{n=1}^N$. We also observe that the target function $y_d$ has been reached to a good accuracy. Similar comments hold for the approximation of the optimal control $\bm{u}$ and of the state $y$ of problem (\ref{model_ex2}). \begin{table}[htpb] {\small \centering \caption{Results of the nested CG algorithm (\textbf{DI})--(\textbf{DV}) with different $h$ and $\Delta t$ for Example 2.} \begin{tabular}{|c|c|c|c|c|c|} \hline Mesh sizes &$Iter_{CG}$&$MaxIter_{PCG}$& $\|\bm{u}_h^{\Delta t}-\bm{u}\|_{L^2(Q)}$&$\|y_h^{\Delta t}-y\|_{L^2(Q)}$& $\frac{\|y_h^{\Delta t}-y_d\|_{L^2(Q)}}{\|y_d\|_{{L^2(Q)}}}$ \\ \hline $h=1/2^6,\Delta t=1/2^7$ &443&9&3.7450$\times 10^{-3}$& 9.7930$\times 10^{-5}$&1.0906$\times 10^{-6}$ \\ \hline $h=1/2^7,\Delta t=1/2^8$ &410&9&1.8990$\times 10^{-3}$& 1.7423$\times 10^{-5}$ & 3.3863$\times 10^{-7}$ \\ \hline $h=1/2^8,\Delta t=1/2^9$& 405&8 &1.1223$\times 10^{-3}$ &4.4003$\times 10^{-6}$ &1.0378$\times 10^{-7}$ \\ \hline \end{tabular} \label{tab:mesh_EX2} } \end{table} Taking $h=\frac{1}{2^7}$ and $\Delta t=\frac{1}{2^8}$, the computed state $y_h^{\Delta t}$, the error $y_h^{\Delta t}-y$ and $y_h^{\Delta t}-y_d$ at $t=0.25,0.5,0.75$ are reported in Figures \ref{stateEx2_1}, \ref{stateEx2_2} and \ref{stateEx2_3}, respectively; and the computed control $\bm{u}_h^{\Delta t}$, the exact control $\bm{u}$, and the error $\bm{u}_h^{\Delta t}-\bm{u}$ at $t=0.25,0.5,0.75$ are presented in Figures \ref{controlEx2_1}, \ref{controlEx2_2} and \ref{controlEx2_3}.
\begin{figure}[htpb] \centering{ \includegraphics[width=0.3\textwidth]{ex2_soln_y25.pdf} \includegraphics[width=0.3\textwidth]{ex2_err_y25.pdf} \includegraphics[width=0.3\textwidth]{ex2_dis_y_25.pdf}} \caption{Computed state $y^{\Delta t}_h$, error $y^{\Delta t}_h-y$ and $y^{\Delta t}_h-y_d$ with $h=\frac{1}{2^7}$ and $\Delta t=\frac{1}{2^8}$ (from left to right) at $t=0.25$ for Example 2.} \label{stateEx2_1} \end{figure} \begin{figure}[htpb] \centering{ \includegraphics[width=0.3\textwidth]{ex2_soln_y50.pdf} \includegraphics[width=0.3\textwidth]{ex2_err_y.pdf} \includegraphics[width=0.3\textwidth]{ex2_dis_y_50.pdf}} \caption{Computed state $y^{\Delta t}_h$, error $y^{\Delta t}_h-y$ and $y^{\Delta t}_h-y_d$ with $h=\frac{1}{2^7}$ and $\Delta t=\frac{1}{2^8}$ (from left to right) at $t=0.5$ for Example 2.} \label{stateEx2_2} \end{figure} \begin{figure}[htpb] \centering{ \includegraphics[width=0.3\textwidth]{ex2_soln_y75.pdf} \includegraphics[width=0.3\textwidth]{ex2_err_y75.pdf} \includegraphics[width=0.3\textwidth]{ex2_dis_y_75.pdf}} \caption{Computed state $y^{\Delta t}_h$, error $y^{\Delta t}_h-y$ and $y^{\Delta t}_h-y_d$ with $h=\frac{1}{2^7}$ and $\Delta t=\frac{1}{2^8}$ (from left to right) at $t=0.75$ for Example 2.} \label{stateEx2_3} \end{figure} \begin{figure}[htpb] \centering{ \includegraphics[width=0.45\textwidth]{ex2_u25.pdf} \includegraphics[width=0.45\textwidth]{ex2_erru25.pdf}} \caption{Computed control $\bm{u}^{\Delta t}_h$ and exact control $\bm{u}$ (left, from top to bottom) and the error $\bm{u}^{\Delta t}_h-\bm{u}$ (right) with $h=\frac{1}{2^7}$ and $\Delta t=\frac{1}{2^8}$ at $t=0.25$ for Example 2.} \label{controlEx2_1} \end{figure} \begin{figure}[htpb] \centering{ \includegraphics[width=0.45\textwidth]{ex2_u50.pdf} \includegraphics[width=0.45\textwidth]{ex2_erru50.pdf}} \caption{Computed control $\bm{u}^{\Delta t}_h$ and exact control $\bm{u}$ (left, from top to bottom) and the error $\bm{u}^{\Delta t}_h-\bm{u}$ (right) with $h=\frac{1}{2^7}$ 
and $\Delta t=\frac{1}{2^8}$ at $t=0.5$ for Example 2.} \label{controlEx2_2} \end{figure} \begin{figure}[htpb] \centering{ \includegraphics[width=0.45\textwidth]{ex2_u75.pdf} \includegraphics[width=0.45\textwidth]{ex2_erru75.pdf}} \caption{Computed control $\bm{u}^{\Delta t}_h$ and exact control $\bm{u}$ (left, from top to bottom) and the error $\bm{u}^{\Delta t}_h-\bm{u}$ (right) with $h=\frac{1}{2^7}$ and $\Delta t=\frac{1}{2^8}$ at $t=0.75$ for Example 2.} \label{controlEx2_3} \end{figure} \newpage \section{Conclusion and Outlook}\label{se:conclusion} We studied the bilinear control of an advection-reaction-diffusion system, where the control variable enters the model as a velocity field of the advection term. Mathematically, we proved the existence of optimal controls and derived the associated first-order optimality conditions. Computationally, we proposed a conjugate gradient (CG) method, whose implementation is nontrivial. In particular, an additional divergence-free constraint on the control variable leads to a projection subproblem to compute the gradient; and the computation of a stepsize at each CG iteration requires solving the state equation repeatedly due to the nonlinear relation between the state and control variables. To resolve the above issues, we reformulated the gradient computation as a Stokes-type problem and proposed a fast preconditioned CG method to solve it. We also proposed an efficient inexactness strategy to determine the stepsize, which only requires the solution of one linear parabolic equation. An easily implementable nested CG method was thus obtained. For the numerical discretization, we employed the standard piecewise linear finite element method and the Bercovier-Pironneau finite element method for the space discretizations of the bilinear optimal control and the Stokes-type problem, respectively, and a semi-implicit finite difference method for the time discretization.
The resulting algorithm was shown to be efficient by some preliminary numerical experiments. We focused in this paper on an advection-reaction-diffusion system controlled by a velocity field of general form. In a real physical system, the velocity field may be determined by some partial differential equations (PDEs), such as the Navier-Stokes equations. As a result, one encounters bilinear optimal control problems constrained by coupled PDE systems. Moreover, instead of (\ref{objective_functional}), one can also consider other types of objective functionals in the bilinear optimal control of an advection-reaction-diffusion system. For instance, one can incorporate $\iint_{Q}|\nabla \bm{v}|^2dxdt$ and $\iint_{Q}|\frac{\partial \bm{v}}{\partial t}|^2dxdt$ into the objective functional to promote an optimal velocity field that has little rotation or is almost steady, respectively; such properties are essential in, e.g., mixing enhancement for different flows \cite{liu2008}. All these problems are of practical interest but more challenging from the perspective of algorithmic design, and they have not been well addressed numerically in the literature. Our current work has laid a solid foundation for solving these problems, and we leave them for future work. \bibliographystyle{amsplain} {\small \section{Introduction} \subsection{Background and Motivation} The optimal control of distributed parameter systems has important applications in various scientific areas, such as physics, chemistry, engineering, medicine, and finance. We refer to, e.g., \cite{glowinski1994exact, glowinski1995exact, glowinski2008exact, lions1971optimal, troltzsch2010optimal,zuazua2006}, for a few references. In a typical mathematical model of a controlled distributed parameter system, either boundary or internal locally distributed controls are usually used; these controls have localized support and are called additive controls because they arise in the model equations as additive terms.
Optimal control problems with additive controls have received significant attention in the past decades following the pioneering work of J. L. Lions \cite{lions1971optimal}, and many mathematical and computational tools have been developed; see, e.g., \cite{glowinski1994exact,glowinski1995exact,glowinski2008exact,lions1988,zuazua2005,zuazua2007}. However, it is worth noting that additive controls describe the effect of external added sources or forces and they do not change the principal intrinsic properties of the controlled system. Hence, they are not suitable for dealing with processes whose principal intrinsic properties should be changed by some control actions. For instance, if we aim at changing the reaction rate in some chain reaction-type processes from biomedical, nuclear, and chemical applications, additive controls amount to controlling the chain reaction by adding or withdrawing a certain amount of the reactants, which is not realistic. To address this issue, a natural idea is to use certain catalysts or smart materials to control the systems, which can be mathematically modeled by optimal control problems with bilinear controls. We refer to \cite{khapalov2010} for more detailed discussions. Bilinear controls, also known as multiplicative controls, enter the model as coefficients of the corresponding partial differential equations (PDEs). These bilinear controls can change some main physical characteristics of the system under investigation, such as a natural frequency response of a beam or the rate of a chemical reaction.
In the literature, bilinear controls of distributed parameter systems have become an increasingly popular topic, and bilinear optimal control problems constrained by various PDEs, such as elliptic equations \cite{kroner2009}, convection-diffusion equations \cite{borzi2015}, parabolic equations \cite{khapalov2003}, the Schr{\"o}dinger equation \cite{kunisch2007} and the Fokker-Planck equation \cite{fleig2017}, have been widely studied both mathematically and computationally. In particular, bilinear controls play a crucial role in optimal control problems modeled by advection-reaction-diffusion systems. On one hand, the control can be the coefficient of the diffusion or the reaction term. For instance, a system controlled by so-called catalysts that can accelerate or slow down various chemical or biological reactions can be modeled by a bilinear optimal control problem for an advection-reaction-diffusion equation where the control arises as the coefficient of the reaction term \cite{khapalov2003}; this kind of bilinear optimal control problem has been studied in, e.g., \cite{borzi2015,cannarsa2017,khapalov2003,khapalov2010}. On the other hand, the system can also be controlled by the velocity field in the advection term, which captures important applications in, e.g., bioremediation \cite{lenhart1998}, environmental remediation processes \cite{lenhart1995}, and mixing enhancement of different fluids \cite{liu2008}. We note that very limited research has been done on bilinear optimal control problems controlled by the velocity field; only some special one-dimensional cases have been studied in \cite{lenhart1998,joshi2005,lenhart1995} regarding the existence of an optimal control and the derivation of first-order optimality conditions. To the best of our knowledge, no work has been done yet to develop efficient numerical methods for solving multi-dimensional bilinear optimal control problems controlled by the velocity field in the advection term.
All these facts motivate us to study bilinear optimal control problems constrained by an advection-reaction-diffusion equation, where the control enters the model as the velocity field in the advection term. Actually, investigating this kind of problem was suggested to one of us (R. Glowinski), in the late 1990's, by J. L. Lions (1928-2001). \subsection{Model} Let $\Omega$ be a bounded domain of $\mathbb{R}^d$ with $d\geq 1$ and let $\Gamma$ be its boundary. We consider the following bilinear optimal control problem: \begin{flalign}\tag{BCP} & \left\{ \begin{aligned} & \bm{u}\in \mathcal{U}, \\ &J(\bm{u})\leq J(\bm{v}), \forall \bm{v}\in \mathcal{U}, \end{aligned} \right. \end{flalign} with the objective functional $J$ defined by \begin{equation}\label{objective_functional} J(\bm{v})=\frac{1}{2}\iint_Q|\bm{v}|^2dxdt+\frac{\alpha_1}{2}\iint_Q|y-y_d|^2dxdt+\frac{\alpha_2}{2}\int_\Omega|y(T)-y_T|^2dx, \end{equation} and $y=y(t;\bm{v})$ the solution of the following advection-reaction-diffusion equation \begin{flalign}\label{state_equation} & \left\{ \begin{aligned} \frac{\partial y}{\partial t}-\nu \nabla^2y+\bm{v}\cdot \nabla y+a_0y&=f\quad \text{in}\quad Q, \\ y&=g\quad \text{on}\quad \Sigma,\\ y(0)&=\phi. \end{aligned} \right. \end{flalign} Above and below, $Q=\Omega\times (0,T)$ and $\Sigma=\Gamma\times (0,T)$ with $0<T<+\infty$; $\alpha_1\geq 0, \alpha_2\geq 0, \alpha_1+\alpha_2>0$; the target functions $y_d$ and $y_T$ are given in $L^2(Q)$ and $L^2(\Omega)$, respectively; the diffusion coefficient $\nu>0$ and the reaction coefficient $a_0$ are assumed to be constants; the functions $f\in L^2(Q)$, $g\in L^2(0,T;H^{1/2}(\Gamma))$ and $\phi\in L^2(\Omega)$. The set $\mathcal{U}$ of the admissible controls is defined by \begin{equation*} \mathcal{U}:=\{\bm{v}|\bm{v}\in [L^2(Q)]^d, \nabla\cdot\bm{v}=0\}.
\end{equation*} Clearly, the control variable $\bm{v}$ arises in (BCP) as a flow velocity field in the advection term of (\ref{state_equation}), and the divergence-free constraint $\nabla\cdot\bm{v}=0$ implies that the flow is incompressible. One can control the system by changing the flow velocity $\bm{v}$ in order that $y$ and $y(T)$ are good approximations to $y_d$ and $y_T$, respectively. \subsection{Difficulties and Goals} In this paper, we intend to study the bilinear optimal control problem (BCP) in the general case of $d\geq 2$ both mathematically and computationally. Precisely, we first study the well-posedness of (\ref{state_equation}), the existence of an optimal control $\bm{u}$, and its first-order optimality condition. Then, computationally, we propose an efficient and relatively easy-to-implement numerical method to solve (BCP). For this purpose, we advocate combining a conjugate gradient (CG) method with a finite difference method (for the time discretization) and a finite element method (for the space discretization) for the numerical solution of (BCP). Although these numerical approaches have been well developed in the literature, it is nontrivial to apply them to (BCP), as discussed below, due to the complicated problem setting. \subsubsection{Difficulties in Algorithmic Design} Conceptually, a CG method for solving (BCP) can be easily derived following \cite{glowinski2008exact}. However, CG algorithms are challenging to implement numerically for the following reasons: 1) The state $y$ depends nonlinearly on the control $\bm{v}$, despite the fact that the state equation (\ref{state_equation}) is linear. 2) The additional divergence-free constraint on the control $\bm{v}$, i.e., $\nabla\cdot\bm{v}=0$, is coupled together with the state equation (\ref{state_equation}). To be more precise, the fact that the state $y$ is a nonlinear function of the control $\bm{v}$ makes the optimality system a nonlinear problem.
Hence, seeking a suitable stepsize in each CG iteration requires solving an optimization problem, and the stepsize cannot be computed as easily as in the linear case \cite{glowinski2008exact}. Note that commonly used line search strategies are too expensive to employ in our setting because they require evaluating the objective functional value $J(\bm{v})$ repeatedly, and every evaluation of $J(\bm{v})$ entails solving the state equation (\ref{state_equation}). The same concern about the computational cost also applies when the Newton method is employed to solve the corresponding optimization problem for finding a stepsize. To tackle this issue, we propose an efficient inexact stepsize strategy which requires solving only one additional linear parabolic problem and is cheap to implement, as shown in Section \ref{se:cg}. Furthermore, due to the divergence-free constraint $\nabla\cdot\bm{v}=0$, an extra projection onto the admissible set $\mathcal{U}$ is required to compute the first-order differential of $J$ at each CG iteration so that all iterates of the CG method remain feasible. Generally, this projection subproblem has no closed-form solution and has to be solved iteratively. Here, we introduce a Lagrange multiplier associated with the constraint $\nabla\cdot\bm{v}=0$; the computation of the first-order differential $DJ(\bm{v})$ of $J$ at $\bm{v}$ is then equivalent to solving a Stokes-type problem. Inspired by \cite{glowinski2003}, we advocate employing a preconditioned CG method, which operates on the space of the Lagrange multiplier, to solve the resulting Stokes-type problem. With an appropriately chosen preconditioner, fast convergence of the resulting preconditioned CG method can be expected in practice (and has indeed been observed).
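The idea behind such a linearization-based stepsize can be illustrated on a toy discrete problem in which evaluating the state map is just a matrix-vector product; in the paper this role is played by one solve of the linearized parabolic equation. Everything below, including the matrix $A$ and the use of plain steepest descent instead of the full CG recursion, is an illustrative assumption:

```python
import numpy as np

# Toy analogue of J(v) = 0.5*|v|^2 + 0.5*alpha1*|S(v) - y_d|^2 with a
# *linear* stand-in S(v) = A @ v for the control-to-state map.
rng = np.random.default_rng(0)
m = 20
A = rng.standard_normal((m, m)) / np.sqrt(m)  # stand-in "state solver"
y_d = rng.standard_normal(m)
alpha1 = 10.0

def grad_J(v):
    return v + alpha1 * A.T @ (A @ v - y_d)

def inexact_stepsize(v, g):
    # Linearize the state around v: only ONE extra "state solve" y1 = A g
    # is needed; J(v - rho*g) then becomes a quadratic in rho whose
    # minimizer is available in closed form.
    y1 = A @ g
    residual = A @ v - y_d
    num = g @ v + alpha1 * (y1 @ residual)
    den = g @ g + alpha1 * (y1 @ y1)
    return num / den

v = np.zeros(m)
for _ in range(2000):
    g = grad_J(v)
    if np.linalg.norm(g) < 1e-8:
        break
    v -= inexact_stepsize(v, g) * g  # steepest descent for brevity
```

For this toy quadratic model the linearization is exact, so the stepsize is the exact one-dimensional minimizer; in the genuinely nonlinear setting of (BCP) it is only an approximation, which is why the strategy is called inexact.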
\subsubsection{Difficulties in Numerical Discretization} For the numerical discretization of (BCP), we note that if an implicit finite difference scheme is used for the time discretization of the state equation (\ref{state_equation}), a stationary advection-reaction-diffusion equation has to be solved at each time step. To solve this stationary advection-reaction-diffusion equation, it is well known that standard finite element techniques may lead to strongly oscillatory solutions unless the mesh size is sufficiently small with respect to the ratio between $\nu$ and $\|\bm{v}\|$. In the context of optimal control problems, to overcome such difficulties, different stabilized finite element methods have been proposed and analyzed, see e.g., \cite{BV07,DQ05}. Different from the above references, we implement the time discretization by a semi-implicit finite difference method for simplicity; namely, we treat the advection and reaction terms explicitly and the diffusion term implicitly. Consequently, only a simple linear elliptic equation needs to be solved at each time step. We then implement the space discretization of the resulting elliptic equation at each time step by a standard piecewise linear finite element method, and the resulting linear system is very easy to solve. Moreover, we recall that the divergence-free constraint $\nabla\cdot \bm{v}=0$ leads to a projection subproblem, which is equivalent to a Stokes-type problem, at each iteration of the CG algorithm. As discussed in \cite{glowinski1992}, to discretize a Stokes-type problem, direct application of standard finite element methods always leads to an ill-posed discrete problem. To overcome this difficulty, one can use different types of element approximations for pressure and velocity.
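A one-dimensional finite difference analogue of this semi-implicit scheme can be sketched as follows. The structure (explicit advection and reaction, implicit diffusion, one tridiagonal solve per step) mirrors the time discretization described above, while the grid, coefficients, and homogeneous Dirichlet boundary conditions are illustrative assumptions:

```python
import numpy as np

def semi_implicit_step(y, v, nu, a0, dt, dx):
    """Advance y_t + v*y_x - nu*y_xx + a0*y = 0 by one time step:
    advection and reaction explicit, diffusion implicit, so each step
    costs one tridiagonal solve. y holds interior values; homogeneous
    Dirichlet values y = 0 are assumed at both walls."""
    n = y.size
    # explicit advection v*y_x by central differences (ghost values 0)
    yl = np.concatenate(([0.0], y[:-1]))   # left neighbors
    yr = np.concatenate((y[1:], [0.0]))    # right neighbors
    adv = v * (yr - yl) / (2.0 * dx)
    rhs = y - dt * (adv + a0 * y)
    # implicit diffusion: (I - dt*nu*D2) y_new = rhs
    r = dt * nu / dx**2
    M = ((1.0 + 2.0 * r) * np.eye(n)
         - r * np.eye(n, k=1) - r * np.eye(n, k=-1))
    return np.linalg.solve(M, rhs)

# evolve a Gaussian bump with a constant advection velocity
n_pts, n_steps = 99, 50
dx = 1.0 / (n_pts + 1)
x = dx * np.arange(1, n_pts + 1)
y = np.exp(-100.0 * (x - 0.5) ** 2)
for _ in range(n_steps):
    y = semi_implicit_step(y, v=0.5, nu=0.1, a0=1.0, dt=1e-3, dx=dx)
```

In the actual algorithm each such step is a piecewise linear finite element solve in space rather than this finite difference surrogate, but the cost structure per time step is the same: one linear elliptic solve.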
Inspired by \cite{glowinski1992,glowinski2003}, we employ the Bercovier-Pironneau finite element pair \cite{BP79} (also known as the $P_1$-$P_1$ iso $P_2$ finite element) to approximate the control $\bm{v}$ and the Lagrange multiplier associated with the divergence-free constraint. More concretely, we approximate the Lagrange multiplier by a piecewise linear finite element space on a mesh twice as coarse as the one used for the control $\bm{v}$. In this way, the discrete problem is well-posed and can be solved by a preconditioned CG method. As a byproduct of the above discretization, the total number of degrees of freedom of the discrete Lagrange multiplier is only $\frac{1}{d2^d}$ of that of the discrete control. Hence, the inner preconditioned CG method is implemented in a lower-dimensional space than that of the state equation (\ref{state_equation}), implying a reduction in computational cost. With the above-mentioned discretization schemes, we can relatively easily obtain the fully discrete version of (BCP) and derive the discrete analogue of our proposed nested CG method. \subsection{Organization} An outline of this paper is as follows. In Section \ref{se:existence and oc}, we prove the existence of optimal controls for (BCP) and derive the associated first-order optimality conditions. An easily implementable nested CG method is proposed in Section \ref{se:cg} for solving (BCP) numerically. In Section \ref{se:discretization}, we discuss the numerical discretization of (BCP) by finite difference and finite element methods. Some preliminary numerical results are reported in Section \ref{se:numerical} to validate the efficiency of our proposed numerical approach. Finally, some conclusions are drawn in Section \ref{se:conclusion}. \section{Existence of optimal controls and first-order optimality conditions}\label{se:existence and oc} In this section, we first present some notation and known results from the literature that will be used in later analysis.
Then, we prove the existence of optimal controls for (BCP) and derive the associated first-order optimality conditions. Without loss of generality, we assume that $f=0$ and $g=0$ in (\ref{state_equation}) for convenience. \subsection{Preliminaries} Throughout, we denote by $L^s(\Omega)$ and $H^s(\Omega)$ the usual Lebesgue and Sobolev spaces for any $s>0$. The space $H_0^s(\Omega)$ denotes the completion of $C_0^{\infty}(\Omega)$ in $H^s(\Omega)$, where $C_0^{\infty}(\Omega)$ denotes the space of all infinitely differentiable functions over $\Omega$ with a compact support in $\Omega$. In addition, we shall also use the following vector-valued function spaces: \begin{eqnarray*} &&\bm{L}^2(\Omega):=[L^2(\Omega)]^d,\\ &&\bm{L}_{div}^2(\Omega):=\{\bm{v}\in \bm{L}^2(\Omega),\nabla\cdot\bm{v}=0~\text{in}~\Omega\}. \end{eqnarray*} Let $X$ be a Banach space with norm $\|\cdot\|_X$; then the space $L^2(0, T;X)$ consists of all measurable functions $z:(0,T)\rightarrow X$ satisfying $$ \|z\|_{L^2(0, T;X)}:=\left(\int_0^T\|z(t)\|_X^2dt \right)^{\frac{1}{2}}<+\infty. $$ With the above notation, it is clear that the admissible set $\mathcal{U}$ can be written as $\mathcal{U}:=L^2(0,T; \bm{L}_{div}^2(\Omega))$. Moreover, the space $W(0,T)$ consists of all functions $z\in L^2(0, T; H_0^1(\Omega))$ such that $\frac{\partial z}{\partial t}\in L^2(0, T; H^{-1}(\Omega))$ exists in a weak sense, i.e. $$ W(0,T):=\{z|z\in L^2(0,T; H_0^1(\Omega)), \frac{\partial z}{\partial t}\in L^2(0,T; H^{-1}(\Omega))\}, $$ where $H^{-1}(\Omega)(=H_0^1(\Omega)^\prime)$ is the dual space of $H_0^1(\Omega)$. Next, we summarize some known results from the literature for the advection-reaction-diffusion equation (\ref{state_equation}) for the convenience of further analysis.
The variational formulation of the state equation (\ref{state_equation}) reads: find $y\in W(0,T)$ such that $y(0)=\phi$ and $\forall z\in L^2(0,T;H_0^1(\Omega))$, \begin{equation}\label{weak_form} \int_0^T\left\langle\frac{\partial y}{\partial t}, z \right\rangle_{H^{-1}(\Omega),H_0^1(\Omega)} dt+\nu\iint_{Q} \nabla y\cdot\nabla z dxdt+\iint_{Q}\bm{v}\cdot\nabla yzdxdt+a_0\iint_{Q} yzdxdt=0, \end{equation} where $\left\langle\cdot,\cdot\right\rangle_{H^{-1}(\Omega),H_0^1(\Omega)}$ denotes the duality pairing between $H^{-1}(\Omega)$ and $H_0^1(\Omega)$. The existence and uniqueness of the solution $y\in W(0,T)$ to problem (\ref{weak_form}) can be proved by standard arguments relying on the Lax-Milgram theorem; we refer to \cite{lions1971optimal} for the details. Moreover, we can define the control-to-state operator $S:\mathcal{U}\rightarrow W(0,T)$, which maps $\bm{v}$ to $y=S(\bm{v})$. Then, the objective functional $J$ in (BCP) can be reformulated as \begin{equation*} J(\bm{v})=\frac{1}{2}\iint_Q|\bm{v}|^2dxdt+\frac{\alpha_1}{2}\iint_Q|S(\bm{v})-y_d|^2dxdt+\frac{\alpha_2}{2}\int_\Omega|S(\bm{v})(T)-y_T|^2dx, \end{equation*} and the nonlinearity of the solution operator $S$ implies that (BCP) is nonconvex. For the solution $y\in W(0,T)$, we have the following estimates. \begin{lemma} Let $\bm{v}\in L^2(0,T; \bm{L}^2_{div}(\Omega))$, then the solution $y\in W(0,T)$ of the state equation (\ref{state_equation}) satisfies the following estimate: \begin{equation}\label{est_y} \|y(t)\|_{L^2(\Omega)}^2+2\nu\int_0^t\|\nabla y(s)\|_{L^2(\Omega)}^2ds+2a_0\int_0^t\| y(s)\|_{L^2(\Omega)}^2ds=\|\phi\|_{L^2(\Omega)}^2. \end{equation} \end{lemma} \begin{proof} We first multiply the state equation (\ref{state_equation}) by $y(t)$ and integrate over $\Omega$. Applying Green's formula in space, and noting that the advection term vanishes because $\int_\Omega (\bm{v}\cdot\nabla y)\,y\,dx=\frac{1}{2}\int_\Omega \bm{v}\cdot\nabla (y^2)\,dx=-\frac{1}{2}\int_\Omega (\nabla\cdot\bm{v})\,y^2\,dx=0$ (since $\nabla\cdot\bm{v}=0$ and $y=0$ on $\Gamma$), yields \begin{equation}\label{e1} \frac{1}{2}\frac{d}{dt}\|y(t)\|_{L^2(\Omega)}^2=-\nu\|\nabla y(t)\|_{L^2(\Omega)}^2-a_0\|y(t)\|_{L^2(\Omega)}^2.
\end{equation} The desired result (\ref{est_y}) can be directly obtained by integrating (\ref{e1}) over $[0,t]$. \end{proof} The above estimate implies that \begin{equation}\label{bd_y} y~\text{is bounded in}~L^2(0,T; H_0^1(\Omega)). \end{equation} On the other hand, $$ \frac{\partial y}{\partial t}=\nu \nabla^2y-\bm{v}\cdot \nabla y-a_0y, $$ and the right-hand side is bounded in $L^2(0,T; H^{-1}(\Omega))$. Hence, \begin{equation}\label{bd_yt} \frac{\partial y}{\partial t}~\text{is bounded in}~ L^2(0,T; H^{-1}(\Omega)). \end{equation} Furthermore, since $\nabla\cdot\bm{v}=0$, it is clear that $$\iint_Q\bm{v}\cdot\nabla yzdxdt=\iint_Q\nabla y\cdot (\bm{v}z)dxdt=-\iint_Q y\nabla\cdot(\bm{v}z)dxdt=-\iint_Q y(\bm{v}\cdot\nabla z)dxdt,\forall z\in L^2(0,T;H_0^1(\Omega)).$$ Hence, the variational formulation (\ref{weak_form}) can be equivalently written as: find $y\in W(0,T)$ such that $y(0)=\phi$ and $\forall z\in L^2(0,T;H_0^1(\Omega))$, \begin{equation*} \int_0^T\left\langle\frac{\partial y}{\partial t}, z \right\rangle_{H^{-1}(\Omega),H_0^1(\Omega)} dt+\nu\iint_{Q} \nabla y\cdot\nabla z dxdt-\iint_Q(\bm{v}\cdot\nabla z) ydxdt+a_0\iint_{Q} yzdxdt=0. \end{equation*} \subsection{Existence of Optimal Controls} With the above preparations, we prove in this subsection the existence of optimal controls for (BCP). For this purpose, we first show that the objective functional $J$ is weakly lower semi-continuous. \begin{lemma}\label{wlsc} The objective functional $J$ given by (\ref{objective_functional}) is weakly lower semi-continuous. That is, if a sequence $\{\bm{v}_n\}$ converges weakly to $\bar{\bm{v}}$ in $L^2(0,T; \bm{L}^2_{div}(\Omega))$, we have $$ J(\bar{\bm{v}})\leq \underset{n\rightarrow \infty}{\lim\inf} J(\bm{v}_n).
$$ \end{lemma} \begin{proof} Let $\{\bm{v}_n\}$ be a sequence that converges weakly to $\bar{\bm{v}}$ in $L^2(0,T;\bm{L}^2_{div}(\Omega))$ and $y_n:=y(x,t;\bm{v}_n)$ the solution of the following variational problem: find $y_n\in W(0,T)$ such that $y_n(0)=\phi$ and $\forall z\in L^2(0,T;H_0^1(\Omega))$, \begin{equation}\label{seq_state} \int_0^T\left\langle\frac{\partial y_n}{\partial t}, z \right\rangle_{H^{-1}(\Omega),H_0^1(\Omega)} dt+\nu\iint_{Q} \nabla y_n\cdot\nabla z dxdt-\iint_Q(\bm{v}_n\cdot\nabla z) y_ndxdt+a_0\iint_{Q} y_nzdxdt=0. \end{equation} Moreover, it follows from (\ref{bd_y}) and (\ref{bd_yt}) that there exists a subsequence of $\{y_n\}$, still denoted by $\{y_n\}$ for convenience, such that $$y_n\rightarrow\bar{y}~\text{weakly in}~ L^2(0,T; H_0^1(\Omega)),$$ and $$\frac{\partial y_n}{\partial t}\rightarrow\frac{\partial \bar{y}}{\partial t} ~\text{weakly in}~L^2(0,T; H^{-1}(\Omega)).$$ Since $\Omega$ is bounded, it follows from the Aubin--Lions compactness lemma (which relies on the Rellich--Kondrachov theorem) that $$y_n\rightarrow\bar{y}~\text{strongly in}~ L^2(0,T; L^2(\Omega)).$$ Taking $\bm{v}_n\rightarrow \bar{\bm{v}}$ weakly in $L^2(0,T; \bm{L}_{div}^2(\Omega))$ into account, we can pass to the limit in (\ref{seq_state}) and derive that $\bar{y}(0)=\phi$ and $\forall z\in L^2(0,T;H_0^1(\Omega))$, \begin{equation*} \int_0^T\left\langle\frac{\partial \bar{y}}{\partial t}, z \right\rangle_{H^{-1}(\Omega),H_0^1(\Omega)} dt+\nu\iint_{Q} \nabla \bar{y}\cdot\nabla z dxdt-\iint_Q(\bar{\bm{v}}\cdot\nabla z) \bar{y}dxdt+a_0\iint_{Q} \bar{y}zdxdt=0, \end{equation*} which implies that $\bar{y}$ is the solution of the state equation (\ref{state_equation}) associated with $\bar{\bm{v}}$.
Since any norm of a Banach space is weakly lower semi-continuous, we have that \begin{equation*} \begin{aligned} &\underset{n\rightarrow \infty}{\lim\inf} J(\bm{v}_n)\\ = &\underset{n\rightarrow \infty}{\lim\inf}\left( \frac{1}{2}\iint_Q|\bm{v}_n|^2dxdt+\frac{\alpha_1}{2}\iint_Q|y_n-y_d|^2dxdt+\frac{\alpha_2}{2}\int_\Omega|y_n(T)-y_T|^2dx\right)\\ \geq& \frac{1}{2}\iint_Q|\bar{\bm{v}}|^2dxdt+\frac{\alpha_1}{2}\iint_Q|\bar{y}-y_d|^2dxdt+\frac{\alpha_2}{2}\int_\Omega|\bar{y}(T)-y_T|^2dx\\ =& J(\bar{\bm{v}}). \end{aligned} \end{equation*} We thus obtain that the objective functional $J$ is weakly lower semi-continuous and complete the proof. \end{proof} Now, we are in a position to prove the existence of an optimal solution $\bm{u}$ to (BCP). \begin{theorem}\label{thm_existence} There exists at least one optimal control $\bm{u}\in \mathcal{U}=L^2(0,T; \bm{L}_{div}^2(\Omega))$ such that $J(\bm{u})\leq J(\bm{v}),\forall\bm{v}\in \mathcal{U}$. \end{theorem} \begin{proof} We first observe that $J(\bm{v})\geq 0,\forall\bm{v}\in \mathcal{U}$, then the infimum of $J(\bm{v})$ exists and we denote it as $$ j=\inf_{\bm{v}\in\mathcal{U}}J(\bm{v}), $$ and there is a minimizing sequence $\{\bm{v}_n\}\subset\mathcal{U}$ such that $$ \lim_{n\rightarrow \infty}J(\bm{v}_n)=j. $$ This fact, together with $\frac{1}{2}\|\bm{v}_n\|^2_{L^2(0,T; \bm{L}^2_{div}(\Omega))}\leq J(\bm{v}_n)$, implies that $\{\bm{v}_n\}$ is bounded in $L^2(0,T;\bm{L}^2_{div}(\Omega))$. Hence, there exists a subsequence, still denoted by $\{\bm{v}_n\}$, that converges weakly to $\bm{u}$ in $L^2(0,T; \bm{L}^2_{div}(\Omega))$. It follows from Lemma \ref{wlsc} that $J$ is weakly lower semi-continuous and we thus have $$ J(\bm{u})\leq \underset{n\rightarrow \infty}{\lim\inf} J(\bm{v}_n)=j. $$ Since $\bm{u}\in\mathcal{U}$, we must have $J(\bm{u})=j$, and $\bm{u}$ is therefore an optimal control. 
\end{proof} We note that, because the objective functional $J$ is nonconvex due to the nonlinear relationship between the state $y$ and the control $\bm{v}$, the uniqueness of the optimal control $\bm{u}$ cannot be guaranteed and only a locally optimal solution can be pursued. \subsection{First-order Optimality Conditions} Let $DJ(\bm{v})$ be the first-order differential of $J$ at $\bm{v}$ and $\bm{u}$ an optimal control of (BCP). It is clear that the first-order optimality condition of (BCP) reads \begin{equation*} DJ(\bm{u})=0. \end{equation*} In the remainder of this subsection, we discuss the computation of $DJ(\bm{v})$, which will play an important role in subsequent sections. To compute $DJ(\bm{v})$, we employ a formal perturbation analysis as in \cite{glowinski2008exact}. First, let $\delta \bm{v}\in \mathcal{U}$ be a perturbation of $\bm{v}\in \mathcal{U}$; we clearly have \begin{equation}\label{Dj and delta j} \delta J(\bm{v})=\iint_{Q}DJ(\bm{v})\cdot\delta \bm{v} dxdt, \end{equation} and also \begin{eqnarray}{\label{def_delta_j}} \begin{aligned} &\delta J(\bm{v})=\iint_{Q}\bm{v}\cdot\delta \bm{v} dxdt+\alpha_1\iint_{Q}(y-y_d)\delta y dxdt+\alpha_2\int_\Omega(y(T)-y_T)\delta y(T)dx, \end{aligned} \end{eqnarray} in which $\delta y$ is the solution of \begin{flalign}\label{perturbation_state_eqn} &\left\{ \begin{aligned} \frac{\partial \delta y}{\partial t}-\nu \nabla^2\delta y+\delta \bm{v}\cdot \nabla y+\bm{v}\cdot\nabla\delta y+a_0\delta y&=0\quad \text{in}\quad Q, \\ \delta y&=0\quad \text{on}\quad \Sigma,\\ \delta y(0)&=0. \end{aligned} \right. \end{flalign} Consider now a function $p$ defined over $\overline{Q}$ (the closure of $Q$), and assume that $p$ is a differentiable function of $x$ and $t$.
Multiplying both sides of the first equation in (\ref{perturbation_state_eqn}) by $p$ and integrating over $Q$, we obtain \begin{equation*} \iint_{Q}p\frac{\partial }{\partial t}\delta ydxdt-\nu\iint_{Q}p \nabla^2\delta ydxdt+\iint_Q\delta \bm{v}\cdot \nabla ypdxdt+\iint_Q\bm{v}\cdot\nabla\delta ypdxdt+a_0\iint_{Q}p\delta ydxdt=0. \end{equation*} Integration by parts in time and application of Green's formula in space yield \begin{eqnarray}{\label{weakform_p}} \begin{aligned} \int_\Omega p(T)\delta y(T)dx-\int_\Omega p(0)\delta y(0)dx+\iint_{Q}\Big[ -\frac{\partial p}{\partial t} -\nu\nabla^2p-\bm{v}\cdot\nabla p+a_0p\Big]\delta ydxdt\\ +\iint_Q\delta \bm{v}\cdot \nabla ypdxdt-\nu\iint_{\Sigma}(\frac{\partial\delta y}{\partial \bm{n}}p-\frac{\partial p}{\partial \bm{n}}\delta y)dxdt+\iint_\Sigma p\delta y\bm{v}\cdot \bm{n}dxdt=0, \end{aligned} \end{eqnarray} where $\bm{n}$ is the unit outward normal vector on $\Gamma$. Next, let us assume that the function $p$ is the solution to the following adjoint system \begin{flalign}\label{adjoint_equation} &\qquad \left\{ \begin{aligned} -\frac{\partial p}{\partial t} -\nu\nabla^2p-\bm{v}\cdot\nabla p +a_0p&=\alpha_1(y-y_d)~ \text{in}~ Q, \\ p&=0~\qquad\quad\quad\text{on}~ \Sigma,\\ p(T)&=\alpha_2(y(T)-y_T). \end{aligned} \right. \end{flalign} It follows from (\ref{def_delta_j}), (\ref{perturbation_state_eqn}), (\ref{weakform_p}) and (\ref{adjoint_equation}) that \begin{equation*} \delta J(\bm{v})=\iint_{Q}(\bm{v}-p\nabla y)\cdot\delta \bm{v} dxdt, \end{equation*} which, together with (\ref{Dj and delta j}), implies that \begin{equation}\label{gradient} \left\{ \begin{aligned} &DJ(\bm{v})\in \mathcal{U},\\ &\iint_QDJ(\bm{v})\cdot \bm{z}dxdt=\iint_Q(\bm{v}-p\nabla y)\cdot \bm{z}dxdt,\forall \bm{z}\in \mathcal{U}. \end{aligned} \right. \end{equation} From the discussion above, the first-order optimality condition of (BCP) can be summarized as follows. \begin{theorem} Let $\bm{u}\in \mathcal{U}$ be a solution of (BCP).
Then, it satisfies the following optimality condition \begin{equation*} \iint_Q(\bm{u}-p\nabla y)\cdot \bm{z}dxdt=0,\forall \bm{z}\in \mathcal{U}, \end{equation*} where $y$ and $p$ are obtained from $\bm{u}$ via the solutions of the following two parabolic equations: \begin{flalign*}\tag{state equation} &\quad\qquad\qquad\qquad\left\{ \begin{aligned} \frac{\partial y}{\partial t}-\nu \nabla^2y+\bm{u}\cdot \nabla y+a_0y&=f\quad \text{in}~ Q, \\ y&=g\quad \text{on}~\Sigma,\\ y(0)&=\phi, \end{aligned} \right.& \end{flalign*} and \begin{flalign*}\tag{adjoint equation} &\qquad\qquad\qquad\left\{ \begin{aligned} -\frac{\partial p}{\partial t} -\nu\nabla^2p-\bm{u}\cdot\nabla p +a_0p&=\alpha_1(y-y_d)\quad \text{in}~ Q, \\ p&=0 \quad\qquad\qquad\text{on}~\Sigma,\\ p(T)&=\alpha_2(y(T)-y_T). \end{aligned} \right.& \end{flalign*} \end{theorem} \section{An Implementable Nested Conjugate Gradient Method}\label{se:cg} In this section, we discuss the application of a CG strategy to solve (BCP). In particular, we elaborate on the computation of the gradient and the stepsize at each CG iteration, and thus obtain an easily implementable algorithm. \subsection{A Generic Conjugate Gradient Method for (BCP)} Conceptually, applying the CG method to (BCP), we readily obtain the following algorithm: \begin{enumerate} \item[\textbf{(a)}] Given $\bm{u}^0\in \mathcal{U}$. \item [\textbf{(b)}] Compute $\bm{g}^0=DJ(\bm{u}^0)$. If $DJ(\bm{u}^0)=0$, then $\bm{u}=\bm{u}^0$; otherwise set $\bm{w}^0=\bm{g}^0$. \item[]\noindent For $k\geq 0$, $\bm{u}^k,\bm{g}^k$ and $\bm{w}^k$ being known, the last two different from $\bm{0}$, one computes $\bm{u}^{k+1}, \bm{g}^{k+1}$ and $\bm{w}^{k+1}$ as follows: \item[\textbf{(c)}] Compute the stepsize $\rho_k$ by solving the following optimization problem \begin{flalign}\label{op_step} &\left\{ \begin{aligned} & \rho_k\in \mathbb{R}, \\ &J(\bm{u}^k-\rho_k\bm{w}^k)\leq J(\bm{u}^k-\rho \bm{w}^k), \forall \rho\in \mathbb{R}. \end{aligned} \right.
\end{flalign} \item[\textbf{(d)}] Update $\bm{u}^{k+1}$ and $\bm{g}^{k+1}$, respectively, by $$\bm{u}^{k+1}=\bm{u}^k-\rho_k \bm{w}^k,$$ and $$\bm{g}^{k+1}=DJ(\bm{u}^{k+1}).$$ \item[] If $DJ(\bm{u}^{k+1})=0$, take $\bm{u}=\bm{u}^{k+1}$; otherwise, \item[\textbf{(e)}] Compute $$\beta_k=\frac{\iint_{Q}|\bm{g}^{k+1}|^2dxdt}{\iint_{Q}|\bm{g}^k|^2dxdt},$$ and then update $$\bm{w}^{k+1}=\bm{g}^{k+1}+\beta_k \bm{w}^k.$$ \item[] Do $k+1\rightarrow k$ and return to (\textbf{c}). \end{enumerate} The above iterative method looks very simple but, practically, the implementation of the CG method (\textbf{a})--(\textbf{e}) for the solution of (BCP) is nontrivial. In particular, it is numerically challenging to compute $DJ(\bm{v})$ for a given $\bm{v}\in\mathcal{U}$ and the stepsize $\rho_k$, as illustrated below. We discuss how to address these two issues in the rest of this section. \subsection{Computation of $DJ(\bm{v})$}\label{com_gra} It is clear that the implementation of the generic CG method (\textbf{a})--(\textbf{e}) for the solution of (BCP) requires the knowledge of $DJ(\bm{v})$ for various $\bm{v}\in \mathcal{U}$, and this has been conceptually provided in (\ref{gradient}). However, it is numerically challenging to compute $DJ(\bm{v})$ by (\ref{gradient}) due to the restriction $\nabla\cdot DJ(\bm{v})=0$, which ensures that all iterates $\bm{u}^k$ of the CG method meet the additional divergence-free constraint $\nabla\cdot \bm{u}^k=0$. In this subsection, we show that equation (\ref{gradient}) can be reformulated as a saddle point problem by introducing a Lagrange multiplier associated with the constraint $\nabla\cdot DJ(\bm{v})=0$. Then, a preconditioned CG method is proposed to solve this saddle point problem.
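Setting these two difficulties aside for a moment, the generic loop (\textbf{a})--(\textbf{e}) is a Fletcher--Reeves CG iteration on the control. The following Python sketch makes this structure explicit; it is a minimal illustration only, in which \texttt{grad} and \texttt{step} are placeholder callables standing in for the gradient computation $DJ$ (state plus adjoint solves) and for the stepsize rule discussed below.

```python
import numpy as np

# Minimal sketch of the generic CG loop (a)-(e). `grad` and `step` are
# placeholders for DJ(u) (state + adjoint solves) and the stepsize rule.
def cg_minimize(u0, grad, step, tol=1e-12, max_iter=100):
    u = u0.copy()
    g = grad(u)                        # (b) g^0 = DJ(u^0)
    w = g.copy()                       # w^0 = g^0
    g_norm2 = float(np.vdot(g, g).real)
    for _ in range(max_iter):
        if g_norm2 <= tol:             # stop when DJ(u) is (numerically) 0
            break
        rho = step(u, w)               # (c) stepsize rho_k
        u = u - rho * w                # (d) u^{k+1} = u^k - rho_k w^k
        g_new = grad(u)
        g_norm2_new = float(np.vdot(g_new, g_new).real)
        beta = g_norm2_new / g_norm2   # (e) Fletcher-Reeves ratio beta_k
        w = g_new + beta * w           # new search direction
        g, g_norm2 = g_new, g_norm2_new
    return u
```

For a quadratic functional with exact line search this reduces to the classical conjugate gradient method and terminates in finitely many iterations; for (BCP) the two callables are exactly the nontrivial ingredients addressed next.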
We first note that equation (\ref{gradient}) can be equivalently reformulated as \begin{equation}\label{gradient_e} \left\{ \begin{aligned} &DJ(\bm{v})(t)\in \mathbb{S},\\ &\int_\Omega DJ(\bm{v})(t)\cdot \bm{z}dx=\int_\Omega(\bm{v}(t)-p(t)\nabla y(t))\cdot \bm{z}dx,\forall \bm{z}\in \mathbb{S}, \end{aligned} \right. \end{equation} where \begin{equation*} \mathbb{S}=\{\bm{z}|\bm{z}\in [L^2(\Omega)]^d, \nabla\cdot\bm{z}=0\}. \end{equation*} Clearly, problem (\ref{gradient_e}) is a particular case of \begin{equation}\label{gradient_e2} \left\{ \begin{aligned} &\bm{g}\in \mathbb{S},\\ &\int_\Omega \bm{g}\cdot \bm{z}dx=\int_\Omega\bm{f}\cdot \bm{z}dx,\forall \bm{z}\in \mathbb{S}, \end{aligned} \right. \end{equation} with $\bm{f}$ given in $[L^2(\Omega)]^d$. Introducing a Lagrange multiplier $\lambda\in H_0^1(\Omega)$ associated with the constraint $\nabla\cdot\bm{z}=0$, it is clear that problem (\ref{gradient_e2}) is equivalent to the following saddle point problem \begin{equation}\label{gradient_e3} \left\{ \begin{aligned} &(\bm{g},\lambda)\in [L^2(\Omega)]^d\times H_0^1(\Omega),\\ &\int_\Omega \bm{g}\cdot \bm{z}dx=\int_\Omega\bm{f}\cdot \bm{z}dx+\int_\Omega\lambda\nabla\cdot \bm{z}dx,\forall \bm{z}\in [L^2(\Omega)]^d,\\ &\int_\Omega\nabla\cdot \bm{g}qdx=0,\forall q\in H_0^1(\Omega), \end{aligned} \right. \end{equation} which is actually a Stokes type problem. In order to solve problem (\ref{gradient_e3}), we advocate a CG method inspired by \cite{glowinski2003, glowinski2015}. For this purpose, one has to specify the inner product to be used over $H_0^1(\Omega)$. As discussed in \cite{glowinski2003}, the usual $L^2$-inner product, namely, $\{q,q'\}\rightarrow\int_\Omega qq'dx$, leads to a CG method with poor convergence properties.
Indeed, using some arguments similar to those in \cite{glowinski1992,glowinski2003}, we can show that the saddle point problem (\ref{gradient_e3}) can be reformulated as a linear variational problem in terms of the Lagrange multiplier $\lambda$. The corresponding coefficient matrix after space discretization with mesh size $h$ has a condition number of order $h^{-2}$; it is thus ill-conditioned, especially for small $h$, which makes the CG method converge fairly slowly. Hence, preconditioning is necessary for solving problem (\ref{gradient_e3}). As suggested in \cite{glowinski2003}, we choose $-\nabla\cdot\nabla$ as a preconditioner for problem (\ref{gradient_e3}), and the corresponding preconditioned CG method operates in the space $H_0^1(\Omega)$ equipped with the inner product $\{q,q'\}\rightarrow\int_\Omega\nabla q\cdot\nabla q'dx$ and the associated norm $\|q\|_{H_0^1(\Omega)}=(\int_\Omega|\nabla q|^2dx)^{1/2}, \forall q,q'\in H_0^1(\Omega)$. The resulting algorithm reads as follows: \begin{enumerate} \item [\textbf{G1}] Choose $\lambda^0\in H_0^1(\Omega)$. \item [\textbf{G2}] Solve \begin{equation*} \left\{ \begin{aligned} &\bm{g}^0\in [L^2(\Omega)]^d,\\ &\int_\Omega \bm{g}^0\cdot \bm{z}dx=\int_\Omega\bm{f}\cdot \bm{z}dx+\int_\Omega\lambda^0\nabla\cdot \bm{z}dx,\forall \bm{z}\in [L^2(\Omega)]^d, \end{aligned} \right. \end{equation*} and \begin{equation*} \left\{ \begin{aligned} &r^0\in H_0^1(\Omega),\\ &\int_\Omega \nabla r^0\cdot \nabla qdx=\int_\Omega\nabla\cdot \bm{g}^0qdx,\forall q\in H_0^1(\Omega). \end{aligned} \right. \end{equation*} \smallskip If $\frac{\int_\Omega|\nabla r^0|^2dx}{\max\{1,\int_\Omega|\nabla \lambda^0|^2dx\}}\leq tol_1$, take $\lambda=\lambda^0$ and $\bm{g}=\bm{g}^0$; otherwise set $w^0=r^0$.
For $k\geq 0$, $\lambda^k,\bm{g}^k, r^k$ and $w^k$ being known with the last two different from 0, we compute $\lambda^{k+1},\bm{g}^{k+1}, r^{k+1}$ and if necessary $w^{k+1}$, as follows: \smallskip \item[\textbf{G3}] Solve \begin{equation*} \left\{ \begin{aligned} &\bar{\bm{g}}^k\in [L^2(\Omega)]^d,\\ &\int_\Omega \bar{\bm{g}}^k\cdot \bm{z}dx=\int_\Omega w^k\nabla\cdot \bm{z}dx,\forall \bm{z}\in [L^2(\Omega)]^d, \end{aligned} \right. \end{equation*} and \begin{equation*} \left\{ \begin{aligned} &\bar{r}^k\in H_0^1(\Omega),\\ &\int_\Omega \nabla \bar{r}^k\cdot \nabla qdx=\int_\Omega\nabla\cdot \bar{\bm{g}}^kqdx,\forall q\in H_0^1(\Omega), \end{aligned} \right. \end{equation*} and compute the stepsize via $$ \eta_k=\frac{\int_\Omega|\nabla r^k|^2dx}{\int_\Omega\nabla\bar{r}^k\cdot\nabla w^kdx}. $$ \item[\textbf{G4}] Update $\lambda^k, \bm{g}^k$ and $r^k$ via $$\lambda^{k+1}=\lambda^k-\eta_kw^k,\bm{g}^{k+1}=\bm{g}^k-\eta_k \bar{\bm{g}}^k,~\text{and}~r^{k+1}=r^k-\eta_k \bar{r}^k.$$ \smallskip If $\frac{\int_\Omega|\nabla r^{k+1}|^2dx}{\max\{1,\int_\Omega|\nabla r^0|^2dx\}}\leq tol_1$, take $\lambda=\lambda^{k+1}$ and $\bm{g}=\bm{g}^{k+1}$; otherwise, \item[\textbf{G5}] Compute $$\gamma_k=\frac{\int_\Omega|\nabla r^{k+1}|^2dx}{\int_\Omega|\nabla r^k|^2dx},$$ and update $w^k$ via $$w^{k+1}=r^{k+1}+\gamma_k w^{k}.$$ Do $k+1\rightarrow k$ and return to \textbf{G3}. \end{enumerate} Clearly, one only needs to solve two simple linear equations at each iteration of the preconditioned CG algorithm (\textbf{G1})-(\textbf{G5}), which implies that the algorithm is easy and cheap to implement. Moreover, due to the well-chosen preconditioner $-\nabla\cdot\nabla$, one can expect the above preconditioned CG algorithm to have a fast convergence; this will be validated by the numerical experiments reported in Section \ref{se:numerical}. 
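After space discretization, the saddle point problem (\ref{gradient_e3}) takes the matrix form $M\bm{g}=\bm{f}+B^{T}\lambda$, $B\bm{g}=0$, with $M$ a mass matrix, $B$ a discrete divergence operator, and $K$ the stiffness matrix realizing the preconditioner $-\nabla\cdot\nabla$. The following Python sketch of (\textbf{G1})--(\textbf{G5}) is a minimal illustration under these assumptions; dense linear solves stand in for the finite element solvers, and all names are ours, not the paper's.

```python
import numpy as np

# Matrix analogue of the preconditioned CG (G1)-(G5). Assumptions: M SPD mass
# matrix, B discrete divergence, K SPD stiffness matrix (the preconditioner).
def stokes_pcg(M, B, K, f, lam0, tol=1e-10, max_iter=200):
    solveM = lambda x: np.linalg.solve(M, x)
    solveK = lambda x: np.linalg.solve(K, x)
    lam = lam0.copy()
    g = solveM(f + B.T @ lam)          # G2: M g^0 = f + B^T lam^0
    r = solveK(B @ g)                  # G2: K r^0 = B g^0
    w = r.copy()
    r_norm2 = r @ K @ r                # squared H_0^1-norm of the residual
    for _ in range(max_iter):
        if r_norm2 <= tol:
            break
        g_bar = solveM(B.T @ w)        # G3: M gbar^k = B^T w^k
        r_bar = solveK(B @ g_bar)      # G3: K rbar^k = B gbar^k
        eta = r_norm2 / (r_bar @ K @ w)
        lam -= eta * w                 # G4: updates of lam, g, r
        g -= eta * g_bar
        r -= eta * r_bar
        r_norm2_new = r @ K @ r
        gamma = r_norm2_new / r_norm2  # G5: conjugation parameter
        w = r + gamma * w
        r_norm2 = r_norm2_new
    return g, lam
```

In exact arithmetic this is a preconditioned CG method for $BM^{-1}B^{T}\lambda=-BM^{-1}\bm{f}$ in the $K$-inner product, so it terminates in at most $\dim\lambda$ iterations.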
\subsection{Computation of the Stepsize $\rho_k$}\label{com_step} Another crucial step in implementing the CG method \textbf{(a)}--\textbf{(e)} is the computation of the stepsize $\rho_k$, which is the solution of the optimization problem (\ref{op_step}); solving this problem exactly, or to high accuracy, is numerically expensive. For instance, to solve (\ref{op_step}), one may consider the Newton method applied to the solution of $$ H_k'(\rho_k)=0, $$ where $$H_k(\rho)=J(\bm{u}^k-\rho\bm{w}^k).$$ The Newton method requires the second-order derivative $H_k''(\rho)$, which can be computed via an iterated adjoint technique requiring the solution of \emph{four} parabolic problems per Newton iteration. Hence, the implementation of the Newton method is numerically expensive. The high computational load for solving (\ref{op_step}) motivates us to employ a stepsize rule that determines an approximation of $\rho_k$. Here, we advocate the following procedure to compute an approximate stepsize $\hat{\rho}_k$. For a given $\bm{w}^k\in\mathcal{U}$, we replace the state $y=S(\bm{u}^k-\rho\bm{w}^k)$ in the objective functional $J(\bm{u}^k-\rho\bm{w}^k)$ by $$ S(\bm{u}^k)-\rho S'(\bm{u}^k)\bm{w}^k, $$ which is indeed the linearization of the mapping $\rho \mapsto S(\bm{u}^k - \rho \bm{w}^k)$ at $\rho= 0$.
We thus obtain the following quadratic approximation of $H_k(\rho)$: \begin{equation}\label{q_rho} Q_k(\rho):=\frac{1}{2}\iint_Q|\bm{u}^k-\rho \bm{w}^k|^2dxdt+\frac{\alpha_1}{2}\iint_Q|y^k-\rho z^k-y_d|^2dxdt+\frac{\alpha_2}{2}\int_\Omega|y^k(T)-\rho z^k(T)-y_T|^2dx, \end{equation} where $y^k=S(\bm{u}^k)$ is the solution of the state equation (\ref{state_equation}) associated with $\bm{u}^k$, and $z^k=S'(\bm{u}^k)\bm{w}^k$ satisfies the following linear parabolic problem \begin{flalign}\label{linear_state} &\left\{ \begin{aligned} \frac{\partial z^k}{\partial t}-\nu \nabla^2 z^k+\bm{w}^k\cdot \nabla y^k +\bm{u}^k\cdot\nabla z^k+a_0 z^k&=0\quad \text{in}\quad Q, \\ z^k&=0\quad \text{on}\quad \Sigma,\\ z^k(0)&=0. \end{aligned} \right. \end{flalign} Then, it is easy to show that the equation $ Q_k'(\rho)=0 $ admits a unique solution \begin{equation}\label{step_size} \hat{\rho}_k =\frac{\iint_Q\bm{g}^k\cdot \bm{w}^k dxdt}{\iint_Q|\bm{w}^k|^2dxdt+ \alpha_1\iint_Q|z^k|^2dxdt+\alpha_2\int_\Omega|z^k(T)|^2dx}, \end{equation} and we take $\hat{\rho}_k$, which is clearly an approximation of $\rho_k$, as the stepsize in each CG iteration. Altogether, with the stepsize given by (\ref{step_size}), every iteration of the resulting CG algorithm requires solving only \emph{three} parabolic problems, namely, the state equation (\ref{state_equation}) forward in time and the associated adjoint equation (\ref{adjoint_equation}) backward in time for the computation of $\bm{g}^k$, and the linearized parabolic equation (\ref{linear_state}) forward in time for the stepsize $\hat{\rho}_k$. For comparison, if the Newton method is employed to compute the stepsize $\rho_k$ by solving (\ref{op_step}), at least \emph{six} parabolic problems have to be solved at each iteration of the CG method, which is much more expensive numerically.
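On a uniform space-time grid, the stepsize (\ref{step_size}) is just a ratio of quadrature sums once $\bm{g}^k$, $\bm{w}^k$ and $z^k$ are available as grid arrays. A minimal Python sketch, assuming rectangle-rule quadrature in $t$ and $x$; the array shapes and names are illustrative choices, not prescribed by the paper.

```python
import numpy as np

# Approximate stepsize (step_size):
#   rho = (∬ g·w) / (∬|w|^2 + a1 ∬|z|^2 + a2 ∫|z(T)|^2),
# with g, w of shape (Nt, Nx, d), z of shape (Nt, Nx); z[-1] plays z(T).
def approximate_stepsize(g, w, z, dt, dx, alpha1, alpha2):
    num = dt * dx * np.sum(g * w)               # ∬_Q g^k · w^k dx dt
    den = (dt * dx * np.sum(w * w)              # ∬_Q |w^k|^2 dx dt
           + alpha1 * dt * dx * np.sum(z * z)   # a1 ∬_Q |z^k|^2 dx dt
           + alpha2 * dx * np.sum(z[-1] ** 2))  # a2 ∫_Ω |z^k(T)|^2 dx
    return num / den
```

The denominator is strictly positive whenever $\bm{w}^k\neq\bm{0}$, so the rule is always well defined within the CG loop.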
\begin{remark} To find an appropriate stepsize, a natural idea is to employ some line search strategies, such as the backtracking strategy based on the Armijo--Goldstein condition or the Wolfe condition; see, e.g., \cite{nocedal2006}. It is worth noting that these line search strategies require the evaluation of $J(\bm{v})$ repeatedly, which is numerically expensive because every evaluation of $J(\bm{v})$ for a given $\bm{v}$ requires solving the state equation (\ref{state_equation}). Moreover, we have implemented the CG method for solving (BCP) with various line search strategies and observed from the numerical results that line search strategies always lead to tiny stepsizes, making the convergence of the CG method extremely slow. \end{remark} \subsection{A Nested CG Method for Solving (BCP)} Following Sections \ref{com_gra} and \ref{com_step}, we advocate the following nested CG method for solving (BCP): \begin{enumerate} \item[\textbf{I.}] Given $\bm{u}^0\in \mathcal{U}$. \item[\textbf{II.}] Compute $y^0$ and $p^0$ by solving the state equation (\ref{state_equation}) and the adjoint equation (\ref{adjoint_equation}) corresponding to $\bm{u}^0$. Then, for a.e. $t \in(0, T)$, solve \begin{equation*} \left\{ \begin{aligned} &\bm{g}^0(t)\in \mathbb{S},\\ &\int_\Omega \bm{g}^0(t)\cdot \bm{z}dx=\int_\Omega(\bm{u}^0(t)-p^0(t)\nabla y^0(t))\cdot \bm{z}dx,\forall \bm{z}\in \mathbb{S}, \end{aligned} \right. \end{equation*} by the preconditioned CG algorithm (\textbf{G1})--(\textbf{G5}); and set $\bm{w}^0=\bm{g}^0.$ \medskip \noindent For $k\geq 0$, $\bm{u}^k, \bm{g}^k$ and $\bm{w}^k$ being known, the last two different from $\bm{0}$, one computes $\bm{u}^{k+1}, \bm{g}^{k+1}$ and $\bm{w}^{k+1}$ as follows: \medskip \item[\textbf{III.}] Compute the stepsize $\hat{\rho}_k$ by (\ref{step_size}).
\item[\textbf{IV.}] Update $\bm{u}^{k+1}$ by $$\bm{u}^{k+1}=\bm{u}^k-\hat{\rho}_k\bm{w}^k.$$ Compute $y^{k+1}$ and $p^{k+1}$ by solving the state equation (\ref{state_equation}) and the adjoint equation (\ref{adjoint_equation}) corresponding to $\bm{u}^{k+1}$; and for a.e. $t \in(0, T)$, solve \begin{equation*} \left\{ \begin{aligned} &\bm{g}^{k+1}(t)\in \mathbb{S},\\ &\int_\Omega \bm{g}^{k+1}(t)\cdot \bm{z}dx=\int_\Omega(\bm{u}^{k+1}(t)-p^{k+1}(t)\nabla y^{k+1}(t))\cdot \bm{z}dx,\forall \bm{z}\in \mathbb{S}, \end{aligned} \right. \end{equation*} by the preconditioned CG algorithm (\textbf{G1})--(\textbf{G5}). \medskip \noindent If $\frac{\iint_Q|\bm{g}^{k+1}|^2dxdt}{\iint_Q|\bm{g}^{0}|^2dxdt}\leq tol$, take $\bm{u} = \bm{u}^{k+1}$; else \medskip \item[\textbf{V.}] Compute $$\beta_k=\frac{\iint_Q|\bm{g}^{k+1}|^2dxdt}{\iint_Q|\bm{g}^{k}|^2dxdt},~\text{and}~\bm{w}^{k+1} = \bm{g}^{k+1} + \beta_k\bm{w}^k.$$ Do $k+1\rightarrow k$ and return to \textbf{III}. \end{enumerate} \section{Space and time discretizations}\label{se:discretization} In this section, we discuss first the numerical discretization of the bilinear optimal control problem (BCP). We achieve the time discretization by a semi-implicit finite difference method and the space discretization by a piecewise linear finite element method. Then, we discuss an implementable nested CG method for solving the fully discrete bilinear optimal control problem. \subsection{Time Discretization of (BCP)} First, we define a time discretization step $\Delta t$ by $\Delta t= T/N$, with $N$ a positive integer. 
Then, we approximate the control space $\mathcal{U}=L^2(0, T;\mathbb{S})$ by $ \mathcal{U}^{\Delta t}:=(\mathbb{S})^N; $ and equip $\mathcal{U}^{\Delta t}$ with the following inner product $$ (\bm{v},\bm{w})_{\Delta t} = \Delta t \sum^N_{n=1}\int_\Omega \bm{v}_n\cdot \bm{w}_ndx, \quad\forall \bm{v}= \{\bm{v}_n\}^N_{n=1}, \bm{w} = \{\bm{w}_n\}^N_{n=1} \in\mathcal{U}^{\Delta t}, $$ and the norm $$ \|\bm{v}\|_{\Delta t} = \left(\Delta t \sum^N_{n=1}\int_\Omega |\bm{v}_n|^2dx\right)^{\frac{1}{2}}, \quad\forall \bm{v}= \{\bm{v}_n\}^N_{n=1} \in\mathcal{U}^{\Delta t}. $$ Then, (BCP) is approximated by the following semi-discrete bilinear control problem (BCP)$^{\Delta t}$: \begin{flalign*} &\hspace{-4.5cm}\text{(BCP)}^{\Delta t}\qquad\qquad\qquad\qquad\left\{ \begin{aligned} & \bm{u}^{\Delta t}\in \mathcal{U}^{\Delta t}, \\ &J^{\Delta t}(\bm{u}^{\Delta t})\leq J^{\Delta t}(\bm{v}),\forall \bm{v}=\{\bm{v}_n\}_{n=1}^N\in\mathcal{U}^{\Delta t}, \end{aligned} \right. \end{flalign*} where the cost functional $J^{\Delta t}$ is defined by \begin{equation*} J^{\Delta t}(\bm{v})=\frac{1}{2}\Delta t \sum^N_{n=1}\int_\Omega |\bm{v}_n|^2dx+\frac{\alpha_1}{2}\Delta t \sum^N_{n=1}\int_\Omega |y_n-y_d^n|^2dx+\frac{\alpha_2}{2}\int_\Omega|y_N-y_T|^2dx, \end{equation*} with $\{y_n\}^N_{n=1}$ the solution of the following semi-discrete state equation: $y_0=\phi$; then for $n=1,\ldots,N$, with $y_{n-1}$ being known, we obtain $y_n$ from the solution of the following linear elliptic problem: \begin{flalign}\label{state_semidis} &\left\{ \begin{aligned} \frac{{y}_n-{y}_{n-1}}{\Delta t}-\nu \nabla^2{y}_n+\bm{v}_n\cdot\nabla{y}_{n-1}+a_0{y}_{n-1}&= f_n\quad \text{in}\quad \Omega, \\ y_n&=g_n\quad \text{on}\quad \Gamma. \end{aligned} \right. \end{flalign} \begin{remark} For simplicity, we have chosen a one-step semi-implicit scheme to discretize system (\ref{state_equation}). This scheme is first-order accurate and reasonably robust, once combined with an appropriate space discretization.
The application of second-order accurate time discretization schemes to optimal control problems has been discussed in e.g., \cite{carthelglowinski1994}. \end{remark} \begin{remark} At each step of scheme (\ref{state_semidis}), we only need to solve a simple linear elliptic problem to obtain $y_n$ from $y_{n-1}$, and there is no particular difficulty in solving such a problem. \end{remark} The existence of a solution to the semi-discrete bilinear optimal control problem (BCP)$^{\Delta t}$ can be proved in a similar way as what we have done for the continuous case. Let $\bm{u}^{\Delta t}$ be a solution of (BCP)$^{\Delta t}$, then it verifies the following first-order optimality condition: \begin{equation*} DJ^{\Delta t}(\bm{u}^{\Delta t}) = 0, \end{equation*} where $DJ^{\Delta t}(\bm{v})$ is the first-order differential of the functional $J^{\Delta t}$ at $\bm{v}\in\mathcal{U}^{\Delta t}$. Proceeding as in the continuous case, we can show that $DJ^{\Delta t}(\bm{v})=\{\bm{g}_n\}_{n=1}^N\in\mathcal{U}^{\Delta t}$ where \begin{equation*} \left\{ \begin{aligned} &\bm{g}_n\in \mathbb{S},\\ &\int_\Omega \bm{g}_n\cdot \bm{w}dx=\int_\Omega(\bm{v}_n-p_n\nabla y_{n-1})\cdot \bm{w}dx,\forall\bm{w}\in \mathbb{S}, \end{aligned} \right. \end{equation*} and the vector-valued function $\{p_n\}^N_{n=1}$ is the solution of the semi-discrete adjoint system below: \begin{equation*} {p}_{N+1}=\alpha_2({y}_N-y_T); \end{equation*} for $n=N$, solve \begin{flalign*} \qquad \left\{ \begin{aligned} \frac{{p}_N-{p}_{N+1}}{\Delta t}-\nu \nabla^2{p}_N&= \alpha_1({y}_N-y_d^N)&\quad \text{in}\quad \Omega, \\ p_N&=0&\quad \text{on}\quad \Gamma, \end{aligned} \right. \end{flalign*} and for $n=N-1,\cdots,1,$ solve \begin{flalign*} \qquad \left\{ \begin{aligned} \frac{{p}_n-{p}_{n+1}}{\Delta t}-\nu\nabla^2{p}_n-\bm{v}_{n+1}\cdot\nabla{p}_{n+1}+a_0{p}_{n+1}&= \alpha_1({y}_n-y_d^n)&\quad \text{in}\quad \Omega, \\ p_n&=0&\quad \text{on}\quad \Gamma. \end{aligned} \right. 
\end{flalign*} \subsection{Space Discretization of (BCP)$^{\Delta t}$} In this subsection, we discuss the space discretization of (BCP)$^{\Delta t}$, thus obtaining a full space-time discretization of (BCP). For simplicity, we suppose from now on that $\Omega$ is a polygonal domain of $\mathbb{R}^2$ (or has been approximated by a family of such domains). Let $\mathcal{T}_H$ be a classical triangulation of $\Omega$, with $H$ the length of the largest edge of the triangles of $\mathcal{T}_H$. From $\mathcal{T}_{H}$ we construct $\mathcal{T}_{h}$ with $h=H/2$ by joining the mid-points of the edges of the triangles of $\mathcal{T}_{H}$. We first consider the finite element space $V_h$ defined by \begin{equation*} V_h = \{\varphi_h| \varphi_h\in C^0(\bar{\Omega}); { \varphi_h\mid}_{\mathbb{T}}\in P_1, \forall\, {\mathbb{T}}\in\mathcal{T}_h\} \end{equation*} with $P_1$ the space of the polynomials of two variables of degree $\leq 1$. Two useful sub-spaces of $V_h$ are \begin{equation*} V_{0h} =\{\varphi_h| \varphi_h\in V_h, \varphi_h\mid_{\Gamma}=0\}:=V_h\cap H_0^1(\Omega), \end{equation*} and (assuming that $g(t)\in C^0(\Gamma)$) \begin{eqnarray*} V_{gh}(t) =\{\varphi_h| \varphi_h\in V_h, \varphi_h(Q)=g(Q,t), \forall\, Q ~\text{vertex of} ~\mathcal{T}_h~\text{located on}~\Gamma \}. \end{eqnarray*} In order to construct the discrete control space, we first introduce \begin{equation*} \Lambda_H = \{\varphi_H| \varphi_H\in C^0(\bar{\Omega}); { \varphi_H\mid}_{\mathbb{T}}\in P_1, \forall\, {\mathbb{T}}\in\mathcal{T}_H\},~\text{and}~\Lambda_{0H} =\{\varphi_H| \varphi_H\in \Lambda_H, \varphi_H\mid_{\Gamma}=0\}. \end{equation*} Then, the discrete control space $\mathcal{U}_h^{\Delta t}$ is defined by \begin{equation*} \mathcal{U}_h^{\Delta t}=(\mathbb{S}_h)^N,~\text{with}~\mathbb{S}_h=\{\bm{v}_h|\bm{v}_h\in V_h\times V_h,\int_\Omega \nabla\cdot\bm{v}_hq_Hdx\left(=-\int_\Omega\bm{v}_h\cdot\nabla q_Hdx\right)=0,\forall q_H\in \Lambda_{0H}\}.
\end{equation*} With the above finite element spaces, we approximate (BCP) and (BCP)$^{\Delta t}$ by (BCP)$_h^{\Delta t}$ defined by \begin{flalign*} &\hspace{-4.2cm}\text{(BCP)}_h^{\Delta t}\qquad\qquad\qquad\qquad\qquad\qquad\left\{ \begin{aligned} & \bm{u}_h^{\Delta t}\in \mathcal{U}_h^{\Delta t}, \\ &J_h^{\Delta t}(\bm{u}_h^{\Delta t})\leq J_h^{\Delta t}(\bm{v}_h^{\Delta t}),\forall \bm{v}_h^{\Delta t}\in\mathcal{U}_h^{\Delta t}, \end{aligned} \right. \end{flalign*} where the fully discrete cost functional $J_h^{\Delta t}$ is defined by \begin{equation}\label{obj_fuldis} J_h^{\Delta t}(\bm{v}_h^{\Delta t})=\frac{1}{2}\Delta t \sum^N_{n=1}\int_\Omega |\bm{v}_{n,h}|^2dx+\frac{\alpha_1}{2}\Delta t \sum^N_{n=1}\int_\Omega |y_{n,h}-y_d^n|^2dx+\frac{\alpha_2}{2}\int_\Omega|y_{N,h}-y_T|^2dx \end{equation} with $\{y_{n,h}\}^N_{n=1}$ the solution of the following fully discrete state equation: $y_{0,h}=\phi_h\in V_h$, where $\phi_h$ verifies $$ \phi_h\in V_h, \forall\, h>0,~\text{and}~\lim_{h\rightarrow 0}\phi_h=\phi,~\text{in}~L^2(\Omega), $$ then, for $n=1,\ldots,N$, with $y_{n-1,h}$ being known, we obtain $y_{n,h}\in V_{gh}(n\Delta t)$ from the solution of the following linear variational problem: \begin{equation}\label{state_fuldis} \int_\Omega\frac{{y}_{n,h}-{y}_{n-1,h}}{\Delta t}\varphi dx+\nu \int_\Omega\nabla{y}_{n,h}\cdot\nabla\varphi dx+\int_\Omega\bm{v}_n\cdot\nabla{y}_{n-1,h}\varphi dx+\int_\Omega a_0{y}_{n-1,h}\varphi dx= \int_\Omega f_{n}\varphi dx,\forall \varphi\in V_{0h}. \end{equation} In the following discussion, the subscript $h$ in all variables will be omitted for simplicity. 
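As an illustration of the time-marching structure of (\ref{state_fuldis}), the following Python sketch advances the scheme one step on a uniform 1D grid, with finite differences in place of finite elements and homogeneous Dirichlet data; the grid, the centered advection stencil, and all names are our illustrative simplifications, not the discretization used in the paper.

```python
import numpy as np

# One step of the semi-implicit scheme behind (state_fuldis) on a uniform 1D
# grid: diffusion implicit, advection and reaction explicit. Homogeneous
# Dirichlet boundary values (ghost values 0) are an illustrative choice.
def state_step(y_prev, v, f, nu, a0, dt, dx):
    m = y_prev.size                              # interior grid points only
    # implicit operator I/dt - nu * Laplacian (tridiagonal, built dense here)
    main = (1.0 / dt + 2.0 * nu / dx**2) * np.ones(m)
    off = (-nu / dx**2) * np.ones(m - 1)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    # explicit advection term v * d y_{n-1}/dx (centered; zero at the ends)
    dy = np.zeros(m)
    dy[1:-1] = (y_prev[2:] - y_prev[:-2]) / (2.0 * dx)
    # move all explicit terms to the right-hand side
    rhs = y_prev / dt - v * dy - a0 * y_prev + f
    return np.linalg.solve(A, rhs)
```

Each step costs a single linear solve with a fixed symmetric tridiagonal matrix, matching the remark that scheme (\ref{state_semidis}) only requires a simple linear elliptic solve per time step.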
Proceeding as in the continuous case, one can show that the first-order differential of $J_h^{\Delta t} $ at $\bm{v}\in\mathcal{U}_h^{\Delta t}$ is $DJ_h^{\Delta t}(\bm{v})=\{\bm{g}_n\}_{n=1}^N\in (\mathbb{S}_h)^N$ where \begin{equation}\label{gradient_ful} \left\{ \begin{aligned} &\bm{g}_n\in \mathbb{S}_h,\\ &\int_\Omega \bm{g}_n\cdot \bm{z}dx=\int_\Omega(\bm{v}_n-p_n\nabla y_{n-1})\cdot \bm{z}dx,\forall\bm{z}\in \mathbb{S}_h, \end{aligned} \right. \end{equation} and the vector-valued function $\{p_n\}^N_{n=1}$ is the solution of the following fully discrete adjoint system: \begin{equation}\label{ful_adjoint_1} {p}_{N+1}=\alpha_2({y}_N-y_T); \end{equation} for $n=N$, solve \begin{flalign}\label{ful_adjoint_2} \qquad \left\{ \begin{aligned} &p_N\in V_{0h},\\ &\int_\Omega\frac{{p}_N-{p}_{N+1}}{\Delta t}\varphi dx+\nu\int_\Omega \nabla{p}_N\cdot\nabla \varphi dx= \int_\Omega\alpha_1({y}_N-y_d^N)\varphi dx,\forall \varphi\in V_{0h}, \end{aligned} \right. \end{flalign} then, for $n=N-1,\cdots,1$, solve \begin{flalign}\label{ful_adjoint_3} \qquad \left\{ \begin{aligned} &p_n\in V_{0h},\\ &\int_\Omega\frac{{p}_n-{p}_{n+1}}{\Delta t}\varphi dx+\nu\int_\Omega\nabla{p}_n\cdot\nabla\varphi dx-\int_\Omega\bm{v}_{n+1}\cdot\nabla{p}_{n+1}\varphi dx\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad+a_0\int_\Omega{p}_{n+1}\varphi dx=\int_\Omega \alpha_1({y}_n-y_d^n)\varphi dx,\forall \varphi\in V_{0h}. \end{aligned} \right. \end{flalign} It is worth mentioning that the so-called discretize-then-optimize strategy is employed here, which implies that we first discretize (BCP); to compute the gradient in a discrete setting, the fully discrete adjoint equation (\ref{ful_adjoint_1})--(\ref{ful_adjoint_3}) has been derived from the fully discrete cost functional $J_h^{\Delta t}(\bm{v})$ (\ref{obj_fuldis}) and the fully discrete state equation (\ref{state_fuldis}).
This implies that the fully discrete state equation (\ref{state_fuldis}) and the fully discrete adjoint equation (\ref{ful_adjoint_1})--(\ref{ful_adjoint_3}) are strictly in duality. This fact guarantees that $-DJ_h^{\Delta t}(\bm{v})$ is a descent direction for the fully discrete bilinear optimal control problem (BCP)$_h^{\Delta t}$. \begin{remark} A natural alternative has been advocated in the literature: (i) Derive the adjoint equation to compute the first-order differential of the cost functional in a continuous setting; (ii) Discretize the state and adjoint state equations by certain numerical schemes; (iii) Use the resulting discrete analogs of $y$ and $p$ to compute a discretization of the differential of the cost functional. The main problem with this optimize-then-discretize approach is that it may not preserve a strict duality between the discrete state equation and the discrete adjoint equation. This fact implies in turn that the resulting discretization of the continuous gradient may not be a gradient of the discrete optimal control problem. As a consequence, the resulting algorithm is not a descent algorithm and divergence may take place (see \cite{GH1998} for a related discussion). \end{remark} \subsection{A Nested CG Method for Solving the Fully Discrete Problem (BCP)$_h^{\Delta t}$}\label{DCG} In this subsection, we propose a nested CG method for solving the fully discrete problem (BCP)$_h^{\Delta t}$. As discussed in Section \ref{se:cg}, the implementation of CG requires the knowledge of $DJ_h^{\Delta t}(\bm{v})$ and an appropriate stepsize. In the following discussion, we address these two issues by extending the results for the continuous case in Sections \ref{com_gra} and \ref{com_step} to the fully discrete setting, and derive the corresponding CG algorithm. First, it is clear that one can compute $DJ_h^{\Delta t}(\bm{v})$ via the solution of the $N$ linear variational problems encountered in (\ref{gradient_ful}).
For this purpose, we introduce a Lagrange multiplier $\lambda\in \Lambda_{0H}$ associated with the divergence-free constraint; then problem (\ref{gradient_ful}) is equivalent to the following saddle point system \begin{equation}\label{fulgradient_e} \left\{ \begin{aligned} &(\bm{g}_n,\lambda)\in (V_h\times V_h)\times \Lambda_{0H},\\ &\int_\Omega \bm{g}_n\cdot \bm{z}dx=\int_\Omega (\bm{v}_n-p_n\nabla y_{n-1})\cdot \bm{z}dx+\int_\Omega\lambda\nabla\cdot \bm{z}dx,\forall \bm{z}\in V_h\times V_h,\\ &\int_\Omega\nabla\cdot\bm{g}_nqdx=0,\forall q\in \Lambda_{0H}. \end{aligned} \right. \end{equation} As discussed in Section \ref{com_gra}, problem (\ref{fulgradient_e}) can be solved by the following preconditioned CG algorithm, which is actually a discrete analogue of (\textbf{G1})--(\textbf{G5}). \begin{enumerate} \item [\textbf{DG1}] Choose $\lambda^0\in \Lambda_{0H}$. \item [\textbf{DG2}] Solve \begin{equation*} \left\{ \begin{aligned} &\bm{g}_n^0\in V_h\times V_h,\\ &\int_\Omega\bm{g}_n^0\cdot \bm{z}dx=\int_\Omega(\bm{v}_n-p_n\nabla y_{n-1})\cdot \bm{z}dx+\int_\Omega\lambda^0\nabla\cdot \bm{z}dx,\forall \bm{z}\in V_h\times V_h, \end{aligned} \right. \end{equation*} and \begin{equation*} \left\{ \begin{aligned} &r^0\in \Lambda_{0H},\\ &\int_\Omega \nabla r^0\cdot \nabla qdx=\int_\Omega\nabla\cdot \bm{g}_n^0qdx,\forall q\in \Lambda_{0H}. \end{aligned} \right. \end{equation*} \smallskip If $\frac{\int_\Omega|\nabla r^0|^2dx}{\max\{1,\int_\Omega|\nabla \lambda^0|^2dx\}}\leq tol_1$, take $\lambda=\lambda^0$ and $\bm{g}_n=\bm{g}_n^0$; otherwise set $w^0=r^0$.
For $k\geq 0$, $\lambda^k,\bm{g}_n^k, r^k$ and $w^k$ being known with the last two different from 0, we define $\lambda^{k+1},\bm{g}_n^{k+1}, r^{k+1}$ and if necessary $w^{k+1}$, as follows: \smallskip \item[\textbf{DG3}] Solve \begin{equation*} \left\{ \begin{aligned} &\bar{\bm{g}}_n^k\in V_h\times V_h,\\ &\int_\Omega \bar{\bm{g}}_n^k\cdot \bm{z}dx=\int_\Omega w^k\nabla\cdot \bm{z}dx,\forall \bm{z}\in V_h\times V_h, \end{aligned} \right. \end{equation*} and \begin{equation*} \left\{ \begin{aligned} &\bar{r}^k\in \Lambda_{0H},\\ &\int_\Omega \nabla \bar{r}^k\cdot \nabla qdx=\int_\Omega\nabla\cdot \bar{\bm{g}}_n^kqdx,\forall q\in\Lambda_{0H}, \end{aligned} \right. \end{equation*} and compute $$ \eta_k=\frac{\int_\Omega|\nabla r^k|^2dx}{\int_\Omega\nabla\bar{r}^k\cdot\nabla w^kdx}. $$ \item[\textbf{DG4}] Update $\lambda^k,\bm{g}_n^k$ and $r^k$ via $$\lambda^{k+1}=\lambda^k-\eta_kw^k,\bm{g}_n^{k+1}=\bm{g}_n^k-\eta_k\bar{\bm{g}}_n^k,~\text{and}~r^{k+1}=r^k-\eta_k \bar{r}^k.$$ \smallskip If $\frac{\int_\Omega|\nabla r^{k+1}|^2dx}{\max\{1,\int_\Omega|\nabla r^0|^2dx\}}\leq tol_1$, take $\lambda=\lambda^{k+1}$ and $\bm{g}_n=\bm{g}_n^{k+1}$; otherwise, \item[\textbf{DG5}] Compute $$\gamma_k=\frac{\int_\Omega|\nabla r^{k+1}|^2dx}{\int_\Omega|\nabla r^k|^2dx},$$ and update $w^k$ via $$w^{k+1}=r^{k+1}+\gamma_k w^{k}.$$ Do $k+1\rightarrow k$ and return to \textbf{DG3}. 
\end{enumerate} To find an appropriate stepsize in the CG iteration for the solution of (BCP)$_h^{\Delta t}$, we note that, for any $\{\bm{w}_n\}_{n=1}^N\in (\mathbb{S}_h)^N$, the fully discrete analogue of $Q_k(\rho)$ in (\ref{q_rho}) reads as $ Q_h^{\Delta t}(\rho)=\frac{1}{2}\Delta t \sum^N_{n=1}\int_\Omega |\bm{u}_n-\rho\bm{w}_n|^2dx+\frac{\alpha_1}{2}\Delta t \sum^N_{n=1}\int_\Omega |y_{n}-\rho z_{n}-y_d^n|^2dx+\frac{\alpha_2}{2}\int_\Omega|y_{N}-\rho z_{N}-y_T|^2dx, $ where the functions $z_n$, $n=1,\ldots,N$, are obtained as follows: $z_0=0$; then for $n=1,\ldots,N$, with $z_{n-1}$ being known, $z_n$ is obtained from the solution of the linear variational problem $ \left\{ \begin{aligned} &z_n\in V_{0h},\\ &\int_\Omega\frac{{z}_n-{z}_{n-1}}{\Delta t}\varphi dx+\nu\int_\Omega \nabla{z}_n\cdot\nabla\varphi dx+\int_\Omega\bm{w}_n\cdot\nabla y_n\varphi dx\\ &\qquad\qquad\qquad\qquad\qquad+\int_\Omega\bm{u}_n\cdot\nabla{z}_{n-1}\varphi dx+a_0\int_\Omega{z}_{n-1}\varphi dx= 0,\forall\varphi\in V_{0h}. \\ \end{aligned} \right. $ As discussed in Section \ref{com_step} for the continuous case, we take the unique solution of ${Q_h^{\Delta t}}'(\rho)=0$ as the stepsize in each CG iteration, that is, \begin{equation}\label{step_ful} \hat{\rho}_h^{\Delta t} =\frac{\Delta t\sum_{n=1}^{N}\int_\Omega\bm{g}_n\cdot \bm{w}_n dx}{\Delta t\sum_{n=1}^{N}\int_\Omega|\bm{w}_n|^2dx+ \alpha_1\Delta t\sum_{n=1}^{N}\int_\Omega|z_n|^2dx+\alpha_2\int_\Omega|z_N|^2dx}. \end{equation} Finally, with the above preparations, we propose the following nested CG algorithm for the solution of the fully discrete control problem (BCP)$_h^{\Delta t}$. \begin{enumerate} \item[\textbf{DI.}] Given $\bm{u}^0:=\{\bm{u}_n^0\}_{n=1}^N\in (\mathbb{S}_h)^N$.
\item[\textbf{DII.}] Compute $\{y_n^0\}_{n=0}^N$ and $\{p^0_n\}_{n=1}^{N+1}$ by solving the fully discrete state equation (\ref{state_fuldis}) and the fully discrete adjoint equation (\ref{ful_adjoint_1})--(\ref{ful_adjoint_3}) corresponding to $\bm{u}^0$. Then, for $n=1,\cdots, N$ solve \begin{equation*} \left\{ \begin{aligned} &\bm{g}_n^0\in \mathbb{S}_h,\\ &\int_\Omega \bm{g}_n^0\cdot \bm{z}dx=\int_\Omega(\bm{u}_n^0-p_n^0\nabla y_{n-1}^0)\cdot \bm{z}dx,\forall \bm{z}\in \mathbb{S}_h, \end{aligned} \right. \end{equation*} by the preconditioned CG algorithm (\textbf{DG1})--(\textbf{DG5}), and set $\bm{w}^0_n=\bm{g}_n^0.$ \medskip \noindent For $k\geq 0$, $\bm{u}^k, \bm{g}^k$ and $\bm{w}^k$ being known, the last two different from $\bm{0}$, one computes $\bm{u}^{k+1}, \bm{g}^{k+1}$ and $\bm{w}^{k+1}$ as follows: \medskip \item[\textbf{DIII.}] Compute the stepsize $\hat{\rho}_k$ by (\ref{step_ful}). \item[\textbf{DIV.}] Update $\bm{u}^{k+1}$ by $$\bm{u}^{k+1}=\bm{u}^k-\hat{\rho}_k\bm{w}^k.$$ Compute $\{y_n^{k+1}\}_{n=0}^N$ and $\{p_n^{k+1}\}_{n=1}^{N+1}$ by solving the fully discrete state equation (\ref{state_fuldis}) and the fully discrete adjoint equation (\ref{ful_adjoint_1})--(\ref{ful_adjoint_3}) corresponding to $\bm{u}^{k+1}$. Then, for $n=1,\cdots,N$, solve \begin{equation}\label{dis_gradient} \left\{ \begin{aligned} &\bm{g}_n^{k+1}\in \mathbb{S}_h,\\ &\int_\Omega \bm{g}_n^{k+1}\cdot \bm{z}dx=\int_\Omega(\bm{u}_n^{k+1}-p_n^{k+1}\nabla y_{n-1}^{k+1})\cdot \bm{z}dx,\forall \bm{z}\in \mathbb{S}_h, \end{aligned} \right. \end{equation} by the preconditioned CG algorithm (\textbf{DG1})--(\textbf{DG5}). 
\medskip \noindent If $\frac{\Delta t\sum_{n=1}^N\int_\Omega|\bm{g}_n^{k+1}|^2dx}{\Delta t\sum_{n=1}^N\int_\Omega|\bm{g}_n^{0}|^2dx}\leq tol$, take $\bm{u} = \bm{u}^{k+1}$; else \medskip \item[\textbf{DV.}] Compute $$\beta_k=\frac{\Delta t\sum_{n=1}^N\int_\Omega|\bm{g}_n^{k+1}|^2dx}{\Delta t\sum_{n=1}^N\int_\Omega|\bm{g}_n^{k}|^2dx},~\text{and}~\bm{w}^{k+1} = \bm{g}^{k+1} + \beta_k\bm{w}^k.$$ Do $k+1\rightarrow k$ and return to \textbf{DIII}. \end{enumerate} Despite its apparent complexity, the CG algorithm (\textbf{DI})--(\textbf{DV}) is easy to implement. In fact, the main computational difficulty in its implementation is the solution of the $N$ linear systems (\ref{dis_gradient}), which is time-consuming. It is worth noting, however, that the linear systems (\ref{dis_gradient}) are independent of each other for different $n$, so they can be solved in parallel. As a consequence, one can compute the components of the gradient $\{\bm{g}^{k}_n\}_{n=1}^N$ simultaneously, and the computation time can be reduced significantly. Moreover, the computation of $\{\bm{g}^{k}_n\}_{n=1}^N$ requires the storage of the solutions of (\ref{state_fuldis}) and (\ref{ful_adjoint_1})--(\ref{ful_adjoint_3}) at all points in space and time. For large-scale problems, especially in three space dimensions, it can be very memory-demanding, and perhaps even impossible, to store the full sets $\{y_n^k\}_{n=0}^N$ and $\{p_n^k\}_{n=1}^{N+1}$ simultaneously. To tackle this issue, one can employ the strategy described in, e.g., \cite[Section 1.12]{glowinski2008exact}, which can drastically reduce the storage requirements at the expense of a small increase in CPU time. \section{Numerical Experiments}\label{se:numerical} In this section, we report some preliminary numerical results validating the efficiency of the proposed CG algorithm (\textbf{DI})--(\textbf{DV}) for (BCP).
All codes were written in MATLAB R2016b, and all numerical experiments were conducted on a Surface Pro 5 laptop with a 64-bit Windows 10 operating system, an Intel(R) Core(TM) i7-7660U CPU (2.50 GHz), and 16 GB of RAM. \medskip \noindent\textbf{Example 1.} We consider the bilinear optimal control problem (BCP) on the domain $Q=\Omega\times(0,T)$ with $\Omega=(0,1)^2$ and $T=1$. In particular, we take the control $\bm{v}(x,t)$ in a finite-dimensional space, i.e., $\bm{v}\in L^2(0,T;\mathbb{R}^2)$. In addition, we set $\alpha_2=0$ in (\ref{objective_functional}) and consider the following tracking-type bilinear optimal control problem: \begin{equation}\label{model_ex1} \min_{\bm{v}\in L^2(0,T;\mathbb{R}^2)}J(\bm{v})=\frac{1}{2}\int_0^T|\bm{v}(t)|^2dt+\frac{\alpha_1}{2}\iint_Q|y-y_d|^2dxdt, \end{equation} where $|\bm{v}(t)|=\sqrt{\bm{v}_1(t)^2+\bm{v}_2(t)^2}$ is the Euclidean norm of $\bm{v}(t)$, and $y$ is obtained from $\bm{v}$ via the solution of the state equation (\ref{state_equation}). Since the control $\bm{v}$ is taken in a finite-dimensional space, the divergence-free constraint $\nabla\cdot\bm{v}=0$ is automatically satisfied. As a consequence, the first-order differential $DJ(\bm{v})$ can be computed directly. Indeed, it is easy to show that \begin{equation}\label{oc_finite} DJ(\bm{v})=\left\{\bm{v}_i(t)+\int_\Omega y(t)\frac{\partial p(t)}{\partial x_i}dx\right \}_{i=1}^2,~\text{a.e.~on}~(0,T),\forall \bm{v}\in L^2(0,T;\mathbb{R}^2), \end{equation} where $p(t)$ is the solution of the adjoint equation (\ref{adjoint_equation}). The inner preconditioned CG algorithm (\textbf{DG1})--(\textbf{DG5}) for the computation of the gradient $\{\bm{g}_n\}_{n=1}^N$ is thus avoided. In order to examine the efficiency of the proposed CG algorithm (\textbf{DI})--(\textbf{DV}), we construct an example with a known exact solution.
To this end, we set $\nu=1$ and $a_0=1$ in (\ref{state_equation}), and $$ y=e^t(-3\sin(2\pi x_1)\sin(\pi x_2)+1.5\sin(\pi x_1)\sin(2\pi x_2)),\quad p=(T-t)\sin \pi x_1 \sin \pi x_2. $$ Substituting these two functions into the optimality condition $DJ(\bm{u}(t))=0$, we have $$ \bm{u}=(\bm{u}_1,\bm{u}_2)^\top=(2e^t(T-t),-e^t(T-t))^\top. $$ We further set \begin{eqnarray*} &&f=\frac{\partial y}{\partial t}-\nabla^2y+{\bm{u}}\cdot \nabla y+y, \quad\phi=-3\sin(2\pi x_1)\sin(\pi x_2)+1.5\sin(\pi x_1)\sin(2\pi x_2),\\ &&y_d=y-\frac{1}{\alpha_1}\left(-\frac{\partial p}{\partial t} -\nabla^2p-\bm{u}\cdot\nabla p +p\right),\quad g=0. \end{eqnarray*} Then, it is easy to verify that $\bm{u}$ is a solution point of the problem (\ref{model_ex1}). We display the solution $\bm{u}$ and the target function $y_d$ at different instants of time in Figure \ref{exactU_ex1} and Figure \ref{target_ex1}, respectively. \begin{figure}[htpb] \centering{ \includegraphics[width=0.43\textwidth]{exact_u.pdf} } \caption{The exact optimal control $\bm{u}$ for Example 1.} \label{exactU_ex1} \end{figure} \begin{figure}[htpb] \centering{ \includegraphics[width=0.3\textwidth]{target25.pdf} \includegraphics[width=0.3\textwidth]{target50.pdf} \includegraphics[width=0.3\textwidth]{target75.pdf} } \caption{The target function $y_d$ at $t=0.25, 0.5$ and $0.75$ (from left to right) for Example 1.} \label{target_ex1} \end{figure} The stopping criterion of the CG algorithm (\textbf{DI})--(\textbf{DV}) is set as $$ \frac{\Delta t\sum_{n=1}^N|\bm{g}^{k+1}_n|^2}{\Delta t\sum_{n=1}^N|\bm{g}^{0}_n|^2}\leq 10^{-5}. $$ The initial value is chosen as $\bm{u}^0=(0,0)^\top$; and we denote by $\bm{u}^{\Delta t}$ and $y_h^{\Delta t}$ the computed control and state, respectively. First, we take $h=\frac{1}{2^i}, i=5,6,7,8$, $\Delta t=\frac{h}{2}$ and $\alpha_1=10^6$, and implement the proposed CG algorithm (\textbf{DI})--(\textbf{DV}) for solving the problem (\ref{model_ex1}). 
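As a sanity check (a sketch, not part of the paper's MATLAB code), the optimality condition (\ref{oc_finite}) can be verified for this manufactured solution by numerical quadrature: $u_i(t)=-\int_\Omega y(t)\,\partial p(t)/\partial x_i\,dx$ should reproduce $\bm{u}=(2e^t(T-t),-e^t(T-t))^\top$. The grid size and the time instant below are arbitrary choices:

```python
import numpy as np

# Verify u_i(t) = -int_Omega y(t) * dp/dx_i dx for the manufactured solution
# of Example 1 (T = 1), using the midpoint rule on a uniform grid.
T, n = 1.0, 1000
s = (np.arange(n) + 0.5) / n                    # midpoint nodes on (0, 1)
x1, x2 = np.meshgrid(s, s, indexing="ij")
w = 1.0 / n**2                                  # quadrature weight

def u_from_optimality(t):
    y = np.exp(t) * (-3.0 * np.sin(2*np.pi*x1) * np.sin(np.pi*x2)
                     + 1.5 * np.sin(np.pi*x1) * np.sin(2*np.pi*x2))
    # gradient of p = (T - t) * sin(pi x1) * sin(pi x2)
    p_x1 = (T - t) * np.pi * np.cos(np.pi*x1) * np.sin(np.pi*x2)
    p_x2 = (T - t) * np.pi * np.sin(np.pi*x1) * np.cos(np.pi*x2)
    return np.array([-np.sum(y * p_x1) * w, -np.sum(y * p_x2) * w])

t = 0.3
print(u_from_optimality(t))                     # close to (2 e^t (T-t), -e^t (T-t))
print(np.array([2.0, -1.0]) * np.exp(t) * (T - t))
```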
The numerical results reported in Table \ref{tab:mesh_EX1} show that the CG algorithm converges fairly fast and is robust with respect to different mesh sizes. We also observe that the target function $y_d$ is reached to good accuracy. Similar comments hold for the approximation of the optimal control $\bm{u}$ and of the state $y$ of problem (\ref{model_ex1}). By taking $h=\frac{1}{2^7}$ and $\Delta t=\frac{1}{2^8}$, the computed state $y_h^{\Delta t}$ and $y_h^{\Delta t}-y_d$ at $t=0.25,0.5$ and $0.75$ are reported in Figures \ref{stateEx1_1}, \ref{stateEx1_2} and \ref{stateEx1_3}, respectively; and the computed control $\bm{u}^{\Delta t}$ and error $\bm{u}^{\Delta t}-\bm{u}$ are visualized in Figure \ref{controlEx1}. \begin{table}[htpb] {\small\centering \caption{Results of the CG algorithm (\textbf{DI})--(\textbf{DV}) with different $h$ and $\Delta t$ for Example 1.} \label{tab:mesh_EX1} \begin{tabular}{|c|c|c|c|c|} \hline Mesh sizes &$Iter$& $\|\bm{u}^{\Delta t}-\bm{u}\|_{L^2(0,T;\mathbb{R}^2)}$&$\|y_h^{\Delta t}-y\|_{L^2(Q)}$& ${\|y_h^{\Delta t}-y_d\|_{L^2(Q)}}/{\|y_d\|_{{L^2(Q)}}}$ \\ \hline $h=1/2^5,\Delta t=1/2^6$ & 117 &2.8820$\times 10^{-2}$ &1.1569$\times 10^{-2}$&3.8433$\times 10^{-3}$ \\ \hline $h=1/2^6,\Delta t=1/2^7$ &48&1.3912$\times 10^{-2}$& 2.5739$\times 10^{-3}$&8.5623$\times 10^{-4}$ \\ \hline $h=1/2^7,\Delta t=1/2^8$ &48&6.9095$\times 10^{-3}$& 4.8574$\times 10^{-4}$ &1.6516$\times 10^{-4}$ \\ \hline $h=1/2^8,\Delta t=1/2^9$& 31 &3.4845$\times 10^{-3}$ &6.6231$\times 10^{-5}$ &2.2196$\times 10^{-5}$ \\ \hline \end{tabular}} \end{table} \begin{figure}[htpb] \centering{ \includegraphics[width=0.3\textwidth]{soln_y25.pdf} \includegraphics[width=0.3\textwidth]{err_y25.pdf} \includegraphics[width=0.3\textwidth]{dis_y_25.pdf}} \caption{Computed state $y^{\Delta t}_h$, error $y^{\Delta t}_h-y$ and $y^{\Delta t}_h-y_d$ (from left to right) at $t=0.25$ for Example 1.} \label{stateEx1_1} \end{figure} \begin{figure}[htpb] \centering{
\includegraphics[width=0.3\textwidth]{soln_y50.pdf} \includegraphics[width=0.3\textwidth]{err_y.pdf} \includegraphics[width=0.3\textwidth]{dis_y_50.pdf}} \caption{Computed state $y^{\Delta t}_h$, error $y^{\Delta t}_h-y$ and $y^{\Delta t}_h-y_d$ (from left to right) at $t=0.5$ for Example 1.} \label{stateEx1_2} \end{figure} \begin{figure}[htpb] \centering{ \includegraphics[width=0.3\textwidth]{soln_y75.pdf} \includegraphics[width=0.3\textwidth]{err_y75.pdf} \includegraphics[width=0.3\textwidth]{dis_y_75.pdf}} \caption{Computed state $y^{\Delta t}_h$, error $y^{\Delta t}_h-y$ and $y^{\Delta t}_h-y_d$ (from left to right) at $t=0.75$ for Example 1.} \label{stateEx1_3} \end{figure} \begin{figure}[htpb] \centering{ \includegraphics[width=0.45\textwidth]{soln_u.pdf} \includegraphics[width=0.45\textwidth]{err_u.pdf} } \caption{Computed optimal control $\bm{u}^{\Delta t}$ and error $\bm{u}^{\Delta t}-\bm{u}$ for Example 1.} \label{controlEx1} \end{figure} Furthermore, we tested the proposed CG algorithm (\textbf{DI})--(\textbf{DV}) with $h=\frac{1}{2^6}$ and $\Delta t=\frac{1}{2^7}$ for different values of the penalty parameter $\alpha_1$. The results reported in Table \ref{reg_EX1} show that the performance of the proposed CG algorithm is robust with respect to the penalty parameter, at least for the example under consideration. We also observe that as $\alpha_1$ increases, the value of $\frac{\|y_h^{\Delta t}-y_d\|_{L^2(Q)}}{\|y_d\|_{{L^2(Q)}}}$ decreases. This implies that, as expected, the computed state $y_h^{\Delta t}$ is closer to the target function $y_d$ as the penalty parameter increases.
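The monotone dependence on $\alpha_1$ is already visible in a linear-quadratic caricature of the tracking problem (a hedged sketch with synthetic data; the matrix $A$ stands in for the control-to-state map, which in (BCP) is nonlinear): the minimizer of $\frac{1}{2}\|v\|^2+\frac{\alpha}{2}\|Av-y_d\|^2$ solves $(I+\alpha A^\top A)v=\alpha A^\top y_d$, and the tracking residual decreases monotonically as $\alpha$ grows:

```python
import numpy as np

# Toy linear-quadratic analogue: larger penalty alpha gives a smaller
# tracking residual |A v(alpha) - yd|.  A and yd are synthetic.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 20))
yd = rng.standard_normal(30)

def tracking_error(alpha):
    # normal equations of min 1/2 |v|^2 + alpha/2 |A v - yd|^2
    v = np.linalg.solve(np.eye(20) + alpha * A.T @ A, alpha * A.T @ yd)
    return np.linalg.norm(A @ v - yd)

errs = [tracking_error(a) for a in (1e2, 1e4, 1e6)]
print(all(e2 <= e1 for e1, e2 in zip(errs, errs[1:])))  # True
```

In the eigenbasis of $AA^\top$ the residual is $\sum_i \hat{y}_i^2/(1+\alpha\sigma_i^2)^2$, which explains the strict decrease; in the limit $\alpha\to\infty$ the residual tends to the distance from $y_d$ to the range of $A$, mirroring the behavior of the last column of Table \ref{reg_EX1}.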
\begin{table}[htpb] {\small \centering \caption{Results of the CG algorithm (\textbf{DI})--(\textbf{DV}) with different $\alpha_1$ for Example 1.} \begin{tabular}{|c|c|c|c|c|c|} \hline $\alpha_1$ &$Iter$& $CPU(s)$&$\|\bm{u}^{\Delta t}-\bm{u}\|_{L^2(0,T;\mathbb{R}^2)}$&$\|y_h^{\Delta t}-y\|_{L^2(Q)}$& $\frac{\|y_h^{\Delta t}-y_d\|_{L^2(Q)}}{\|y_d\|_{{L^2(Q)}}}$ \\ \hline $10^4$ & 46 & 126.0666&1.3872$\times 10^{-2}$ &2.5739$\times 10^{-3}$ & 8.7666$\times 10^{-4}$ \\ \hline $10^5$ & 48 & 126.4185 &1.3908$\times 10^{-2}$ &2.5739$\times 10^{-3}$ &8.6596$\times 10^{-4}$ \\ \hline $10^6$ &48&128.2346 &1.3912$\times 10^{-2}$ & 2.5739$\times 10^{-3}$ &8.5623$\times 10^{-4}$ \\ \hline $10^7$ &48 & 127.1858&1.3912$\times 10^{-2}$&2.5739$\times 10^{-3}$ &8.5612$\times 10^{-4}$ \\ \hline $10^8$& 48 & 124.1160&1.3912$\times 10^{-2}$&2.5739$\times 10^{-3}$ &8.5610$\times 10^{-4}$ \\ \hline \end{tabular} \label{reg_EX1} } \end{table} \medskip \noindent\textbf{Example 2.} As in Example 1, we consider the bilinear optimal control problem (BCP) on the domain $Q=\Omega\times(0,T)$ with $\Omega=(0,1)^2$ and $T=1$. Now, we take the control $\bm{v}(x,t)$ in the infinite-dimensional space $\mathcal{U}=\{\bm{v}|\bm{v}\in [L^2(Q)]^2, \nabla\cdot\bm{v}=0\}.$ We set $\alpha_2=0$ in (\ref{objective_functional}), $\nu=1$ and $a_0=1$ in (\ref{state_equation}), and consider the following tracking-type bilinear optimal control problem: \begin{equation}\label{model_ex2} \min_{\bm{v}\in\mathcal{U}}J(\bm{v})=\frac{1}{2}\iint_Q|\bm{v}|^2dxdt+\frac{\alpha_1}{2}\iint_Q|y-y_d|^2dxdt, \end{equation} where $y$ is obtained from $\bm{v}$ via the solution of the state equation (\ref{state_equation}). First, we let \begin{eqnarray*} &&y=e^t(-3\sin(2\pi x_1)\sin(\pi x_2)+1.5\sin(\pi x_1)\sin(2\pi x_2)),\\ &&p=(T-t)\sin \pi x_1 \sin \pi x_2, ~ \text{and} ~\bm{u}=P_{\mathcal{U}}(p\nabla y), \end{eqnarray*} where $P_{\mathcal{U}}(\cdot)$ is the projection onto the set $\mathcal{U}$. 
We further set \begin{eqnarray*} &&f=\frac{\partial y}{\partial t}-\nabla^2y+{\bm{u}}\cdot \nabla y+y, \quad\phi=-3\sin(2\pi x_1)\sin(\pi x_2)+1.5\sin(\pi x_1)\sin(2\pi x_2),\\ &&y_d=y-\frac{1}{\alpha_1}\left(-\frac{\partial p}{\partial t} -\nabla^2p-\bm{u}\cdot\nabla p +p\right),\quad g=0. \end{eqnarray*} Then, it is easy to show that $\bm{u}$ is a solution point of the problem (\ref{model_ex2}). We note that the projection $\bm{u}=P_{\mathcal{U}}(p\nabla y)$ has no analytical expression and can only be computed numerically. Here, we compute $\bm{u}=P_{\mathcal{U}}(p\nabla y)$ by the preconditioned CG algorithm (\textbf{DG1})--(\textbf{DG5}) with $h=\frac{1}{2^9}$ and $\Delta t=\frac{1}{2^{10}}$, and use the resulting control $\bm{u}$ as a reference solution for this example. \begin{figure}[htpb] \centering{ \includegraphics[width=0.3\textwidth]{ex2_target25.pdf} \includegraphics[width=0.3\textwidth]{ex2_target50.pdf} \includegraphics[width=0.3\textwidth]{ex2_target75.pdf} } \caption{The target function $y_d$ with $h=\frac{1}{2^7}$ and $\Delta t=\frac{1}{2^8}$ at $t=0.25, 0.5$ and $0.75$ (from left to right) for Example 2.} \label{target_ex2} \end{figure} The stopping criteria of the outer CG algorithm (\textbf{DI})--(\textbf{DV}) and the inner preconditioned CG algorithm (\textbf{DG1})--(\textbf{DG5}) are set, respectively, as $$ \frac{\Delta t\sum_{n=1}^N\int_\Omega|\bm{g}_n^{k+1}|^2dx}{\Delta t\sum_{n=1}^N\int_\Omega|\bm{g}_n^{0}|^2dx}\leq 5\times10^{-8}, ~\text{and}~\frac{\int_\Omega|\nabla r^{k+1}|^2dx}{\max\{1,\int_\Omega|\nabla r^0|^2dx\}}\leq 10^{-8}. $$ The initial values are chosen as $\bm{u}^0=(0,0)^\top$ and $\lambda^0=0$; and we denote by $\bm{u}_h^{\Delta t}$ and $y_h^{\Delta t}$ the computed control and state, respectively. First, we take $h=\frac{1}{2^i}, i=6,7,8$, $\Delta t=\frac{h}{2}$, $\alpha_1=10^6$, and implement the proposed nested CG algorithm (\textbf{DI})--(\textbf{DV}) for solving the problem (\ref{model_ex2}).
The numerical results reported in Table \ref{tab:mesh_EX2} show that the CG algorithm converges fast and is robust with respect to different mesh sizes. In addition, the preconditioned CG algorithm (\textbf{DG1})--(\textbf{DG5}) converges within 10 iterations in all cases and is thus efficient for computing the gradient $\{\bm{g}_n\}_{n=1}^N$. We also observe that the target function $y_d$ is reached to good accuracy. Similar comments hold for the approximation of the optimal control $\bm{u}$ and of the state $y$ of problem (\ref{model_ex2}). \begin{table}[htpb] {\small \centering \caption{Results of the nested CG algorithm (\textbf{DI})--(\textbf{DV}) with different $h$ and $\Delta t$ for Example 2.} \begin{tabular}{|c|c|c|c|c|c|} \hline Mesh sizes &$Iter_{CG}$&$MaxIter_{PCG}$& $\|\bm{u}_h^{\Delta t}-\bm{u}\|_{L^2(Q)}$&$\|y_h^{\Delta t}-y\|_{L^2(Q)}$& $\frac{\|y_h^{\Delta t}-y_d\|_{L^2(Q)}}{\|y_d\|_{{L^2(Q)}}}$ \\ \hline $h=1/2^6,\Delta t=1/2^7$ &443&9&3.7450$\times 10^{-3}$& 9.7930$\times 10^{-5}$&1.0906$\times 10^{-6}$ \\ \hline $h=1/2^7,\Delta t=1/2^8$ &410&9&1.8990$\times 10^{-3}$& 1.7423$\times 10^{-5}$ & 3.3863$\times 10^{-7}$ \\ \hline $h=1/2^8,\Delta t=1/2^9$& 405&8 &1.1223$\times 10^{-3}$ &4.4003$\times 10^{-6}$ &1.0378$\times 10^{-7}$ \\ \hline \end{tabular} \label{tab:mesh_EX2} } \end{table} Taking $h=\frac{1}{2^7}$ and $\Delta t=\frac{1}{2^8}$, the computed state $y_h^{\Delta t}$, the error $y_h^{\Delta t}-y$ and $y_h^{\Delta t}-y_d$ at $t=0.25,0.5,0.75$ are reported in Figures \ref{stateEx2_1}, \ref{stateEx2_2} and \ref{stateEx2_3}, respectively; and the computed control $\bm{u}_h^{\Delta t}$, the exact control $\bm{u}$, and the error $\bm{u}_h^{\Delta t}-\bm{u}$ at $t=0.25,0.5,0.75$ are presented in Figures \ref{controlEx2_1}, \ref{controlEx2_2} and \ref{controlEx2_3}.
\begin{figure}[htpb] \centering{ \includegraphics[width=0.3\textwidth]{ex2_soln_y25.pdf} \includegraphics[width=0.3\textwidth]{ex2_err_y25.pdf} \includegraphics[width=0.3\textwidth]{ex2_dis_y_25.pdf}} \caption{Computed state $y^{\Delta t}_h$, error $y^{\Delta t}_h-y$ and $y^{\Delta t}_h-y_d$ with $h=\frac{1}{2^7}$ and $\Delta t=\frac{1}{2^8}$ (from left to right) at $t=0.25$ for Example 2.} \label{stateEx2_1} \end{figure} \begin{figure}[htpb] \centering{ \includegraphics[width=0.3\textwidth]{ex2_soln_y50.pdf} \includegraphics[width=0.3\textwidth]{ex2_err_y.pdf} \includegraphics[width=0.3\textwidth]{ex2_dis_y_50.pdf}} \caption{Computed state $y^{\Delta t}_h$, error $y^{\Delta t}_h-y$ and $y^{\Delta t}_h-y_d$ with $h=\frac{1}{2^7}$ and $\Delta t=\frac{1}{2^8}$ (from left to right) at $t=0.5$ for Example 2.} \label{stateEx2_2} \end{figure} \begin{figure}[htpb] \centering{ \includegraphics[width=0.3\textwidth]{ex2_soln_y75.pdf} \includegraphics[width=0.3\textwidth]{ex2_err_y75.pdf} \includegraphics[width=0.3\textwidth]{ex2_dis_y_75.pdf}} \caption{Computed state $y^{\Delta t}_h$, error $y^{\Delta t}_h-y$ and $y^{\Delta t}_h-y_d$ with $h=\frac{1}{2^7}$ and $\Delta t=\frac{1}{2^8}$ (from left to right) at $t=0.75$ for Example 2.} \label{stateEx2_3} \end{figure} \begin{figure}[htpb] \centering{ \includegraphics[width=0.45\textwidth]{ex2_u25.pdf} \includegraphics[width=0.45\textwidth]{ex2_erru25.pdf}} \caption{Computed control $\bm{u}^{\Delta t}_h$ and exact control $\bm{u}$ (left, from top to bottom) and the error $\bm{u}^{\Delta t}_h-\bm{u}$ (right) with $h=\frac{1}{2^7}$ and $\Delta t=\frac{1}{2^8}$ at $t=0.25$ for Example 2.} \label{controlEx2_1} \end{figure} \begin{figure}[htpb] \centering{ \includegraphics[width=0.45\textwidth]{ex2_u50.pdf} \includegraphics[width=0.45\textwidth]{ex2_erru50.pdf}} \caption{Computed control $\bm{u}^{\Delta t}_h$ and exact control $\bm{u}$ (left, from top to bottom) and the error $\bm{u}^{\Delta t}_h-\bm{u}$ (right) with $h=\frac{1}{2^7}$ 
and $\Delta t=\frac{1}{2^8}$ at $t=0.5$ for Example 2.} \label{controlEx2_2} \end{figure} \begin{figure}[htpb] \centering{ \includegraphics[width=0.45\textwidth]{ex2_u75.pdf} \includegraphics[width=0.45\textwidth]{ex2_erru75.pdf}} \caption{Computed control $\bm{u}^{\Delta t}_h$ and exact control $\bm{u}$ (left, from top to bottom) and the error $\bm{u}^{\Delta t}_h-\bm{u}$ (right) with $h=\frac{1}{2^7}$ and $\Delta t=\frac{1}{2^8}$ at $t=0.75$ for Example 2.} \label{controlEx2_3} \end{figure} \newpage \section{Conclusion and Outlook}\label{se:conclusion} We studied the bilinear control of an advection-reaction-diffusion system, where the control variable enters the model as the velocity field of the advection term. Mathematically, we proved the existence of optimal controls and derived the associated first-order optimality conditions. Computationally, we proposed a conjugate gradient (CG) method, whose implementation is nontrivial. In particular, the additional divergence-free constraint on the control variable leads to a projection subproblem in the computation of the gradient, and the computation of a stepsize at each CG iteration requires solving the state equation repeatedly because of the nonlinear relation between the state and control variables. To resolve these issues, we reformulated the gradient computation as a Stokes-type problem and proposed a fast preconditioned CG method to solve it. We also proposed an efficient inexact stepsize strategy, which requires the solution of only one linear parabolic equation. An easily implementable nested CG method was thus obtained. For the numerical discretization, we employed the standard piecewise linear finite element method and the Bercovier-Pironneau finite element method for the space discretizations of the bilinear optimal control problem and the Stokes-type problem, respectively, and a semi-implicit finite difference method for the time discretization.
Preliminary numerical experiments showed the resulting algorithm to be numerically efficient. In this paper, we focused on an advection-reaction-diffusion system controlled by a velocity field of general form. In a real physical system, the velocity field may itself be determined by partial differential equations (PDEs), such as the Navier-Stokes equations; one then encounters bilinear optimal control problems constrained by coupled PDE systems. Moreover, instead of (\ref{objective_functional}), one can also consider other types of objective functionals in the bilinear optimal control of an advection-reaction-diffusion system. For instance, one can incorporate $\iint_{Q}|\nabla \bm{v}|^2dxdt$ or $\iint_{Q}|\frac{\partial \bm{v}}{\partial t}|^2dxdt$ into the objective functional to promote an optimal velocity field with little rotation, or one that is almost steady, respectively; such properties are essential in, e.g., mixing enhancement for different flows \cite{liu2008}. All these problems are of practical interest but are more challenging from the perspective of algorithmic design, and they have not been well addressed numerically in the literature. Our current work lays a foundation for solving these problems, and we leave them for future work. \bibliographystyle{amsplain} \section{Introduction} \subsection{Background and Motivation} The optimal control of distributed parameter systems has important applications in various scientific areas, such as physics, chemistry, engineering, medicine, and finance. We refer to, e.g., \cite{glowinski1994exact, glowinski1995exact, glowinski2008exact, lions1971optimal, troltzsch2010optimal,zuazua2006}, for a few references. In a typical mathematical model of a controlled distributed parameter system, either boundary or internal locally distributed controls are usually used; these controls have localized support and are called additive controls because they arise in the model equations as additive terms.
Optimal control problems with additive controls have received significant attention in recent decades, following the pioneering work of J. L. Lions \cite{lions1971optimal}, and many mathematical and computational tools have been developed; see e.g., \cite{glowinski1994exact,glowinski1995exact,glowinski2008exact,lions1988,zuazua2005,zuazua2007}. However, it is worth noting that additive controls describe the effect of external added sources or forces and do not change the principal intrinsic properties of the controlled system. Hence, they are not suitable for dealing with processes whose principal intrinsic properties should be changed by the control action. For instance, if we aim at changing the reaction rate in some chain reaction-type processes from biomedical, nuclear, and chemical applications, additive controls amount to controlling the chain reaction by adding or withdrawing a certain amount of the reactants, which is not realistic. To address this issue, a natural idea is to use certain catalysts or smart materials to control the system; this can be mathematically modeled by optimal control problems with bilinear controls. We refer to \cite{khapalov2010} for more detailed discussions. Bilinear controls, also known as multiplicative controls, enter the model as coefficients of the corresponding partial differential equations (PDEs). Such controls can change some of the main physical characteristics of the system under investigation, such as the natural frequency response of a beam or the rate of a chemical reaction.
In the literature, bilinear control of distributed parameter systems has become an increasingly popular topic, and bilinear optimal control problems constrained by various PDEs, such as elliptic equations \cite{kroner2009}, convection-diffusion equations \cite{borzi2015}, parabolic equations \cite{khapalov2003}, the Schr{\"o}dinger equation \cite{kunisch2007} and the Fokker-Planck equation \cite{fleig2017}, have been widely studied both mathematically and computationally. In particular, bilinear controls play a crucial role in optimal control problems modeled by advection-reaction-diffusion systems. On the one hand, the control can be the coefficient of the diffusion or the reaction term. For instance, a system controlled by so-called catalysts, which can accelerate or slow down various chemical or biological reactions, can be modeled by a bilinear optimal control problem for an advection-reaction-diffusion equation where the control arises as the coefficient of the reaction term \cite{khapalov2003}; this kind of bilinear optimal control problem has been studied in e.g., \cite{borzi2015,cannarsa2017,khapalov2003,khapalov2010}. On the other hand, the system can also be controlled by the velocity field in the advection term, which captures important applications in e.g., bioremediation \cite{lenhart1998}, environmental remediation processes \cite{lenhart1995}, and mixing enhancement of different fluids \cite{liu2008}. We note that very limited research has been done on bilinear optimal control problems controlled by the velocity field; only some special one-dimensional cases have been studied in \cite{lenhart1998,joshi2005,lenhart1995}, regarding the existence of an optimal control and the derivation of first-order optimality conditions. To the best of our knowledge, no work has been done yet to develop efficient numerical methods for solving multi-dimensional bilinear optimal control problems controlled by the velocity field in the advection term.
All these facts motivate us to study bilinear optimal control problems constrained by an advection-reaction-diffusion equation, where the control enters into the model as the velocity field in the advection term. Actually, investigating this kind of problems was suggested to one of us (R. Glowinski), in the late 1990's, by J. L. Lions (1928-2001). \subsection{Model} Let $\Omega$ be a bounded domain of $\mathbb{R}^d$ with $d\geq 1$ and let $\Gamma$ be its boundary. We consider the following bilinear optimal control problem: \begin{flalign}\tag{BCP} & \left\{ \begin{aligned} & \bm{u}\in \mathcal{U}, \\ &J(\bm{u})\leq J(\bm{v}), \forall \bm{v}\in \mathcal{U}, \end{aligned} \right. \end{flalign} with the objective functional $J$ defined by \begin{equation}\label{objective_functional} J(\bm{v})=\frac{1}{2}\iint_Q|\bm{v}|^2dxdt+\frac{\alpha_1}{2}\iint_Q|y-y_d|^2dxdt+\frac{\alpha_2}{2}\int_\Omega|y(T)-y_T|^2dx, \end{equation} and $y=y(t;\bm{v})$ the solution of the following advection-reaction-diffusion equation \begin{flalign}\label{state_equation} & \left\{ \begin{aligned} \frac{\partial y}{\partial t}-\nu \nabla^2y+\bm{v}\cdot \nabla y+a_0y&=f\quad \text{in}\quad Q, \\ y&=g\quad \text{on}\quad \Sigma,\\ y(0)&=\phi. \end{aligned} \right. \end{flalign} Above and below, $Q=\Omega\times (0,T)$ and $\Sigma=\Gamma\times (0,T)$ with $0<T<+\infty$; $\alpha_1\geq 0, \alpha_2\geq 0, \alpha_1+\alpha_2>0$; the target functions $y_d$ and $y_T$ are given in $L^2(Q)$ and $L^2(\Omega)$, respectively; the diffusion coefficient $\nu>0$ and the reaction coefficient $a_0$ are assumed to be constants; the functions $f\in L^2(Q)$, $g\in L^2(0,T;H^{1/2}(\Gamma))$ and $\phi\in L^2(\Omega)$. The set $\mathcal{U}$ of the admissible controls is defined by \begin{equation*} \mathcal{U}:=\{\bm{v}|\bm{v}\in [L^2(Q)]^d, \nabla\cdot\bm{v}=0\}. 
\end{equation*} Clearly, the control variable $\bm{v}$ arises in (BCP) as a flow velocity field in the advection term of (\ref{state_equation}), and the divergence-free constraint $\nabla\cdot\bm{v}=0$ implies that the flow is incompressible. One can control the system by changing the flow velocity $\bm{v}$ so that $y$ and $y(T)$ are good approximations to $y_d$ and $y_T$, respectively. \subsection{Difficulties and Goals} In this paper, we intend to study the bilinear optimal control problem (BCP) in the general case of $d\geq 2$, both mathematically and computationally. Precisely, we first study the well-posedness of (\ref{state_equation}), the existence of an optimal control $\bm{u}$, and its first-order optimality condition. Then, computationally, we propose an efficient and relatively easy-to-implement numerical method to solve (BCP). For this purpose, we advocate combining a conjugate gradient (CG) method with a finite difference method (for the time discretization) and a finite element method (for the space discretization) for the numerical solution of (BCP). Although these numerical approaches are well developed in the literature, it is nontrivial to implement them for (BCP), as discussed below, due to the complicated problem setting. \subsubsection{Difficulties in Algorithmic Design} Conceptually, a CG method for solving (BCP) can be easily derived following \cite{glowinski2008exact}. However, CG algorithms are challenging to implement numerically for the following reasons: (1) the state $y$ depends nonlinearly on the control $\bm{v}$, despite the fact that the state equation (\ref{state_equation}) is linear; (2) the additional divergence-free constraint on the control $\bm{v}$, i.e., $\nabla\cdot\bm{v}=0$, is coupled with the state equation (\ref{state_equation}). To be more precise, the fact that the state $y$ is a nonlinear function of the control $\bm{v}$ makes the optimality system a nonlinear problem.
Hence, seeking a suitable stepsize in each CG iteration requires solving an optimization problem; the stepsize cannot be computed as easily as in the linear case \cite{glowinski2008exact}. Note that commonly used line search strategies are too expensive to employ in our setting because they require evaluating the objective functional value $J(\bm{v})$ repeatedly, and every evaluation of $J(\bm{v})$ entails solving the state equation (\ref{state_equation}). The same concern about the computational cost applies when the Newton method is employed to solve the corresponding optimization problem for finding a stepsize. To tackle this issue, we propose an efficient inexact stepsize strategy which requires solving only one additional linear parabolic problem and is cheap to implement, as shown in Section \ref{se:cg}. Furthermore, due to the divergence-free constraint $\nabla\cdot\bm{v}=0$, an extra projection onto the admissible set $\mathcal{U}$ is required to compute the first-order differential of $J$ at each CG iteration so that all iterates of the CG method remain feasible. Generally, this projection subproblem has no closed-form solution and has to be solved iteratively. Here, we introduce a Lagrange multiplier associated with the constraint $\nabla\cdot\bm{v}=0$; the computation of the first-order differential $DJ(\bm{v})$ of $J$ at $\bm{v}$ then becomes equivalent to solving a Stokes-type problem. Inspired by \cite{glowinski2003}, we advocate employing a preconditioned CG method, which operates on the space of the Lagrange multiplier, to solve the resulting Stokes-type problem. With an appropriately chosen preconditioner, fast convergence of the resulting preconditioned CG method can be expected in practice (and has indeed been observed).
\subsubsection{Difficulties in Numerical Discretization} For the numerical discretization of (BCP), we note that if an implicit finite difference scheme is used for the time discretization of the state equation (\ref{state_equation}), a stationary advection-reaction-diffusion equation has to be solved at each time step. For such stationary equations, it is well known that standard finite element techniques may lead to strongly oscillatory solutions unless the mesh size is sufficiently small with respect to the ratio between $\nu$ and $\|\bm{v}\|$. In the context of optimal control problems, different stabilized finite element methods have been proposed and analyzed to overcome this difficulty; see, e.g., \cite{BV07,DQ05}. Unlike the above references, we implement the time discretization by a semi-implicit finite difference method for simplicity; namely, we treat the advection and reaction terms explicitly and the diffusion term implicitly. Consequently, only a simple linear elliptic equation needs to be solved at each time step. We then implement the space discretization of the resulting elliptic equation at each time step by a standard piecewise linear finite element method, and the resulting linear system is easy to solve. Moreover, we recall that the divergence-free constraint $\nabla\cdot \bm{v}=0$ leads to a projection subproblem, which is equivalent to a Stokes-type problem, at each iteration of the CG algorithm. As discussed in \cite{glowinski1992}, a direct application of standard finite element methods to a Stokes-type problem leads to an ill-posed discrete problem. To overcome this difficulty, one can use different types of element approximations for the pressure and the velocity.
Inspired by \cite{glowinski1992,glowinski2003}, we employ the Bercovier-Pironneau finite element pair \cite{BP79} (also known as the $P_1$-$P_1$ iso $P_2$ finite element) to approximate the control $\bm{v}$ and the Lagrange multiplier associated with the divergence-free constraint. More concretely, we approximate the Lagrange multiplier by a piecewise linear finite element space defined on a mesh twice as coarse as the one for the control $\bm{v}$. In this way, the discrete problem is well-posed and can be solved by a preconditioned CG method. As a byproduct of the above discretization, the total number of degrees of freedom of the discrete Lagrange multiplier is only $\frac{1}{d2^d}$ of that of the discrete control. Hence, the inner preconditioned CG method operates in a space of lower dimension than that of the state equation (\ref{state_equation}), implying a reduction in computational cost. With the above-mentioned discretization schemes, we can relatively easily obtain the fully discrete version of (BCP) and derive the discrete analogue of our proposed nested CG method. \subsection{Organization} An outline of this paper is as follows. In Section \ref{se:existence and oc}, we prove the existence of optimal controls for (BCP) and derive the associated first-order optimality conditions. An easily implementable nested CG method is proposed in Section \ref{se:cg} for solving (BCP) numerically. In Section \ref{se:discretization}, we discuss the numerical discretization of (BCP) by finite difference and finite element methods. Some preliminary numerical results are reported in Section \ref{se:numerical} to validate the efficiency of our proposed numerical approach. Finally, some conclusions are drawn in Section \ref{se:conclusion}. \section{Existence of optimal controls and first-order optimality conditions}\label{se:existence and oc} In this section, we first present some notation and known results from the literature that will be used in the later analysis.
Then, we prove the existence of optimal controls for (BCP) and derive the associated first-order optimality conditions. For convenience, and without loss of generality, we assume that $f=0$ and $g=0$ in (\ref{state_equation}). \subsection{Preliminaries} Throughout, we denote by $L^s(\Omega)$ and $H^s(\Omega)$ the usual Lebesgue and Sobolev spaces for any $s>0$. The space $H_0^s(\Omega)$ denotes the completion of $C_0^{\infty}(\Omega)$ in $H^s(\Omega)$, where $C_0^{\infty}(\Omega)$ denotes the space of all infinitely differentiable functions over $\Omega$ with compact support in $\Omega$. In addition, we shall also use the following vector-valued function spaces: \begin{eqnarray*} &&\bm{L}^2(\Omega):=[L^2(\Omega)]^d,\\ &&\bm{L}_{div}^2(\Omega):=\{\bm{v}\in \bm{L}^2(\Omega),\nabla\cdot\bm{v}=0~\text{in}~\Omega\}. \end{eqnarray*} Let $X$ be a Banach space with norm $\|\cdot\|_X$; then the space $L^2(0, T;X)$ consists of all measurable functions $z:(0,T)\rightarrow X$ satisfying $$ \|z\|_{L^2(0, T;X)}:=\left(\int_0^T\|z(t)\|_X^2dt \right)^{\frac{1}{2}}<+\infty. $$ With the above notation, it is clear that the admissible set $\mathcal{U}$ can be written as $\mathcal{U}:=L^2(0,T; \bm{L}_{div}^2(\Omega))$. Moreover, the space $W(0,T)$ consists of all functions $z\in L^2(0, T; H_0^1(\Omega))$ whose weak time derivative satisfies $\frac{\partial z}{\partial t}\in L^2(0, T; H^{-1}(\Omega))$, i.e., $$ W(0,T):=\{z|z\in L^2(0,T; H_0^1(\Omega)), \frac{\partial z}{\partial t}\in L^2(0,T; H^{-1}(\Omega))\}, $$ where $H^{-1}(\Omega)(=H_0^1(\Omega)^\prime)$ is the dual space of $H_0^1(\Omega)$. Next, we summarize some known results for the advection-reaction-diffusion equation (\ref{state_equation}) for the convenience of further analysis.
The variational formulation of the state equation (\ref{state_equation}) reads: find $y\in W(0,T)$ such that $y(0)=\phi$ and $\forall z\in L^2(0,T;H_0^1(\Omega))$, \begin{equation}\label{weak_form} \int_0^T\left\langle\frac{\partial y}{\partial t}, z \right\rangle_{H^{-1}(\Omega),H_0^1(\Omega)} dt+\nu\iint_{Q} \nabla y\cdot\nabla z dxdt+\iint_{Q}\bm{v}\cdot\nabla yzdxdt+a_0\iint_{Q} yzdxdt=0, \end{equation} where $\left\langle\cdot,\cdot\right\rangle_{H^{-1}(\Omega),H_0^1(\Omega)}$ denotes the duality pairing between $H^{-1}(\Omega)$ and $H_0^1(\Omega)$. The existence and uniqueness of the solution $y\in W(0,T)$ to problem (\ref{weak_form}) can be proved by standard arguments relying on the Lax-Milgram theorem; we refer to \cite{lions1971optimal} for the details. Moreover, we can define the control-to-state operator $S:\mathcal{U}\rightarrow W(0,T)$, which maps $\bm{v}$ to $y=S(\bm{v})$. Then, the objective functional $J$ in (BCP) can be reformulated as \begin{equation*} J(\bm{v})=\frac{1}{2}\iint_Q|\bm{v}|^2dxdt+\frac{\alpha_1}{2}\iint_Q|S(\bm{v})-y_d|^2dxdt+\frac{\alpha_2}{2}\int_\Omega|S(\bm{v})(T)-y_T|^2dx, \end{equation*} and the nonlinearity of the solution operator $S$ implies that (BCP) is nonconvex. For the solution $y\in W(0,T)$, we have the following estimate. \begin{lemma} Let $\bm{v}\in L^2(0,T; \bm{L}^2_{div}(\Omega))$. Then the solution $y\in W(0,T)$ of the state equation (\ref{state_equation}) satisfies the following estimate: \begin{equation}\label{est_y} \|y(t)\|_{L^2(\Omega)}^2+2\nu\int_0^t\|\nabla y(s)\|_{L^2(\Omega)}^2ds+2a_0\int_0^t\| y(s)\|_{L^2(\Omega)}^2ds=\|\phi\|_{L^2(\Omega)}^2. \end{equation} \end{lemma} \begin{proof} We first multiply the state equation (\ref{state_equation}) by $y(t)$; applying Green's formula in space and using the divergence-free condition $\nabla\cdot\bm{v}=0$ (so that $\int_\Omega(\bm{v}\cdot\nabla y)y\,dx=0$) then yields \begin{equation}\label{e1} \frac{1}{2}\frac{d}{dt}\|y(t)\|_{L^2(\Omega)}^2=-\nu\|\nabla y(t)\|_{L^2(\Omega)}^2-a_0\|y(t)\|_{L^2(\Omega)}^2.
\end{equation} The desired result (\ref{est_y}) follows directly by integrating (\ref{e1}) over $[0,t]$. \end{proof} The above estimate implies that \begin{equation}\label{bd_y} y~\text{is bounded in}~L^2(0,T; H_0^1(\Omega)). \end{equation} On the other hand, $$ \frac{\partial y}{\partial t}=\nu \nabla^2y-\bm{v}\cdot \nabla y-a_0y, $$ and the right-hand side is bounded in $L^2(0,T; H^{-1}(\Omega))$. Hence, \begin{equation}\label{bd_yt} \frac{\partial y}{\partial t}~\text{is bounded in}~ L^2(0,T; H^{-1}(\Omega)). \end{equation} Furthermore, since $\nabla\cdot\bm{v}=0$, it is clear that $$\iint_Q\bm{v}\cdot\nabla yzdxdt=\iint_Q\nabla y\cdot (\bm{v}z)dxdt=-\iint_Q y\nabla\cdot(\bm{v}z)dxdt=-\iint_Q y(\bm{v}\cdot\nabla z)dxdt,\forall z\in L^2(0,T;H_0^1(\Omega)).$$ Hence, the variational formulation (\ref{weak_form}) can be equivalently written as: find $y\in W(0,T)$ such that $y(0)=\phi$ and $\forall z\in L^2(0,T;H_0^1(\Omega))$, \begin{equation*} \int_0^T\left\langle\frac{\partial y}{\partial t}, z \right\rangle_{H^{-1}(\Omega),H_0^1(\Omega)} dt+\nu\iint_{Q} \nabla y\cdot\nabla z dxdt-\iint_Q(\bm{v}\cdot\nabla z) ydxdt+a_0\iint_{Q} yzdxdt=0. \end{equation*} \subsection{Existence of Optimal Controls} With the above preparations, we prove in this subsection the existence of optimal controls for (BCP). For this purpose, we first show that the objective functional $J$ is weakly lower semi-continuous. \begin{lemma}\label{wlsc} The objective functional $J$ given by (\ref{objective_functional}) is weakly lower semi-continuous. That is, if a sequence $\{\bm{v}_n\}$ converges weakly to $\bar{\bm{v}}$ in $L^2(0,T; \bm{L}^2_{div}(\Omega))$, we have $$ J(\bar{\bm{v}})\leq \underset{n\rightarrow \infty}{\lim\inf} J(\bm{v}_n).
$$ \end{lemma} \begin{proof} Let $\{\bm{v}_n\}$ be a sequence that converges weakly to $\bar{\bm{v}}$ in $L^2(0,T;\bm{L}^2_{div}(\Omega))$ and $y_n:=y(x,t;\bm{v}_n)$ the solution of the following variational problem: find $y_n\in W(0,T)$ such that $y_n(0)=\phi$ and $\forall z\in L^2(0,T;H_0^1(\Omega))$, \begin{equation}\label{seq_state} \int_0^T\left\langle\frac{\partial y_n}{\partial t}, z \right\rangle_{H^{-1}(\Omega),H_0^1(\Omega)} dt+\nu\iint_{Q} \nabla y_n\cdot\nabla z dxdt-\iint_Q(\bm{v}_n\cdot\nabla z) y_ndxdt+a_0\iint_{Q} y_nzdxdt=0. \end{equation} Moreover, it follows from (\ref{bd_y}) and (\ref{bd_yt}) that there exists a subsequence of $\{y_n\}$, still denoted by $\{y_n\}$ for convenience, such that $$y_n\rightarrow\bar{y}~\text{weakly in}~ L^2(0,T; H_0^1(\Omega)),$$ and $$\frac{\partial y_n}{\partial t}\rightarrow\frac{\partial \bar{y}}{\partial t} ~\text{weakly in}~L^2(0,T; H^{-1}(\Omega)).$$ Since $\Omega$ is bounded, it follows directly from the compactness property (also known as Rellich's theorem) that $$y_n\rightarrow\bar{y}~\text{strongly in}~ L^2(0,T; L^2(\Omega)).$$ Taking the weak convergence $\bm{v}_n\rightarrow \bar{\bm{v}}$ in $L^2(0,T; \bm{L}_{div}^2(\Omega))$ into account, we can pass to the limit in (\ref{seq_state}) and derive that $\bar{y}(0)=\phi$ and $\forall z\in L^2(0,T;H_0^1(\Omega))$, \begin{equation*} \int_0^T\left\langle\frac{\partial \bar{y}}{\partial t}, z \right\rangle_{H^{-1}(\Omega),H_0^1(\Omega)} dt+\nu\iint_{Q} \nabla \bar{y}\cdot\nabla z dxdt-\iint_Q(\bar{\bm{v}}\cdot\nabla z) \bar{y}dxdt+a_0\iint_{Q} \bar{y}zdxdt=0, \end{equation*} which implies that $\bar{y}$ is the solution of the state equation (\ref{state_equation}) associated with $\bar{\bm{v}}$.
Since the norm of a Banach space is weakly lower semi-continuous, we have \begin{equation*} \begin{aligned} &\underset{n\rightarrow \infty}{\lim\inf} J(\bm{v}_n)\\ = &\underset{n\rightarrow \infty}{\lim\inf}\left( \frac{1}{2}\iint_Q|\bm{v}_n|^2dxdt+\frac{\alpha_1}{2}\iint_Q|y_n-y_d|^2dxdt+\frac{\alpha_2}{2}\int_\Omega|y_n(T)-y_T|^2dx\right)\\ \geq& \frac{1}{2}\iint_Q|\bar{\bm{v}}|^2dxdt+\frac{\alpha_1}{2}\iint_Q|\bar{y}-y_d|^2dxdt+\frac{\alpha_2}{2}\int_\Omega|\bar{y}(T)-y_T|^2dx\\ =& J(\bar{\bm{v}}). \end{aligned} \end{equation*} Hence the objective functional $J$ is weakly lower semi-continuous, which completes the proof. \end{proof} Now, we are in a position to prove the existence of an optimal solution $\bm{u}$ to (BCP). \begin{theorem}\label{thm_existence} There exists at least one optimal control $\bm{u}\in \mathcal{U}=L^2(0,T; \bm{L}_{div}^2(\Omega))$ such that $J(\bm{u})\leq J(\bm{v}),\forall\bm{v}\in \mathcal{U}$. \end{theorem} \begin{proof} We first observe that $J(\bm{v})\geq 0,\forall\bm{v}\in \mathcal{U}$; hence the infimum of $J(\bm{v})$ exists and we denote it by $$ j=\inf_{\bm{v}\in\mathcal{U}}J(\bm{v}), $$ and there is a minimizing sequence $\{\bm{v}_n\}\subset\mathcal{U}$ such that $$ \lim_{n\rightarrow \infty}J(\bm{v}_n)=j. $$ This fact, together with $\frac{1}{2}\|\bm{v}_n\|^2_{L^2(0,T; \bm{L}^2_{div}(\Omega))}\leq J(\bm{v}_n)$, implies that $\{\bm{v}_n\}$ is bounded in $L^2(0,T;\bm{L}^2_{div}(\Omega))$. Hence, there exists a subsequence, still denoted by $\{\bm{v}_n\}$, that converges weakly to $\bm{u}$ in $L^2(0,T; \bm{L}^2_{div}(\Omega))$. It follows from Lemma \ref{wlsc} that $J$ is weakly lower semi-continuous, and we thus have $$ J(\bm{u})\leq \underset{n\rightarrow \infty}{\lim\inf} J(\bm{v}_n)=j. $$ Since $\bm{u}\in\mathcal{U}$, we must have $J(\bm{u})=j$, and $\bm{u}$ is therefore an optimal control.
\end{proof} We note that the uniqueness of the optimal control $\bm{u}$ cannot be guaranteed, and only locally optimal solutions can be expected, because the objective functional $J$ is nonconvex due to the nonlinear dependence of the state $y$ on the control $\bm{v}$. \subsection{First-order Optimality Conditions} Let $DJ(\bm{v})$ be the first-order differential of $J$ at $\bm{v}$ and $\bm{u}$ an optimal control of (BCP). It is clear that the first-order optimality condition of (BCP) reads \begin{equation*} DJ(\bm{u})=0. \end{equation*} In the rest of this subsection, we discuss the computation of $DJ(\bm{v})$, which will play an important role in subsequent sections. To compute $DJ(\bm{v})$, we employ a formal perturbation analysis as in \cite{glowinski2008exact}. First, let $\delta \bm{v}\in \mathcal{U}$ be a perturbation of $\bm{v}\in \mathcal{U}$; we clearly have \begin{equation}\label{Dj and delta j} \delta J(\bm{v})=\iint_{Q}DJ(\bm{v})\cdot\delta \bm{v} dxdt, \end{equation} and also \begin{eqnarray}{\label{def_delta_j}} \begin{aligned} &\delta J(\bm{v})=\iint_{Q}\bm{v}\cdot\delta \bm{v} dxdt+\alpha_1\iint_{Q}(y-y_d)\delta y dxdt+\alpha_2\int_\Omega(y(T)-y_T)\delta y(T)dx, \end{aligned} \end{eqnarray} in which $\delta y$ is the solution of \begin{flalign}\label{perturbation_state_eqn} &\left\{ \begin{aligned} \frac{\partial \delta y}{\partial t}-\nu \nabla^2\delta y+\delta \bm{v}\cdot \nabla y+\bm{v}\cdot\nabla\delta y+a_0\delta y&=0\quad \text{in}\quad Q, \\ \delta y&=0\quad \text{on}\quad \Sigma,\\ \delta y(0)&=0. \end{aligned} \right. \end{flalign} Consider now a function $p$ defined over $\overline{Q}$ (the closure of $Q$), and assume that $p$ is a differentiable function of $x$ and $t$.
Multiplying both sides of the first equation in (\ref{perturbation_state_eqn}) by $p$ and integrating over $Q$, we obtain \begin{equation*} \iint_{Q}p\frac{\partial }{\partial t}\delta ydxdt-\nu\iint_{Q}p \nabla^2\delta ydxdt+\iint_Q\delta \bm{v}\cdot \nabla ypdxdt+\iint_Q\bm{v}\cdot\nabla\delta ypdxdt+a_0\iint_{Q}p\delta ydxdt=0. \end{equation*} Integration by parts in time and application of Green's formula in space yield \begin{eqnarray}{\label{weakform_p}} \begin{aligned} \int_\Omega p(T)\delta y(T)dx-\int_\Omega p(0)\delta y(0)dx+\iint_{Q}\Big[ -\frac{\partial p}{\partial t} -\nu\nabla^2p-\bm{v}\cdot\nabla p+a_0p\Big]\delta ydxdt\\ +\iint_Q\delta \bm{v}\cdot \nabla ypdxdt-\nu\iint_{\Sigma}(\frac{\partial\delta y}{\partial \bm{n}}p-\frac{\partial p}{\partial \bm{n}}\delta y)dxdt+\iint_\Sigma p\delta y\bm{v}\cdot \bm{n}dxdt=0, \end{aligned} \end{eqnarray} where $\bm{n}$ is the unit outward normal vector at $\Gamma$. Next, let us assume that the function $p$ is the solution to the following adjoint system \begin{flalign}\label{adjoint_equation} &\qquad \left\{ \begin{aligned} -\frac{\partial p}{\partial t} -\nu\nabla^2p-\bm{v}\cdot\nabla p +a_0p&=\alpha_1(y-y_d)~ \text{in}~ Q, \\ p&=0~\qquad\quad\quad\text{on}~ \Sigma,\\ p(T)&=\alpha_2(y(T)-y_T). \end{aligned} \right. \end{flalign} It follows from (\ref{def_delta_j}), (\ref{perturbation_state_eqn}), (\ref{weakform_p}) and (\ref{adjoint_equation}) that \begin{equation*} \delta J(\bm{v})=\iint_{Q}(\bm{v}-p\nabla y)\cdot\delta \bm{v} dxdt, \end{equation*} which, together with (\ref{Dj and delta j}), implies that \begin{equation}\label{gradient} \left\{ \begin{aligned} &DJ(\bm{v})\in \mathcal{U},\\ &\iint_QDJ(\bm{v})\cdot \bm{z}dxdt=\iint_Q(\bm{v}-p\nabla y)\cdot \bm{z}dxdt,\forall \bm{z}\in \mathcal{U}. \end{aligned} \right. \end{equation} From the discussion above, the first-order optimality condition of (BCP) can be summarized as follows. \begin{theorem} Let $\bm{u}\in \mathcal{U}$ be a solution of (BCP).
Then, it satisfies the following optimality condition \begin{equation*} \iint_Q(\bm{u}-p\nabla y)\cdot \bm{z}dxdt=0,\forall \bm{z}\in \mathcal{U}, \end{equation*} where $y$ and $p$ are obtained from $\bm{u}$ via the solutions of the following two parabolic equations: \begin{flalign*}\tag{state equation} &\quad\qquad\qquad\qquad\left\{ \begin{aligned} \frac{\partial y}{\partial t}-\nu \nabla^2y+\bm{u}\cdot \nabla y+a_0y&=f\quad \text{in}~ Q, \\ y&=g\quad \text{on}~\Sigma,\\ y(0)&=\phi, \end{aligned} \right.& \end{flalign*} and \begin{flalign*}\tag{adjoint equation} &\qquad\qquad\qquad\left\{ \begin{aligned} -\frac{\partial p}{\partial t} -\nu\nabla^2p-\bm{u}\cdot\nabla p +a_0p&=\alpha_1(y-y_d)\quad \text{in}~ Q, \\ p&=0 \quad\qquad\qquad\text{on}~\Sigma,\\ p(T)&=\alpha_2(y(T)-y_T). \end{aligned} \right.& \end{flalign*} \end{theorem} \section{An Implementable Nested Conjugate Gradient Method}\label{se:cg} In this section, we discuss the application of a CG strategy to solve (BCP). In particular, we elaborate on the computation of the gradient and the stepsize at each CG iteration, and thus obtain an easily implementable algorithm. \subsection{A Generic Conjugate Gradient Method for (BCP)} Conceptually, applying the CG method to (BCP), we readily obtain the following algorithm: \begin{enumerate} \item[\textbf{(a)}] Given $\bm{u}^0\in \mathcal{U}$. \item [\textbf{(b)}] Compute $\bm{g}^0=DJ(\bm{u}^0)$. If $DJ(\bm{u}^0)=0$, then $\bm{u}=\bm{u}^0$; otherwise set $\bm{w}^0=\bm{g}^0$. \item[]\noindent For $k\geq 0$, $\bm{u}^k,\bm{g}^k$ and $\bm{w}^k$ being known, the last two different from $\bm{0}$, one computes $\bm{u}^{k+1}, \bm{g}^{k+1}$ and $\bm{w}^{k+1}$ as follows: \item[\textbf{(c)}] Compute the stepsize $\rho_k$ by solving the following optimization problem \begin{flalign}\label{op_step} &\left\{ \begin{aligned} & \rho_k\in \mathbb{R}, \\ &J(\bm{u}^k-\rho_k\bm{w}^k)\leq J(\bm{u}^k-\rho \bm{w}^k), \forall \rho\in \mathbb{R}. \end{aligned} \right.
\end{flalign} \item[\textbf{(d)}] Update $\bm{u}^{k+1}$ and $\bm{g}^{k+1}$, respectively, by $$\bm{u}^{k+1}=\bm{u}^k-\rho_k \bm{w}^k,$$ and $$\bm{g}^{k+1}=DJ(\bm{u}^{k+1}).$$ \item[] If $DJ(\bm{u}^{k+1})=0$, take $\bm{u}=\bm{u}^{k+1}$; otherwise, \item[\textbf{(e)}] Compute $$\beta_k=\frac{\iint_{Q}|\bm{g}^{k+1}|^2dxdt}{\iint_{Q}|\bm{g}^k|^2dxdt},$$ and then update $$\bm{w}^{k+1}=\bm{g}^{k+1}+\beta_k \bm{w}^k.$$ \item[] Do $k+1\rightarrow k$ and return to (\textbf{c}). \end{enumerate} The above iterative method looks simple, but in practice the implementation of the CG method (\textbf{a})--(\textbf{e}) for the solution of (BCP) is nontrivial. In particular, it is numerically challenging to compute the differential $DJ(\bm{v})$ for $\bm{v}\in\mathcal{U}$ and the stepsize $\rho_k$, as illustrated below. We discuss how to address these two issues in the rest of this section. \subsection{Computation of $DJ(\bm{v})$}\label{com_gra} It is clear that the implementation of the generic CG method (\textbf{a})--(\textbf{e}) for the solution of (BCP) requires the knowledge of $DJ(\bm{v})$ for various $\bm{v}\in \mathcal{U}$, and this has been conceptually provided in (\ref{gradient}). However, it is numerically challenging to compute $DJ(\bm{v})$ by (\ref{gradient}) due to the restriction $\nabla\cdot DJ(\bm{v})=0$, which ensures that all iterates $\bm{u}^k$ of the CG method satisfy the divergence-free constraint $\nabla\cdot \bm{u}^k=0$. In this subsection, we show that equation (\ref{gradient}) can be reformulated as a saddle point problem by introducing a Lagrange multiplier associated with the constraint $\nabla\cdot DJ(\bm{v})=0$. Then, a preconditioned CG method is proposed to solve this saddle point problem.
We first note that equation (\ref{gradient}) can be equivalently reformulated as \begin{equation}\label{gradient_e} \left\{ \begin{aligned} &DJ(\bm{v})(t)\in \mathbb{S},\\ &\int_\Omega DJ(\bm{v})(t)\cdot \bm{z}dx=\int_\Omega(\bm{v}(t)-p(t)\nabla y(t))\cdot \bm{z}dx,\forall \bm{z}\in \mathbb{S}, \end{aligned} \right. \end{equation} where \begin{equation*} \mathbb{S}=\{\bm{z}|\bm{z}\in [L^2(\Omega)]^d, \nabla\cdot\bm{z}=0\}. \end{equation*} Clearly, problem (\ref{gradient_e}) is a particular case of \begin{equation}\label{gradient_e2} \left\{ \begin{aligned} &\bm{g}\in \mathbb{S},\\ &\int_\Omega \bm{g}\cdot \bm{z}dx=\int_\Omega\bm{f}\cdot \bm{z}dx,\forall \bm{z}\in \mathbb{S}, \end{aligned} \right. \end{equation} with $\bm{f}$ given in $[L^2(\Omega)]^d$. Introducing a Lagrange multiplier $\lambda\in H_0^1(\Omega)$ associated with the constraint $\nabla\cdot\bm{z}=0$, it is clear that problem (\ref{gradient_e2}) is equivalent to the following saddle point problem \begin{equation}\label{gradient_e3} \left\{ \begin{aligned} &(\bm{g},\lambda)\in [L^2(\Omega)]^d\times H_0^1(\Omega),\\ &\int_\Omega \bm{g}\cdot \bm{z}dx=\int_\Omega\bm{f}\cdot \bm{z}dx+\int_\Omega\lambda\nabla\cdot \bm{z}dx,\forall \bm{z}\in [L^2(\Omega)]^d,\\ &\int_\Omega\nabla\cdot \bm{g}qdx=0,\forall q\in H_0^1(\Omega), \end{aligned} \right. \end{equation} which is actually a Stokes-type problem. In order to solve problem (\ref{gradient_e3}), we advocate a CG method inspired by \cite{glowinski2003, glowinski2015}. For this purpose, one has to specify the inner product to be used over $H_0^1(\Omega)$. As discussed in \cite{glowinski2003}, the usual $L^2$-inner product, namely, $\{q,q'\}\rightarrow\int_\Omega qq'dx$, leads to a CG method with poor convergence properties.
Indeed, using arguments similar to those in \cite{glowinski1992,glowinski2003}, we can show that the saddle point problem (\ref{gradient_e3}) can be reformulated as a linear variational problem in terms of the Lagrange multiplier $\lambda$. The corresponding coefficient matrix after space discretization with mesh size $h$ has a condition number of the order of $h^{-2}$; the problem is thus ill-conditioned, especially for small $h$, which makes the CG method converge fairly slowly. Hence, preconditioning is necessary for solving problem (\ref{gradient_e3}). As suggested in \cite{glowinski2003}, we choose $-\nabla\cdot\nabla$ as a preconditioner for problem (\ref{gradient_e3}), and the corresponding preconditioned CG method operates in the space $H_0^1(\Omega)$ equipped with the inner product $\{q,q'\}\rightarrow\int_\Omega\nabla q\cdot\nabla q'dx$ and the associated norm $\|q\|_{H_0^1(\Omega)}=(\int_\Omega|\nabla q|^2dx)^{1/2}, \forall q,q'\in H_0^1(\Omega)$. The resulting algorithm reads as follows: \begin{enumerate} \item [\textbf{G1}] Choose $\lambda^0\in H_0^1(\Omega)$. \item [\textbf{G2}] Solve \begin{equation*} \left\{ \begin{aligned} &\bm{g}^0\in [L^2(\Omega)]^d,\\ &\int_\Omega \bm{g}^0\cdot \bm{z}dx=\int_\Omega\bm{f}\cdot \bm{z}dx+\int_\Omega\lambda^0\nabla\cdot \bm{z}dx,\forall \bm{z}\in [L^2(\Omega)]^d, \end{aligned} \right. \end{equation*} and \begin{equation*} \left\{ \begin{aligned} &r^0\in H_0^1(\Omega),\\ &\int_\Omega \nabla r^0\cdot \nabla qdx=\int_\Omega\nabla\cdot \bm{g}^0qdx,\forall q\in H_0^1(\Omega). \end{aligned} \right. \end{equation*} \smallskip If $\frac{\int_\Omega|\nabla r^0|^2dx}{\max\{1,\int_\Omega|\nabla \lambda^0|^2dx\}}\leq tol_1$, take $\lambda=\lambda^0$ and $\bm{g}=\bm{g}^0$; otherwise set $w^0=r^0$.
For $k\geq 0$, $\lambda^k,\bm{g}^k, r^k$ and $w^k$ being known with the last two different from 0, we compute $\lambda^{k+1},\bm{g}^{k+1}, r^{k+1}$ and if necessary $w^{k+1}$, as follows: \smallskip \item[\textbf{G3}] Solve \begin{equation*} \left\{ \begin{aligned} &\bar{\bm{g}}^k\in [L^2(\Omega)]^d,\\ &\int_\Omega \bar{\bm{g}}^k\cdot \bm{z}dx=\int_\Omega w^k\nabla\cdot \bm{z}dx,\forall \bm{z}\in [L^2(\Omega)]^d, \end{aligned} \right. \end{equation*} and \begin{equation*} \left\{ \begin{aligned} &\bar{r}^k\in H_0^1(\Omega),\\ &\int_\Omega \nabla \bar{r}^k\cdot \nabla qdx=\int_\Omega\nabla\cdot \bar{\bm{g}}^kqdx,\forall q\in H_0^1(\Omega), \end{aligned} \right. \end{equation*} and compute the stepsize via $$ \eta_k=\frac{\int_\Omega|\nabla r^k|^2dx}{\int_\Omega\nabla\bar{r}^k\cdot\nabla w^kdx}. $$ \item[\textbf{G4}] Update $\lambda^k, \bm{g}^k$ and $r^k$ via $$\lambda^{k+1}=\lambda^k-\eta_kw^k,\bm{g}^{k+1}=\bm{g}^k-\eta_k \bar{\bm{g}}^k,~\text{and}~r^{k+1}=r^k-\eta_k \bar{r}^k.$$ \smallskip If $\frac{\int_\Omega|\nabla r^{k+1}|^2dx}{\max\{1,\int_\Omega|\nabla r^0|^2dx\}}\leq tol_1$, take $\lambda=\lambda^{k+1}$ and $\bm{g}=\bm{g}^{k+1}$; otherwise, \item[\textbf{G5}] Compute $$\gamma_k=\frac{\int_\Omega|\nabla r^{k+1}|^2dx}{\int_\Omega|\nabla r^k|^2dx},$$ and update $w^k$ via $$w^{k+1}=r^{k+1}+\gamma_k w^{k}.$$ Do $k+1\rightarrow k$ and return to \textbf{G3}. \end{enumerate} Clearly, one only needs to solve two simple linear equations at each iteration of the preconditioned CG algorithm (\textbf{G1})-(\textbf{G5}), which implies that the algorithm is easy and cheap to implement. Moreover, due to the well-chosen preconditioner $-\nabla\cdot\nabla$, one can expect the above preconditioned CG algorithm to have a fast convergence; this will be validated by the numerical experiments reported in Section \ref{se:numerical}. 
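To make the algebraic skeleton of (\textbf{G1})--(\textbf{G5}) concrete, the following Python sketch (our own illustration, not part of the paper's finite element implementation) runs the same iteration in a generic discrete setting: a full-rank matrix $B$ stands in for the discrete divergence operator, the mass matrix $M$ is taken as the identity for simplicity, and a tridiagonal symmetric positive definite matrix $A$ plays the role of the preconditioner $-\nabla\cdot\nabla$; all names and dimensions below are assumptions made for the sketch.

```python
import numpy as np

def precond_cg_projection(Msolve, Asolve, B, f, tol=1e-12, maxit=200):
    """Discrete sketch of algorithm (G1)-(G5): given f, compute the projection
    g of f onto ker(B) (the discrete divergence-free fields) by a CG iteration
    on the Lagrange multiplier lam, preconditioned by the SPD operator A.
    Msolve and Asolve apply M^{-1} and A^{-1}; B is the discrete divergence."""
    lam = np.zeros(B.shape[0])          # (G1) initial multiplier
    g = f + Msolve(B.T @ lam)           # (G2) M g^0 = M f + B^T lam^0
    r = Asolve(B @ g)                   #      A r^0 = B g^0
    rho = r @ (B @ g)                   # r^T A r, using the invariant A r = B g
    rho0 = max(rho, 1.0)                # normalization, as in the stopping tests
    w = r.copy()
    for _ in range(maxit):
        if rho / rho0 <= tol:
            break
        gbar = Msolve(B.T @ w)          # (G3) M gbar = B^T w
        Sw = B @ gbar
        rbar = Asolve(Sw)               #      A rbar = B gbar
        eta = rho / (Sw @ w)            #      stepsize (r,r)_A / (rbar,w)_A
        lam -= eta * w                  # (G4) the updates preserve A r = B g
        g = g - eta * gbar
        r = r - eta * rbar
        rho_new = r @ (B @ g)
        gamma = rho_new / rho           # (G5) conjugation parameter
        w = r + gamma * w
        rho = rho_new
    return g, lam

# small synthetic instance: random full-rank B, identity mass matrix,
# tridiagonal SPD matrix standing in for the Laplacian preconditioner
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 12))
A = 2.0 * np.eye(5) - np.eye(5, k=1) - np.eye(5, k=-1)
f = rng.standard_normal(12)
g, lam = precond_cg_projection(lambda x: x, lambda x: np.linalg.solve(A, x), B, f)
```

One can check that the returned $g$ agrees with the algebraic projection $f-B^T(BB^T)^{-1}Bf$; in the paper's finite element setting, $M$ is the velocity mass matrix and $A$ the stiffness matrix of $-\nabla\cdot\nabla$ on the (coarser) multiplier space, which is what makes each inner iteration cheap.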
\subsection{Computation of the Stepsize $\rho_k$}\label{com_step} Another crucial step in implementing the CG method \textbf{(a)}--\textbf{(e)} is the computation of the stepsize $\rho_k$. It is the solution of the optimization problem (\ref{op_step}), which is numerically expensive to solve exactly or to high accuracy. For instance, to solve (\ref{op_step}), one may consider the Newton method applied to the solution of $$ H_k'(\rho_k)=0, $$ where $$H_k(\rho)=J(\bm{u}^k-\rho\bm{w}^k).$$ The Newton method requires the second-order derivative $H_k''(\rho)$, which can be computed via an iterated adjoint technique requiring the solution of \emph{four} parabolic problems per Newton iteration. Hence, the implementation of the Newton method is numerically expensive. The high computational load of solving (\ref{op_step}) motivates us to employ an inexact stepsize rule to determine an approximation of $\rho_k$. Here, we advocate the following procedure to compute an approximate stepsize $\hat{\rho}_k$. For a given $\bm{w}^k\in\mathcal{U}$, we replace the state $y=S(\bm{u}^k-\rho\bm{w}^k)$ in the objective functional $J(\bm{u}^k-\rho\bm{w}^k)$ by $$ S(\bm{u}^k)-\rho S'(\bm{u}^k)\bm{w}^k, $$ which is the linearization of the mapping $\rho \mapsto S(\bm{u}^k - \rho \bm{w}^k)$ at $\rho= 0$.
We thus obtain the following quadratic approximation of $H_k(\rho)$: \begin{equation}\label{q_rho} Q_k(\rho):=\frac{1}{2}\iint_Q|\bm{u}^k-\rho \bm{w}^k|^2dxdt+\frac{\alpha_1}{2}\iint_Q|y^k-\rho z^k-y_d|^2dxdt+\frac{\alpha_2}{2}\int_\Omega|y^k(T)-\rho z^k(T)-y_T|^2dx, \end{equation} where $y^k=S(\bm{u}^k)$ is the solution of the state equation (\ref{state_equation}) associated with $\bm{u}^k$, and $z^k=S'(\bm{u}^k)\bm{w}^k$ satisfies the following linear parabolic problem \begin{flalign}\label{linear_state} &\left\{ \begin{aligned} \frac{\partial z^k}{\partial t}-\nu \nabla^2 z^k+\bm{w}^k\cdot \nabla y^k +\bm{u}^k\cdot\nabla z^k+a_0 z^k&=0\quad \text{in}\quad Q, \\ z^k&=0\quad \text{on}\quad \Sigma,\\ z^k(0)&=0. \end{aligned} \right. \end{flalign} Then, it is easy to show that the equation $ Q_k'(\rho)=0 $ admits the unique solution \begin{equation}\label{step_size} \hat{\rho}_k =\frac{\iint_Q\bm{g}^k\cdot \bm{w}^k dxdt}{\iint_Q|\bm{w}^k|^2dxdt+ \alpha_1\iint_Q|z^k|^2dxdt+\alpha_2\int_\Omega|z^k(T)|^2dx}, \end{equation} and we take $\hat{\rho}_k$, which is clearly an approximation of $\rho_k$, as the stepsize in each CG iteration. Altogether, with the stepsize given by (\ref{step_size}), every iteration of the resulting CG algorithm requires solving only \emph{three} parabolic problems, namely, the state equation (\ref{state_equation}) forward in time and the associated adjoint equation (\ref{adjoint_equation}) backward in time for the computation of $\bm{g}^k$, and the linearized parabolic equation (\ref{linear_state}) forward in time for the stepsize $\hat{\rho}_k$. For comparison, if the Newton method is employed to compute the stepsize $\rho_k$ by solving (\ref{op_step}), at least \emph{six} parabolic problems need to be solved at each iteration of the CG method, which is much more expensive numerically.
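The closed-form stepsize (\ref{step_size}) can be checked quickly in a discrete setting. In the following illustrative Python snippet (our own sketch, not the paper's implementation), random vectors stand in for the space-time discretizations of $\bm{u}^k$, $\bm{w}^k$, $y^k-y_d$, $z^k$, $y^k(T)-y_T$ and $z^k(T)$, and the numerator of (\ref{step_size}) is expanded via the directional-derivative identity $\iint_Q\bm{g}^k\cdot\bm{w}^kdxdt=\iint_Q\bm{u}^k\cdot\bm{w}^kdxdt+\alpha_1\iint_Q(y^k-y_d)z^kdxdt+\alpha_2\int_\Omega(y^k(T)-y_T)z^k(T)dx$.

```python
import numpy as np

def inexact_stepsize(u, w, e, z, eT, zT, a1, a2):
    """Closed-form minimizer (step_size) of the quadratic model
    Q(rho) = 1/2|u - rho*w|^2 + a1/2|e - rho*z|^2 + a2/2|eT - rho*zT|^2,
    where e, eT stand for y^k - y_d and y^k(T) - y_T, and z, zT for
    z^k and z^k(T); the numerator is the directional derivative of J
    at u^k in the direction w^k, written out via the adjoint identity."""
    num = u @ w + a1 * (e @ z) + a2 * (eT @ zT)
    den = w @ w + a1 * (z @ z) + a2 * (zT @ zT)
    return num / den

# random stand-ins for the discretized fields (illustration only)
rng = np.random.default_rng(1)
u, w, e, z = (rng.standard_normal(40) for _ in range(4))
eT, zT = rng.standard_normal(10), rng.standard_normal(10)
a1, a2 = 1.0, 0.5

def Q(rho):
    # discrete analogue of the quadratic model Q_k in (q_rho)
    return (0.5 * np.sum((u - rho * w) ** 2)
            + 0.5 * a1 * np.sum((e - rho * z) ** 2)
            + 0.5 * a2 * np.sum((eT - rho * zT) ** 2))

rho_hat = inexact_stepsize(u, w, e, z, eT, zT, a1, a2)
```

Since $Q_k$ is a strictly convex quadratic in $\rho$, $\hat{\rho}_k$ is its unique minimizer; this is what lets one CG iteration get away with a single extra parabolic solve (for $z^k$) instead of repeated evaluations of $J$.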
\begin{remark} To find an appropriate stepsize, a natural idea is to employ a line search strategy, such as a backtracking strategy based on the Armijo--Goldstein condition or the Wolfe condition; see, e.g., \cite{nocedal2006}. It is worth noting that these line search strategies require evaluating $J(\bm{v})$ repeatedly, which is numerically expensive because every evaluation of $J(\bm{v})$ for a given $\bm{v}$ requires solving the state equation (\ref{state_equation}). Moreover, we have implemented the CG method for solving (BCP) with various line search strategies and observed from the numerical results that line search strategies always lead to tiny stepsizes, making the convergence of the CG method extremely slow. \end{remark} \subsection{A Nested CG Method for Solving (BCP)} Following Sections \ref{com_gra} and \ref{com_step}, we advocate the following nested CG method for solving (BCP): \begin{enumerate} \item[\textbf{I.}] Given $\bm{u}^0\in \mathcal{U}$. \item[\textbf{II.}] Compute $y^0$ and $p^0$ by solving the state equation (\ref{state_equation}) and the adjoint equation (\ref{adjoint_equation}) corresponding to $\bm{u}^0$. Then, for a.e. $t \in(0, T)$, solve \begin{equation*} \left\{ \begin{aligned} &\bm{g}^0(t)\in \mathbb{S},\\ &\int_\Omega \bm{g}^0(t)\cdot \bm{z}dx=\int_\Omega(\bm{u}^0(t)-p^0(t)\nabla y^0(t))\cdot \bm{z}dx,\forall \bm{z}\in \mathbb{S}, \end{aligned} \right. \end{equation*} by the preconditioned CG algorithm (\textbf{G1})--(\textbf{G5}); and set $\bm{w}^0=\bm{g}^0.$ \medskip \noindent For $k\geq 0$, $\bm{u}^k, \bm{g}^k$ and $\bm{w}^k$ being known, the last two different from $\bm{0}$, one computes $\bm{u}^{k+1}, \bm{g}^{k+1}$ and $\bm{w}^{k+1}$ as follows: \medskip \item[\textbf{III.}] Compute the stepsize $\hat{\rho}_k$ by (\ref{step_size}).
\item[\textbf{IV.}] Update $\bm{u}^{k+1}$ by $$\bm{u}^{k+1}=\bm{u}^k-\hat{\rho}_k\bm{w}^k.$$ Compute $y^{k+1}$ and $p^{k+1}$ by solving the state equation (\ref{state_equation}) and the adjoint equation (\ref{adjoint_equation}) corresponding to $\bm{u}^{k+1}$; and for a.e. $t \in(0, T)$, solve \begin{equation*} \left\{ \begin{aligned} &\bm{g}^{k+1}(t)\in \mathbb{S},\\ &\int_\Omega \bm{g}^{k+1}(t)\cdot \bm{z}dx=\int_\Omega(\bm{u}^{k+1}(t)-p^{k+1}(t)\nabla y^{k+1}(t))\cdot \bm{z}dx,\forall \bm{z}\in \mathbb{S}, \end{aligned} \right. \end{equation*} by the preconditioned CG algorithm (\textbf{G1})--(\textbf{G5}). \medskip \noindent If $\frac{\iint_Q|\bm{g}^{k+1}|^2dxdt}{\iint_Q|\bm{g}^{0}|^2dxdt}\leq tol$, take $\bm{u} = \bm{u}^{k+1}$; else \medskip \item[\textbf{V.}] Compute $$\beta_k=\frac{\iint_Q|\bm{g}^{k+1}|^2dxdt}{\iint_Q|\bm{g}^{k}|^2dxdt},~\text{and}~\bm{w}^{k+1} = \bm{g}^{k+1} + \beta_k\bm{w}^k.$$ Do $k+1\rightarrow k$ and return to \textbf{III}. \end{enumerate} \section{Space and time discretizations}\label{se:discretization} In this section, we discuss first the numerical discretization of the bilinear optimal control problem (BCP). We achieve the time discretization by a semi-implicit finite difference method and the space discretization by a piecewise linear finite element method. Then, we discuss an implementable nested CG method for solving the fully discrete bilinear optimal control problem. \subsection{Time Discretization of (BCP)} First, we define a time discretization step $\Delta t$ by $\Delta t= T/N$, with $N$ a positive integer. 
Then, we approximate the control space $\mathcal{U}=L^2(0, T;\mathbb{S})$ by $ \mathcal{U}^{\Delta t}:=(\mathbb{S})^N; $ and equip $\mathcal{U}^{\Delta t}$ with the following inner product $$ (\bm{v},\bm{w})_{\Delta t} = \Delta t \sum^N_{n=1}\int_\Omega \bm{v}_n\cdot \bm{w}_ndx, \quad\forall \bm{v}= \{\bm{v}_n\}^N_{n=1}, \bm{w} = \{\bm{w}_n\}^N_{n=1} \in\mathcal{U}^{\Delta t}, $$ and the norm $$ \|\bm{v}\|_{\Delta t} = \left(\Delta t \sum^N_{n=1}\int_\Omega |\bm{v}_n|^2dx\right)^{\frac{1}{2}}, \quad\forall \bm{v}= \{\bm{v}_n\}^N_{n=1} \in\mathcal{U}^{\Delta t}. $$ Then, (BCP) is approximated by the following semi-discrete bilinear control problem (BCP)$^{\Delta t}$: \begin{flalign*} &\hspace{-4.5cm}\text{(BCP)}^{\Delta t}\qquad\qquad\qquad\qquad\left\{ \begin{aligned} & \bm{u}^{\Delta t}\in \mathcal{U}^{\Delta t}, \\ &J^{\Delta t}(\bm{u}^{\Delta t})\leq J^{\Delta t}(\bm{v}),\forall \bm{v}=\{\bm{v}_n\}_{n=1}^N\in\mathcal{U}^{\Delta t}, \end{aligned} \right. \end{flalign*} where the cost functional $J^{\Delta t}$ is defined by \begin{equation*} J^{\Delta t}(\bm{v})=\frac{1}{2}\Delta t \sum^N_{n=1}\int_\Omega |\bm{v}_n|^2dx+\frac{\alpha_1}{2}\Delta t \sum^N_{n=1}\int_\Omega |y_n-y_d^n|^2dx+\frac{\alpha_2}{2}\int_\Omega|y_N-y_T|^2dx, \end{equation*} with $\{y_n\}^N_{n=1}$ the solution of the following semi-discrete state equation: $y_0=\phi$; then for $n=1,\ldots,N$, with $y_{n-1}$ being known, we obtain $y_n$ from the solution of the following linear elliptic problem: \begin{flalign}\label{state_semidis} &\left\{ \begin{aligned} \frac{{y}_n-{y}_{n-1}}{\Delta t}-\nu \nabla^2{y}_n+\bm{v}_n\cdot\nabla{y}_{n-1}+a_0{y}_{n-1}&= f_n\quad \text{in}\quad \Omega, \\ y_n&=g_n\quad \text{on}\quad \Gamma. \end{aligned} \right. \end{flalign} \begin{remark} For simplicity, we have chosen a one-step semi-implicit scheme to discretize system (\ref{state_equation}): the diffusion term is treated implicitly, while the advection and reaction terms are treated explicitly. This scheme is first-order accurate and reasonably robust, once combined with an appropriate space discretization.
The application of second-order accurate time discretization schemes to optimal control problems has been discussed in e.g., \cite{carthelglowinski1994}. \end{remark} \begin{remark} At each step of scheme (\ref{state_semidis}), we only need to solve a simple linear elliptic problem to obtain $y_n$ from $y_{n-1}$, and there is no particular difficulty in solving such a problem. \end{remark} The existence of a solution to the semi-discrete bilinear optimal control problem (BCP)$^{\Delta t}$ can be proved in a similar way as what we have done for the continuous case. Let $\bm{u}^{\Delta t}$ be a solution of (BCP)$^{\Delta t}$, then it verifies the following first-order optimality condition: \begin{equation*} DJ^{\Delta t}(\bm{u}^{\Delta t}) = 0, \end{equation*} where $DJ^{\Delta t}(\bm{v})$ is the first-order differential of the functional $J^{\Delta t}$ at $\bm{v}\in\mathcal{U}^{\Delta t}$. Proceeding as in the continuous case, we can show that $DJ^{\Delta t}(\bm{v})=\{\bm{g}_n\}_{n=1}^N\in\mathcal{U}^{\Delta t}$ where \begin{equation*} \left\{ \begin{aligned} &\bm{g}_n\in \mathbb{S},\\ &\int_\Omega \bm{g}_n\cdot \bm{w}dx=\int_\Omega(\bm{v}_n-p_n\nabla y_{n-1})\cdot \bm{w}dx,\forall\bm{w}\in \mathbb{S}, \end{aligned} \right. \end{equation*} and the vector-valued function $\{p_n\}^N_{n=1}$ is the solution of the semi-discrete adjoint system below: \begin{equation*} {p}_{N+1}=\alpha_2({y}_N-y_T); \end{equation*} for $n=N$, solve \begin{flalign*} \qquad \left\{ \begin{aligned} \frac{{p}_N-{p}_{N+1}}{\Delta t}-\nu \nabla^2{p}_N&= \alpha_1({y}_N-y_d^N)&\quad \text{in}\quad \Omega, \\ p_N&=0&\quad \text{on}\quad \Gamma, \end{aligned} \right. \end{flalign*} and for $n=N-1,\cdots,1,$ solve \begin{flalign*} \qquad \left\{ \begin{aligned} \frac{{p}_n-{p}_{n+1}}{\Delta t}-\nu\nabla^2{p}_n-\bm{v}_{n+1}\cdot\nabla{p}_{n+1}+a_0{p}_{n+1}&= \alpha_1({y}_n-y_d^n)&\quad \text{in}\quad \Omega, \\ p_n&=0&\quad \text{on}\quad \Gamma. \end{aligned} \right. 
\end{flalign*} \subsection{Space Discretization of (BCP)$^{\Delta t}$} In this subsection, we discuss the space discretization of (BCP)$^{\Delta t}$, obtaining thus a full space-time discretization of (BCP). For simplicity, we suppose from now on that $\Omega$ is a polygonal domain of $\mathbb{R}^2$ (or has been approximated by a family of such domains). Let $\mathcal{T}_H$ be a classical triangulation of $\Omega$, with $H$ the largest length of the edges of the triangles of $\mathcal{T}_H$. From $\mathcal{T}_{H}$ we construct $\mathcal{T}_{h}$ with $h=H/2$ by joining the mid-points of the edges of the triangles of $\mathcal{T}_{H}$. We first consider the finite element space $V_h$ defined by \begin{equation*} V_h = \{\varphi_h| \varphi_h\in C^0(\bar{\Omega}); { \varphi_h\mid}_{\mathbb{T}}\in P_1, \forall\, {\mathbb{T}}\in\mathcal{T}_h\} \end{equation*} with $P_1$ the space of the polynomials of two variables of degree $\leq 1$. Two useful sub-spaces of $V_h$ are \begin{equation*} V_{0h} =\{\varphi_h| \varphi_h\in V_h, \varphi_h\mid_{\Gamma}=0\}:=V_h\cap H_0^1(\Omega), \end{equation*} and (assuming that $g(t)\in C^0(\Gamma)$) \begin{eqnarray*} V_{gh}(t) =\{\varphi_h| \varphi_h\in V_h, \varphi_h(Q)=g(Q,t), \forall\, Q ~\text{vertex of} ~\mathcal{T}_h~\text{located on}~\Gamma \}. \end{eqnarray*} In order to construct the discrete control space, we introduce first \begin{equation*} \Lambda_H = \{\varphi_H| \varphi_H\in C^0(\bar{\Omega}); { \varphi_H\mid}_{\mathbb{T}}\in P_1, \forall\, {\mathbb{T}}\in\mathcal{T}_H\},~\text{and}~\Lambda_{0H} =\{\varphi_H| \varphi_H\in \Lambda_H, \varphi_H\mid_{\Gamma}=0\}. \end{equation*} Then, the discrete control space $\mathcal{U}_h^{\Delta t}$ is defined by \begin{equation*} \mathcal{U}_h^{\Delta t}=(\mathbb{S}_h)^N,~\text{with}~\mathbb{S}_h=\{\bm{v}_h|\bm{v}_h\in V_h\times V_h,\int_\Omega \nabla\cdot\bm{v}_hq_Hdx\left(=-\int_\Omega\bm{v}_h\cdot\nabla q_Hdx\right)=0,\forall q_H\in \Lambda_{0H}\}. 
\end{equation*} With the above finite element spaces, we approximate (BCP) and (BCP)$^{\Delta t}$ by (BCP)$_h^{\Delta t}$ defined by \begin{flalign*} &\hspace{-4.2cm}\text{(BCP)}_h^{\Delta t}\qquad\qquad\qquad\qquad\qquad\qquad\left\{ \begin{aligned} & \bm{u}_h^{\Delta t}\in \mathcal{U}_h^{\Delta t}, \\ &J_h^{\Delta t}(\bm{u}_h^{\Delta t})\leq J_h^{\Delta t}(\bm{v}_h^{\Delta t}),\forall \bm{v}_h^{\Delta t}\in\mathcal{U}_h^{\Delta t}, \end{aligned} \right. \end{flalign*} where the fully discrete cost functional $J_h^{\Delta t}$ is defined by \begin{equation}\label{obj_fuldis} J_h^{\Delta t}(\bm{v}_h^{\Delta t})=\frac{1}{2}\Delta t \sum^N_{n=1}\int_\Omega |\bm{v}_{n,h}|^2dx+\frac{\alpha_1}{2}\Delta t \sum^N_{n=1}\int_\Omega |y_{n,h}-y_d^n|^2dx+\frac{\alpha_2}{2}\int_\Omega|y_{N,h}-y_T|^2dx \end{equation} with $\{y_{n,h}\}^N_{n=1}$ the solution of the following fully discrete state equation: $y_{0,h}=\phi_h\in V_h$, where $\phi_h$ verifies $$ \phi_h\in V_h, \forall\, h>0,~\text{and}~\lim_{h\rightarrow 0}\phi_h=\phi,~\text{in}~L^2(\Omega), $$ then, for $n=1,\ldots,N$, with $y_{n-1,h}$ being known, we obtain $y_{n,h}\in V_{gh}(n\Delta t)$ from the solution of the following linear variational problem: \begin{equation}\label{state_fuldis} \int_\Omega\frac{{y}_{n,h}-{y}_{n-1,h}}{\Delta t}\varphi dx+\nu \int_\Omega\nabla{y}_{n,h}\cdot\nabla\varphi dx+\int_\Omega\bm{v}_n\cdot\nabla{y}_{n-1,h}\varphi dx+\int_\Omega a_0{y}_{n-1,h}\varphi dx= \int_\Omega f_{n}\varphi dx,\forall \varphi\in V_{0h}. \end{equation} In the following discussion, the subscript $h$ in all variables will be omitted for simplicity. 
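To illustrate the structure of one step of the marching scheme (\ref{state_fuldis}), here is a one-dimensional finite-difference sketch (illustrative only: the paper uses piecewise linear finite elements in 2D, and all function and variable names below are our assumptions). Diffusion is implicit while advection and reaction are explicit, so each step requires a single linear solve with a fixed symmetric positive definite matrix:

```python
import numpy as np

def semi_implicit_step(y_prev, v, nu, a0, f, h, dt):
    """One step of the semi-implicit scheme: with y_{n-1} known, solve
        (y_n - y_{n-1})/dt - nu*y_n'' + v*y_{n-1}' + a0*y_{n-1} = f
    on (0,1) with homogeneous Dirichlet boundary conditions (a 1D
    finite-difference analogue of (state_semidis)).
    y_prev holds the m interior nodal values."""
    m = y_prev.size
    # three-point stencil for -d^2/dx^2 with Dirichlet conditions
    A = (2*np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    # explicit central-difference advection (boundary values are zero)
    yp = np.pad(y_prev, 1)
    dy = (yp[2:] - yp[:-2]) / (2*h)
    rhs = y_prev/dt - v*dy - a0*y_prev + f
    # only the diffusion operator appears on the implicit side
    return np.linalg.solve(np.eye(m)/dt + nu*A, rhs)
```

Since only the (control-independent) diffusion matrix appears on the left-hand side, its factorization can be reused for every time step and every CG iteration, which is one practical advantage of this semi-implicit treatment.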
In a similar way as in the continuous case, one can show that the first-order differential of $J_h^{\Delta t}$ at $\bm{v}\in\mathcal{U}_h^{\Delta t}$ is $DJ_h^{\Delta t}(\bm{v})=\{\bm{g}_n\}_{n=1}^N\in (\mathbb{S}_h)^N$ where \begin{equation}\label{gradient_ful} \left\{ \begin{aligned} &\bm{g}_n\in \mathbb{S}_h,\\ &\int_\Omega \bm{g}_n\cdot \bm{z}dx=\int_\Omega(\bm{v}_n-p_n\nabla y_{n-1})\cdot \bm{z}dx,\forall\bm{z}\in \mathbb{S}_h, \end{aligned} \right. \end{equation} and the vector-valued function $\{p_n\}^N_{n=1}$ is the solution of the following fully discrete adjoint system: \begin{equation}\label{ful_adjoint_1} {p}_{N+1}=\alpha_2({y}_N-y_T); \end{equation} for $n=N$, solve \begin{flalign}\label{ful_adjoint_2} \qquad \left\{ \begin{aligned} &p_N\in V_{0h},\\ &\int_\Omega\frac{{p}_N-{p}_{N+1}}{\Delta t}\varphi dx+\nu\int_\Omega \nabla{p}_N\cdot\nabla \varphi dx= \int_\Omega\alpha_1({y}_N-y_d^N)\varphi dx,\forall \varphi\in V_{0h}, \end{aligned} \right. \end{flalign} then, for $n=N-1,\cdots,1$, solve \begin{flalign}\label{ful_adjoint_3} \qquad \left\{ \begin{aligned} &p_n\in V_{0h},\\ &\int_\Omega\frac{{p}_n-{p}_{n+1}}{\Delta t}\varphi dx+\nu\int_\Omega\nabla{p}_n\cdot\nabla\varphi dx-\int_\Omega\bm{v}_{n+1}\cdot\nabla{p}_{n+1}\varphi dx\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad+a_0\int_\Omega{p}_{n+1}\varphi dx=\int_\Omega \alpha_1({y}_n-y_d^n)\varphi dx,\forall \varphi\in V_{0h}. \end{aligned} \right. \end{flalign} It is worth mentioning that the so-called discretize-then-optimize strategy is employed here: we first discretize (BCP), and the fully discrete adjoint system (\ref{ful_adjoint_1})--(\ref{ful_adjoint_3}) used to compute the gradient is then derived from the fully discrete cost functional (\ref{obj_fuldis}) and the fully discrete state equation (\ref{state_fuldis}).
This implies that the fully discrete state equation (\ref{state_fuldis}) and the fully discrete adjoint equation (\ref{ful_adjoint_1})--(\ref{ful_adjoint_3}) are strictly in duality. This fact guarantees that $-DJ_h^{\Delta t}(\bm{v})$ is a descent direction of the fully discrete bilinear optimal control problem (BCP)$_h^{\Delta t}$. \begin{remark} A natural alternative has been advocated in the literature: (i) Derive the adjoint equation to compute the first-order differential of the cost functional in a continuous setting; (ii) Discretize the state and adjoint state equations by certain numerical schemes; (iii) Use the resulting discrete analogs of $y$ and $p$ to compute a discretization of the differential of the cost functional. The main problem with this optimize-then-discretize approach is that it may not preserve a strict duality between the discrete state equation and the discrete adjoint equation. This fact implies in turn that the resulting discretization of the continuous gradient may not be a gradient of the discrete optimal control problem. As a consequence, the resulting algorithm is not a descent algorithm and divergence may take place (see \cite{GH1998} for a related discussion). \end{remark} \subsection{A Nested CG Method for Solving the Fully Discrete Problem (BCP)$_h^{\Delta t}$}\label{DCG} In this subsection, we propose a nested CG method for solving the fully discrete problem (BCP)$_h^{\Delta t}$. As discussed in Section \ref{se:cg}, the implementation of CG requires the knowledge of $DJ_h^{\Delta t}(\bm{v})$ and an appropriate stepsize. In the following discussion, we address these two issues by extending the results for the continuous case in Sections \ref{com_gra} and \ref{com_step} to the fully discrete settings; and derive the corresponding CG algorithm. First, it is clear that one can compute $DJ_h^{\Delta t}(\bm{v})$ via the solution of the $N$ linear variational problems encountered in (\ref{gradient_ful}). 
For this purpose, we introduce a Lagrange multiplier $\lambda\in \Lambda_{0H}$ associated with the divergence-free constraint, then problem (\ref{gradient_ful}) is equivalent to the following saddle point system \begin{equation}\label{fulgradient_e} \left\{ \begin{aligned} &(\bm{g}_n,\lambda)\in (V_h\times V_h)\times \Lambda_{0H},\\ &\int_\Omega \bm{g}_n\cdot \bm{z}dx=\int_\Omega (\bm{v}_n-p_n\nabla y_{n-1})\cdot \bm{z}dx+\int_\Omega\lambda\nabla\cdot \bm{z}dx,\forall \bm{z}\in V_h\times V_h,\\ &\int_\Omega\nabla\cdot\bm{g}_nqdx=0,\forall q\in \Lambda_{0H}. \end{aligned} \right. \end{equation} As discussed in Section \ref{com_gra}, problem (\ref{fulgradient_e}) can be solved by the following preconditioned CG algorithm, which is actually a discrete analogue of (\textbf{G1})--(\textbf{G5}). \begin{enumerate} \item [\textbf{DG1}] Choose $\lambda^0\in \Lambda_{0H}$. \item [\textbf{DG2}] Solve \begin{equation*} \left\{ \begin{aligned} &\bm{g}_n^0\in V_h\times V_h,\\ &\int_\Omega\bm{g}_n^0\cdot \bm{z}dx=\int_\Omega(\bm{v}_n-p_n\nabla y_{n-1})\cdot \bm{z}dx+\int_\Omega\lambda^0\nabla\cdot \bm{z}dx,\forall \bm{z}\in V_h\times V_h, \end{aligned} \right. \end{equation*} and \begin{equation*} \left\{ \begin{aligned} &r^0\in \Lambda_{0H},\\ &\int_\Omega \nabla r^0\cdot \nabla qdx=\int_\Omega\nabla\cdot \bm{g}_n^0qdx,\forall q\in \Lambda_{0H}. \end{aligned} \right. \end{equation*} \smallskip If $\frac{\int_\Omega|\nabla r^0|^2dx}{\max\{1,\int_\Omega|\nabla \lambda^0|^2dx\}}\leq tol_1$, take $\lambda=\lambda^0$ and $\bm{g}_n=\bm{g}_n^0$; otherwise set $w^0=r^0$. 
For $k\geq 0$, $\lambda^k,\bm{g}_n^k, r^k$ and $w^k$ being known with the last two different from 0, we define $\lambda^{k+1},\bm{g}_n^{k+1}, r^{k+1}$ and if necessary $w^{k+1}$, as follows: \smallskip \item[\textbf{DG3}] Solve \begin{equation*} \left\{ \begin{aligned} &\bar{\bm{g}}_n^k\in V_h\times V_h,\\ &\int_\Omega \bar{\bm{g}}_n^k\cdot \bm{z}dx=\int_\Omega w^k\nabla\cdot \bm{z}dx,\forall \bm{z}\in V_h\times V_h, \end{aligned} \right. \end{equation*} and \begin{equation*} \left\{ \begin{aligned} &\bar{r}^k\in \Lambda_{0H},\\ &\int_\Omega \nabla \bar{r}^k\cdot \nabla qdx=\int_\Omega\nabla\cdot \bar{\bm{g}}_n^kqdx,\forall q\in\Lambda_{0H}, \end{aligned} \right. \end{equation*} and compute $$ \eta_k=\frac{\int_\Omega|\nabla r^k|^2dx}{\int_\Omega\nabla\bar{r}^k\cdot\nabla w^kdx}. $$ \item[\textbf{DG4}] Update $\lambda^k,\bm{g}_n^k$ and $r^k$ via $$\lambda^{k+1}=\lambda^k-\eta_kw^k,\bm{g}_n^{k+1}=\bm{g}_n^k-\eta_k\bar{\bm{g}}_n^k,~\text{and}~r^{k+1}=r^k-\eta_k \bar{r}^k.$$ \smallskip If $\frac{\int_\Omega|\nabla r^{k+1}|^2dx}{\max\{1,\int_\Omega|\nabla r^0|^2dx\}}\leq tol_1$, take $\lambda=\lambda^{k+1}$ and $\bm{g}_n=\bm{g}_n^{k+1}$; otherwise, \item[\textbf{DG5}] Compute $$\gamma_k=\frac{\int_\Omega|\nabla r^{k+1}|^2dx}{\int_\Omega|\nabla r^k|^2dx},$$ and update $w^k$ via $$w^{k+1}=r^{k+1}+\gamma_k w^{k}.$$ Do $k+1\rightarrow k$ and return to \textbf{DG3}. 
\end{enumerate} To find an appropriate stepsize in the CG iteration for the solution of (BCP)$_h^{\Delta t}$, we note that, for any $\{\bm{w}_n\}_{n=1}^N\in (\mathbb{S}_h)^N$, the fully discrete analogue of $Q_k(\rho)$ in (\ref{q_rho}) reads as $$ Q_h^{\Delta t}(\rho)=\frac{1}{2}\Delta t \sum^N_{n=1}\int_\Omega |\bm{u}_n-\rho\bm{w}_n|^2dx+\frac{\alpha_1}{2}\Delta t \sum^N_{n=1}\int_\Omega |y_{n}-\rho z_{n}-y_d^n|^2dx+\frac{\alpha_2}{2}\int_\Omega|y_{N}-\rho z_{N}-y_T|^2dx, $$ where the vector-valued function $\{z_n\}^N_{n=1}$ is obtained as follows: $z_0=0$; then for $n=1,\ldots,N$, with $z_{n-1}$ being known, $z_n$ is obtained from the solution of the linear variational problem $$ \left\{ \begin{aligned} &z_n\in V_{0h},\\ &\int_\Omega\frac{{z}_n-{z}_{n-1}}{\Delta t}\varphi dx+\nu\int_\Omega \nabla{z}_n\cdot\nabla\varphi dx+\int_\Omega\bm{w}_n\cdot\nabla y_n\varphi dx\\ &\qquad\qquad\qquad\qquad\qquad+\int_\Omega\bm{u}_n\cdot\nabla{z}_{n-1}\varphi dx+a_0\int_\Omega{z}_{n-1}\varphi dx= 0,\forall\varphi\in V_{0h}. \end{aligned} \right. $$ As discussed in Section \ref{com_step} for the continuous case, we take the unique solution of ${Q_h^{\Delta t}}'(\rho)=0$ as the stepsize in each CG iteration, that is \begin{equation}\label{step_ful} \hat{\rho}_h^{\Delta t} =\frac{\Delta t\sum_{n=1}^{N}\int_\Omega\bm{g}_n\cdot \bm{w}_n dx}{\Delta t\sum_{n=1}^{N}\int_\Omega|\bm{w}_n|^2dx+ \alpha_1\Delta t\sum_{n=1}^{N}\int_\Omega|z_n|^2dx+\alpha_2\int_\Omega|z_N|^2dx}. \end{equation} Finally, with the above preparations, we propose the following nested CG algorithm for the solution of the fully discrete control problem (BCP)$_h^{\Delta t}$. \begin{enumerate} \item[\textbf{DI.}] Given $\bm{u}^0:=\{\bm{u}_n^0\}_{n=1}^N\in (\mathbb{S}_h)^N$.
\item[\textbf{DII.}] Compute $\{y_n^0\}_{n=0}^N$ and $\{p^0_n\}_{n=1}^{N+1}$ by solving the fully discrete state equation (\ref{state_fuldis}) and the fully discrete adjoint equation (\ref{ful_adjoint_1})--(\ref{ful_adjoint_3}) corresponding to $\bm{u}^0$. Then, for $n=1,\cdots, N$ solve \begin{equation*} \left\{ \begin{aligned} &\bm{g}_n^0\in \mathbb{S}_h,\\ &\int_\Omega \bm{g}_n^0\cdot \bm{z}dx=\int_\Omega(\bm{u}_n^0-p_n^0\nabla y_{n-1}^0)\cdot \bm{z}dx,\forall \bm{z}\in \mathbb{S}_h, \end{aligned} \right. \end{equation*} by the preconditioned CG algorithm (\textbf{DG1})--(\textbf{DG5}), and set $\bm{w}^0_n=\bm{g}_n^0.$ \medskip \noindent For $k\geq 0$, $\bm{u}^k, \bm{g}^k$ and $\bm{w}^k$ being known, the last two different from $\bm{0}$, one computes $\bm{u}^{k+1}, \bm{g}^{k+1}$ and $\bm{w}^{k+1}$ as follows: \medskip \item[\textbf{DIII.}] Compute the stepsize $\hat{\rho}_k$ by (\ref{step_ful}). \item[\textbf{DIV.}] Update $\bm{u}^{k+1}$ by $$\bm{u}^{k+1}=\bm{u}^k-\hat{\rho}_k\bm{w}^k.$$ Compute $\{y_n^{k+1}\}_{n=0}^N$ and $\{p_n^{k+1}\}_{n=1}^{N+1}$ by solving the fully discrete state equation (\ref{state_fuldis}) and the fully discrete adjoint equation (\ref{ful_adjoint_1})--(\ref{ful_adjoint_3}) corresponding to $\bm{u}^{k+1}$. Then, for $n=1,\cdots,N$, solve \begin{equation}\label{dis_gradient} \left\{ \begin{aligned} &\bm{g}_n^{k+1}\in \mathbb{S}_h,\\ &\int_\Omega \bm{g}_n^{k+1}\cdot \bm{z}dx=\int_\Omega(\bm{u}_n^{k+1}-p_n^{k+1}\nabla y_{n-1}^{k+1})\cdot \bm{z}dx,\forall \bm{z}\in \mathbb{S}_h, \end{aligned} \right. \end{equation} by the preconditioned CG algorithm (\textbf{DG1})--(\textbf{DG5}). 
\medskip \noindent If $\frac{\Delta t\sum_{n=1}^N\int_\Omega|\bm{g}_n^{k+1}|^2dx}{\Delta t\sum_{n=1}^N\int_\Omega|\bm{g}_n^{0}|^2dx}\leq tol$, take $\bm{u} = \bm{u}^{k+1}$; else \medskip \item[\textbf{DV.}] Compute $$\beta_k=\frac{\Delta t\sum_{n=1}^N\int_\Omega|\bm{g}_n^{k+1}|^2dx}{\Delta t\sum_{n=1}^N\int_\Omega|\bm{g}_n^{k}|^2dx},~\text{and}~\bm{w}^{k+1} = \bm{g}^{k+1} + \beta_k\bm{w}^k.$$ Do $k+1\rightarrow k$ and return to \textbf{DIII}. \end{enumerate} Despite its apparent complexity, the CG algorithm (\textbf{DI})--(\textbf{DV}) is easy to implement. Actually, one of the main computational difficulties in its implementation seems to be the solution of the $N$ linear systems (\ref{dis_gradient}), which is time-consuming. However, it is worth noting that the linear systems (\ref{dis_gradient}) are decoupled with respect to $n$, so they can be solved in parallel. As a consequence, one can compute the components of the gradient $\{\bm{g}^{k}_n\}_{n=1}^N$ simultaneously, and the computation time can be reduced significantly. Moreover, it is clear that the computation of $\{\bm{g}^{k}_n\}_{n=1}^N$ requires the storage of the solutions of (\ref{state_fuldis}) and (\ref{ful_adjoint_1})--(\ref{ful_adjoint_3}) at all points in space and time. For large-scale problems, especially in three space dimensions, it may be very memory-demanding, or even impossible, to store the full sets $\{y_n^k\}_{n=0}^N$ and $\{p_n^k\}_{n=1}^{N+1}$ simultaneously. To tackle this issue, one can employ the strategy described in, e.g., \cite[Section 1.12]{glowinski2008exact}, which can drastically reduce the storage requirements at the expense of a small increase in CPU time. \section{Numerical Experiments}\label{se:numerical} In this section, we report some preliminary numerical results validating the efficiency of the proposed CG algorithm (\textbf{DI})--(\textbf{DV}) for (BCP).
All codes were written in MATLAB R2016b and numerical experiments were conducted on a Surface Pro 5 laptop with 64-bit Windows 10 operating system, Intel(R) Core(TM) i7-7660U CPU (2.50 GHz), and 16 GB RAM. \medskip \noindent\textbf{Example 1.} We consider the bilinear optimal control problem (BCP) on the domain $Q=\Omega\times(0,T)$ with $\Omega=(0,1)^2$ and $T=1$. In particular, we take the control $\bm{v}(x,t)$ in a finite-dimensional space, i.e., $\bm{v}\in L^2(0,T;\mathbb{R}^2)$. In addition, we set $\alpha_2=0$ in (\ref{objective_functional}) and consider the following tracking-type bilinear optimal control problem: \begin{equation}\label{model_ex1} \min_{\bm{v}\in L^2(0,T;\mathbb{R}^2)}J(\bm{v})=\frac{1}{2}\int_0^T|\bm{v}(t)|^2dt+\frac{\alpha_1}{2}\iint_Q|y-y_d|^2dxdt, \end{equation} where $|\bm{v}(t)|=\sqrt{\bm{v}_1(t)^2+\bm{v}_2(t)^2}$ is the canonical $2$-norm, and $y$ is obtained from $\bm{v}$ via the solution of the state equation (\ref{state_equation}). Since the control $\bm{v}$ is considered in a finite-dimensional space, the divergence-free constraint $\nabla\cdot\bm{v}=0$ is automatically satisfied. As a consequence, the first-order differential $DJ(\bm{v})$ can be easily computed. Indeed, it is easy to show that \begin{equation}\label{oc_finite} DJ(\bm{v})=\left\{\bm{v}_i(t)+\int_\Omega y(t)\frac{\partial p(t)}{\partial x_i}dx\right \}_{i=1}^2,~\text{a.e.~on}~(0,T),\forall \bm{v}\in L^2(0,T;\mathbb{R}^2), \end{equation} where $p(t)$ is the solution of the adjoint equation (\ref{adjoint_equation}). The inner preconditioned CG algorithm (\textbf{DG1})--(\textbf{DG5}) for the computation of the gradient $\{\bm{g}_n\}_{n=1}^N$ is thus avoided. In order to examine the efficiency of the proposed CG algorithm (\textbf{DI})--(\textbf{DV}), we construct an example with a known exact solution.
To this end, we set $\nu=1$ and $a_0=1$ in (\ref{state_equation}), and $$ y=e^t(-3\sin(2\pi x_1)\sin(\pi x_2)+1.5\sin(\pi x_1)\sin(2\pi x_2)),\quad p=(T-t)\sin \pi x_1 \sin \pi x_2. $$ Substituting these two functions into the optimality condition $DJ(\bm{u}(t))=0$, we have $$ \bm{u}=(\bm{u}_1,\bm{u}_2)^\top=(2e^t(T-t),-e^t(T-t))^\top. $$ We further set \begin{eqnarray*} &&f=\frac{\partial y}{\partial t}-\nabla^2y+{\bm{u}}\cdot \nabla y+y, \quad\phi=-3\sin(2\pi x_1)\sin(\pi x_2)+1.5\sin(\pi x_1)\sin(2\pi x_2),\\ &&y_d=y-\frac{1}{\alpha_1}\left(-\frac{\partial p}{\partial t} -\nabla^2p-\bm{u}\cdot\nabla p +p\right),\quad g=0. \end{eqnarray*} Then, it is easy to verify that $\bm{u}$ is a solution point of the problem (\ref{model_ex1}). We display the solution $\bm{u}$ and the target function $y_d$ at different instants of time in Figure \ref{exactU_ex1} and Figure \ref{target_ex1}, respectively. \begin{figure}[htpb] \centering{ \includegraphics[width=0.43\textwidth]{exact_u.pdf} } \caption{The exact optimal control $\bm{u}$ for Example 1.} \label{exactU_ex1} \end{figure} \begin{figure}[htpb] \centering{ \includegraphics[width=0.3\textwidth]{target25.pdf} \includegraphics[width=0.3\textwidth]{target50.pdf} \includegraphics[width=0.3\textwidth]{target75.pdf} } \caption{The target function $y_d$ at $t=0.25, 0.5$ and $0.75$ (from left to right) for Example 1.} \label{target_ex1} \end{figure} The stopping criterion of the CG algorithm (\textbf{DI})--(\textbf{DV}) is set as $$ \frac{\Delta t\sum_{n=1}^N|\bm{g}^{k+1}_n|^2}{\Delta t\sum_{n=1}^N|\bm{g}^{0}_n|^2}\leq 10^{-5}. $$ The initial value is chosen as $\bm{u}^0=(0,0)^\top$; and we denote by $\bm{u}^{\Delta t}$ and $y_h^{\Delta t}$ the computed control and state, respectively. First, we take $h=\frac{1}{2^i}, i=5,6,7,8$, $\Delta t=\frac{h}{2}$ and $\alpha_1=10^6$, and implement the proposed CG algorithm (\textbf{DI})--(\textbf{DV}) for solving the problem (\ref{model_ex1}). 
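As a quick sanity check of the manufactured solution (illustrative; this NumPy script is not part of the paper's MATLAB code), one can verify numerically that the optimality condition (\ref{oc_finite}) vanishes at $\bm{u}$, i.e., that $\bm{u}_i(t)=-\int_\Omega y(t)\frac{\partial p(t)}{\partial x_i}dx$, using midpoint-rule quadrature at a fixed instant $t$:

```python
import numpy as np

# Check that DJ(u) = 0 for the manufactured pair (y, p) of Example 1,
# i.e. u_i(t) = -∫_Ω y(t) ∂p/∂x_i(t) dx, with u = (2e^t(T-t), -e^t(T-t)).
T, t, m = 1.0, 0.3, 400
s = (np.arange(m) + 0.5) / m                       # midpoints on (0, 1)
x1, x2 = np.meshgrid(s, s, indexing="ij")
y = np.exp(t) * (-3*np.sin(2*np.pi*x1)*np.sin(np.pi*x2)
                 + 1.5*np.sin(np.pi*x1)*np.sin(2*np.pi*x2))
dp_dx1 = (T - t) * np.pi * np.cos(np.pi*x1) * np.sin(np.pi*x2)
dp_dx2 = (T - t) * np.pi * np.sin(np.pi*x1) * np.cos(np.pi*x2)
u1 = -np.sum(y * dp_dx1) / m**2                    # midpoint quadrature
u2 = -np.sum(y * dp_dx2) / m**2
print(u1 - 2*np.exp(t)*(T - t), u2 + np.exp(t)*(T - t))  # both close to 0
```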
The numerical results reported in Table \ref{tab:mesh_EX1} show that the CG algorithm converges fairly fast and is robust with respect to different mesh sizes. We also observe that the target function $y_d$ has been reached to a good accuracy. Similar comments hold for the approximation of the optimal control $\bm{u}$ and of the state $y$ of problem (\ref{model_ex1}). By taking $h=\frac{1}{2^7}$ and $\Delta t=\frac{1}{2^8}$, the computed state $y_h^{\Delta t}$ and $y_h^{\Delta t}-y_d$ at $t=0.25,0.5$ and $0.75$ are reported in Figures \ref{stateEx1_1}, \ref{stateEx1_2} and \ref{stateEx1_3}, respectively; and the computed control $\bm{u}^{\Delta t}$ and error $\bm{u}^{\Delta t}-\bm{u}$ are visualized in Figure \ref{controlEx1}. \begin{table}[htpb] {\small\centering \caption{Results of the CG algorithm (\textbf{DI})--(\textbf{DV}) with different $h$ and $\Delta t$ for Example 1.} \label{tab:mesh_EX1} \begin{tabular}{|c|c|c|c|c|} \hline Mesh sizes &$Iter$& $\|\bm{u}^{\Delta t}-\bm{u}\|_{L^2(0,T;\mathbb{R}^2)}$&$\|y_h^{\Delta t}-y\|_{L^2(Q)}$& ${\|y_h^{\Delta t}-y_d\|_{L^2(Q)}}/{\|y_d\|_{{L^2(Q)}}}$ \\ \hline $h=1/2^5,\Delta t=1/2^6$ & 117 &2.8820$\times 10^{-2}$ &1.1569$\times 10^{-2}$&3.8433$\times 10^{-3}$ \\ \hline $h=1/2^6,\Delta t=1/2^7$ &48&1.3912$\times 10^{-2}$& 2.5739$\times 10^{-3}$&8.5623$\times 10^{-4}$ \\ \hline $h=1/2^7,\Delta t=1/2^8$ &48&6.9095$\times 10^{-3}$& 4.8574$\times 10^{-4}$ &1.6516$\times 10^{-4}$ \\ \hline $h=1/2^8,\Delta t=1/2^9$& 31 &3.4845$\times 10^{-3}$ &6.6231$\times 10^{-5}$ &2.2196$\times 10^{-5}$ \\ \hline \end{tabular}} \end{table} \begin{figure}[htpb] \centering{ \includegraphics[width=0.3\textwidth]{soln_y25.pdf} \includegraphics[width=0.3\textwidth]{err_y25.pdf} \includegraphics[width=0.3\textwidth]{dis_y_25.pdf}} \caption{Computed state $y^{\Delta t}_h$, error $y^{\Delta t}_h-y$ and $y^{\Delta t}_h-y_d$ (from left to right) at $t=0.25$ for Example 1.} \label{stateEx1_1} \end{figure} \begin{figure}[htpb] \centering{
\includegraphics[width=0.3\textwidth]{soln_y50.pdf} \includegraphics[width=0.3\textwidth]{err_y.pdf} \includegraphics[width=0.3\textwidth]{dis_y_50.pdf}} \caption{Computed state $y^{\Delta t}_h$, error $y^{\Delta t}_h-y$ and $y^{\Delta t}_h-y_d$ (from left to right) at $t=0.5$ for Example 1.} \label{stateEx1_2} \end{figure} \begin{figure}[htpb] \centering{ \includegraphics[width=0.3\textwidth]{soln_y75.pdf} \includegraphics[width=0.3\textwidth]{err_y75.pdf} \includegraphics[width=0.3\textwidth]{dis_y_75.pdf}} \caption{Computed state $y^{\Delta t}_h$, error $y^{\Delta t}_h-y$ and $y^{\Delta t}_h-y_d$ (from left to right) at $t=0.75$ for Example 1.} \label{stateEx1_3} \end{figure} \begin{figure}[htpb] \centering{ \includegraphics[width=0.45\textwidth]{soln_u.pdf} \includegraphics[width=0.45\textwidth]{err_u.pdf} } \caption{Computed optimal control $\bm{u}^{\Delta t}$ and error $\bm{u}^{\Delta t}-\bm{u}$ for Example 1.} \label{controlEx1} \end{figure} Furthermore, we tested the proposed CG algorithm (\textbf{DI})--(\textbf{DV}) with $h=\frac{1}{2^6}$ and $\Delta t=\frac{1}{2^7}$ for different values of the penalty parameter $\alpha_1$. The results reported in Table \ref{reg_EX1} show that the performance of the proposed CG algorithm is robust with respect to the penalty parameter, at least for the example being considered. We also observe that as $\alpha_1$ increases, the value of $\frac{\|y_h^{\Delta t}-y_d\|_{L^2(Q)}}{\|y_d\|_{{L^2(Q)}}}$ decreases. This implies that, as expected, the computed state $y_h^{\Delta t}$ is closer to the target function $y_d$ when the penalty parameter becomes larger.
\begin{table}[htpb] {\small \centering \caption{Results of the CG algorithm (\textbf{DI})--(\textbf{DV}) with different $\alpha_1$ for Example 1.} \begin{tabular}{|c|c|c|c|c|c|} \hline $\alpha_1$ &$Iter$& $CPU(s)$&$\|\bm{u}^{\Delta t}-\bm{u}\|_{L^2(0,T;\mathbb{R}^2)}$&$\|y_h^{\Delta t}-y\|_{L^2(Q)}$& $\frac{\|y_h^{\Delta t}-y_d\|_{L^2(Q)}}{\|y_d\|_{{L^2(Q)}}}$ \\ \hline $10^4$ & 46 & 126.0666&1.3872$\times 10^{-2}$ &2.5739$\times 10^{-3}$ & 8.7666$\times 10^{-4}$ \\ \hline $10^5$ & 48 & 126.4185 &1.3908$\times 10^{-2}$ &2.5739$\times 10^{-3}$ &8.6596$\times 10^{-4}$ \\ \hline $10^6$ &48&128.2346 &1.3912$\times 10^{-2}$ & 2.5739$\times 10^{-3}$ &8.5623$\times 10^{-4}$ \\ \hline $10^7$ &48 & 127.1858&1.3912$\times 10^{-2}$&2.5739$\times 10^{-3}$ &8.5612$\times 10^{-4}$ \\ \hline $10^8$& 48 & 124.1160&1.3912$\times 10^{-2}$&2.5739$\times 10^{-3}$ &8.5610$\times 10^{-4}$ \\ \hline \end{tabular} \label{reg_EX1} } \end{table} \medskip \noindent\textbf{Example 2.} As in Example 1, we consider the bilinear optimal control problem (BCP) on the domain $Q=\Omega\times(0,T)$ with $\Omega=(0,1)^2$ and $T=1$. Now, we take the control $\bm{v}(x,t)$ in the infinite-dimensional space $\mathcal{U}=\{\bm{v}|\bm{v}\in [L^2(Q)]^2, \nabla\cdot\bm{v}=0\}.$ We set $\alpha_2=0$ in (\ref{objective_functional}), $\nu=1$ and $a_0=1$ in (\ref{state_equation}), and consider the following tracking-type bilinear optimal control problem: \begin{equation}\label{model_ex2} \min_{\bm{v}\in\mathcal{U}}J(\bm{v})=\frac{1}{2}\iint_Q|\bm{v}|^2dxdt+\frac{\alpha_1}{2}\iint_Q|y-y_d|^2dxdt, \end{equation} where $y$ is obtained from $\bm{v}$ via the solution of the state equation (\ref{state_equation}). First, we let \begin{eqnarray*} &&y=e^t(-3\sin(2\pi x_1)\sin(\pi x_2)+1.5\sin(\pi x_1)\sin(2\pi x_2)),\\ &&p=(T-t)\sin \pi x_1 \sin \pi x_2, ~ \text{and} ~\bm{u}=P_{\mathcal{U}}(p\nabla y), \end{eqnarray*} where $P_{\mathcal{U}}(\cdot)$ is the projection onto the set $\mathcal{U}$. 
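The projection $P_{\mathcal{U}}$ amounts to a pressure-type correction: given $\bm{b}$, solve $\Delta\lambda=\nabla\cdot\bm{b}$ with $\lambda\in H_0^1(\Omega)$ and set $P_{\mathcal{U}}(\bm{b})=\bm{b}-\nabla\lambda$. The following finite-difference sketch illustrates this mechanism (illustrative only: the paper computes the projection with the FEM-based preconditioned CG algorithm (\textbf{DG1})--(\textbf{DG5}); the uniform grid, dense Poisson solve, and function names below are our assumptions):

```python
import numpy as np

def leray_project(b1, b2, h):
    """L2-projection of (b1, b2) onto (discretely) divergence-free fields:
    solve the Poisson problem  Δλ = ∇·b,  λ = 0 on Γ,  and subtract ∇λ.
    b1, b2 hold nodal values on the full (n+2)x(n+2) grid of (0,1)^2,
    boundary included; interior values are corrected on copies."""
    n = b1.shape[0] - 2
    # central-difference divergence at the n*n interior nodes
    d = ((b1[2:, 1:-1] - b1[:-2, 1:-1])
         + (b2[1:-1, 2:] - b2[1:-1, :-2])) / (2*h)
    # 5-point Laplacian with homogeneous Dirichlet conditions on lambda
    I = np.eye(n)
    T = -4*np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    S = np.eye(n, k=1) + np.eye(n, k=-1)
    L = (np.kron(I, T) + np.kron(S, I)) / h**2
    lam = np.linalg.solve(L, d.ravel()).reshape(n, n)
    lam_full = np.pad(lam, 1)                # lambda = 0 on the boundary
    g1, g2 = b1.copy(), b2.copy()
    g1[1:-1, 1:-1] -= (lam_full[2:, 1:-1] - lam_full[:-2, 1:-1]) / (2*h)
    g2[1:-1, 1:-1] -= (lam_full[1:-1, 2:] - lam_full[1:-1, :-2]) / (2*h)
    return g1, g2
```

Applied to a gradient field $\bm{b}=\nabla\varphi$ with $\varphi\in H_0^1(\Omega)$, this returns a field whose discrete divergence is smaller by orders of magnitude; in the fully discrete setting, the same linear algebra is hidden in the saddle point problem (\ref{fulgradient_e}).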
We further set \begin{eqnarray*} &&f=\frac{\partial y}{\partial t}-\nabla^2y+{\bm{u}}\cdot \nabla y+y, \quad\phi=-3\sin(2\pi x_1)\sin(\pi x_2)+1.5\sin(\pi x_1)\sin(2\pi x_2),\\ &&y_d=y-\frac{1}{\alpha_1}\left(-\frac{\partial p}{\partial t} -\nabla^2p-\bm{u}\cdot\nabla p +p\right),\quad g=0. \end{eqnarray*} Then, it is easy to show that $\bm{u}$ is a solution point of the problem (\ref{model_ex2}). We note that $\bm{u}=P_{\mathcal{U}}(p\nabla y)$ admits no analytical expression and can only be computed numerically. Here, we compute $\bm{u}=P_{\mathcal{U}}(p\nabla y)$ by the preconditioned CG algorithm (\textbf{DG1})--(\textbf{DG5}) with $h=\frac{1}{2^9}$ and $\Delta t=\frac{1}{2^{10}}$, and use the resulting control $\bm{u}$ as a reference solution for this example. \begin{figure}[htpb] \centering{ \includegraphics[width=0.3\textwidth]{ex2_target25.pdf} \includegraphics[width=0.3\textwidth]{ex2_target50.pdf} \includegraphics[width=0.3\textwidth]{ex2_target75.pdf} } \caption{The target function $y_d$ with $h=\frac{1}{2^7}$ and $\Delta t=\frac{1}{2^8}$ at $t=0.25, 0.5$ and $0.75$ (from left to right) for Example 2.} \label{target_ex2} \end{figure} The stopping criteria of the outer CG algorithm (\textbf{DI})--(\textbf{DV}) and the inner preconditioned CG algorithm (\textbf{DG1})--(\textbf{DG5}) are respectively set as $$ \frac{\Delta t\sum_{n=1}^N\int_\Omega|\bm{g}_n^{k+1}|^2dx}{\Delta t\sum_{n=1}^N\int_\Omega|\bm{g}_n^{0}|^2dx}\leq 5\times10^{-8}, ~\text{and}~\frac{\int_\Omega|\nabla r^{k+1}|^2dx}{\max\{1,\int_\Omega|\nabla r^0|^2dx\}}\leq 10^{-8}. $$ The initial values are chosen as $\bm{u}^0=(0,0)^\top$ and $\lambda^0=0$; and we denote by $\bm{u}_h^{\Delta t}$ and $y_h^{\Delta t}$ the computed control and state, respectively. First, we take $h=\frac{1}{2^i}, i=6,7,8$, $\Delta t=\frac{h}{2}$, $\alpha_1=10^6$, and implement the proposed nested CG algorithm (\textbf{DI})--(\textbf{DV}) for solving the problem (\ref{model_ex2}).
The numerical results reported in Table \ref{tab:mesh_EX2} show that the CG algorithm converges rapidly and is robust with respect to different mesh sizes. In addition, the preconditioned CG algorithm (\textbf{DG1})--(\textbf{DG5}) converges within 10 iterations for all cases and thus is efficient for computing the gradient $\{\bm{g}_n\}_{n=1}^N$. We also observe that the target function $y_d$ has been approximated to good accuracy. Similar comments hold for the approximation of the optimal control $\bm{u}$ and of the state $y$ of problem (\ref{model_ex2}). \begin{table}[htpb] {\small \centering \caption{Results of the nested CG algorithm (\textbf{DI})--(\textbf{DV}) with different $h$ and $\Delta t$ for Example 2.} \begin{tabular}{|c|c|c|c|c|c|} \hline Mesh sizes &{{$Iter_{CG}$}}&$MaxIter_{PCG}$& $\|\bm{u}_h^{\Delta t}-\bm{u}\|_{L^2(Q)}$&$\|y_h^{\Delta t}-y\|_{L^2(Q)}$& $\frac{\|y_h^{\Delta t}-y_d\|_{L^2(Q)}}{\|y_d\|_{{L^2(Q)}}}$ \\ \hline $h=1/2^6,\Delta t=1/2^7$ &443&9&3.7450$\times 10^{-3}$& 9.7930$\times 10^{-5}$&1.0906$\times 10^{-6}$ \\ \hline $h=1/2^7,\Delta t=1/2^8$ &410&9&1.8990$\times 10^{-3}$& 1.7423$\times 10^{-5}$ & 3.3863$\times 10^{-7}$ \\ \hline $h=1/2^8,\Delta t=1/2^9$& 405&8 &1.1223$\times 10^{-3}$ &4.4003$\times 10^{-6}$ &1.0378$\times 10^{-7}$ \\ \hline \end{tabular} \label{tab:mesh_EX2} } \end{table} Taking $h=\frac{1}{2^7}$ and $\Delta t=\frac{1}{2^8}$, the computed state $y_h^{\Delta t}$ and the errors $y_h^{\Delta t}-y$ and $y_h^{\Delta t}-y_d$ at $t=0.25,0.5,0.75$ are reported in Figures \ref{stateEx2_1}, \ref{stateEx2_2} and \ref{stateEx2_3}, respectively; and the computed control $\bm{u}_h^{\Delta t}$, the exact control $\bm{u}$, and the error $\bm{u}_h^{\Delta t}-\bm{u}$ at $t=0.25,0.5,0.75$ are presented in Figures \ref{controlEx2_1}, \ref{controlEx2_2} and \ref{controlEx2_3}.
\begin{figure}[htpb] \centering{ \includegraphics[width=0.3\textwidth]{ex2_soln_y25.pdf} \includegraphics[width=0.3\textwidth]{ex2_err_y25.pdf} \includegraphics[width=0.3\textwidth]{ex2_dis_y_25.pdf}} \caption{Computed state $y^{\Delta t}_h$, error $y^{\Delta t}_h-y$ and $y^{\Delta t}_h-y_d$ with $h=\frac{1}{2^7}$ and $\Delta t=\frac{1}{2^8}$ (from left to right) at $t=0.25$ for Example 2.} \label{stateEx2_1} \end{figure} \begin{figure}[htpb] \centering{ \includegraphics[width=0.3\textwidth]{ex2_soln_y50.pdf} \includegraphics[width=0.3\textwidth]{ex2_err_y.pdf} \includegraphics[width=0.3\textwidth]{ex2_dis_y_50.pdf}} \caption{Computed state $y^{\Delta t}_h$, error $y^{\Delta t}_h-y$ and $y^{\Delta t}_h-y_d$ with $h=\frac{1}{2^7}$ and $\Delta t=\frac{1}{2^8}$ (from left to right) at $t=0.5$ for Example 2.} \label{stateEx2_2} \end{figure} \begin{figure}[htpb] \centering{ \includegraphics[width=0.3\textwidth]{ex2_soln_y75.pdf} \includegraphics[width=0.3\textwidth]{ex2_err_y75.pdf} \includegraphics[width=0.3\textwidth]{ex2_dis_y_75.pdf}} \caption{Computed state $y^{\Delta t}_h$, error $y^{\Delta t}_h-y$ and $y^{\Delta t}_h-y_d$ with $h=\frac{1}{2^7}$ and $\Delta t=\frac{1}{2^8}$ (from left to right) at $t=0.75$ for Example 2.} \label{stateEx2_3} \end{figure} \begin{figure}[htpb] \centering{ \includegraphics[width=0.45\textwidth]{ex2_u25.pdf} \includegraphics[width=0.45\textwidth]{ex2_erru25.pdf}} \caption{Computed control $\bm{u}^{\Delta t}_h$ and exact control $\bm{u}$ (left, from top to bottom) and the error $\bm{u}^{\Delta t}_h-\bm{u}$ (right) with $h=\frac{1}{2^7}$ and $\Delta t=\frac{1}{2^8}$ at $t=0.25$ for Example 2.} \label{controlEx2_1} \end{figure} \begin{figure}[htpb] \centering{ \includegraphics[width=0.45\textwidth]{ex2_u50.pdf} \includegraphics[width=0.45\textwidth]{ex2_erru50.pdf}} \caption{Computed control $\bm{u}^{\Delta t}_h$ and exact control $\bm{u}$ (left, from top to bottom) and the error $\bm{u}^{\Delta t}_h-\bm{u}$ (right) with $h=\frac{1}{2^7}$ 
and $\Delta t=\frac{1}{2^8}$ at $t=0.5$ for Example 2.} \label{controlEx2_2} \end{figure} \begin{figure}[htpb] \centering{ \includegraphics[width=0.45\textwidth]{ex2_u75.pdf} \includegraphics[width=0.45\textwidth]{ex2_erru75.pdf}} \caption{Computed control $\bm{u}^{\Delta t}_h$ and exact control $\bm{u}$ (left, from top to bottom) and the error $\bm{u}^{\Delta t}_h-\bm{u}$ (right) with $h=\frac{1}{2^7}$ and $\Delta t=\frac{1}{2^8}$ at $t=0.75$ for Example 2.} \label{controlEx2_3} \end{figure} \newpage \section{Conclusion and Outlook}\label{se:conclusion} We studied the bilinear control of an advection-reaction-diffusion system, where the control variable enters the model as a velocity field of the advection term. Mathematically, we proved the existence of optimal controls and derived the associated first-order optimality conditions. Computationally, we suggested the conjugate gradient (CG) method, whose implementation is nontrivial. In particular, an additional divergence-free constraint on the control variable leads to a projection subproblem to compute the gradient, and the computation of a stepsize at each CG iteration requires solving the state equation repeatedly due to the nonlinear relation between the state and control variables. To resolve the above issues, we reformulated the gradient computation as a Stokes-type problem and proposed a fast preconditioned CG method to solve it. We also proposed an efficient inexactness strategy to determine the stepsize, which requires only the solution of one linear parabolic equation. An easily implementable nested CG method was thus obtained. For the numerical discretization, we employed the standard piecewise linear finite element method and the Bercovier-Pironneau finite element method for the space discretizations of the bilinear optimal control problem and the Stokes-type problem, respectively, and a semi-implicit finite difference method for the time discretization.
The resulting algorithm was shown to be efficient by preliminary numerical experiments. We focused in this paper on an advection-reaction-diffusion system controlled by a velocity field of general form. In a real physical system, the velocity field may itself be determined by partial differential equations (PDEs), such as the Navier-Stokes equations. As a result, one encounters bilinear optimal control problems constrained by coupled PDE systems. Moreover, instead of (\ref{objective_functional}), one can also consider other types of objective functionals in the bilinear optimal control of an advection-reaction-diffusion system. For instance, one can incorporate $\iint_{Q}|\nabla \bm{v}|^2dxdt$ or $\iint_{Q}|\frac{\partial \bm{v}}{\partial t}|^2dxdt$ into the objective functional to promote an optimal velocity field with little rotation or one that is almost steady, respectively; such properties are essential in, e.g., mixing enhancement for different flows \cite{liu2008}. All these problems are of practical interest but more challenging from an algorithmic design perspective, and they have not been well addressed numerically in the literature. Our current work lays a solid foundation for solving these problems, and we leave them for future work. \bibliographystyle{amsplain} {\small
\section{Terminology and Introduction} Let $G$ be a finite undirected graph with multiple edges but without loops. We denote by $V(G)$ the vertex set and by $E(G)$ the edge set of $G$. By $n=n(G)=|V(G)|$ and $m=m(G)=|E(G)|$ we refer to the {\it order} and the {\it size} of $G$, respectively. For an edge $e\in E(G)$ we use the notations $uv$ and $\{u,v\}$, if $e$ connects the vertices $u$ and $v$ in $G$. Moreover, we say that $u$ and $v$ are {\it adjacent} and that $u$ is a {\it neighbor} of $v$. For a vertex $v\in V(G)$ we denote by $N_G(v)$ the set of neighbors and by $d_G(v)=|N_G(v)|$ the {\it degree} of $v$.\\ A {\it subgraph} $H$ of $G$ is a graph satisfying $V(H)\subseteq V(G)$ and $E(H)\subseteq E(G)$. For any subset of edges $F\subseteq E(G)$ we define $G-F$ as the subgraph with vertex set $V(G)$ and edge set $E(G)\setminus F$. For a vertex set $X\subseteq V(G)$, a subgraph $H$ of $G$ is {\it induced} by $X$, written $H=G[X]$, if $V(H)=X$ and $E(H)=\{uv \in E(G)\,|\, u,v\in X\}$. Furthermore, we use the notation $G-X$ for the induced subgraph of $G$ with vertex set $V(G)\setminus X$. We simply write $G-x$ instead of $G-\{x\}$.\\ Let $\{v_1,\ldots,v_{p+1}\}\subseteq V(G)$ and $\{e_1,\ldots,e_{p}\}\subseteq E(G)$. The sequence $v_1e_1v_2e_2\ldots e_{p}v_{p+1}$ is a {\it path}, if the vertices $v_1,\ldots,v_{p+1}$ are pairwise distinct and $e_i=v_iv_{i+1}$ for $1\le i\le p$. The same sequence is a {\it cycle}, if $v_1,\ldots,v_{p}$ are pairwise distinct, $v_1=v_{p+1}$, and $e_i=v_iv_{i+1}$ for $1\le i\le p$. In both cases $p$ is the length of the path or cycle, respectively.\\ We say $G$ is {\it connected}, if there is a path from $u$ to $v$ for every pair of distinct vertices $u,v\in V(G)$. By a {\it component} we refer to a maximal connected induced subgraph of $G$. A tree is a connected graph that does not contain any cycle of positive length.\\ A {\it labeling} of a graph $G$ is a bijective function $f:V(G)\rightarrow \{1,\ldots,n\}$.
The graph $G$ together with a labeling $f$ is a {\it labeled graph} denoted by $G_f$. For simplicity we usually omit $f$ and identify the vertex set of a labeled graph with $\{1,\ldots,n\}$.\\ An {\it orientation} of $G$ is a directed graph $D$ with vertex set $V(D)=V(G)$ in which every edge $uv\in E(G)$ is assigned a direction. Thus, a directed edge is an ordered pair of vertices that we denote as an {\it arc}. We write $A(D)$ for the set of arcs in $D$ and $(u,v)$ for an element of $A(D)$, if the direction of the arc in $D$ is from $u$ to $v$. In this case we also denote $v$ as an {\it out neighbor} of $u$. By the {\it out degree} $d^+_D(v)$ of a vertex $v$ we refer to the total number of its out neighbors. Obviously, we have $0\le d^+_D(v)\le d_G(v)$ for every $v\in V(G)$.\\ Now, let $G$ be a labeled graph with vertex set $\{1,2,\ldots,n\}$ and $D$ an orientation of $G$. A vector of nonnegative integers $s=(s_1,\ldots,s_n)$ is the {\it out degree vector} of $D$ or just a {\it degree vector} of $G$, if $d^+_D(i)=s_i$ for $1\le i\le n$. In this case we also say that the out degree vector $s$ {\it is realized} by $D$.\\ The following partial order ``$\preccurlyeq$'' on the set of nonnegative integer vectors is closely related to the well-known dominance order. Let $s$ and $t$ be two nonnegative integer vectors of dimension $n$; then $$s\preccurlyeq t:\Leftrightarrow \sum_{i=1}^k{s_i}\le \sum_{i=1}^k{t_i}\quad \textrm{for all } 1\le k\le n$$ and equality holds for $k=n$.\\[3mm] Consider the two orientations $D^r$ and $D^l$ of $G$ with arc sets $A_G^r=\{(i,j)\,|\, ij \in E(G), i<j\}$ and $A_G^l=\{(i,j)\,|\, ij \in E(G), i>j\}$. We define $s_G^r$ and $s_G^l$ as the out degree vectors of $D^r$ and $D^l$, respectively. To visualize these orientations suppose all vertices of $G$ are written on a horizontal line with increasing labels from left to right.
Now, $A_G^r$ is the arc set of the orientation of $G$ where all edges are oriented from left to right. In the same arrangement of vertices all arcs in $A_G^l$ point from right to left. Hence we have $$\sum_{i=1}^k{\left(s_G^l\right)_i}=m\left(G[\{1,\ldots,k\}]\right)\quad \text{and}\quad \sum_{i=1}^k{\left(s_G^r\right)_i}=m\left(G[\{1,\ldots,k\}]\right)+m_k,$$ for $1\le k\le n$, where $m_k$ is the number of edges between the vertex sets $\{1,\ldots,k\}$ and $\{k+1,\ldots,n\}$ in $G$. Thus, on the one hand, every out degree vector $s$ of $G$ satisfies \begin{align} s_G^l \preccurlyeq s \preccurlyeq s_G^r\quad \textrm{and}\quad 0\le s_i\le d_G(i)\;\textrm{ for }\; i=1,\ldots,n. \label{PartialOrderCond} \end{align} On the other hand, there may be nonnegative integer vectors satisfying (\ref{PartialOrderCond}) which are not realized by an orientation of $G$.\\ In \cite{Qian06} Qian introduced the concept of degree complete graphs. A labeled graph $G$ is {\it degree complete}, if every nonnegative integer vector $s$ satisfying (\ref{PartialOrderCond}) is a degree vector of $G$. Qian also proved the following characterization of degree complete graphs. \begin{thm}[Qian \cite{Qian06} (2006)]\label{thm_Qian} A labeled graph $G$ is degree complete if and only if $G$ does not contain one of the two subgraphs $H_1$ and $H_2$, where $$V(H_1)=V(H_2)=\{k_1,k_2,k_3,k_4\},\quad k_1<k_2<k_3<k_4$$ and $$E(H_1)=\{k_1k_3,k_2k_4\},\quad E(H_2)=\{k_1k_4,k_2k_3\}.$$ \end{thm} As in \cite{Qian06}, we will also denote the subgraphs $H_1$ and $H_2$ as {\it forbidden configurations}.\\ Recalling the embedding of $G$ along a horizontal line ordered by vertex labels, these forbidden configurations can be visualized (see Figure \ref{fig_H1H2}) in the following way.
\begin{figure}[ht] \begin{center} \begin{tikzpicture}[scale=0.65, vertex/.style={circle,inner sep=1pt,draw,thick}, myarrow/.style={thick}] \node at (-40pt,30pt) {$H_1:$}; \node at (31pt,0pt) {$\cdots$}; \node at (91pt,0pt) {$\cdots$}; \node at (151pt,0pt) {$\cdots$}; \node (1) at (0pt,0pt) [vertex] {$k_1$}; \node (2) at (60pt,0pt) [vertex] {$k_2$}; \node (3) at (120pt,0pt) [vertex] {$k_3$}; \node (4) at (180pt,0pt) [vertex] {$k_4$}; \draw[myarrow] (1) to[bend left=45] (3); \draw[myarrow] (2) to[bend left=45] (4); \node at (260pt,30pt) {$H_2:$}; \node at (331pt,0pt) {$\cdots$}; \node at (391pt,0pt) {$\cdots$}; \node at (451pt,0pt) {$\cdots$}; \node (1a) at (300pt,0pt) [vertex] {$k_1$}; \node (2a) at (360pt,0pt) [vertex] {$k_2$}; \node (3a) at (420pt,0pt) [vertex] {$k_3$}; \node (4a) at (480pt,0pt) [vertex] {$k_4$}; \draw[myarrow] (1a) to[bend left=60] (4a); \draw[myarrow] (2a) to[bend left=45] (3a); \end{tikzpicture} \end{center} \caption{Forbidden configurations $H_1$ and $H_2$ from Theorem \ref{thm_Qian}.}\label{fig_H1H2} \end{figure} If we draw all edges on one side (above or below) of the line, the subgraph $H_1$ is a pair of crossing independent edges. Similarly, $H_2$ refers to a pair of nested independent edges in $G$.\\ An important property concerning the concept of degree complete graphs is illustrated by the following example. \begin{exmp}[Qian \cite{Qian06} (2006)]\label{exmp1} Consider the labeled graphs $G_1$ and $G_2$ from Figure \ref{fig_exmp1}.
\begin{figure}[ht] \begin{center} \begin{tabular}{cp{1cm}c} \begin{tikzpicture}[scale=0.7, vertex/.style={circle,inner sep=2pt,draw,thick}, myarrow/.style={thick}] \node at (-40pt,40pt) {$G_1$:}; \node (1) at (0pt,0pt) [vertex] {$1$}; \node (2) at (60pt,0pt) [vertex] {$2$}; \node (3) at (120pt,0pt) [vertex] {$3$}; \node (4) at (180pt,0pt) [vertex] {$4$}; \draw[myarrow] (1) to (2); \draw[myarrow] (2) to (3); \draw[myarrow] (3) to (4); \end{tikzpicture} & & \begin{tikzpicture}[scale=0.7, vertex/.style={circle,inner sep=2pt,draw,thick}, myarrow/.style={thick}] \node at (-40pt,40pt) {$G_2$:}; \node (1) at (0pt,0pt) [vertex] {$1$}; \node (2) at (60pt,0pt) [vertex] {$2$}; \node (3) at (120pt,0pt) [vertex] {$3$}; \node (4) at (180pt,0pt) [vertex] {$4$}; \draw[myarrow] (1) to[bend left=45] (3); \draw[myarrow] (2) to (3); \draw[myarrow] (2) to[bend left=45] (4); \end{tikzpicture} \end{tabular} \end{center} \caption{Graphs $G_1$ and $G_2$ from Example \ref{exmp1}.}\label{fig_exmp1} \end{figure} Obviously, $G_1$ does not contain any of the subgraphs $H_1$ and $H_2$. Thus, it follows from Theorem \ref{thm_Qian} that $G_1$ is degree complete. On the other hand, in $G_2$ the edges $\{1,3\}$ and $\{2,4\}$ form a forbidden configuration $H_1$. Therefore $G_2$ is not degree complete. \end{exmp} Since $G_1$ and $G_2$ are both labeled versions of a path of length $3$, we observe that the property of being degree complete depends on the vertex labeling of the graph. Qian also noticed this fact and stated the following problem. \begin{prob}[Qian \cite{Qian06} (2006)]\label{prob_Qian} Characterize the graphs which are not degree complete no matter how we label their vertices. \end{prob} To approach this topic we say that an unlabeled graph $G$ has a {\it degree complete labeling}, if there exists a labeling $f$ of its vertices such that the labeled graph $G_f$ is degree complete.
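Qian's criterion is easy to check mechanically. The following minimal sketch (our own code, not part of \cite{Qian06}) tests a labeled graph, given as a list of edges on $\{1,\ldots,n\}$, for the forbidden configurations $H_1$ and $H_2$:

```python
from itertools import combinations

def is_degree_complete(edges):
    """Check Qian's criterion: a labeled graph is degree complete iff,
    with its vertices placed on a line in label order, no two
    vertex-disjoint edges cross (H1) or nest (H2)."""
    for e, f in combinations(edges, 2):
        if set(e) & set(f):
            continue              # edges share a vertex: never forbidden
        a, b = sorted(e)
        c, d = sorted(f)
        if a > c:                 # ensure a is the smallest endpoint
            a, b, c, d = c, d, a, b
        if c < b:                 # a < c < b: the edges cross or nest
            return False
    return True

# G1 and G2 from Example 1, given by their edge lists:
G1 = [(1, 2), (2, 3), (3, 4)]
G2 = [(1, 3), (2, 3), (2, 4)]
```

Running the check confirms the example: `is_degree_complete(G1)` holds, while `is_degree_complete(G2)` fails because of the disjoint edges $\{1,3\}$ and $\{2,4\}$.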
Thus, Problem \ref{prob_Qian} asks for a characterization of the graphs which do not have any degree complete labeling.\\ In addition to the above mentioned problem, two more questions arise. Firstly, can we find an efficient procedure to recognize graphs having a degree complete labeling? Secondly, if we know that a given graph has such a labeling, how can we determine it?\\ An unlabeled graph has a huge number of vertex labelings in general (even if we just count the labelings modulo the automorphism group of the graph). Therefore, it is not useful to test different labelings by applying Theorem \ref{thm_Qian}, and we need a new approach for this problem.\\ The main theorem of the next section gives three characterizations of unlabeled graphs which have a degree complete labeling. The first equivalent formulation describes the structure of these graphs in terms of forbidden subgraphs. The other characterizations yield a polynomial-time procedure to recognize unlabeled graphs with a degree complete labeling. Furthermore, these characterizations can be used as starting points for two similar algorithms which determine a desired labeling. \section{Degree complete labeling} We start with a simple but important observation which is a direct consequence of Theorem \ref{thm_Qian}. Moreover, it motivates a characterization of graphs with a degree complete labeling which is based on forbidden subgraphs. \begin{obs}\label{obs_subgraph} Let $G$ be a graph and $H$ a subgraph of $G$. If $G$ has a degree complete labeling, then $H$ has a degree complete labeling. \end{obs} Next, we want to obtain some necessary conditions for unlabeled graphs with a degree complete labeling. Thus we consider those graphs that contain at least one of the forbidden configurations $H_1$ and $H_2$ in every labeling. In particular, we are interested in a set of such graphs none of which contains another one as a subgraph. Denote by $C_k$ the graph consisting of a cycle of length $k\ge 3$.
Furthermore, we define the graphs $T_1$ and $T_2$ as in Figure \ref{fig_T1T2}. \begin{figure}[ht] \begin{center} \begin{tikzpicture}[scale=0.95, vertex/.style={circle,inner sep=2pt,draw,thick}, myarrow/.style={thick}] \node at (-80pt,60pt) {$T_1$:}; \node (1) at (0pt,0pt) [vertex] {}; \node (2) at (0pt,30pt) [vertex] {}; \node (3) at (26pt,-15pt) [vertex] {}; \node (4) at (-26pt,-15pt) [vertex] {}; \node (5) at (0pt,60pt) [vertex] {}; \node (6) at (52pt,-30pt) [vertex] {}; \node (7) at (-52pt,-30pt) [vertex] {}; \draw[myarrow] (1) to (2); \draw[myarrow] (1) to (3); \draw[myarrow] (1) to (4); \draw[myarrow] (2) to (5); \draw[myarrow] (3) to (6); \draw[myarrow] (4) to (7); \node at (130pt,60pt) {$T_2$:}; \node (v1) at (200pt,17pt) [vertex] {}; \node (v2) at (215pt,-9pt) [vertex] {}; \node (v3) at (185pt,-9pt) [vertex] {}; \node (v4) at (200pt,47pt) [vertex] {}; \node (v5) at (241pt,-24pt) [vertex] {}; \node (v6) at (159pt,-24pt) [vertex] {}; \draw[myarrow] (v1) to (v2); \draw[myarrow] (v1) to (v3); \draw[myarrow] (v2) to (v3); \draw[myarrow] (v1) to (v4); \draw[myarrow] (v2) to (v5); \draw[myarrow] (v3) to (v6); \end{tikzpicture} \end{center} \caption{Graphs $T_1$ and $T_2$.}\label{fig_T1T2} \end{figure} \begin{lem}\label{lem1} Let $G\in \{T_1,T_2\}\, \cup \, \{C_k\,|\, k\ge 4\}$. For every vertex labeling $f$ of $G$ there exists a forbidden configuration $H_1$ or $H_2$ in $G_f$. \end{lem} \begin{prf} We consider an arbitrary vertex labeling of $G$. Therefore, without loss of generality we identify each element in $V(G)$ with exactly one of the integers from 1 to $|V(G)|$. There are three cases.\\[2mm] {\it Case 1: $G=T_1$.}\\ Denote by $i_1$ the unique vertex of degree 3. There are three distinct vertices in $V(G)$ which are adjacent to $i_1$. Furthermore, we can find two of these vertices $i_2$ and $i_3$ such that exactly one of the following two conditions holds: either we have $i_1<i_2<i_3$ or $i_3<i_2<i_1$.
In both cases there is a vertex $i_4\in V(G)$ that has $i_2$ as its unique neighbor. If $i_1<i_4<i_3$ (respectively $i_3<i_4<i_1$), then $\{i_1i_3,i_2i_4\}$ forms a forbidden configuration $H_2$. Otherwise, $\{i_1i_3,i_2i_4\}$ is the edge set of a copy of $H_1$.\\[2mm] {\it Case 2: $G=T_2$.}\\ Denote by $i_1,i_2,i_3$ the three vertices of degree 3 in $T_2$ such that $i_1<i_2<i_3$. There is a vertex $i_4\in V(G)$ that has $i_2$ as its unique neighbor. Now, we have to distinguish two cases. If either $i_1<i_4<i_2$ or $i_2<i_4<i_3$ holds, then $\{i_1i_3,i_2i_4\}$ is the edge set of a forbidden configuration $H_2$ in $G$. Otherwise we have $i_4<i_1$ or $i_4>i_3$. In this case $G$ contains the edges $i_1i_3$ and $i_2i_4$, that is, a copy of $H_1$.\\[2mm] {\it Case 3: $G=C_k$, $k\ge 4$.}\\ We observe that vertex $1$ has a neighbor $i_1$ with $2<i_1\le k$. Moreover, vertex $2$ is adjacent to a vertex $i_2$ satisfying $2<i_2\le k$ and $i_1\neq i_2$. If $i_1<i_2$, then the edges $\{1,i_1\}$ and $\{2,i_2\}$ form a forbidden configuration $H_1$. In the case $i_1>i_2$ the same edges give us $H_2$ as a subgraph of $G$. \end{prf} Combining Theorem \ref{thm_Qian}, Observation \ref{obs_subgraph}, and Lemma \ref{lem1} we deduce that every graph containing a subgraph $T_1,\,T_2$ or a cycle of length $k\ge4$ does not have a degree complete labeling. This shows that graphs with a degree complete labeling have a structure that is similar to trees, since they may only contain cycles of length 3.\\[3mm] Now suppose for a moment that $G$ is a tree. Obviously, $G$ cannot contain a subgraph isomorphic to $T_2$ or a cycle, but it can have a copy of $T_1$ as a subgraph. A tree without a copy of $T_1$ has a path $P$ such that all vertices which are not included in this path have degree 1 and are adjacent to a vertex of $P$. These trees are also known as caterpillars.
A {\it caterpillar} is defined (see \cite{Harary73}) as a tree $G$ such that $G-X_1$ is a path, where $X_1$ denotes the set of vertices of degree 1 in $G$. It is not difficult to see that caterpillars can be characterized as exactly those trees without a subgraph isomorphic to $T_1$. Let $P=v_1e_1v_2e_2\ldots e_{p}v_{p+1}$, where $p$ is a nonnegative integer. Obviously, the vertex labeling $f$ defined by $f(v_i)=i$ is a degree complete labeling of $P$. Furthermore, we can extend $f$ to a degree complete labeling of the whole caterpillar $G$ by repeating the following step. Consider an unlabeled vertex $x\in X_1$ and denote by $u\in V(P)$ the unique neighbor of $x$ in $G$. We add 1 to the label of every labeled vertex $w$ with $f(w)>f(u)$ and set $f(x)=f(u)+1$. Continuing this procedure, we terminate with a labeling of all vertices that fulfills the following condition: for every edge $uv\in E(P)$ and every vertex $w$ satisfying $f(u)<f(w)<f(v)$, the vertex $w$ is adjacent to $u$. Hence $G_f$ does not contain a forbidden configuration $H_1$ or $H_2$. By Theorem \ref{thm_Qian} the labeling $f$ yields a degree complete labeling of $G$.\\ On the left hand side of Figure \ref{fig_caterpillar} there is a caterpillar. The graph to the right shows a labeled version of the same caterpillar.
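The repeated insertion step described above can be sketched in a few lines (a minimal sketch with our own data representation; the spine is the path $P$ in order, and each leaf is mapped to its unique neighbor on $P$):

```python
def caterpillar_labeling(spine, leaves):
    """Label a caterpillar by the insertion procedure above: start from
    the spine path labeled 1,...,p+1 in order, then insert each leaf
    directly after its unique spine neighbor, which shifts all larger
    labels up by one.
    spine:  list of spine vertices in path order.
    leaves: dict mapping each leaf vertex to its spine neighbor."""
    order = list(spine)                     # vertices in current label order
    for leaf, u in leaves.items():
        order.insert(order.index(u) + 1, leaf)
    return {v: i + 1 for i, v in enumerate(order)}
```

Inserting the leaf directly after its neighbor is exactly the step "add 1 to every label larger than $f(u)$ and set $f(x)=f(u)+1$" expressed on the label order instead of on the label values.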
The vertices of the labeled graph are drawn in the order given by the labeling, which shows that it is a degree complete labeling.\\[3mm] \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.8, vertex/.style={circle,inner sep=2pt,draw,thick}, myarrow/.style={thick}] \node (11) at (-161pt,0pt) [vertex] {}; \node (12) at (-182pt,21pt) [vertex] {}; \node (13) at (-182pt,-21pt) [vertex] {}; \node (14) at (-131pt,0pt) [vertex] {}; \node (15) at (-131pt,30pt) [vertex] {}; \node (16) at (-101pt,0pt) [vertex] {}; \node (17) at (-101pt,30pt) [vertex] {}; \node (18) at (-80pt,21pt) [vertex] {}; \node (19) at (-80pt,-21pt) [vertex] {}; \node (20) at (-101pt,-30pt) [vertex] {}; \draw[myarrow] (11) to (12); \draw[myarrow] (11) to (13); \draw[myarrow] (11) to (14); \draw[myarrow] (14) to (15); \draw[myarrow] (14) to (16); \draw[myarrow] (16) to (17); \draw[myarrow] (16) to (18); \draw[myarrow] (16) to (19); \draw[myarrow] (16) to (20); \node (1) at (0pt,0pt) [vertex,label=below:$1$] {}; \node (2) at (30pt,0pt) [vertex,label=below:$2$] {}; \node (3) at (60pt,0pt) [vertex,label=below:$3$] {}; \node (4) at (90pt,0pt) [vertex,label=below:$4$] {}; \node (5) at (120pt,0pt) [vertex,label=below:$5$] {}; \node (6) at (150pt,0pt) [vertex,label=below:$6$] {}; \node (7) at (180pt,0pt) [vertex,label=below:$7$] {}; \node (8) at (210pt,0pt) [vertex,label=below:$8$] {}; \node (9) at (240pt,0pt) [vertex,label=below:$9$] {}; \node (10) at (270pt,0pt) [vertex,label=below:$10$] {}; \draw[myarrow] (1) to (2); \draw[myarrow] (1) to[bend left=30] (3); \draw[myarrow] (1) to[bend left=40] (4); \draw[myarrow] (4) to (5); \draw[myarrow] (4) to[bend left=30] (6); \draw[myarrow] (6) to (7); \draw[myarrow] (6) to[bend left=30] (8); \draw[myarrow] (6) to[bend left=40] (9); \draw[myarrow] (6) to[bend left=50] (10); \end{tikzpicture} \end{center} \caption{An unlabeled caterpillar and a labeled version of the same caterpillar with a degree complete labeling.}\label{fig_caterpillar} \end{figure} We try to adapt the
characterization known for caterpillars. Thus, we return to an arbitrary graph $G$ not containing $T_1,\,T_2$ as a subgraph or a cycle of length $k\ge4$. In addition to the vertices of degree 1 in $X_1$, we have to delete a vertex, or at least an edge, of every triangle to obtain a path. Therefore we define the following sets. \begin{def1}\label{X1X2} Let $G$ be a graph with $V(G)=\{v_1,\ldots,v_n\}$ and $E(G)=\{e_1,\ldots,e_m\}$. We define the vertex set \begin{align*} X_1(G) &:=\{v\in V(G)\,|\, d_G(v)=1\}. \end{align*} Furthermore, we construct $X_2(G)$ by the following procedure. Initialize $X_2(G)=\emptyset$. For every $i$ from $1$ to $n$ we add $v_i$ to $X_2(G)$, if all of the following conditions are fulfilled: \begin{itemize} \item $v_i$ has exactly two neighbors in $G$, denoted by $u$ and $v$. \item $v_i$ is the unique common neighbor of $u$ and $v$. \item $u$ and $v$ are adjacent. \item $u$ and $v$ are not in $X_2(G)$. \end{itemize} Finally, we define the edge set $F(G)$. Start with $F(G)= \emptyset$. For every $j$ from $1$ to $m$ add the edge $e_j$ to $F(G)$, if all of the following conditions are fulfilled: \begin{itemize} \item $e_j$ joins the vertices $u$ and $w$. \item $u$ and $w$ have a unique common neighbor $v$. \item $uv$ and $vw$ are not in $F(G)$. \end{itemize} \end{def1} By these definitions we see that every $w\in X_2(G)$ has the following properties. The vertex $w$ has degree 2 and is part of a triangle. Moreover, there does not exist a further triangle in $G$ containing both neighbors of $w$. We also notice that for every triangle in $G$ at most one of its vertices is part of $X_2(G)$. In general there is more than one possible choice for a maximal set satisfying all conditions of $X_2(G)$.\\ Now, consider a triangle in $G$ where at least one vertex $v$ has degree 2. By its definition, $F(G)$ contains the edge of the triangle opposite $v$, if this edge is not contained in a further triangle.
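The greedy constructions of $X_1(G)$ and $X_2(G)$ from the definition above might be sketched as follows (a minimal sketch; the adjacency-dictionary representation is our own choice, and $F(G)$ is constructed analogously by iterating over the edges):

```python
def x1_x2(adj):
    """Greedy construction of X1(G) and X2(G).
    adj: dict mapping each vertex to the set of its neighbors,
    iterated in the fixed vertex order v_1, ..., v_n."""
    X1 = {v for v, nb in adj.items() if len(nb) == 1}
    X2 = set()
    for v, nb in adj.items():
        if len(nb) != 2:
            continue
        u, w = tuple(nb)
        # v must be the unique common neighbor of u and w, the vertices
        # u and w must be adjacent, and neither may already lie in X2.
        if adj[u] & adj[w] == {v} and w in adj[u] and not {u, w} & X2:
            X2.add(v)
    return X1, X2
```

For a triangle $\{a,b,c\}$ with one pendant vertex $d$ attached to $a$, the sketch returns $X_1=\{d\}$ and $X_2$ containing the first eligible degree-2 triangle vertex, mirroring the greedy order of the definition.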
Similarly to $X_2(G)$, there does not exist a triangle which has two of its edges in $F(G)$.\\ The above mentioned constructions show the following. The procedures which yield $X_2(G)$ and $F(G)$ are basically greedy algorithms that additionally use some graph search elements. Thus, $X_2(G)$ and $F(G)$ can be determined in polynomial time with respect to the order of $G$.\\[3mm] We are now able to formulate and prove the main theorem of this section. \begin{thm}\label{thm_char_list} Let $G$ be a graph. The following statements are equivalent: \begin{enumerate} \item[(i)] $G$ has a degree complete labeling. \item[(ii)] $G$ does not contain a subgraph isomorphic to $T_1$, $T_2$ or $C_k$ ($k\ge 4$). \item[(iii)] $G-X_1(G)- X_2(G)$ is a disjoint union of paths. \item[(iv)] $G-X_1(G)-F(G)$ is a disjoint union of paths. \end{enumerate} \end{thm} \begin{prf} From (i) to (ii): Follows from Theorem \ref{thm_Qian}, Observation \ref{obs_subgraph}, and Lemma \ref{lem1}.\\[3mm] From (ii) to (iii) and (ii) to (iv): Suppose $G$ does not contain a subgraph isomorphic to $T_1$, $T_2$ or $C_k$ ($k\ge 4$). If $G$ is not connected, we consider each component separately. Thus we assume that $G$ is connected. There does not exist an edge in $G$ which is part of two triangles, as otherwise $G$ would contain a cycle of length 4. Furthermore, every triangle has a vertex $v$ of degree 2, since $T_2$ is not a subgraph of $G$. Notice that $G-v$ is connected. Hence $G-X_2(G)$ is connected, because $X_2(G)$ contains at most one vertex of every triangle. On the other hand, $X_2(G)$ also contains at least one vertex of every triangle. Thus, $G-X_2(G)$ is an induced tree in $G$. Taking into account that $T_1$ is not a subgraph of $G-X_2(G)$, we deduce that it is a caterpillar. From the definition of caterpillars it follows that $G-X_1(G)- X_2(G)$ is a path.\\ Considering $F(G)$, we observe by the same arguments as before that every triangle of $G$ contains exactly one edge of $F(G)$.
Therefore, $G-F(G)$ is a connected subgraph of $G$ without a cycle. Hence it is a tree not containing $T_1$, that is, a caterpillar. Again, $G-X_1(G)-F(G)$ is a path.\\[3mm] From (iii) to (i): Let $G$ be a graph such that $G-X_1(G)- X_2(G)$ is a disjoint union of paths. Obviously, $G$ has a degree complete labeling if and only if every component of $G$ has a degree complete labeling. Furthermore, if a component does not consist of a single edge, then it corresponds to exactly one of the paths in $G-X_1(G)- X_2(G)$. Since a single edge has a (trivial) degree complete labeling, we can assume that there is a path in $G-X_1(G)- X_2(G)$ for each component. In the following we give a labeling procedure for an arbitrary component of $G$ and prove that the obtained labeled graph is degree complete.\\ Suppose $\tilde{G}$ is a component of $G$ and $P$ the corresponding path in $G-X_1(G)- X_2(G)$. We define $$X_1(\tilde{G})=X_1(G)\cap V(\tilde{G})\quad \text{and}\quad X_2(\tilde{G})=X_2(G)\cap V(\tilde{G}).$$ Let $P$ be of length $p\ge0$ and denote by $v_1,\ldots,v_{p+1}$ the vertices of $P$ such that $v_i$ and $v_{i+1}$ are adjacent for $1\le i\le p$. First we initialize the labeling function $f$ by $f(v_i)=i$. Next, we extend $f$ to the vertices in $X_2(\tilde{G})$ one by one as follows. From the definition of $X_2(\tilde{G})$ we observe that every unlabeled vertex $v\in X_2(\tilde{G})$ has exactly two neighbors, say $v_j$ and $v_{j+1}$, in $P$ which satisfy $f(v_j)<f(v_{j+1})$. Now, we add 1 to the label of every labeled vertex $w$ with $f(w)>f(v_j)$ and set $f(v)=f(v_j)+1$. If all elements in $X_2(\tilde{G})$ are labeled, we finally consider an unlabeled vertex $x\in X_1(\tilde{G})$. Notice that $x$ has a labeled vertex $u\in V(P)$ as its unique neighbor.
Again, we add 1 to the label of every labeled vertex $w$ with $f(w)>f(u)$ and set $f(x)=f(u)+1$.\\ It is not difficult to see that $f$ is a bijective function from $V(\tilde{G})$ to $\{1,\ldots,|V(\tilde{G})|\}$, that is, a labeling of $\tilde{G}$. To verify that $f$ is also a degree complete labeling, let $v_iv_{i+1}\in E(P)$ and consider an arbitrary vertex $y$ such that $f(v_i)<f(y)<f(v_{i+1})$. Obviously, we have $y\in X_1(\tilde{G})\cup X_2(\tilde{G})$. If $y\in X_2(\tilde{G})$, then $y$ is adjacent to both $v_i$ and $v_{i+1}$. Furthermore, by its construction there does not exist a further vertex in $X_2(\tilde{G})$ with the property of $y$. Thus every other vertex $z$ with $f(v_i)<f(z)<f(v_{i+1})$ is in $X_1(\tilde{G})$. From the above mentioned labeling procedure we deduce that $z$ has $v_i$ as its unique neighbor and $f(z)<f(y)$. Therefore, the subgraph induced by all vertices with labels from $f(v_i)$ to $f(v_{i+1})$ does not contain a forbidden configuration from Theorem \ref{thm_Qian}. Moreover, this also holds for $\tilde{G}_f$, because $\tilde{G}$ does not have an edge $uv\in E(\tilde{G})$ such that $f(u)<f(v_i)<f(v)$ for $1\le i\le p+1$. Hence $\tilde{G}_f$ is degree complete.\\[3mm] From (iv) to (i): Let $G$ be a graph such that $G-X_1(G)-F(G)$ is a disjoint union of paths. By similar arguments as mentioned before it is sufficient to show that each component of $G$ has a degree complete labeling. Again, we assume that every component corresponds to exactly one of the paths in $G-X_1(G)-F(G)$.\\ Let $\tilde{G}$ be a component of $G$ and $P$ the corresponding path in $G-X_1(G)-F(G)$ with vertices $v_1,\ldots,v_{p+1}$ such that $v_i$ and $v_{i+1}$ are consecutive in $P$ for $1\le i\le p$. We define $$X_1(\tilde{G})=X_1(G)\cap V(\tilde{G})\quad \text{and}\quad F(\tilde{G})=F(G)\cap E(\tilde{G}).$$ Furthermore, we initialize $f$ by $f(v_i)=i$ for every $1\le i\le p+1$. Now, consider an unlabeled vertex $x\in X_1(\tilde{G})$ and denote by $u$ the unique neighbor of $x$ in $P$.
Again, we extend $f$ by adding 1 to the label of every labeled vertex $w$ with $f(w)>f(u)$ and set $f(x)=f(u)+1$.\\ Obviously, $f$ is a labeling of $\tilde{G}$. Moreover, for every edge $v_iv_{i+1}\in E(P)$ there are two possibilities. If there is an edge in $F(\tilde{G})$ joining $v_{i-1}$ and $v_{i+1}$, then $f(v_{i+1})=f(v_{i})+1$. Thus there does not exist a vertex with label between $f(v_i)$ and $f(v_{i+1})$ in this case. Otherwise there might be a vertex $z$ with $f(v_i)<f(z)<f(v_{i+1})$. As seen before, we have $z\in X_1(\tilde{G})$ and $z$ has $v_i$ as its unique neighbor. Therefore, $\tilde{G}_f$ is degree complete as it does not contain a forbidden configuration $H_1$ or $H_2$. \end{prf} We finish this section with an example on the labeling procedures from the proof of Theorem \ref{thm_char_list}. Let $G$ be the graph from Figure \ref{fig_exmp2}. \begin{figure}[ht] \begin{center} \begin{tikzpicture}[scale=0.6, vertex/.style={circle,inner sep=0.5pt,draw,thick}, vertex2/.style={circle,inner sep=2pt,draw,thick}, myarrow/.style={thick}] \node (1) at (-15pt,36pt) [vertex2] {$v_{5}$}; \node (2) at (8pt,83pt) [vertex2] {$v_2$}; \node (3) at (-50pt,81pt) [vertex2] {$v_1$}; \node (4) at (70pt,1pt) [vertex] {$v_{10}$}; \node (5) at (40pt,43pt) [vertex2] {$v_6$}; \node (6) at (-25pt,-5pt) [vertex2] {$v_9$}; \node (7) at (-71pt,38pt) [vertex2] {$v_4$}; \node (8) at (154pt,3pt) [vertex] {$v_{11}$}; \node (9) at (105pt,31pt) [vertex2] {$v_7$}; \node (10) at (145pt,53pt) [vertex2] {$v_8$}; \node (11) at (47pt,93pt) [vertex2] {$v_3$}; \draw[myarrow] (1) to (3); \draw[myarrow] (1) to (5); \draw[myarrow] (1) to (6); \draw[myarrow] (1) to (7); \draw[myarrow] (2) to (5); \draw[myarrow] (3) to (7); \draw[myarrow] (4) to (5); \draw[myarrow] (4) to (9); \draw[myarrow] (5) to (9); \draw[myarrow] (5) to (11); \draw[myarrow] (8) to (9); \draw[myarrow] (9) to (10); \end{tikzpicture} \end{center} \caption{Graph $G$}\label{fig_exmp2} \end{figure} First, we determine the following
sets $$X_1(G)=\{v_2, v_3, v_8, v_9, v_{11}\},\quad X_2(G)=\{v_1,v_{10}\}\quad \text{and}\quad F(G)=\{v_4v_5,v_6v_7\}.$$ We observe that $G-X_1(G)-X_2(G)$ is a path with vertex set $\{v_4,v_5,v_6,v_7\}$. Similarly, $G-X_1(G)-F(G)$ consists of the path $v_4v_1v_5v_6v_{10}v_7$. Thus, $G$ has a degree complete labeling.\\ Next, as described in the proof from (iii) to (i), we initialize the labeling $f$ by $$f(v_4)=1,\quad f(v_5)=2,\quad f(v_6)=3,\quad\text{and}\quad f(v_7)=4.$$ Since $v_1\in X_2(G)$ is unlabeled and $N_G(v_1)=\{v_4,v_5\}$ we set $$f(v_1)=2,\quad f(v_5)=2+1=3,\quad f(v_6)=3+1=4,\quad\text{and}\quad f(v_7)=4+1=5.$$ Analogously, for $v_{10}\in X_2(G)$ the procedure yields $$f(v_{10})=5\quad\text{and}\quad f(v_7)=5+1=6.$$ For the unlabeled vertex $v_2\in X_1(G)$ we continue with $$f(v_{2})=5,\quad f(v_{10})=5+1=6,\quad\text{and}\quad f(v_{7})=6+1=7.$$ \begin{figure}[ht] \begin{center} \begin{tikzpicture}[scale=0.6, vertex/.style={circle,inner sep=0.5pt,draw,thick}, vertex2/.style={circle,inner sep=2pt,draw,thick}, myarrow/.style={thick},myarrow2/.style={thick,opacity=0.1}] \node (1) at (100pt,-70pt) [vertex2,label=below:$3$] {$v_5$}; \node (2) at (300pt,-70pt) [vertex2,label=below:$7$] {$v_3$}; \node (3) at (50pt,-70pt) [vertex2,label=below:$2$] {$v_1$}; \node (4) at (350pt,-70pt) [vertex,label=below:$8$] {$v_{10}$}; \node (5) at (200pt,-70pt) [vertex2,label=below:$5$] {$v_6$}; \node (6) at (150pt,-70pt) [vertex2,label=below:$4$] {$v_9$}; \node (7) at (0pt,-70pt) [vertex2,label=below:$1$] {$v_4$}; \node (8) at (500pt,-70pt) [vertex,label=below:$11$] {$v_{11}$}; \node (9) at (400pt,-70pt) [vertex2,label=below:$9$] {$v_7$}; \node (10) at (450pt,-70pt) [vertex2,label=below:$10$] {$v_8$}; \node (11) at (250pt,-70pt) [vertex2,label=below:$6$] {$v_2$}; \draw[myarrow] (1) to (3); \draw[myarrow, bend left=45] (1) to (5); \draw[myarrow] (1) to (6); \draw[myarrow, bend right=45] (1) to (7); \draw[myarrow, bend right=45] (2) to (5); \draw[myarrow] (3) to (7); 
\draw[myarrow, bend right=60] (4) to (5); \draw[myarrow] (4) to (9); \draw[myarrow, bend left=75] (5) to (9); \draw[myarrow] (5) to (11); \draw[myarrow, bend right=45] (8) to (9); \draw[myarrow] (9) to (10); \end{tikzpicture} \end{center} \caption{Labeled Graph $G_f$}\label{fig_exmp2_labeled} \end{figure} Repeating this step until all vertices are labeled we arrive at the labeling $f$ from Figure \ref{fig_exmp2_labeled}. This also shows that $G_f$ is degree complete. Finally, it is not difficult to prove that the procedure from (iv) to (i) yields the same labeling.
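The insertion procedure used above is easy to replay programmatically. The following Python sketch (our own illustration; the helper \texttt{insert\_vertex} is not part of the proof) reproduces the first steps of the example, with vertex names and neighbors taken from Figure \ref{fig_exmp2}:

```python
def insert_vertex(f, v, anchor_label):
    """Insert vertex v directly after the vertex carrying anchor_label:
    add 1 to every existing label greater than anchor_label, then
    assign v the label anchor_label + 1."""
    for w in f:
        if f[w] > anchor_label:
            f[w] += 1
    f[v] = anchor_label + 1

# Path v4 v5 v6 v7 of G - X1(G) - X2(G), initialized by f(v_i) = i.
f = {"v4": 1, "v5": 2, "v6": 3, "v7": 4}

# v1 in X2(G): neighbors v4, v5 on the path; anchor at the smaller label f(v4).
insert_vertex(f, "v1", f["v4"])
assert f == {"v4": 1, "v1": 2, "v5": 3, "v6": 4, "v7": 5}

# v10 in X2(G): neighbors v6, v7; anchor at f(v6).
insert_vertex(f, "v10", f["v6"])
assert f["v10"] == 5 and f["v7"] == 6

# v2 in X1(G): unique neighbor v6; anchor at f(v6).
insert_vertex(f, "v2", f["v6"])
print(sorted(f, key=f.get))  # ['v4', 'v1', 'v5', 'v6', 'v2', 'v10', 'v7']
```

The intermediate labels match the values computed in the example; iterating over the remaining vertices of $X_1(G)$ completes the labeling.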
\section{Introduction} The problem of realizing and manipulating Majorana fermions in condensed matter systems is currently a topic of great theoretical and experimental interest. Roughly, Majorana fermions constitute `half' of a usual fermion. That is, creating an ordinary fermion $f$ requires superposing two Majorana modes $\gamma_{1,2}$---which can be separated by arbitrary distances---via $f = \gamma_1 + i \gamma_2$. The presence of $2n$ well-separated Majorana bound states thus allows for the construction of $n$ ordinary fermions, producing (ideally) a manifold of $2^n$ degenerate states. Braiding Majorana fermions around one another produces not just a phase factor, as in the case of conventional bosons or fermions, but rather transforms the state nontrivially inside of this degenerate manifold: their exchange statistics is non-Abelian\cite{ReadGreen,Ivanov}. Quantum information encoded in this subspace can thus be manipulated by such braiding operations, providing a method for decoherence-free topological quantum computation\cite{Kitaev,TQCreview}. Majorana fermions are therefore clearly of great fundamental as well as practical interest. At present, there is certainly no dearth of proposals for realizing Majorana fermions. Settings as diverse as fractional quantum Hall systems\cite{ReadGreen} at filling $\nu = 5/2$, strontium ruthenate thin films\cite{SrRu}, cold atomic gases\cite{pwaveColdAtoms,TQCcoldatoms,TopologicalSF,FujimotoColdAtoms}, superfluid He-3\cite{MajoranaHe3}, the surface of a topological insulator\cite{FuKane}, semiconductor heterostructures\cite{Sau}, and non-centrosymmetric superconductors\cite{SatoFujimoto,PatrickProposal} have all been theoretically predicted to host Majorana bound states under suitable conditions. Nevertheless, their unambiguous detection remains an outstanding problem, although there has been recent progress in this direction in quantum Hall systems\cite{Willett1,Willett2}. 
Part of the experimental challenge stems from the fact that stabilizing topological phases supporting Majorana fermions can involve significant engineering obstacles and/or extreme conditions such as ultra-low temperatures, ultra-clean samples, and high magnetic fields in the case of the $\nu = 5/2$ fractional quantum Hall effect. The proposal by Fu and Kane\cite{FuKane} noted above for realizing a topological superconducting state by depositing a conventional $s$-wave superconductor on a three-dimensional topological insulator surface appears quite promising in this regard. This setting should in principle allow for a rather robust topological superconducting phase to be created without such extreme conditions, although experiments demonstrating this await development. Moreover, Fu and Kane proposed methods in such a setup for creating and manipulating Majorana fermions for quantum computation. The more recent solid state proposals noted above involving semiconductor heterostructures\cite{Sau} and non-centrosymmetric superconductors\cite{SatoFujimoto,PatrickProposal} utilize clever ways of creating an environment similar to the surface of a topological insulator (\emph{i.e.}, eliminating a sort of fermion doubling problem\cite{Shinsei}) in order to generate topological phases supporting Majorana modes. The present work is inspired by the semiconductor proposal of Sau \emph{et al}.\ in Ref.\ \onlinecite{Sau}, so we briefly elaborate on it here. These authors demonstrated that a semiconductor with Rashba spin-orbit coupling, sandwiched between an $s$-wave superconductor and a ferromagnetic insulator as in Fig.\ \ref{QuantumWellSetups}(a), can realize a topological superconducting phase supporting Majorana modes. The basic principle here is that the ferromagnetic insulator produces a Zeeman field perpendicular to the semiconductor, which separates the two spin-orbit-split bands by a finite gap. 
If the Fermi level lies inside of this gap, a weak superconducting pair field generated via the proximity effect drives the semiconductor into a topological superconducting state that smoothly connects to a spinless $p_x+ip_y$ superconductor. Sau \emph{et al}.\ also discussed how such a device can be exploited along the lines of the Fu-Kane proposal for topological quantum computation. The remarkable aspect of this proposal is the conventional ingredients it employs---semiconductors benefit from many more decades of study compared to the relatively nascent topological insulators---making this a promising experimental direction. The main question addressed in this paper is largely a practical one---can this proposed setup be further simplified and made more tunable, thus (hopefully) streamlining the route towards experimental realization of a topological superconducting phase in semiconductor devices? To this end, there are two obvious modifications that one might try. First, replacing the ferromagnetic insulator with an external magnetic field applied perpendicular to the semiconductor certainly simplifies the setup, but unfortunately induces undesirable orbital effects which change the problem significantly and likely spoil the topological phase. The second obvious modification, then, would be applying an in-plane magnetic field. While this sidesteps the problem of unwanted orbital effects, unfortunately in-plane fields do not open a gap between the spin-orbit-split bands in a Rashba-coupled semiconductor. Physically, opening a gap requires a component of the Zeeman field perpendicular to the plane in which the electron spins orient; with Rashba coupling this always coincides with the semiconductor plane. (See Sec.\ III for a more in-depth discussion.) 
Our main result is that a topological superconducting state supporting Majorana fermions \emph{can} be generated by in-plane magnetic fields if one alternatively considers a semiconductor grown along the (110) direction with both Rashba \emph{and} Dresselhaus coupling [see Fig.\ \ref{QuantumWellSetups}(b)]. What makes this possible in (110) semiconductors is the form of Dresselhaus coupling specific to this growth direction, which favors aligning the spins normal to the semiconductor plane. When Rashba coupling is also present, the two spin-orbit terms conspire to rotate the plane in which the spins orient away from the semiconductor plane. In-plane magnetic fields then \emph{do} open a finite gap between the bands. Under realistic conditions which we detail below, the proximity effect can then drive the system into a topological superconducting phase supporting Majorana modes, just as in the proposal from Ref.\ \onlinecite{Sau}. This alternative setup offers a number of practical advantages. It eliminates the need for a good interface with a ferromagnetic insulator (or for magnetic impurities intrinsic to the semiconductor\cite{Sau}), reducing considerably the experimental challenge of fabricating the device, while still largely avoiding undesired orbital effects. Furthermore, explicitly controlling the Zeeman field in the semiconductor is clearly advantageous, enabling one to readily sweep across a quantum phase transition into the topological superconducting state and thus unambiguously identify the topological phase experimentally. We propose that InSb quantum wells, which enjoy sizable Dresselhaus coupling and a large $g$-factor, may provide an ideal candidate for the semiconductor in such a device.
While not without experimental challenges (discussed in some detail below), we contend that this setup provides perhaps the simplest, most tunable semiconductor realization of a topological superconducting phase, so we hope that it will be pursued experimentally. The rest of the paper is organized as follows. In Sec.\ II we provide a pedagogical overview of the proposal from Ref.\ \onlinecite{Sau}, highlighting the connection to a spinless $p_x+ip_y$ superconductor, which makes the existence of Majorana modes in this setup more intuitively apparent. We also discuss in some detail the stability of the topological superconducting phase as well as several experimental considerations. In Sec.\ III we introduce our proposal for (110) semiconductor quantum wells. We show that the (110) quantum well Hamiltonian maps onto the Rashba-only model considered by Sau \emph{et al}.\ in an (unphysical) limit, and explore the stability of the topological superconductor here in the realistic parameter regime. Experimental issues related to this proposal are also addressed. Finally, we summarize the results and discuss several future directions in Sec.\ IV. \begin{figure} \centering{ \includegraphics[width=3.2in]{QuantumWellSetups.eps} \caption{(a) Setup proposed by Sau \emph{et al}.\cite{Sau} for realizing a topological superconducting phase supporting Majorana fermions in a semiconductor quantum well with Rashba spin-orbit coupling. The $s$-wave superconductor generates the pairing field in the well via the proximity effect, while the ferromagnetic insulator induces the Zeeman field required to drive the topological phase. As noted by Sau \emph{et al.}, the Zeeman field can alternatively be generated by employing a magnetic semiconductor quantum well. (b) Alternative setup proposed here. We show that a (110)-grown quantum well with both Rashba and Dresselhaus spin-orbit coupling can be driven into a topological superconducting state by applying an in-plane magnetic field. 
The advantages of this setup are that the Zeeman field is tunable, orbital effects are expected to be minimal, and the device is simpler, requiring neither a good interface with a ferromagnetic insulator nor the presence of magnetic impurities which provide an additional disorder source. } \label{QuantumWellSetups}} \end{figure} \section{Overview of Sau-Lutchyn-Tewari-Das Sarma proposal} To set the stage for our proposal, we begin by pedagogically reviewing the recent idea by Sau \emph{et al}.\ for creating Majorana fermions in a ferromagnetic insulator/semiconductor/s-wave superconductor hybrid system \cite{Sau} [see Fig.\ \ref{QuantumWellSetups}(a)]. These authors originally proved the existence of Majorana modes in this setup by explicitly solving the Bogoliubov-de Gennes Hamiltonian with a vortex in the superconducting order parameter. An index theorem supporting this result was subsequently proven\cite{index}. We will alternatively follow the approach employed in Ref.\ \onlinecite{FujimotoTSC} (see also Ref.\ \onlinecite{TopologicalSF}), and highlight the connection between the semiconductor Hamiltonian (in a certain limit) and a spinless $p_x+ip_y$ superconductor. The advantage of this perspective is that the topological character of the proximity-induced superconducting state of interest becomes immediately apparent, along with the existence of a Majorana bound state at vortex cores. In this way, one circumvents the cumbersome problem of solving the Bogoliubov-de Gennes equation for these modes. The stability of the superconducting phase, which we will also discuss in some detail below, becomes more intuitive from this viewpoint as well. \subsection{Connection to a spinless $p_x + ip_y$ superconductor} Consider first an isolated zincblende semiconductor quantum well, grown along the (100) direction for concreteness. 
Assuming layer (but not bulk) inversion asymmetry and retaining terms up to quadratic order in momentum\cite{SpinOrbitHigherOrder}, the relevant Hamiltonian reads \begin{equation} H_0 = \int d^2{\bf r}\psi^\dagger \left[ -\frac{\nabla^2}{2m}-\mu - i\alpha (\sigma^x \partial_y-\sigma^y \partial_x) \right] \psi, \label{H0} \end{equation} where $m$ is the effective mass, $\mu$ is the chemical potential, $\alpha$ is the Rashba spin-orbit\cite{Rashba} coupling strength, and $\sigma^j$ are Pauli matrices that act on the spin degree of freedom in $\psi$. (We set $\hbar = 1$ throughout.) The Rashba terms above can be viewed as an effective magnetic field that aligns the spins in the quantum well plane, normal to their momentum. Equation (\ref{H0}) admits two spin-orbit-split bands that appear `Dirac-like' at sufficiently small momenta where the $\nabla^2/2m$ kinetic term can be neglected. The emergence of Majorana modes can ultimately be traced to this simple fact. Coupling the semiconductor to a ferromagnetic insulator whose magnetization points perpendicular to the 2D layer is assumed to induce a Zeeman interaction \begin{equation} H_Z = \int d^2{\bf r} \psi^\dagger[V_z \sigma^z]\psi \label{HZ} \end{equation} but negligible orbital coupling. Orbital effects will presumably be unimportant in the case where, for instance, $V_z$ arises primarily from exchange interactions rather than direct coupling of the spins to the field emanating from the ferromagnetic moments. With this coupling, the spin-orbit-split bands no longer cross, and resemble a gapped Dirac point at small momenta. Crucially, when $|\mu | < |V_z|$ the electrons in the quantum well then occupy only the lower band and exhibit a single Fermi surface. We focus on this regime for the remainder of this section. 
What differentiates the present problem from a conventional single band (without spin-orbit coupling) is the structure of the wavefunctions inherited from the Dirac-like physics encoded in $H_0$ at small momenta. To see this, it is illuminating to first diagonalize $H_0 + H_Z$ by writing \begin{equation} \psi({\bf k}) = \phi_-({\bf k}) \psi_-({\bf k}) + \phi_+({\bf k})\psi_+({\bf k}), \end{equation} where $\psi_\pm$ annihilate states in the upper/lower bands and $\phi_\pm$ are the corresponding normalized wavefunctions, \begin{eqnarray} \phi_+({\bf k}) &=& \left( \begin{array}{l} A_\uparrow(k) \\ A_\downarrow(k) \frac{i k_x-k_y}{k} \end{array} \right) \label{phiplus} \\ \phi_-({\bf k}) &=& \left( \begin{array}{l} B_\uparrow(k)\frac{i k_x+k_y}{k} \\ B_\downarrow(k) \end{array} \right). \label{phiminus} \end{eqnarray} The expressions for $A_{\uparrow,\downarrow}$ and $B_{\uparrow,\downarrow}$ are not particularly enlightening, but for later we note the following useful combinations: \begin{eqnarray} f_p(k) &\equiv& A_\uparrow A_\downarrow = B_\uparrow B_\downarrow = \frac{-\alpha k}{2\sqrt{V_z^2 + \alpha^2 k^2}} \\ f_s(k) &\equiv& A_\uparrow B_\downarrow -B_\uparrow A_\downarrow = \frac{V_z}{\sqrt{V_z^2 + \alpha^2 k^2}}. \end{eqnarray} In terms of $\psi_\pm$, the Hamiltonian becomes \begin{equation} H_0 + H_Z = \int d^2{\bf k}[\epsilon_+(k)\psi^\dagger_+({\bf k})\psi_+({\bf k}) + \epsilon_-(k)\psi_-^\dagger({\bf k})\psi_-({\bf k})], \end{equation} with energies \begin{eqnarray} \epsilon_\pm(k) = \frac{k^2}{2m}-\mu \pm \sqrt{V_z^2 + \alpha^2 k^2}. \end{eqnarray} Now, when the semiconductor additionally comes into contact with an s-wave superconductor, a pairing term will be generated via the proximity effect, so that the full Hamiltonian describing the quantum well becomes \begin{equation} H = H_0 + H_Z + H_{SC} \end{equation} with \begin{equation} H_{SC} = \int d^2{\bf r}[\Delta \psi^\dagger_\uparrow \psi^\dagger_\downarrow + h.c.]. 
\label{Hsc} \end{equation} (We note that $H$ is a continuum version of the lattice model discussed in Ref.\ \onlinecite{FujimotoColdAtoms} in the context of topological superfluids of cold fermionic atoms.) Rewriting $H_{SC}$ in terms of $\psi_\pm$ and using the wavefunctions in Eqs.\ (\ref{phiplus}) and (\ref{phiminus}) yields \begin{eqnarray} H_{SC} &=& \int d^2{\bf k}\bigg{[}\Delta_{+-}(k) \psi^\dagger_+({\bf k})\psi^\dagger_-(-{\bf k}) \nonumber \\ &+& \Delta_{--}({\bf k})\psi^\dagger_-({\bf k})\psi^\dagger_-(-{\bf k}) \nonumber \\ &+& \Delta_{++}({\bf k})\psi^\dagger_+({\bf k})\psi^\dagger_+(-{\bf k}) + h.c. \bigg{]}, \label{Hsc2} \end{eqnarray} with \begin{eqnarray} \Delta_{+-}(k) &=& f_s(k) \Delta \\ \Delta_{++}({\bf k}) &=& f_p(k)\left(\frac{k_y + i k_x}{k}\right) \Delta \\ \Delta_{--}({\bf k}) &=& f_p(k)\left(\frac{k_y - i k_x}{k}\right) \Delta . \end{eqnarray} The proximity effect thus generates not only interband s-wave pairing encoded in the first term, but also \emph{intra}band $p_x\pm i p_y$ pairing with opposite chirality for the upper/lower bands. This is exactly analogous to spin-orbit-coupled superconductors, where the pairing consists of spin-singlet and spin-triplet components due to non-conservation of spin\cite{SpinOrbitSC}. We can now immediately understand the appearance of a topological superconducting phase in this system. Consider $\Delta$ much smaller than the spacing $|V_z-\mu|$ to the upper band. In this case the upper band plays essentially no role and can simply be projected away by sending $\psi_+\rightarrow 0$ above. The problem then maps onto that of spinless fermions with $p_x+ip_y$ pairing, which is the canonical example of a topological superconductor supporting a single Majorana bound state at vortex cores\cite{ReadGreen,Ivanov}. (The dispersion $\epsilon_-(k)$ is, however, somewhat unconventional. 
But one can easily verify that the dispersion can be smoothly deformed into a conventional $k^2/2m-\mu$ form, with $\mu>0$, without closing a gap.) Thus, in this limit introducing a vortex in the order parameter $\Delta$ must produce a single Majorana bound state in this semiconductor context as well. We emphasize that in the more general case where $\Delta$ is not negligible compared to $|V_z-\mu|$, the mapping to a spinless $p_x+ip_y$ superconductor is no longer legitimate. Nevertheless, since the presence of a Majorana fermion has a topological origin, it cannot disappear as long as the bulk excitation gap remains finite. We will make extensive use of this fact in the remainder of the paper. Here we simply observe that the topological superconducting state and Majorana modes will persist even when one incorporates both bands---which we do hereafter---provided the pairing $\Delta$ is sufficiently small that the gap does not close, as found explicitly by studying the full unprojected Hamiltonian with a vortex in Ref.\ \onlinecite{Sau}. It is also important to stress that when $\Delta$ greatly exceeds $V_z$, it is the Zeeman field that essentially plays no role. A topological superconducting state is no longer expected in this limit, since one is not present when $V_z = 0$. Thus as $\Delta$ increases, the system undergoes a quantum phase transition from a topological to an ordinary superconducting state, as discussed by Sau \emph{et al}.\cite{Sau} and Sato \emph{et al}.\cite{FujimotoColdAtoms} in the cold-atoms context. The transition is driven by the onset of interband $s$-wave pairing near zero momentum. \subsection{Stability of the topological superconducting phase} The stability of the topological superconducting state was briefly discussed in Ref.\ \onlinecite{Sau}, as well as Ref.\ \onlinecite{FujimotoColdAtoms} in the cold-atoms setting.
Here we address this issue in more detail, with the aim of providing further intuition as well as guidance for experiments. Given the competition between ordinary and topological superconducting order inherent in the problem, it is useful to explore, for instance, how the chemical potential, spin-orbit strength, proximity-induced pair field, and Zeeman field should be chosen so as to maximize the bulk excitation gap in the topological phase of interest. Furthermore, what limits the size of this gap, and how does it decay as these parameters are tuned away from the point of maximum stability? And how are other important factors such as the density impacted by the choice of these parameters? Solving the full Bogoliubov-de Gennes Hamiltonian assuming uniform $\Delta$ yields energies that satisfy \begin{eqnarray} E_\pm^2 &=& 4|\Delta_{++}|^2 +\Delta_{+-}^2 + \frac{\epsilon_+^2 + \epsilon_-^2}{2} \nonumber \\ &\pm& |\epsilon_+-\epsilon_-|\sqrt{\Delta_{+-}^2 + \frac{(\epsilon_+ + \epsilon_-)^2}{4}}. \end{eqnarray} We are interested in the lower branch $E_-(k)$, in particular its value at zero momentum and near the Fermi surface. The minimum of these determines the bulk superconducting gap, $E_g \equiv \Delta G(\frac{\mu}{V_z},\frac{m\alpha^2}{V_z},\frac{\Delta}{V_z})$. To make the topological superconducting state as robust as possible, one clearly would like to maximize the $p$-wave pairing at the Fermi momentum, \begin{equation} k_F = \sqrt{2m\left[m\alpha^2 + \mu + \sqrt{V_z^2 + m\alpha^2(m\alpha^2 + 2 \mu)}\right]}. \label{kF} \end{equation} Doing so requires $m \alpha^2/V_z \gg 1$. In this limit we have $|\Delta_{++}(k_F)| \sim \Delta/2$ while the $s$-wave pairing at the Fermi momentum is negligible, $\Delta_{+-}(k_F) \sim 0$. We thus obtain \begin{equation} E_-(k_F) \sim \Delta, \label{EkF} \end{equation} which increases monotonically with $\Delta$. At zero momentum, however, we have \begin{equation} E_-(k = 0) = |V_z-\sqrt{\Delta^2+\mu^2}|. 
\label{Ek0} \end{equation} This initially \emph{decreases} with $\Delta$ as interband $s$-wave pairing begins to set in, and vanishes when $\Delta = \sqrt{V_z^2-\mu^2}$ signaling the destruction of the topological superconducting state\cite{Sau,FujimotoColdAtoms}. It follows that for a given $V_z$, the topological superconductor is most robust when $m \alpha^2/V_z \gg 1$, $\mu = 0$, and $\Delta = V_z/2$; here the bulk excitation gap is maximized and given by $E_g = V_z/2$. \begin{figure} \centering{ \subfigure{ \includegraphics[width=3.5in]{GapFig.eps} \label{fig:subfig1} } \subfigure{ \includegraphics[width=3.5in]{GapFigB.eps} \label{fig:subfig2}} \caption{Excitation gap $E_g$ normalized by $\Delta$ in the proximity-induced superconducting state of a Rashba-coupled quantum well adjacent to a ferromagnetic insulator. In (a), the chemical potential is chosen to be $\mu = 0$. For $\Delta/V_z <1$ the system realizes a topological superconducting phase supporting a single Majorana mode at a vortex core, while for $\Delta/V_z >1$ an ordinary superconducting state emerges. In the topological phase, the gap is maximized when $\Delta/V_z = 1/2$ and $m\alpha^2/V_z \gg 1$, where it is given by $E_g = V_z/2$. In contrast, the gap vanishes as $m\alpha^2/V_z\rightarrow 0$ because the effective $p$-wave pair field at the Fermi momentum vanishes in this limit. In (b), we have taken $m\alpha^2/V_z = 0.1$ to illustrate that $V_z$ can exceed $m\alpha^2$ by more than an order of magnitude and still yield a sizable gap in the topological superconducting phase. } \label{GapFig}} \end{figure} As will become clear below, for practical purposes it is also useful to explore the limit where $V_z$ is much \emph{larger} than both $\Delta$ and $m \alpha^2$. Here the gap is determined solely by the $p$-wave pair field near the Fermi surface [except for $\mu$ very close to $V_z$, where it follows from Eq.\ (\ref{Ek0})]. 
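These stability statements can be checked numerically. The following Python sketch (our own check, in units where $V_z = m = 1$; the momentum grid is an arbitrary choice) minimizes the lower Bogoliubov branch $E_-(k)$ over momentum:

```python
import numpy as np

def lower_branch(k, m, alpha, mu, Vz, Delta):
    """Lower Bogoliubov branch E_-(k), built from the band energies
    epsilon_{+/-}(k) and the effective pair fields Delta_{+-}, Delta_{++}."""
    root = np.sqrt(Vz**2 + (alpha * k)**2)
    eps_p = k**2 / (2 * m) - mu + root
    eps_m = k**2 / (2 * m) - mu - root
    D_pm = (Vz / root) * Delta                       # interband s-wave, f_s * Delta
    D_pp = np.abs(-alpha * k / (2 * root)) * Delta   # intraband p-wave, |f_p| * Delta
    E2 = (4 * D_pp**2 + D_pm**2 + (eps_p**2 + eps_m**2) / 2
          - np.abs(eps_p - eps_m) * np.sqrt(D_pm**2 + (eps_p + eps_m)**2 / 4))
    return np.sqrt(np.maximum(E2, 0.0))

# Units: V_z = 1, m = 1; strong spin-orbit m*alpha^2/V_z = 10,
# evaluated at the optimal point mu = 0, Delta = V_z/2.
m, Vz, mu, Delta = 1.0, 1.0, 0.0, 0.5
alpha = np.sqrt(10.0)
k = np.linspace(0.0, 15.0, 30001)
Eg = lower_branch(k, m, alpha, mu, Vz, Delta).min()   # close to V_z/2

# The k = 0 gap |V_z - sqrt(Delta^2 + mu^2)| closes at Delta = sqrt(V_z^2 - mu^2).
E0 = lower_branch(np.array([0.0]), m, alpha, mu, Vz, 1.0)[0]
print(Eg, E0)
```

The minimum indeed comes out at $E_g\approx V_z/2$ at the optimal point, and the zero-momentum gap closes at $\Delta=\sqrt{V_z^2-\mu^2}$, marking the transition out of the topological phase.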
This pairing will certainly be reduced compared to the $m \alpha^2/V_z \gg 1$ limit, because the lower band behaves like a conventional quadratically dispersing band in the limit $m\alpha^2/V_z \rightarrow 0$. To leading order in $m\alpha^2/V_z$ and $\Delta/V_z$, the gap is given by \begin{equation} E_g \approx \sqrt{\frac{2m\alpha^2}{V_z}\left(1+\frac{\mu}{V_z}\right)} \Delta. \end{equation} There are two noteworthy features of this expression. First, although the gap indeed vanishes as $m\alpha^2/V_z \rightarrow 0$, it does so very slowly; $V_z$ can exceed $m \alpha^2$ by more than an order of magnitude and still yield a gap that is a sizable fraction of the bare proximity-induced $\Delta$. Second, in this limit the gap can be enhanced by raising $\mu$ near the bottom of the upper band. These results are graphically summarized in Fig.\ \ref{GapFig}, which displays the gap $E_g$ normalized by $\Delta$. Figure \ref{GapFig}(a) assumes $\mu = 0$ and illustrates the dependence on $m\alpha^2/V_z$ and $\Delta/V_z$; Fig.\ \ref{GapFig}(b) assumes $m\alpha^2/V_z = 0.1$ and illustrates the dependence on $\Delta/V_z$ and $\mu/V_z$. Note that despite the relatively small value of $m \alpha^2/V_z$ chosen here, the gap remains a sizable fraction of $\Delta$ over much of the topological superconductor regime. \subsection{Experimental considerations} The quantity $m\alpha^2$ comprises a crucial energy scale regarding experimental design. Ideally, this should be as large as possible for at least two reasons. First, the scale of $m\alpha^2$ limits how large a Zeeman splitting $V_z$ is desirable. If $m\alpha^2/V_z$ becomes too small, then as discussed above the effective $p$-wave pairing at the Fermi surface will eventually be strongly suppressed compared to $\Delta$, along with the bulk excitation gap. At the same time, having a large $V_z$ is advantageous in that the topological superconductor can then exist over a broad range of densities. 
This leads us to the second reason why large $m\alpha^2$ is desired: this quantity strongly impacts the density in the topological superconductor regime, \begin{eqnarray} n = \frac{(m\alpha)^2}{2\pi}\left[1+\frac{\mu}{m\alpha^2}+\sqrt{1+\left(\frac{V_z}{m\alpha^2}\right)^2+\frac{2\mu}{m\alpha^2}}\right]. \label{n} \end{eqnarray} One should keep in mind that if the density is too small, disorder may dominate the physics\cite{LowDensity2DEG}.\footnote{Lowering the density does, however, lead to a smaller Fermi energy and thus a larger `mini-gap' associated with the vortex-core bound states. Large mini-gaps are ultimately desirable for topological quantum computation. In this paper we are concerned with a more modest issue---namely, simply finding a stable topological superconducting phase in the first place. With this more limited goal in mind, large densities are clearly desired to reduce disorder effects. } Experimental values for the Rashba coupling $\alpha$ depend strongly on the properties of the quantum well under consideration, and, importantly, are tunable in gated systems\cite{RashbaTuning} (see also Ref.\ \onlinecite{RashbaTuningB}). In GaAs quantum wells, for instance, $\alpha \approx 0.005$eV\AA\cite{GaAsSpinOrbit1} and $\alpha \approx 0.0015$eV\AA\cite{GaAsSpinOrbit2} have been measured. Using the effective mass $m = 0.067m_e$ ($m_e$ is the bare electron mass), these correspond to very small energy scales $m\alpha^2 \sim 3$mK for the former and a scale an order of magnitude smaller for the latter. In the limit $m\alpha^2/V_z \ll 1$, Eq.\ (\ref{n}) yields a density for the topological superconductor regime of $n \sim 10^7$cm$^{-2}$ and $\sim 10^6$cm$^{-2}$, respectively. Disorder likely dominates at such low densities. 
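The energy scales quoted above follow from a one-line unit conversion: restoring $\hbar$, the spin-orbit scale is $m\alpha^2/\hbar^2$. A small Python sketch of the conversion (our own illustration; constants rounded):

```python
HBAR_C = 1973.27   # hbar * c in eV * Angstrom
ME_C2 = 0.511e6    # electron rest energy m_e * c^2 in eV
KB = 8.617e-5      # Boltzmann constant in eV / K

def rashba_scale_K(alpha_eVA, m_over_me):
    """Spin-orbit energy scale m * alpha^2 / hbar^2, returned in Kelvin,
    for alpha in eV*Angstrom and the effective mass in units of m_e."""
    return m_over_me * ME_C2 * alpha_eVA**2 / HBAR_C**2 / KB

print(rashba_scale_K(0.005, 0.067))  # GaAs: a few mK
print(rashba_scale_K(0.06, 0.04))    # InAs: ~0.2 K
```

This reproduces the $\sim 3$\,mK scale for GaAs and the $\sim 0.2$\,K scale for InAs quoted in the text.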
Employing Zeeman fields $V_z$ which are much larger than $m\alpha^2$ can enhance these densities by one or two orders of magnitude without too dramatically reducing the gap (the density increases much faster with $V_z$ than the gap decreases), though this may still be insufficient to overcome disorder effects. Due to their stronger spin-orbit coupling, quantum wells featuring heavier elements such as In and Sb appear more promising. A substantially larger $\alpha \approx 0.06$eV\AA~ has been measured\cite{InAsSpinOrbit} in InAs quantum wells with effective mass $m \approx 0.04m_e$, yielding a much greater energy scale $m \alpha^2 \sim 0.2$K. The corresponding density in the $m\alpha^2/V_z \ll 1$ limit is now $n \sim 10^8$cm$^{-2}$. While still small, a large Zeeman field corresponding to $m\alpha^2/V_z = 0.01$ raises the density to a more reasonable value of $n \sim 10^{10}$cm$^{-2}$. As another example, the Rashba coupling in InGaAs quantum wells with $m \approx 0.05m_e$ was tuned over the range $\alpha \sim 0.05-0.1$eV\AA~ with a gate\cite{RashbaTuning}, resulting in a range of energy scales $m\alpha^2 \sim 0.2-0.8$K. The densities here are even more promising, with $n \sim 10^8-10^9$cm$^{-2}$ in the limit $m\alpha^2/V_z \ll 1$; again, these can be enhanced significantly by considering $V_z$ large compared to $m\alpha^2$. To conclude this section, we comment briefly on the setups proposed by Sau \emph{et al}., wherein the Zeeman field arises either from a proximate ferromagnetic insulator or magnetic impurities in the semiconductor. In principle, the Rashba coupling and chemical potential should be separately tunable in either case by applying a gate voltage and adjusting the Fermi level in the $s$-wave superconductor. The strength of the Zeeman field, however, will largely be dictated by the choice of materials, doping, geometry, \emph{etc}. 
Unless the value of $m\alpha^2$ can be greatly enhanced compared to the values quoted above, it may be advantageous to consider Zeeman fields which are much larger than this energy scale, in order to raise the density at the expense of suppressing the bulk excitation gap somewhat. A good interface between the ferromagnetic insulator and the quantum well will be necessary to achieve a large $V_z$, if this setup is chosen. Allowing $V_z$ to arise from magnetic impurities eliminates this engineering challenge, but has the drawback that the dopants provide another disorder source which can deleteriously affect the device's mobility\cite{MagneticSCdisorder}. Nevertheless, since semiconductor technology is so well advanced, it is certainly worth pursuing topological phases in this setting, especially if alternative setups minimizing these challenges can be found. Providing one such alternative is the goal of the next section. \section{Proposed setup for (110) quantum wells} We now ask whether one can make the setup proposed by Sau \emph{et al}.\ simpler and more tunable by replacing the ferromagnetic insulator (or magnetic impurities embedded in the semiconductor) responsible for the Zeeman field with an experimentally controllable parameter. As mentioned in the introduction, the most naive possible way to achieve this would be to do away with the magnetic insulator (or magnetic impurities) and instead simply apply an external magnetic field perpendicular to the semiconductor. In fact, this possibility was pursued earlier in Refs.\ \onlinecite{FujimotoTSC} and \onlinecite{SatoFujimoto}. It is far from obvious, however, that the Zeeman field dominates over orbital effects here, which was a key ingredient in the proposal by Sau \emph{et al}. Thus, these references focused on the regime where the Zeeman field was smaller than $\Delta$, which is insufficient to drive the topological superconducting phase. 
(We note, however, that a proximity-induced spin-triplet order parameter, if large enough, was found to stabilize a topological state\cite{SatoFujimoto}.) An obvious alternative would be applying a parallel magnetic field along the quantum well plane, since this (largely) rids the system of unwanted orbital effects. This too is insufficient, since replacing $V_z\sigma^z$ with $V_y \sigma^y$ in Eq.\ (\ref{HZ}) does not gap out the bands at $k = 0$, but only shifts the crossing to finite momentum. \subsection{Topological superconducting phase in a (110) quantum well} We will show that if one alternatively considers a zincblende quantum well grown along the (110) direction, a topological superconducting state \emph{can} be driven by application of a parallel magnetic field. What makes this possible in (110) quantum wells is their different symmetry compared to (100) quantum wells. Assuming layer inversion symmetry is preserved, the most general Hamiltonian for the well up to quadratic order in momentum\cite{SpinOrbitHigherOrder} is \begin{equation} \mathcal{H}_0 = \int d^2{\bf r}\psi^\dagger\left[-\left(\frac{\partial_x^2}{2m_x} + \frac{\partial_y^2}{2m_y}\right)-\mu -i \beta \partial_x \sigma^z\right]\psi. \end{equation} Here we allow for anisotropic effective masses $m_{x,y}$ due to a lack of in-plane rotation symmetry, and $\beta$ is the Dresselhaus spin-orbit\cite{Dresselhaus} coupling strength. Crucially, the Dresselhaus term favors alignment of the spins \emph{normal} to the plane, in contrast to the Rashba coupling in Eq.\ (\ref{H0}), which aligns spins \emph{within} the plane. Although we did not incorporate Dresselhaus terms in the previous section, we note that in a (100) quantum well they, too, favor alignment of spins within the plane.
As an aside, we note that the above Hamiltonian has been of interest in the spintronics community because it preserves the $S^z$ component of spin as a good quantum number, resulting in long lifetimes for spins aligned normal to the quantum well\cite{LongSpinLifetime}. ($\mathcal{H}_0$ also exhibits a `hidden' SU(2) symmetry\cite{PersistentSpinHelix} which furthered interest in this model, but this is not a microscopic symmetry and will play no role here.) We are uninterested in spin lifetimes, however, and wish to explicitly break layer inversion symmetry by imbalancing the quantum well using a gate voltage and/or chemical means. The Hamiltonian for the (110) quantum well then becomes $\mathcal{H}^{(110)} = \mathcal{H}_0 + \mathcal{H}_R$, where \begin{equation} \mathcal{H}_R = \int d^2{\bf r}\psi^\dagger\left[-i(\alpha_x \sigma^x \partial_y-\alpha_y\sigma^y\partial_x)\right]\psi \end{equation} represents the induced Rashba spin-orbit coupling terms up to linear order in momentum. While one would naively expect $\alpha_x = \alpha_y$ here, band structure effects will generically lead to unequal coefficients, again due to lack of rotation symmetry. We can recast the quantum well Hamiltonian into a more useful form by rescaling coordinates so that $\partial_x \rightarrow (m_x/m_y)^{1/4}\partial_x$ and $\partial_y \rightarrow (m_y/m_x)^{1/4}\partial_y$. We then obtain \begin{eqnarray} \mathcal{H}^{(110)} &=& \int d^2{\bf r} \psi^\dagger \bigg{[}-\frac{\nabla^2}{2m_*}-\mu-i\lambda_D \partial_x\sigma^z \nonumber \\ &-& i\lambda_R(\sigma^x \partial_y-\gamma \sigma^y \partial_x)\bigg{]}\psi. \label{H110} \end{eqnarray} The effective mass is $m_* = \sqrt{m_x m_y}$ and the spin-orbit parameters are $\lambda_D = \beta (m_x/m_y)^{1/4}$, $\lambda_R = \alpha_x(m_y/m_x)^{1/4}$, and $\gamma = (\alpha_y/\alpha_x)\sqrt{m_x/m_y}$.
With both Dresselhaus and Rashba terms present, the spins will no longer align normal to the quantum well, but rather lie within the plane perpendicular to the vector $\lambda_D{\hat{\bf y}} + \gamma \lambda_R{\hat{\bf z}}$. Consider for the moment the important special case $\gamma = 0$ and $\lambda_D = \lambda_R$. In this limit, $\mathcal{H}^{(110)}$ becomes essentially \emph{identical} to Eq.\ (\ref{H0}), with the important difference that here the spins point in the $(x,z)$ plane rather than the $(x,y)$ plane. It follows that a field applied along the $y$ direction, \begin{equation} \mathcal{H}_Z = \int d^2{\bf r} \psi^\dagger[V_y \sigma^y]\psi, \end{equation} with $V_y = g\mu_B B_y/2$, then plays \emph{exactly} the same role as the Zeeman term $V_z$ in Sau \emph{et al}.'s proposal\cite{Sau} discussed in the preceding section---the bands no longer cross at zero momentum, and only the lower band is occupied when $|\mu| < |V_y|$. In this regime, when the system comes into contact with an $s$-wave superconductor, the proximity effect generates a topological superconducting state supporting Majorana fermions at vortex cores, provided the induced pairing in the well is not too large\cite{Sau}. The full problem we wish to study, then, corresponds to a (110) quantum well with both Dresselhaus and Rashba coupling, subjected to a parallel magnetic field and contacted to an $s$-wave superconductor. The complete Hamiltonian is \begin{equation} \mathcal{H} = \mathcal{H}^{(110)} + \mathcal{H}_Z + \mathcal{H}_{SC}, \end{equation} with $\mathcal{H}_{SC}$ the same as in Eq.\ (\ref{Hsc}). Of course in a real system $\gamma$ will be non-zero, and likely of order unity, and $\lambda_R$ generally differs from $\lambda_D$. The question we must answer then is how far the topological superconducting phase survives as we increase $\gamma$ from zero and change the ratio $\lambda_R/\lambda_D$ from unity. 
Certainly our proposal will be viable only if this state survives relatively large changes in these parameters. \subsection{Stability of the topological superconducting phase in (110) quantum wells} To begin addressing this issue, it is useful to proceed as in the previous section and express the Hamiltonian in terms of operators $\psi_\pm^\dagger({\bf k})$ which add electrons to the upper/lower bands: \begin{eqnarray} \mathcal{H} &=& \int d^2{\bf k}[\tilde \epsilon_+({\bf k})\psi^\dagger_+({\bf k})\psi_+({\bf k}) + \tilde \epsilon_-({\bf k})\psi_-^\dagger({\bf k})\psi_-({\bf k})] \nonumber \\ &+&\bigg{[}\tilde\Delta_{+-}({\bf k}) \psi^\dagger_+({\bf k})\psi^\dagger_-(-{\bf k}) + \tilde\Delta_{--}({\bf k})\psi^\dagger_-({\bf k})\psi^\dagger_-(-{\bf k}) \nonumber \\ &+& \tilde\Delta_{++}({\bf k})\psi^\dagger_+({\bf k})\psi^\dagger_+(-{\bf k}) + h.c. \bigg{]}. \end{eqnarray} The energies $\tilde \epsilon_\pm$ are given by \begin{eqnarray} \tilde \epsilon_\pm({\bf k}) &=& \frac{k^2}{2m}-\mu \pm \delta \tilde\epsilon({\bf k}) \nonumber \\ \delta \tilde \epsilon({\bf k}) &=& \sqrt{(V_y-\gamma \lambda_R k_x)^2 + (\lambda_D k_x)^2 + (\lambda_R k_y)^2}, \label{epsilontilde} \end{eqnarray} while the interband $s$- and intraband $p$-wave pair fields now satisfy \begin{eqnarray} |\tilde \Delta_{+-}({\bf k})|^2 &=& \frac{\Delta^2}{2}\bigg{[}1-\frac{(\lambda_D^2 + \gamma^2\lambda_R^2)k_x^2 + \lambda_R^2 k_y^2-V_y^2}{\delta\tilde\epsilon({\bf k})\delta \tilde \epsilon(-{\bf k})}\bigg{]} \nonumber \\ |\tilde \Delta_{++}({\bf k})|^2 &=& |\tilde \Delta_{--}({\bf k})|^2 \label{Deltatilde} \\ &=& \frac{\Delta^2}{8}\bigg{[}1+\frac{(\lambda_D^2 + \gamma^2\lambda_R^2)k_x^2 + \lambda_R^2 k_y^2-V_y^2}{\delta\tilde\epsilon({\bf k})\delta \tilde \epsilon(-{\bf k})}\bigg{]}. \nonumber \end{eqnarray} Increasing $\gamma$ from zero to of order unity affects the above pair fields rather weakly. 
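As a sanity check on Eq.\ (\ref{epsilontilde}), one can diagonalize the $2\times2$ single-particle Hamiltonian obtained from $\mathcal{H}^{(110)} + \mathcal{H}_Z$ in momentum space and compare with the closed form; the sketch below does this numerically (the parameter values are arbitrary illustrative choices, in units with $m = V_y = 1$).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Illustrative parameter values (assumptions)
m, mu, lD, lR, gam, Vy = 1.0, 0.3, 1.0, 0.7, 0.8, 1.0

def h(kx, ky):
    """Single-particle Hamiltonian of H^(110) + H_Z in momentum space."""
    xi = (kx**2 + ky**2) / (2 * m) - mu
    return xi * np.eye(2) + lD * kx * sz + lR * (ky * sx - gam * kx * sy) + Vy * sy

def eps_analytic(kx, ky):
    """Band energies from Eq. (epsilontilde)."""
    xi = (kx**2 + ky**2) / (2 * m) - mu
    de = np.sqrt((Vy - gam * lR * kx)**2 + (lD * kx)**2 + (lR * ky)**2)
    return np.array([xi - de, xi + de])

for kx, ky in [(0.0, 0.0), (0.9, -0.4), (-1.3, 0.6)]:
    assert np.allclose(np.linalg.eigvalsh(h(kx, ky)), eps_analytic(kx, ky))
print("Eq. (epsilontilde) reproduced by direct diagonalization")
```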
The dominant effect of $\gamma$, which can be seen from Eq.\ (\ref{epsilontilde}), is to lift the $k_x\rightarrow -k_x$ symmetry of the $\Delta = 0$ bands. Physically, this symmetry breaking arises because when $\gamma \neq 0$ the spins lie within a plane that is not perpendicular to the magnetic field. This, in turn, suppresses superconductivity since states with ${\bf k}$ and $-{\bf k}$ will generally have different energy. While in this case the Bogoliubov-de Gennes equation no longer admits a simple analytic solution, one can numerically compute the bulk energy gap for the uniform superconducting state, $\mathcal{E}_g \equiv \Delta \mathcal{G}(\frac{\mu}{V_y},\frac{m\lambda_D^2}{V_y},\frac{\Delta}{V_y},\frac{\lambda_R}{\lambda_D},\gamma)$, to explore the stability of the topological superconducting phase. Consider first the illustrative case with $\mu = 0$, $m \lambda_D^2/V_y = 2$, and $\Delta/V_y = 0.66$. The corresponding gap as a function of $\lambda_R/\lambda_D$ and $\gamma$ appears in Fig.\ \ref{GapFig110}(a). At $\lambda_R/\lambda_D=1$ and $\gamma = 0$, where our proposal maps onto that of Sau \emph{et al}., the gap is $\mathcal{E}_g \approx 0.52 \Delta$, somewhat reduced from its maximum value since we have taken $\Delta/V_y > 1/2$. Remarkably, as the figure demonstrates, \emph{this gap persists unaltered even beyond $\gamma = 1$, provided the scale of Rashba coupling $\lambda_R/\lambda_D$ is suitably reduced}. Throughout this region, the lowest-energy excitation is created at zero momentum, where the energy gap is simply $\mathcal{E}_g = V_y-\Delta$. This clearly demonstrates the robustness of the topological superconducting state well away from the Rashba-only model considered by Sau \emph{et al}., and supports the feasibility of our modified proposal in (110) quantum wells.
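The quoted value $\mathcal{E}_g = V_y - \Delta \approx 0.52\Delta$ can be checked by brute force: assemble the $4\times4$ Bogoliubov-de Gennes matrix for the uniform state and minimize the lowest positive eigenvalue over a momentum grid. The sketch below is our own illustration (not the paper's code); the grid extent and resolution are assumptions chosen to cover the Fermi surface.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2)

# Parameters at the Sau et al. point of Fig. (a): mu = 0, m lD^2/Vy = 2,
# Delta/Vy = 0.66, gamma = 0, lR = lD (units with Vy = lD = 1)
m, mu, lD, lR, gam, Vy, Delta = 2.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.66

def h(kx, ky):
    xi = (kx**2 + ky**2) / (2 * m) - mu
    return xi * id2 + lD * kx * sz + lR * (ky * sx - gam * kx * sy) + Vy * sy

def bdg(kx, ky):
    """4x4 Bogoliubov-de Gennes matrix with s-wave pairing Delta."""
    pair = Delta * 1j * sy
    return np.block([[h(kx, ky), pair],
                     [pair.conj().T, -h(-kx, -ky).conj()]])

ks = np.linspace(-6.0, 6.0, 241)   # assumed grid, wide enough for the Fermi surface
gap = min(np.abs(np.linalg.eigvalsh(bdg(kx, ky))).min()
          for kx in ks for ky in ks)
print(gap)   # ~0.34 = Vy - Delta, i.e. ~0.52 Delta, attained at k = 0
```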
\begin{figure} \centering \subfigure{ \includegraphics[width=3.5in]{GapFig110.eps} \label{fig:110subfig1} } \subfigure{ \includegraphics[width=3.5in]{GapFig110B.eps} \label{fig:110subfig2} } \subfigure{ \includegraphics[width=3.5in]{GapFig110C.eps} \label{fig:110subfig3} } \caption{Excitation gap $\mathcal{E}_g$ normalized by $\Delta$ in the proximity-induced superconducting state of a (110) quantum well, with both Rashba and Dresselhaus spin-orbit coupling, in a parallel magnetic field. In (a) we set $\mu = 0$, $m\lambda_D^2/V_y = 2$, and $\Delta/V_y = 0.66$, and illustrate the dependence of the gap on the Rashba coupling anisotropy $\gamma$ as well as $\lambda_R/\lambda_D$. When $\gamma = 0$ and $\lambda_R/\lambda_D = 1$, the problem maps onto the Rashba-only model considered by Sau \emph{et al}.\cite{Sau} Remarkably, the gap survives unaltered here even in the physically relevant case with $\gamma$ of order one, provided the Rashba coupling is reduced. In (b) and (c), we focus on the realistic case with $\gamma = 1$ to illustrate the stability of the topological phase in more detail. We take $\mu = 0$ and $m\lambda_D^2/V_y = 2$ in (b), and allow $\Delta/V_y$ as well as $\lambda_R/\lambda_D$ to vary. In (c), we fix $\Delta/V_y = 0.66$ and $m\lambda_D^2/V_y = 2$, allowing $\mu/V_y$ and $\lambda_R/\lambda_D$ to vary.} \label{GapFig110} \end{figure} Let us understand the behavior of the gap displayed in Fig.\ \ref{GapFig110}(a) in more detail. As described above, the plane in which the spins reside is tilted away from the $(x,z)$ plane by an angle $\theta = \cos^{-1}[1/\sqrt{1+(\gamma \lambda_R/\lambda_D)^2}]$. Non-zero $\theta$ gives rise to the anisotropy under $k_x \rightarrow -k_x$, which again tends to suppress superconductivity. One can see here that reducing $\lambda_R/\lambda_D$ therefore can compensate for an increase in $\gamma$, leading to the rather robust topological superconducting phase evident in the figure. 
On the other hand, at fixed $\lambda_R/\lambda_D$ which is sufficiently large ($\gtrsim 0.3$ in the figure), increasing $\gamma$ eventually results in the minimum energy excitation occurring at $k_y = 0$ and $k_x$ near the Fermi momentum. Further increasing $\gamma$ then shrinks the gap and eventually opens pockets of gapless excitations, destroying the topological superconductor. Conversely, if $\lambda_R/\lambda_D$ is sufficiently small ($\lesssim 1/3$ in the figure), the gap becomes independent of $\gamma$. In this region the minimum energy excitations are created at $k_x = 0$ and $k_y$ near the Fermi momentum. As $\lambda_R/\lambda_D\rightarrow 0$, the lower band transitions from a gapped topological $p_x+ip_y$ superconductor to a gapless nodal $p_x$ superconductor. This follows from Eq.\ (\ref{Deltatilde}), which in the limit $\lambda_R = 0$ yields a pair field $\tilde\Delta_{--} = \Delta \lambda_D k_x/[2 \sqrt{V_y^2 + \lambda_D^2 k_x^2}]$ that vanishes along the line $k_x = 0$. While a gapless $p_x$ superconducting phase is not our primary focus, we note that realizing such a state in a (110) quantum well with negligible Rashba coupling would be interesting in its own right. To gain a more complete picture of the topological superconductor's stability in the physically relevant regime, we further illustrate the behavior of the bulk excitation gap in Figs.\ \ref{GapFig110}(b) and (c), fixing for concreteness $\gamma = 1$ and $m\lambda_D^2/V_y = 2$. Figure \ref{GapFig110}(b) plots the dependence of the gap on $\Delta/V_y$ and $\lambda_R/\lambda_D$ when $\mu = 0$, while Fig.\ \ref{GapFig110}(c) displays the gap as a function of $\lambda_R/\lambda_D$ and $\mu/V_y$ when $\Delta/V_y = 0.66$. \subsection{Experimental considerations for (110) quantum wells} The main drawback of our proposal compared to the Rashba-only model discussed by Sau \emph{et al}.\ can be seen in Fig.\ \ref{GapFig110}(b).
In the previous section, we discussed that it may be desirable to intentionally suppress the gap for the topological superconducting state by considering Zeeman splittings which greatly exceed the Rashba energy scale $m\alpha^2$, in order to achieve higher densities and thereby reduce disorder effects. Here, however, this is possible to a lesser extent since the desired strength of $V_y$ is limited by the induced pairing field $\Delta$. If $\Delta/V_y$ becomes too small, then the system enters the gapless regime as shown in Fig.\ \ref{GapFig110}(b). Nevertheless, our proposal has a number of virtues, such as its tunability. As in the proposal of Sau \emph{et al}.\cite{Sau}, the strength of Rashba coupling can be controlled by applying a gate voltage\cite{RashbaTuning}, and the chemical potential in the semiconductor can be independently tuned by changing the Fermi level in the proximate $s$-wave superconductor. In our case the parameter $\gamma\propto \sqrt{m_x/m_y}$ can be controlled to some extent by applying pressure to modify the mass ratio $m_x/m_y$, although this is not essential. More importantly, one has additional control over the Zeeman field, which is generated by an externally applied in-plane magnetic field that largely avoids unwanted orbital effects. Such control enables one to readily tune the system across the quantum phase transition separating the ordinary and topological superconducting phases [see Fig.\ \ref{GapFig110}(b)]. This feature not only opens up the opportunity to study this quantum phase transition experimentally, but also provides an unambiguous diagnostic for identifying the topological phase. For example, the value of the critical current in the quantum well should exhibit a singularity at the phase transition, which would provide one signature for the onset of the topological superconducting state. 
We also emphasize that realizing the required Zeeman splitting through an applied field is technologically far simpler than coupling the quantum well to a ferromagnetic insulator, and avoids the additional source of disorder generated by doping the quantum well with magnetic impurities. Since the extent to which one can enhance the density in the topological superconducting phase by applying large Zeeman fields is limited here, it is crucial to employ materials with appreciable Dresselhaus coupling. We suggest that InSb quantum wells may be suitable for this purpose. Bulk InSb enjoys quite large Dresselhaus spin-orbit interactions of strength 760eV\AA$^3$ (for comparison, the value in bulk GaAs is 28eV\AA$^3$; see Ref.\ \onlinecite{WinklerBook}). For a quantum well of width $w$, one can crudely estimate the Dresselhaus coupling to be $\lambda_D\sim 760$eV\AA$^3/w^2$; assuming $w = 50$\AA, this yields a sizable $\lambda_D\sim 0.3$eV\AA. Bulk InSb also exhibits a spin-orbit enhanced $g$-factor of roughly 50 (though confinement effects can substantially diminish this value in a quantum well\cite{WinklerBook}). The large $g$-factor has important benefits. For one, it ensures that Zeeman energies $V_y$ of order a Kelvin, which we presume is the relevant scale for $\Delta$, can be achieved with fields substantially smaller than a Tesla. The ability to produce Zeeman energies of this scale with relatively small fields should open up a broad window where $V_y$ exceeds $\Delta$ but the applied field is smaller than the critical field for the proximate $s$-wave superconductor (which can easily exceed 1T). Both conditions are required for realizing the topological superconducting state in our proposed setup. A related benefit is that the Zeeman field felt by the semiconductor will be significantly larger than in the $s$-wave superconductor, since the $g$-factor for the latter should be much smaller. 
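The two numerical estimates in this paragraph are easily reproduced; the sketch below follows the text's inputs (the well width $w = 50$\,\AA\ and the target Zeeman energy $V_y = k_B \times 1$\,K are the assumptions stated above).

```python
# Reproducing the InSb estimates quoted above (our illustration, SI units)
mu_B = 9.2740101e-24    # Bohr magneton, J/T
k_B = 1.380649e-23      # Boltzmann constant, J/K

# Dresselhaus coupling from the bulk coefficient: lambda_D ~ gamma_c / w^2
gamma_c = 760.0         # eV Angstrom^3 (bulk InSb)
w = 50.0                # well width in Angstrom (assumption from the text)
lam_D = gamma_c / w**2
print(lam_D)            # ~0.3 eV Angstrom

# Field needed for a Zeeman energy V_y = k_B * (1 K) given V_y = g mu_B B / 2
g = 50.0                # bulk InSb g-factor
B = 2 * k_B * 1.0 / (g * mu_B)
print(B)                # ~0.06 T, well below a Tesla
```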
The mismatch in $g$-factors further suggests that superconductivity in the $s$-wave material will be disturbed relatively little by the required in-plane fields. \section{Discussion} Amongst the proposals noted in the introduction, the prospect for realizing Majorana fermions in a semiconductor sandwiched between a ferromagnetic insulator and $s$-wave superconductor stands out in part because it involves rather conventional ingredients (semiconductor technology is extraordinarily well developed). Nevertheless, this setup is not without experimental challenges, as we attempted to highlight in Sec.\ II above. For instance, a good interface between a ferromagnetic insulator and the semiconductor is essential, which poses an important engineering problem. If one employs a magnetic semiconductor instead, this introduces an additional source of disorder (in any case magnetic semiconductors are typically hole doped). The main goal of this paper was to simplify this setup even further, with the hope of hastening the experimental realization of Majorana fermions in semiconductor devices. We showed that a topological superconducting state can be driven by applying a (relatively weak) in-plane magnetic field to a (110) semiconductor quantum well coupled \emph{only} to an $s$-wave superconductor. The key to realizing the topological phase here was an interplay between Dresselhaus and Rashba couplings; together, they cause the spins to orient within a plane which tilts away from the quantum well. An in-plane magnetic field then plays the same role as the ferromagnetic insulator or an applied perpendicular magnetic field plays in the Rashba-only models considered in Refs.\ \onlinecite{FujimotoTSC,SatoFujimoto,Sau}, but importantly without the detrimental orbital effects of the perpendicular field. This setup has the virtue of simplicity---eliminating the need for a proximate ferromagnetic insulator or magnetic impurities---as well as tunability.
Having control over the Zeeman field allows one to, for instance, readily sweep across the quantum phase transition from the ordinary to the topological superconducting state. Apart from fundamental interest, this phase transition can serve as a diagnostic for unambiguously identifying the topological phase experimentally (\emph{e.g.}, through critical current measurements). As a more direct probe of Majorana fermions, a particularly simple proposal for their detection on the surface of a topological insulator was recently put forth by Law, Lee, and Ng\cite{MajoranaDetection}. This idea relies on `Majorana induced resonant Andreev reflection' at a chiral edge. In a topological insulator, such an edge exists between a proximity-induced superconducting region and a ferromagnet-induced gapped region of the surface. In our setup, this effect can be realized even more simply, since the semiconductor will exhibit a chiral Majorana edge at its boundary, without the need for a ferromagnet. Finally, since only one side of the semiconductor need be contacted to the $s$-wave superconductor, in principle this leaves open the opportunity to probe the quantum well directly from the other. The main disadvantage of our proposal is that if the Zeeman field in the semiconductor becomes too large compared to the proximity-induced pair field $\Delta$, the topological phase gets destroyed [see Fig.\ \ref{GapFig110}(b)]. By contrast, in the setup proposed by Sau \emph{et al}., the topological superconductor survives even when the Zeeman field greatly exceeds both $\Delta$ and $m\alpha^2$. Indeed, we argued that this regime is where experimentalists may wish to aim, at least initially, if this setup is pursued. Although the gap in the topological phase is somewhat suppressed in the limit $m\alpha^2/V_z \ll 1$, large Zeeman fields allow the density in this phase to be increased by one or two orders of magnitude, thus reducing disorder effects. 
(We should note, however, that the actual size of Zeeman fields that can be generated by proximity to a ferromagnetic insulator or intrinsic magnetic impurities is uncertain at present.) Since one is not afforded this luxury in our (110) quantum well setup, it is essential to employ materials with large Dresselhaus spin-orbit coupling in order to achieve reasonable densities in the semiconductor. We argued that fairly narrow InSb quantum wells may be well-suited for this purpose. Apart from exhibiting large spin-orbit coupling, InSb also enjoys a large $g$-factor, which should allow for weak fields (much less than 1T) to drive the topological phase in the quantum well while disturbing the proximate $s$-wave superconductor relatively little. There are a number of open questions which are worth exploring to further guide experimental effort in this direction. As an example, it would be worthwhile to carry out more accurate modeling, including for instance cubic Rashba and Dresselhaus terms\cite{SpinOrbitHigherOrder} and (especially) disorder, to obtain a more quantitative phase diagram for either of the setups discussed here. Exploring the full spectrum of vortex bound states (beyond just the zero-energy Majorana mode) is another important problem. The associated `mini-gap' provides one important factor determining the feasibility of quantum computation with such devices. We also think it is useful to explore other means of generating topological superconducting phases in such semiconductor settings. One intriguing possibility would be employing nuclear spins to produce a Zeeman field in the semiconductor\footnote{We thank Gil Refael and Jim Eisenstein for independently suggesting this possibility.}. 
More broadly, the proposals considered here can be viewed as examples of a rather general idea discussed recently\cite{Shinsei} for eliminating the so-called fermion-doubling problem that can otherwise destroy the non-Abelian statistics\cite{Doron} necessary for topological quantum computation. Very likely, we have by no means exhausted the possible settings in which Majorana fermions can emerge, even within the restricted case of semiconductor devices. Might hole-doped semiconductors be exploited in similar ways to generate topological superconducting phases, for instance, or perhaps heavy-element thin films such as bismuth? \acknowledgments{It is a pleasure to acknowledge a number of stimulating discussions with D.\ Bergman, S.\ Das Sarma, J.\ Eisenstein, M.\ P.\ A.\ Fisher, S.\ Fujimoto, R.\ Lutchyn, O.\ Motrunich, G.\ Refael, J.\ D.\ Sau, and A.\ Stern. We also acknowledge support from the Lee A.\ DuBridge Foundation.}
\section{Introduction} In this paper, we prove an inverse Strichartz theorem for certain Schr\"{o}dinger evolutions on the real line with $L^2$ initial data. Recall that solutions to the linear Schr\"{o}dinger equation \begin{align} \label{e:intro_free_particle} i\partial_t u = -\tfrac{1}{2} \Delta u \quad\text{with}\quad u(0, \cdot) \in L^2(\mf{R}^d), \end{align} satisfy the Strichartz inequality \begin{align} \label{e:intro_str} \| u\|_{L^{\fr{2(d+2)}{d}}_{t,x} (\mf{R} \times \mf{R}^d)} \le C \| u(0, \cdot) \|_{L^2 (\mf{R}^d)}. \end{align} In this translation-invariant setting, it was proved that if $u$ comes close to saturating the above inequality, then the initial data $u(0)$ must exhibit some ``concentration"; see \cite{carles-keraani,merle-vega,moyua-vargas-vega,begout-vargas}. We seek analogues of this result when the right side of~\eqref{e:intro_free_particle} is replaced by a more general Schr\"{o}dinger operator $-\tfr{1}{2}\Delta + V(t,x)$. Such refinements of the Strichartz inequality have provided a key technical tool in the study of the $L^2$-critical nonlinear Schr\"{o}dinger equation \begin{align} \label{e:intro_m-crit} i\partial_t u = -\tfrac{1}{2}\Delta u \pm |u|^{\fr{4}{d}} u \quad\text{with}\quad u(0, \cdot) \in L^2(\mf{R}^d). \end{align} The term ``$L^2$-critical" or ``mass-critical" refers to the property that the rescaling \begin{align*} u(t,x) \mapsto u_{\lambda}(t,x) := \lambda^{\fr{d}{2}} u(\lambda^2 t, \lambda x), \quad \lambda > 0, \end{align*} preserves both the class of solutions and the conserved \emph{mass} $ M[u] := \| u(t)\|_{L^2}^2 = \| u(0)\|_{L^2}^2. $ An inverse theorem for~\eqref{e:intro_str} begets profile decompositions that underpin the large data theory by revealing how potential blowup solutions may concentrate. The reader may consult for instance the notes~\cite{claynotes} for a more detailed account of this connection.
The initial-value problem \eqref{e:intro_m-crit} was shown to be globally wellposed in \cite{dodson_mass-crit_high-d,dodson_mass-crit_2d,dodson_mass-crit_1d, dodson_focusing, Killip2009, Killip2008, Tao2007}. Characterizing near-optimizers of the inequality~\eqref{e:intro_str} involves significant technical challenges due to the presence of noncompact symmetries. Besides invariance under rescaling and translations in space and time, the inequality also possesses \emph{Galilean invariance} \begin{align*} u(t,x)\mapsto u_{\xi_0}(t, x) := e^{i[x \xi_0 - \fr{1}{2}t |\xi_0|^2]} u(t, x - t \xi_0), \quad u_{\xi_0}(0) = e^{ix \xi_0} u(0), \quad \xi_0 \in \mf{R}^d. \end{align*} Because of this last degeneracy, the $L^2$-critical setting is much more delicate compared to variants of~\eqref{e:intro_str} with higher regularity Sobolev norms on the right side, such as the energy-critical analogue \begin{align} \label{e:intro_e-crit_str} \| u\|_{L^{\fr{2(d+2)}{d-2}}_{t,x}(\mf{R} \times \mf{R}^d)} \le C \| \nabla u(0, \cdot)\|_{L^2(\mf{R}^d)}. \end{align} In particular, Littlewood-Paley theory has little use when seeking an inverse to \eqref{e:intro_str} because $u(0)$ can concentrate anywhere in frequency space, not necessarily near the origin. The works cited above use spacetime orthogonality arguments and appeal to Fourier restriction theory, such as Tao's bilinear estimate for paraboloids (when $d \ge 3$)~\cite{Tao2003}. Ultimately, we wish to consider the large data theory for the equation \begin{align} \label{e:intro_m-crit_v} i\partial_t u = -\fr{1}{2}\Delta u + V u \pm |u|^{\fr{4}{d}} u, \quad u(0, \cdot) \in L^2(\mf{R}^d), \end{align} where $V(x)$ is a real-valued potential. The main example we have in mind is the harmonic oscillator $V = \sum_j \omega_j^2 x_j^2$, which has obvious physical relevance and arises in the study of Bose-Einstein condensates~\cite{zhang_bec}. 
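The Galilean invariance displayed above is simple to verify on a concrete solution. The following sketch (our illustration, in dimension $d = 1$ with a Gaussian datum) checks symbolically that the boosted function still solves the free equation.

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)
xi0 = sp.Rational(3, 2)   # an arbitrary boost parameter (illustrative choice)

# A concrete Gaussian solution of the free equation i u_t = -(1/2) u_xx (d = 1)
u = (1 + sp.I * t) ** sp.Rational(-1, 2) * sp.exp(-x**2 / (2 * (1 + sp.I * t)))

# Galilean boost: u_{xi0}(t, x) = exp(i[x xi0 - t xi0^2 / 2]) u(t, x - t xi0)
u_boost = sp.exp(sp.I * (x * xi0 - t * xi0**2 / 2)) * u.subs(x, x - t * xi0)

def free_residual(f):
    """Residual of the free Schrodinger equation i f_t + (1/2) f_xx."""
    return sp.I * sp.diff(f, t) + sp.Rational(1, 2) * sp.diff(f, x, 2)

pt = {t: sp.Rational(1, 3), x: sp.Rational(2, 5)}
print(abs(sp.N(free_residual(u).subs(pt), 30)))        # ~0: u solves the free equation
print(abs(sp.N(free_residual(u_boost).subs(pt), 30)))  # ~0: the boost preserves solutions
```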
Although the scaling symmetry is broken, solutions initially concentrated at a point are well-approximated for short times by (possibly modulated) solutions to the genuinely scale-invariant mass-critical equation~\eqref{e:intro_m-crit}. As described in Lemma~\ref{l:galilei} below, the harmonic oscillator also admits a more complicated analogue of Galilean invariance. This is related to the fact that $u$ solves equation~\eqref{e:intro_m-crit} iff its \emph{Lens transform} $\mcal{L} u$ satisfies equation~\eqref{e:intro_m-crit_v} with $V = \tfr{1}{2}|x|^2$, where \begin{align*} \mcal{L} u (t,x) := \tfr{1}{(\cos t)^{d/2}} u \Bigl( \tan t, \tfrac{x}{\cos t} \Bigr) e^{-\fr{i |x|^2 \tan t}{2}}. \end{align*} The energy-critical counterpart to~\eqref{e:intro_m-crit_v} with $V = \tfr{1}{2}|x|^2$ was recently studied by the first author~\cite{me_quadratic_potential}. While the Lens transform may be inverted to deduce global wellposedness for the mass-critical harmonic oscillator when $\omega_j \equiv \tfr{1}{2}$, this miraculous connection with equation~\eqref{e:intro_m-crit} evaporates as soon as the $\omega_j$ are not all equal. Studying the equation in greater generality therefore requires a more robust line of attack, such as the concentration-compactness and rigidity paradigm. To implement that strategy one needs appropriate inverse $L^2$ Strichartz estimates. This is no small matter since the Fourier-analytic techniques underpinning the proofs of the constant-coefficient theorems---most notably, Fourier restriction estimates---are ill-adapted to large variable-coefficient perturbations. We present a different approach to these inverse estimates in one space dimension. By eschewing Fourier analysis for physical space arguments, we can treat a family of Schr\"{o}dinger operators that includes the free particle and the harmonic oscillator. Moreover, our potentials are allowed to depend on time. 
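The Lens transform identity can likewise be checked symbolically on an explicit solution; the sketch below (our illustration, for $d = 1$ with a Gaussian datum) verifies that $\mcal{L}u$ solves the harmonic oscillator equation with $V = \tfr{1}{2}x^2$.

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)
# A concrete Gaussian solution of the free equation i u_t = -(1/2) u_xx (d = 1)
u = (1 + sp.I * t) ** sp.Rational(-1, 2) * sp.exp(-x**2 / (2 * (1 + sp.I * t)))

# Lens transform for d = 1: (cos t)^{-1/2} u(tan t, x / cos t) exp(-i x^2 tan(t) / 2)
v = sp.cos(t) ** sp.Rational(-1, 2) \
    * u.subs({t: sp.tan(t), x: x / sp.cos(t)}, simultaneous=True) \
    * sp.exp(-sp.I * x**2 * sp.tan(t) / 2)

free_res = sp.I * sp.diff(u, t) + sp.Rational(1, 2) * sp.diff(u, x, 2)
ho_res = sp.I * sp.diff(v, t) + sp.Rational(1, 2) * sp.diff(v, x, 2) - x**2 / 2 * v

pt = {t: sp.Rational(3, 10), x: sp.Rational(7, 10)}   # a point with |t| < pi/2
print(abs(sp.N(free_res.subs(pt), 30)))  # ~0: u solves the free equation
print(abs(sp.N(ho_res.subs(pt), 30)))    # ~0: Lu solves i v_t = -(1/2)v_xx + (x^2/2)v
```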
\subsection{The setup} Consider a (possibly time-dependent) Schr\"{o}dinger operator on the real line \[ H(t) = -\tfrac{1}{2} \partial^2_x + V(t, x), \quad x \in \mf{R}, \] and assume $V$ is a subquadratic potential. Specifically, we require that $V$ satisfies the following hypotheses: \begin{itemize} \item For each $k \ge 2$, there exists $M_k < \infty$ so that \begin{equation} \label{e:V_h1} \|V(t, x)\|_{L^\infty_t L^\infty_x( |x| \le 1)} + \| \partial^k_x V(t, x) \|_{L^\infty_{t,x}} + \| \partial^k_x \partial_t V(t, x)\|_{L^\infty_{t,x}} \le M_k. \end{equation} \item There exists some $\varepsilon > 0$ so that \begin{equation} \label{e:V_h2} | \langle x \rangle^{1+\varepsilon} \partial^3_x V| + | \langle x \rangle^{1+\varepsilon}\partial^3_x \partial_t V| \in L^\infty_{t,x}. \end{equation} By the fundamental theorem of calculus, this implies that the second derivative $\partial^2_x V(t, x)$ converges as $x \to \pm \infty$. Here and in the sequel, we write $\langle x \rangle := (1 + |x|^2)^{1/2}$. \end{itemize} Note that the potentials $V = 0$ and $V = \tfr{1}{2}x^2$ both fall into this class. The first set of conditions on the space derivatives of $V$ is quite natural in view of classical Fourier integral operator constructions, from which one can deduce dispersive and Strichartz estimates; see Theorem~\ref{t:propagator}. We also need some time regularity of solutions for our spacetime orthogonality arguments. However, the decay hypothesis on the third derivative $\partial^3_x V$ is technical; see the discussion surrounding Lemma~\ref{l:technical_lma_2} below. The propagator $U(t, s)$ for such Hamiltonians is known to obey Strichartz estimates at least locally in time: \begin{equation} \label{e:loc_str} \| U(t, s) f\|_{L^6_{t,x} (I \times \mf{R})} \lesssim_{I} \|f\|_{L^2(\mf{R})} \end{equation} for any compact interval $I$ and any fixed $s \in \mf{R}$; see Corollary~\ref{c:strichartz}.
Note that $U(t, s) = e^{-i(t-s)H}$ is a one-parameter group if one assumes that $V = V(x)$ is time-independent, but our methods do not require this assumption. Our main result asserts that if the left side of~\eqref{e:loc_str} is nontrivial relative to the right side, then the evolution of initial data must contain a ``bubble'' of concentration. Such concentration will be detected by probing the solution with suitably scaled, translated, and modulated test functions. For $\lambda > 0$ and $(x_0, \xi_0) \in T^* \mf{R} \cong \mf{R}_x \times \mf{R}_\xi$, define the scaling and phase space translation operators \[ S_\lambda f (x) = \lambda^{-1/2} f(\lambda^{-1} x) \quad\text{and}\quad \pi(x_0, \xi_0) f (x) = e^{i(x-x_0)\xi_0} f(x - x_0). \] Let $\psi$ denote a real even Schwartz function with $\|\psi\|_2 = (2\pi)^{-1/2}$. Its phase space translate $\pi(x_0, \xi_0) \psi$ is localized in space near $x_0$ and in frequency near $\xi_0$. \begin{thm} \label{t:inv_str} There exists $\beta > 0$ such that if $0 < \varepsilon \le \| U(t, 0) f\|_{L^6([-\fr{1}{2}, \fr{1}{2}] \times \mf{R})}$ and $\| f\|_{L^2} \le A$, then \[ \sup_{z \in T^* \mf{R}, \ 0 < \lambda \le 1, \ |t| \le 1/2} | \langle \pi(z) S_\lambda \psi, U(t, 0) f \rangle_{L^2(\mf{R})}| \ge C \varepsilon (\tfr{\varepsilon}{A} )^{\beta} \] for some constant $C$ depending on the seminorms in \eqref{e:V_h1} and~\eqref{e:V_h2}. \end{thm} By repeatedly applying the following corollary, one can obtain a linear profile decomposition. For simplicity, we state it assuming the potential is time-independent (so that $U(t, 0) = e^{-itH}$). \begin{cor} \label{c:lpd_profile_extraction} Let $\{ f_n\} \subset L^2(\mf{R})$ be a sequence such that $0 < \varepsilon \le \| e^{-itH} f_n\|_{L^6_{t,x} ([-\fr{1}{2}, \fr{1}{2}] \times \mf{R})}$ and $\|f_n\|_{L^2} \le A$ for some constants $A, \varepsilon > 0$.
Then, after passing to a subsequence, there exist a sequence of parameters \[\{(\lambda_n, t_n, z_n)\}_n \subset (0, 1] \times [-1/2, 1/2] \times T^* \mf{R}\] and a function $0 \ne \phi \in L^2$ such that, \begin{gather} S_{\lambda_n}^{-1} \pi(z_n)^{-1} e^{-it_n H} f_n \rightharpoonup \phi \text{ in } L^2 \nonumber \\ \| \phi\|_{L^2} \gtrsim \varepsilon (\tfr{\varepsilon}{ A} )^{\beta}. \label{e:profile_nonzero} \end{gather} Further, \begin{gather} \| f_n\|_2^2 - \| f_n - e^{it_n H} \pi(z_n) S_{\lambda_n} \phi \|_2^2 - \| e^{it_n H} \pi(z_n) S_{\lambda_n} \phi\|_2^2 \to 0. \label{e:L2_decoupling} \end{gather} \end{cor} \begin{proof} By Theorem~\ref{t:inv_str}, there exist $(\lambda_n, t_n, z_n)$ such that $| \langle \pi(z_n) S_{\lambda_n} \psi, e^{-it_n H}f_n \rangle| \gtrsim \varepsilon (\tfr{\varepsilon}{ A} )^{\beta}$. As the sequence $S_{\lambda_n}^{-1} \pi(z_n)^{-1} e^{-it_n H}f_n$ is bounded in $L^2$, it has a weak subsequential limit $\phi \in L^2$. Passing to this subsequence, we have \[ \| \phi\|_2 \gtrsim | \langle \psi, \phi \rangle| = \lim_{n \to \infty} | \langle \psi, S_{\lambda_n}^{-1} \pi(z_n)^{-1} e^{-it_n H} f_n \rangle| \gtrsim \varepsilon (\tfr{\varepsilon}{A})^{\beta}. \] To obtain~\eqref{e:L2_decoupling}, write the left side as \[ \begin{split} 2 \opn{Re} \langle f_n - e^{it_n H} \pi(z_n) S_{\lambda_n} \phi, e^{it_n H} \pi(z_n) S_{\lambda_n} \phi \rangle = 2 \opn{Re} \langle S_{\lambda_n}^{-1} \pi(z_n)^{-1} e^{-it_n H} f_n - \phi, \phi \rangle \to 0, \end{split} \] by the definition of $\phi$. \end{proof} The restriction to a compact time interval in the above statements is dictated by the generality of our hypotheses. For a generic subquadratic potential, the $L^6_{t,x}$ norm of a solution need not be finite on $\mf{R}_t \times \mf{R}_x$. For example, solutions to the harmonic oscillator (for which $V(x) = \frac12 x^2$) are periodic in time. However, the conclusions may be strengthened in some cases. 
In particular, our methods specialize to the case $V = 0$ to yield \begin{thm} \label{t:inv_str_free_particle} If $0 < \varepsilon \le \|e^{\fr{it\Delta}{2}} f\|_{L^6_{t,x}(\mf{R} \times \mf{R})} \lesssim \|f\|_{L^2} = A$, then \[ \sup_{z \in T^* \mf{R}, \ \lambda > 0, \ t \in \mf{R}} | \langle \pi(z) S_\lambda \psi, e^{\fr{it \Delta}{2}} f \rangle| \gtrsim \varepsilon (\tfr{\varepsilon}{A})^{\beta}. \] \end{thm} This yields an analogue of Corollary~\ref{c:lpd_profile_extraction}, which can be used to derive a linear profile decomposition for the one-dimensional free particle. Such a profile decomposition was obtained originally by Carles and Keraani~\cite{carles-keraani} using different methods. \subsection{Ideas of proof} We shall assume in the sequel that the initial data $f$ is Schwartz. This assumption will justify certain applications of Fubini's theorem and may be removed a posteriori by an approximation argument. Further, we prove the theorem with the time interval $[-\tfr{1}{2}, \tfr{1}{2}]$ replaced by $[-\delta_0, \delta_0]$, where $\delta_0$ is furnished by Theorem~\ref{t:propagator} according to the seminorms $M_k$ of the potential. Indeed, the interval $[-\tfr{1}{2}, \tfr{1}{2}]$ can then be tiled by subintervals of length $\delta_0$. Given these preliminary reductions, we describe the main ideas of the proof of Theorem~\ref{t:inv_str}. Our goal is to locate the parameters describing a concentration bubble in the evolution of the initial data. The relevant parameters are a length scale $\lambda_0$, spatial center $x_0$, frequency center $\xi_0$, and a time $t_0$ describing when the concentration occurs. Each parameter is associated with a noncompact symmetry or approximate symmetry of the Strichartz inequality.
For instance, when $V = 0$ or $V = \tfr{1}{2}x^2$, both sides of~\eqref{e:loc_str} are preserved by translations $f \mapsto f(\cdot - x_0)$ and modulations $f \mapsto e^{i(\cdot) \xi_0} f$ of the initial data, while more general $V$ admit an approximate Galilean invariance; see Lemma~\ref{l:galilei} below. The existing approaches to inverse Strichartz inequalities for the free particle can be roughly summarized as follows. First, one uses Fourier analysis to isolate a scale $\lambda_0$ and frequency center $\xi_0$. For example, Carles-Keraani prove in their Proposition 2.1 that for some $1 < p < 2$, \[ \| e^{it \partial_x^2 } f\|_{L^6_{t,x}(\mf{R} \times \mf{R})} \lesssim_p \Bigl( \sup_{J} |J|^{\fr{1}{2} - \fr{1}{p}} \| \hat{f}\|_{L^p (J)} \Bigr)^{1/3} \|f\|_{L^2(\mf{R})}, \] where $J$ ranges over all intervals and $\hat{f}$ is the Fourier transform of $f$. Then one uses a separate argument to determine $x_0$ and $t_0$. This strategy ultimately relies on the fact that the propagator for the free particle is diagonalized by the Fourier transform. One does not enjoy that luxury with general Schr\"{o}dinger operators as the momenta of particles may vary with time and in a position-dependent manner. Thus it is natural to consider the position and frequency parameters together in phase space. To this end, we use a wavepacket decomposition as a partial substitute for the Fourier transform. Unlike the Fourier transform, however, the wavepacket transform requires that one first chooses a length scale. This is not entirely trivial because the Strichartz inequality~\eqref{e:loc_str} which we are trying to invert has no intrinsic length scale; the rescaling \[f \mapsto \lambda^{-d/2} f(\lambda^{-1} \cdot), \ 0 < \lambda \ll 1 \] preserves both sides of the inequality exactly when $V = 0$ and at least approximately for subquadratic potentials $V$. We obtain the parameters in a different order. 
Using a direct physical space argument, we show that if $u(t,x)$ is a solution with nontrivial $L^6_{t,x}$ norm, then there exists a time interval $J$ such that $u$ is large in $L^q_{t,x}(J \times \mf{R})$ for some $q < 6$. Unlike the $L^6_{t,x}$ norm, the $L^q_{t,x}$ norm is not scale-invariant, hence the interval $J$ identifies a significant time $t_0$ and physical scale $\lambda_0 = \sqrt{|J|}$. By an interpolation and rescaling argument, we then reduce matters to a refined $L^2_x \to L^4_{t, x}$ estimate. This is then proved using a wavepacket decomposition, integration by parts, and analysis of bicharacteristics, revealing the parameters $x_0$ and~$\xi_0$ simultaneously. This paper is structured as follows. Section~\ref{s:prelim} collects some preliminary definitions and lemmas. The heart of the argument is presented in Sections~\ref{s:inv_hls} and~\ref{s:L4}. As the identification of a time interval works in any number of space dimensions, Section~\ref{s:inv_hls} is written for a general subquadratic Schr\"{o}dinger operator on $\mf{R}^d$. In fact, the argument there applies to any linear propagator that satisfies the dispersive estimate. In the later sections we specialize to $d = 1$. Further insights seem to be needed in two or more space dimensions. A naive attempt to extend our methods to higher dimensions would require us to prove a refined $L^p$ estimate for some $2 < p < 4$; our arguments in this paper exploit the fact that $4$ is an even integer. There is also a more conceptual barrier: while a timescale should serve as a proxy for \emph{one} spatial scale, there may \emph{a priori} exist more than one interesting physical scale in higher dimensions. For instance, the nonelliptic Schr\"{o}dinger equation \begin{align*} i\partial_t u = -\partial_x \partial_y u \end{align*} in two dimensions satisfies the Strichartz estimate~\eqref{e:intro_str} and admits the scaling symmetry $u \mapsto u(t, \lambda x, \lambda^{-1} y)$ in addition to the usual one.
A refinement of the Strichartz inequality for this particular example was obtained using Fourier-analytic methods by Rogers and Vargas~\cite{Rogers2006}. Any higher-dimensional generalization of our methods must somehow distinguish the elliptic and nonelliptic cases. \subsection*{Acknowledgements} This work was partially supported by NSF grants DMS-1265868 (PI R. Killip), DMS-1161396, and DMS-1500707 (both PI M. Visan). \section{Preliminaries} \label{s:prelim} \subsection{Phase space transforms} We briefly recall the (continuous) wavepacket decomposition; see for instance~\cite{folland}. Fix a real, even Schwartz function $\psi \in \mcal{S}(\mf{R}^d)$ with $\| \psi\|_{L^2} = (2\pi)^{-d/2}$. For a function $f \in L^2(\mf{R}^d)$ and a point $z = (x, \xi) \in T^* \mf{R}^d = \mf{R}^d_x \times \mf{R}^d_{\xi}$ in phase space, define \[ T f(z) = \int_{\mf{R}^d} e^{i(x-y)\xi} \psi(x-y) f(y) \, dy = \langle f, \psi_z \rangle_{L^2(\mf{R}^d)}. \] By taking the Fourier transform in the $x$ variable, we get \[ \mcal{F}_x Tf (\eta, \xi) = \int_{\mf{R}^d} e^{-iy\eta} \hat{\psi}(\eta - \xi) f(y) \, dy = \hat{\psi}(\eta - \xi) \hat{f}(\eta). \] Thus $T$ maps $\mcal{S}(\mf{R}^d) \to \mcal{S}(\mf{R}^d \times \mf{R}^d)$ and is an isometry $L^2 (\mf{R}^d) \to L^2( T^* \mf{R}^d)$. The hypothesis that $\psi$ is even implies the adjoint formula \[ T^* F(y) = \int_{T^* \mf{R}^d} F(z) \psi_z(y) \, dz \] and the inversion formula \[ f = T^*T f = \int_{T^* \mf{R}^d} \langle f, \psi_z \rangle_{L^2(\mf{R}^d)} \psi_z \, dz. \] \subsection{Estimates for bicharacteristics} Let $V(t,x)$ satisfy $\partial_x^k V(t, \cdot) \in L^\infty(\mf{R}^d)$ for all $k \ge 2$, uniformly in $t$. The time-dependent symbol $h(t, x, \xi) = \tfr{1}{2}|\xi|^2 + V(t,x)$ defines a globally Lipschitz Hamiltonian vector field $\xi \partial_x - (\partial_x V) \partial_\xi$ on $T^* \mf{R}^d$, hence the flow map $\Phi(t, s) : T^* \mf{R}^d \to T^* \mf{R}^d$ is well-defined for all $s$ and $t$. 
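For the harmonic oscillator $V = \tfr{1}{2}|x|^2$, for instance, the flow is explicit: \[ \Phi(t, 0)(x, \xi) = ( x \cos t + \xi \sin t, \; \xi \cos t - x \sin t ), \] as one checks directly from Hamilton's equations $\dot{x} = \xi$ and $\dot{\xi} = -\partial_x V = -x$; in this case the flow is simply a rotation of phase space.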
For $z = (x, \xi)$, let $z^t = ( x^t(z), \xi^t(z) ) = \Phi(t, 0)(z)$ denote the bicharacteristic starting from $z$ at time $0$. Fix $z_0, z_1 \in T^* \mf{R}^d$ and abbreviate $z_j^t = (x_j^t, \xi_j^t)$. We obtain by integration \[ \begin{split} x_0^t - x_1^t &= x_0^s - x_1^s + (t-s) (\xi_0^s - \xi_1^s) - \int_s^t (t - \tau) ( \partial_x V(\tau, x_0^\tau) - \partial_x V(\tau, x_1^\tau)) \, d\tau\\ \xi_0^t - \xi_1^t &= \xi_0^s - \xi_1^s - \int_s^t (\partial_x V(\tau, x_0^\tau) - \partial_x V (\tau, x_1^\tau) ) \, d \tau. \end{split} \] As $|\partial_x V (\tau, x_0^\tau) - \partial_x V (\tau, x_1^\tau)| \le \| \partial^2_x V \|_{L^\infty} |x_0^\tau - x_1^\tau|$, we have for $|t-s| \le 1$ \begin{equation} \label{e:difference_of_trajectories} \begin{split} &|x_0^t - x_1^t| \le ( |x_0^s - x_1^s| + |t-s| |\xi_0^s - \xi_1^s|) e^{ \| \partial_x^2 V\|_{L^\infty} },\\ &| \xi_0^t - \xi_1^t - (\xi_0^s - \xi_1^s)| \le ( |t-s| |x_0^s - x_1^s| + |t-s|^2 |\xi_0^s - \xi_1^s|) \|\partial^2_x V\|_{L^\infty} e^{ \| \partial_x^2 V\|_{L^\infty} } ,\\ &|x_0^t - x_1^t - (x_0^s - x_1^s) - (t-s) (\xi_0^s - \xi_1^s)| \le (|t-s|^2 |x_0^s - x_1^s| + |t-s|^3 |\xi_0^s - \xi_1^s|) e^{\| \partial^2_x V \|_{L^\infty}}. \end{split} \end{equation} In the sequel, we shall always assume that $|t-s| \le 1$, and all implicit constants shall depend on $\partial^2_x V$ or finitely many higher derivatives. We also remark that this time restriction may be dropped if $\partial^2_x V \equiv 0$ (as in Theorem~\ref{t:inv_str_free_particle}). The preceding computations immediately yield the following dynamical consequences: \begin{lma} \label{l:collisions} Assume the preceding setup. \begin{itemize} \item There exists $\delta>0$, depending on $\|\partial_x^2 V \|_{L^\infty}$, such that $|t-s| \le \delta$ implies \[ |x^t_0 - x_1^t - (x_0^s - x_1^s) - (t-s) (\xi_0^s - \xi_1^s)| \le \fr{1}{100} ( |x_0^s - x_1^s| + |t-s| |\xi_0^s - \xi_1^s| ).
\] Hence if $|x_0^s - x_1^s | \le r$ and $C \ge 2$, then $|x_0^t - x_1^t| \ge Cr$ for $\tfr{2Cr}{|\xi_0^s - \xi_1^s|} \le |t-s| \le \delta$. Informally, two particles colliding with sufficiently large relative velocity will interact only once during a length $\delta$ time interval. \item With $\delta$ and $C$ as above, if $|x_0^s - x_1^s| \le r$, then \[ | \xi_0^t - \xi_1^t - (\xi_0^s - \xi_1^s)| \le \min \Bigl( \delta, \fr{2Cr}{|\xi_0^s - \xi_1^s|} \Bigr) Cr \|\partial^2_x V\|_{L^\infty} e^{\|\partial^2_x V\|_{L^\infty}} \] for all $t$ such that $|t-s|\leq \delta$ and $|x_0^{\tau} - x_1^{\tau} | \le Cr$ for all $s\leq \tau\leq t$. That is, the relative velocity of two particles remains essentially constant during an interaction. \end{itemize} \end{lma} The following technical lemma will be used in Section~\ref{s:L4_estimate}. \begin{lma} \label{l:technical_lma} There exists a constant $C > 0$ so that if $Q_\eta = (0, \eta) +[-1, 1]^{2d}$ and $r \ge 1$, then \[ \bigcup_{|t - t_0| \le \min(|\eta|^{-1}, 1) } \Phi(t, 0)^{-1} (z_0^t + rQ_\eta) \subset \Phi(t_0, 0)^{-1} (z_0^{t_0}+ Cr Q_\eta). \] \end{lma} In other words, if the bicharacteristic $z^t$ starting at $z \in T^*\mf{R}^d$ passes through the cube $z_0^t + rQ_{\eta}$ in phase space during some time window $|t - t_0| \le \min(|\eta|^{-1},1)$, then it must lie in the dilate $z_0^{t_0} + CrQ_{\eta}$ at time $t_0$. \begin{proof} If $z^{s} \in z_0^{s} + rQ_\eta$, then~\eqref{e:difference_of_trajectories} and $|t-s| \le \min(|\eta|^{-1}, 1)$ imply that \begin{gather*} |x^t - x_0^t| \lesssim |x^s - x_0^s| + \min(|\eta|^{-1}, 1) (|\eta| + r) \lesssim r,\\ |\xi^t - \xi_0^t - (\xi^s - \xi_0^s)| \lesssim r\min(|\eta|^{-1}, 1). \end{gather*} Taking $t = t_0$, these bounds show that $z^{t_0} \in z_0^{t_0} + CrQ_\eta$ once $C$ is chosen sufficiently large. \end{proof} \subsection{The Schr\"{o}dinger propagator} In this section we recall some facts regarding the quantum propagator for subquadratic potentials.
First, we have the following oscillatory integral representation: \begin{thm}[{Fujiwara~\cite{fujiwara_fundamental_solution,fujiwara_path_integrals}}] \label{t:propagator} Let $V(t, x)$ satisfy \[ M_k := \| \partial^k_x V(t, x) \|_{L^\infty_{t,x}} +\| V(t, x)\|_{L^\infty_t L^\infty_x ( |x| \le 1)} < \infty \] for all $k \ge 2$. There exists a constant $\delta_0 > 0$ such that for all $0 < |t-s| \le \delta_0$ the propagator $U(t, s)$ for $H = -\tfr{1}{2} \Delta + V(t, x)$ has Schwartz kernel \[ U(t, s) (x, y) = \Bigl (\fr{1}{2\pi i (t-s)} \Bigr )^{d/2} a(t, s, x, y) e^{iS(t, s, x, y)}, \] where for each $m > 0$ there is a constant $\gamma_m > 0$ such that \[ \| a(t, s, x, y) - 1 \|_{C^m (\mf{R}^d_x \times \mf{R}^d_y)} \le \gamma_m |t-s|^2. \] Moreover \[ S(t, s, x, y) = \fr{|x-y|^2}{2(t-s)} + (t-s) r(t, s, x, y), \] with \[ |\partial_x r| + |\partial_y r| \le C(M_2) (1 + |x| + |y|), \] and for each multi-index $\alpha$ with $|\alpha| \ge 2$, the quantity \begin{equation} \nonumber C_\alpha = \| \partial^\alpha_{x, y} r(t, s, \cdot, \cdot ) \|_{L^\infty} \end{equation} is finite. The map $U(t, s) : \mcal{S}(\mf{R}^d) \to \mcal{S}(\mf{R}^d)$ is a topological isomorphism, and all implicit constants depend on finitely many $M_k$. \end{thm} \begin{cor}[Dispersive and Strichartz estimates] \label{c:strichartz} If $V$ satisfies the hypotheses of the previous theorem, then $U(t, s)$ admits the fixed-time bounds \[ \|U(t,s)\|_{L^1_x(\mf{R}^d) \to L^\infty_x(\mf{R}^d)} \lesssim |t-s|^{-d/2} \] whenever $|t-s| \le \delta_0$. For any compact time interval $I$ and any exponents $(q,r)$ satisfying $2 \le q, r \le \infty$, $\tfr{2}{q} + \tfr{d}{r} = \tfr{d}{2}$, and $(q, r, d) \ne (2, \infty, 2)$, we have \[ \| U(t, s) f\|_{L^{q}_t L^r_x ( I \times \mf{R}^d )} \lesssim_{I} \|f\|_{L^2(\mf{R}^d)}.
\] \end{cor} \begin{proof} Combining Theorem~\ref{t:propagator} with the general machinery of Keel-Tao~\cite{keel-tao}, we obtain \[ \| U(t, s) f\|_{ L^q_t L^r_x ( \{ |t-s| \le \delta_0 \} \times \mf{R}^d)} \lesssim \|f\|_{L^2}. \] If $I = [T_0, T_1]$ is a general time interval, partition it into subintervals $[t_{j-1}, t_{j}]$ of length at most $\delta_0$. For each such subinterval we can write $U(t, s) = U(t, t_{j-1}) U(t_{j-1}, s)$, thus \[ \| U(t, s) f\|_{L^q_t L^r_x ( [t_{j-1}, t_{j}] \times \mf{R}^d)} \lesssim \|U (t_{j-1}, s) f\|_{L^2} = \|f\|_{L^2}. \] The corollary follows from summing over the subintervals. \end{proof} Recall that solutions to the free particle equation $i\partial_t u = -\tfr{1}{2}\Delta u$ with $ u(0) = \phi$ transform as follows under phase space translations of the initial data: \begin{equation} \label{e:galilei_free_particle} e^{\fr{it\Delta}{2}} \pi(x_0, \xi_0) \phi(x) = e^{i[(x-x_0)\xi_0 - \fr{1}{2}t |\xi_0|^2]} (e^{\fr{it\Delta}{2}} \phi)(x - x_0 - t\xi_0). \end{equation} Physically, $\pi(x_0, \xi_0)\phi$ represents the state of a quantum particle with position $x_0$ and momentum $\xi_0$. The above relation states that the time evolution of $\pi(x_0, \xi_0)\phi$ in the absence of a potential oscillates in space and time at frequency $\xi_0$ and $-\tfr{1}{2}|\xi_0|^2$, respectively, and tracks the classical trajectory $t \mapsto x_0 + t \xi_0$. 
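One may verify~\eqref{e:galilei_free_particle} directly: writing the right side as $u(t, x) = e^{i\theta} v(t, y)$ with $\theta = (x - x_0)\xi_0 - \tfr{1}{2} t |\xi_0|^2$, $y = x - x_0 - t\xi_0$, and $v = e^{\fr{it\Delta}{2}} \phi$, one computes $\nabla_x u = e^{i\theta} ( i \xi_0 v + \nabla_y v )$ and \[ i \partial_t u = e^{i\theta} \bigl[ \tfr{1}{2} |\xi_0|^2 v - i \xi_0 \cdot \nabla_y v - \tfr{1}{2} \Delta_y v \bigr] = -\tfr{1}{2} \Delta_x u, \] while both sides agree at $t = 0$.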
In the presence of a potential, the time evolution of such modified initial data admits an analogous description: \begin{lma} \label{l:galilei} If $U(t, s)$ is the propagator for $H = -\tfr{1}{2}\Delta + V(t, x)$, then \[ \begin{split} U(t,s) \pi(z_0^s) \phi(x) &= e^{i [ (x-x_0^t) \xi_0^t + \int_s^t \fr{1}{2} |\xi_0^\tau|^2 - V(\tau, x_0^\tau) \, d \tau]} U^{z_0}(t, s) \phi (x - x_0^t) \\ &=e^{i \alpha(t, s, z_0)} \pi( z_0^t) [U^{z_0} (t, s) \phi](x), \end{split} \] where \[ \alpha(t, s, z_0) = \int_s^t \fr{1}{2} |\xi_0^\tau|^2 - V(\tau, x_0^\tau) \, d\tau \] is the classical action, $U^{z_0}(t, s)$ is the propagator for $H^{z_0} = -\tfr{1}{2} \Delta + V^{z_0} (t, x)$ with \[ V^{z_0}(t, x) = V(t, x_0^t + x) - V(t, x_0^t) - x \partial_x V (t, x_0^t) = \langle x, Qx \rangle \] where \[ Q(t,x) = \int_0^1 (1-\theta) \partial^2_x V(t, x_0^t + \theta x) \, d\theta, \] and $z_0^t = (x_0^t, \xi_0^t)$ is the trajectory of $z_0$ under the Hamiltonian flow of the symbol $h = \tfr{1}{2} |\xi|^2 + V(t, x)$. The propagator $U^{z_0}(t, s)$ is continuous on $\mcal{S}(\mf{R}^d)$ uniformly in $z_0$ and $|t-s| \le \delta_0$. \end{lma} \begin{proof} The formula for $U(t, s) \pi(z_0^s) \phi$ is verified by direct computation. To obtain the last statement, we notice that $\|\partial^k_x V^{z_0}\|_{L^\infty} = \| \partial^k_x V \|_{L^\infty}$ for $k \ge 2$, and appeal to the last part of Theorem~\ref{t:propagator}. \end{proof} \begin{rmks}\leavevmode\\[-5mm] \begin{itemize} \item Lemma~\ref{l:galilei} reduces to~\eqref{e:galilei_free_particle} when $V = 0$ and gives analogous formulas when $V$ is a polynomial of degree at most $2$. When $V = Ex$ is the potential for a constant electric field, we recover the Avron-Herbst formula by setting $z_0 = 0$ (hence $V^{z_0} = 0$). For $V = \sum_j \omega_j x_j^2$, we get the generalized Galilean symmetry mentioned in the introduction.
\item Direct computation shows that the above identity extends to semilinear equations of the form \[ i\partial_t u = (-\tfrac{1}{2}\Delta + V) u \pm |u|^p u. \] That is, if $u$ is the solution with $u(0) = \pi(z_0) \psi$, then \[ u(t) = e^{i\int_0^t \fr{1}{2} |\xi_0^\tau|^2 - V(\tau, x_0^\tau) \, d\tau} \pi(z_0^t) u_{z_0}(t) \] where $u_{z_0}$ solves \[ i\partial_t u_{z_0} = \bigl(-\tfrac{1}{2} \Delta + V^{z_0} \bigr) u_{z_0} \pm |u_{z_0}|^p u_{z_0} \quad\text{with}\quad u_{z_0}(0) = \psi, \] where the potential $V^{z_0}$ is as defined in Lemma~\ref{l:galilei}. \item One can combine this lemma with a wavepacket decomposition to represent a solution $U(t, 0) f$ as a sum of wavepackets \[ U(t, 0)f = \int_{z_0 \in T^* \mf{R}^d} \langle f , \psi_{z_0} \rangle U(t, 0)(\psi_{z_0}) \, dz_0, \] where the oscillation of each wavepacket $U(t, 0)(\psi_{z_0})$ is largely captured in the phase \[ (x-x_0^t)\xi_0^t + \int_0^t \fr{1}{2}|\xi_0^\tau|^2 - V(\tau, x_0^\tau) \, d\tau. \] Our arguments will make essential use of this information. Analogous wavepacket representations have been constructed by Koch and Tataru for a broad class of pseudodifferential operators; see~\cite[Theorem 4.3]{KochTataru2005} and its proof. \end{itemize} \end{rmks} \section{Locating a length scale} \label{s:inv_hls} The first step in the proof of Theorem~\ref{t:inv_str} is to identify both a characteristic time scale and temporal center for our sought-after bubble of concentration. Recall that the usual $TT^*$ proof of the non-endpoint Strichartz inequality combines the dispersive estimate with the Hardy-Littlewood-Sobolev inequality in time. By using a refinement of the latter, one can locate a time interval on which the solution is large in a non-scale invariant spacetime norm. 
\begin{prop} \label{p:inv_hls} Consider a pair $(q, r)$ in Corollary~\ref{c:strichartz} with $2 < q < \infty$, and suppose $u = U(t, 0)f$ solves \[ i\partial_t u = \bigl ( -\tfrac{1}{2}\Delta + V \bigr )u \quad\text{with}\quad u(0) = f \in L^2(\mf{R}^d) \] with $\|f\|_{L^2 (\mf{R}^d)} = 1$ and $\|u\|_{L^q_t L^r_x ([-\delta_0, \delta_0] \times \mf{R}^d)} \ge \varepsilon$, where $\delta_0$ is the constant from Theorem~\ref{t:propagator}. Then there is a time interval $J\subset [-\delta_0, \delta_0]$ such that \[ \|u\|_{L^{q-1}_t L^r_x (J \times \mf{R}^d)} \gtrsim |J|^{\fr{1}{q(q-1)}} \varepsilon^{\fr q{q-2}}. \] \end{prop} \begin{rmk} That this estimate singles out a special length scale is easiest to see when $V = 0$. For ease of notation, suppose $J=[0,1]$ in Proposition~\ref{p:inv_hls}. As $\|u\|_{L^q_t L^r_x(\mf{R} \times \mf{R}^d)} \lesssim \|f\|_{L^2} < \infty$, for each $\eta > 0$ there exists $T > 0$ so that (suppressing the region of integration in $x$) $\|u\|_{L^q_t L^r_x (\{ |t| \geq T\})} < \eta$. With $u_{\lambda}(t, x) = \lambda^{-d/2} u(\lambda^{-2}t, \lambda^{-1} x) = e^{\fr{it\Delta}{2}} (f_\lambda)$ where $f_\lambda = \lambda^{-d/2} f(\lambda^{-1} \cdot)$, we have \[ \begin{split} \|u_{\lambda} \|_{L^{q-1}_t L^r_x ([0, 1])} &\le \| u_{\lambda}\|_{L^{q-1}_t L^r_x ([0, \lambda^2 T]) }+ \| u_{\lambda} \|_{L^{q-1}_t L^r_x ([\lambda^2 T, 1])} \\ &\le (\lambda^2 T)^{\fr{1}{q(q-1)}} \| u_\lambda \|_{L^q_t L^r_x ( [0, \lambda^2 T])} + \|u_\lambda \|_{L^q_t L^r_x ([ \lambda^2 T, 1])}\\ &\le (\lambda^2 T)^{\fr{1}{q(q-1)}} \|u\|_{L^q_t L^r_x} + \eta, \end{split} \] which shows that \[ \| u_{\lambda} \|_{L^{q-1}_t L^r_x ([0, 1] \times \mf{R}^d)} \to 0 \text{ as } \lambda \to 0. \] Thus, Proposition~\ref{p:inv_hls} shows that concentration of the solution cannot occur at arbitrarily small scales. Similar considerations preclude $\lambda \to \infty$. \end{rmk} We shall need the following inverse Hardy-Littlewood-Sobolev estimate.
For $0 < s < d$, denote by $I_s f(x) = (|D|^{-s} f) (x) = c_{s,d} \int_{\mf{R}^d} \fr{f(x-y)}{|y|^{d-s}} \, dy$ the fractional integration operator. \begin{lma}[Inverse HLS] Fix $d\geq 1$, $0<\gamma<d$, and $1<p<q<\infty$ obeying $\tfrac dp = \tfrac dq+d - \gamma$. If $f\in L^p({\mathbb{R}}^d)$ is such that $$ \| f \|_{L^p({\mathbb{R}}^d)} \leq 1 \qtq{and} \| |x|^{-\gamma} * f \|_{L^q} \geq \varepsilon, $$ then there exists $r>0$ and $x_0\in {\mathbb{R}}^d$ so that \begin{equation}\label{E:inv con} \int_{r<|x-x_0|<2r} |f(x)|\,dx \gtrsim \varepsilon^{\frac{q}{q-p}} r^{\frac{d}{p'}}. \end{equation} \end{lma} \begin{proof} Our argument is based on the proof of the usual Hardy--Littlewood--Sobolev inequality due to Hedberg \cite{Hedberg1972}; see also \cite[\S VIII.4.2]{stein}. Suppose, in contradiction to \eqref{E:inv con}, that \begin{equation}\label{E:inv con'} \sup_{x_0,r} \ r^{-\frac{d}{p'}} \!\int_{r<|x-x_0|<2r} |f(x)|\,dx \leq \eta \varepsilon^{\frac{q}{q-p}} \end{equation} for some small $\eta=\eta(d,p,\gamma)>0$ to be chosen later. As in the Hedberg argument, a layer-cake decomposition of $|y|^{-\gamma}$ yields the following bound in terms of the maximal function: $$ \int_{|y|\leq r} |f(x-y)| |y|^{-\gamma} \,dy \lesssim r^{d-\gamma} [Mf](x) . $$ On the other hand, summing \eqref{E:inv con'} over dyadic shells yields $$ \int_{|y|\geq r} |f(x-y)| |y|^{-\gamma} \,dy \lesssim \varepsilon^{\frac{q}{q-p}} \,\eta\, r^{\frac d{p'}-\gamma} . $$ Combining these two estimates and optimizing in $r$ then yields $$ \biggl| \int_{{\mathbb{R}}^d} |f(x-y)| |y|^{-\gamma} \,dy \biggr| \lesssim \varepsilon \eta^{1-\frac pq} | [Mf](x) |^{\frac pq} $$ and hence $$ \| |x|^{-\gamma} * f \|_{L^q} \lesssim \varepsilon \eta^{1-\frac pq} \| Mf \|_{L^p({\mathbb{R}}^d)}^{\frac pq} \lesssim \varepsilon \eta^{1-\frac pq} . $$ Choosing $\eta>0$ sufficiently small then yields the sought-after contradiction.
\end{proof} \begin{proof}[Proof of Proposition~\ref{p:inv_hls}] Define the map $T: L^2_x \to L^q_t L^r_x$ by $Tf(t) = U(t, 0) f$, which by Corollary~\ref{c:strichartz} is continuous. By duality, $\varepsilon \le \| u \|_{L^q_t L^r_x}$ implies $\varepsilon \le \|T^* \phi \|_{L^2_x}$, where \[ \phi = \fr{ |u|^{r-2} u}{ \|u(t)\|_{L^r_x}^{r-1}} \fr{ \|u(t) \|_{L^r_x}^{q-1}}{ \|u \|_{L^q_t L^r_x}^{q-1}} \] satisfies $\| \phi\|_{L^{q'}_t L^{r'}_x} = 1$ and \[ T^* \phi = \int U(0, s) \phi(s) \, ds. \] By the dispersive estimate of Corollary~\ref{c:strichartz}, \begin{align*} \varepsilon^2&\le \langle T^* \phi, T^* \phi \rangle_{L^2_x} = \langle \phi, T T^* \phi \rangle_{L^2_{t,x}} = \iiint \overline{\phi(t)} U(t,s) \phi(s) \,dx ds dt \lesssim \iint \fr{ G(t) G(s)} { |t-s|^{2/q}} \, dsdt\\ & \lesssim \|G\|_{L_t^{q'}} \||t|^{-\frac2q}*G\|_{L_t^q}, \end{align*} where $G(t) = \| \phi(t)\|_{L^{r'}_x}$. Appealing to the previous lemma with $p = q'$, we derive \begin{align*} \sup_J |J|^{-\frac1q}\|G\|_{L^1_t(J)}\gtrsim \varepsilon^{\frac{2(q-1)}{q-2}}, \end{align*} which, upon rearranging, yields the claim. \end{proof} \section{A refined $L^4$ estimate} \label{s:L4} Now we specialize to the one-dimensional setting $d = 1$. We are particularly interested in the Strichartz exponents \[ (q_0, r_0) = \Bigl( \fr{ 7 + \sqrt{33} }{2}, \fr{5 + \sqrt{33}}{2} \Bigr) \] determined by the conditions $\tfr{2}{q_0} + \tfr{1}{r_0} = \tfr{1}{2}$ and $q_0-1 = r_0$. Note that $5 < r_0 < 6$. Suppose $\|f\|_{L^2} = A $ and that $u = U(t, 0) f$ satisfies $\| u \|_{L^6_{t,x} ( [-\delta_0, \delta_0] \times \mf{R})} = \varepsilon$. 
Using the inequality $\|u\|_{L^6_{t,x}} \le \| u\|_{L^5_t L^{10}_x}^{1-\theta} \| u\|_{L_t^{q_0} L_x^{r_0}}^\theta$ for some $0 < \theta < 1$, estimating the first factor by Strichartz, and applying Proposition~\ref{p:inv_hls}, we find a time interval $J = [t_0 - \lambda^2, t_0 + \lambda^2]$ such that \[ \| u \|_{L^{q_0-1}_t L^{r_0}_x ( J \times \mf{R})} \gtrsim A |J|^{\fr{1}{q_0(q_0-1)}} \bigl ( \tfrac{\varepsilon}{A} \bigr )^{ \fr{q_0}{\theta(q_0-2)}}. \] Setting \[ u(t, x) = \lambda^{-1/2} u_\lambda (\lambda^{-2} (t-t_0), \lambda^{-1} x), \] we get \[ i\partial_t u_\lambda = (-\tfr{1}{2} \partial^2_x + V_\lambda )u_\lambda \quad\text{with}\quad u_\lambda(0, x) = \lambda^{1/2} u(t_0, \lambda x) \] and $V_\lambda(t, x) = \lambda^2 V(t_0 + \lambda^2 t, \lambda x)$ satisfies the hypotheses~\eqref{e:V_h1} and~\eqref{e:V_h2} for all $0 < \lambda \le 1$. Since $q_0 - 1 = r_0$, a change of variables yields \[ \| u_\lambda \|_{L^{q_0-1}_{t,x} ([-1, 1] \times \mf{R})} \gtrsim A (\tfr{\varepsilon}{A})^{\fr{q_0}{\theta(q_0-2)}}. \] As $4 < q_0-1 < 6$, Theorem~\ref{t:inv_str} will follow by interpolating between the $L^2_x \to L^6_{t,x}$ Strichartz estimate and the following $L^2_x \to L^4_{t,x}$ estimate. Recall that $\psi$ is the test function fixed in the introduction. \begin{prop}\label{P:4.1} Let $V$ be a potential satisfying the hypotheses~\eqref{e:V_h1} and~\eqref{e:V_h2}, and denote by $U_V(t, s)$ the linear propagator. There exists $\delta_0 > 0$ so that if $\eta \in C^\infty_0( (-\delta_0, \delta_0))$, \label{p:refined_L4} \[ \|U_V(t, 0) f\|_{L^4_{t,x}(\eta(t) dx dt)} \lesssim \|f\|_{2}^{1-\beta} \sup_z | \langle \psi_z, f \rangle |^{\beta} \] for some absolute constant $0 < \beta < 1$. \end{prop} Note that this estimate is trivial if the right side is replaced by $\|f\|_2$ since $L^4_{t,x}$ is controlled by $L^2_{t,x}$ and $L^{6}_{t,x}$, which on a compact time interval are bounded above by $\|f\|_2$ by unitarity and Strichartz, respectively.
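For the reader's convenience, we record the algebra behind $(q_0, r_0)$: substituting $r_0 = q_0 - 1$ into the scaling condition $\tfr{2}{q_0} + \tfr{1}{r_0} = \tfr{1}{2}$ and clearing denominators gives \[ 4(q_0 - 1) + 2 q_0 = q_0 (q_0 - 1), \quad\text{i.e.,}\quad q_0^2 - 7 q_0 + 4 = 0, \] whose root larger than $2$ is $q_0 = \tfr{7 + \sqrt{33}}{2} \approx 6.37$, so that $r_0 = q_0 - 1 \approx 5.37$ lies in $(5, 6)$ as claimed.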
\subsection{Proof of Proposition~\ref{p:refined_L4}} \label{s:L4_estimate} We fix the potential $V$ and drop the subscript $V$ from the propagator. It suffices to prove the proposition for $f\in \mathcal S({\mathbb{R}})$. Decomposing $f$ into wavepackets $f = \int_{T^* \mf{R}} \langle f, \psi_z \rangle \psi_z \, dz$ and expanding the $L^4_{t,x}$ norm, we get \[ \begin{split} \| U(t,0) f\|^4_{L^4_{t, x}(\eta(t) dx dt)} \le \int_{ (T^* \mf{R})^4} K(z_1, z_2, z_3, z_4) \prod_{j=1}^4 |\langle f, \psi_{z_j} \rangle| \, dz_1 dz_2 dz_3 dz_4, \end{split} \] where \begin{equation} \label{e:kernel_def} K (z_1,z_2,z_3,z_4):= |\langle U(t,0)( \psi_{z_1}) U(t,0)( \psi_{z_2}), U(t,0) ( \psi_{z_3}) U(t,0) ( \psi_{z_4}) \rangle_{L^2_{t,x}( \eta(t) dx dt)}|. \end{equation} There is no difficulty with interchanging the order of integration as $f$ was assumed to be Schwartz. We claim \begin{prop} \label{p:L2_kernel_bound} For some $0 < \theta < 1$ the kernel \[ K(z_1, z_2, z_3, z_4) \max (\langle z_1 - z_2 \rangle^\theta, \langle z_3 - z_4 \rangle^\theta) \] is bounded as a map on $L^2(T^* \mf{R} \times T^* \mf{R})$. \end{prop} Let us first see how this proposition implies the previous one. Writing $a_z = | \langle f, \psi_z \rangle|$, we have \[ \begin{split} \|U(t,0) f\|_{L^4_{t, x}(\eta(t) dx dt)}^4 &\lesssim \Bigl( \int_{(T^* \mf{R})^2} a_{z_1}^2 a_{z_2}^2 \langle z_1 - z_2 \rangle^{-2\theta} \, dz_1 dz_2 \Bigr)^{1/2} \Bigl( \int_{ (T^* \mf{R})^2} a_{z_3}^2 a_{z_4}^2 \, dz_3 dz_4 \Bigr)^{1/2}\\ &\lesssim \|f\|_{L^2}^2 \Bigl( \int_{(T^* \mf{R})^2} a_{z_1}^2 a_{z_2}^2\langle z_1 - z_2 \rangle^{-2\theta} \, dz_1 dz_2 \Bigr)^{1/2}. \end{split} \] By Young's inequality, the convolution kernel $k(z_1, z_2) = \langle z_1 - z_2 \rangle^{-2\theta}$ is bounded from $L^p_z$ to $L^{p'}_{z}$ for some $p \in (1, 2) $, and the integral on the right is bounded by \[ \Bigl(\int_{T^* \mf{R} } a_{z}^{2p} \, dz \Bigr)^{2/p} \le \| f\|_{L^2}^{4/p} \sup_{z} a_z^{4/p'}. 
\] This yields \[ \|U(t,0) f\|_{L^4_{t, x}(\eta(t) dx dt)} \lesssim \| f\|_{L^2}^{\fr{1}{2} + \fr{1}{2p}} \sup_z a_z^{\fr{1}{2p'}}, \] which settles Proposition~\ref{P:4.1} with $\beta=\fr{1}{2p'}$. It remains to prove Proposition~\ref{p:L2_kernel_bound}. Lemma~\ref{l:galilei} implies that $ U(t, 0)(\psi_{z_j})(x) = e^{i\alpha_j} [U_j(t, 0) \psi](x - x_j^t)$, where \[ \alpha_j(t, x) = (x - x_j^t) \xi_j^t + \int_0^t \fr{1}{2} |\xi_j^\tau|^2 - V(\tau, x_j^\tau) \, d\tau \] and $U_j$ is the propagator for $H_j = -\tfr{1}{2}\partial_x^2 + V_j(t, x)$ with \begin{equation} \label{e:Vj} V_j(t, x) = x^2 \int_0^1 (1-s)\partial^2_x V (t, x_j^t + sx) \, ds. \end{equation} The envelopes $[U_j(t, 0) \psi] ( x - x_j^t)$ concentrate along the classical trajectories $t \mapsto x_j^t$: \begin{equation} \label{e:wavepacket_schwartz_tails} |\partial_x^k [U_j(t, 0) \psi] (x - x_j^t)| \lesssim_{k, N} \langle x- x_j^t \rangle^{-N}. \end{equation} The kernel $K$ therefore admits the crude bound \[ K(\vec{z}) \lesssim_N \int \prod_{j=1}^4 \langle x - x_j^t \rangle^{-N} \, \eta(t) dx dt \lesssim \max ( \langle z_1 - z_2 \rangle, \langle z_3 - z_4 \rangle)^{-1}, \] and Proposition~\ref{p:L2_kernel_bound} will follow from \begin{prop} \label{p:L2_kernel_bound_delta} For $\delta > 0$ sufficiently small, the operator with kernel $K^{1-\delta}$ is bounded on $L^2(T^* \mf{R} \times T^* \mf{R})$. \end{prop} \begin{proof} We partition the 4-particle phase space $(T^* \mf{R})^4$ according to the degree of interaction between the particles. Define \[ \begin{split} E_0 &=\{ \vec{z} \in (T^* \mf{R})^4 : \min_{|t| \le \delta_0} \max_{k, k'} |x_k^t - x_{k'}^t| \le 1\}\\ E_m &= \{ \vec{z} \in (T^* \mf{R})^4 : 2^{m-1} < \min_{|t| \le \delta_0} \max_{k, k'} |x_k^t - x_{k'}^t| \le 2^m \}, \ m \ge 1, \end{split} \] and decompose \[ K = K \mr{1}_{E_0} + \sum_{m \ge 1} K \mr{1}_{E_m} = K_0 + \sum_{m \ge 1} K_m. \] Then \[ K^{1-\delta} = K_0^{1-\delta} + \sum_{m \ge 1} K_m^{1-\delta}. 
\] The $K_0$ term heuristically corresponds to the 4-tuples of wavepackets that all collide at some time $t \in [-\delta_0, \delta_0]$ and will be the dominant term thanks to the decay in~\eqref{e:wavepacket_schwartz_tails}. We will show that for any $m\geq 0$ and any $N > 0$, \begin{equation} \label{e:Km_L2_bound} \|K_m^{1-\delta} \|_{L^2 \to L^2} \lesssim_N 2^{-mN}, \end{equation} which immediately implies the proposition upon summing. In turn, this will be a consequence of the following pointwise bound: \begin{lma} \label{l:K0_ptwise} For each $m\geq 0$ and $\vec{z} \in E_m$, let $t(\vec{z})$ be a time witnessing the minimum in the definition of $E_m$. Then for any $N_1, N_2 \ge 0$, \begin{equation} \nonumber \begin{split} &| K_m(\vec{z}) | \lesssim_{N_1,N_2} 2^{-mN_1} \min \biggl[\fr{| \xi^{t(\vec{z})}_1 + \xi^{t(\vec{z})}_2 - \xi^{t(\vec{z})}_3 - \xi^{t(\vec{z})}_4|^{-N_2}}{1 + |\xi^{t(\vec{z})}_1 - \xi^{t(\vec{z})}_2| + |\xi^{t(\vec{z})}_3 - \xi^{t(\vec{z})}_4|}, \fr{ 1 + |\xi^{t(\vec{z})}_1 - \xi^{t(\vec{z})}_2| + |\xi^{t(\vec{z})}_3 - \xi^{t(\vec{z})}_4|}{ \bigl | (\xi^{t(\vec{z})}_1 - \xi^{t(\vec{z})}_2)^2 - (\xi^{t(\vec{z})}_3 - \xi^{t(\vec{z})}_4)^2 \bigr|^2} \biggr]. \end{split} \end{equation} \end{lma} Deferring the proof for the moment, let us see how Lemma~\ref{l:K0_ptwise} implies \eqref{e:Km_L2_bound}. By Schur's test and symmetry, it suffices to show that \begin{align} \label{e:Km_schur} \sup_{z_3,z_4}\int K_m(z_1, z_2, z_3, z_4)^{1-\delta} \, dz_1 dz_2\lesssim_N 2^{-mN}, \end{align} where the supremum is taken over all $(z_3, z_4)$ in the image of the projection $E_m \subset (T^* \mf{R})^4 \to T^*\mf{R}_{z_3} \times T^*\mf{R}_{z_4}$. Fix such a pair $(z_3,z_4)$ and let \[ E_m(z_3, z_4) = \{ (z_1, z_2) \in (T^*\mf{R})^2 : (z_1, z_2, z_3, z_4) \in E_m\}. \] Choose $t_1 \in [-\delta_0, \delta_0]$ minimizing $|x_3^{t_1} - x_4^{t_1}|$; the definition of $E_m$ implies that $|x_3^{t_1} - x_4^{t_1}| \le 2^{m}$. Suppose $(z_1, z_2) \in E_m(z_3, z_4)$. 
By Lemma~\ref{l:collisions}, any ``collision time'' $t(z_1, z_2, z_3, z_4)$ must belong to the interval \[ I = \bigl\{ t \in [-\delta_0, \delta_0] : |t - t_1| \lesssim \min\Bigl (1, \fr{2^m}{|\xi_3^{t_1} - \xi_4^{t_1} |}\Bigr ) \bigr\}, \] and for such $t$ one has \[ \begin{split} |x_3^t - x_4^t| \lesssim 2^{m}, \quad | \xi_3^t - \xi_4^t - (\xi_3^{t_1} - \xi_4^{t_1})| \lesssim \min \Bigl ( 2^m, \fr{2^{2m}}{ |\xi_3^{t_1} - \xi_4^{t_1}|} \Bigr ). \end{split} \] The contribution of each $(z_1, z_2) \in E_m(z_3, z_4)$ to the integral~\eqref{e:Km_schur} will depend on their relative momenta at the collision time. We organize the integration domain $E_m(z_3, z_4)$ accordingly. Write $Q_{\xi} = (0, \xi) + [-1, 1]^2 \subset T^* \mf{R}$, and denote by $\Phi(t, s)$ the classical propagator for the Hamiltonian \[ h = \tfrac{1}{2}|\xi|^2 + V(t, x). \] Using the shorthand $z^t = \Phi(t, 0)(z)$, for $\mu_1, \mu_2\in {\mathbb{R}}$ we define \[ Z_{\mu_1, \mu_2} = \bigcup_{t \in I} (\Phi(t, 0) \otimes \Phi(t, 0))^{-1} \Bigl( \fr{z_3^t + z_4^t}{2} + 2^{m} Q_{\mu_1} \Bigr) \times \Bigl( \fr{z_3^t + z_4^t}{2} + 2^{m} Q_{\mu_2} \Bigr), \] where $\Phi(t, 0) \otimes \Phi(t, 0)(z_1, z_2) = (z_1^t, z_2^t)$ is the product flow on $T^* \mf{R} \times T^* \mf{R}$. This set is depicted schematically in Figure~\ref{f:fig} when $m = 0$. It consists of the pairs of wavepackets $(z_1, z_2)$ with momenta $(\mu_1, \mu_2)$ relative to the wavepackets $(z_3, z_4)$ at a time when all four wavepackets interact. We have \[ E_m(z_3, z_4) \subset \bigcup_{\mu_1, \mu_2 \in \mf{Z}} Z_{\mu_1, \mu_2}. \] \begin{figure} \includegraphics[scale=1]{drawing.eps} \caption{$Z_{\mu_1, \mu_2}$ comprises all $(z_1, z_2)$ such that $z_1^t$ and $z_2^t$ belong to the depicted phase space boxes for $t$ in the interval $I$.} \label{f:fig} \end{figure} \begin{lma} \label{l:measure} $|Z_{\mu_1, \mu_2}| \lesssim 2^{4m} \max(1,|\mu_1|, |\mu_2|) |I|$, where $| \cdot |$ on the left denotes Lebesgue measure on $(T^* \mf{R})^2$. 
\end{lma} \begin{proof} Without loss of generality, assume $|\mu_1| \ge |\mu_2|$. Partition the interval $I$ into subintervals of length $|\mu_1|^{-1}$ if $\mu_1\neq 0$ and into subintervals of length $1$ if $\mu_1=0$. For each $t'$ in the partition, Lemma~\ref{l:technical_lma} implies that for some constant $C > 0$ we have \[ \begin{split} \bigcup_{|t-t'| \le \min(1,|\mu_1|^{-1})} \Phi(t, 0)^{-1} \Bigl( \fr{z_3^t+z_4^t}{2} + 2^{m}Q_{\mu_1} \Bigr) &\subset \Phi(t', 0)^{-1} \Bigl ( \fr{z_3^{t'} + z_4^{t'}}{2} + C2^{m} Q_{\mu_1} \Bigr)\\ \bigcup_{|t-t'| \le \min(1,|\mu_1|^{-1})} \Phi(t, 0)^{-1} \Bigl( \fr{z_3^t+z_4^t}{2} + 2^{m}Q_{\mu_2} \Bigr) &\subset \Phi(t', 0)^{-1} \Bigl ( \fr{z_3^{t'} + z_4^{t'}}{2} + C2^{m} Q_{\mu_2} \Bigr), \end{split} \] and so \[ \begin{split} &\bigcup_{|t-t'| \le \min(1,|\mu_1|^{-1})} (\Phi(t, 0) \otimes \Phi(t, 0))^{-1} \Bigl( \fr{z_3^t + z_4^t}{2} + 2^{m} Q_{\mu_1} \Bigr) \times \Bigl( \fr{z_3^t + z_4^t}{2} + 2^{m} Q_{\mu_2} \Bigr)\\ &\subset (\Phi(t', 0) \otimes \Phi(t', 0))^{-1} \Bigl ( \fr{z_3^{t'} + z_4^{t'}}{2} + C2^{m} Q_{\mu_1} \Bigr) \times \Bigl ( \fr{z_3^{t'} + z_4^{t'}}{2} + C2^{m} Q_{\mu_2} \Bigr). \end{split} \] By Liouville's theorem, the right side has measure $O(2^{4m})$ in $(T^* \mf{R})^2$. The claim follows by summing over the partition. \end{proof} For each $(z_1, z_2) \in E_m(z_3, z_4) \cap Z_{\mu_1, \mu_2}$, the definition of $Z_{\mu_1, \mu_2}$ provides some $t \in I$ with $z_j^t \in \tfr{z_3^t + z_4^t}{2} + 2^m Q_{\mu_j}$ for $j = 1, 2$. As \[ \xi_j^{t} = \fr{\xi_3^t + \xi_4^t}{2} + \mu_j + O(2^m), \ j = 1, 2, \] the second assertion of Lemma~\ref{l:collisions} implies that \begin{gather*} \xi_1^{t(\vec{z})} + \xi_2^{t(\vec{z})} - \xi_3^{t(\vec{z})} - \xi_4^{t(\vec{z})} = \mu_1 + \mu_2 + O(2^m)\\ \xi_1^{t(\vec{z})} - \xi_2^{ t(\vec{z})} = \mu_1 - \mu_2 + O(2^m). 
\end{gather*} Hence by Lemma~\ref{l:K0_ptwise}, \begin{equation} \nonumber \begin{split} | K_m | &\lesssim_{N} 2^{-3mN} \min \biggl[\fr{\langle \mu_1 + \mu_2 + O(2^m)\rangle^{-N}}{1 + \bigl | |\mu_1 - \mu_2| + |\xi^{t_1}_3 - \xi^{t_1}_4| + O(2^m) \bigr |}, \fr{ 1 + |\mu_1 - \mu_2| + |\xi^{t_1}_3 - \xi^{t_1}_4| + O(2^m)}{ \bigl | (\mu_1 - \mu_2)^2 - (\xi^{t_1}_3 - \xi^{t_1}_4)^2 + O(2^{2m}) \bigr|^2} \biggr]\\ &\lesssim_N 2^{(5-2N)m} \min\biggl[ \fr{ \langle \mu_1 + \mu_2 \rangle^{-N}} {1 + |\mu_1 - \mu_2| + |\xi_3^{t_1} - \xi_4^{t_1}| }, \fr{ 1 + |\mu_1 - \mu_2| + |\xi_3^{t_1} - \xi_4^{t_1}|}{ \bigl | (\mu_1 - \mu_2)^2 - (\xi_3^{t_1} - \xi_4^{t_1})^2 \bigr |^2} \biggr]. \end{split} \end{equation} Applying Lemma~\ref{l:measure}, writing $\max(|\mu_1|, |\mu_2|) \le |\mu_1 + \mu_2| + |\mu_1 - \mu_2|$, and absorbing $|\mu_1 + \mu_2|$ into the factor $\langle \mu_1 + \mu_2 \rangle^{-N}$ by adjusting $\delta$, \[ \begin{split} &\int K_m(z_1, z_2, z_3, z_4)^{1-\delta} \, dz_1 dz_2 \le \sum_{\mu_1, \mu_2 \in \mf{Z}} \int K_m(z_1, z_2, z_3, z_4)^{1-\delta} \mr{1}_{ Z_{\mu_1, \mu_2}}(z_1, z_2) \, dz_1 dz_2\\ &\lesssim \sum_{\mu_1, \mu_2 \in \mf{Z}} 2^{-mN} \min\Bigl( \fr{ \langle \mu_1 + \mu_2 \rangle^{-N}} {1 + |\mu_1 - \mu_2| + |\xi_3^{t_1} - \xi_4^{t_1}| }, \fr{ 1 + |\mu_1 - \mu_2| + |\xi_3^{t_1} - \xi_4^{t_1}|}{ \bigl | (\mu_1 - \mu_2)^2 - (\xi_3^{t_1} - \xi_4^{t_1})^2 \bigr |^2} \Bigr )^{1-\delta} \fr{1 + |\mu_1 - \mu_2|}{1 + |\xi_3^{t_1} - \xi_4^{t_1}|}. \end{split} \] When $|\mu_1 - \mu_2| \le 1$, we choose the first term in the minimum to see that the sum is of size $2^{-mN}$. If $|\mu_1-\mu_2|\geq\max(1, 2|\xi_3^{t_1} - \xi_4^{t_1}|)$, the above expression is bounded by \[ \sum_{\mu_1, \mu_2 \in \mf{Z}} 2^{-mN} \min \Bigl ( \frac{\langle \mu_1 + \mu_2 \rangle^{-N}}{\langle\mu_1-\mu_2\rangle}, \fr{1}{\langle\mu_1-\mu_2\rangle^3} \Bigr )^{1-\delta} \langle\mu_1-\mu_2\rangle\lesssim_N 2^{-mN}. 
\] If instead $1\leq |\mu_1-\mu_2|\leq 2|\xi_3^{t_1} - \xi_4^{t_1}|$, we obtain the bound \[ \sum_{\mu_1, \mu_2 \in \mf{Z}} 2^{-mN} \min \Bigl ( \frac{\langle \mu_1 + \mu_2 \rangle^{-N}}{\langle|\xi_3^{t_1} - \xi_4^{t_1}|\rangle}, \fr{1}{\bigl[|\mu_1-\mu_2|-|\xi_3^{t_1} - \xi_4^{t_1}|\bigr]^2} \Bigr )^{1-\delta} \lesssim_N 2^{-mN}. \] Therefore \[ \int K_m (z_1, z_2, z_3, z_4)^{1-\delta} dz_1 dz_2 \lesssim_N 2^{-mN}, \] which gives \eqref{e:Km_schur}. Modulo Lemma~\ref{l:K0_ptwise}, this completes the proof of Proposition~\ref{p:L2_kernel_bound_delta}. \end{proof} \section{Proof of Lemma~\ref{l:K0_ptwise}} The spatial localization and the definition of $E_m$ immediately imply the cheap bound \[ |K_m(\vec{z})| \lesssim_N 2^{-mN}. \] However, we can often do better by exploiting oscillation in space and time. As the argument is essentially the same for all $m$, we shall for simplicity take $m = 0$ in the sequel. We shall also assume that $t(\vec{z}) = 0$ as the general case involves little more than replacing all instances of $\xi$ in the sequel by $\xi^{t(\vec{z})}$. By Lemma~\ref{l:galilei}, \begin{equation} \nonumber K_0(\vec{z}) = \Bigl | \iint e^{i\Phi} \prod_{j=1}^4 U_j(t, 0) \psi (x - x_j^t) \, \eta(t) dx dt \Bigr |, \end{equation} where \[ \Phi = \sum_j \sigma_j \Bigl[ (x - x_j^t)\xi_j^t + \int_0^t \fr{1}{2}|\xi_j^\tau|^2 - V(\tau, x_j^\tau) \, d\tau \Bigr] \] with $\sigma = (+, +, -, -)$ and $\prod_{j=1}^4 c_j := c_1 c_2 \overline{c}_3 \overline{c}_4$. To save space we abbreviate $U_j(t, 0)$ as $U_j$. Let $1 =\sum_{\ell \ge 0} \theta_{\ell}$ be a partition of unity such that $\theta_0$ is supported in the unit ball and $\theta_{\ell}$ is supported in the annulus $\{ 2^{\ell-1} < |x| < 2^{\ell + 1} \}$. Also choose $\chi \in C^\infty_0$ equal to $1$ on $|x| \le 8$. 
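For concreteness, one standard choice of such a partition of unity (this explicit construction is our own illustration; any family with the stated supports and symbol-type derivative bounds serves equally well) is the telescoping one: fix $\varphi \in C^\infty_0$ with $\varphi = 1$ on $|x| \le 1$ and $\varphi = 0$ for $|x| \ge 2$, and set
\[
\theta_0 = \varphi, \qquad \theta_{\ell}(x) = \varphi(2^{-\ell} x) - \varphi(2^{-\ell+1} x), \quad \ell \ge 1.
\]
Then $\sum_{\ell \ge 0} \theta_{\ell} = 1$ pointwise, since the partial sums telescope to $\varphi(2^{-L}x) \to 1$ as $L \to \infty$; each $\theta_{\ell}$ with $\ell \ge 1$ is supported in the annulus $\{ 2^{\ell-1} \le |x| \le 2^{\ell+1} \}$; and $|\partial_x^k \theta_{\ell}| \lesssim_k 2^{-k\ell}$. The last bound is what renders derivatives falling on the cutoffs $\theta_{\ell_j}(x - x_j^t)$ harmless in the integrations by parts below: each such derivative costs only a factor of $2^{-\ell_j}$.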
Further bound $K_0 \le \sum_{\vec{\ell}} K_0^{\vec{\ell}}$, where \begin{equation} \nonumber K_0^{\vec{\ell}}(\vec z) = \Bigl | \iint e^{i\Phi} \prod_{j} U_j \psi(x - x_j^t) \theta_{\ell_j}(x - x_j^t) \, \eta(t) dx dt \Bigr |. \end{equation} Fix $\vec{\ell}$ and write $\ell^* = \max \ell_j$. By Lemma~\ref{l:collisions}, the integrand is nonzero only in the spacetime region \begin{equation} \label{e:support} \{(t, x) : |t| \lesssim \min(1, \tfr{2^{\ell^*} }{ \max |\xi_j - \xi_k|})\quad\text{and}\quad |x-x_j^t| \lesssim 2^{\ell_j} \}, \end{equation} and for all $t$ subject to the above restriction we have \begin{equation} \label{e:xi_constancy} |x_j^t - x_k^t | \lesssim 2^{\ell^*}\quad\text{and}\quad |\xi_j^t - \xi_k^t - (\xi_j-\xi_k)| \lesssim \min\bigl( 2^{\ell^*}, \tfrac{2^{2\ell^*}}{\max |\xi_j - \xi_k|}\bigr). \end{equation} We estimate $K_0^{\vec{\ell}}$ using integration by parts. The relevant derivatives of the phase function are \begin{equation} \partial_x \Phi = \sum_j \sigma_j \xi_j^t, \qquad \partial^2_x \Phi = 0, \qquad -\partial_t \Phi = \sum_j \sigma_j h(t, z_j^t) + \sum_j \sigma_j (x - x_j^t) \partial_x V (t, x_j^t). \end{equation} Integrating by parts repeatedly in $x$ and using \eqref{e:wavepacket_schwartz_tails}, for any $N \ge 0$, we get \begin{equation} \label{e:x-int_by_parts} \begin{split} |K_0^{\vec{\ell}}(\vec z)| &\lesssim_N \int |\xi_1^t + \xi_2^t - \xi_3^t - \xi_4^t|^{-N} \bigl | \partial_x^N \prod U_j \psi (x - x_j^t) \theta_{\ell_j} (x - x_j^t) \bigr | \, \eta(t) dx dt \\ &\lesssim_N \fr{2^{-\ell^* N} \langle \xi_1 + \xi_2 - \xi_3 - \xi_4 \rangle^{-N} }{1 + |\xi_1 - \xi_2| + |\xi_3 - \xi_4|}, \end{split} \end{equation} where we have used~\eqref{e:xi_constancy} to replace $ \xi_1^t + \xi_2^t - \xi_3^t - \xi_4^t$ with $\xi_1 + \xi_2 - \xi_3 - \xi_4 + O(2^{\ell^*})$. We can also exhibit gains from oscillation in time. 
Naively, one might integrate by parts using the differential operator $\partial_t$, but better decay can be obtained by accounting for the bulk motion of the wavepackets in addition to the phase. If one pretends that the envelope $U_j \psi( x - x_j^t) ``\approx" \phi(x - x_j^t)$ is simply transported along the classical trajectory, then \begin{align*} (\partial_t + \xi_j^t \partial_x)U_j \psi(x - x_j^t) ``\approx" (-\xi_j^t + \xi_j^t) \phi'(x-x_j^t) = 0. \end{align*} Motivated by this heuristic, we introduce a vector field adapted to the average bicharacteristic for the four wavepackets. This will have the greatest effect when the wavepackets all follow nearby bicharacteristics; when they are far apart in phase space, we can exploit the strong spatial localization and the fact that two wavepackets widely separated in momentum will interact only for a short time. Define \begin{gather*} \overline{x^t} = \tfrac{1}{4} \sum_j x_j^t, \qquad \overline{\xi^t} = \tfrac{1}{4}\sum_j \xi_j^t, \qquad x_j^t = \overline{x^t} + \overline{x^t}_j, \qquad \xi_j^t = \overline{\xi^t} + \overline{\xi^t}_j. \end{gather*} The variables $(\overline{x^t}_j, \overline{\xi^t}_j)$ describe the location of the $j$th wavepacket in phase space relative to the average $(\overline{x^t}, \overline{\xi^t})$; see Figure~\ref{f:fig2}. \begin{figure} \includegraphics[scale=0.7]{drawing_2.eps} \caption{Phase space coordinates relative to the ``center of mass''.} \label{f:fig2} \end{figure} We have \begin{equation} \label{e:cm_derivs} \begin{split} \frac{d}{dt} \overline{x^t}_j &= \overline{\xi^t}_j = O( \max_{j, k} |\xi_j^t - \xi_k^t|) = O\Bigl( |\xi_j - \xi_k| + \min\bigl(2^{\ell^*}, \tfrac{ 2^{2\ell^*}}{ \max |\xi_j - \xi_k|} \bigr)\Bigr) \\ \frac{d}{dt} \overline{\xi^t}_j &= \tfrac{1}{4} \sum_k \partial_x V(t, x^t_k) - \partial_x V(t, x^t_j)\\ &= \tfrac{1}{4} \sum_k (x^t_k - x^t_j) \int_0^1 \partial^2_x V(t, (1-\theta) x_j^t + \theta x_k^t) \, d\theta\\ &= O(2^{\ell^*}). 
\end{split} \end{equation} Note that \begin{equation} \label{e:cm_var_max} \max_j |\overline{x^t}_j| \sim \max_{j, k} |x^t_j - x^t_k|, \quad \max_{j} | \overline{\xi^t}_j | \sim \max_{j, k} | \xi^t_j - \xi^t_k|. \end{equation} Consider the operator \[ D = \partial_t + \overline{\xi^t} \partial_x. \] We compute \begin{equation} \nonumber \begin{split} -D\Phi &= \sum \sigma_j h(t, z_j^t) + \sum \sigma_j [(x - x_j^t) \partial_x V(t, x_j^t) - \overline{\xi^t} \xi_j^t ]\\ &= \tfrac{1}{2}\sum \sigma_j |\overline{\xi^t}_j|^2 + \sum \sigma_j \bigl[ V(t, x_j^t) + (x-x_j^t) \partial_x V (t, x_j^t)\bigr]. \end{split} \end{equation} This is more transparent when expressed in the relative variables $\overline{x^t}_j$ and $\overline{\xi^t}_j$. Each term in the second sum can be written as \begin{align*} V(t, \overline{x^t} &+ \overline{x^t}_j) + (x - x_j^t) \partial_x V(t,\overline{x^t} + \overline{x^t}_j)\\ &= V(t, \overline{x^t} + \overline{x^t}_j) - V(t, \overline{x^t}) -\overline{x^t}_j \partial_x V(t, \overline{x^t}) + V(t, \overline{x^t}) + \overline{x^t}_j \partial_x V (t, \overline{x^t})+ (x-x_j^t) \partial_x V(t, \overline{x^t})\\ &\quad+ (x-x_j^t) ( \partial_x V(t, \overline{x^t} + \overline{x^t}_j)- \partial_x V(t, \overline{x^t}) )\\ &= V^{\overline{z}} (t, \overline{x^t}_j) + V(t, \overline{x^t}) +(x-x_j^t) [\partial_x V^{\overline{z}}](t,\overline{x^t}_j) +(x-\overline{x^t}) \partial_x V(t, \overline{x^t}), \end{align*} where \begin{equation} \label{e:V_cm} V^{\overline{z}}(t, x) = V(t, \overline{x^t} + x) - V(t, \overline{x^t}) - x \partial_x V(t, \overline{x^t}) = x^2 \int_0^1 (1-s)\partial^2_x V(t, \overline{x^t} + s x) \, ds. \end{equation} The terms without the subscript $j$ cancel upon summing, and we obtain \begin{equation} \label{e:vf_deriv} -D\Phi = \tfrac{1}{2} \sum \sigma_j |\overline{\xi^t}_j|^2 + \sum \sigma_j [V^{\bar{z}}(t, \overline{x^t}_j) + (x-x_j^t) [\partial_x V^{\overline{z}}](t, \overline{x^t}_j)]. 
\end{equation} Therefore the contribution to~$D\Phi$ from $V$ depends essentially only on the differences $x^t_j - x^t_k$. Invoking \eqref{e:support}, \eqref{e:xi_constancy}, and~\eqref{e:cm_var_max}, we see that the second sum is at most $O(2^{2\ell^*})$. Note also that \[ (\overline{\xi^t}_j)^2 = (\overline{\xi}_j)^2 + O(2^{2\ell^*}), \] as can be seen via \eqref{e:cm_derivs}, the fundamental theorem of calculus, and the time restriction \eqref{e:support}. It follows that if \begin{equation} \label{e:energy_cond} \Bigl | \sum_j \sigma_j ( \overline{\xi}_j)^2 \Bigr | \ge C \cdot 2^{2\ell^*} \end{equation} for some large constant $C > 0$, then on the support of the integrand \begin{equation} \label{e:phasederiv_lower_bound} \begin{split} |D \Phi| \gtrsim \Bigl | \sum_{j} \sigma_j (\overline{\xi}_j)^2 \Bigr | &= \tfrac{1}{2}\bigl | (\overline{\xi}_1 + \overline{\xi}_2)^2 - (\overline{\xi}_3 + \overline{\xi}_4)^2 + (\overline{\xi}_1 - \overline{\xi}_2)^2 - (\overline{\xi}_3 - \overline{\xi}_4)^2\bigr| \\ &=\tfrac12 \bigl | |\xi_1 - \xi_2|^2 - |\xi_3 - \xi_4|^2 \bigr |, \end{split} \end{equation} where the last equality follows from the fact that $\overline{\xi}_1 + \overline{\xi}_2 + \overline{\xi}_3 + \overline{\xi}_4 = 0$. 
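To aid the reader, we unpack the two facts just used; this is only bookkeeping with the definitions above. First, since $\frac{d}{dt} \overline{\xi^t}_j = O(2^{\ell^*})$ by \eqref{e:cm_derivs} and $|t| \lesssim \min(1, 2^{\ell^*}/\max|\xi_j - \xi_k|)$ by \eqref{e:support}, the fundamental theorem of calculus gives $\overline{\xi^t}_j = \overline{\xi}_j + O(2^{\ell^*}|t|)$, whence
\[
(\overline{\xi^t}_j)^2 = (\overline{\xi}_j)^2 + O\bigl( |\overline{\xi}_j| \, 2^{\ell^*} |t| \bigr) + O(2^{2\ell^*}|t|^2) = (\overline{\xi}_j)^2 + O(2^{2\ell^*}),
\]
the cross term being $O(2^{2\ell^*})$ because $|\overline{\xi}_j| \lesssim \max_{j,k}|\xi_j - \xi_k|$ by \eqref{e:cm_var_max}. Second, the algebra behind \eqref{e:phasederiv_lower_bound} is the polarization identity
\[
\sum_j \sigma_j (\overline{\xi}_j)^2 = \tfrac{1}{2}\bigl[ (\overline{\xi}_1 + \overline{\xi}_2)^2 + (\overline{\xi}_1 - \overline{\xi}_2)^2 \bigr] - \tfrac{1}{2}\bigl[ (\overline{\xi}_3 + \overline{\xi}_4)^2 + (\overline{\xi}_3 - \overline{\xi}_4)^2 \bigr]:
\]
as the $\overline{\xi}_j$ sum to zero, $\overline{\xi}_1 + \overline{\xi}_2 = -(\overline{\xi}_3 + \overline{\xi}_4)$ and the first terms in the two brackets cancel, while the common mean $\overline{\xi}$ drops out of differences, so $\overline{\xi}_1 - \overline{\xi}_2 = \xi_1 - \xi_2$ and $\overline{\xi}_3 - \overline{\xi}_4 = \xi_3 - \xi_4$.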
The second derivative of the phase is \begin{equation} \nonumber \begin{split} -D^2 \Phi &= \sum \sigma_j \overline{\xi^t}_j \bigl[\tfrac{1}{4}\sum_k \partial_x V (t, x_k^t) - \partial_x V (t, x_j^t) \bigr]+ \sum_j \sigma_j (x -x_j^t) \xi_j^t \partial^2_x V(t, x_j^t) \\ &\quad+ \overline{\xi^t} \sum \sigma_j \partial_x V (t, x_j^t)+ \sum \sigma_j [\partial_t V(t, x_j^t) +(x-x_j^t) \partial_t \partial_x V(t, x_j^t)]\\ &= \sum \sigma_j \overline{\xi^t}_j \bigl[\tfrac{1}{4}\sum_k \partial_x V(t, x_k^t) - \partial_x V(t, x_j^t) \bigr]+ \sum \sigma_j (x - x_j^t) \overline{\xi^t}_j \partial_x^2 V(t, x_j^t)\\ &\quad+ \sum \sigma_j [\partial_t V (t, x_j^t) + (x-x_j^t) \partial_t \partial_x V (t, x_j^t)] + \overline{\xi^t} \sum \sigma_j[ \partial_x V(t, x_j^t) +(x-x_j^t) \partial^2_x V (t, x_j^t)]. \end{split} \end{equation} We rewrite the last two sums as before to obtain \begin{equation} \label{e:phase_second_deriv} \begin{split} -D^2 \Phi &= \sum \sigma_j \overline{\xi^t}_j \bigl[\tfrac{1}{4} \sum_k \partial_x V(t, x_k^t) - \partial_x V(t, x_j^t) \bigr]+ \sum \sigma_j (x - x_j^t) \overline{\xi^t}_j \partial_x^2 V(t, x_j^t) \\ &\quad+ \sum \sigma_j [(\partial_t V)^{\overline{z}} (t, \overline{x^t}_j) + (x-x_j^t) \partial_x (\partial_t V)^{\overline{z}} (t, \overline{x^t}_j)]\\ &\quad+ \overline{\xi^t} \sum \sigma_j [(\partial_x V)^{\overline{z}} (t, \overline{x^t}_j) + (x-x_j^t) \partial_x (\partial_x V)^{\overline{z}} (t, \overline{x^t}_j)], \end{split} \end{equation} where \[ \begin{split} (\partial_t V)^{\overline{z}}(t, x) &= x^2 \int_0^1 (1-s)\partial^2_x \partial_t V(t, \overline{x^t} + s x) \, ds\\ (\partial_x V)^{\overline{z}}(t, x) &= x^2 \int_0^1 (1-s)\partial^3_x V(t, \overline{x^t} + s x) \, ds. \end{split} \] Assume that~\eqref{e:energy_cond} holds. 
Write $e^{i\Phi} = \frac{D\Phi}{i|D\Phi|^2} \cdot D e^{i\Phi}$ and integrate by parts to get \[ \begin{split} K^{\vec{\ell}}_0(\vec z) &\lesssim \Bigl | \int e^{i\Phi} \frac{ D^2 \Phi}{ (D \Phi)^2} \prod U_j \psi(x - x_j^t) \theta_{\ell_j} (x-x_j^t) \, \eta(t) dx dt \Bigr| \\ &+ \Bigl| \int e^{i\Phi} \frac{1}{(D\Phi)} D \prod U_j \psi(x - x_j^t) \theta_{\ell_j} (x-x_j^t) \, \eta(t) dx dt \Bigr|\\ &\lesssim \Bigl | \int e^{i\Phi} \fr{ D^2 \Phi}{(D \Phi)^2} \prod U_j \psi (x - x_j^t) \theta_{\ell_j} (x - x_j^t) \, \eta(t) dx dt \Bigr |\\ &+ \Bigl | \int e^{i\Phi} \fr{2D^2 \Phi}{(D \Phi)^3} D \prod U_j \psi(x - x_j^t) \theta_{\ell_j} (x - x_j^t) \, \eta(t) dx dt\Bigr | \\ &+ \Bigl | \int e^{i\Phi} \fr{1}{(D \Phi)^2} D^2\prod U_j \psi(x - x_j^t) \theta_{\ell_j} (x - x_j^t) \, \eta(t) dx dt\Bigr |\\ &= I + II + III. \end{split}\] Note that after the first integration by parts, we only repeat the procedure for the second term. The point of this is to avoid higher derivatives of $\Phi$, which may be unacceptably large due to factors of $\overline{\xi^t}$. Consider first the contribution from $I$. Write $I \le I_a + I_b + I_c$, where $I_a$, $I_b$, $I_c$ correspond respectively to the first, second, and third lines in the expression~\eqref{e:phase_second_deriv} for $D^2 \Phi$. 
In view of \eqref{e:wavepacket_schwartz_tails}, \eqref{e:xi_constancy}, \eqref{e:cm_derivs}, and \eqref{e:phasederiv_lower_bound}, we have \[ \begin{split} I_a &\lesssim_N \int \fr{ 2^{\ell^*} \sum_j |\overline{\xi^t}_j|}{ |D \Phi|^2} \prod_j 2^{-\ell_j N} \chi \Bigl ( \fr{x - x_j^t}{2^{\ell_j}} \Bigr ) \, \eta(t) dx dt\\ &\lesssim_N \fr{ 2^{2\ell^*}(1 + \sum |\overline{\xi}_j|)}{ \bigl | (\xi_1 - \xi_2)^2 - (\xi_3 - \xi_4)^2 \bigr |^2} \cdot \int \prod_j 2^{-\ell_j N} \chi\Bigl (\fr{x - x_j^t}{2^{\ell_j}}\Bigr ) \, \eta(t) dx dt\\ &\lesssim_N 2^{-\ell^*N} \cdot \fr{ \langle \xi_1 + \xi_2 - \xi_3 - \xi_4 \rangle + |\xi_1 - \xi_2| + |\xi_3 - \xi_4|}{ \bigl | (\xi_1 - \xi_2)^2 - (\xi_3 - \xi_4)^2 \bigr |^2} \cdot \fr{1}{1 + |\xi_1 - \xi_2| + |\xi_3 - \xi_4|}, \end{split} \] where we have observed that \[ \begin{split} \sum_j |\overline{\xi}_j| &\sim \bigl (\sum_j |\overline{\xi}_j|^2\bigr)^{1/2} \sim \bigl( |\overline{\xi}_1 + \overline{\xi}_2|^2 + |\overline{\xi}_1 - \overline{\xi}_2|^2 + |\overline{\xi}_3 + \overline{\xi}_4|^2 + |\overline{\xi}_3 - \overline{\xi}_4|^2 \bigr)^{1/2}\\ &\lesssim |\xi_1 + \xi_2 - \xi_3 - \xi_4|+ |\xi_1 - \xi_2| + |\xi_3 - \xi_4|. \end{split} \] Similarly, \[ \begin{split} I_b &\lesssim_N \int \fr{2^{2\ell^*}}{|D\Phi|^2} \prod_j 2^{-\ell_j N} \chi \Bigl (\fr{x - x_j^t}{2^{\ell_j}} \Bigr ) \, \eta(t) dx dt\\ &\lesssim_N \fr{2^{-\ell^* N}}{ \bigl | (\xi_1 - \xi_2)^2 - (\xi_3 - \xi_4)^2 \bigr |^2} \cdot \fr{1}{1 + |\xi_1 - \xi_2| + |\xi_3 - \xi_4|}. 
\end{split} \] To estimate $I_c$, use the decay hypothesis $|\partial_x^3 V| \lesssim \langle x \rangle^{-1-\varepsilon}$ to obtain \[ \begin{split} I_c &\lesssim_N \int \fr{2^{2\ell^*} |\overline{\xi^t}|}{|D \Phi|^2} \Bigl(\int_0^1 \sum_j \langle \overline{x^t} + s\overline{x^t}_j \rangle^{-1-\varepsilon} \, ds\Bigr) \prod_j 2^{-\ell_j N} \chi \Bigl (\fr{x - x_j^t}{2^{\ell_j} } \Bigr ) \, \eta(t) dx dt\\ &\lesssim_N \fr{2^{-\ell^* N}}{ |(\xi_1 - \xi_2)^2 - (\xi_3 - \xi_4)^2|^2} \int_0^1\sum_j \int_{|t| \le \delta_0} |\overline{\xi^t}| \langle \overline{x^t} + s \overline{x^t}_j \rangle^{-1-\varepsilon} \, dt ds. \end{split} \] The integral on the right is estimated in the following technical lemma. \begin{lma} \label{l:technical_lma_2} \begin{equation} \nonumber \int_0^1\sum_j \int_{|t| \le \delta_0} |\overline{\xi^t}| \langle \overline{x^t} + s \overline{x^t}_j \rangle^{-1-\varepsilon} \, dt ds = O(2^{(2 + \varepsilon)\ell^*}). \end{equation} \end{lma} \begin{proof} It will be convenient to replace the average bicharacteristic $(\overline{x^t}, \overline{\xi^t})$ with the ray $(\overline{x}^t, \overline{\xi}^t)$ starting from the average initial data. 
We claim that \[ |\overline{x^t} - \overline{x}^t| + | \overline{\xi^t} - \overline{\xi}^t | = O(2^{\ell^*}) \] for the relevant times $t$; indeed, Hamilton's equations imply that \[ \begin{split} \overline{x^t} - \overline{x}^t &= - \int_0^t (t-\tau) \Bigl ( \fr{1}{4} \sum_k \partial_x V(\tau, x_k^\tau) - \partial_x V (\tau, \overline{x}^\tau) \Bigr ) \, d\tau\\ &= - \int_0^t (t-\tau) \Bigl ( \fr{1}{4} \sum_k ( \overline{x^\tau}_k + \overline{x^\tau} - \overline{x}^\tau) \int_0^1 \partial^2_x V (\tau, \overline{x}^\tau + s(x_k^\tau - \overline{x}^\tau)) \, ds \Bigr ) \, d\tau\\ &= -\int_0^t(t-\tau)(\overline{x^\tau} - \overline{x}^\tau) \Bigl[\int_0^1 \fr{1}{4}\sum_k \partial^2_x V (\tau, \overline{x}^\tau + s(x_k^\tau - \overline{x}^\tau)) \, ds \Bigr] \, d\tau + O(2^{\ell^*}t^2), \end{split} \] and we can invoke Gronwall's inequality. Similar considerations yield the bound for $|\overline{\xi^t} - \overline{\xi}^t|$. As also $\overline{x^t}_j = O(2^{\ell^*})$, we are reduced to showing \begin{equation} \label{e:Ic_simplified_integral} \int_{|t| \le \delta_0} |\overline{\xi}^t| \langle \overline{x}^t \rangle^{-1-\varepsilon} \, dt = O(1). \end{equation} Integrating the ODE \[ \tfr{d}{dt} \overline{x}^t = \overline{\xi}^t \quad \text{and}\quad \tfr{d}{dt} \overline{\xi}^t = - \partial_x V(t, \overline{x}^t) \] yields the estimates \[ \begin{split} |\overline{x}^t - \overline{x}^s - (t-s) \overline{\xi}^s| &\le C |t-s|^2(|\overline{x}^s| + |(t-s) \overline{\xi}^s|)\\ |\overline{\xi}^t - \overline{\xi}^s| &\le C |t-s| (|\overline{x}^s| + |(t-s)\overline{\xi}^s|) \end{split} \] for some constant $C$ depending on $ \| \partial_x^2 V\|_{L^\infty}$. By subdividing the time interval $[-\delta_0, \delta_0]$ if necessary, we may assume in~\eqref{e:Ic_simplified_integral} that $(1+C)|t| \le 1/10$. Consider separately the cases $|\overline{x}| \le |\overline{\xi}|$ and $|\overline{x}| \ge |\overline{\xi}|$. 
When $|\overline{x}| \le |\overline{\xi}|$ we have \[ 2 |\overline{\xi}| \ge |\overline{\xi}^t| \ge |\overline{\xi}| - \tfr{1}{5}|\overline{\xi}| \ge \tfr{1}{2} |\overline{\xi}| \] (assuming, as we may, that $|\overline{\xi}| \ge 1$) and the bound~\eqref{e:Ic_simplified_integral} follows from the change of variables $y = \overline{x}^t$. If instead $|\overline{x}| \ge |\overline{\xi}|$, then $|\overline{x}^t| \ge \tfr{1}{2}|\overline{x}|$ and $ |\overline{\xi}^t| \le 2 |\overline{x}|$, which also yields the desired bound. \end{proof} Returning to $I_c$, we conclude that \[ I_c \lesssim_N \fr{2^{-\ell^*N}}{ | (\xi_1 - \xi_2)^2 - (\xi_3 - \xi_4)^2 |^2}. \] Overall, \[ \begin{split} I &\le I_a + I_b + I_c \lesssim_N 2^{-\ell^* N} \fr{\langle \xi_1 + \xi_2 - \xi_3 - \xi_4 \rangle}{ |(\xi_1 - \xi_2)^2 - (\xi_3 - \xi_4)^2|^2}. \end{split} \] For $II$, we have \begin{equation} \label{e:II_one_deriv} D[U_j \psi(x - x_j^t)] = -i H_j U_j \psi(x - x_j^t) - \overline{\xi^t}_j \partial_x U_j \psi(x - x_j^t) \end{equation} and estimating as for $I$ we get \[ \begin{split} II &\lesssim_N \fr{1+ \sum_j |\overline{\xi}_j|}{ |(\xi_1 - \xi_2)^2 - (\xi_3 - \xi_4)^2|} \int \fr{|D^2 \Phi|}{|D \Phi|^2} \prod 2^{-\ell_j N} \chi \Bigl(\fr{x - x_j^t}{2^{\ell_j}} \Bigr) \, \eta dx dt\\ &\lesssim_N 2^{-\ell^* N} \Bigl (\fr{ \langle \xi_1 + \xi_2 - \xi_3 - \xi_4\rangle + |\xi_1 - \xi_2 | + |\xi_3 - \xi_4|}{ | (\xi_1 - \xi_2)^2 - (\xi_3 - \xi_4)^2|} \Bigr)\fr{\langle \xi_1 + \xi_2 - \xi_3 - \xi_4 \rangle}{ |(\xi_1 - \xi_2)^2 - (\xi_3 - \xi_4)^2|^2}. \end{split} \] It remains to consider $III$. 
The derivatives can distribute in various ways: \begin{equation} \begin{split}\label{e:III_expansion} III &\lesssim \fr{1}{ |(\xi_1 - \xi_2)^2 - (\xi_3 - \xi_4)^2|^2} \Bigl \{\int |D^2[U_1 \psi(x - x_1^t)] \prod_{j=2}^4 U_j \psi (x - x^t_j) \prod_{k=1}^4 \theta_{\ell_k} (x - x_k^t) \eta| \, dx dt\\ &\quad+ \int \Bigl | D [U_1 \psi(x - x_1^t)] D [U_2 \psi(x - x_2^t)] \prod_{j=3}^4 U_j \psi(x - x^t_j) \prod_{k=1}^4 \theta_{\ell_k} (x - x_k^t) \eta \Bigr | \, dx dt\\ &\quad+ \int \Bigl| D \prod_j U_j \psi(x - x^t_j) D \prod_k \theta_{\ell_k}(x - x_k^t) \eta \Bigr | \, dx dt\\ &\quad+ \int \Bigl| \prod_j U_j \psi(x - x^t_j) D^2 \prod_{k} \theta_{\ell_k} (x - x_k^t) \eta \Bigr | \, dx dt\Bigr\}, \end{split} \end{equation} where the first two terms represent sums over the appropriate permutations of indices. We focus on the terms involving double derivatives of $U_j$ as the other terms can be dealt with as in the estimate for~$II$. From~\eqref{e:II_one_deriv}, \begin{equation} \label{e:III_two_derivs} \begin{split} D^2[U_j \psi(x - x_j^t)] &= -i\partial_t V_j (t, x-x_j^t) U_j \psi(x-x_j^t) - (H_j)^2 U_j \psi(x - x_j^t) \\ & + 2i \overline{\xi^t}_j \partial_x H_j U_j \psi(x - x_j^t) - \Bigl[\tfrac{1}{4} \sum_k \partial_x V(t, x_k^t) - \partial_x V(t, x_j^t) \Bigr] \partial_x U_j \psi(x - x_j^t) \\ &+ (\overline{\xi^t}_j)^2 \partial^2_x U_j \psi(x - x_j^t). 
\end{split} \end{equation} Recalling from~\eqref{e:Vj} that \[ \partial_t V_j(t, x) = x^2 \Bigl [\xi_j^t \int_0^1 (1-s) \partial_x^3 V(t, x_j^t + s x) \, ds +\int_0^1 (1-s) \partial_t \partial^2_x V (t, x_j^t + sx) \, ds \Bigr ], \] it follows that \[ \begin{split} \int \Bigl | \partial_t V_1 (t, x - x_1^t) & U_1 \psi(x - x_1^t) \prod_{j=2}^4 U_j \psi(x - x_j^t) \prod_{k=1}^4 \theta_{\ell_{k}} (x - x_k^t)\Bigr| \, \eta(t) dx dt\\ &\lesssim 2^{2 \ell_1} \int \Bigl[ \int_0^1 |\xi_1^t \partial^3_x V(t, x_1^t + s(x - x_1^t))| \, ds \\ &\quad+ \int_0^1 |\partial_t \partial^2_x V (t, x_1^t + s(x-x_1^t))| \, ds \Bigr] \prod_j 2^{-\ell_j N} \chi \Bigl ( \fr{x - x_j^t}{ 2^{\ell_j} } \Bigr) \, \eta dx dt\\ &\lesssim_N 2^{-\ell^*N}, \end{split} \] where the terms involving $\partial^3_x V$ are handled as in $I_c$ above. Also, from~\eqref{e:wavepacket_schwartz_tails} and~\eqref{e:xi_constancy}, \[ \begin{split} &\int \Bigl | (\overline{\xi^t}_1)^2 \partial_x^2 U_1 \psi(x - x_1^t) \prod_{j=2}^4 U_j \psi(x - x_j^t) \prod_{k=1}^4 \theta_{\ell_{k}} (x - x_k^t)\Bigr| \, \eta(t) dx dt\lesssim_N \fr{2^{-\ell^*N} (1 + |\overline{\xi}_1|^2) }{1 + |\xi_1 - \xi_2| + |\xi_3 - \xi_4|}. \end{split} \] The intermediate terms in~\eqref{e:III_two_derivs} and the other terms in the expansion~\eqref{e:III_expansion} yield similar upper bounds. 
We conclude overall that \[ \begin{split} III &\lesssim_N 2^{-\ell^*N} \Bigl( \fr{1}{ |(\xi_1 - \xi_2)^2 - (\xi_3 - \xi_4)^2|^2} + \fr{(1 + \sum_j |\overline{\xi}_j|)^2}{ | (\xi_1 - \xi_2)^2 - (\xi_3 - \xi_4)^2|^2 \cdot (1 + |\xi_1 - \xi_2| + |\xi_3 - \xi_4|)} \Bigr)\\ &\lesssim_N 2^{-\ell^*N}\Bigl ( \fr{ 1}{| (\xi_1 - \xi_2)^2 - (\xi_3 - \xi_4)^2|^2} + \fr{\langle \xi_1 + \xi_2 - \xi_3 - \xi_4 \rangle^2 + (|\xi_1 - \xi_2| + |\xi_3 - \xi_4|)^2}{\bigl| |\xi_1 - \xi_2|^2 - |\xi_3 - \xi_4|^2 \bigr|^2 \cdot (1+ |\xi_1 - \xi_2| + |\xi_3 - \xi_4|)} \Bigr)\\ &\lesssim_N 2^{-\ell^*N} \fr{ \langle \xi_1 + \xi_2 - \xi_3 - \xi_4 \rangle^2 + |\xi_1 - \xi_2| + |\xi_3 - \xi_4|}{ | (\xi_1 - \xi_2)^2 - (\xi_3 - \xi_4)^2|^2}. \end{split} \] Note also that in each of the integrals $I$, $II$, and $III$ we may integrate by parts in $x$ to obtain arbitrarily many factors of $|\xi_1 + \xi_2 - \xi_3 - \xi_4|^{-1}$. All instances of $\langle \xi_1 + \xi_2 - \xi_3 - \xi_4 \rangle$ in the above estimates may therefore be replaced by $1$. Combining $I$, $II$, and $III$, under the hypothesis~\eqref{e:energy_cond} we obtain \begin{equation} \nonumber \begin{split} |K^{\vec{\ell}}_0(\vec z)| \lesssim_N 2^{-\ell^*N} \fr{ 1 + |\xi_1 - \xi_2| + |\xi_3 - \xi_4|}{ \bigl | (\xi_1 - \xi_2)^2 - (\xi_3 - \xi_4)^2 \bigr|^2}. \end{split} \end{equation} Combining this with~\eqref{e:x-int_by_parts}, we get \begin{equation} \begin{split} | K^{\vec{\ell}}_0 (\vec z)| &\lesssim_{N_1, N_2} 2^{-\ell^* N_1} \min \Bigl(\frac{\langle \xi_1 + \xi_2 - \xi_3 - \xi_4\rangle^{-N_2}}{ 1 + |\xi_1 - \xi_2| + |\xi_3 - \xi_4|}, \fr{ 1 + |\xi_1 - \xi_2| + |\xi_3 - \xi_4|}{ \bigl | (\xi_1 - \xi_2)^2 - (\xi_3 - \xi_4)^2 \bigr|^2} \Bigr) \end{split} \end{equation} for any $N_1, N_2 > 0$. Lemma~\ref{l:K0_ptwise} now follows from summing in $\vec{\ell}$. \bibliographystyle{myamsalpha}
\section{\@startsection {section}{1}{\z@}{-3.5ex plus -1ex minus -.2ex}{2.3ex plus .2ex}{\large\bf}} \def\subsection{\@startsection{subsection}{2}{\z@}{-3.25ex plus -1ex minus -.2ex}{1.5ex plus .2ex}{\normalsize\bf}} \makeatother \makeatletter \def\arabic{section}.\arabic{equation}{\arabic{section}.\arabic{equation}} \newcommand{\sect}[1]{\setcounter{equation}{0}\section{#1}} \@addtoreset{equation}{section} \renewcommand{\arabic{section}.\arabic{equation}}{\thesection.\arabic{equation}} \makeatother \renewcommand{\baselinestretch}{1.15} \textwidth 150mm \textheight 210mm \topmargin -.05in \oddsidemargin 5mm \def\begin{equation}\begin{aligned}{\begin{equation}\begin{aligned}} \def\end{aligned}\end{equation}{\end{aligned}\end{equation}} \def{\textsc{uv}}{{\textsc{uv}}} \def{\textsc{ir}}{{\textsc{ir}}} \def{g_\textsc{x}}{{g_\textsc{x}}} \def{g_\textsc{y}}{{g_\textsc{y}}} \def\raisebox{1pt}{$\slash$} \hspace{-7pt} \partial{\raisebox{1pt}{$\slash$} \hspace{-7pt} \partial} \def\raisebox{1pt}{$\slash$} \hspace{-7pt} p{\raisebox{1pt}{$\slash$} \hspace{-7pt} p} \def\raisebox{1pt}{$\slash$} \hspace{-7pt} q{\raisebox{1pt}{$\slash$} \hspace{-7pt} q} \def\hspace{3pt}\raisebox{1pt}{$\slash$} \hspace{-9pt} A{\hspace{3pt}\raisebox{1pt}{$\slash$} \hspace{-9pt} A} \def\hspace{3pt}\raisebox{1pt}{$\slash$} \hspace{-9pt} D{\hspace{3pt}\raisebox{1pt}{$\slash$} \hspace{-9pt} D} \def\hspace{3pt}\raisebox{1pt}{$\slash$} \hspace{-7pt} D{\hspace{3pt}\raisebox{1pt}{$\slash$} \hspace{-7pt} D} \def\begin{eqnarray}} \def\eea{\end{eqnarray}{\begin{eqnarray}} \def\eea{\end{eqnarray}} \def\begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber{\begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber} \def& \hspace{-7pt}} \def\c{\hspace{-5pt}} \def\Z{{\bf Z}{& \hspace{-7pt}} \def\c{\hspace{-5pt}} \def\Z{{\bf Z}} \def\overline{z}} \def\ov{\overline} \def\I{1\hspace{-4pt}1{\overline{z}} \def\ov{\overline} \def\I{1\hspace{-4pt}1} \def\displaystyle} \def\de{\partial} 
\def\deb{\ov{\partial}{\displaystyle} \def\de{\partial} \def\deb{\ov{\partial}} \def\partial\hspace{-6pt}/} \def\psl{p\hspace{-5pt}/{\partial\hspace{-6pt}/} \def\psl{p\hspace{-5pt}/} \def\bar \tau} \def\R{\mathcal R} \def\RR{ R{\bar \tau} \def\R{\mathcal R} \def\RR{ R} \def\mathcal{\mathcal} \defZ\hspace{-5pt}Z{Z\hspace{-5pt}Z} \def C{ C} \def\raisebox{14pt}{}\raisebox{-7pt}{}$\!\!${\raisebox{14pt}{}\raisebox{-7pt}{}$\!\!$} \def\tilde{\tilde} \def\ZZ{Z\hspace{-5pt}Z} \def{\alpha^\prime}{{\alpha^\prime}} \def\hspace{10pt}{\hspace{10pt}} \def\theta^\prime{\theta^\prime} \def\tilde\theta^\prime{\tilde\theta^\prime} \def\omega{\omega} \def{\bf h}{{\bf h}} \def\tilde \mu{\tilde \mu} \newcommand{\Int}[2]{{\int}_{\hspace{-0.35cm}\begin{array}{c}\vspace{-0.35cm} \\{\scriptstyle #1}\end{array}}\hspace{-0.35cm} #2} \newcommand{\CInt}[2]{{\oint}_{\hspace{-0.35cm}\begin{array}{c}\vspace{-0.35cm} \\{\scriptstyle #1}\end{array}}\hspace{-0.05cm} #2} \newcommand{\mathcal{L}}{\mathcal{L}} \newcommand{\text{Tr}}{\text{Tr}} \newcommand{\mathrm{d}}{\mathrm{d}} \newcommand{\text{SO}}{\text{SO}} \newcommand{\text{SU}}{\text{SU}} \newcommand{\text{U}}{\text{U}} \newcommand{\mathcal{G}}{\mathcal{G}} \newcommand{\egamma}[1]{\Gamma\!\left(#1\right)} \newcommand{\rzeta}[1]{\zeta\!\left(#1\right)} \setlength\arraycolsep{2pt} \newcommand\tw[2]{ \Bigg[\hspace{-1pt}\raisebox{1pt} {$\begin{array}{c} \displaystyle{#1} \\ \displaystyle{#2} \end{array}$} \hspace{-1pt}\Bigg]} \newcommand{\eqalign}[1]{\hspace{-10pt}\begin{array}{ll}#1\end{array} \hspace{-10pt}} \newcommand{\promille}{% \relax\ifmmode\promillezeichen \else\leavevmode\(\mathsurround=0pt\promillezeichen\)\fi} \newcommand{\promillezeichen}{% \kern-.05em% \raise.5ex\hbox{\the\scriptfont0 0}% \kern-.15em/\kern-.15em% \lower.25ex\hbox{\the\scriptfont0 00}} \newcommand{\star}{\star} \newcommand{\overline}{\overline} \newcommand{\tilde}{\tilde} \newcommand{\boldsymbol}{\boldsymbol} \newcommand{\mbox{\tiny $GR$ \normalsize}}{\mbox{\tiny $GR$ 
\normalsize}} \newcommand{\abs}[1]{ \left| #1\right|} \setlength{\evensidemargin}{0cm} \setlength{\oddsidemargin}{0cm} \setlength{\topmargin}{0.00cm} \setlength{\textwidth}{16cm} \setlength{\textheight}{22cm} \setlength{\headheight}{0cm} \setlength{\headsep}{0cm} \setlength{\voffset}{0cm} \setlength{\paperheight}{27cm} \newcommand{\Ale}[1]{{\color{green} #1}} \newcommand{\Emtinan}[1]{{\color{magenta} #1}} \newcommand{\Marco}[1]{{\color{blue} #1}} \newcommand{\Denis}[1]{{\color{red} #1}} \usepackage[colorlinks,linkcolor=black,citecolor=blue,urlcolor=blue,linktocpage]{hyperref} \newcommand{\hhref}[2][]{\href{http://arxiv.org/abs/#2#1}{arXiv:#2}} \frenchspacing \begin{document} \thispagestyle{empty} \begin{center} \vspace*{-.6cm} \hfill SISSA 21/2015/FISI \\ \begin{center} \vspace*{1.1cm} {\Large\bf Deconstructing Conformal Blocks in 4D CFT} \end{center} \vspace{0.8cm} {\bf Alejandro Castedo Echeverri$^{a}$, Emtinan Elkhidir$^{a}$,\\[3mm]} {\bf Denis Karateev$^{a}$, Marco Serone$^{a,b}$} \vspace{1.cm} ${}^a\!\!$ {\em SISSA and INFN, Via Bonomea 265, I-34136 Trieste, Italy} \vspace{.1cm} ${}^b\!\!$ {\em ICTP, Strada Costiera 11, I-34151 Trieste, Italy} \end{center} \vspace{1cm} \centerline{\bf Abstract} \vspace{2 mm} \begin{quote} We show how conformal partial waves (or conformal blocks) of spinor/tensor correlators can be related to each other by means of differential operators in four dimensional conformal field theories. We explicitly construct such differential operators for all possible conformal partial waves associated to four-point functions of arbitrary traceless symmetric operators. Our method allows any conformal partial wave to be extracted from a few ``seed" correlators, simplifying dramatically the computation needed to bootstrap tensor correlators. 
\end{quote} \newpage \tableofcontents \section{Introduction} There has been a revival of interest in recent years in four-dimensional (4D) Conformal Field Theories (CFTs), after the seminal paper \cite{Rattazzi:2008pe} resurrected the old idea of the bootstrap program \cite{Ferrara:1973yt,Polyakov:1974gs}. A 4D CFT is determined in terms of its spectrum of primary operators and the coefficients entering three-point functions among such primaries. Once this set of CFT data is given, any correlator is in principle calculable. Starting from this observation, ref.\cite{Rattazzi:2008pe} has shown how imposing crossing symmetry in four-point functions can lead to non-trivial sets of constraints on the CFT data. These are based on first principles and apply to any CFT, with or without a Lagrangian description. Although any correlator can in principle be ``bootstrapped", in practice one has to be able to sum, for each primary operator exchanged in the correlator in some kinematical channel, the contribution of its infinite series of descendants. Such a contribution is often called a conformal block. In fact, the crucial technical ingredient in ref.\cite{Rattazzi:2008pe} was the work of refs.\cite{Dolan:2000ut,Dolan:2003hv}, where such conformal blocks have been explicitly computed for scalar four-point functions. Quite remarkably, the authors of refs.\cite{Dolan:2000ut,Dolan:2003hv} were able to pack the contributions of traceless symmetric operators of any spin into a very simple formula.
Since ref.\cite{Rattazzi:2008pe}, there have been many developments, both analytical \cite{Heemskerk:2009pn,Fitzpatrick:2011ia,Costa:2011mg,Costa:2011dw,Maldacena:2011jn,SimmonsDuffin:2012uy,Pappadopulo:2012jk,Fitzpatrick:2012yx,Komargodski:2012ek,Hogervorst:2013sma,Hogervorst:2013kva,Behan:2014dxa,Alday:2014tsa,Costa:2014rya,Vos:2014pqa,Elkhidir:2014woa,Goldberger:2014hca,Kaviraj:2015cxa,Alday:2015eya,Kaviraj:2015xsa} and numerical \cite{Rychkov:2009ij,Caracciolo:2009bx,Poland:2010wg,Rattazzi:2010gj,Rattazzi:2010yc,Vichi:2011ux,Poland:2011ey,Liendo:2012hy,Beem:2013qxa,Gliozzi:2013ysa,Alday:2013opa,Berkooz:2014yda,Alday:2014qfa,Caracciolo:2014cxa,Beem:2014zpa,Bobev:2015jxa} in the 4D bootstrap. All numerical studies are still based on identical scalar correlators, unless supersymmetry or global symmetries are present.\footnote{The techniques to bootstrap correlators with non identical fields were developed in refs.\cite{Kos:2014bka,Simmons-Duffin:2015qma}. They have been used so far in 3D only, although they clearly apply in any number of space-time dimensions.} There is an obvious reason for this limitation. Determining the conformal blocks relevant for four-point functions involving tensor primary operators is significantly more complicated. First of all, contrary to their scalar counterpart, tensor four-point correlators are determined in terms of several functions, one for each independent allowed tensor structure. Their number $N_4$ grows very rapidly with the spin of the external operators. The whole contribution of primary operators in any given channel is no longer parametrized by a single conformal block as in the scalar case, but in general by $N_4\times N_4$ conformal blocks, $N_4$ for each independent tensor structure. For each exchanged primary operator, it is convenient not to talk of individual conformal blocks but of Conformal Partial Waves (CPW), namely the entire contribution given by several conformal blocks, one for each tensor structure. 
Second, the exchanged operator is no longer necessarily traceless symmetric, but can be in an arbitrary representation of the 4D Lorentz group, depending on the external operators and on the channel considered. CPW can be determined in terms of the product of two three-point functions, each involving two external operators and the exchanged one. If it is possible to relate a three-point function to another simpler one, a relation between CPW associated to different four-point functions can be obtained. Using this simple observation, building on previous work \cite{Costa:2011mg}, in ref.\cite{Costa:2011dw} the CPW associated to a correlator of traceless symmetric operators (in arbitrary space-time dimensions), which exchange a traceless symmetric operator, have been related to the scalar conformal block of refs.\cite{Dolan:2000ut,Dolan:2003hv}. Despite this significant progress, bootstrapping tensor four-point functions in 4D requires the knowledge of the CPW associated to the exchange of non-traceless symmetric operators. Even for traceless symmetric exchange, the methods of refs.\cite{Costa:2011mg,Costa:2011dw} do not allow one to study correlators with external non-traceless symmetric fields (although generalizations that might do that have been proposed, see ref.\cite{Costa:2014rya}). The aim of this paper is to take a step forward and generalize the relation between CPW found for traceless symmetric operators in ref.\cite{Costa:2011dw} to arbitrary CPW in 4D CFTs. We will perform this task by using the 6D embedding formalism in terms of twistors. Our starting point is the recent general classification of three-point functions found in ref.\cite{Elkhidir:2014woa}. We will see how three-point functions of spinors/tensors can be related to three-point functions of lower-spin fields by means of differential operators.
We explicitly construct a basis of differential operators that allows one to express any three-point function of two traceless symmetric and an arbitrary bosonic operator ${\cal O}^{l,\bar l}$ with $l\neq \bar l$, in terms of ``seed" three-point functions, which admit a unique tensor structure. This would allow one to express all the CPW entering a four-point function of traceless symmetric correlators in terms of a few CPW seeds. We do not attempt to compute such seeds explicitly, although it might be done by developing the methods of refs.\cite{Dolan:2000ut,Dolan:2003hv}. The structure of the paper is as follows. In section 2 we will briefly review the 6D embedding formalism in twistor space in index-free notation and the results of ref.\cite{Elkhidir:2014woa} on the three-point function classification. In section 3 we recall how a relation between three-point functions leads to a relation between CPW. We introduce our differential operators in section 4. We construct an explicit basis of differential operators in section 5 for external symmetric traceless operators. In subsection 5.1 we reproduce (and somewhat improve) the results of ref.\cite{Costa:2011dw} in our formalism where the exchanged operator is traceless symmetric and then pass to the more involved case of mixed tensor exchange in subsection 5.2. In section 6 we discuss the basis of the tensor structures of four-point functions and propose a set of seed CPW needed to get CPW associated with the exchange of a bosonic operator ${\cal O}^{l,\bar l}$. A couple of examples are proposed in section 7. In subsection 7.1 we consider a four-fermion correlator and in subsection 7.2 we schematically deconstruct spin-one and spin-two correlators, and show how to impose their conservation. We conclude in section 8, where we discuss in particular the computations yet to be done to bootstrap tensor correlators in 4D CFTs.
A (non-exhaustive) list of relations between $SU(2,2)$ invariants entering four-point functions is listed in appendix A. \section{Three-Point Function Classification} \label{sec:class} General three-point functions in 4D CFTs involving bosonic or fermionic operators in irreducible representations of the Lorentz group have recently been classified and computed in ref.\cite{Elkhidir:2014woa} (see refs.\cite{Osborn:1993cr,Erdmenger:1996yc} for important early works on tensor correlators and refs.\cite{Weinberg:2010fx,Costa:2011mg,Costa:2011dw,Stanev:2012nq,Zhiboedov:2012bm,Dymarsky:2013wla,Costa:2014rya,Li:2014gpa,Korchemsky:2015ssa} for other recent studies) using the 6D embedding formalism \cite{Dirac:1936fq,Mack:1969rr,Ferrara2,Dobrev:1977qv} formulated in terms of twistors in an index-free notation \cite{SimmonsDuffin:2012uy} (see e.g. refs.\cite{Siegel:1992ic,Siegel:2012di,Goldberger:2011yp,Goldberger:2012xb,Fitzpatrick:2014oza,Khandker:2014mpa} for applications mostly in the context of supersymmetric CFTs). We will here briefly review the main results of ref.\cite{Elkhidir:2014woa}. A 4D primary operator ${\cal O}_{\alpha_1\ldots \alpha_l}^{\dot{\beta}_1\ldots \dot{\beta}_{\bar l}}$ with scaling dimension $\Delta$ in the $(l,\bar l)$ representation of the Lorentz group can be embedded in a 6D multi-twistor field $O^{a_1\ldots a_l}_{b_1\ldots b_{\bar l}}$, homogeneous of degree $n=\Delta+ (l+\bar l)/2$, as follows: \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber {\cal O}_{\alpha_1\ldots \alpha_l}^{\dot{\beta}_1\ldots \dot{\beta}_{\bar l}}(x) = (X^+)^{\Delta-(l+\bar l)/2} \mathbf{X}_{\alpha_1 a_1}\ldots \mathbf{X}_{\alpha_l a_l} \overline{\mathbf{X}}^{\dot\beta_1 b_1} \ldots \overline{\mathbf{X}}^{\dot\beta_{\bar l} b_{\bar l}} O^{a_1\ldots a_l}_{b_1\ldots b_{\bar l}}(X)\,. 
\label{fFrelation} \ee In eq.(\ref{fFrelation}), 6D and 4D coordinates are denoted as $X^M$ and $x^\mu$, where $x^\mu = X^\mu/X^+$, ${\mathbf{X}}$ and $\overline{\mathbf{X}}$ are 6D twistor space-coordinates defined as \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber \mathbf{X}_{ab} \equiv X_M \Sigma^M_{ab} = - \mathbf{X}_{ba} \,, \ \ \ \overline{\mathbf{X}}^{ab} \equiv X_M \overline\Sigma^{Mab}= - \overline{\mathbf{X}}^{ba}\,, \label{TwistorCoord} \ee in terms of the 6D chiral Gamma matrices $\Sigma^M$ and $\overline{\Sigma}^M$ (see Appendix A of ref.\cite{Elkhidir:2014woa} for further details). One has $\mathbf{X} \overline{\mathbf{X}} =\overline{\mathbf{X}} \mathbf{X} =X_MX^M=X^2$, which vanishes on the null 6D cone. It is very useful to use an index-free notation by defining \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber O(X,S,\bar S) \equiv O^{a_1\ldots a_l}_{b_1\ldots b_{\bar l}}(X)\ S_{a_1} \ldots S_{a_l} \bar S^{b_1} \ldots \bar S^{b_{\bar l}} \,. \label{Findexfree} \ee A 4D field ${\cal O}$ is actually uplifted to an equivalence class of 6D fields $O$. Any two fields $O$ and $\hat O= O+ \overline{\mathbf{X}} V$ or $\hat O= O+ {\mathbf{X}} \overline{W}$, for some multi twistors $V$ and $\overline{W}$, are equivalent uplifts of ${\cal O}$. Given a 6D multi-twistor field $O$, the corresponding 4D field ${\cal O}$ is obtained by taking \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber {\cal O}_{\alpha_1\ldots \alpha_l}^{\dot{\beta}_1\ldots \dot{\beta}_{\bar l}}(x) = \frac{ (X^+)^{\Delta-\frac{l+\bar l}{2}} }{l!\bar l!} \Big( {\mathbf{X}}\frac{\partial}{\partial S}\Big)_{\alpha_1} \ldots \Big( {\mathbf{X}}\frac{\partial}{\partial S}\Big)_{\alpha_l} \Big( {\overline{\mathbf{X}}}\frac{\partial}{\partial \bar S}\Big)^{\dot\beta_1}\ldots \Big( {\overline{\mathbf{X}}}\frac{\partial}{\partial \bar S}\Big)^{\dot\beta_{\bar l}} O\Big(X,S,\bar S\Big) \,. 
\label{f4dExp} \ee The 4D three-point functions are conveniently encoded in their scalar 6D counterpart $\langle O_1 O_2 O_3 \rangle$ which must be a sum of $SU(2,2)$ invariant quantities constructed out of the $X_i$, $S_i$ and $\bar S_i$, with the correct homogeneity properties under rescaling. Notice that quantities proportional to $\bar S_i \mathbf{X}_i$, $\overline{\mathbf{X}}_i S_i$ or $\bar S_i S_i$ ($i=1,2,3$) are projected to zero in 4D. The non-trivial $SU(2,2)$ possible invariants are ($i\neq j\neq k$, indices not summed) \cite{SimmonsDuffin:2012uy}: \begin{align} \label{eq:invar1} I_{ij} & \equiv \bar S_i S_j \,, \\ \label{eq:invar2} K_{i,jk} &\equiv N_{i,jk} S_j \overline{\mathbf{X}}_i S_k \,, \\ \label{eq:invar3} \overline{K}_{i,jk} &\equiv N_{i,jk} \bar S_j \mathbf{X}_i \bar S_k \,,\\ \label{eq:invar4} J_{i,jk} &\equiv N_{jk} \bar S_i \mathbf{X}_j \overline{\mathbf{X}}_k S_i \,, \end{align} where \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber N_{jk} \equiv \frac{1}{X_{jk}}\,, \ \ \ N_{i,jk} \equiv \sqrt{\frac{X_{jk}}{X_{ij}X_{ik}}}\,. \label{eq:Ninva} \ee Two-point functions are easily determined. One has \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber \langle O_1(X_1,S_1,\bar S_1) O_2(X_2,S_2,\bar S_2) \rangle = X_{12}^{-\tau_1} I_{21}^{l_1} I_{12}^{\bar l_1} \delta_{l_1,\bar l_2} \delta_{l_2,\bar l_1} \delta_{\Delta_1, \Delta_2} \,, \label{2ptFun} \ee where $X_{ij}\equiv X_i\cdot X_j$ and $\tau_i\equiv \Delta_i+(l_i+\bar l_i)/2$. As can be seen from eq.(\ref{2ptFun}), any operator $O^{l,\bar l}$ has a non-vanishing two-point function with a conjugate operator $O^{\bar l,l}$ only. The main result of ref.\cite{Elkhidir:2014woa} can be recast in the following way. 
The most general three-point function $\langle O_1 O_2 O_3 \rangle$ can be written as\footnote{The points $X_1$, $X_2$ and $X_3$ are assumed to be distinct.} \begin{equation}\label{eq:ff3pf} \langle O_1 O_2 O_3 \rangle= \sum_{s=1}^{N_3} \lambda_s \langle O_1 O_2 O_3 \rangle_s \,, \end{equation} where \begin{equation} \langle O_1 O_2 O_3 \rangle_s = \mathcal{K}_3 \Big(\prod_{i\neq j=1}^3 I_{ij}^{m_{ij}} \Big) C_{1,23}^{n_1}C_{2,31}^{n_2}C_{3,12}^{n_3} \,. \label{eq:ff3pfV2} \end{equation} In eq.(\ref{eq:ff3pfV2}), $\mathcal{K}_3$ is a kinematic factor that depends on the scaling dimension and spin of the external fields, \begin{equation}\label{eq:kinematicfactor1} \mathcal{K}_3=\frac{1}{X_{12}^{a_{12}} X_{13}^{a_{13}} X_{23}^{a_{23}}}, \end{equation} with $a_{ij} =(\tau_i+\tau_j-\tau_k)/2$, $i\neq j\neq k$. The index $s$ runs over all the independent tensor structures parametrized by the integers $m_{ij}$ and $n_i$, each multiplied by a constant OPE coefficient $\lambda_s$. The invariants $C_{i,jk}$ are equal to one of the three-index invariants (\ref{eq:invar2})-(\ref{eq:invar4}), depending on the value of \begin{equation}\label{eq:dl} \Delta l\equiv l_1+l_2+l_3-(\bar l_1+\bar l_2+\bar l_3)\,, \end{equation} of the external fields. Three-point functions are non-vanishing only when $\Delta l$ is an even integer \cite{Mack:1976pa,Elkhidir:2014woa}. We have \begin{itemize} \item{$\Delta l=0$: $C_{i,jk}=J_{i,jk}$.} \item{$\Delta l>0$: $C_{i,jk}=J_{i,jk}, K_{i,jk}$.} \item{$\Delta l<0$: $C_{i,jk}=J_{i,jk}, \overline K_{i,jk}$.} \end{itemize} A redundancy is present for $\Delta l=0$. It can be fixed by demanding, for instance, that one of the three integers $n_i$ in eq.(\ref{eq:ff3pfV2}) vanishes. The total number of $K_{i,jk}$'s ($\overline K_{i,jk}$'s) present in the correlator for $\Delta l>0$ ($\Delta l<0$) equals $\Delta l/2$ ($-\Delta l/2$).
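The counting of tensor structures implied by this classification can be checked by brute force. The following Python sketch (our own illustration, not part of the original derivation; all function and variable names are ours) enumerates, for the $\Delta l=0$ case, the nonnegative exponents $m_{ij}$ of the $I_{ij}$ and $n_i$ of the $J_{i,jk}$ that match the required powers of $S_i$ and $\bar S_i$ at each point, fixing the redundancy by demanding that at least one $n_i$ vanishes.

```python
from itertools import product

def count_structures(l, lbar):
    """Brute-force count of Delta l = 0 tensor structures of <O1 O2 O3>.

    Each J_{i,jk} (exponent n_i) supplies one S_i and one Sbar_i;
    each I_{ij} (exponent m_ij) supplies one Sbar_i and one S_j.
    """
    assert sum(l) == sum(lbar), "only the Delta l = 0 case is implemented"
    pairs = [(i, j) for i in range(3) for j in range(3) if i != j]
    bound = max(*l, *lbar) + 1
    count = 0
    for m in product(range(bound), repeat=6):
        mm = dict(zip(pairs, m))
        for n in product(range(bound), repeat=3):
            if all(n):        # J_1 J_2 J_3 redundancy: require some n_i = 0
                continue
            ok_S = all(sum(mm[j, i] for j in range(3) if j != i) + n[i] == l[i]
                       for i in range(3))
            ok_Sbar = all(sum(mm[i, j] for j in range(3) if j != i) + n[i] == lbar[i]
                          for i in range(3))
            if ok_S and ok_Sbar:
                count += 1
    return count

# two scalars and one (l,l) operator: a unique structure, J_{3,12}^l
assert count_structures((0, 0, 3), (0, 0, 3)) == 1
# three generic (non-conserved) spin-1 operators
print(count_structures((1, 1, 1), (1, 1, 1)))  # -> 5
```

For $\Delta l \neq 0$ one would additionally enumerate the $\Delta l/2$ powers of $K_{i,jk}$ (or $\overline K_{i,jk}$); we omit this here for brevity.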
The number of tensor structures is given by all the possible allowed choices of nonnegative integers $m_{ij}$ and $n_i$ in eq.(\ref{eq:ff3pf}) subject to the above constraints and the ones coming from matching the correct powers of $S_i$ and $\bar S_i$ for each field. The latter requirement gives in total six constraints. Conserved 4D operators are encoded in multitwistors $O$ that satisfy the current conservation condition \begin{equation} D\cdot O(X,S,\bar S) = 0 \,, \ \ \ \ \ D = \Big(X_M \Sigma^{MN} \frac{\partial}{\partial X^N}\Big)_{a}^{\;b} \frac{\partial}{\partial S_a} \frac{\partial}{\partial \bar S^b}\,. \label{ConservedD} \end{equation} When eq.(\ref{ConservedD}) is imposed on eq.(\ref{eq:ff3pf}), we generally get a set of linear relations among the OPE coefficients $\lambda_s$, which restrict the allowed tensor structures in the three-point function. Under a 4D parity transformation, the invariants (\ref{eq:invar1})-(\ref{eq:invar4}) transform as follows: \begin{equation}\begin{split} I_{ij} \stackrel{{\cal P}}{\longrightarrow} \;\; & - I_{ji} \,,\\ K_{i,jk} \stackrel{{\cal P}}{\longrightarrow} \;\; & + \overline{K}_{i,jk} \,, \\ \overline{K}_{i,jk} \stackrel{{\cal P}}{\longrightarrow} \;\; & + K_{i,jk} \,, \\ J_{i,jk} \stackrel{{\cal P}}{\longrightarrow} \;\; & + J_{i,jk} \,. \label{6Dparity} \end{split} \end{equation} \section{Relation between CPW} \label{sec:cpw} A CFT is defined in terms of the spectrum of primary operators, their scaling dimensions $\Delta_i$ and $SL(2,C)$ representations $(l_i,\bar l_i)$, and OPE coefficients, namely the coefficients entering the three-point functions among such primaries. Once this set of CFT data is given, any correlator is in principle calculable.
Let us consider for instance the 4-point function of four primary tensor operators: \begin{equation} \langle {\cal O}^{I_1}_1(x_1) {\cal O}^{I_2}_2(x_2) {\cal O}^{I_3}_3(x_3) {\cal O}^{I_4}_4(x_4)\rangle = \mathcal{K}_4 \sum_{n=1}^{N_4} g_n(u,v) {\cal T}_n^{I_1I_2I_3I_4}(x_i) \,. \label{Gen4pt} \end{equation} In eq.(\ref{Gen4pt}) we have schematically denoted by $I_{i}$ the Lorentz indices of the operators ${\cal O}_i(x_i)$, $x_{ij}^2=(x_i-x_j)_\mu(x_i-x_j)^\mu$, \begin{equation}\label{eq:4kinematic} \mathcal{K}_4 = \bigg(\frac{x_{24}^2}{x_{14}^2}\bigg)^{\frac{\tau_1-\tau_2}2} \bigg(\frac{x_{14}^2}{x_{13}^2}\bigg)^{\frac{\tau_3-\tau_4}{2}} (x_{12}^2)^{-\frac{\tau_1+\tau_2}2} (x_{34}^2)^{-\frac{\tau_3+\tau_4}2} \end{equation} is a kinematical factor, $u$ and $v$ are the usual conformally invariant cross ratios \begin{equation} u=\frac{x_{12}^2x_{34}^2}{x_{13}^2x_{24}^2}\,, \ \ \ v=\frac{x_{14}^2 x_{23}^2}{x_{13}^2x_{24}^2}\,, \label{uv4d} \end{equation} ${\cal T}_n^{I_1I_2I_3I_4}(x_i)$ are tensor structures and $\tau_i$ are defined below eq.(\ref{2ptFun}). The tensor structures are functions of the $x_i$'s and can be kinematically determined. Their total number $N_4$ depends on the Lorentz properties of the external primaries. For correlators involving scalars only, one has $N_4=1$, but in general $N_4>1$ and grows rapidly with the spin of the external fields. For instance, for four traceless symmetric operators with identical spin $l$, one has $N_4(l)\sim l^7$ for large $l$ \cite{Elkhidir:2014woa}. All the non-trivial dynamical information of the 4-point function is encoded in the $N_4$ functions $g_n(u,v)$. In any given channel, by using the OPE we can write the 4-point function (\ref{Gen4pt}) in terms of the operators exchanged in that channel.
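As a quick numerical sanity check (ours, in Euclidean signature for simplicity), one can verify that the cross ratios $u$ and $v$ of eq.(\ref{uv4d}) are unchanged by conformal transformations such as translations and the inversion $x^\mu \rightarrow x^\mu/x^2$:

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_ratios(x):
    """u, v built from four points in R^4 (Euclidean signature assumed)."""
    d2 = lambda i, j: np.sum((x[i] - x[j])**2)
    u = d2(0, 1)*d2(2, 3) / (d2(0, 2)*d2(1, 3))
    v = d2(0, 3)*d2(1, 2) / (d2(0, 2)*d2(1, 3))
    return u, v

x = rng.normal(size=(4, 4))          # four generic points in R^4

# inversion x -> x / x^2 is a (discrete) conformal transformation
inv = np.array([xi / np.dot(xi, xi) for xi in x])
# a common translation is conformal as well
shift = x + rng.normal(size=4)

u0, v0 = cross_ratios(x)
for y in (inv, shift):
    assert np.allclose(cross_ratios(y), (u0, v0))
print(f"u = {u0:.6f}, v = {v0:.6f} invariant under inversion and translation")
```

The inversion check works because $(x'_i-x'_j)^2 = x_{ij}^2/(x_i^2 x_j^2)$, so the point-dependent factors cancel between numerator and denominator of each cross ratio.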
In the s-channel (12-34), for instance, we have \begin{equation} \langle {\cal O}^{I_1}_1(x_1) {\cal O}^{I_2}_2(x_2) {\cal O}^{I_3}_3(x_3) {\cal O}^{I_4}_4(x_4)\rangle = \sum_r\sum_{p=1}^{N_{3r}^{12}} \sum_{q=1}^{N_{3\bar r}^{34}}\sum_{{\cal O}_r} \lambda_{{\cal O}_1{\cal O}_2{\cal O}_r}^p \lambda_{\bar {\cal O}_{\bar r}{\cal O}_3{\cal O}_4}^q W_{{\cal O}_1{\cal O}_2{\cal O}_3{\cal O}_4,{\cal O}_r}^{(p,q)I_1I_2I_3I_4}(x_i)\,, \label{sch4pt} \end{equation} where $p$ and $q$ run over the possible independent tensor structures associated to the three-point functions $\langle {\cal O}_1{\cal O}_2{\cal O}_r\rangle$ and $\langle {\bar {\cal O}}_{\bar r}{\cal O}_3{\cal O}_4\rangle$, whose total number is $N_{3r}^{12}$ and $N_{3\bar r}^{34}$ respectively,\footnote{Strictly speaking these numbers depend also on ${\cal O}_r$, particularly on its spin. When the latter is large enough, however, $N_{3r}^{12}$ and $N_{3\bar r}^{34}$ are only functions of the external operators.} the $\lambda$'s being their corresponding structure constants, and $r$ and ${\cal O}_r$ run over the primary operators that can be exchanged in the correlator. We divide the (infinite) sum over the exchanged operators into a {\it finite} sum over the different classes of representations that can appear, e.g. $(l,l)$, $(l+2,l)$, etc., while the sum over ${\cal O}_r$ includes the sum over the scaling dimension and spin $l$ of the operator exchanged within the class $r$. For example, four-scalar correlators can only exchange traceless symmetric operators and hence the sum over $r$ is trivial. Finally, in eq.(\ref{sch4pt}) $W_{{\cal O}_1{\cal O}_2{\cal O}_3{\cal O}_4}^{(p,q)I_1I_2I_3I_4}(x_i)$ are the so-called CPW associated to the four-point function.
They depend on the scaling dimensions and spins of the external as well as the exchanged operators, a dependence we omit in order not to clutter the notation further.\footnote{For further simplicity, in what follows we will often omit the subscript indicating the external operators associated to the CPW.} By comparing eqs.(\ref{Gen4pt}) and (\ref{sch4pt}) one can infer that the numbers of allowed tensor structures in three- and four-point functions are related:\footnote{We do not have a formal proof of eq.(\ref{N4N3}), although the agreement found in ref.\cite{Elkhidir:2014woa} using eq.(\ref{N4N3}) in different channels is a strong indication that it should be correct.} \begin{equation} N_4=\sum_r N_{3r}^{12} N_{3\bar r}^{34}\,. \label{N4N3} \end{equation} There are several CPW for each exchanged primary operator ${\cal O}_r$, depending on the number of allowed three-point function structures. They encode the contribution of all the descendant operators associated to the primary ${\cal O}_r$. Contrary to the functions $g_n(u,v)$ in eq.(\ref{Gen4pt}), the CPW do not carry dynamical information, being determined by conformal symmetry alone. They admit a parametrization like the 4-point function itself, \begin{equation} W_{{\cal O}_1{\cal O}_2{\cal O}_3{\cal O}_4,{\cal O}_r}^{(p,q)I_1I_2I_3I_4}(x_i) = \mathcal{K}_4 \sum_{n=1}^{N_4} {\cal G}_{{\cal O}_r,n}^{(p,q)}(u,v) {\cal T}_n^{I_1I_2I_3I_4}(x_i) \,, \label{WGen} \end{equation} where ${\cal G}^{(p,q)}_{{\cal O}_r,n}(u,v)$ are conformal blocks depending on $u$ and $v$ and on the dimensions and spins of the external and exchanged operators. Once the CPW are determined, by comparing eqs.(\ref{Gen4pt}) and (\ref{sch4pt}) we can express $g_n(u,v)$ in terms of the OPE coefficients of the exchanged operators. This procedure can be done in other channels as well, $(13-24)$ and $(14-23)$.
Imposing crossing symmetry by requiring the equality of different channels is the essence of the bootstrap approach. The computation of CPW of tensor correlators is possible, but is technically not easy. In particular it is desirable to have a relation between different CPW, so that it is enough to compute a small subset of them, which determines all the others. In order to understand how this reduction process works, it is very useful to embed the CPW in the 6D embedding space with an index-free notation. We use here the formalism in terms of twistors as reviewed in section \ref{sec:class}. It is useful to consider the parametrization of CPW in the shadow formalism \cite{Ferrara:1972xe,Ferrara:1972uq,Ferrara:1972ay, Ferrara:1973vz}. It has been shown in ref.\cite{SimmonsDuffin:2012uy} that a generic CPW can be written in 6D as \begin{equation} W_{O_1 O_2 O_3 O_4,O_r}^{(p,q)}(X_i) \propto \! \int d^4Xd^4Y \langle O_1(X_1) O_2(X_2) O_r(X,S,\bar S)\rangle_p G \langle \bar O_{\bar r}(Y,T,\bar T) O_3(X_3) O_4(X_4)\rangle_q \,. \label{shadow} \end{equation} In eq.(\ref{shadow}), $O_i(X_i)=O_i(X_i,S_i,\bar S_i)$ are the index-free 6D fields associated to the 4D fields ${\cal O}_i(x_i)$, $O_r(X,S,\bar S)$ and $\bar O_{\bar r}(Y,T,\bar T)$ are the exchanged operator and its conjugate, $G$ is a sort of ``propagator", a function of $X$, $Y$ and of the twistor derivatives $\partial/\partial S$, $\partial/\partial T$, $\partial/\partial \bar S$ and $\partial/\partial \bar T$, and the subscripts $p$ and $q$ label the three-point function tensor structures. Finally, in order to remove unwanted contributions, the transformation $X_{12}\rightarrow e^{4\pi i} X_{12}$ should be performed and the integral should be projected onto the suitable eigenvector under the above monodromy. We do not provide additional details, which can be found in ref.\cite{SimmonsDuffin:2012uy}, since they are irrelevant for our considerations.
Suppose one is able to find a relation between three-point functions of this form: \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber \langle O_1(X_1) O_2(X_2) O_r(X,S,\bar S)\rangle_p = D_{pp^\prime}(X_{12},S_{1,2},\bar S_{1,2}) \langle O_1^\prime(X_1) O_2^\prime(X_2) O_r(X,S,\bar S)\rangle_{p^\prime}\,, \label{OOOpp} \ee where $D_{pp^\prime}$ is some operator that depends on $X_{12},S_{1,2},\bar S_{1,2}$ and their derivatives, but is crucially independent of $X$, $S$, and $\bar S$, and $O_i^\prime(X_i)$ are some other, possibly simpler, tensor operators. As long as the operator $D_{pp^\prime}(X_{12},S_{1,2},\bar S_{1,2})$ does not change the monodromy properties of the integral, one can use eq.(\ref{OOOpp}) in both three-point functions entering eq.(\ref{shadow}) and move the operator $D_{pp^\prime}$ outside the integral. In this way we get, with obvious notation, \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber W_{O_1 O_2 O_3 O_4,O_r}^{(p,q)}(X_i) = D_{pp^\prime}^{12} D_{qq^\prime}^{34} W_{O_1^\prime O_2^\prime O_3^\prime O_4^\prime,O_r}^{(p^\prime,q^\prime)}(X_i) \,. \label{shadow2} \ee Using the embedding formalism in vector notation, ref.\cite{Costa:2011dw} has shown how to reduce, in any space-time dimension, CPW associated to a correlator of traceless symmetric operators which exchange a traceless symmetric operator to the known CPW of scalar correlators \cite{Dolan:2000ut,Dolan:2003hv}. Focusing on 4D CFTs and using the embedding formalism in twistor space, we will see how the reduction of CPW can be generalized for arbitrary external and exchanged operators. \section{Differential Representation of Three-Point Functions} \label{sec:OB} We look for an explicit expression of the operator $D_{pp^\prime}$ defined in eq.(\ref{OOOpp}) as a linear combination of products of simpler operators. They must raise (or more generically change) the degree in $S_{1,2}$ and have to respect the gauge redundancy we have in the choice of $O$. 
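The key step, moving $D_{pp^\prime}$ outside the shadow integral, relies only on the fact that $D_{pp^\prime}$ acts on the external variables while the integration is over the internal ones. A minimal sympy sketch of this commutation (a toy illustration with a polynomial integrand and an ad-hoc first-order operator, not the actual $D_{pp^\prime}$):

```python
import sympy as sp

x1, x2, t = sp.symbols('x1 x2 t')
# toy "integrand": depends on external points x1, x2 and an integration variable t
f = t**2*(x1 - x2)**3 + t*x2

# toy first-order operator in the external variables only (stand-in for D_{pp'})
D = lambda g: x2*sp.diff(g, x1) - sp.diff(g, x2)

# acting after or before integrating over t gives the same result
lhs = D(sp.integrate(f, (t, 0, 1)))
rhs = sp.integrate(D(f), (t, 0, 1))
assert sp.expand(lhs - rhs) == 0
```

The exchange of $D$ and the integral holds because the derivatives involved are taken with respect to variables that the integration does not touch; the nontrivial requirement in the text is only that $D_{pp^\prime}$ preserve the monodromy projection.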
As we recalled in subsection \ref{sec:class}, multitwistors $O$ and $\hat O$ of the form \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber \hat O = O + (\bar S X) G + (\overline X S) G'\,, \ \ \ \ \ \hat O = O + (X^2) G \,, \label{F1} \ee where $G$ and $G'$ are some other multi-twistors fields, are equivalent uplifts of the same 4D tensor field. Eq.(\ref{OOOpp}) is gauge invariant with respect to the equivalence classes (\ref{F1}) only if we demand \begin{equation} D_{pp^\prime}(\mathbf{\overline X}_i\mathbf{X}_i,\mathbf{\overline X}_i S_i, \overline S_i\mathbf{X}_i, X_i^2,\overline S_i S_i )\propto (\mathbf{\overline X}_i\mathbf{X}_i,\mathbf{\overline X}_i S_i, \overline S_i\mathbf{X}_i, X_i^2, \overline S_i S_i)\,, \ \ i=1,2\,. \label{Consist} \end{equation} It is useful to classify the building block operators according to their value of $\Delta l$, as defined in eq.(\ref{eq:dl}). At zero order in derivatives, we have three possible operators, with $\Delta l = 0$: \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber \sqrt{X_{12}}, I_{12}\,, I_{21}\,. 
\label{I12I21} \ee At first order in derivatives (in $X$ and $S$), four operators are possible with $\Delta l= 0$: \begin{equation}\begin{aligned} D_1&\equiv\frac{1}{2}\overline S_1 \Sigma^M \overline\Sigma^N S_1\Big(X_{2M}\frac{\partial}{\partial X^N_1}-X_{2N}\frac{\partial}{\partial X^M_1}\Big)\,, \\ D_2&\equiv\frac{1}{2}\overline S_2 \Sigma^M \overline\Sigma^N S_2\Big(X_{1M}\frac{\partial}{\partial X^N_2}-X_{1N}\frac{\partial}{\partial X^M_2}\Big)\,, \\ \widetilde D_1&\equiv\overline S_1 \mathbf{X}_2 \overline\Sigma^N S_1\frac{\partial}{\partial X^N_2}+2I_{12}\,S_{1a}\frac{\partial}{\partial S_{2a}}-2I_{21}\,\overline S^a_1\frac{\partial}{\partial\overline S^a_2}\,,\\ \widetilde D_2&\equiv\overline S_2 \mathbf{X}_1 \overline\Sigma^N S_2\frac{\partial}{\partial X^N_1}+2I_{21}\,S_{2a}\frac{\partial}{\partial S_{1a}}-2I_{12}\,\overline S^a_2\frac{\partial}{\partial\overline S^a_1}\,.\\ \end{aligned} \label{DDtilde} \end{equation} The extra two terms in the last two lines of eq.(\ref{DDtilde}) are needed to satisfy the condition (\ref{Consist}). The $SU(2,2)$ symmetry forbids any operator at first order in derivatives with $\Delta l= \pm 1$. When $\Delta l= 2$, we have the two operators \begin{equation}\begin{aligned} d_{1} \equiv S_2 \overline X_{1} \frac{\partial}{\partial\overline S_1}\,, \ \ \ \ \ \ d_{2} \equiv S_1 \overline X_{2} \frac{\partial}{\partial\overline S_2}\,, \end{aligned} \label{D12} \end{equation} and their conjugates with $\Delta l= - 2$: \begin{equation}\begin{aligned} \overline d_{1}\equiv \overline S_2 X_{1} \frac{\partial}{\partial S_1}\,, \ \ \ \ \ \ \ \overline d_{2}\equiv \overline S_1 X_{2} \frac{\partial}{\partial S_2}\,. \end{aligned} \label{Dbar12} \end{equation} The operator $\sqrt{X_{12}}$ just decreases the dimensions at both points 1 and 2 by one half. The operator $I_{12}$ increases by one the spin $\bar l_{1}$ and by one $l_{2}$. 
The operator $D_{1}$ increases the spins $l_{1}$ and $\bar l_{1}$ by one, increases the dimension at point 1 by one and decreases the dimension at point 2 by one. The operator $\widetilde D_1$ increases the spins $l_{1}$ and $\bar l_{1}$ by one and does not change the dimensions at points 1 and 2. The operator $d_1$ increases the spin $l_{2}$ by one, decreases $\bar l_{1}$ by one, decreases the dimension at point 1 by one and does not change the dimension at point 2. The action of the remaining operators is trivially obtained by the exchange $1\leftrightarrow 2$ or by conjugation. Two more operators with $\Delta l=2$ are possible: \begin{equation}\begin{aligned} \widetilde d_{1}& \equiv X_{12} S_1\overline\Sigma^M S_2\frac{\partial}{\partial X^M_1}-I_{12}S_{1a}\mathbf{\overline X}_2^{ab}\frac{\partial}{\partial\overline S^b_1}\,, \\ \widetilde d_{2}& \equiv X_{12} S_2\overline\Sigma^M S_1\frac{\partial}{\partial X^M_2}- I_{21}S_{2a}\mathbf{\overline X}_1^{ab}\frac{\partial}{\partial\overline S^b_2}\,, \end{aligned} \label{Dtilde12} \end{equation} together with their conjugates with $\Delta l=-2$. We will shortly see that the operators (\ref{Dtilde12}) are redundant and can be neglected.
The above operators satisfy the commutation relations \begin{equation}\begin{aligned} & [D_i,\widetilde D_j ] = [d_i,d_j] = [\bar d_i,\bar d_j] = [d_i,\widetilde d_j] = [\bar d_i,\overline{\widetilde d}_j] =[\widetilde d_i,\widetilde d_j] = [\overline{\widetilde d}_i,\overline{\widetilde d}_j] = 0\,,\ \ \ \ i,j=1,2 \,, \\ & [D_1, D_2 ] = 4 I_{12} I_{21} \Big( -X_1^M \frac{\partial}{\partial X_1^M}+X_2^M \frac{\partial}{\partial X_2^M} \Big)\,, \\ & [\widetilde D_1, \widetilde D_2 ] = 4 I_{12} I_{21} \Big( X_1^M \frac{\partial}{\partial X_1^M}-X_2^M \frac{\partial}{\partial X_2^M}+S_1 \frac{\partial}{\partial S_1}+\bar S_1 \frac{\partial}{\partial \bar S_1}-S_2 \frac{\partial}{\partial S_2}-\bar S_2 \frac{\partial}{\partial \bar S_2} \Big)\,, \\ & [\widetilde d_1,\overline{\widetilde d}_2] = 2 X_{12} I_{12} I_{21} \Big(- X_1^M \frac{\partial}{\partial X_1^M}+X_2^M \frac{\partial}{\partial X_2^M}-\bar S_1 \frac{\partial}{\partial \bar S_1}+S_2 \frac{\partial}{\partial S_2} \Big)\,, \\ & [d_i, \bar d_j ] = 2X_{12} \Big(S_j \frac{\partial}{\partial S_j}-\bar S_i \frac{\partial}{\partial \bar S_i}\Big) (1-\delta_{i,j}) \,, \ \ \ \ i,j=1,2\,, \\ &[ d_i, D_j] = -2 \delta_{i,j} \widetilde d_i \,,\ \ \ \ i,j=1,2 \,, \\ &[ d_1, \widetilde D_1] = 2 \widetilde d_2 \,,\hspace{1.9cm} [ d_2, \widetilde D_1] = 0\,, \\ & [\widetilde d_1,D_1]=0\,, \hspace{2.3cm} [\widetilde d_2,D_1]=-2 I_{12} I_{21} d_2\,, \\ & [\widetilde d_1,\widetilde D_1]=2 I_{12} I_{21} d_2 \,, \hspace{.95cm} [\widetilde d_2,\widetilde D_1]=0 \,, \\ & [d_1,\overline{\widetilde d}_1] =-X_{12} \widetilde D_2 \,, \hspace{1.27cm} [d_1,\overline{\widetilde d}_2] = X_{12} D_2 \,. \end{aligned} \label{commutators} \end{equation} The remaining commutators are trivially obtained by exchanging 1 and 2 and by the parity transformation (\ref{parity}). The operators $\sqrt{X_{12}}$, $I_{12}$ and $I_{21}$ commute with all the differential operators.
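A quick consistency check on eq.(\ref{commutators}) is that each commutator must carry the sum of the quantum numbers of its two entries. The sketch below (our own bookkeeping, not part of the derivation; it tests only the charges, not the numerical coefficients) encodes the shifts of $(l_1,\bar l_1,l_2,\bar l_2,\Delta_1,\Delta_2)$ induced by each operator, as read off from the definitions (\ref{I12I21})--(\ref{Dbar12}), and verifies this additivity:

```python
# Charge vectors (dl1, dlbar1, dl2, dlbar2, dDelta1, dDelta2): a factor
# S_i / Sbar_i raises l_i / lbar_i, a factor X_i lowers Delta_i by one,
# d/dX_i raises Delta_i by one, and S-derivatives lower the spins.
OPS = {
    "I12":  (0, 1, 1, 0, 0, 0),    # raises lbar_1 and l_2
    "I21":  (1, 0, 0, 1, 0, 0),
    "X12":  (0, 0, 0, 0, -1, -1),  # multiplication by X_12
    "D1":   (1, 1, 0, 0, 1, -1),
    "D2":   (0, 0, 1, 1, -1, 1),
    "Dt1":  (1, 1, 0, 0, 0, 0),    # \tilde D_1
    "Dt2":  (0, 0, 1, 1, 0, 0),
    "d1":   (0, -1, 1, 0, -1, 0),
    "d2":   (1, 0, 0, -1, 0, -1),
    "db1":  (-1, 0, 0, 1, -1, 0),  # \bar d_1
    "db2":  (0, 1, -1, 0, 0, -1),
    "dt1":  (1, 0, 1, 0, 0, -1),   # \tilde d_1
    "dt2":  (1, 0, 1, 0, -1, 0),
    "dtb2": (0, 1, 0, 1, -1, 0),   # conjugate of \tilde d_2
}

def charge(*names):
    """Total charge carried by a product of the listed operators."""
    return tuple(sum(OPS[n][k] for n in names) for k in range(6))

# [d_1, D_1] = -2 \tilde d_1
assert charge("d1", "D1") == OPS["dt1"]
# [d_1, \tilde D_1] = 2 \tilde d_2
assert charge("d1", "Dt1") == OPS["dt2"]
# [\tilde d_2, D_1] = -2 I_12 I_21 d_2
assert charge("dt2", "D1") == charge("I12", "I21", "d2")
# [D_1, D_2] and [\tilde D_1, \tilde D_2]: I_12 I_21 times chargeless scalars
assert charge("D1", "D2") == charge("Dt1", "Dt2") == charge("I12", "I21")
# [d_1, \bar d_2] = 2 X_12 times chargeless scalar operators
assert charge("d1", "db2") == OPS["X12"]
# [\tilde d_1, conj(\tilde d_2)] = 2 X_12 I_12 I_21 times chargeless scalars
assert charge("dt1", "dtb2") == charge("X12", "I12", "I21")
```

Any misprint in the operator content of one of the listed commutators would show up as a failed assertion.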
Acting on the whole correlator, we have \begin{equation} S_i \frac{\partial}{\partial S_i} \rightarrow l_i \,, \ \ \ \bar S_i \frac{\partial}{\partial \bar S_i} \rightarrow \bar l_i \,, \ \ \ \ X_i^M \frac{\partial}{\partial X_i^M} \rightarrow - \tau_i \,, \label{Xssaction} \end{equation} and hence the above differential operators, together with $X_{12}$ and $I_{12} I_{21}$, form a closed algebra when acting on three-point correlators. Useful information on conformal blocks can already be obtained by considering the rather trivial operator $\sqrt{X_{12}}$. For any three point function tensor structure, we have \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber \langle O_1 O_2 O_3 \rangle_s = (\sqrt{X_{12}})^{a} \langle O_1^{\frac a2} O_2^{\frac a2} O_3 \rangle_s\,, \label{X12n} \ee where $a$ is an integer (in order not to induce a monodromy for $X_{12}\rightarrow e^{4\pi i} X_{12}$) and the superscript indicates a shift in dimension. If $\Delta({\cal O})=\Delta_{\cal O}$, then $\Delta({\cal O}^a)=\Delta_{\cal O}+a$. Using eqs.(\ref{X12n}) and (\ref{shadow2}), we get for any 4D CPW and pair of integers $a$ and $b$: \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber W_{{\cal O}_1 {\cal O}_2 {\cal O}_3 {\cal O}_4,{\cal O}_r}^{(p,q)} = x_{12}^{a} x_{34}^{b} W_{{\cal O}_1^{\frac a2} {\cal O}_2^{\frac a2} {\cal O}_3^{\frac b2} {\cal O}_4^{\frac b2} ,{\cal O}_r}^{(p,q)} \,. \label{shadow2M} \ee In terms of the conformal blocks defined in eq.(\ref{WGen}) one has \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber {\cal G}_{{\cal O}_r,n}^{(p,q)}(u,v)={\cal G}_{{\cal O}_r,n}^{(p,q)\frac a2,\frac a2,\frac b2,\frac b2}(u,v) \,, \label{Gshifts} \ee where the superscripts indicate the shifts in dimension in the four external operators. Equation (\ref{Gshifts}) significantly constrains the dependence of ${\cal G}_{{\cal O}_r,n}^{(p,q)}$ on the external operator dimensions $\Delta_i$. 
Indeed, the conformal blocks can depend on $\Delta_1+\Delta_2$ and $\Delta_3+\Delta_4$ only through periodic functions, while they can depend arbitrarily on the differences $\Delta_1-\Delta_2$ and $\Delta_3-\Delta_4$. This is in agreement with the known form of scalar conformal blocks. Since in this paper we are mostly concerned with deconstructing tensor structures, we will neglect the operator $\sqrt{X_{12}}$ in what follows. The set of differential operators is redundant, namely there is generally more than one combination of products of operators that leads from one three-point function structure to another. In particular, without any loss of generality we can discard the operators (\ref{Dtilde12}), since their action is equivalent to commutators of $d_i$ and $D_j$. On the other hand, it is not difficult to see that the above operators do not allow one to connect any three-point function structure to any other. For instance, it is straightforward to verify that there is no way to connect a three-point correlator with one $(l,\bar l)$ field to another correlator with an $(l\pm1,\bar l \mp 1)$ field, the other fields being left unchanged. This is not an academic observation because, as we will see, connections of this kind turn out to be useful to simplify the structure of the CPW seeds. The problem is solved by adding to the above list of operators the following second-order operator with $\Delta l=0$: \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber \nabla_{12} \equiv \frac{(\overline{\mathbf X}_1{\mathbf X}_2)^a_b}{X_{12}}\frac{\partial^2}{\partial\overline{S}_1^a\partial S_{2,b}} \label{nablaS} \ee and its conjugate $\nabla_{21}$.
The above operators transform as follows under 4D parity: \begin{equation} D_i \stackrel{{\cal P}}{\longrightarrow} D_i \,, \ \ \ \ \ \widetilde D_i \stackrel{{\cal P}}{\longrightarrow} \widetilde D_i \,, \ \ \ d_i \stackrel{{\cal P}}{\longleftrightarrow}- \overline d_i \,, \ \ \ \ \widetilde d_i \stackrel{{\cal P}}{\longleftrightarrow} \widetilde{\overline d}_i \,, \ \ (i=1,2)\,, \ \ \ \nabla_{12} \stackrel{{\cal P}}{\longleftrightarrow} -\nabla_{21}\,. \label{parity} \end{equation} It is clear that all the operators above are invariant under the monodromy $X_{12}\rightarrow e^{4\pi i} X_{12}$. The addition of $\nabla_{12}$ and $\nabla_{21}$ makes the operator basis even more redundant. Clearly, paths connecting two different three-point correlators that use the smallest number of these operators are preferred, in particular those that also avoid (if possible) the action of the second-order operators $\nabla_{12}$ and $\nabla_{21}$. We will not attempt here to explicitly construct a minimal differential basis connecting two arbitrary three-point correlators. Such an analysis is in general complicated and perhaps not really necessary, since in most applications we are interested in CPW involving external fields with spin up to two. Given their particular relevance, we will instead focus in the next section on three-point correlators of two traceless symmetric operators with an arbitrary field $O^{(l,\bar l)}$. \section{Differential Basis for Traceless Symmetric Operators} \label{sec:DBTSO} In this section we show how three-point correlators of two traceless symmetric operators with an arbitrary field $O^{(l_3,\bar l_3)}$ can be reduced to seed correlators, with one tensor structure only. We first consider the case $l_3=\bar l_3$, and then move on to $l_3 \neq \bar l_3$.
\subsection{Traceless Symmetric Exchanged Operators} \label{subsec:DBTSO} The reduction of traceless symmetric correlators to lower-spin traceless symmetric correlators has been successfully addressed in ref.\cite{Costa:2011dw}. In this subsection we essentially reformulate the results of ref.\cite{Costa:2011dw} in our formalism. This will turn out to be crucial to address the more complicated case of mixed symmetry operator exchange. We will use a notation as close as possible to that of ref.\cite{Costa:2011dw}, in order to make the comparison more transparent to the reader. Three-point correlators of traceless symmetric operators can be expressed only in terms of the $SU(2,2)$ invariants $I_{ij}$ and $J_{i,jk}$ defined in eqs.(\ref{eq:invar1})-(\ref{eq:invar4}), since $\Delta l$ defined in eq.(\ref{eq:dl}) vanishes. It is useful to consider separately parity even and parity odd tensor structures. Given the action of parity, eq.(\ref{6Dparity}), the most general parity even tensor structure is given by products of the following invariants: \begin{equation} \label{RedBasisST} (I_{21}I_{13}I_{32}-I_{12}I_{31}I_{23}), (I_{12}I_{21}), (I_{13}I_{31}), (I_{23}I_{32}), J_{1,23},J_{2,31},J_{3,12}\,. \end{equation} These structures are not all independent, because of the identity \begin{equation} J_{1,23}J_{2,31}J_{3,12}=8(I_{12}I_{31}I_{23}-I_{21}I_{13}I_{32})-4(I_{23}I_{32}J_{1,23}+I_{13}I_{31}J_{2,31}+I_{12}I_{21}J_{3,12}) \,. \label{J3Rel} \end{equation} In ref.\cite{Elkhidir:2014woa}, eq.(\ref{J3Rel}) has been used to define an independent basis where no tensor structure contains the three $SU(2,2)$ invariants $J_{1,23}$, $J_{2,31}$ and $J_{3,12}$ at the same time. A more symmetric and convenient basis is obtained by using eq.(\ref{J3Rel}) to eliminate the first element in eq.(\ref{RedBasisST}).
We define the most general parity even tensor structure of a traceless symmetric correlator as \begin{equation} \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}&m_{13}&m_{12}\end{array}\right] \equiv \mathcal{K}_3 (I_{12}I_{21})^{m_{12}}(I_{13}I_{31})^{m_{13}}(I_{23}I_{32})^{m_{23}}J_{1,23}^{j_1}J_{2,31}^{j_2}J_{3,12}^{j_3} \,, \label{ParityevenInd} \end{equation} where $l_i$ and $\Delta_i$ are the spins and scaling dimensions of the fields, the kinematical factor $\mathcal{K}_3$ is defined in eq.(\ref{eq:kinematicfactor1}) and \begin{equation} \begin{array}{l}j_1=l_1-m_{12}-m_{13}\geq 0 \,, \\ j_2=l_2-m_{12}-m_{23} \geq 0 \,, \\ j_3=l_3-m_{13}-m_{23} \geq 0 \,. \end{array} \label{j123} \end{equation} Notice the similarity of eq.(\ref{ParityevenInd}) with eq.(3.15) of ref.\cite{Costa:2011dw}, with $(I_{ij}I_{ji})\rightarrow H_{ij}$ and $J_{i,jk}\rightarrow V_{i,jk}$. The structures (\ref{ParityevenInd}) can be related to a seed scalar-scalar-tensor correlator. Schematically \begin{equation} \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}&m_{13}&m_{12}\end{array}\right] = {\cal D} \left[\begin{array}{ccc}\Delta_1^\prime&\Delta_2^\prime&\Delta_3\\0 & 0&l_3\\ 0 & 0 & 0 \end{array}\right] \,, \label{OBeven} \end{equation} where ${\cal D}$ is a sum of products of the operators introduced in section \ref{sec:OB}. Since symmetric traceless correlators have $\Delta l=0$, it is natural to expect that only the operators with $\Delta l=0$ defined in eqs.(\ref{I12I21}) and (\ref{DDtilde}) will enter ${\cal D}$. Starting from the seed, we now show how one can iteratively construct all tensor structures by means of recursion relations. The analysis will be very similar to the one presented in ref.\cite{Costa:2011dw} in vector notation.
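As a simple illustration of eqs.(\ref{ParityevenInd}) and (\ref{j123}), consider $l_1=l_2=l_3=1$: the constraints (\ref{j123}) allow only the four triples $(m_{23},m_{13},m_{12})=(0,0,0)$, $(1,0,0)$, $(0,1,0)$ and $(0,0,1)$, corresponding to the parity even structures \begin{equation} \mathcal{K}_3\, J_{1,23}J_{2,31}J_{3,12}\,, \ \ \ \mathcal{K}_3\, (I_{23}I_{32})J_{1,23}\,, \ \ \ \mathcal{K}_3\, (I_{13}I_{31})J_{2,31}\,, \ \ \ \mathcal{K}_3\, (I_{12}I_{21})J_{3,12}\,. \end{equation}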
We first construct tensor structures with $m_{13}=m_{23}=0$ for any $l_1$ and $l_2$ by iteratively using the relation (analogue of eq.(3.27) in ref.\cite{Costa:2011dw}, with $D_1\rightarrow D_{12}$ and $\widetilde D_1\rightarrow D_{11}$) \begin{equation}\begin{aligned} & D_1\left[\begin{array}{ccc}\Delta_1&\Delta_2+1&\Delta_3\\l_1-1&l_2&l_3\\0&0&m_{12}\end{array}\right]+\widetilde D_1\left[\begin{array}{ccc}\Delta_1+1&\Delta_2&\Delta_3\\l_1-1&l_2&l_3\\0&0&m_{12}\end{array}\right] = \\ & (2+2m_{12}-l_1-l_2-\Delta_3)\left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\0&0&m_{12}\end{array}\right] -8(l_2-m_{12})\left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\0&0&m_{12}+1\end{array}\right] \,. \end{aligned} \label{RR1} \end{equation} The analogous equation with $D_2$ and $\widetilde D_2$ is obtained from eq.(\ref{RR1}) by exchanging $1\leftrightarrow 2$ and changing the sign of the coefficients on the right-hand side of the equation. The sign change arises from the fact that $J_{1,23}\rightarrow -J_{2,31}$, $J_{2,31}\rightarrow -J_{1,23}$ and $J_{3,12}\rightarrow -J_{3,12}$ under $1\leftrightarrow 2$. Hence structures that differ by one unit of spin acquire a sign change. This observation applies also to eq.(\ref{RR2}) below. Structures with $m_{12}>0$ are deduced using the relation (analogue of eq.(3.28) in ref.\cite{Costa:2011dw}) \begin{equation} \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}&m_{13}&m_{12}\end{array}\right]= (I_{12}I_{21})\left[\begin{array}{ccc} \Delta_1+1& \Delta_2+1&\Delta_3\\l_1-1&l_2-1&l_3\\m_{23}&m_{13}&m_{12}-1\end{array}\right] \,.
\end{equation} Structures with non-vanishing $m_{13}$ ($m_{23}$) are obtained by acting with the operator $D_{1}$ ($D_2$): \begin{equation} \begin{array}{l}4(l_3-m_{13}-m_{23}) \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}&m_{13}+1&m_{12}\end{array}\right]= D_1\left[\begin{array}{ccc}\Delta_1&\Delta_2+1&\Delta_3\\l_1-1&l_2&l_3\\m_{23}&m_{13}&m_{12}\end{array}\right]\\+4(l_2-m_{12}-m_{23})\left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}&m_{13}&m_{12}+1\end{array}\right]-\\ \frac{1}{2}(2+2m_{12}-2m_{13}+\Delta_2-\Delta_1-\Delta_3-l_1-l_2+l_3)\left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}&m_{13}&m_{12}\end{array}\right] \,, \end{array} \label{RR2} \end{equation} and is the analogue of eq.(3.29) in ref.\cite{Costa:2011dw}. In this way all parity even tensor structures can be constructed starting from the seed correlator. Let us now turn to parity odd structures. The most general parity odd structure is given by \begin{equation} \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}&m_{13}&m_{12}\end{array}\right]_{odd} \equiv (I_{12}I_{23} I_{31}+I_{21}I_{32} I_{13}) \left[\begin{array}{ccc}\Delta_1+1&\Delta_2+1&\Delta_3+1\\l_1-1&l_2-1&l_3-1\\m_{23}&m_{13}&m_{12}\end{array}\right] \,. \label{ParityoddInd} \end{equation} Since the parity odd combination $(I_{12}I_{23} I_{31}+I_{21}I_{32} I_{13})$ commutes with $D_{1,2}$ and $\widetilde D_{1,2}$, the recursion relations found for parity even structures straightforwardly apply to the parity odd ones. One could define a ``parity odd seed'' \begin{equation} 16 l_3 (\Delta_3-1) \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\1&1&l_3\\0&0& 0\end{array}\right]_{odd} =(d_2 \bar d_1-\bar d_2 d_1)D_1D_2 \left[\begin{array}{ccc}\Delta_1+2&\Delta_2+2&\Delta_3\\0 & 0&l_3\\ 0 & 0 & 0 \end{array}\right] \label{parityoddseed} \end{equation} and from here construct all the parity odd structures.
Notice that the parity odd seed cannot be obtained by applying only combinations of $D_{1,2}$, $\widetilde D_{1,2}$ and $(I_{12}I_{21})$, because these operators are all invariant under parity, see eq.(\ref{parity}). This explains the appearance of the operators $d_i$ and $\bar d_i$ in eq.(\ref{parityoddseed}). The counting of parity even and odd structures manifestly agrees with that performed in ref.\cite{Costa:2011mg}. Having proved that all tensor structures can be reached by acting with operators on the seed correlator, one might define a differential basis which is essentially identical to that defined in eq.(3.31) of ref.\cite{Costa:2011dw}: \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber {\small{ \left\{\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}&m_{13}&m_{12}\end{array}\right\}_{\!\! 0} }= (I_{12} I_{21})^{m_{12}} D_1^{m_{13}} D_2^{m_{23}} \widetilde D_1^{j_1} \widetilde D_2^{j_2} {\small\left[\begin{array}{ccc}\Delta_1^\prime&\Delta_2^\prime&\Delta_3\\0 & 0&l_3\\ 0 & 0 & 0 \end{array}\right]} }\,, \label{DBeven} \ee where $\Delta_1^\prime=\Delta_1+l_1+m_{23}-m_{13}$, $\Delta_2^\prime=\Delta_2+l_2+m_{13}-m_{23}$. The recursion relations found above have shown that the differential basis (\ref{DBeven}) is complete: all parity even tensor structures can be written as linear combinations of eq.(\ref{DBeven}). The dimensionality of the differential basis matches that of the ordinary basis for any spins $l_1$, $l_2$ and $l_3$. Since both bases are complete, the transformation matrix relating them is guaranteed to have maximal rank. Its determinant, however, is a function of the scaling dimensions $\Delta_i$ and the spins $l_i$ of the fields and one should check that it does not vanish at specific values of $\Delta_i$ and $l_i$.
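The dimensionality statement above can be checked by brute-force enumeration: a parity even structure is a triple $(m_{23},m_{13},m_{12})$ obeying eq.(\ref{j123}), and by eq.(\ref{ParityoddInd}) the parity odd structures are in one-to-one correspondence with the parity even ones with all spins lowered by one unit. A small counting sketch of ours, for illustration only:

```python
from itertools import product

def n_even(l1, l2, l3):
    """Parity even structures of eq.(ParityevenInd): triples (m23, m13, m12)
    with the exponents j1, j2, j3 of eq.(j123) all non-negative."""
    return sum(
        1
        for m12, m13, m23 in product(
            range(min(l1, l2) + 1), range(min(l1, l3) + 1), range(min(l2, l3) + 1)
        )
        if l1 - m12 - m13 >= 0 and l2 - m12 - m23 >= 0 and l3 - m13 - m23 >= 0
    )

def n_odd(l1, l2, l3):
    """Parity odd structures: one odd prefactor times an even structure
    with all spins lowered by one, eq.(ParityoddInd)."""
    return n_even(l1 - 1, l2 - 1, l3 - 1) if min(l1, l2, l3) >= 1 else 0

# scalar-scalar-tensor seed: a unique (parity even) tensor structure
assert n_even(0, 0, 4) == 1 and n_odd(0, 0, 4) == 0
# three spin-1 operators: four parity even and one parity odd structure
assert n_even(1, 1, 1) == 4 and n_odd(1, 1, 1) == 1
```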
We have explicitly checked up to $l_1=l_2=2$ that for $l_3\geq l_1+l_2$ the rank of the transformation matrix depends only on $\Delta_3$ and $l_3$ and never vanishes, for any value of $\Delta_3$ allowed by the unitarity bound \cite{Mack:1975je}. On the other hand, a problem can arise when $l_3<l_1+l_2$, because in this case a dependence on the values of $\Delta_1$ and $\Delta_2$ arises and the determinant vanishes for specific values (depending on the $l_i$'s) of $\Delta_1-\Delta_2$ and $\Delta_3$, even when they are within the unitarity bounds.\footnote{A similar problem seems also to occur for the basis (3.31) of ref. \cite{Costa:2011dw} in vector notation.} This issue is easily solved by replacing $\widetilde D_{1,2}\rightarrow (\widetilde D_{1,2}+D_{1,2})$ in eq.(\ref{DBeven}), as suggested by the recursion relation (\ref{RR1}), and by defining an improved differential basis \begin{equation}\begin{aligned} & {\small{\left\{\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}&m_{13}&m_{12}\end{array}\right\} }}= (I_{12} I_{21})^{m_{12}} D_1^{m_{13}} D_2^{m_{23}} \!\!\sum_{n_1=0}^{j_1}\!\!\Big( \begin{array}{c} j_1 \\ n_1\end{array}\Big) D_1^{n_1} \widetilde D_1^{j_1-n_1} \!\!\sum_{n_2=0}^{j_2}\!\!\Big( \begin{array}{c} j_2 \\ n_2\end{array}\Big) D_2^{n_2} \widetilde D_2^{j_2-n_2} {\small{\left[\begin{array}{ccc}\Delta_1^{\prime}&\Delta_2^{\prime} &\Delta_3\\0 & 0&l_3\\ 0 & 0 & 0 \end{array}\right] } } \label{DBevenNewST} \end{aligned} \end{equation} where $\Delta_1^{\prime} = \Delta_1 + l_1+m_{23}-m_{13}+n_2-n_1$, $\Delta_2^{\prime} = \Delta_2 + l_2+m_{13}-m_{23}+n_1-n_2$. A similar basis for parity odd structures is given by \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber {\small{ \left\{\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}&m_{13}&m_{12}\end{array}\right\}_{odd} }} \!\!\! 
=(d_2 \bar d_1-\bar d_2 d_1)D_1 D_2 {\small{\left\{\begin{array}{ccc}\Delta_1+2 &\Delta_2+2&\Delta_3\\ l_1-1 & l_2-1 &l_3\\ m_{23} & m_{13} & m_{12} \end{array}\right\} }} \,. \label{DBodd} \ee In practical computations it is more convenient to use the differential basis rather than the recursion relations and, if necessary, use the transformation matrix to rotate the results back to the ordinary basis. We have explicitly constructed the improved differential basis (\ref{DBevenNewST}) and (\ref{DBodd}) up to $l_1=l_2=2$. The rank of the transformation matrix depends on $\Delta_3$ and $l_3$ for any value of $l_3$, and never vanishes, for any value of $\Delta_3$ allowed by the unitarity bound.\footnote{The transformation matrix is actually not of maximal rank when $l_3=0$ and $\Delta_3=1$. However, this case is quite trivial. The exchanged scalar is free and hence the CFT is the direct sum of at least two CFTs, the interacting one and the free theory associated to this scalar. So, either the two external $l_1$ and $l_2$ tensors are part of the free CFT, in which case the whole correlator is determined, or the OPE coefficients entering the correlation function must vanish.\label{foot:1}} \subsection{Mixed Symmetry Exchanged Operators} \label{subsec:DBAO} In this subsection we consider correlators with two traceless symmetric and one mixed symmetry operator $O^{(l_3,\bar l_3)}$, with $l_3-\bar l_3=2\delta$, where $\delta$ is an integer. A correlator of this form has $\Delta l= 2 \delta$ and according to the analysis of section \ref{sec:class}, any of its tensor structures can be expressed in a form containing an overall number $\delta$ of $K_{i,jk}$'s if $\delta>0$, or $\overline K_{i,jk}$'s if $\delta<0$. We consider in the following $\delta>0$, the case $\delta<0$ being easily deduced from $\delta>0$ by means of a parity transformation. The analysis will proceed along the same lines as subsection \ref{subsec:DBTSO}.
We first introduce a convenient parametrization of the tensor structures of the correlator; we then prove, by deriving recursion relations, that all tensor structures can be reached starting from a single seed, to be determined; finally, we present a differential basis. We first consider the situation where $l_3\geq l_1+l_2-\delta$ and then the slightly more involved case with unconstrained $l_3$. \subsubsection{Recursion Relations for $l_3\geq l_1+l_2-\delta$} \label{subsubsec:largel3} It is convenient to look for a parametrization of the tensor structures which is as close as possible to the one (\ref{ParityevenInd}) valid for $\delta=0$. When $l_3\geq l_1+l_2-\delta$, any tensor structure of the correlator contains enough $J_{3,12}$ invariants to remove all possible $K_{3,12}$ invariants using the identity \begin{equation}\label{JK3} J_{3,12} K_{3,12} =2 I_{31} K_{1,23} - 2 I_{32} K_{2,31}\,. \end{equation} There are four possible combinations in which the remaining $K_{1,23}$ and $K_{2,31}$ invariants can enter the correlator: $K_{1,23} I_{23}$, $K_{1,23} I_{21} I_{13}$ and $K_{2,31} I_{13}$, $K_{2,31} I_{12} I_{23}$. These structures are not all independent. In addition to eq.(\ref{JK3}), using the two identities \begin{equation}\begin{aligned}\label{JK12} 2 I_{12} K_{2,31} &= J_{1,23} K_{1,23} + 2 I_{13} K_{3,12} \,, \\ 2 I_{21} K_{1,23} &= - J_{2,31} K_{2,31} + 2 I_{23} K_{3,12} \,, \end{aligned}\end{equation} we can remove half of them and keep only, say, $K_{1,23} I_{23}$ and $K_{2,31} I_{13}$.
The most general tensor structure can be written as \begin{equation}{\small{ \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}&m_{13}&m_{12}\end{array}\right] _p \equiv \Big(\frac{K_{1,23} I_{23}}{X_{23}}\Big)^{\delta-p} \Big(\frac{K_{2,31} I_{13}}{X_{13}}\Big)^{p} \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1-p&l_2-\delta+p &l_3\\m_{23}&m_{13}&\widetilde m_{12}\end{array}\right] \,, \ \ \ p=0,\ldots,\delta \,,}} \label{deltaGenExpST} \end{equation} expressed in terms of the parity even structures (\ref{ParityevenInd}) of traceless symmetric correlators, where \begin{equation}\begin{aligned}\label{j123AO} j_1 & = l_1-p - \widetilde m_{12}- m_{13} \geq 0 \,, \\ j_2 & = l_2-\delta+p -\widetilde m_{12}-m_{23}\geq 0 \,, \\ j_3 & =l_3 -m_{13}-m_{23} \geq 0 \end{aligned} \hspace{2cm} \begin{aligned} \widetilde m_{12} = \left\{\begin{array}{ll}m_{12} & \text{if}\ \ p=0\ \text{or}\ p= \delta \\ 0 & \text{otherwise} \end{array}\right. \,. \end{aligned} \end{equation} The condition on $m_{12}$ follows from the fact that, using eqs.(\ref{JK12}), one can set $m_{12}$ to zero in the tensor structures with $p\neq 0,\delta$, see below. Attention should be paid to the subscript $p$. Structures with no subscript refer to purely traceless symmetric correlators, while those with the subscript $p$ refer to three-point functions with two traceless symmetric and one mixed symmetry field. All tensor structures are classified in terms of $\delta+1$ classes, parametrized by the index $p$ in eq.(\ref{deltaGenExpST}). The parity odd structures of traceless symmetric correlators do not enter, since they can be reduced to the form (\ref{deltaGenExpST}) by means of the identities (\ref{JK12}). The class $p$ exists only when $l_1\geq p$ and $l_2\geq \delta-p$. If $l_1+l_2<\delta$, the entire correlator vanishes. In contrast to the traceless symmetric exchange, no obvious choice of seed stands out.
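The classification (\ref{deltaGenExpST})-(\ref{j123AO}) can again be enumerated mechanically. The sketch below (our own check, for illustration only) confirms, for instance, that the $\delta=0$ counting reduces to the parity even traceless symmetric one, that each seed carries a unique structure, and that the correlator vanishes when $l_1+l_2<\delta$:

```python
from itertools import product

def n_even(l1, l2, l3):
    """Parity even traceless symmetric structures, constraints of eq.(j123)."""
    return sum(
        1
        for m12, m13, m23 in product(
            range(min(l1, l2) + 1), range(min(l1, l3) + 1), range(min(l2, l3) + 1)
        )
        if l1 - m12 - m13 >= 0 and l2 - m12 - m23 >= 0 and l3 - m13 - m23 >= 0
    )

def n_mixed(l1, l2, l3, delta):
    """Structures of eq.(deltaGenExpST): delta + 1 classes, with m12 allowed
    only in the boundary classes p = 0 and p = delta."""
    count = 0
    for p in range(delta + 1):
        L1, L2 = l1 - p, l2 - delta + p   # spins of the auxiliary TS structure
        if L1 < 0 or L2 < 0:
            continue   # the class p requires l1 >= p and l2 >= delta - p
        m12_max = min(L1, L2) if p in (0, delta) else 0
        for m12, m13, m23 in product(
            range(m12_max + 1), range(min(L1, l3) + 1), range(min(L2, l3) + 1)
        ):
            if L1 - m12 - m13 >= 0 and L2 - m12 - m23 >= 0 and l3 - m13 - m23 >= 0:
                count += 1
    return count

# delta = 0 reduces to the parity even traceless symmetric counting
assert n_mixed(1, 1, 2, 0) == n_even(1, 1, 2) == 5
# the seed correlators l1 = p, l2 = delta - p carry a unique structure
assert n_mixed(0, 2, 4, 2) == 1
# the correlator vanishes altogether when l1 + l2 < delta
assert n_mixed(1, 1, 5, 3) == 0
```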
The allowed correlators with the lowest possible spins in each class, $l_1=p$, $l_2=\delta-p$, $m_{ij}=0$, can all be seen as possible seeds with a unique tensor structure. Let us see how all the structures (\ref{deltaGenExpST}) can be iteratively constructed using the operators defined in section \ref{sec:OB} in terms of the $\delta+1$ seeds. It is convenient to first construct a redundant basis where $m_{12}\neq 0$ for any $p$ and then impose the relation that leads to the independent basis (\ref{deltaGenExpST}). The procedure is similar to that followed for the traceless symmetric exchange. We first construct all the tensor structures with $m_{13}=m_{23}=0$ for any spins $l_1$ and $l_2$, and any class $p$, using the following relations: {\small{\begin{eqnarray}} \def\eea{\end{eqnarray} && D_1 \left[\begin{array}{ccc}\Delta_1&\Delta_2+1&\Delta_3\\l_1-1&l_2&l_3\\ 0& 0&m_{12}\end{array}\right] _p+ \widetilde D_1 \left[\begin{array}{ccc}\Delta_1+1&\Delta_2&\Delta_3\\l_1-1&l_2&l_3\\0& 0&m_{12}\end{array}\right] _p = (\delta-p) \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\ 0& 0&m_{12}\end{array}\right] _{p+1} \\ && \!\!\!\! -8(l_2-\delta+p-m_{12}) \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\ 0& 0&m_{12}+1\end{array}\right] _{p}+(2m_{12}-l_1-l_2-\Delta_3+2+\delta-p) \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\ 0& 0&m_{12}\end{array}\right] _{p}, \nn \eea}} together with the relation \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber {\small{ \left[\begin{array}{ccc}\Delta_1-1&\Delta_2-1&\Delta_3\\l_1+1&l_2+1&l_3\\ 0& 0&m_{12}+1\end{array}\right] _p = (I_{12}I_{21}) \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\0& 0&m_{12}\end{array}\right] _p \,.}} \ee Notice that the operators $D_{1,2}$ and $\widetilde D_{1,2}$ relate nearest neighbour classes and the iteration eventually involves all classes at the same time.
The action of the $D_2$ and $\widetilde D_2$ derivatives can be obtained by replacing $1\leftrightarrow 2$, $p\leftrightarrow (\delta-p)$ in the coefficients multiplying the structures and $p+1\rightarrow p-1$ in the subscripts, and by changing sign on one side of the equation. Structures with non-vanishing $m_{13}$ and $m_{23}$ are obtained using {\small{\begin{eqnarray}} \def\eea{\end{eqnarray} && 4(l_3-m_{13}-m_{23}+\delta-p) \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}&m_{13}+1&m_{12}\end{array}\right]_p - 4(\delta-p) \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}+1&m_{13}&m_{12}\end{array}\right]_{p+1} = \nn \\ && \hspace{2cm} 4 (l_2-\delta+p-m_{23}-m_{12}) \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}&m_{13}&m_{12}+1\end{array}\right]_p +D_1 \left[\begin{array}{ccc}\Delta_1&\Delta_2+1&\Delta_3\\l_1-1&l_2&l_3\\m_{23}&m_{13}&m_{12}\end{array}\right] _p \\ && \hspace{2cm} - \frac 12 (2m_{12}-2m_{13}+\Delta_2-\Delta_1-\Delta_3 -l_1-l_2+l_3+2\delta-2p+2) \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}&m_{13}&m_{12}\end{array}\right]_p \nn \eea}} together with the corresponding relation with $1\leftrightarrow 2$ and $p\rightarrow p+1$. All the structures (\ref{deltaGenExpST}) are hence derivable from $\delta+1$ seeds by acting with the operators $D_{1,2}$, $\widetilde D_{1,2}$ and $(I_{12}I_{21})$. The seeds, on the other hand, are all related by means of the following relation: \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber (\delta-p)^2\left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\ p+1 &\delta-p-1 &l_3\\0&0&0\end{array}\right] _{p+1} =R \left[\begin{array}{ccc}\Delta_1+1&\Delta_2+1&\Delta_3\\ p &\delta-p &l_3\\0&0&0\end{array}\right] _{p} \,, \ee where \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber R\equiv -\frac 12 \bar d_2 d_2 \,. 
\ee We conclude that, starting from the single seed correlator with $p=0$, \begin{equation}{\small{ \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\0&\delta &l_3\\0&0&0\end{array}\right] _0 \equiv \Big(\frac{K_{1,23} I_{23}}{X_{23}}\Big)^{\delta} \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\0& 0 &l_3\\0&0&0\end{array}\right] \,,}} \label{deltaGenExpSTseed} \end{equation} namely the three-point function of a scalar, a spin $\delta$ traceless symmetric operator and the mixed symmetry operator with spin $(l_3+2\delta,l_3)$, we can obtain all tensor structures of higher spin correlators. Let us now see how the constraint on $m_{12}$ in eq.(\ref{j123AO}) arises. When $p\neq 0, \delta$, namely when both $K_1$ and $K_2$ structures appear at the same time, combining eqs.(\ref{JK12}), the following relation is shown to hold: {\small{ \begin{eqnarray}} \def\eea{\end{eqnarray} && \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}&m_{13}&m_{12}\!+\!1\end{array}\right] _p = -\frac 14 \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}&m_{13}&m_{12}\end{array}\right]_p- \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}&m_{13}\!+\!1&m_{12}\end{array}\right]_p- \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}\!+\!1&m_{13}&m_{12}\end{array}\right]_p \nn \\ &&\hspace{1.5cm} -8 \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}\!+\!1&m_{13}\!+\!1&m_{12}\end{array}\right]_{p}+ \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}&m_{13}\!+\!1&m_{12}\end{array}\right]_{p-1}+4 \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}&m_{13}\!+\!2&m_{12}\end{array}\right]_{p-1}\nn \\ && \hspace{1.5cm} + \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}\!+\!1&m_{13}&m_{12}\end{array}\right]_{p+1}+4 \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}\!+\!2&m_{13}&m_{12}\end{array}\right]_{p+1}\,. 
\label{deltaGenRelInd} \eea}} Using it iteratively, we can reduce all structures with $p\neq 0, \delta$ to those with $m_{12}=0$ and with $p=0,\delta$, any $m_{12}$.\footnote{One has to recall the range of the parameters (\ref{j123AO}), otherwise it might seem that non-existent structures can be obtained from eq.(\ref{deltaGenRelInd}).} This proves the validity of eq.(\ref{deltaGenExpST}). As a further check, we have verified that the number of tensor structures obtained from eq.(\ref{deltaGenExpST}) agrees with that found from eq.(3.38) of ref.\cite{Elkhidir:2014woa}. \subsubsection{Recursion Relations for general $l_3$} \label{subsubsec:smalll3} The tensor structures of correlators with $l_3< l_1+l_2-\delta$ cannot all be reduced to the form (\ref{deltaGenExpST}), because we are no longer guaranteed to have enough $J_{3,12}$ invariants to remove all the $K_{3,12}$'s by means of eq.(\ref{JK3}). In this case the most general tensor structure reads {\small{ \begin{equation} \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}&m_{13}&m_{12}\end{array}\right] _{p,q} \!\!\! \equiv\eta \Big(\frac{K_{1,23} I_{23}}{X_{23}}\Big)^{\delta-p} \Big(\frac{K_{2,31} I_{13}}{X_{13}}\Big)^{q} \Big(\frac{K_{3,12} I_{13} I_{23}}{\sqrt{X_{12}X_{13}X_{23}}}\Big)^{p-q} \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1-p &l_2-\delta+q &l_3\\m_{23}&m_{13}&\widetilde m_{12}\end{array}\right] \,, \label{deltaGenExpSTlowl3} \end{equation}}} with $p=0,\ldots,\delta$, $q=0,\ldots,\delta$, $p-q\geq 0$ and \begin{equation}\begin{aligned}\label{j123AOlowl} j_1 & = l_1-p - \widetilde m_{12}- m_{13} \geq 0\,, \\ j_2 & =l_2-\delta+q -\widetilde m_{12}-m_{23}\geq 0 \,,\\ j_3 & =l_3 -m_{13}-m_{23} \geq 0 \,, \end{aligned}\hspace{1cm} \begin{aligned}\ \widetilde m_{12} & = \left\{\begin{array}{ll}m_{12} & \text{if}\ \ q=0\ \text{or}\ p= \delta \\ 0 & \text{otherwise} \end{array}\right.
\\ \eta & = \left\{\begin{array}{l}0 \,\,\,\,\;\;\; if \;\;\;\; j_3>0 \,\,\,and \,\,\, p\neq q\\ 1 \,\,\, \;\;\;otherwise \end{array}\right. \,. \end{aligned}\end{equation} The parameter $\eta$ in eq.(\ref{j123AOlowl}) is necessary because the tensor structures involving $K_{3,12}$ (i.e. those with $p\neq q$) are independent only when $j_3=0$, namely when the traceless symmetric structure does not contain any $J_{3,12}$ invariant. All the tensor structures (\ref{deltaGenExpSTlowl3}) can be reached starting from the single seed with $p=0$, $q=0$, $l_1=0$, $l_2=\delta$ and $m_{ij}=0$. The analysis follows quite closely the one made for $l_3\geq l_1+l_2-\delta$, although it is slightly more involved. As before, it is convenient to first construct a redundant basis in which $m_{12}$ can be non-vanishing for any $p,q$ and the factor $\eta$ above is neglected, and only later impose the relations that lead to the independent basis (\ref{deltaGenExpSTlowl3}). We start from the structures with $p=q$, which are the same as those in eq.(\ref{deltaGenExpST}): first construct the structures with $m_{13}=m_{23}=0$ by applying iteratively the operators $D_{1,2}+\widetilde D_{1,2}$, and then apply $D_1$ and $D_2$ to get the structures with non-vanishing $m_{13}$ and $m_{23}$. Structures with $p\neq q$ appear when acting with $D_1$ and $D_2$.
We have: {\small{\begin{eqnarray}} \def\eea{\end{eqnarray} && D_1 \left[\begin{array}{ccc}\Delta_1&\Delta_2+1&\Delta_3\\l_1-1&l_2&l_3\\m_{23}&m_{13}&m_{12}\end{array}\right] _{p,p} = 2(\delta-p) \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}&m_{13}&m_{12}\end{array}\right] _{p+1,p} \label{D1lowl3} \\ && -4(l_2+p-\delta-m_{12}-m_{23}) \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}&m_{13}&m_{12}+1\end{array}\right] _{p,p} +4(l_3-m_{13}-m_{23}) \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}&m_{13}+1&m_{12}\end{array}\right] _{p,p} \nn \\ && \hspace{2cm} + \frac 12 \Big(2m_{12}-2m_{13}+\Delta_2-\Delta_1-\Delta_3 -l_1-l_2+l_3+2(\delta-p+1)\Big) \left[\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}&m_{13}&m_{12}\end{array}\right]_{p,p} \nn \,. \eea}} The action of $D_2$ is obtained by exchanging $1\leftrightarrow 2$ and $\delta-p\leftrightarrow q$ in the coefficients multiplying the structures and replacing the subscript $(p+1,p)$ with $(p,p-1)$. For $m_{13}+m_{23}<l_3$ the first term in eq.(\ref{D1lowl3}) is redundant and can be expressed in terms of the known structures with $p=q$. An irreducible structure is produced only when we reach the maximum allowed value $m_{13}+m_{23}=l_3$, in which case the third term in eq.(\ref{D1lowl3}) vanishes and we can use the equation to get the irreducible structures with $p\neq q$. Summarizing, all tensor structures can be obtained starting from a single seed upon the action of the operators $D_{1,2}$, $(D_{1,2}+\widetilde D_{1,2})$, $I_{12} I_{21}$ and $R$. 
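The counting implicit in eqs.(\ref{deltaGenExpSTlowl3}) and (\ref{j123AOlowl}) can be checked by brute force. The following minimal sketch (the function name and loop bounds are ours, not from the text) enumerates the labels $(p,q,m_{23},m_{13},\widetilde m_{12})$ subject to the stated constraints; for a seed configuration $l_1=0$, $l_2=\delta$ it returns a single structure, as expected.

```python
def count_structures(l1, l2, l3, delta):
    """Count the labels (p, q, m23, m13, m12) of eq. (deltaGenExpSTlowl3)
    subject to the constraints (j123AOlowl)."""
    count = 0
    for p in range(delta + 1):
        for q in range(p + 1):  # q = 0, ..., delta with p - q >= 0
            for m13 in range(l1 + 1):
                for m23 in range(l2 + 1):
                    for m12 in range(l1 + 1):
                        # tilde m12 is forced to 0 unless q = 0 or p = delta
                        if m12 != 0 and not (q == 0 or p == delta):
                            continue
                        j1 = l1 - p - m12 - m13
                        j2 = l2 - delta + q - m12 - m23
                        j3 = l3 - m13 - m23
                        if j1 < 0 or j2 < 0 or j3 < 0:
                            continue
                        # eta = 0: structures with K_{3,12} (p != q) are
                        # independent only when j3 = 0
                        if p != q and j3 > 0:
                            continue
                        count += 1
    return count

# The seed correlators (l1 = 0, l2 = delta) carry a single structure:
assert count_structures(0, 1, 2, 1) == 1
assert count_structures(0, 2, 5, 2) == 1
```

The enumerator is a convenient sanity check when comparing against the counting of independent structures obtained by other means.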
\subsubsection{Differential Basis} \label{subsubsec:DBAnti} A differential basis that is well defined for any value of $l_1$, $l_2$, $l_3$ and $\delta$ is \begin{equation}\begin{aligned} {\small{\left\{\begin{array}{ccc}\Delta_1&\Delta_2&\Delta_3\\l_1&l_2&l_3\\m_{23}&m_{13}&m_{12}\end{array}\right\}_{p,q} }}& = \eta\, (I_{12} I_{21})^{\widetilde m_{12}} D_1^{m_{13}+p-q} D_2^{m_{23}}\sum_{n_1=0}^{j_1}\Big( \begin{array}{c} j_1 \\ n_1\end{array}\Big) D_1^{n_1} \widetilde D_1^{j_1-n_1} \sum_{n_2=0}^{j_2}\Big( \begin{array}{c} j_2 \\ n_2\end{array}\Big) \\ & D_2^{n_2} \widetilde D_2^{j_2-n_2} R^q {\small{\left[\begin{array}{ccc}\Delta_1^\prime&\Delta_2^\prime &\Delta_3\\0 & \delta &l_3\\ 0 & 0 & 0 \end{array}\right]_0 } }, \label{DBevenNew} \end{aligned} \end{equation} where $\Delta_1^{\prime} = \Delta_1 + l_1+m_{23}-m_{13}+n_2-n_1-p +q$, $\Delta_2^{\prime} = \Delta_2 + l_2+m_{13}-m_{23}+n_1-n_2+2q-\delta$, and all parameters are defined as in eq.(\ref{j123AOlowl}). The recursion relations found above have shown that the differential basis (\ref{DBevenNew}) is complete. One can also check that its dimensionality matches the one of the ordinary basis for any $l_1$, $l_2$, $l_3$ and $\delta$. Like in the purely traceless symmetric case, the specific choice of operators made in eq.(\ref{DBevenNew}) seems to be enough to ensure that the determinant of the transformation matrix is non-vanishing regardless of the choice of $\Delta_1$ and $\Delta_2$. We have explicitly checked this result up to $l_1=l_2=2$, for any $l_3$. The transformation matrix is always of maximal rank, except for the case $l_3=0$ and $\Delta_3=2$, which saturates the unitarity bound for $\delta=1$. Luckily enough, this case is quite trivial, being associated to the exchange of a free $(2,0)$ self-dual tensor \cite{Siegel:1988gd} (see footnote \ref{foot:1}). 
The specific ordering of the differential operators is a choice motivated by the form of the recursion relations, as before, and different orderings can be trivially related by using the commutators defined in eq.(\ref{commutators}). \section{Computation of Four-Point Functions} We have shown in section \ref{sec:cpw} how relations between three-point functions lead to relations between CPW. The latter are parametrized by 4-point, rather than 3-point, function tensor structures, so in order to make further progress it is important to classify four-point functions. It should be clear that even when acting on scalar quantities, tensor structures belonging to the class of 4-point functions are generated. For example $\widetilde D_{1}U=-UJ_{1,24}$. We postpone a general classification to future work; in the following subsection we present a preliminary analysis, sufficient to study the four fermion correlator example in subsection \ref{subsec:4fer}. \subsection{Tensor Structures of Four-Point Functions} In 6D, the index-free uplift of the four-point function (\ref{Gen4pt}) reads \begin{equation} \langle O_1O_2O_3O_4 \rangle=\mathcal{K}_4\;\sum_{n=1}^{N_{4}}g_n(U,V) \; \mathcal{T}^n(S_1,\bar{S}_1,..,S_4,\bar{S}_4), \end{equation} where $\mathcal{T}^n$ are the 6D uplifts of the tensor structures appearing in eq.(\ref{Gen4pt}). The 6D kinematic factor $\mathcal{K}_4$ and the conformally invariant cross ratios $(U,V)$ are obtained from their 4D counterparts by the replacement $x_{ij}^2\rightarrow X_{ij}$ in eqs.(\ref{eq:4kinematic}) and (\ref{uv4d}).
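Explicitly, assuming the standard 4D definitions of the cross ratios in eq.(\ref{uv4d}), the replacement gives
\begin{equation}
U = \frac{X_{12}X_{34}}{X_{13}X_{24}}\,, \qquad V = \frac{X_{14}X_{23}}{X_{13}X_{24}}\,.
\end{equation}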
The tensor structures $\mathcal{T}^n$ are formed from the three-point invariants~(\ref{eq:invar1})-(\ref{eq:invar4}) (where $i,j,k$ now run from 1 to 4) and the following new ones: \begin{align} \label{eq:invar5} J_{ij,kl} &\equiv N_{kl}\, \bar S_i \mathbf{X}_k \overline{\mathbf{X}}_l S_j \,, \\ \label{eq:invar6} K_{i,jkl} &\equiv N_{jkl}\, S_i \overline{\mathbf{X}}_j \mathbf{X}_k \overline{\mathbf{X}}_l S_i \,, \\ \label{eq:invar7} \overline{K}_{i,jkl} &\equiv N_{jkl}\, \bar S_i \mathbf{X}_j \overline{\mathbf{X}}_k \mathbf{X}_l \bar S_i \,, \end{align} where $i\neq j\neq k\neq l=1,2,3,4$; $K_{i,jkl}$ and $\overline{K}_{i,jkl}$ are totally anti-symmetric in the last three indices and the normalization factor is given by \begin{equation} N_{jkl}\equiv \frac{1}{\sqrt{X_{jk}X_{kl}X_{lj}}}. \end{equation} The invariants $J_{ij,kl}$ satisfy the relations $J_{ij,kl}=-J_{ij,lk}+2I_{ij}$. Given that, and the 4D parity transformations $K_{i,jkl} \stackrel{{\cal P}}{\longleftrightarrow} \overline{K}_{i,jkl}$ and $J_{ij,kl} \stackrel{{\cal P}}{\longleftrightarrow}-J_{ji,lk}$, a convenient choice of index ordering in $J_{ij,kl}$ is $(i<j,\,k<l)$ and $(i>j,\,k>l)$. Two other invariants, $H\equiv S_1 S_2 S_3 S_4$ and $\bar H\equiv \bar S_1 \bar S_2 \bar S_3 \bar S_4$, formed by using the $SU(2,2)$ epsilon symbols, are redundant. For instance, one has $X_{12}H=K_{2,14}K_{1,23}-K_{1,24}K_{2,13}$. Any four-point function can be expressed as a sum of products of the invariants (\ref{eq:invar1})-(\ref{eq:invar4}) and (\ref{eq:invar5})-(\ref{eq:invar7}). However, not every product is independent, due to several relations between them. Leaving the search for all possible relations to future work, we report a small subset of them in Appendix~\ref{app:relations}. Having a general classification of 4-point tensor structures is crucial to bootstrap a four-point function with non-zero external spins.
When we equate correlators in different channels, we have to identify all the factors in front of the same tensor structure; it is thus important to have a common basis of independent tensor structures. \subsection{Counting 4-Point Function Structures} In the absence of a general classification of 4-point functions, we cannot directly count the number $N_4$ of their tensor structures. However, as we already emphasized in ref.\cite{Elkhidir:2014woa}, the knowledge of 3-point functions and the OPE should be enough to infer $N_4$ by means of eq.(\ref{N4N3}). In this subsection we show how to use eq.(\ref{N4N3}) to determine $N_4$, in particular when parity and permutation symmetries are imposed. If the external operators are traceless symmetric, the CPW can be divided into parity even and parity odd. This is clear when the exchanged operator is also traceless symmetric: $W^{(p,q)}_{O(l,l)+} \stackrel{{\cal P}}{\longrightarrow}W^{(p,q)}_{O(l,l)+}$ if the 3-point structures $p$ and $q$ are both parity even or both parity odd, $W^{(p,q)}_{O(l,l)-} \stackrel{{\cal P}}{\longrightarrow}-W^{(p,q)}_{O(l,l)-}$ if only one of the structures $p$ or $q$ is parity odd. For mixed symmetry exchanged operators $O^{l+2\delta,l}$ or $O^{l,l+2\delta}$, we have $W^{(p,q)}_{O(l+2\delta,\,l)} \stackrel{{\cal P}}{\longrightarrow}W^{(p,q)}_{O(l,\,l+2\delta)}$, so that $W^{(p,q)}_{Or+}=W^{(p,q)}_{Or}+W^{(p,q)}_{O\bar r}$ is parity even and $W^{(p,q)}_{Or-}=W^{(p,q)}_{Or}-W^{(p,q)}_{O\bar r}$ is parity odd. If parity is conserved, only parity even or odd CPW survive, according to the parity transformation of the external operators.
The numbers of parity even and parity odd 4-point tensor structures are \begin{equation}\begin{aligned}\label{n4Parity} N_{4+}&=N^{12}_{3(l,l)+}N^{34}_{3(l,l)+}+N^{12}_{3(l,l)-}N^{34}_{3(l,l)-}+\sum_{r\neq(l,l)}\frac{1}{2}N^{12}_{3r}N^{34}_{3\bar r} \,, \\ N_{4-}&=N^{12}_{3(l,l)-}N^{34}_{3(l,l)+}+N^{12}_{3(l,l)+}N^{34}_{3(l,l)-}+\sum_{r\neq(l,l)}\frac{1}{2}N^{12}_{3r}N^{34}_{3\bar r}\,. \end{aligned}\end{equation} The numbers $N_{4+}$ and $N_{4-}$ in eq.(\ref{n4Parity}) are always integers, because in the sum over $r$ one has to consider separately $r=(l,\bar l)$ and $r=(\bar l,l)$,\footnote{Recall that the sum over $r$ is not an infinite sum over all possible spins, but a finite sum over the different classes of representations, see eq.(\ref{sch4pt}) and text below.} which give an equal contribution that compensates for the factor 1/2. When some of the external operators are equal, permutation symmetry should be imposed. We consider here only the permutations $1\leftrightarrow3$, $2\leftrightarrow4$ and $1\leftrightarrow 2$, $3\leftrightarrow4$ that leave $U$ and $V$ invariant and simply give rise to a reduced number of tensor structures. Other permutations would give relations among the various functions $g_n(U,V)$ evaluated at different values of their argument. If $O_1=O_3$, $O_2=O_4$, the CPW in the s-channel transform as follows under the permutation $1\leftrightarrow3$, $2\leftrightarrow4$: $W^{(p,q)}_{Or}\xrightarrow{per} W^{(q,p)}_{O\bar r}$. We then have $W^{(p,q)}_{O(l,l)+}=W^{(q,p)}_{O(l,l)+}$, $W^{(p,q)}_{O(l,l)-}=W^{(q,p)}_{O(l,l)-}$, $W^{(p,q)}_{Or+}=W^{(q,p)}_{Or+}$, $W^{(p,q)}_{Or-}=-W^{(q,p)}_{Or-}$.
The numbers of parity even and parity odd 4-point tensor structures in this case are \begin{equation}\begin{aligned}\label{n4ParityPermutation} N_{4+}^{per}&=\frac12N^{12}_{3(l,l)+}(N^{34}_{3(l,l)+}+1)+\frac12N^{12}_{3(l,l)-}(N^{34}_{3(l,l)-}+1)+\sum_{r\neq(l,l)}\frac{1}{4}N^{12}_{3r}(N^{12}_{3r}+1)\,, \\ N_{4-}^{per}&=N^{12}_{3(l,l)-}N^{12}_{3(l,l)+}+\sum_{r\neq(l,l)}\frac{1}{4}N^{12}_{3r}(N^{12}_{3r}-1) \,, \end{aligned}\end{equation} where again in the sum over $r$ one has to consider separately $r=(l,\bar l)$ and $r=(\bar l,l)$. If $O_1=O_2$, $O_3=O_4$, the permutation $1\leftrightarrow2$, $3\leftrightarrow4$ reduces the number of tensor structures of the CPW in the s-channel, $N^{12}_3\rightarrow N^{1=2}_3\leq N^{12}_3$ and $N^{34}_3\rightarrow N^{3=4}_3\leq N^{34}_3$. Conservation of external operators has a similar effect. \subsection{Relation between ``Seed" Conformal Partial Waves} Using the results of the last section, we can compute the CPW associated to the exchange of arbitrary operators with external traceless symmetric fields, in terms of a set of seed CPW, schematically denoted by $W_{{\cal O}^{l+2\delta,l}}^{(p,q)}(l_1,l_2,l_3,l_4)$. We have \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber W_{O^{l+2\delta,l}}^{(p,q)}(l_1,l_2,l_3,l_4) = D_{(12)}^{(p)} D_{(34)}^{(q)} W_{O^{l+2\delta,l}}(0,\delta,0,\delta)\,, \label{Wl1234} \ee where $D_{(12)}^{(p)}$ schematically denotes the action of the differential operators reported in the last section, and $D_{(34)}^{(q)}$ denotes the same operators for the fields at $X_3$ and $X_4$, obtained by replacing $1\rightarrow 3$, $2\rightarrow 4$ everywhere in eqs.(\ref{DDtilde})-(\ref{Dtilde12}) and (\ref{nablaS}). For simplicity we do not report the dependence of $W$ on $U,V$, and on the scaling dimensions of the external and exchanged operators. The seed CPW are the simplest among the ones appearing in correlators of traceless symmetric tensors, but they are {\it not} the simplest in general.
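The counting of eqs.(\ref{n4Parity}) and (\ref{n4ParityPermutation}) is easy to script. The sketch below (function names are ours) specializes both formulas to the case $N^{12}=N^{34}$, listing each mixed class $r=(l+2d,l)$, $d>0$, once (its conjugate contributes an equal amount, so the factors of $1/2$ and $1/4$ combine); fed with the spin 1 multiplicities of table \ref{tableConservedCurrent} it reproduces $N_4 = 43_+ + 27_-$ for distinct vectors, $19_+ + 3_-$ for identical ones and $7_+$ for conserved currents.

```python
def n4_parity(n_plus, n_minus, mixed):
    """Eq. (n4Parity) for distinct operators with N^{12} = N^{34}.
    `mixed` lists the 3-point multiplicities N_{3r} for each class
    r = (l+2d, l), d > 0; the conjugate class contributes equally."""
    plus = n_plus**2 + n_minus**2 + sum(n * n for n in mixed)
    minus = 2 * n_plus * n_minus + sum(n * n for n in mixed)
    return plus, minus

def n4_perm(n_plus, n_minus, mixed):
    """Eq. (n4ParityPermutation) for one l-parity sector (even or odd);
    the total is the sum of the even-l and odd-l calls."""
    plus = (n_plus * (n_plus + 1)) // 2 + (n_minus * (n_minus + 1)) // 2 \
        + sum(n * (n + 1) // 2 for n in mixed)
    minus = n_plus * n_minus + sum(n * (n - 1) // 2 for n in mixed)
    return plus, minus

def total(even, odd):
    return tuple(e + o for e, o in zip(even, odd))

# Spin 1 externals (multiplicities from table tableConservedCurrent):
print(n4_parity(5, 1, [4, 1]))                              # (43, 27)
print(total(n4_perm(4, 0, [2, 1]), n4_perm(1, 1, [2, 0])))  # (19, 3)
print(total(n4_perm(2, 0, [1, 1]), n4_perm(0, 1, [1, 0])))  # (7, 0)

# Spin 2 externals (table tableConservedTensor): 594 + 513 = 1107 structures
print(n4_parity(14, 5, [16, 10, 4, 1]))                     # (594, 513)
```

The last line also recovers the total of 1107 tensor structures quoted below for the correlator of four traceless symmetric spin 2 tensors.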
The simplest CPW are those arising from the four-point functions with the {\it lowest} number of tensor structures that still allow a non-vanishing contribution of the field $O^{l+2\delta,l}$ in some of the OPE channels. Such minimal four-point functions are\footnote{Instead of eq.(\ref{4ptanti}) one could also use 4-point functions with two scalars and two $O^{(0,2\delta)}$ fields or two scalars and two $O^{(2\delta,0)}$ fields. Both have the same number $2\delta+1$ of tensor structures as the correlator (\ref{4ptanti}).} \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber \langle O^{(0,0)}(X_1) O^{(2\delta,0)}(X_2) O^{(0,0)}(X_3) O^{(0,2\delta)}(X_4) \rangle = \mathcal{K}_4 \sum_{n=0}^{2\delta} g_n(U,V) I_{42}^n J_{42,31}^{2\delta-n}\,, \label{4ptanti} \ee with just \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber N_4^{seed}(\delta)=2\delta+1 \ee tensor structures. In the s-channel (12-34) operators $O^{l+n,l}$, with $-2\delta\leq n\leq 2\delta$, are exchanged. We denote by $W_{seed}(\delta)$ and $\overline{W}_{seed}(\delta)$ the single CPW associated to the exchange of the fields $O^{l+2\delta,l}$ and $O^{l,l+2\delta}$, respectively, in the four-point function (\ref{4ptanti}). They are parametrized in terms of $2\delta+1$ conformal blocks as follows (${\cal G}_0^{(0)}= \overline{{\cal G}}_0^{(0)}$): \begin{eqnarray}} \def\eea{\end{eqnarray} W_{seed}(\delta)& = & \mathcal{K}_4 \sum_{n=0}^{2\delta} {\cal G}_n^{(\delta)}(U,V) I_{42}^n J_{42,31}^{2\delta-n}\,,\nn \\ \overline W_{seed}(\delta)& = & \mathcal{K}_4 \sum_{n=0}^{2\delta} \overline{\cal G}_n^{(\delta)}(U,V) I_{42}^n J_{42,31}^{2\delta-n}\,. \label{W2deltaExp} \eea In contrast, the number of tensor structures in $\langle O^{(0,0)}(X_1) O^{(\delta,\delta)}(X_2) O^{(0,0)}(X_3) O^{(\delta,\delta)}(X_4) \rangle$ grows rapidly with $\delta$.
Denoting it by $\widetilde N_4(\delta)$ we have, using eq.(6.6) of ref.\cite{Elkhidir:2014woa}: \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber \widetilde N_4(\delta) = \frac 13 \Big(2 \delta^3+6\delta^2+7\delta+3\Big)\,. \ee It is important to stress that a significant simplification occurs in using seed CPW even when there is no need to reduce their number, i.e. $p=q=1$. For instance, consider the correlator of four traceless symmetric spin 2 tensors. The CPW $W_{O^{l+8,l}}(2,2,2,2)$ is unique, yet it contains 1107 conformal blocks (one for each tensor structure allowed in this correlator), to be contrasted with the 85 present in $W_{O^{l+8,l}}(0,4,0,4)$ and the 9 in $W_{seed}(4)$! We need to relate $\langle O^{(0,0)}(X_1) O^{(2\delta,0)}(X_2) O^{(l+2\delta,l)}(X_3)\rangle$ and $\langle O^{(0,0)}(X_1) O^{(\delta,\delta)}(X_2) O^{(l+2\delta,l)}(X_3)\rangle$ in order to be able to use the results of section \ref{sec:DBTSO} together with $W_{seed}(\delta)$. As explained at the end of Section \ref{sec:OB}, there is no combination of first-order operators which can do this job and one is forced to use the operator~(\ref{nablaS}): \begin{equation} \langle O^{(0,0)}_{\Delta_1}(X_1) O^{(\delta,\delta)}_{\Delta_2}(X_2) O_{\Delta}^{(l,\,l+2\delta)}(X)\rangle_1= \Big(\prod_{n=1}^\delta c_n\Big) (\bar{d}_{1} \nabla_{12} \widetilde D_{1})^\delta \langle O^{(0,0)}_{\Delta_1+\delta}(X_1) O^{(2\delta,0)}_{\Delta_2}(X_2) O^{(l,\,l+2\delta)}_{\Delta}(X)\rangle_1 \,, \label{Deltad1t} \end{equation} where\footnote{Notice that the scaling dimensions $\Delta_1$ and $\Delta_2$ in eq.(\ref{cn}) do not exactly correspond in general to those of the external operators, but should be identified with $\Delta_1^\prime$ and $\Delta_2^\prime$ in eq.(\ref{DBevenNew}). It might happen that the coefficient $c_n$ vanishes for some values of $\Delta_1$ and $\Delta_2$. As we already pointed out, there is some redundancy that allows us to choose a different set of operators.
Whenever this coefficient vanishes, we can choose a different operator, e.g. $\widetilde D_1\rightarrow D_1$.} \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber c_n^{-1} = 2(1-n+2\delta)\Big(2(n+1)+\delta+l+\Delta_1-\Delta_2+\Delta\Big)\,. \label{cn} \ee Equation (\ref{Deltad1t}) implies the following relation between the two CPW: \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber W_{O^{l+2\delta,l}}(0,\delta,0,\delta) = \Big(\prod_{n=1}^\delta c_n^{12} c_n^{34}\Big) \Big(\nabla_{43} d_{3} \widetilde D_{3}\Big)^\delta \Big(\nabla_{12} \bar{d}_{1} \widetilde D_{1}\Big)^\delta W_{seed}(\delta) \,, \label{WtoW} \ee where $c_n^{12}=c_n$ in eq.(\ref{cn}), $c_n^{34}$ is obtained from $c_n$ by replacing $1\rightarrow 3, 2\rightarrow 4$ and the scaling dimensions of the corresponding external operators are related as indicated in eq.(\ref{Deltad1t}). Summarizing, the whole highly non-trivial problem of computing $W_{O^{l+2\delta,l}}^{(p,q)}(l_1,l_2,l_3,l_4)$ has been reduced to the computation of the $2\times (2\delta+1)$ conformal blocks ${\cal G}_n^{(\delta)}(U,V)$ and $ \overline{\cal G}_n^{(\delta)}(U,V)$ entering eq.(\ref{W2deltaExp}). Once they are known, one can use eqs.(\ref{WtoW}) and (\ref{Wl1234}) to finally reconstruct $W_{O^{l+2\delta,l}}^{(p,q)}(l_1,l_2,l_3,l_4)$. \section{Examples} In this section we would like to elucidate various aspects of our construction. In subsection~\ref{subsec:4fer} we give an example in which we deconstruct a correlation function of four fermions. We leave the domain of traceless symmetric external operators to show the generality of our formalism. It might also have some relevance in phenomenological applications beyond the Standard Model \cite{Caracciolo:2014cxa}. In subsection~\ref{subsec:ConservedOperators} we consider the special cases of correlators with four conserved identical operators, like spin 1 currents and energy momentum tensors, whose relevance is obvious.
There we will just outline the main steps focusing on the implications of current conservation and permutation symmetry in our deconstruction process. \subsection{Four Fermions Correlator} \label{subsec:4fer} Our goal here is to deconstruct the CPW in the s-channel associated to the four fermion correlator \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber \langle \bar\psi^{\dot \alpha}(x_1)\psi_\beta (x_2)\chi_\gamma(x_3) \bar\chi^{\dot \delta}(x_4) \rangle \,. \label{4ferm4D} \ee For simplicity, we take $\bar \psi$ and $\bar \chi$ to be the conjugate fields of $\psi$ and $\chi$, respectively, so that we have only two different scaling dimensions, $\Delta_\psi$ and $\Delta_\chi$. However, parity invariance is not imposed in the underlying CFT. The correlator (\ref{4ferm4D}) admits six different tensor structures. An independent basis of tensor structures for the 6D uplift of eq. (\ref{4ferm4D}) can be found using the relation~(\ref{eq:rel8}). A possible choice is \begin{eqnarray}} \def\eea{\end{eqnarray} \label{eq:4-pf} && \!\!\!\!\! \langle \Psi(X_1,\,\bar S_1)\,\bar{\Psi}(X_2,\, S_2)\,\bar{\mathcal{X}}(X_3,\,S_3)\,\mathcal{X}(X_4,\,\bar S_4) \rangle=\frac{1}{X_{12}^{\Delta_\psi+\tfrac{1}{2}}X_{34}^{\Delta_\chi+\tfrac{1}{2}}} \bigg(g_1(U,V)I_{12}I_{43}+ \\ && \!\!\!\!\! g_2(U,V)I_{42}I_{13}+g_3(U,V)I_{12}J_{43,21}+ g_4(U,V)I_{42}J_{13,24}+g_5(U,V)I_{43}J_{12,34}+g_6(U,V)I_{13}J_{42,31}\bigg). \nn \eea For $l\geq 1$, four CPW $W_{O^{l,l}}^{(p,q)}$ ($p,q=1,2$) are associated to the exchange of traceless symmetric fields, and one for each mixed symmetry field, $W_{O^{l+2,l}}$ and $W_{O^{l,l+2}}$. Let us start with $W_{O^{l,l}}^{(p,q)}$. The traceless symmetric CPW are obtained as usual by relating the three point function of two fermions and one $O^{l,l}$ to that of two scalars and one $O^{l,l}$. This relation requires the use of the operator (\ref{nablaS}).
There are two tensor structures for $l\geq 1$: \begin{eqnarray}} \def\eea{\end{eqnarray} \label{FerTS} \langle \Psi(\bar S_1) \bar{\Psi}(S_2) O^{l,l} \rangle_1 & = & \mathcal{K} I_{12}J_{0,12}^l =I_{12} \langle \Phi^{\frac 12} \Phi^{\frac 12} O^{l,l} \rangle_1, \\ \langle \Psi(\bar S_1) \bar{\Psi}(S_2) O^{l,l} \rangle_2 & = & \mathcal{K} I_{10}I_{02}J_{0,12}^{l-1} =\frac{1}{16l(\Delta-1)} \nabla_{21} \Big( \widetilde D_2 \widetilde D_1 +\kappa I_{12} \Big) \langle \Phi^{\frac 12} \Phi^{\frac 12} O^{l,l} \rangle_1, \nn \eea where $\kappa=2\big(4\Delta-(\Delta+l)^2 \big)$, the superscript $n$ in $\Phi$ indicates the shift in the scaling dimensions of the field and the operator $O^{l,l}$ is taken at $X_0$. Plugging eq.(\ref{FerTS}) (and the analogous one for $\mathcal{X}$ and $\bar{\mathcal{X}}$) in eq.(\ref{shadow2}) gives the relation between CPW. In order to simplify the equations, we report below the CPW in the differential basis, the relation with the ordinary basis being easily determined from eq.(\ref{FerTS}): \begin{equation} \begin{aligned} W_{O^{l,l}}^{(1,1)} = & I_{12} I_{43} W^{\frac 12,\frac 12,\frac 12,\frac 12}_{seed}(0)\,, \\ W_{O^{l,l}}^{(1,2)} = & I_{12} \nabla_{34} \widetilde D_4 \widetilde D_3 W^{\frac 12,\frac 12,\frac 12,\frac 12}_{seed}(0) \,, \\ W_{O^{l,l}}^{(2,1)} = & I_{43} \nabla_{21} \widetilde D_2 \widetilde D_1 W^{\frac 12,\frac 12,\frac 12,\frac 12}_{seed}(0) \,, \\ W_{O^{l,l}}^{(2,2)} = & \nabla_{21} \widetilde D_2 \widetilde D_1 \nabla_{34} \widetilde D_4 \widetilde D_3 W^{\frac 12,\frac 12,\frac 12,\frac 12}_{seed}(0) \,, \end{aligned} \label{CPWferExp} \end{equation} where $\widetilde D_3$ and $\widetilde D_4$ are obtained from $\widetilde D_1$ and $\widetilde D_2$ in eq.(\ref{DDtilde}) by replacing $1\rightarrow 3$ and $2\rightarrow 4$ respectively. The superscripts indicate again the shift in the scaling dimensions of the external operators. 
As in ref.\cite{Costa:2011dw}, the CPW associated to the exchange of traceless symmetric fields are entirely determined in terms of the single known CPW of four scalars $W_{seed}(0)$. For illustrative purposes, we report here the explicit expression of $W_{O^{l,l}}^{(1,2)}$: \begin{multline} \mathcal{K}_4^{-1}W_{O^{l,l}}^{(1,2)} = 8I_{12}I_{43}\Bigg(U\big(V-U-2\big)\partial_U + U^2\big(V-U\big)\partial_U^2 + \big(V^2-(2+U)V+1\big)\partial_V +\\ V\big(V^2-(2+U)V+1\big)\partial_V^2 +2UV\big(V-U-1\big)\partial_U\partial_V\Bigg){\cal G}_0^{(0)} \\ + 4UI_{12}J_{43,21}\Bigg(U\partial_U+U^2\partial_U^2+\big(V-1\big)\partial_V+V\big(V-1\big)\partial_V^2 +2UV\partial_U\partial_V \Bigg){\cal G}_0^{(0)}, \label{W12Ol} \end{multline} where ${\cal G}_0^{(0)}$ denotes the known scalar conformal blocks \cite{Dolan:2000ut,Dolan:2003hv}. It is worth noting that the relations~(\ref{eq:rel1})-(\ref{eq:rel8}) have to be used to remove redundant structures and write the above result (\ref{W12Ol}) in the chosen basis~(\ref{eq:4-pf}). The analysis for the mixed symmetry CPW $W_{O^{l+2,l}}$ and $W_{O^{l,l+2}}$ is simpler. The three point function of two fermions and one $O^{l,l+2}$ field has a unique tensor structure, like the one of a scalar and a $(2,0)$ field $F$. One has \begin{equation} \begin{aligned} \langle \Psi(\bar S_1) \bar\Psi(S_2) O^{l+2,l} \rangle_1 & = \mathcal{K} I_{10}K_{1,20} J_{0,12}^l = \frac 14 \bar d_{2} \langle \Phi^{\frac12} F^{\frac12} O^{l+2,l} \rangle_1 \,, \\ \langle \Psi(\bar S_1) \bar\Psi(S_2) O^{l,l+2} \rangle_1 & = \mathcal{K} I_{02}\overline K_{2,10} J_{0,12}^l = \frac 12 \bar d_{2} \langle \Phi^{\frac12} F^{\frac12} O^{l,l+2} \rangle_1 \end{aligned} \label{CPWfer2Exp} \end{equation} and similarly for the conjugate $(0,2)$ field $\bar F$.
Using the above relation, modulo an irrelevant constant factor, we get \begin{equation} \begin{aligned} W_{O^{l+2,l}} = & \; \bar d_2 d_4 W_{seed}^{\frac12,\frac12,\frac12,\frac12}(1) \,, \\ W_{O^{l,l+2}} = &\; \bar d_2 d_4 \overline W_{seed}^{\frac12,\frac12,\frac12,\frac12}(1) \,, \end{aligned} \label{CPWfer4Exp} \end{equation} where $W_{seed}(1)$ and $\overline W_{seed}(1)$ are defined in eq.(\ref{W2deltaExp}). Explicitly, one gets \begin{equation} \begin{aligned} \frac{\sqrt{U}}{4}\mathcal{K}_4^{-1}W_{O^{l+2,l}} = & I_{12}I_{43} \Big({\cal G}_2^{(1)} +(V-U-1){\cal G}_1^{(1)} +4U{\cal G}_0^{(1)}\Big) -4U I_{42} I_{13} {\cal G}_1^{(1)} +U I_{12} J_{43,21} {\cal G}_1^{(1)} \\ & -U I_{42} J_{13,24} {\cal G}_2^{(1)}+U I_{43}J_{12,34} {\cal G}_1^{(1)}-4U I_{13}J_{42,31} {\cal G}_0^{(1)}\,. \end{aligned} \label{CPWfer5Exp} \end{equation} The same applies for $W_{O^{l,l+2}}$ with ${\cal G}_n^{(1)}\rightarrow \overline {\cal G}_n^{(1)}$. The expression (\ref{CPWfer5Exp}) shows clearly how the six conformal blocks entering $W_{O^{l,l+2}}$ are completely determined in terms of the three ${\cal G}_n^{(1)}$. \subsection{Conserved Operators} \label{subsec:ConservedOperators} In this subsection we outline, omitting some details, the deconstruction of four identical currents and four energy-momentum tensor correlators, which are among the most interesting and universal correlators to consider. In general, current conservation relates the coefficients $\lambda_s$ of the three-point function and reduces the number of independent tensor structures. 
Since CPW are determined in terms of products of two 3-point functions, the number of CPW $\widetilde W_{O}$ associated to external conserved operators is reduced with respect to the number of CPW for non-conserved operators $W_{O}$: \begin{equation} \sum_{p,q=1}^{N_3}\lambda^p_{12O}\lambda^q_{34\bar O} W_{O}^{(p,q)} \longrightarrow \sum_{\tilde p,\tilde q =1}^{\tilde N_3} \lambda^{\tilde p}_{12O}\lambda^{\tilde q} _{34\bar O} \widetilde W_{O}^{(\tilde p,\tilde q)}\,, \end{equation} where $\tilde N_3\leq N_3$ and \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber \widetilde W_{O}^{(\tilde p,\tilde q)}=\sum_{p,q=1}^{N_3} F^{\tilde p p}_{12O}F^{\tilde q q }_{34\bar O} W_{O}^{(p,q)} \,. \label{CPWtilda} \ee The coefficients $F^{\tilde p p}_{12O}$ and $F^{\tilde q q}_{34\bar O}$ depend in general on the scaling dimension $\Delta$ and spin $l$ of the exchanged operator $O$. They can be determined by applying the operator defined in eq.(\ref{ConservedD}) to 3-point functions. \subsubsection{Spin 1 Four-Point Functions} In any given channel, the exchanged operators are in the $(l,l)$, $(l+2,l)$, $(l,l+2)$, $(l+4,l)$ and $(l,l+4)$ representations. The number of 3-point function tensor structures of these operators with the two external vectors and the total number of four-point function structures are reported in table \ref{tableConservedCurrent}. Each CPW can be expanded in terms of the 70 tensor structures for a total of 4900 scalar conformal blocks as defined in eq.(\ref{WGen}). Using the differential basis, the $36\times 70=2520$ conformal blocks associated to the traceless symmetric CPW are determined in terms of the single known scalar CPW \cite{Costa:2011dw}. The $16\times 70=1120$ ones associated to ${\cal O}^{l+2,l}$ and ${\cal O}^{l,l+2}$ are all related to the two CPW $W_{seed}(1)$ and $\overline W_{seed}(1)$. Each of them is a function of 3 conformal blocks, see eq.(\ref{W2deltaExp}), for a total of 6 unknowns.
Finally, the $2\times 70=140$ conformal blocks associated to ${\cal O}^{l+4,l}$ and ${\cal O}^{l,l+4}$ are expressed in terms of the $5\times 2=10$ conformal blocks coming from the two CPW $W_{seed}(2)$ and $\overline W_{seed}(2)$. Let us now look more closely at the constraints coming from permutation symmetry and conservation. For $l\geq 2$, the $5_++1_-$ tensor structures of the three-point function $\langle V_1 V_2 O_{l,l}\rangle$, for distinct non-conserved vectors, read \begin{eqnarray}} \def\eea{\end{eqnarray} \langle V_1 V_2 O^{l,l}\rangle &=&\mathcal{K}_3\Big(\lambda_1 I_{23}I_{32}J_{1,23}J_{3,12}+\lambda_2I_{13}I_{31}J_{2,31}J_{3,12}+\lambda_3I_{12}I_{21}J_{3,12}^2 \\ && +\lambda_4 I_{13}I_{31}I_{23}I_{32}+\lambda_5J_{1,23}J_{2,31}J_{3,12}^2+\lambda_6( I_{21}I_{13}I_{32}+I_{12}I_{23}I_{31} )J_{3,12}\Big)J_{3,12}^{l-2} \,. \nn \eea Taking $V_1=V_2$ and applying the conservation condition to the external vectors gives a set of constraints for the OPE coefficients $\lambda_p$. For $\Delta\neq l+4$, we have\footnote{This is the result for generic non-conserved operators $O^{l,l}$.} \begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{|c|}{$O_{l,l}$}&\multicolumn{2}{|c|}{$O_{l+2,l}$}&\multicolumn{2}{|c|}{$O_{l+4,l}$}&\multicolumn{1}{||c|}{$N_4$}\\ $l=$&$2n$&$2n+1$&$2n$&$2n+1$&$2n$&$2n+1$&\multicolumn{1}{||c|}{}\\ \hline $N^{12}_O$& \multicolumn{2}{|c|}{$5_+ +1_-$}&\multicolumn{2}{|c|}{4}&\multicolumn{2}{|c|}{1}&\multicolumn{1}{||c|}{$43_++27_-$}\\ \hline $N^{1=2}_O$&$4_+$&$1_+ +1_-$&2&2&1&0&\multicolumn{1}{||c|}{$19_++3_-$}\\ \hline $N^{1=2}_O$&\multirow{2}{*}{$2_+$}&\multirow{2}{*}{$1_-$}&\multirow{2}{*}{1}&\multirow{2}{*}{1}&\multirow{2}{*}{1}&\multirow{2}{*}{0}&\multicolumn{1}{||c|}{\multirow{2}{*}{$7_+$}}\\ conserved&&&&&&&\multicolumn{1}{||c|}{}\\ \hline \end{tabular} \caption{Number of independent tensor structures in the 3-point function $\langle V_1 V_2 O^{l,\bar l}\rangle$ when min$(l,\bar l)\geq2-\delta$.
In the last column we report $N_4$ as computed using eqs.(\ref{n4Parity}) and (\ref{n4ParityPermutation}) for general four spin 1, identical four spin 1 and identical conserved currents respectively. Subscripts + and - refer to parity even and parity odd structures. For conjugate fields we have $N^{12}_{O(l,l+\delta)}=N^{12}_{O(l+\delta,l)}$.} \label{tableConservedCurrent} \end{table} \begin{equation}\begin{aligned} F_{12O}^{\tilde p p}(\Delta ,l=2n )=& \begin{pmatrix}1&1&c&a&0&0\\ -\frac12&-\frac12&-\frac12&b&-\frac18&0 \end{pmatrix}, \ \ \ \ F_{12O}^{\tilde p p}(\Delta ,l=2n+1 )=\begin{pmatrix}0&0&0&0&0&0\\0&0&0&0&0&1 \end{pmatrix} \,, \end{aligned} \label{FlambdaCon} \end{equation} with \begin{equation}} \def\ee{\end{equation}} \def\nn{\nonumber a=8\frac{\Delta(\Delta+l+9)- l(l+8)}{(\Delta-l-4)(\Delta+l)}\,, \ \ b=-4\frac{(\Delta-l-2)}{\Delta-l-4}\,, \ \ c=\frac{-\Delta+l+6}{\Delta+l} \,, \ee where $F_{12O}^{\tilde p p}$ are the coefficients entering eq.(\ref{CPWtilda}). The number of independent tensor structures is reduced from 6 to $2_+$ when $l$ is even and from 6 to $1_-$ when $l$ is odd, as indicated in table \ref{tableConservedCurrent}. When $\Delta = l+4$, eq.(\ref{FlambdaCon}) is modified, but the number of constraints remains the same. The 3-point function structures obtained, after conservation and permutation symmetry are imposed, differ between even and odd $l$. Therefore, we need to separately consider the even and odd $l$ contributions when computing $N_4$ using eq.(\ref{n4ParityPermutation}). For four identical conserved currents, $N_4=7_+$, as indicated in table \ref{tableConservedCurrent}, in agreement with the result found in ref.\cite{Dymarsky:2013wla}. \subsubsection{Spin 2 Four-Point Functions} The exchanged operators can be in the representations $(l+2\delta,l)$ and $(l,l+2\delta)$ where $\delta= 0,1,...,4$.
The number of tensor structures in the three-point functions of these operators with two external spin 2 tensors is shown in table \ref{tableConservedTensor}. We do not list here the number of CPW and conformal blocks for each representation, which can easily be derived from table \ref{tableConservedTensor}. In the most general case of four distinct non-conserved operators, with no parity imposed, one should compute $1107^2\sim 10^6$ conformal blocks, which are reduced to 49 using the differential basis, $W_{seed}(\delta)$ and $\overline W_{seed}(\delta)$. \begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{|c|}{$O_{l,l}$}&\multicolumn{2}{|c|}{$O_{l+2,l}$}&\multicolumn{2}{|c|}{$O_{l+4,l}$}&\multicolumn{2}{|c|}{$O_{l+6,l}$}&\multicolumn{2}{|c|}{$O_{l+8,l}$}&\multicolumn{1}{||c|}{$N_4$}\\ $l=$&$2n$&\!$2n\!+\!1$\!&$2n$&\!$2n\!+\!1$\!&$2n$&\!$2n\!+\!1$\!&$2n$&\!$2n\!+\!1$\!&$2n$&\!$2n\!+\!1$\!&\multicolumn{1}{||c|}{}\\ \hline $N^{12}_{O}$& \multicolumn{2}{|c|}{$14_+ \!+\!5_-$}&\multicolumn{2}{|c|}{16}&\multicolumn{2}{|c|}{10}&\multicolumn{2}{|c|}{4}&\multicolumn{2}{|c|}{1}&\multicolumn{1}{||c|}{$594_+\!+\!513_-$}\\ \hline $N^{1=2}_O$&$10_+\!+\! 1_-$&$4_+\!+\! 4_-$&8&8&6&4&2&2&1&0&\multicolumn{1}{||c|}{$186_+\!+\!105_-$}\\ \hline $N^{1=2}_O$&\multirow{2}{*}{$3_+$}&\multirow{2}{*}{$2_-$}&\multirow{2}{*}{2}&\multirow{2}{*}{2}&\multirow{2}{*}{2}&\multirow{2}{*}{1}&\multirow{2}{*}{1}&\multirow{2}{*}{1}&\multirow{2}{*}{1}&\multirow{2}{*}{0}&\multicolumn{1}{||c|}{\multirow{2}{*}{$22_+\!+\!3_-$}}\\ cons.&&&&&&&&&&&\multicolumn{1}{||c|}{}\\ \hline \end{tabular} \caption{Number of independent tensor structures in the 3-point function $\langle T_1 T_2 O^{l,\bar l}\rangle$ when $\min(l,\bar l)\geq 4-\delta$. In the last column we report $N_4$ as computed using eqs.(\ref{n4Parity}) and (\ref{n4ParityPermutation}) for four generic spin 2 operators, four identical spin 2 operators and four energy momentum tensors, respectively.
Subscripts $+$ and $-$ refer to parity-even and parity-odd structures. For conjugate fields we have $N^{12}_{O(l,l+\delta)}=N^{12}_{O(l+\delta,l)}$.} \label{tableConservedTensor} \end{table} The constraints coming from permutation symmetry and conservation are found as in the spin 1 case, but are more involved and will not be reported. For four identical spin 2 tensors, namely for four energy momentum tensors, using eq.(\ref{n4ParityPermutation}) one immediately gets $N_4=22_++3_{-}$, as indicated in table \ref{tableConservedTensor}. The number of parity-even structures agrees with the result found in ref.\cite{Dymarsky:2013wla}, while to the best of our knowledge the 3 parity-odd structures we find are a new result. Notice that even though the number of tensor structures is significantly reduced when conservation is imposed, they are still given by a linear combination of all the tensor structures, as indicated in eq.(\ref{CPWtilda}). It might be interesting to see if there exists a formalism that automatically gives a basis of independent tensor structures for conserved operators, bypassing eq.(\ref{CPWtilda}) and the use of the much larger basis of allowed structures. \section{Conclusions} We have introduced in this paper a set of differential operators, eqs.(\ref{DDtilde}), (\ref{D12}), (\ref{Dbar12}) and (\ref{nablaS}), that enables us to relate different three-point functions in 4D CFTs. The 6D embedding formalism in twistor space with an index-free notation, as introduced in ref.\cite{SimmonsDuffin:2012uy}, and the recent classification of three-point functions in 4D CFTs \cite{Elkhidir:2014woa} have been crucial to perform this task. In particular, three-point tensor correlators with different tensor structures can always be related to a three-point function with a single tensor structure.
Particular attention has been devoted to the three-point functions of two traceless symmetric and one mixed tensor operator, for which explicit independent bases have been provided, eqs.(\ref{deltaGenExpSTlowl3}) and (\ref{DBevenNew}). These results allow us to deconstruct four-point tensor correlators, since we can express the CPW in terms of a few CPW seeds. We argue that the simplest CPW seeds are those associated to the four-point functions of two scalars, one ${\cal O}^{2\delta ,0}$ and one ${\cal O}^{0, 2\delta}$ field, which have only $2\delta+1$ independent tensor structures. We are now one step closer to bootstrapping tensor correlators in 4D CFTs. There is of course one important task to be accomplished: the computation of the seed CPW. One possibility is to use the shadow formalism as developed in ref.\cite{SimmonsDuffin:2012uy}, or to apply the Casimir operator to the above four-point function seeds, hoping that the resulting second-order set of partial differential equations for the conformal blocks is tractable. In order to bootstrap general tensor correlators, it is also necessary to have a full classification of four-point functions in terms of $SU(2,2)$ invariants. This is a non-trivial task, due to the large number of relations between the four-point function $SU(2,2)$ invariants. A small subset of them has been reported in appendix A, but many more should be considered for a full classification. We hope to address these problems in future works. We believe that universal 4D tensor correlators, such as four energy momentum tensors, might no longer be a dream and are appearing on the horizon! \section*{Acknowledgments} We thank Jo\~ao Penedones for useful discussions. The work of M.S. was supported by the ERC Advanced Grant no. 267985 DaMESyFla.
\section{Introduction} To understand the origin of cosmic rays (CRs), it is important to distinguish between the lower energy CRs which can be contained within the magnetic field of our Galaxy and thus have energies of up to about $3 \times 10^{18}$ eV for heavy nuclei, and those that are even more energetic. The bulk of the CRs below that energy can be explained by supernova explosions, while the extremely energetic ones probably originate from either some class of Active Galactic Nuclei \citep{ginzburgsyrovatskii64,biermannstrittmatter87} or some extreme type of stellar activity such as gamma-ray bursts~\citep{waxman95}. Indeed, the spectrum of CRs shows a kink near $3 \times 10^{18}$ eV, matching the expectation that their origin changes around this energy threshold. Stellar explosions can account for the flux, spectrum, particle energy and chemical composition of the less energetic CRs, considering that all very massive stars explode into their pre-existing winds~\citep[e.g.][]{prantzos84,stanevetal93,meyer97}. Further quantitative confirmation of this picture has now emerged from detailed observations of cosmic ray electrons and positrons, as well as the WMAP haze~\citep{biermannetal09b,biermannetal10}. The supernova origin of Galactic cosmic rays may lead us to an understanding of the seed particle population~\citep{biermannetal09a} on which active galactic nuclei energizing radio galaxies can operate their acceleration processes. The origin of ultra high energy cosmic rays (UHECRs) is still an unresolved issue, but a few clues have begun to emerge. Although their arrival directions are nearly isotropic, a general correlation with the distribution of matter has been noted by the Auger observatory \citep{stanevetal95,auger08a,auger08b}, though this is disputed by the HiRes observatory~\citep{hires08,hires09}. In particular, there may be excess events with arrival directions close to the nearby radio galaxy Centaurus A~\citep{auger08a,auger09}.
There are contradictory claims from experiments as to whether the UHECR events are heavy nuclei \citep[Auger --][]{abraham09} or purely protons \citep[HiRes --][]{abbasietal09}. Both possibilities need to be explored. In a picture where UHECR energies are attained by a single kick up from a seed population~\citep{gallantachterberg99} through the action of a relativistic jet, these events can indeed involve heavy nuclei~\citep{biermannetal09a}. In such a scheme the seed particles are the CRs near the spectral knee~\citep{stanevetal93} and the relativistic shock is very likely to arise from a jet carving out a new channel after being launched from a primary central black hole that has been reoriented following the merger of the nuclear black holes of two merging galaxies~\citep{gergelybiermann09}. In this scenario the UHECR particles are a mix of heavy nuclei, and the spectrum in~\citet{stanevetal93} actually gives an adequate fit to the Auger data~\citep{biermannetal09a}. See Fig.\ 1. The sky distribution is easily isotropized by the intergalactic magnetic fields~\citep{dasetal08}; for the case of heavy nuclei one is even confronted with the possibility of excessive scattering~\citep{biermannetal09a}. This picture also allows the incorporation of the Poynting flux limit \citep{lovelace76}: the particles to be accelerated must remain confined within the jet diameter. This condition translates into a lower limit for the jet power, allowing most UHECR particles to originate from the jet interacting with lower energy CRs produced in the starburst in the central region of Cen A. Therefore we explore a scenario based on the observed head-on encounter of the Cen A jet with magnetized interstellar clouds~\citep{gksaripalli84,kraftetal09,gkwiita10} from which UHECR acceleration ensues.
A distinctly appealing aspect of this proposal is that the postulated jet-cloud interaction is actually observed within the northern lobe of Cen A, whereby the jet is seen to be disturbed, bent westward and possibly disrupted temporarily~\citep{morgantietal99,oosterloomorganti05}. Since any supersonic flow reacts to a disturbance with shock formation, this in turn could cause particle acceleration. Note that the Fermi/LAT error circle for the peak of the gamma-ray emission~\citep{fermiLAT10} encompasses the jet-cloud interaction region, at the base of the northern middle lobe of Cen A, about 15 kpc from the nucleus. \section{Acceleration in Cen A from a jet interacting with gaseous shells} The key point is that the interaction of the northern jet with a gaseous shell in the northern middle lobe has clearly been seen~\citep{oosterloomorganti05,kraftetal09} and massive star formation is revealed at the location of the interaction by the GALEX UV image~\citep{kraftetal09}. Although other mechanisms can bend and disrupt radio jets, only a jet-shell interaction can explain the variety of data (radio, HI, UV, X-rays) for Cen A~\citep{kraftetal09}. It has also been argued that the oft-debated peculiar morphology of the northern middle radio lobe can be readily understood in terms of the same jet-shell collision~\citep{gkwiita10}. An important aspect of the basic acceleration physics to be stressed is that when particles are accelerated in a shock propagating parallel to the magnetic field lines, the maximum particle energy $E_{max}$ is given by~\citep{hillas84,ginzburgsyrovatskii64,stanev04} $E_{max} \; = \; e \, Z \, \beta_{sh} \, R_{B} \, B$, where $e$ is the elementary electric charge, $Z$ is the numerical charge of the particle, $\beta_{sh}$ is the shock speed in units of the speed of light, the available length scale is $R_{B}$, and the strength of the magnetic field is $B$. 
However, when the shock propagation is highly oblique, the corresponding limit~\citep{jokipii87,meli06} becomes \begin{equation} E_{max} \; = \; e \, Z \, R_{B} \, B, \end{equation} which is independent of the shock velocity. Invoking relativistic shocks obviously adds an additional factor of $\gamma_{sh}$, the shock's Lorentz factor~\citep{gallantachterberg99}. Losses will curtail this maximum attainable energy~\citep{hillas84,biermannstrittmatter87}. We now focus on the particle acceleration due to the observed interaction of the jet with shells of fairly dense gas. Cen A has long been known to have a number of stellar shells, located in the vicinity of both the Northern and Southern lobes~\citep{malinetal83}. Some of these shells were later found to contain large amounts of dense atomic~\citep{schiminovichetal94} and even molecular~\citep{charmandarisetal00} gas ($\sim 7.5 \times 10^8~$M$_{\odot}$). These shells are generally thought to have originated from the merger of a massive elliptical with a disk galaxy~\citep{quinn84}, very probably the same merger that gave rise to the peculiar overall appearance of this large elliptical galaxy marked by a striking dust lane. Radio maps reveal that the northern jet has encountered such shells at distances of 3.5 and 15 kpc from the core, and flared up each time to the same side, thereby forming the northern-inner and the northern-middle lobes~\citep{gksaripalli84,gkwiita10}. Simulations of such collisions indicate the formation of strong shocks where the jets impinge upon gas clouds~\citep[e.g.][]{choietal07}. We must ask whether the maximum observed particle energies, of order $10^{21}$ eV, are actually attainable in such interactions. Accelerating particles to such enormous energies requires that the Larmor motion of a particle fit within the gaseous cloud, both before and after the shock that forms inside the cloud by interaction with the impinging relativistic jet.
This leads to the condition $E_{max} \lesssim e \, Z \, B_{cl} \, R_{cl}$, also called the Hillas limit~\citep{hillas84}, which is a general requirement to produce UHECR via shocks. Adopting the very reasonable parameter values of 3 kpc for $R_{cl}$, the approximate observed size of the HI shell found in the Northern Middle Lobe of Cen A~\citep{oosterloomorganti05,gkwiita10}, and $3 \times 10^{-6}$ Gauss for the magnetic field, it follows that the energy must remain below $Z \times 10^{19}$ eV. Since particles are observed up to about $3 \times 10^{20}$ eV~\citep{birdetal94}, this implies that heavy nuclei, such as Fe, are much preferred for this mechanism to suffice; however, if a stronger magnetic field were present, this would ease the requirement on the abundances and allow for CRs to be accelerated to even higher energies. The magnetic field in the shell is not well constrained, but the required value is modest. The Hillas limit condition~\citep{hillas84} mentioned above can be expressed another way~\citep{lovelace76}. Taking the energy needed for particle acceleration to derive from a jet, we can connect the time-averaged energy flow along the jet with the condition that the accelerated particles are contained within the jet diameter, \begin{equation} L_{jet} \; \gtrsim \; 10^{47} \, {\rm erg~s}^{-1} \, f_{int} \,{\left( \frac{E_{max}}{Z \times 10^{21} {\rm eV}} \right)}^{2}, \end{equation} where $f_{int}$ is an intermittency factor describing the temporal fluctuations of the energy outflow. Equality in this critical expression would imply that the energy flow in the jet is an entirely electromagnetic Poynting flux, an unrealistic extreme scenario. For Cen A we require both an intermittency factor $< 1$, and presumably also heavy nuclei, e.g., $Z \simeq 26$.
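Both the Hillas estimate above and the intermittency factor that follows from eq.~(2) can be reproduced by short order-of-magnitude evaluations. The sketch below (ours, using standard cgs constants and the parameter values quoted in the text) is not part of the original argument:

```python
# Two order-of-magnitude checks in Gaussian (cgs) units; all input
# values are those quoted in the text.
e_esu = 4.803e-10        # elementary charge [esu]
erg_per_eV = 1.602e-12   # eV -> erg conversion
kpc_cm = 3.086e21        # 1 kpc in cm

# (i) Hillas limit E_max <~ Z e B_cl R_cl for the HI shell.
B_cl = 3e-6              # shell magnetic field [Gauss]
R_cl = 3.0 * kpc_cm      # shell size, ~3 kpc [cm]
E_per_Z = e_esu * B_cl * R_cl / erg_per_eV
print(f"E_max/Z ~ {E_per_Z:.1e} eV")      # ~1e19 eV, as quoted

# (ii) Intermittency factor from eq. (2) for Fe (Z = 26) at the
# highest observed energy 3e20 eV, with L_jet ~ 1e43 erg/s.
f_int = 1e43 / (1e47 * (3e20 / (26 * 1e21))**2)
print(f"f_int ~ {f_int:.2f}")             # ~0.75
```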
We find $f_{int} \lesssim 0.75$ in order to match the kinetic jet power, which has been argued to be $L_{jet} \simeq 10^{43}$ erg s$^{-1}$ through several different approaches \citep[][and references therein]{whysongantonucci03,fermiLAT10,kraftetal09}. The recent HESS observations of Cen A \citep{hess09} detected a very high energy ($> 250$ GeV) photon luminosity of only $\simeq 2.6 \times 10^{39}$ erg s$^{-1}$, but the entire photon luminosity in gamma-rays ($>100$ keV) is $\sim 2 \times 10^{42}$ erg s$^{-1}$, and thus also consistent with $L_{jet} \simeq 10^{43}$ erg s$^{-1}$. We next examine whether the inferred luminosity of UHECRs is indeed attainable. Assuming the observed spectrum of the jet corresponds to a CR particle spectrum of about $E^{-2.2}$, the observed power in UHECR particles must be multiplied by a factor of about 200 when integrating over the full power-law spectrum. The data then require a luminosity of about $10^{42}$ erg s$^{-1}$, still below the inferred jet power of $10^{43}$ erg s$^{-1}$ for Cen A~\citep{whysongantonucci03, fermiLAT10}. Thus, we could allow for a duty cycle of 0.1, and still have adequate jet power. So the jet's interactions with a dense cloud are capable of powering the observed UHECRs. Another way of asking the same question is, can a jet actually catch a sufficient number of particles from the knee region with energies near PeV and accelerate them to the ankle region near EeV to ZeV? Assuming that the energy density of CRs in the starburst region is about 100 times what we have in our Galaxy, the particle density near and above $10^{15}$ eV is about $10^{-17}$ per cc. If through the non-steadiness of the jet these CRs are caught at the same rate by a kpc-scale jet having an opening angle of, say, $5^{\circ}$, the cross-section of $\sim 10^{41.5}$ cm$^2$ implies a rate of $10^{35}$ particles accelerated per second.
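The capture-rate estimate just given, and the energy turnover it implies, can be reproduced with a few lines (a sketch of our own; the density and cross-section are the values quoted in the text):

```python
# Rate at which a kpc-scale jet sweeps up knee-region CRs, and the
# implied energy turnover; all input values are those quoted in the text.
c_cm_s = 3e10             # speed of light [cm/s]
erg_per_eV = 1.602e-12

n_cr = 1e-17              # CR density near/above 10^15 eV [cm^-3]
area = 10**41.5           # jet cross-section [cm^2]

rate = n_cr * c_cm_s * area          # particles caught per second
print(f"rate ~ {rate:.1e} /s")       # ~1e35 per second

# Energy turnover if these particles reach energies above ~10^18.5 eV.
turnover = rate * 10**18.5 * erg_per_eV
print(f"L ~ {turnover:.1e} erg/s")   # of order 1e42 erg/s
```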
Pushing them to UHECR energies gives an energy turnover of order $10^{42}$ erg s$^{-1}$ just for the energies above $10^{18.5}$ eV, again quite sufficient. Third, we need to check whether enough time is available for the particles to be accelerated. A jet encounter with such a large cloud would last for at least $10^{4}$ yrs~\citep{choietal07}. A shock in either the external or the internal medium would take some small multiple of the Larmor time scale at the maximum energy, a few times $10^{4.3}$ yrs, to complete the acceleration process. The two relevant time scales, for transit and acceleration, seem consistent within the scope of our broad estimates. Lastly, we need to check whether the time scales are long enough so that the time window for possible detection of the UHECR source is not too brief. The time scales for particle acceleration and the jet-cloud encounter are somewhere between $10^{4}$ and $10^{5}$ yr. The times for the jet to transit a shell and then to move on to the next shell appear to be in a ratio of about 1 to 10. Therefore, a duty cycle, $f_{int}$, of about 0.1, which is easily allowed for by the above calculation, is actually necessary to maintain a quasi-continuous output of accelerated particles. \section{Consequences of the jet/cloud acceleration scenario} Having shown that the basic model is viable, we now consider some of its consequences. First we note that the jet may still be mildly precessing after the episode of the merger of black holes~\citep{gergelybiermann09}, in the aftermath of the merger of the elliptical and spiral galaxies comprising Cen A~\citep{fisrael98}. Also, the gaseous shell may have its own motion, likewise due to the preceding merger of the two galaxies. This would naturally explain the observed multiple bendings and flarings of the northern jet in Cen A~\citep{gksaripalli84,gketal03,gkwiita10}.
Both effects would continuously expose fresh material to the action of the jet, but are not an essential requirement for our model. Second, the transport and scattering of the particles along the way might smooth out any variability even if Cen A were the only significant source of UHECRs in our part of the universe. Such variability might explain the inconsistencies between Auger and HiRes results~\citep{abraham09,hires09}. The magnetic field at the site of origin is locally enhanced by the Lorentz factor of the shock, possibly between 10 and 50 \citep[e.g.,][]{biermannetal09a}. That could imply the shortest possible variability time $\tau_{var} \simeq 100 \, \tau_{var, 2}$ yrs, taking a high Lorentz factor of 50. The scattering near the Earth needed to attain near isotropy in arrival directions requires a relatively strong magnetic field within the distance equal to $c \tau_{var}$. So the containment of Fe particles of up to $3 \times 10^{20}$ eV would imply an energy content near Earth of $E_{B, var} \gtrsim 6 \times 10^{51} \, \tau_{var, 2}^{+1} \, {\rm erg}$. Interestingly, this total energy approaches the energy of a hypernova ($10^{52}$ erg). However, there is currently no evidence for such a region surrounding the Sun. Finally, we have to follow through with the deduction from the Poynting flux limit \citep{lovelace76}, that the highest energy events can only be heavy nuclei if they come from Cen A. This limit requires that all particles caught by a shock in the jet have $E/Z$ less than or equal to that of Fe at $10^{20.5}$ eV, the highest energy event yet seen; let us assume initially that this one event at $10^{20.5}$ eV is a factor of 3 below the real limit imposed by the acceleration site, the shock in the jet interaction region. It follows that He above $10^{19.9}$ eV and CNO above $10^{20.1}$ eV are ruled out, but near $10^{19.7}$ eV both are possible.
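The containment estimate $E_{B,var}\gtrsim 6\times 10^{51}\,\tau_{var,2}$ erg quoted above can be reproduced as follows (a sketch; we assume, as our reading of the argument, a field just strong enough to confine Fe at $3\times 10^{20}$ eV within a region of size $c\tau_{var}$, and a volume $(c\tau_{var})^3$):

```python
import math

# Magnetic energy needed near Earth to isotropize Fe at 3e20 eV within
# a region of size c*tau_var.  The volume (c*tau)^3 is our assumption
# for reproducing the ~6e51 erg estimate.
e_esu = 4.803e-10
erg_per_eV = 1.602e-12
c_cm_s = 3e10
yr_s = 3.156e7

Z, E_eV = 26, 3e20               # Fe at the highest observed energy
tau = 100 * yr_s                 # tau_var ~ 100 yr
size = c_cm_s * tau              # region size c*tau [cm]

B = (E_eV * erg_per_eV) / (Z * e_esu * size)   # Larmor radius <= size
E_B = B**2 / (8 * math.pi) * size**3           # field energy in region
print(f"B ~ {B:.1e} G, E_B ~ {E_B:.1e} erg")   # ~6e51 erg
```

Since $B\propto\tau_{var}^{-1}$ while the volume scales as $\tau_{var}^{3}$, the energy indeed grows linearly with $\tau_{var}$, as in the expression above.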
We use the prescriptions of \citet{allard08} to define a photo-disintegration distance $\Lambda_{dis}$ for any nucleus and energy. Averaging over some wiggles in the curves that cover both FIR and microwave backgrounds from the very early universe, we find that over the relevant energy and nucleus charge range an adequate approximation is $\Lambda_{dis} \, = \, 10^{1.6} \, {\rm Mpc} \, (Z/Z_{Fe})/(E/10^{19.7} \, {\rm eV})^{2.6}$, which we use here to guide us. There are two extreme scattering limits. In one limit, the isotropization of the events from Cen A is done in the intervening intergalactic medium (IGM). Cosmological MHD simulations by \citet{Ryu08} imply a Kolmogorov approximation; then $\Lambda_{trav} \, = \, 10^{1.6} \, {\rm Mpc} \, [(Z/Z_{Fe})/(E/10^{19.7} \, {\rm eV})]^{1/3}$. However, this already leads to extreme losses of the heavy nuclei between Cen A and us. So we consider the other limit, in which the UHECRs travel essentially straight from Cen A to us, and are isotropized in the magnetic wind of our Galaxy \citep{Everett08}. Modeled values of the wind's magnetic field strength ($\sim 8~\mu$G) and radial scale ($\sim 3$ kpc) allow Fe, as well as all elements down to about Oxygen, to be scattered into isotropy; however, there is less effect on lower $Z$ elements. No other approach gave a reasonable fit to the data. The losses due to the path traversed during the scattering are small. A fit with this approach is shown in Fig.\ 1. One could use other magnetic wind model numbers, closer to a Parker-type wind~\citep{Parker58}, but the essential results do not change. Now we must ask, how can this be compatible with IGM models \citep{Ryu08,dasetal08,choryu09}? Given the overall magnetic energy content in the IGM, scattering can be reduced if much of the overall magnetic energy is pushed into thin sheets \citep{biermannetal09c} and such substructure plausibly arises from radio galaxies and galactic winds. 
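The two length scales defined above can be compared directly. This sketch (our illustration of why the Kolmogorov-scattering limit leads to extreme losses for heavy nuclei) evaluates both approximations for Fe:

```python
# Photo-disintegration length and Kolmogorov travel length, from the
# two approximations quoted in the text (lengths in Mpc, charge Z
# relative to iron, energies E in eV).
Z_FE = 26.0

def lam_dis(Z, E):
    """Approximate photo-disintegration distance [Mpc]."""
    return 10**1.6 * (Z / Z_FE) / (E / 10**19.7)**2.6

def lam_trav(Z, E):
    """Diffusive travel distance in a Kolmogorov IGM field [Mpc]."""
    return 10**1.6 * ((Z / Z_FE) / (E / 10**19.7))**(1.0 / 3.0)

for E in (10**19.7, 10**20.0, 10**20.5):
    print(f"E = {E:.1e} eV: "
          f"dis = {lam_dis(Z_FE, E):5.1f} Mpc, "
          f"trav = {lam_trav(Z_FE, E):5.1f} Mpc")
# Above 10^19.7 eV the diffusive path quickly exceeds the
# disintegration length, so Fe diffusing from Cen A is destroyed.
```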
A second question is whether the magnetic field could also produce a systematic shift on the sky for UHECRs, in addition to scattering and isotropizing them. Indeed, any Galactic magnetic field~\citep{beck96}, in the disk or in the foot region of a Galactic wind~\citep{stanev97,Everett08} would also produce a systematic shift relative to the central position of Cen A on the sky. Since Cen A is not far from the sensitivity edge of the Auger array in the sky, it is quite possible that there is a shift for all events, especially at slightly lower energies. The models of ~\citet{Zirakashvili96} show that angular momentum conservation and transport quickly generate a magnetic field component parallel to the galactic disk, which would shift particle orbits in a direction perpendicular to the disk, and possibly away from the center of symmetry. A testable prediction of this scenario then is that a solid angle on the sky containing half the UHECR events towards Cen A should show the signature of lighter nuclei, hence larger fluctuations, compared to the events seen from the remaining part of the sky. Since the main scattering also has a systematic component, the center of this anisotropy may be shifted with respect to Cen A, so that the part of the sky with the largest fluctuations in the shower properties may be offset by up to a few tens of degrees from Cen A for $Z > 1$. This effect might be strong enough so that in some parts of the sky lighter elements might predominate over heavies and thus reconcile results from the Auger and HiRes experiments~\citep{abbasietal09,abraham09}. This could soon be checked with the growing data on UHECRs. If such a test were positive, it would unequivocally and simultaneously show that Cen A is the best source candidate, that scattering depends on the energy/charge ratio, and that the most energetic events are heavy nuclei. 
PLB acknowledges discussions with J.\ Becker, L.\ Caramete, S.\ Das, T.\ Gaisser, L.\ Gergely, H.\ Falcke, S.\ Jiraskova, H.\ Kang, K.-H.\ Kampert, R.\ Lovelace, A.\ n, M.\ Romanova, D.\ Ryu, T.\ Stanev, and his Auger Collaborators, especially H.\ Glass. PLB also acknowledges the award of a Sarojini Damodaran Fellowship by the Tata Institute of Fundamental Research. VdS is supported by FAPESP (2008/04259-0) and CNPq.\\
\section{Introduction} Let $k\geq 2$ be fixed. Erd\H{o}s, Faudree, Rousseau and Schelp \cite{erdos2} observed the following fact. \begin{fact}\label{fact1} Every graph on $n\geq k-1$ vertices with at least $$(k-1)(n-k+2)+{k-2\choose 2}$$ edges contains a subgraph of minimum degree at least $k$. \end{fact} This fact can be proved very easily by induction: For $n=k-1$ it is vacuously true, because there are no graphs on $k-1$ vertices with the given number of edges. For any $n\geq k$, given a graph with at least $(k-1)(n-k+2)+{k-2\choose 2}$ edges that does not have minimum degree at least $k$, we can delete a vertex of degree at most $k-1$. Then we obtain a graph with $n-1$ vertices and at least $(k-1)((n-1)-k+2)+{k-2\choose 2}$ edges and we can apply the induction assumption. Erd\H{o}s, Faudree, Rousseau and Schelp \cite{erdos2} also observed that the bound given in Fact \ref{fact1} is sharp and that for each $n\geq k+1$ there exist graphs on $n$ vertices with $(k-1)(n-k+2)+{k-2\choose 2}$ edges, that do not have any subgraphs of minimum degree at least $k$ on fewer than $n$ vertices (an example of such a graph is the generalized wheel formed by a copy of $K_{k-2}$ and a copy of $C_{n-k+2}$ with all edges in between). However, they conjectured in \cite{erdos2} that having just one additional edge implies the existence of a significantly smaller subgraph of minimum degree at least $k$: \begin{conj}\label{conject} For every $k\geq 2$ there exists $\varepsilon_k>0$ such that each graph on $n\geq k-1$ vertices with $$(k-1)(n-k+2)+{k-2\choose 2}+1$$ edges contains a subgraph on at most $(1-\varepsilon_k)n$ vertices and with minimum degree at least $k$. \end{conj} According to \cite{erdos2}, originally this was a conjecture of Erd\H{o}s for $k=3$. He also included the conjecture for $k=3$ in a list of his favourite problems in graph theory \cite[p.~13]{erdos1}.
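The inductive proof of Fact \ref{fact1} is constructive: repeatedly deleting a vertex of degree at most $k-1$ either empties the graph or leaves the maximal subgraph of minimum degree at least $k$. A minimal sketch of this peeling procedure (our illustration; the function name and test graph are not from the paper):

```python
def min_degree_k_subgraph(edges, k):
    """Peel vertices of degree < k; return the vertex set of the
    maximal subgraph of minimum degree >= k, or None if it is empty."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    queue = [v for v in adj if len(adj[v]) < k]
    while queue:
        v = queue.pop()
        for u in adj.pop(v, set()):       # delete v from the graph
            if u in adj:
                adj[u].discard(v)
                if len(adj[u]) == k - 1:  # u just dropped below k
                    queue.append(u)
    return set(adj) or None

# A 4-clique has minimum degree 3, so nothing is peeled for k = 3.
print(min_degree_k_subgraph([(0,1),(0,2),(0,3),(1,2),(1,3),(2,3)], 3))
# → {0, 1, 2, 3}
```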
Erd\H{o}s, Faudree, Rousseau and Schelp \cite{erdos2} made progress towards Conjecture \ref{conject} and proved that there is a subgraph with minimum degree at least $k$ with at most $n-\lfloor\sqrt{n}/\sqrt{6k^{3}}\rfloor$ vertices. Mousset, Noever and \v{S}kori\'{c} \cite{zurich} improved this to $n-n/(8(k+1)^5\log_2 n)$. The goal of this paper is to prove Conjecture \ref{conject}. More precisely, we prove the following theorem. \begin{theo}\label{thm} Let $k\geq 2$ and let $1\leq t\leq \frac{(k-2)(k+1)}{2}-1$ be an integer. Then every graph on $n\geq k-1$ vertices with at least $(k-1)n-t$ edges contains a subgraph on at most $$\left(1-\frac{1}{\max(10^{4}k^2, 100kt)}\right)n$$ vertices and with minimum degree at least $k$. \end{theo} For $t=\frac{(k-2)(k+1)}{2}-1$ we have $$(k-1)n-t=(k-1)n-\frac{(k-2)(k+1)}{2}+1=(k-1)(n-k+2)+{k-2\choose 2}+1$$ and so Theorem \ref{thm} implies Conjecture \ref{conject} with $$\varepsilon_k=\frac{1}{\max\left(10^{4}k^2, 100k\left(\frac{(k-2)(k+1)}{2}-1\right)\right)}>\frac{1}{10^{4}k^3}.$$ On the other hand, for example for $t=1$ we obtain the following statement: Every graph on $n\geq k-1$ vertices with at least $(k-1)n-1$ edges contains a subgraph on at most $$\left(1-\frac{1}{10^{4}k^2}\right)n$$ vertices and with minimum degree at least $k$. Thus, the presence of one additional edge compared to the number in Fact \ref{fact1} implies the existence of a subgraph with minimum degree at least $k$ on $(1-\varepsilon)n$ vertices with $\varepsilon=\Omega(k^{-3})$, while the presence of $\frac{(k-2)(k+1)}{2}$ additional edges (which is a fixed number with respect to $n$) already gives $\varepsilon=\Omega(k^{-2})$. The basic approach to proving Theorem \ref{thm} is to assign colours to some vertices, such that for every colour the subgraph remaining after deleting all vertices of that colour has minimum degree at least $k$. 
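The identity between the two edge-count expressions used above, and the bound $\varepsilon_k>\tfrac{1}{10^{4}k^{3}}$, can be double-checked mechanically (a small sketch, not part of the proof; $k\geq 3$ ensures $t\geq 1$):

```python
from math import comb

# Check that (k-1)n - t equals (k-1)(n-k+2) + C(k-2,2) + 1 for the
# extremal t = (k-2)(k+1)/2 - 1, and that the resulting
# eps_k = 1/max(1e4 k^2, 100 k t) exceeds 1/(1e4 k^3).
for k in range(3, 40):
    t = (k - 2) * (k + 1) // 2 - 1     # (k-2)(k+1) is always even
    for n in range(k, 80):
        assert (k - 1) * n - t == (k - 1) * (n - k + 2) + comb(k - 2, 2) + 1
    assert max(10**4 * k**2, 100 * k * t) < 10**4 * k**3
print("identities verified")
```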
If we can ensure that sufficiently many vertices get coloured (and the number of colours is fixed), then in this way we find a significantly smaller subgraph with minimum degree at least $k$. Our proof relies on and extends the ideas of Mousset, Noever and \v{S}kori\'{c} in \cite{zurich}, although they do not use a colouring approach in their argument. We construct our desired colouring iteratively, and apply the techniques of Mousset, Noever and \v{S}kori\'{c} in every step of the iteration. Note that if $G$ has a subgraph with minimum degree at least $k$, then the induced subgraph with the same vertices also has minimum degree at least $k$. Thus, in all the statements above we can replace `subgraph' by `induced subgraph'. \textit{Organization.} We prove Theorem \ref{thm} in Section \ref{sect2} apart from the proof of a certain lemma. This lemma is proved in Section \ref{sect3} assuming another lemma, Lemma \ref{mainlem}, that is stated at the beginning of Section 3. Lemma \ref{mainlem} is a key tool in our proof and an extension of Lemma 2.7 in Mousset, Noever and \v{S}kori\'{c}'s paper \cite{zurich}. We prove Lemma \ref{mainlem} in Section \ref{sect5} after some preparations in Section \ref{sect4}. \textit{Notation.} All graphs throughout this paper are assumed to be finite. Furthermore, all subgraphs are meant to be non-empty (but they may be equal to the original graph). An induced subgraph is called proper if it has fewer vertices than the original graph. When we say that a graph has minimum degree at least $k$, we implicitly also mean to say that the graph is non-empty. For a graph $G$ let $V(G)$ denote the set of its vertices, $v(G)$ the number of its vertices and $e(G)$ the number of its edges. For any integer $i$, let $V_{\leq i}(G)$ be the set of vertices of $G$ with degree at most $i$ and $V_{i}(G)$ the set of vertices of $G$ with degree equal to $i$. 
For a subset $X\subseteq V(G)$ let $G-X$ be the graph obtained by deleting all vertices in $X$ from $G$. Let $\overline{e}_G(X)$ denote the number of edges of $G$ that are incident with at least one vertex in $X$, i.e.\ $\overline{e}_G(X)=e(G)-e(G-X)$. Call a vertex $v\in V(G)$ adjacent to $X$, if $v$ is adjacent to at least one vertex in $X$. In this case, call $X$ a neighbour of $v$ and $v$ a neighbour of $X$. Finally, if $\mathcal{X}$ is a collection of disjoint subsets $X\subseteq V(G)$, then $G-\mathcal{X}$ denotes the graph obtained from $G$ by deleting all members of $\mathcal{X}$. In general, we try to keep the notation and the choice of variables similar to \cite{zurich}, so that the reader can see the connections to the ideas in \cite{zurich} more easily. \section{Proof of Theorem \ref{thm}} \label{sect2} Let $k\geq 2$ be fixed throughout the paper. Furthermore, let $1\leq t\leq \frac{(k-2)(k+1)}{2}-1$ be an integer and let $$\varepsilon=\frac{1}{\max(10^{4}k^2, 100kt)}.$$ We prove Theorem \ref{thm} by induction on $n$. Note that the theorem is vacuously true for $n=k-1$, because in this case a graph on $n$ vertices can have at most \[{k-1\choose 2}=(k-1)^2-\frac{(k-1)k}{2}=(k-1)n-\frac{(k-2)(k+1)}{2}-1<(k-1)n-t\] edges. So from now on, let us assume that $n\geq k$ and that Theorem \ref{thm} holds for all smaller values of $n$. Consider a graph $G$ on $n$ vertices with $e(G)\geq (k-1)n-t$ edges. We would like to show that $G$ contains a subgraph on at most $(1-\varepsilon)n$ vertices and with minimum degree at least $k$. So let us assume for contradiction that $G$ does not contain such a subgraph. \begin{claim}\label{claim-sets-many-edges}For every subset $X\subseteq V(G)$ of size $1\leq \vert X\vert\leq n-k+1$, we have $\overline{e}_G(X)\geq (k-1)\vert X\vert+1$. \end{claim} \begin{proof}Suppose there exists a subset $X\subseteq V(G)$ of size $1\leq \vert X\vert\leq n-k+1$ with $\overline{e}_G(X)\leq (k-1)\vert X\vert$. 
Then let $G'=G-X$ be the graph obtained by deleting $X$. Note that $G'$ has $n-\vert X\vert\geq k-1$ vertices and \[e(G')=e(G)-\overline{e}_G(X)\geq (k-1)n-t-(k-1)\vert X\vert=(k-1)(n-\vert X\vert)-t.\] So by the induction assumption $G'$ contains a subgraph on at most $(1-\varepsilon)(n-\vert X\vert)\leq (1-\varepsilon)n$ vertices and with minimum degree at least $k$. This subgraph is also a subgraph of $G$, which is a contradiction to our assumption on $G$. \end{proof} In particular, for every vertex $v\in V(G)$, Claim \ref{claim-sets-many-edges} applied to $X=\lbrace v\rbrace$ yields $\deg_G(v)=\overline{e}_G(\lbrace v\rbrace)\geq k$. Thus, the graph $G$ has minimum degree at least $k$ and $n\geq k+1$. The following lemma is very similar to Lemma 4 in \cite{erdos2} for $\alpha=\frac{1}{3k}$. However, in contrast to \cite[Lemma 4]{erdos2}, in our situation we are not given an upper bound on the number of edges. \begin{lem}[see Lemma 4 in \cite{erdos2}]\label{deg-k-vertices} Let $H$ be a graph on $n$ vertices with minimum degree at least $k$. If $H$ has at most $\frac{n}{3k}$ vertices of degree $k$, then $H$ has a subgraph on at most $\left(1-\frac{1}{27k^{2}}\right)n$ vertices with minimum degree at least $k$. \end{lem} One can prove this lemma similarly to \cite[Lemma 4]{erdos2}. For the reader's convenience, we provide a proof in the appendix. Recall our assumption that $G$ does not have a subgraph on at most $(1-\varepsilon)n$ vertices with minimum degree at least $k$, and also recall that $G$ has minimum degree at least $k$. By Lemma \ref{deg-k-vertices} the graph $G$ must therefore have at least $\frac{n}{3k}$ vertices of degree $k$. Furthermore, as $\varepsilon<\frac{1}{2}$, the graph $G$ must be connected. A very important idea for the proof is the notion of \emph{good sets} from \cite{zurich}. Both the following definition and the subsequent properties are taken from \cite{zurich}. 
We repeat the proofs here, because our statements differ slightly from the ones in \cite{zurich}, but the proofs are essentially the same. \begin{defi}[Definition 2.1 in \cite{zurich}]\label{def-good-set} A \emph{good set} in $G$ is a subset of $V(G)$ constructed according to the following rules: \begin{itemize} \item If $v$ is a vertex of degree $k$ in $G$, then $\lbrace v\rbrace$ is a good set. \item If $A$ is a good set and $v\in V(G)\setminus A$ with $\deg_{G-A}(v)\leq k-1$, then $A\cup \lbrace v\rbrace$ is a good set. \item If $A$ and $B$ are good sets and $A\cap B\neq \emptyset$, then $A\cup B$ is a good set. \item If $A$ and $B$ are good sets and there is an edge connecting a vertex in $A$ to a vertex in $B$, then $A\cup B$ is a good set. \end{itemize} \end{defi} Clearly, each good set is non-empty. \begin{lem}[see Claim 2.2(i) in \cite{zurich}]\label{lemma-edges-good-set} If $D$ is a good set in $G$ of size $\vert D\vert\leq n-k+1$, then $\overline{e}_G(D)\leq (k-1)\vert D\vert+1$. \end{lem} \begin{proof}We prove the lemma by induction on the construction rules in Definition \ref{def-good-set}. For the first rule, observe that $\overline{e}_G(\lbrace v\rbrace)=\deg_G(v)= (k-1)+1$ for every vertex $v$ of degree $k$. For the second rule, assume that $\overline{e}_G(A)\leq (k-1)\vert A\vert+1$ and $\deg_{G-A}(v)\leq k-1$, then $$\overline{e}_G(A\cup \lbrace v\rbrace)=\overline{e}_G(A)+\deg_{G-A}(v)\leq (k-1)(\vert A\vert+1)+1.$$ For the third rule, assume that $\overline{e}_G(A)\leq (k-1)\vert A\vert+1$ and $\overline{e}_G(B)\leq (k-1)\vert B\vert+1$ as well as $\vert A\vert, \vert B\vert\leq n-k+1$ and $A\cap B\neq \emptyset$. Then $1\leq \vert A\cap B\vert\leq n-k+1$ and therefore by Claim \ref{claim-sets-many-edges} we have $\overline{e}_G(A\cap B)\geq (k-1)\vert A\cap B\vert+1$. 
Thus, $$\overline{e}_G(A\cup B)\leq \overline{e}_G(A)+\overline{e}_G(B)-\overline{e}_G(A\cap B)\leq (k-1)(\vert A\vert+\vert B\vert-\vert A\cap B\vert)+1=(k-1)\vert A\cup B\vert+1.$$ For the fourth rule, assume that $\overline{e}_G(A)\leq (k-1)\vert A\vert+1$ and $\overline{e}_G(B)\leq (k-1)\vert B\vert+1$ and that there is at least one edge between $A$ and $B$. We may also assume that $A$ and $B$ are disjoint, because the case $A\cap B\neq \emptyset$ is already covered by the third rule. Now $$\overline{e}_G(A\cup B)\leq \overline{e}_G(A)+\overline{e}_G(B)-1\leq (k-1)(\vert A\vert+\vert B\vert)+1=(k-1)\vert A\cup B\vert+1.$$ This finishes the proof of the lemma.\end{proof} \begin{claim}[see Claim 2.2(ii) in \cite{zurich}]\label{remove-good-set-1} If $D$ is a good set in $G$ of size $\vert D\vert\leq n-k+1$, then $G-D$ contains a subgraph of minimum degree at least $k$. \end{claim} \begin{proof} The graph $G-D$ has $n-\vert D\vert\geq k-1$ vertices and $e(G)-\overline{e}_G(D)$ edges. By Lemma \ref{lemma-edges-good-set} these are at least \begin{multline*} e(G)-\overline{e}_G(D)\geq ((k-1)n-t)-((k-1)\vert D\vert+1)=(k-1)(n-\vert D\vert)-(t+1)\\ \geq (k-1)(n-\vert D\vert)-\frac{(k-2)(k+1)}{2}=(k-1)((n-\vert D\vert)-k+2)+{k-2\choose 2} \end{multline*} edges, so by Fact \ref{fact1} the graph $G-D$ contains a subgraph of minimum degree at least $k$. \end{proof} \begin{claim}[see the paragraph of \cite{zurich} below the proof of Claim 2.2]\label{no-big-good-set} If $D$ is a good set in $G$, then $\vert D\vert\leq \frac{n}{k}$.\end{claim} \begin{proof} Suppose there is a good set $D$ of size $\vert D\vert> \frac{n}{k}>1$ (recall $n\geq k+1$). Then let us choose such a set with $\vert D\vert$ minimal (under the constraint $\vert D\vert> \frac{n}{k}$). As $D$ is constructed according to the rules in Definition \ref{def-good-set}, there must be a good set $D'$ with size $\vert D'\vert=\vert D\vert-1$ or $\vert D\vert/2\leq \vert D'\vert<\vert D\vert$. 
As $\vert D\vert\geq 2$ we have in either case $\vert D'\vert\geq \vert D\vert/2\geq \frac{n}{2k}$, but also $\vert D'\vert<\vert D\vert$ and therefore $\vert D'\vert\leq \frac{n}{k}$ by the choice of $D$. As $n\geq k+1$, this implies $\vert D'\vert\leq \frac{n}{k}\leq n-k+1$ and so by Claim \ref{remove-good-set-1} the graph $G-D'$ contains a subgraph with minimum degree at least $k$. But this subgraph has at most $n-\vert D'\vert\leq n-\frac{n}{2k}<(1-\varepsilon)n$ vertices, which is a contradiction to our assumption on $G$. \end{proof} \begin{coro}\label{coro-edges-good-set}For every good set $D$ in $G$ we have $\overline{e}_G(D)\leq (k-1)\vert D\vert+1$. \end{coro} \begin{proof}As $\frac{n}{k}\leq n-k+1$, this follows directly from Claim \ref{no-big-good-set} and Lemma \ref{lemma-edges-good-set}. \end{proof} Now denote by $\mathcal{C}^{*}$ the collection of all maximal good subsets of $G$. By Corollary \ref{coro-edges-good-set} every $D\in \mathcal{C}^{*}$ is a subset of $V(G)$ satisfying $\overline{e}_G(D)\leq (k-1)\vert D\vert+1$. Furthermore, by the construction rules in Definition \ref{def-good-set} all the members of $\mathcal{C}^{*}$ are disjoint and there are no edges between them. \begin{claim}\label{remove-good-set} If $D\in \mathcal{C}^{*}$, then $G-D$ is a non-empty graph with minimum degree at least $k$.\end{claim} \begin{proof} By Claim \ref{no-big-good-set} we have $\vert D\vert\leq \frac{n}{k}<n$, hence $G-D$ is non-empty. Since $D$ is a maximal good set, we have $\deg_{G-D}(v)\geq k$ for each $v\in V(G)\setminus D$ by the second rule in Definition \ref{def-good-set}. \end{proof} By the first rule in Definition \ref{def-good-set} every vertex of $G$ with degree $k$ is contained in some member of $\mathcal{C}^{*}$. We know by Lemma \ref{deg-k-vertices} that $G$ has at least $\frac{n}{3k}$ vertices of degree $k$. 
Hence $$\sum_{D\in \mathcal{C}^{*}}\vert D\vert\geq \frac{n}{3k}.$$ For any subset $\mathcal{X}$ of $\mathcal{C}^{*}$, let $\Vert \mathcal{X}\Vert=\sum_{D\in \mathcal{X}}\vert D\vert$ be the sum of the sizes of the elements of $\mathcal{X}$. Then the last inequality reads $\Vert\mathcal{C}^{*}\Vert=\sum_{D\in \mathcal{C}^{*}}\vert D\vert\geq \frac{n}{3k}$. Let $m=\vert \mathcal{C}^{*}\vert$ and $\mathcal{C}^{*}=\lbrace D_1,D_2,\dots,D_m\rbrace$ with $\vert D_1 \vert\geq \vert D_2 \vert\geq \dots \geq \vert D_m \vert>0$. Now let $J$ be the largest positive integer with $2^{J}-1\leq m$. For each $j=1,\dots,J$ set $$\mathcal{C}_j=\lbrace D_i \,\vert\, 2^{j-1}\leq i<2^{j}\rbrace$$ and set $$\mathcal{C}=\lbrace D_i \,\vert\, 1\leq i\leq 2^{J}-1\rbrace.$$ Then $\mathcal{C}_1$,\dots, $\mathcal{C}_J$ are disjoint subcollections of $\mathcal{C}^{*}$ and their union is $\mathcal{C}$. For each $j=1,\dots,J$ we have $\vert \mathcal{C}_j\vert=2^{j-1}$. Note that $\vert \mathcal{C}\vert=2^{J}-1$ and therefore $\vert \mathcal{C}^{*}\setminus \mathcal{C}\vert=m-(2^{J}-1)<2^{J}$ by the choice of $J$. Thus, $\vert \mathcal{C}^{*}\setminus \mathcal{C}\vert\leq \vert \mathcal{C}\vert$. Since every element of $\mathcal{C}$ has at least the size of every element of $\mathcal{C}^{*}\setminus \mathcal{C}$, this implies $\Vert \mathcal{C}^{*}\setminus \mathcal{C}\Vert\leq \Vert \mathcal{C}\Vert$ and hence $\Vert \mathcal{C}\Vert\geq \frac{1}{2} \Vert\mathcal{C}^{*}\Vert\geq \frac{n}{6k}$. Furthermore for each $j=1,\dots,J-1$ we have $\vert \mathcal{C}_{j+1}\vert=2^{j}=2\vert \mathcal{C}_{j}\vert$ and since every element of $\mathcal{C}_j$ has at least the size of every element of $\mathcal{C}_{j+1}$, this implies $\Vert \mathcal{C}_{j+1}\Vert\leq 2\Vert \mathcal{C}_j\Vert$. 
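The dyadic grouping of $\mathcal{C}^{*}$ just described can be checked numerically; the following sketch uses an invented size sequence $\vert D_1\vert\geq\dots\geq\vert D_m\vert$ (all data hypothetical) and verifies the two bounds $\Vert \mathcal{C}\Vert\geq \frac{1}{2}\Vert\mathcal{C}^{*}\Vert$ and $\Vert \mathcal{C}_{j+1}\Vert\leq 2\Vert \mathcal{C}_j\Vert$:

```python
# Hypothetical sizes |D_1| >= |D_2| >= ... >= |D_m| of the maximal good sets.
sizes = sorted([9, 7, 7, 6, 5, 4, 4, 3, 3, 2, 2, 2, 1], reverse=True)
m = len(sizes)  # here m = 13

# J is the largest positive integer with 2^J - 1 <= m.
J = 1
while 2 ** (J + 1) - 1 <= m:
    J += 1
# For m = 13: 2^3 - 1 = 7 <= 13 < 15 = 2^4 - 1, so J = 3.

# C_j = {D_i : 2^(j-1) <= i < 2^j}, a bucket of exactly 2^(j-1) sets
# (indices shifted by one since Python lists are 0-based).
buckets = [sizes[2 ** (j - 1) - 1 : 2 ** j - 1] for j in range(1, J + 1)]
norm = [sum(b) for b in buckets]   # ||C_j|| for j = 1, ..., J
total_C = sum(norm)                # ||C||   (the first 2^J - 1 sets)
total_Cstar = sum(sizes)           # ||C*||  (all m sets)

# ||C|| >= (1/2)||C*||, since |C* \ C| <= |C| and sizes are sorted:
assert 2 * total_C >= total_Cstar
# ||C_{j+1}|| <= 2 ||C_j||, since |C_{j+1}| = 2|C_j| and sizes are sorted:
assert all(norm[j + 1] <= 2 * norm[j] for j in range(J - 1))
print(J, norm, total_C, total_Cstar)
```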
Let $J'\leq J$ be the least positive integer such that \begin{equation}\label{eqJstrich1} \Vert \mathcal{C}_1\Vert+\dots+\Vert \mathcal{C}_{J'}\Vert\geq \frac{n}{100k} \end{equation} (note that $J$ has this property since $\Vert \mathcal{C}_1\Vert+\dots+\Vert \mathcal{C}_J\Vert=\Vert \mathcal{C}\Vert\geq \frac{n}{6k}$, hence there is a least positive integer $J'\leq J$ with the property). Note that $J'>1$, since otherwise we would have $\vert D_1\vert=\Vert \mathcal{C}_1\Vert\geq \frac{n}{100k}$ and since $G-D_1$ has minimum degree at least $k$ by Claim \ref{remove-good-set}, this would mean that $G$ has a subgraph with at most $(1-\frac{1}{100k})n$ vertices and minimum degree at least $k$, which contradicts our assumptions on $G$. Thus, indeed $J'>1$. Note that $\Vert \mathcal{C}_1\Vert+\dots+\Vert \mathcal{C}_{J'-1}\Vert< \frac{n}{100k}$ by the choice of $J'$ and in particular $\Vert \mathcal{C}_{J'-1}\Vert< \frac{n}{100k}$. Hence \begin{equation}\label{eqJstrich3} \Vert \mathcal{C}_{J'}\Vert\leq 2\Vert \mathcal{C}_{J'-1}\Vert< \frac{2n}{100k} \end{equation} and therefore $$\Vert \mathcal{C}_1\Vert+\dots+\Vert \mathcal{C}_{J'}\Vert<\frac{n}{100k}+\frac{2n}{100k}=\frac{3n}{100k}.$$ Thus, \begin{equation}\label{eqJstrich2} \Vert \mathcal{C}_{J'+1}\Vert+\dots+\Vert \mathcal{C}_{J}\Vert=\Vert \mathcal{C}\Vert-(\Vert \mathcal{C}_1\Vert+\dots+\Vert \mathcal{C}_{J'}\Vert)\geq \frac{n}{6k}-\frac{3n}{100k}>\frac{n}{8k}. \end{equation} \begin{claim}\label{Jstrichgross} $2^{J'}>t$. \end{claim} \begin{proof} By (\ref{eqJstrich1}) we have $$\sum_{D\in \mathcal{C}_1\cup\dots\cup\mathcal{C}_{J'}}\vert D\vert=\Vert \mathcal{C}_1\Vert+\dots+\Vert \mathcal{C}_{J'}\Vert\geq \frac{n}{100k}.$$ Note that the set $\mathcal{C}_1\cup\dots\cup\mathcal{C}_{J'}=\lbrace D_i \,\vert\, 1\leq i<2^{J'}\rbrace$ has $2^{J'}-1$ elements. 
If we had $2^{J'}\leq t$, then the above sum would have at most $t-1$ summands and in particular there would be some $D\in \mathcal{C}_1\cup\dots\cup\mathcal{C}_{J'}\subseteq \mathcal{C}^{*}$ with $\vert D\vert\geq \frac{n}{100kt}$. But then by Claim \ref{remove-good-set} the graph $G-D$ would be a subgraph of $G$ with minimum degree at least $k$ and at most $(1-\frac{1}{100kt})n\leq (1-\varepsilon)n$ vertices. This is a contradiction to our assumption on $G$. \end{proof} Let us now fix $401k$ colours and enumerate them colour 1 to colour $401k$. As indicated in the introduction, we prove Theorem \ref{thm} by constructing a colouring of some of the vertices of $G$, such that when removing the vertices of any colour class what remains is a graph of minimum degree at least $k$. If we colour sufficiently many vertices of $G$, then one of the colour classes has size at least $\varepsilon n$ and we obtain a subgraph on at most $(1-\varepsilon) n$ vertices with minimum degree at least $k$. Throughout the paper, when we talk about colourings of the vertices of a graph we do not mean proper colourings (i.e.\ we allow adjacent vertices to have the same colour) and we also allow some vertices to remain uncoloured. \begin{defi}\label{approcolour} For $\l=J',\dots,J$, an \emph{$\l$-appropriate} colouring of $G$ is a colouring of some of the vertices of $G$ by the $401k$ given colours, such that the following seven conditions are satisfied. Here, for each $i=1,\dots,401k$, we let $X_i\subseteq V(G)$ denote the set of vertices coloured in colour $i$. \begin{itemize} \item[(i)] Each vertex has at most one colour. \item[(ii)] For each $D\in \mathcal{C}$, the set $D$ is either monochromatic or completely uncoloured. \item[(iii)] For $i=1,\dots,401k$, let $y_i^{(\l)}$ be the number of members of $\mathcal{C}_1\cup\dots\cup\mathcal{C}_\l$ that are monochromatic in colour $i$. Then for each $i=1,\dots,401k$ we have $\overline{e}_G(X_i)\leq (k-1)\vert X_i\vert+y_i^{(\l)}$. 
\item[(iv)] For each $i=1,\dots,401k$, the graph $G-X_i$ has minimum degree at least $k$. In other words: When removing all the vertices with colour $i$, we obtain a graph of minimum degree at least $k$. \item[(v)] The members of $\mathcal{C}_1\cup\dots\cup\mathcal{C}_{J'}$ are all uncoloured. \item[(vi)] For every $J'+1\leq j\leq \l$, the number of uncoloured members of $\mathcal{C}_j$ is at most $\frac{1}{4}\vert \mathcal{C}_j\vert$. \item[(vii)] If $D\in \mathcal{C}_{\l+1}\cup\dots\cup\mathcal{C}_{J}$ is uncoloured, then all of its neighbours $v\in V(G)$ are also uncoloured. \end{itemize} \end{defi} \begin{lem}\label{colour} For every $\l=J',\dots,J$ there exists an $\l$-appropriate colouring of $G$. \end{lem} We prove Lemma \ref{colour} by induction in Section \ref{sect3}. We remark that in order to prove Theorem \ref{thm} we only need Lemma \ref{colour} for $\l=J$ and we also only need conditions (ii), (iv) and (vi) in Definition \ref{approcolour}. However, all the other conditions are needed in order to keep the induction in the proof of Lemma \ref{colour} running. Let us now finish the proof of Theorem \ref{thm}. Consider a $J$-appropriate colouring of $G$, which exists by Lemma \ref{colour}. For every $j=J'+1,\dots,J$ by condition (vi) in Definition \ref{approcolour} the number of uncoloured members of $\mathcal{C}_j$ is at most $\frac{1}{4}\vert \mathcal{C}_j\vert=\frac{1}{4}2^{j-1}=\frac{1}{2}2^{j-2}=\frac{1}{2}\vert \mathcal{C}_{j-1}\vert$. 
Since the size of every member of $\mathcal{C}_{j-1}$ is at least the size of every member of $\mathcal{C}_j$, this implies $$\sum_{D\in \mathcal{C}_j\text{ uncoloured}}\vert D\vert\leq \frac{1}{2}\Vert \mathcal{C}_{j-1}\Vert.$$ Thus, for every $j=J'+1,\dots,J$, $$\sum_{D\in \mathcal{C}_j\text{ coloured}}\vert D\vert=\Vert \mathcal{C}_j\Vert-\sum_{D\in \mathcal{C}_j\text{ uncoloured}}\vert D\vert\geq \Vert \mathcal{C}_j\Vert-\frac{1}{2}\Vert \mathcal{C}_{j-1}\Vert.$$ So the total number of coloured vertices is at least $$\sum_{D\in \mathcal{C}_{J'+1}\cup\dots\cup \mathcal{C}_J\text{ coloured}}\vert D\vert\geq \sum_{j=J'+1}^{J} \left(\Vert \mathcal{C}_j\Vert-\frac{1}{2}\Vert \mathcal{C}_{j-1}\Vert\right)= \sum_{j=J'+1}^{J} \Vert \mathcal{C}_j\Vert-\frac{1}{2}\sum_{j=J'+1}^{J-1} \Vert \mathcal{C}_j\Vert-\frac{1}{2}\Vert \mathcal{C}_{J'}\Vert,$$ and by (\ref{eqJstrich3}) and (\ref{eqJstrich2}) this is at least $$\frac{1}{2}\sum_{j=J'+1}^{J} \Vert \mathcal{C}_j\Vert-\frac{1}{2}\cdot \frac{2n}{100k}>\frac{n}{16k}-\frac{n}{100k}>\frac{n}{20k}.$$ Since there are at least $\frac{n}{20k}$ coloured vertices, one of the $401k$ colours must occur at least $\frac{n}{10^{4}k^{2}}$ times. If we delete all vertices of this colour, then by condition (iv) in Definition \ref{approcolour} the remaining graph has minimum degree at least $k$ and at most $$\left(1-\frac{1}{10^{4}k^{2}}\right)n\leq (1-\varepsilon)n$$ vertices. This contradicts our assumption on $G$ and finishes the proof of Theorem \ref{thm}. \section{Proof of Lemma \ref{colour}} \label{sect3} The goal of this section is to prove Lemma \ref{colour}. Our proof proceeds by induction on $\l$. In each step we apply the following key lemma, which is an extension of \cite[Lemma 2.7]{zurich}. 
\begin{lem}\label{mainlem} Let $H$ be a graph and $\mathcal{C}_H$ be a collection of disjoint non-empty subsets of $V(H)$, such that for each $D\in\mathcal{C}_H$ we have $\overline{e}_H(D)\leq (k-1)\vert D\vert+1$ and $\deg_H(v)\geq k$ for each $v\in D$. Assume that $H$ does not have a subgraph of minimum degree at least $k$. Then we can find a subset $S\subseteq V_{\leq k-1}(H)$ with $ V_{\leq k-2}(H)\subseteq S$ and $$\sum_{s\in S}(k-\deg_H(s))\leq 2((k-1)v(H)-e(H))$$ as well as disjoint subsets $B_v\subseteq V(H)$ for each vertex $v\in V_{\leq k-1}(H)\setminus S=V_{k-1}(H)\setminus S$, such that the following holds: If $\widetilde{H}$ is a graph containing $H$ as a proper induced subgraph such that $V_{\leq k-1}(\widetilde{H})\subseteq V_{\leq k-1}(H)\setminus S$ and no vertex in $V(\widetilde{H})\setminus V(H)$ is adjacent to any member of $\mathcal{C}_H$, then there exists a (non-empty) induced subgraph $\widetilde{H}'$ of $\widetilde{H}$ with the following six properties: \begin{itemize} \item[(a)] The minimum degree of $\widetilde{H}'$ is at least $k$. \item[(b)] $(k-1)v(\widetilde{H}')-e(\widetilde{H}')\leq (k-1)v(\widetilde{H})-e(\widetilde{H})$. \item[(c)] $V(\widetilde{H})\setminus V(\widetilde{H}')\subseteq \bigcup_{v\in V_{\leq k-1}(\widetilde{H})}B_v$, so in particular $V(\widetilde{H})\setminus V(\widetilde{H}')\subseteq V(H)$. \item[(d)] No vertex in $V(\widetilde{H})\setminus V(\widetilde{H}')$ is adjacent to any vertex in $V(\widetilde{H})\setminus V(H)$. \item[(e)] For each $D\in \mathcal{C}_H$ either $D\subseteq V(\widetilde{H}')$ or $D\cap V(\widetilde{H}')=\emptyset$. \item[(f)] If $D\in \mathcal{C}_H$ and $D\subseteq V(\widetilde{H}')$, then $D$ is not adjacent to any vertex in $V(\widetilde{H})\setminus V(\widetilde{H}')$. \end{itemize} \end{lem} We prove Lemma \ref{mainlem} in Section \ref{sect5} after some preparations in Section \ref{sect4}. Let us now prove Lemma \ref{colour}. First notice that Lemma \ref{colour} is true for $\l=J'$. 
Indeed, we can take the colouring of $G$ in which all vertices are uncoloured. This satisfies all conditions in Definition \ref{approcolour} (note that condition (vi) is vacuous for $\l=J'$). Now let $J'\leq \l\leq J-1$ and assume that we are given an $\l$-appropriate colouring of $G$. We would like to extend this colouring by colouring some of the yet uncoloured vertices to obtain an $(\l+1)$-appropriate colouring. This would complete the induction step. In order to avoid later confusion, let us denote the given $\l$-appropriate colouring of $G$ by $\varphi$. As in Definition \ref{approcolour}, for $i=1,\dots,401k$, let $X_i$ be the set of vertices coloured in colour $i$ and $y_i^{(\l)}$ the number of members of $\mathcal{C}_1\cup\dots\cup\mathcal{C}_\l$ that are monochromatic in colour $i$. By (v) all the coloured members of $\mathcal{C}_1\cup\dots\cup\mathcal{C}_\l$ are actually in $\mathcal{C}_{J'+1}\cup\dots\cup\mathcal{C}_\l$ and hence \begin{equation}\label{y-sum} y_1^{(\l)}+y_2^{(\l)}+\dots+y_{401k}^{(\l)}+\vert\lbrace D\in \mathcal{C}_{J'+1}\cup\dots\cup\mathcal{C}_\l\,\vert\, D\text{ uncoloured}\rbrace\vert=\vert\mathcal{C}_{J'+1}\vert+\dots+\vert\mathcal{C}_\l\vert. \end{equation} Let $\mathcal{C}_{\l+1}'\subseteq \mathcal{C}_{\l+1}$ consist of those members of $\mathcal{C}_{\l+1}$ that are uncoloured in $\varphi$. \begin{claim}\label{Clplus1strich} If $\vert\mathcal{C}_{\l+1}'\vert\leq \frac{1}{4}\vert\mathcal{C}_{\l+1}\vert$, then the $\l$-appropriate colouring $\varphi$ is $(\l+1)$-appropriate. \end{claim} \begin{proof}Conditions (i), (ii), (iv) and (v) in Definition \ref{approcolour} do not depend on the value of $\l$ and are satisfied for $\varphi$. 
In condition (iii) we have $y_i^{(\l+1)}\geq y_i^{(\l)}$ for each colour $i$ and therefore $$\overline{e}_G(X_i)\leq (k-1)\vert X_i\vert+y_i^{(\l)}\leq (k-1)\vert X_i\vert+y_i^{(\l+1)}.$$ Note that condition (vi) is already satisfied for $J'+1\leq j\leq \l$ and is satisfied for $j=\l+1$ according to the assumption $\vert\mathcal{C}_{\l+1}'\vert\leq \frac{1}{4}\vert\mathcal{C}_{\l+1}\vert$. Condition (vii) is a strictly weaker statement for $\l+1$ than for $\l$. \end{proof} Hence we may assume that $\vert\mathcal{C}_{\l+1}'\vert> \frac{1}{4}\vert\mathcal{C}_{\l+1}\vert$. Set \begin{equation}\label{H-def1} H=G-(\mathcal{C}_1\cup\dots\cup\mathcal{C}_{\l+1})-(X_1\cup\dots\cup X_{401k}), \end{equation} i.e.\ $H$ is obtained from $G$ by deleting all members of $\mathcal{C}_1\cup\dots\cup\mathcal{C}_{\l+1}$ and all coloured vertices. Note that this can also be expressed as \begin{equation}\label{H-def2} H=G-(X_1\cup\dots\cup X_{401k})-(\mathcal{C}_1\cup\dots\cup\mathcal{C}_{J'})-\mathcal{C}_{\l+1}'-\bigcup_{\substack{D\in \mathcal{C}_{J'+1}\cup\dots\cup\mathcal{C}_{\l}\\ \text{uncoloured in }\varphi}}D \end{equation} and here all the deleted sets of vertices are disjoint. \begin{claim}\label{H-nonempty} The graph $H$ is non-empty. \end{claim} \begin{proof} By Claim \ref{Clplus1strich} we may assume $\vert\mathcal{C}_{\l+1}'\vert> \frac{1}{4}\vert\mathcal{C}_{\l+1}\vert$. In particular, $\mathcal{C}_{\l+1}'$ is non-empty, so let $D\in \mathcal{C}_{\l+1}'\subseteq \mathcal{C}_{\l+1}$. Since $G$ is connected, $D$ is adjacent to some vertex $v\in V(G)\setminus D$. Since there are no edges between the members of $\mathcal{C}$, the vertex $v$ cannot lie in any member of $\mathcal{C}_1\cup\dots\cup\mathcal{C}_{\l+1}$. Furthermore $D\in \mathcal{C}_{\l+1}'$ is uncoloured in the $\l$-appropriate colouring $\varphi$ and by condition (vii) from Definition \ref{approcolour}, this implies that $v$ is also uncoloured in $\varphi$. Hence $v\not\in X_1\cup\dots\cup X_{401k}$. 
So by (\ref{H-def1}) we obtain $v\in V(H)$ and in particular the graph $H$ is non-empty. \end{proof} Let us derive some useful properties of $H$ in order to apply Lemma \ref{mainlem} to $H$ afterwards. \begin{lem}\label{H-edge-defect} $(k-1)v(H)-e(H)\leq 12\vert\mathcal{C}_{\l+1}'\vert$. \end{lem} \begin{proof} From (\ref{H-def2}) we obtain \begin{equation}\label{H-size} v(H)=n-(\vert X_1\vert+\dots+\vert X_{401k}\vert)-\sum_{D\in \mathcal{C}_1\cup\dots\cup\mathcal{C}_{J'}}\vert D\vert-\sum_{D\in \mathcal{C}_{\l+1}'}\vert D\vert-\sum_{\substack{D\in \mathcal{C}_{J'+1}\cup\dots\cup\mathcal{C}_{\l}\\ \text{uncoloured}}}\vert D\vert. \end{equation} On the other hand $$e(H)\geq e(G)-(\overline{e}_G(X_1)+\dots+\overline{e}_G(X_{401k}))-\sum_{D\in \mathcal{C}_1\cup\dots\cup\mathcal{C}_{J'}}\overline{e}_G(D)-\sum_{D\in \mathcal{C}_{\l+1}'}\overline{e}_G(D)-\sum_{\substack{D\in \mathcal{C}_{J'+1}\cup\dots\cup\mathcal{C}_{\l}\\ \text{uncoloured}}}\overline{e}_G(D).$$ By condition (iii) in Definition \ref{approcolour} and by Corollary \ref{coro-edges-good-set} this implies \begin{multline*} e(H)\geq ((k-1)n-t)-\sum_{i=1}^{401k}((k-1)\vert X_i\vert+y_i^{(\l)})-\sum_{D\in \mathcal{C}_1\cup\dots\cup\mathcal{C}_{J'}}((k-1)\vert D\vert+1)-\sum_{D\in \mathcal{C}_{\l+1}'}((k-1)\vert D\vert+1)\\ -\sum_{\substack{D\in \mathcal{C}_{J'+1}\cup\dots\cup\mathcal{C}_{\l}\\ \text{uncoloured}}}((k-1)\vert D\vert+1). 
\end{multline*} Together with (\ref{H-size}) we get \begin{multline*} (k-1)v(H)-e(H)\leq t+(y_1^{(\l)}+\dots+y_{401k}^{(\l)})+(\vert\mathcal{C}_1\vert+\dots+\vert\mathcal{C}_{J'}\vert)+\vert\mathcal{C}_{\l+1}'\vert\\ +\vert\lbrace D\in \mathcal{C}_{J'+1}\cup\dots\cup\mathcal{C}_{\l}\,\vert\, D\text{ uncoloured}\rbrace\vert \end{multline*} and by (\ref{y-sum}) $$(k-1)v(H)-e(H)\leq t+(\vert\mathcal{C}_1\vert+\dots+\vert\mathcal{C}_{J'}\vert)+(\vert\mathcal{C}_{J'+1}\vert+\dots+\vert\mathcal{C}_{\l}\vert)+\vert\mathcal{C}_{\l+1}'\vert\leq t+\vert\mathcal{C}_1\vert+\dots+\vert\mathcal{C}_{\l+1}\vert.$$ Using Claim \ref{Jstrichgross} we obtain $$(k-1)v(H)-e(H)\leq 2^{J'}+(2^{0}+\dots+2^{\l})=2^{J'}+2^{\l+1}-1<3\cdot 2^{\l}=3\vert \mathcal{C}_{\l+1}\vert.$$ By Claim \ref{Clplus1strich} we can assume $\vert \mathcal{C}_{\l+1}\vert<4\vert \mathcal{C}_{\l+1}'\vert$ and this gives $(k-1)v(H)-e(H)\leq 12\vert\mathcal{C}_{\l+1}'\vert$ as desired. \end{proof} \begin{claim}\label{H-propC} For every $D\in \mathcal{C}$ we have $D\subseteq V(H)$ or $D\cap V(H)=\emptyset$. \end{claim} \begin{proof}If $D\in \mathcal{C}_1\cup\dots\cup\mathcal{C}_{\l+1}$, then by (\ref{H-def1}) clearly $D\cap V(H)=\emptyset$. Otherwise $D$ is disjoint from all elements of $\mathcal{C}_1\cup\dots\cup\mathcal{C}_{\l+1}$. Also, by condition (ii) in Definition \ref{approcolour} the set $D$ is either contained in some $X_i$ or disjoint from $X_1\cup\dots\cup X_{401k}$. By (\ref{H-def1}) in the first case we have $D\cap V(H)=\emptyset$ and in the second case $D\subseteq V(H)$. \end{proof} Let $\mathcal{C}_H$ be the collection of those $D\in \mathcal{C}$ with $D\subseteq V(H)$. Note that by (\ref{H-def1}) we have $\mathcal{C}_H\subseteq \mathcal{C}_{\l+2}\cup\dots\cup\mathcal{C}_{J}$. 
Now we check that $H$ together with the collection $\mathcal{C}_H$ of subsets of $V(H)$ satisfies the assumptions of Lemma \ref{mainlem}: The elements of $\mathcal{C}_H$ are disjoint, as all elements of $\mathcal{C}$ are disjoint. Each $D\in \mathcal{C}_H$ is non-empty and we have by Corollary \ref{coro-edges-good-set} $$\overline{e}_H(D)\leq \overline{e}_G(D)\leq (k-1)\vert D\vert+1.$$ Also, for each $D\in \mathcal{C}_H$ by (\ref{H-def1}) we have $D\in \mathcal{C}_{\l+2}\cup\dots\cup\mathcal{C}_{J}$ and $D$ is disjoint from $X_1\cup\dots\cup X_{401k}$, hence $D$ is uncoloured in the colouring $\varphi$. By condition (vii) in Definition \ref{approcolour} this implies that all neighbours of $D$ in $G$ are also uncoloured (and therefore not in $X_1\cup\dots\cup X_{401k}$). Since $D$ does not have edges to any other member of $\mathcal{C}$, this implies that $H$ contains all neighbours of $D$ in $G$. Thus, for each $v\in D$ we have $\deg_H(v)=\deg_G(v)\geq k$. Finally, from (\ref{H-def2}) it is clear that $$v(H)\leq n-(\Vert \mathcal{C}_1\Vert+\dots+\Vert \mathcal{C}_{J'}\Vert)$$ and using (\ref{eqJstrich1}) this gives $v(H)\leq n-\frac{n}{100k}<(1-\varepsilon)n$. Since $G$ does not contain a subgraph on at most $(1-\varepsilon)n$ vertices with minimum degree at least $k$, we can conclude that $H$ does not have a subgraph of minimum degree at least $k$. Thus, $H$ does indeed satisfy all assumptions of Lemma \ref{mainlem}. By applying this lemma we find a subset $S\subseteq V_{\leq k-1}(H)$ with $V_{\leq k-2}(H)\subseteq S$ and \begin{equation}\label{ineq-s-sect3-1} \sum_{s\in S}(k-\deg_H(s))\leq 2((k-1)v(H)-e(H)) \end{equation} as well as disjoint subsets $B_v\subseteq V(H)$ for each vertex $v\in V_{\leq k-1}(H)\setminus S=V_{k-1}(H)\setminus S$, such that the statement in Lemma \ref{mainlem} holds. 
From (\ref{ineq-s-sect3-1}) and Lemma \ref{H-edge-defect} we obtain $$\sum_{s\in S}(k-\deg_H(s))\leq 2((k-1)v(H)-e(H))\leq 24\vert\mathcal{C}_{\l+1}'\vert.$$ Since $\deg_H(s)\leq k-1$ for each $s\in S$, this in particular implies $\vert S\vert\leq 24\vert\mathcal{C}_{\l+1}'\vert$. This, in turn, gives \begin{equation}\label{ineq-s-sect3-2} \sum_{s\in S}(k+1-\deg_H(s))=\vert S\vert+\sum_{s\in S}(k-\deg_H(s))\leq 48\vert\mathcal{C}_{\l+1}'\vert. \end{equation} The next step in the proof is to obtain a colouring of some of the members of $\mathcal{C}_{\l+1}'$ (recall that all members of $\mathcal{C}_{\l+1}'$ are uncoloured in $\varphi$). Afterwards, we extend $\varphi$ to an $(\l+1)$-appropriate colouring of $G$ by using the colouring on $\mathcal{C}_{\l+1}'$ and the statement in Lemma \ref{mainlem}. Let $S'\subseteq S$ be the set of all $s\in S$ that are adjacent to at least one member of $\mathcal{C}_{\l+1}'$ in the graph $G$. For each $s\in S'$ we define a set $\mathcal{C}(s)\subseteq\mathcal{C}_{\l+1}'$ as follows: If at least $k+1-\deg_H(s)$ members of $\mathcal{C}_{\l+1}'$ are adjacent to $s$, then we pick any $k+1-\deg_H(s)$ of them to form $\mathcal{C}(s)$. If less than $k+1-\deg_H(s)$ members of $\mathcal{C}_{\l+1}'$ are adjacent to $s$, let $\mathcal{C}(s)$ consist of all of them. In either case we get $\vert \mathcal{C}(s)\vert\leq k+1-\deg_H(s)\leq k+1$ and each member of $\mathcal{C}(s)$ is adjacent to $s$. By (\ref{ineq-s-sect3-2}) we have \begin{equation}\label{ineq-s-sect3-3} \sum_{s\in S'}\vert \mathcal{C}(s)\vert\leq \sum_{s\in S}(k+1-\deg_H(s))\leq 48\vert\mathcal{C}_{\l+1}'\vert. \end{equation} Let us call $D\in \mathcal{C}_{\l+1}'$ \emph{popular}, if $D$ is contained in more than $200$ sets $\mathcal{C}(s)$ for $s\in S'$. Otherwise, let us call $D$ \emph{non-popular}. 
By (\ref{ineq-s-sect3-3}) the number of popular elements in $\mathcal{C}_{\l+1}'$ is at most $$\frac{1}{200}\sum_{s\in S'}\vert \mathcal{C}(s)\vert\leq \frac{48}{200}\vert\mathcal{C}_{\l+1}'\vert<\frac{1}{4}\vert\mathcal{C}_{\l+1}'\vert\leq \frac{1}{4}\vert\mathcal{C}_{\l+1}\vert.$$ Our goal is to colour $\mathcal{C}_{\l+1}'$ with the $401k$ colours in such a way that only the popular sets are left uncoloured. In order to construct the colouring, let us define a list $L(s)$ of up to $k$ colours for each vertex $s\in S'$: If the neighbours of $s$ in $G$ have at most $k$ different colours in the colouring $\varphi$, then let the list $L(s)$ contain all of these colours. If the neighbours of $s$ in $G$ have more than $k$ different colours in the colouring $\varphi$, then we pick any $k$ of these colours for the list $L(s)$. Now we colour the non-popular elements of $\mathcal{C}_{\l+1}'$ with the $401k$ colours, so that each non-popular element of $\mathcal{C}_{\l+1}'$ receives exactly one colour. Furthermore, for each $s\in S'$, we want to ensure that all the non-popular elements of $\mathcal{C}(s)$ receive distinct colours and none of the elements of $\mathcal{C}(s)$ receives a colour on the list $L(s)$. This is possible by a simple greedy algorithm, since each non-popular $D\in \mathcal{C}_{\l+1}'$ is an element of at most 200 sets $\mathcal{C}(s)$, and each of these sets $\mathcal{C}(s)$ has at most $k$ other elements and for each of these sets $\mathcal{C}(s)$ there are at most $k$ forbidden colours on the list $L(s)$. So each of the at most 200 sets $\mathcal{C}(s)$ can rule out at most $2k$ colours for $D$, and $200\cdot 2k<401k$. Let us denote this colouring of $\mathcal{C}_{\l+1}'$ by $\psi$. Then for each $s\in S'$, all the non-popular elements of $\mathcal{C}(s)$ have distinct colours in the colouring $\psi$ and none of them has a colour on the list $L(s)$. 
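The greedy construction of $\psi$ can be sketched as follows, on a toy instance (the value of $k$, the sets $\mathcal{C}(s)$ and the lists $L(s)$ below are invented for illustration); each element is coloured in turn, avoiding the colours forbidden by every set $\mathcal{C}(s)$ containing it:

```python
# Toy instance: k = 2, so there are 401k = 802 colours available.
k = 2
colours = range(1, 401 * k + 1)

# C_sets maps each s in S' to the tuple C(s) of non-popular elements
# adjacent to s; L maps s to its list L(s) of forbidden colours.
C_sets = {"s1": ("D1", "D2", "D3"), "s2": ("D2", "D4"), "s3": ("D3", "D4")}
L = {"s1": [1, 2], "s2": [2, 3], "s3": [1, 4]}

psi = {}
elements = {D for C in C_sets.values() for D in C}
for D in sorted(elements):
    forbidden = set()
    for s, C in C_sets.items():
        if D in C:
            forbidden.update(L[s])                           # colours on L(s)
            forbidden.update(psi[E] for E in C if E in psi)  # colours already
                                                             # used inside C(s)
    # In the paper's setting at most 200 * 2k < 401k colours are
    # forbidden, so a free colour always exists.
    psi[D] = next(c for c in colours if c not in forbidden)

# Within every C(s), colours are pairwise distinct and avoid L(s).
for s, C in C_sets.items():
    assert len({psi[D] for D in C}) == len(C)
    assert all(psi[D] not in L[s] for D in C)
print(psi)
```

The counting behind the greedy step is exactly the one in the text: a non-popular $D$ lies in at most $200$ sets $\mathcal{C}(s)$, and each such set forbids at most $k$ colours of other elements plus at most $k$ list colours, ruling out at most $200\cdot 2k<401k$ colours in total.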
Furthermore, all non-popular members of $\mathcal{C}_{\l+1}'$ are coloured with exactly one colour in $\psi$. Since $\mathcal{C}_{\l+1}'$ has at most $\frac{1}{4}\vert\mathcal{C}_{\l+1}\vert$ popular elements, this means that at least $\vert\mathcal{C}_{\l+1}'\vert-\frac{1}{4}\vert\mathcal{C}_{\l+1}\vert$ members of $\mathcal{C}_{\l+1}'$ are coloured in $\psi$. For each $i=1,\dots,401k$, let $\mathcal{Z}_i\subseteq \mathcal{C}_{\l+1}'$ consist of all the members of $\mathcal{C}_{\l+1}'$ which are coloured with colour $i$ in $\psi$. Our goal is to construct an $(\l+1)$-appropriate colouring of $G$. For that we use the colourings $\varphi$ and $\psi$ (recall that $\psi$ is a colouring on $\mathcal{C}_{\l+1}'$ and all of $\mathcal{C}_{\l+1}'$ is uncoloured in $\varphi$), but we may also need to extend the colouring to some vertices in $H$. More precisely, for each of the $401k$ colours we apply the statement in Lemma \ref{mainlem} to a different graph $\widetilde{H}$ (in each case obtaining a subgraph $\widetilde{H}'$ with the six properties listed in the lemma), and then also colour the set $V(\widetilde{H})\setminus V(\widetilde{H}')\subseteq V(H)$ with the corresponding colour. We then check that together with $\varphi$ and $\psi$ this defines an $(\l+1)$-appropriate colouring of $G$. In order to apply this plan, consider any of the $401k$ colours. To minimize confusion, let us call this colour red. As before, let $X_{\text{red}}$ be the set of vertices of $G$ coloured red in $\varphi$ and $y_{\text{red}}^{(\l)}$ the number of members of $\mathcal{C}_1\cup\dots\cup\mathcal{C}_{\l}$ that are monochromatically red in $\varphi$. The set $\mathcal{Z}_{\text{red}}\subseteq \mathcal{C}_{\l+1}'$ consists of the red members of $\mathcal{C}_{\l+1}'$ in the colouring $\psi$. Let us consider the graph \begin{equation}\label{def-tH-red} \widetilde{H}_\text{red}=G-X_\text{red}-\mathcal{Z}_\text{red}. 
\end{equation} Let us now check that this graph has all properties required to act as $\widetilde{H}$ in Lemma \ref{mainlem}. \begin{claim}\label{tH-red-prop0} $\widetilde{H}_\text{red}$ contains $H$ as a proper induced subgraph. \end{claim} \begin{proof} As $X_\text{red}\subseteq X_1\cup\dots\cup X_{401k}$ and $\mathcal{Z}_\text{red}\subseteq \mathcal{C}_{\l+1}'\subseteq \mathcal{C}_{\l+1}$, it follows from (\ref{H-def1}) that $\widetilde{H}_\text{red}$ contains $H$ as an induced subgraph. On the other hand, by condition (v) in Definition \ref{approcolour} the members of $\mathcal{C}_1\cup\dots\cup\mathcal{C}_{J'}$ are all completely uncoloured in $\varphi$ and therefore disjoint from $X_\text{red}$. By $\mathcal{Z}_\text{red}\subseteq \mathcal{C}_{\l+1}'\subseteq \mathcal{C}_{\l+1}$ (and since any two members of $\mathcal{C}$ are disjoint), the members of $\mathcal{C}_1\cup\dots\cup\mathcal{C}_{J'}$ are also disjoint from all members of $\mathcal{Z}_\text{red}$. Hence all members of $\mathcal{C}_1\cup\dots\cup\mathcal{C}_{J'}$ are subsets of $V(\widetilde{H}_\text{red})$, where $\widetilde{H}_\text{red}=G-X_\text{red}-\mathcal{Z}_\text{red}$. But by (\ref{H-def1}) they are all disjoint from $V(H)$. This establishes that $H$ must be a proper induced subgraph of $\widetilde{H}_\text{red}$. \end{proof} \begin{claim}\label{tH-red-prop1} $V_{\leq k-1}(\widetilde{H}_\text{red})\subseteq V(H)$. \end{claim} \begin{proof} By condition (iv) in Definition \ref{approcolour} the graph $G-X_\text{red}$ has minimum degree at least $k$. So all vertices of $\widetilde{H}_\text{red}=G-X_\text{red}-\mathcal{Z}_\text{red}$ with degree at most $k-1$ are neighbours of members of $\mathcal{Z}_\text{red}$ in $G$. But the members of $\mathcal{Z}_\text{red}$ do not have any edges to members of $\mathcal{C}\setminus \mathcal{Z}_\text{red}$ (as there are no edges between different members of $\mathcal{C}$).
Also, every $D\in \mathcal{Z}_\text{red}\subseteq \mathcal{C}_{\l+1}'$ is uncoloured in $\varphi$ and by condition (vii) in Definition \ref{approcolour} this implies that all neighbours of $D$ in $G$ are also uncoloured in $\varphi$. In particular, $D$ does not have any neighbours in $X_1\cup\dots\cup X_{401k}$. Hence all neighbours of members of $\mathcal{Z}_\text{red}$ in $G$ lie either in $\mathcal{Z}_\text{red}$ itself or in $H=G-(\mathcal{C}_1\cup\dots\cup\mathcal{C}_{\l+1})-(X_1\cup\dots\cup X_{401k})$. Thus, all vertices of $\widetilde{H}_\text{red}=G-X_\text{red}-\mathcal{Z}_\text{red}$ with degree at most $k-1$ are in $V(H)$. \end{proof} \begin{claim}\label{tH-red-prop2} $V_{\leq k-1}(\widetilde{H}_\text{red})\subseteq V_{\leq k-1}(H)\setminus S$. \end{claim} \begin{proof} By Claim \ref{tH-red-prop1} we already know $V_{\leq k-1}(\widetilde{H}_\text{red})\subseteq V(H)$. Since $\widetilde{H}_\text{red}$ contains $H$ as an induced subgraph by Claim \ref{tH-red-prop0}, this implies $V_{\leq k-1}(\widetilde{H}_\text{red})\subseteq V_{\leq k-1}(H)$. So it remains to show that every vertex $s\in S\subseteq V_{\leq k-1}(H)$ has degree at least $k$ in $\widetilde{H}_\text{red}$. If $s$ is not adjacent to any member of $\mathcal{Z}_\text{red}$, then the degree of $s$ in $\widetilde{H}_\text{red}=G-X_\text{red}-\mathcal{Z}_\text{red}$ is equal to the degree of $s$ in $G-X_\text{red}$, which is at least $k$ by condition (iv) in Definition \ref{approcolour}. If $s\not\in S'$, then $s$ is not adjacent to any member of $\mathcal{C}_{\l+1}'$ and in particular not to any member of $\mathcal{Z}_\text{red}$, so we are done by the previous observation. So let us now assume that $s\in S'$ and that $s$ is adjacent to some member $D$ of $\mathcal{Z}_\text{red}$. If $\vert \mathcal{C}(s)\vert=k+1-\deg_H(s)$, then at least $k-\deg_H(s)$ members of $\mathcal{C}(s)$ are not red in $\psi$ (since at most one member of $\mathcal{C}(s)$ is red).
Since $\mathcal{C}(s)\subseteq \mathcal{C}_{\l+1}'$, all of its members are disjoint from $X_1\cup\dots\cup X_{401k}$ and in particular from $X_\text{red}$. Thus, all of the at least $k-\deg_H(s)$ non-red members of $\mathcal{C}(s)\subseteq \mathcal{C}_{\l+1}'$ are present in $\widetilde{H}_\text{red}=G-X_\text{red}-\mathcal{Z}_\text{red}$ and they are all neighbours of $s$. Furthermore, $s$ has $\deg_H(s)$ additional neighbours in $V(H)\subseteq V(\widetilde{H}_\text{red})$, so in total $s$ has degree at least $(k-\deg_H(s))+\deg_H(s)=k$ in $\widetilde{H}_\text{red}$. Thus, we can assume $\vert \mathcal{C}(s)\vert<k+1-\deg_H(s)$. By definition of $\mathcal{C}(s)$, this means that $\mathcal{C}(s)$ contains all the neighbours of $s$ in $\mathcal{C}_{\l+1}'$. Also recall our assumption that $s$ is adjacent to some member $D$ of $\mathcal{Z}_\text{red}\subseteq \mathcal{C}_{\l+1}'$. Then $D\in \mathcal{C}(s)$ and $D$ is coloured red in $\psi$. By the properties of $\psi$, the colour red is not on the list $L(s)$ and $D$ is the only red element in $\mathcal{C}(s)$. Thus, $D$ is the only neighbour of $s$ in $\mathcal{Z}_\text{red}$. If $\vert L(s)\vert=k$, then $s$ has neighbours in $G$ of $k$ different non-red colours. All these vertices lie in $\widetilde{H}_\text{red}=G-X_\text{red}-\mathcal{Z}_\text{red}$, since they are coloured with non-red colours in $\varphi$, whereas $X_\text{red}$ consists of red vertices and the members of $\mathcal{Z}_\text{red}$ are uncoloured in $\varphi$. Thus, $s$ has degree at least $k$ in $\widetilde{H}_\text{red}$. So we may assume $\vert L(s)\vert<k$. Then from the definition of $L(s)$ we obtain that $L(s)$ contains all colours of the neighbours of $s$ in $G$. Since red is not on the list $L(s)$, we can conclude that $s$ has no neighbours in $G$ inside the set $X_\text{red}$. Also, recall that $D$ is the only neighbour of $s$ in $\mathcal{Z}_\text{red}$. Hence the degree of $s$ in $\widetilde{H}_\text{red}=G-X_\text{red}-\mathcal{Z}_\text{red}$ is the same as in $G-D$, which is at least $k$ by Claim \ref{remove-good-set}. This finishes the proof of Claim \ref{tH-red-prop2}.
\end{proof} \begin{claim}\label{tH-red-prop4} If $v\in V_{\leq k-1}(\widetilde{H}_\text{red})$, then $v\in V_{k-1}(H)\setminus S$, $v$ has at least one neighbour in $\mathcal{C}_{\l+1}'$ and all neighbours of $v$ in $\mathcal{C}_{\l+1}'$ are red in $\psi$. \end{claim} \begin{proof} By Claim \ref{tH-red-prop2} we know that $v\in V_{\leq k-1}(H)\setminus S$. Since $V_{\leq k-2}(H)\subseteq S$, we have $V_{\leq k-1}(H)\setminus S=V_{k-1}(H)\setminus S$. Thus, $v\in V_{k-1}(H)\setminus S$. So $v$ has degree $k-1$ in $H$. Since $v$ by assumption has degree at most $k-1$ in $\widetilde{H}_\text{red}$ and $\widetilde{H}_\text{red}$ contains $H$ as an induced subgraph, we can conclude that $v$ also has degree $k-1$ in $\widetilde{H}_\text{red}$ and it does not have any neighbours in $V(\widetilde{H}_\text{red})\setminus V(H)$. Since $v\in \widetilde{H}_\text{red}=G-X_\text{red}-\mathcal{Z}_\text{red}$ we have $v\not\in X_\text{red}$ and by condition (iv) in Definition \ref{approcolour} we know that $v$ has degree at least $k$ in $G-X_\text{red}$. But since $v$ has degree $k-1$ in $\widetilde{H}_\text{red}=G-X_\text{red}-\mathcal{Z}_\text{red}$, this implies that $v$ has at least one neighbour in $\mathcal{Z}_\text{red}\subseteq \mathcal{C}_{\l+1}'$. Note that all members of $\mathcal{C}_{\l+1}'\setminus \mathcal{Z}_\text{red}$ are subsets of $V(\widetilde{H}_\text{red})\setminus V(H)$: Each $D\in \mathcal{C}_{\l+1}'\setminus \mathcal{Z}_\text{red}$ is disjoint from $X_\text{red}$ (because it is uncoloured in $\varphi$) and is therefore a subset of $V(\widetilde{H}_\text{red})$, but by (\ref{H-def2}) it is clearly disjoint from $V(H)$. By the argument above, $v$ does not have any neighbours in $V(\widetilde{H}_\text{red})\setminus V(H)$. Hence $v$ does not have any neighbours in $\mathcal{C}_{\l+1}'\setminus \mathcal{Z}_\text{red}$ and therefore all the neighbours of $v$ in $\mathcal{C}_{\l+1}'$ belong to $\mathcal{Z}_\text{red}$, i.e.\ they are red in $\psi$. 
\end{proof} \begin{claim}\label{tH-red-prop3} No vertex in $V(\widetilde{H}_\text{red})\setminus V(H)$ is adjacent to any member of $\mathcal{C}_H$. \end{claim} \begin{proof} Let $D\in \mathcal{C}_H\subseteq \mathcal{C}_{\l+2}\cup\dots\cup\mathcal{C}_{J}$. Then $D$ does not have any edges to members of $\mathcal{C}_1\cup\dots\cup\mathcal{C}_{\l+1}$. Furthermore, by (\ref{H-def1}) we know that $D\subseteq V(H)$ is disjoint from $X_1\cup\dots\cup X_{401k}$, which means that $D$ is uncoloured in $\varphi$. So by condition (vii) in Definition \ref{approcolour} all neighbours of $D$ in $G$ are also uncoloured in $\varphi$. Hence $D$ has no neighbours in $X_1\cup\dots\cup X_{401k}$. Thus, $$H=G-(\mathcal{C}_1\cup\dots\cup\mathcal{C}_{\l+1})-(X_1\cup\dots\cup X_{401k})$$ contains all neighbours of $D$ in $G$ and in particular all neighbours of $D$ in $V(\widetilde{H}_\text{red})$. Hence no vertex in $V(\widetilde{H}_\text{red})\setminus V(H)$ is adjacent to $D$. \end{proof} By Claim \ref{tH-red-prop0}, Claim \ref{tH-red-prop2} and Claim \ref{tH-red-prop3} the graph $\widetilde{H}_\text{red}$ satisfies all properties to act as $\widetilde{H}$ in Lemma \ref{mainlem}. So by the conclusion of the lemma, there is an induced subgraph $\widetilde{H}_\text{red}'$ of $\widetilde{H}_\text{red}$ with all the properties listed in Lemma \ref{mainlem}. Set \begin{equation}\label{Xstrich-red} X_\text{red}'=V(\widetilde{H}_\text{red})\setminus V(\widetilde{H}_\text{red}'). \end{equation} This is the set of vertices we want to colour red in addition to the red vertices in $\varphi$ and the red members of $\mathcal{C}_{\l+1}'$ in $\psi$. But first, we need to establish some properties of the set $X_\text{red}'$. By property (c) in Lemma \ref{mainlem} we have \begin{equation}\label{Xstrich-red2} X_\text{red}'=V(\widetilde{H}_\text{red})\setminus V(\widetilde{H}_\text{red}')\subseteq \bigcup_{v\in V_{\leq k-1}(\widetilde{H}_\text{red})}B_v\subseteq V(H).
\end{equation} \begin{claim}\label{Xstrich-red6} For each $D\in \mathcal{C}$ we either have $D\subseteq X_\text{red}'$ or $D\cap X_\text{red}'=\emptyset$. \end{claim} \begin{proof} By Claim \ref{H-propC} we have $D\subseteq V(H)$ or $D\cap V(H)=\emptyset$. In the latter case we have $D\cap X_\text{red}'=\emptyset$ by (\ref{Xstrich-red2}). In the former case we have $D\in \mathcal{C}_H$ and hence property (e) in Lemma \ref{mainlem} gives $D\subseteq V(\widetilde{H}_\text{red}')$ or $D\cap V(\widetilde{H}_\text{red}')=\emptyset$. Since $D\subseteq V(H)$ implies $D\subseteq V(\widetilde{H}_\text{red})$, we obtain $D\cap (V(\widetilde{H}_\text{red})\setminus V(\widetilde{H}_\text{red}'))=\emptyset$ or $D\subseteq (V(\widetilde{H}_\text{red})\setminus V(\widetilde{H}_\text{red}'))$. This proves the claim as $X_\text{red}'=V(\widetilde{H}_\text{red})\setminus V(\widetilde{H}_\text{red}')$. \end{proof} \begin{claim}\label{Xstrich-red5} $\overline{e}_{G-X_\text{red}-\mathcal{Z}_\text{red}}(X_\text{red}')\leq (k-1)\vert X_\text{red}'\vert$. \end{claim} \begin{proof} By property (b) in Lemma \ref{mainlem} we have $$(k-1)v(\widetilde{H}_\text{red}')-e(\widetilde{H}_\text{red}')\leq (k-1)v(\widetilde{H}_\text{red})-e(\widetilde{H}_\text{red}),$$ that is $$e(\widetilde{H}_\text{red})-e(\widetilde{H}_\text{red}')\leq (k-1)(v(\widetilde{H}_\text{red})-v(\widetilde{H}_\text{red}')).$$ So recalling $\widetilde{H}_\text{red}=G-X_\text{red}-\mathcal{Z}_\text{red}$ and $X_\text{red}'=V(\widetilde{H}_\text{red})\setminus V(\widetilde{H}_\text{red}')$ we obtain $$\overline{e}_{G-X_\text{red}-\mathcal{Z}_\text{red}}(X_\text{red}')=\overline{e}_{\widetilde{H}_\text{red}}(X_\text{red}')=e(\widetilde{H}_\text{red})-e(\widetilde{H}_\text{red}')\leq (k-1)(v(\widetilde{H}_\text{red})-v(\widetilde{H}_\text{red}'))=(k-1)\vert X_\text{red}'\vert,$$ which concludes the proof of the claim. 
\end{proof} \begin{claim}\label{Xstrich-red7} The graph $G-X_\text{red}-\mathcal{Z}_\text{red}-X_\text{red}'$ has minimum degree at least $k$. \end{claim} \begin{proof} By (\ref{def-tH-red}) and (\ref{Xstrich-red}) we have $$G-X_\text{red}-\mathcal{Z}_\text{red}-X_\text{red}'=\widetilde{H}_\text{red}-X_\text{red}'=\widetilde{H}_\text{red}'$$ and by property (a) in Lemma \ref{mainlem} this is a (non-empty) graph with minimum degree at least $k$. \end{proof} \begin{claim}\label{Xstrich-red8} We have $$X_\text{red}'\subseteq \bigcup_{v}B_v,$$ where the union is taken over all $v\in V_{k-1}(H)\setminus S$ that have at least one neighbour in $\mathcal{C}_{\l+1}'$ and for which all neighbours of $v$ in $\mathcal{C}_{\l+1}'$ are red in $\psi$. \end{claim} \begin{proof} This is a direct consequence of (\ref{Xstrich-red2}) and Claim \ref{tH-red-prop4}. \end{proof} \begin{claim}\label{Xstrich-red9} The set $X_\text{red}'$ is disjoint from all members of $\mathcal{C}_1\cup\dots\cup\mathcal{C}_{J'}$. \end{claim} \begin{proof} By (\ref{Xstrich-red2}) we have $X_\text{red}'\subseteq V(H)$, but by (\ref{H-def2}) all members of $\mathcal{C}_1\cup\dots\cup\mathcal{C}_{J'}$ are disjoint from $V(H)$. \end{proof} \begin{claim}\label{Xstrich-red10} If $D\in \mathcal{C}_H$ and $D\cap X_\text{red}'=\emptyset$, then $D$ has no neighbours in $X_\text{red}'$. \end{claim} \begin{proof} From $D\in \mathcal{C}_H$ we know $D\subseteq V(H)\subseteq V(\widetilde{H}_\text{red})$ and because of $D\cap X_\text{red}'=\emptyset$ and $X_\text{red}'=V(\widetilde{H}_\text{red})\setminus V(\widetilde{H}_\text{red}')$, this implies $D\subseteq V(\widetilde{H}_\text{red}')$. Then by property (f) in Lemma \ref{mainlem} the set $D$ has no neighbours in $V(\widetilde{H}_\text{red})\setminus V(\widetilde{H}_\text{red}')=X_\text{red}'$. \end{proof} Finally, let us define the desired $(\l+1)$-appropriate colouring of $G$.
All the above considerations hold when red is any of the $401k$ colours (in the arguments above it is just called `red' instead of `colour $i$' to make notation less confusing, since there are already number indices for the sets $\mathcal{C}_j$). So for each $i=1,\dots, 401k$ we can take red to be colour $i$ and apply the above arguments. Then for each $i=1,\dots, 401k$ we get a set $X_i'\subseteq V(H)$ with all the above properties (where we replace each index `red' by $i$ and each word `red' by `colour $i$'). We define a colouring $\rho$ of $G$ as follows: Start with the colouring $\varphi$. Now colour all non-popular members of $\mathcal{C}_{\l+1}'$ according to the colouring $\psi$ (recall that all members of $\mathcal{C}_{\l+1}'$ are uncoloured in $\varphi$). Finally, for each $i=1,\dots, 401k$ colour all vertices in the set $X_i'$ with colour $i$. Let us now check that the colouring $\rho$ is $(\l+1)$-appropriate: \begin{itemize} \item[(i)] We need to check that each vertex has at most one colour in the colouring $\rho$. We know (since the colouring $\varphi$ is $\l$-appropriate) that every vertex has at most one colour in the colouring $\varphi$. When colouring the members of $\mathcal{C}_{\l+1}'$ according to the colouring $\psi$, we only colour vertices that are uncoloured in $\varphi$ (because all members of $\mathcal{C}_{\l+1}'$ are by definition uncoloured in $\varphi$), so after applying $\psi$ still every vertex has at most one colour. Recall that $X_i'\subseteq V(H)$ by (\ref{Xstrich-red2}) and that all vertices of $H$ are uncoloured in $\varphi$ by (\ref{H-def1}). Furthermore, $\psi$ only colours members of $\mathcal{C}_{\l+1}'$, and by (\ref{H-def2}) these are all disjoint from $V(H)$. Hence the sets $X_i'$ only consist of vertices that have not been coloured yet by $\varphi$ and $\psi$. Thus, it remains to check that the sets $X_i'$ for $i=1,\dots,401k$ are disjoint.
For each $i=1,\dots,401k$ we have by Claim \ref{Xstrich-red8} \begin{equation}\label{Xstrich-i-1} X_i'\subseteq \bigcup_{v}B_v, \end{equation} where the union is taken over all $v\in V_{k-1}(H)\setminus S$ that have at least one neighbour in $\mathcal{C}_{\l+1}'$ and for which all neighbours of $v$ in $\mathcal{C}_{\l+1}'$ have colour $i$ in $\psi$. Note that for each vertex $v\in V_{k-1}(H)\setminus S$ with at least one neighbour in $\mathcal{C}_{\l+1}'$, there is at most one colour $i$ such that all neighbours of $v$ in $\mathcal{C}_{\l+1}'$ have colour $i$ in $\psi$. Hence for each vertex $v\in V_{k-1}(H)\setminus S$ the corresponding set $B_v$ appears in (\ref{Xstrich-i-1}) for at most one $i$. Since the sets $B_v$ for $v\in V_{k-1}(H)\setminus S$ are disjoint, the right-hand sides of (\ref{Xstrich-i-1}) for $i=1,\dots,401k$ are disjoint. Hence the sets $X_i'$ for $i=1,\dots,401k$ are disjoint. \item[(ii)] We need to show that each $D\in \mathcal{C}$ is either monochromatic or completely uncoloured in $\rho$. We already know that this is true in $\varphi$ (since $\varphi$ is $\l$-appropriate), and $\psi$ only colours entire members of $\mathcal{C}_{\l+1}'$ (recall that all members of $\mathcal{C}$ are disjoint). So it suffices to show that for each $D\in \mathcal{C}$ and each $i=1,\dots,401k$ we have $D\cap X_i'=\emptyset$ or $D\subseteq X_i'$. This is true by Claim \ref{Xstrich-red6}. \item[(iii)] For $i=1,\dots,401k$, the set of vertices having colour $i$ in $\rho$ is $$X_i\cup X_i'\cup\bigcup_{D\in \mathcal{Z}_i}D,$$ and recall that this is a disjoint union. Let $y_i^{(\l+1)}$ be the number of members of $\mathcal{C}_1\cup\dots\cup\mathcal{C}_{\l+1}$ that are coloured in colour $i$ in $\rho$. This includes the $y_i^{(\l)}$ members of $\mathcal{C}_1\cup\dots\cup\mathcal{C}_\l$ with colour $i$ in $\varphi$ and the members of $\mathcal{Z}_i\subseteq \mathcal{C}_{\l+1}'\subseteq \mathcal{C}_{\l+1}$.
Thus, \begin{equation}\label{Xstrich-i-2} y_i^{(\l+1)}\geq y_i^{(\l)}+\vert \mathcal{Z}_i\vert. \end{equation} Now, for the set $X_i\cup X_i'\cup\bigcup_{D\in \mathcal{Z}_i}D$ of vertices having colour $i$ in $\rho$ we obtain \begin{multline*} \overline{e}_G\left(X_i\cup X_i'\cup\bigcup_{D\in \mathcal{Z}_i}D\right)=\overline{e}_G\left(X_i\cup\bigcup_{D\in \mathcal{Z}_i}D\right)+\overline{e}_{G-X_i-\mathcal{Z}_i}(X_i')\\ \leq \overline{e}_G(X_i)+\sum_{D\in \mathcal{Z}_i} \overline{e}_G(D)+\overline{e}_{G-X_i-\mathcal{Z}_i}(X_i'). \end{multline*} We know that $\overline{e}_G(X_i)\leq (k-1)\vert X_i\vert+y_i^{(\l)}$ (as $\varphi$ is an $\l$-appropriate colouring), $\overline{e}_G(D)\leq (k-1)\vert D\vert+1$ for each $D\in \mathcal{Z}_i\subseteq \mathcal{C}$ by Corollary \ref{coro-edges-good-set} and $\overline{e}_{G-X_i-\mathcal{Z}_i}(X_i')\leq (k-1)\vert X_i'\vert$ by Claim \ref{Xstrich-red5}. Plugging all of that in, we obtain \begin{multline*} \overline{e}_G\left(X_i\cup X_i'\cup\bigcup_{D\in \mathcal{Z}_i}D\right)\leq (k-1)\vert X_i\vert+y_i^{(\l)}+\sum_{D\in \mathcal{Z}_i} ((k-1)\vert D\vert+1)+(k-1)\vert X_i'\vert\\ =(k-1)\left(\vert X_i\vert+\vert X_i'\vert+\sum_{D\in \mathcal{Z}_i}\vert D\vert\right)+y_i^{(\l)}+\vert \mathcal{Z}_i\vert\leq (k-1)\left\vert X_i\cup X_i'\cup\bigcup_{D\in \mathcal{Z}_i}D\right\vert+y_i^{(\l+1)}, \end{multline*} where the last inequality follows from (\ref{Xstrich-i-2}). \item[(iv)] For each $i=1,\dots,401k$ the graph $G-X_i-\mathcal{Z}_i-X_i'$ has minimum degree at least $k$ by Claim \ref{Xstrich-red7}. \item[(v)] The members of $\mathcal{C}_1\cup\dots\cup\mathcal{C}_{J'}$ are all uncoloured in $\varphi$, since $\varphi$ is $\l$-appropriate. Furthermore, they do not get coloured by $\psi$, since $\psi$ only colours members of $\mathcal{C}_{\l+1}'\subseteq \mathcal{C}_{\l+1}$. By Claim \ref{Xstrich-red9} the members of $\mathcal{C}_1\cup\dots\cup\mathcal{C}_{J'}$ are also disjoint from all $X_i'$. 
Thus, the members of $\mathcal{C}_1\cup\dots\cup\mathcal{C}_{J'}$ are uncoloured in $\rho$. \item[(vi)] For every $J'+1\leq j\leq \l$ the number of members of $\mathcal{C}_j$ that are uncoloured in $\rho$ is at most the number of members of $\mathcal{C}_j$ that are uncoloured in $\varphi$ (actually, one can check that these two numbers are equal, but this is not necessary for the argument). This latter number is at most $\frac{1}{4}\vert \mathcal{C}_j\vert$, since $\varphi$ is $\l$-appropriate. Hence in $\rho$ the number of uncoloured members of $\mathcal{C}_j$ is at most $\frac{1}{4}\vert \mathcal{C}_j\vert$ for each $J'+1\leq j\leq \l$. For $j=\l+1$ note that by definition of $\mathcal{C}_{\l+1}'$ all members of $\mathcal{C}_{\l+1}\setminus \mathcal{C}_{\l+1}'$ are coloured in $\varphi$ and hence also in $\rho$. Furthermore at least $\vert\mathcal{C}_{\l+1}'\vert-\frac{1}{4}\vert\mathcal{C}_{\l+1}\vert$ members of $\mathcal{C}_{\l+1}'\subseteq \mathcal{C}_{\l+1}$ are coloured in $\psi$ and hence also in $\rho$. So all in all at least $$\vert \mathcal{C}_{\l+1}\setminus \mathcal{C}_{\l+1}'\vert+\vert\mathcal{C}_{\l+1}'\vert-\frac{1}{4}\vert\mathcal{C}_{\l+1}\vert=(\vert\mathcal{C}_{\l+1}\vert-\vert\mathcal{C}_{\l+1}'\vert)+(\vert\mathcal{C}_{\l+1}'\vert-\frac{1}{4}\vert\mathcal{C}_{\l+1}\vert)=\frac{3}{4}\vert\mathcal{C}_{\l+1}\vert$$ members of $\mathcal{C}_{\l+1}$ are coloured in $\rho$. So the number of members of $\mathcal{C}_{\l+1}$ that are uncoloured in $\rho$ is at most $\frac{1}{4}\vert \mathcal{C}_{\l+1}\vert$. \item[(vii)] Let $D\in \mathcal{C}_{\l+2}\cup\dots\cup\mathcal{C}_{J}$ be uncoloured in $\rho$. We have to show that all of its neighbours $v\in V(G)$ are also uncoloured in $\rho$. First, note that $D$ is disjoint from all members of $\mathcal{C}_{1}\cup\dots\cup\mathcal{C}_{\l+1}$. Since $D$ is uncoloured in $\rho$, it is also uncoloured in $\varphi$ and hence disjoint from $X_1\cup\dots\cup X_{401k}$. 
By (\ref{H-def1}) we can conclude that $D\subseteq V(H)$, i.e.\ $D\in \mathcal{C}_H$. Since $D$ is uncoloured in $\rho$, it is disjoint from $X_1',\dots ,X_{401k}'$. Now Claim \ref{Xstrich-red10} implies that $D$ has no neighbours in $X_i'$ for any $i=1,\dots,401k$. Trivially $D\in \mathcal{C}_{\l+1}\cup\dots\cup\mathcal{C}_{J}$ and since $\varphi$ is $\l$-appropriate, all neighbours of $D$ are uncoloured in $\varphi$. When applying the colouring $\psi$, we only colour members of $\mathcal{C}_{\l+1}'$, but $D\in \mathcal{C}_{\l+2}\cup\dots\cup\mathcal{C}_{J}$ has no neighbours in any member of $\mathcal{C}_{\l+1}'$. Hence all neighbours of $D$ are still uncoloured after applying $\psi$. Since $D$ has no neighbours in $X_i'$ for any $i=1,\dots,401k$, we can conclude that all neighbours of $D$ are uncoloured in $\rho$. \end{itemize} Hence the colouring $\rho$ is indeed $(\l+1)$-appropriate. This finishes the induction step and the proof of Lemma \ref{colour}. \begin{remark} The reader might notice that we do not use property (d) from Lemma \ref{mainlem} for the proof of Lemma \ref{colour}. However, we prove Lemma \ref{mainlem} in Section \ref{sect5} by induction and we need property (d) in order to keep the induction going. \end{remark} \section{Preparations for the proof of Lemma \ref{mainlem}} \label{sect4} In this section, let $H$ be a graph and $\mathcal{C}_H$ be a collection of disjoint non-empty subsets of $V(H)$ such that for each $D\in\mathcal{C}_H$ we have $\overline{e}_H(D)\leq (k-1)\vert D\vert+1$ and $\deg_H(v)\geq k$ for each $v\in D$. Note that this means that all $D\in\mathcal{C}_H$ are disjoint from $V_{\leq k-1}(H)$. The goal of this section is to introduce the shadow $\operatorname{sh}_H(w)$ of a vertex $w\in V_{\leq k-1}(H)$ and establish several useful properties of it. These play an important role in the proof of Lemma \ref{mainlem} in Section \ref{sect5}.
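The shadow introduced below admits a direct algorithmic description, made precise in Definition \ref{defi-shadow} and Procedure \ref{proc-shadow}. As a preview, here is a minimal Python sketch on a hypothetical toy graph (the graph, the set $\mathcal{C}_H$ and the encoding are illustrative assumptions, not taken from the paper): starting from $Y=\lbrace w\rbrace$, it repeatedly adds any vertex $v\notin Y$ outside all $D\in\mathcal{C}_H$ that neighbours $Y$ and has $\deg_{H-Y}(v)\leq k-1$, or any whole set $D\in\mathcal{C}_H$ that neighbours $Y$. It also tracks the quantity $(k-1)\vert Y\vert-\overline{e}_H(Y)$, whose monotonicity is the content of Lemma \ref{proc-prop5}.

```python
def excess(adj, Y, k):
    # (k-1)|Y| - e_H(Y), where e_H(Y) counts edges of H meeting Y
    edges_meeting_Y = {frozenset((u, v)) for u in Y for v in adj[u]}
    return (k - 1) * len(Y) - len(edges_meeting_Y)

def shadow(adj, C_H, w, k):
    # adj: symmetric dict v -> set of neighbours; C_H: disjoint frozensets
    in_some_D = {v for D in C_H for v in D}
    Y = {w}
    history = [excess(adj, Y, k)]   # track (k-1)|Y| - e_H(Y) along the run
    changed = True
    while changed:
        changed = False
        for v in sorted(set(adj) - Y):
            # vertex step: v outside all D, neighbouring Y, deg_{H-Y}(v) <= k-1
            if v not in in_some_D and adj[v] & Y and len(adj[v] - Y) <= k - 1:
                Y.add(v); history.append(excess(adj, Y, k)); changed = True
        for D in C_H:
            # set step: a whole D in C_H that neighbours Y is absorbed at once
            if not (set(D) & Y) and any(adj[u] & Y for u in D):
                Y |= set(D); history.append(excess(adj, Y, k)); changed = True
    return Y, history

# Toy graph with k = 2: a path w-a-b, a set D = {x, y} hanging off b via x,
# and a pendant vertex z on y. Here deg_H(w) = 1 = k-1, both vertices of D
# have degree k, and e_H(D) = 3 = (k-1)|D| + 1, matching the assumptions
# stated at the start of this section.
adj = {'w': {'a'}, 'a': {'w', 'b'}, 'b': {'a', 'x'},
       'x': {'b', 'y'}, 'y': {'x', 'z'}, 'z': {'y'}}
Y, history = shadow(adj, [frozenset({'x', 'y'})], 'w', 2)
```

On this toy instance the procedure absorbs the whole graph, and the tracked quantity starts non-negative and never decreases, in line with Lemma \ref{proc-prop5} and Corollary \ref{shadow-prop1}.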
\begin{defi}\label{defi-shadow}For a vertex $w\in V_{\leq k-1}(H)$ define the shadow $\operatorname{sh}_H(w)$ of $w$ in $H$ as the minimal subset $Y\subseteq V(H)$ with the following four properties: \begin{itemize} \item[(I)] $w\in Y$. \item[(II)] For each $D\in \mathcal{C}_H$ either $D\subseteq Y$ or $D\cap Y=\emptyset$. \item[(III)] If $v\in V(H)\setminus Y$ is adjacent to a vertex in $Y$, then $\deg_{H-Y}(v)\geq k$. \item[(IV)] If $D\in \mathcal{C}_H$ is adjacent to a vertex in $Y$, then $D\subseteq Y$. \end{itemize} \end{defi} First, let us check that there is indeed a unique minimal set with the properties (I) to (IV). Note that $Y=V(H)$ satisfies (I) to (IV). So it suffices to show that if $Y_1$ and $Y_2$ both satisfy these properties, then $Y_1\cap Y_2$ does as well: \begin{itemize} \item[(I)] Clearly $w\in Y_1\cap Y_2$. \item[(II)] If $D\cap Y_i=\emptyset$ for $i=1$ or $i=2$, then $D\cap (Y_1\cap Y_2)=\emptyset$. Otherwise $D\subseteq Y_1$ and $D\subseteq Y_2$ and hence $D\subseteq Y_1\cap Y_2$. \item[(III)] Let $v\not\in Y_1\cap Y_2$ be adjacent to a vertex in $Y_1\cap Y_2$. Then $v\not\in Y_i$ for $i=1$ or for $i=2$. Let us assume without loss of generality that $v\not\in Y_1$. Since $v$ is adjacent to a vertex in $Y_1$, we have $\deg_{H-Y_1}(v)\geq k$ and hence $$\deg_{H-(Y_1\cap Y_2)}(v)\geq \deg_{H-Y_1}(v)\geq k.$$ \item[(IV)] If $D\in \mathcal{C}_H$ is adjacent to a vertex in $Y_1\cap Y_2$, then it is both adjacent to a vertex in $Y_1$ and to a vertex in $Y_2$. Hence $D\subseteq Y_1$ and $D\subseteq Y_2$, so $D\subseteq Y_1\cap Y_2$. \end{itemize} So there is indeed a unique minimal set $Y\subseteq V(H)$ with the properties (I) to (IV) and Definition \ref{defi-shadow} makes sense. We now describe a procedure to determine the shadow $\operatorname{sh}_H(w)$ of a vertex $w\in V_{\leq k-1}(H)$. \begin{proc}\label{proc-shadow}For a vertex $w\in V_{\leq k-1}(H)$ consider the following algorithm during which $Y$ is always a subset of $V(H)$. 
In the beginning we set $Y=\lbrace w\rbrace$. As long as possible, we perform steps of the following form (if we have multiple options, we may choose either of the available options): \begin{itemize} \item If there is a vertex $v\not\in Y$ with $v\not\in D$ for all $D\in \mathcal{C}_H$, such that $v$ is adjacent to a vertex in $Y$ and $\deg_{H-Y}(v)\leq k-1$, then we are allowed to add $v$ to $Y$. \item If there is a $D\in \mathcal{C}_H$ with $D\cap Y=\emptyset$, such that $D$ is adjacent to a vertex in $Y$, then we are allowed to add all of $D$ to $Y$. \end{itemize} We terminate when we cannot perform any of these two steps. \end{proc} It is clear that this procedure must eventually terminate, because $Y\subseteq V(H)$ becomes larger in every step and $V(H)$ is a finite set. Next, we show that when the procedure terminates, we always have $Y=\operatorname{sh}_H(w)$, independently of the choices we made during the procedure (in case we had multiple allowed steps to choose from). \begin{claim}\label{proc-prop1} During Procedure \ref{proc-shadow} we always have $Y\subseteq \operatorname{sh}_H(w)$. \end{claim} \begin{proof} In the beginning we have $Y=\lbrace w\rbrace$ and $\lbrace w\rbrace\subseteq \operatorname{sh}_H(w)$ by property (I). It remains to show that if $Y\subseteq \operatorname{sh}_H(w)$ and we perform one of the operations in Procedure \ref{proc-shadow}, then the resulting set is still a subset of $\operatorname{sh}_H(w)$. \begin{itemize} \item Let $v\not\in Y$ be a vertex that is adjacent to a vertex in $Y\subseteq \operatorname{sh}_H(w)$ and $\deg_{H-Y}(v)\leq k-1$. If $v\not\in \operatorname{sh}_H(w)$, then $v$ is adjacent to a vertex in $\operatorname{sh}_H(w)$ and $$\deg_{H-\operatorname{sh}_H(w)}(v)\leq \deg_{H-Y}(v)\leq k-1.$$ This would be a contradiction to $\operatorname{sh}_H(w)$ having property (III). Hence we must have $v\in \operatorname{sh}_H(w)$ and hence $Y\cup \lbrace v\rbrace\subseteq \operatorname{sh}_H(w)$. 
\item Let $D\in \mathcal{C}_H$ with $D\cap Y=\emptyset$ be adjacent to a vertex in $Y\subseteq \operatorname{sh}_H(w)$, then $D$ is also adjacent to a vertex in $\operatorname{sh}_H(w)$ and by property (IV) of $\operatorname{sh}_H(w)$ we can conclude that $D\subseteq \operatorname{sh}_H(w)$. Hence $Y\cup D\subseteq \operatorname{sh}_H(w)$. \end{itemize} This finishes the proof of the claim.\end{proof} \begin{claim}\label{proc-prop2} During Procedure \ref{proc-shadow} the set $Y$ always satisfies the properties (I) and (II) in Definition \ref{defi-shadow}. \end{claim} \begin{proof}For property (I), this is clear, as we start with the set $Y=\lbrace w\rbrace$ and never remove vertices from $Y$. For property (II), note that $w\in V_{\leq k-1}(H)$ implies $w\not\in D$ for all $D\in \mathcal{C}_H$. During the procedure we only add complete sets $D\in \mathcal{C}_H$ (recall that these sets are all disjoint) or vertices that are not contained in any $D\in \mathcal{C}_H$. Thus, $Y$ always satisfies property (II) as well.\end{proof} \begin{claim}\label{proc-prop3} When Procedure \ref{proc-shadow} terminates, the set $Y$ satisfies the properties (III) and (IV) in Definition \ref{defi-shadow}. \end{claim} \begin{proof} Let us start with proving property (IV). Let $D\in \mathcal{C}_H$ be adjacent to a vertex in $Y$; we need to show that $D\subseteq Y$. By Claim \ref{proc-prop2} the set $Y$ satisfies property (II) and therefore $D\subseteq Y$ or $D\cap Y=\emptyset$. If $D\cap Y=\emptyset$, then we could perform the second step in Procedure \ref{proc-shadow} and add $D$ to $Y$. This is a contradiction to the assumption that the procedure already terminated, hence $D\subseteq Y$ as desired. Now we prove property (III). Let $v\in V(H)\setminus Y$ be a vertex that is adjacent to a vertex in $Y$. We have to prove $\deg_{H-Y}(v)\geq k$. Suppose the contrary, i.e.\ $\deg_{H-Y}(v)\leq k-1$. If $v\in D$ for some $D\in \mathcal{C}_H$, then $D$ is adjacent to a vertex in $Y$ as well.
But then by property (IV), which we already proved, we would have $D\subseteq Y$ and in particular $v\in Y$, a contradiction. Hence $v\not\in D$ for all $D\in \mathcal{C}_H$. But then we can perform the first step in Procedure \ref{proc-shadow} and add $v$ to $Y$. This is again a contradiction to the assumption that the procedure already terminated, hence $\deg_{H-Y}(v)\geq k$. \end{proof} \begin{claim}\label{proc-prop4} When Procedure \ref{proc-shadow} terminates, we have $Y=\operatorname{sh}_H(w)$. \end{claim} \begin{proof} By Claim \ref{proc-prop1} we have $Y\subseteq \operatorname{sh}_H(w)$. On the other hand, by Claim \ref{proc-prop2} and Claim \ref{proc-prop3} the set $Y$ satisfies the properties (I) to (IV) in Definition \ref{defi-shadow}. Since $\operatorname{sh}_H(w)$ is the unique minimal set satisfying these properties, we have $\operatorname{sh}_H(w)\subseteq Y$. All in all this proves $Y=\operatorname{sh}_H(w)$. \end{proof} So Procedure \ref{proc-shadow} indeed determines the shadow $\operatorname{sh}_H(w)$ of $w\in V_{\leq k-1}(H)$ in $H$. We can use this to establish the following important properties of the shadow $\operatorname{sh}_H(w)$, which are used in the proof of Lemma \ref{mainlem} in Section \ref{sect5}. \begin{lem}\label{proc-prop5} Let $w\in V_{\leq k-1}(H)$. Then during Procedure \ref{proc-shadow} the quantity $(k-1)\vert Y\vert-\overline{e}_H(Y)$ is monotone increasing. In the beginning for $Y=\lbrace w\rbrace$ the quantity is non-negative. \end{lem} \begin{proof} In the beginning for $Y=\lbrace w\rbrace$ we have $$(k-1)\vert \lbrace w\rbrace\vert-\overline{e}_H(\lbrace w\rbrace)=(k-1)-\deg_H(w)\geq 0.$$ Now let us prove that the quantity is monotone increasing: \begin{itemize} \item Let $v\not\in Y$ be a vertex with $\deg_{H-Y}(v)\leq k-1$.
Then $$\overline{e}_H(Y\cup \lbrace v\rbrace)=\overline{e}_H(Y)+\overline{e}_{H-Y}(\lbrace v\rbrace)=\overline{e}_H(Y)+\deg_{H-Y}(v)\leq \overline{e}_H(Y)+(k-1)$$ and therefore $$(k-1)\vert Y\cup \lbrace v\rbrace\vert-\overline{e}_H(Y\cup \lbrace v\rbrace)\geq (k-1)(\vert Y\vert+1)-(\overline{e}_H(Y)+(k-1))=(k-1)\vert Y\vert-\overline{e}_H(Y).$$ \item Let $D\in \mathcal{C}_H$ be adjacent to a vertex in $Y$ and $D\cap Y=\emptyset$. Then $\overline{e}_{H-Y}(D)\leq \overline{e}_{H}(D)-1$, because at least one of the edges of $H$ that are incident with a vertex in $D$ leads to a vertex in $Y$ and does therefore not exist in the graph $H-Y$. Furthermore recall that $\overline{e}_H(D)\leq (k-1)\vert D\vert+1$ by the assumptions in the beginning of Section \ref{sect4}. Together this gives $\overline{e}_{H-Y}(D)\leq (k-1)\vert D\vert$. Now $$\overline{e}_H(Y\cup D)=\overline{e}_H(Y)+\overline{e}_{H-Y}(D)\leq \overline{e}_H(Y)+(k-1)\vert D\vert$$ and therefore $$(k-1)\vert Y\cup D\vert-\overline{e}_H(Y\cup D)\geq (k-1)(\vert Y\vert+\vert D\vert)-(\overline{e}_H(Y)+(k-1)\vert D\vert)=(k-1)\vert Y\vert-\overline{e}_H(Y).$$ \end{itemize} \end{proof} \begin{coro}\label{shadow-prop1} For every $w\in V_{\leq k-1}(H)$ we have $\overline{e}_H(\operatorname{sh}_H(w))\leq (k-1)\vert \operatorname{sh}_H(w)\vert$. \end{coro} \begin{proof}This is an immediate corollary of Claim \ref{proc-prop4} and Lemma \ref{proc-prop5}. \end{proof} \begin{coro}\label{shadow-prop2} Let $w\in V_{\leq k-1}(H)$ with $\overline{e}_H(\operatorname{sh}_H(w))= (k-1)\vert \operatorname{sh}_H(w)\vert$. Then in each step during the Procedure \ref{proc-shadow} we have $\overline{e}_H(Y)=(k-1)\vert Y\vert$, i.e.\ the quantity $(k-1)\vert Y\vert-\overline{e}_H(Y)$ is constantly zero throughout the procedure. \end{coro} \begin{proof}This is also an immediate corollary of Claim \ref{proc-prop4} and Lemma \ref{proc-prop5}. 
\end{proof} \begin{lem}\label{shadow-prop3} For every $w\in V_{\leq k-1}(H)$ we have $$\sum_{\substack{s\in \operatorname{sh}_H(w)\\ \deg_H(s)\leq k-1}}(k-\deg_H(s))\leq (k-1)\vert \operatorname{sh}_H(w)\vert-\overline{e}_H(\operatorname{sh}_H(w))+1.$$ \end{lem} \begin{proof}Let us consider Procedure \ref{proc-shadow} for determining $\operatorname{sh}_H(w)$. If we have multiple options, let us fix some specific choices, so that we obtain some fixed procedure starting with $Y=\lbrace w\rbrace$ and arriving at $Y=\operatorname{sh}_H(w)$. Let us consider the quantity $(k-1)\vert Y\vert-\overline{e}_H(Y)$, which is by Lemma \ref{proc-prop5} monotone increasing throughout the procedure. Consider any $s\in \operatorname{sh}_H(w)$ with $\deg_H(s)\leq k-1$ and $s\neq w$. Since every $D\in \mathcal{C}_H$ is disjoint from $V_{\leq k-1}(H)$ and $s\in V_{\leq k-1}(H)$, the vertex $s$ is not contained in any $D\in \mathcal{C}_H$. So in order to become part of the set $Y$, there must be a step in the procedure where we add precisely the vertex $s$. Let $Y_s$ be the set $Y$ just before this step, then after the step the set $Y$ becomes $Y_s\cup \lbrace s\rbrace$. Note that in order to be allowed to perform this step, the vertex $s$ must be adjacent to some vertex in $Y_s$. Hence $\deg_{H-Y_s}(s)\leq \deg_{H}(s)-1$. Thus, $$\overline{e}_H(Y_s\cup \lbrace s\rbrace)=\overline{e}_H(Y_s)+\overline{e}_{H-Y_s}(\lbrace s\rbrace)=\overline{e}_H(Y_s)+\deg_{H-Y_s}(s)\leq \overline{e}_H(Y_s)+(\deg_{H}(s)-1)$$ and therefore \begin{multline*} (k-1)\vert Y_s\cup \lbrace s\rbrace\vert-\overline{e}_H(Y_s\cup \lbrace s\rbrace)\geq (k-1)(\vert Y_s\vert+1)-(\overline{e}_H(Y_s)+\deg_{H}(s)-1)\\ =((k-1)\vert Y_s\vert-\overline{e}_H(Y_s))+k-\deg_{H}(s). \end{multline*} Hence the quantity $(k-1)\vert Y\vert-\overline{e}_H(Y)$ increases by at least $k-\deg_{H}(s)$ from $Y=Y_s$ to $Y=Y_s\cup \lbrace s\rbrace$. 
Applying this argument to all $s\in \operatorname{sh}_H(w)$ with $\deg_H(s)\leq k-1$ and $s\neq w$, we can conclude that during the procedure the quantity $(k-1)\vert Y\vert-\overline{e}_H(Y)$ increases by at least $$\sum_{\substack{s\in \operatorname{sh}_H(w)\\ \deg_H(s)\leq k-1\\s\neq w}}(k-\deg_H(s)).$$ In the beginning for $Y=\lbrace w\rbrace$ the quantity $(k-1)\vert Y\vert-\overline{e}_H(Y)$ equals $$(k-1)\vert \lbrace w\rbrace\vert-\overline{e}_H(\lbrace w\rbrace)=(k-1)-\deg_H(w).$$ Hence when the procedure terminates the quantity $(k-1)\vert Y\vert-\overline{e}_H(Y)$ must be at least $$(k-1)-\deg_H(w)+\sum_{\substack{s\in \operatorname{sh}_H(w)\\ \deg_H(s)\leq k-1\\s\neq w}}(k-\deg_H(s))=\sum_{\substack{s\in \operatorname{sh}_H(w)\\ \deg_H(s)\leq k-1}}(k-\deg_H(s))-1.$$ But by Claim \ref{proc-prop4} at the termination point we have $Y=\operatorname{sh}_H(w)$. Thus, $$(k-1)\vert\operatorname{sh}_H(w)\vert-\overline{e}_H(\operatorname{sh}_H(w))\geq \sum_{\substack{s\in \operatorname{sh}_H(w)\\ \deg_H(s)\leq k-1}}(k-\deg_H(s))-1,$$ which proves the lemma. \end{proof} \begin{coro}\label{shadow-prop4} Let $w\in V_{\leq k-1}(H)$ with $\overline{e}_H(\operatorname{sh}_H(w))= (k-1)\vert \operatorname{sh}_H(w)\vert$. Then $w$ is the only vertex $v\in \operatorname{sh}_H(w)$ with $\deg_H(v)\leq k-1$ and furthermore $\deg_H(w)= k-1$. \end{coro} \begin{proof}The right-hand side of the inequality in Lemma \ref{shadow-prop3} is 1, while all summands on the left-hand side are positive and one of them is the summand $k-\deg_H(w)\geq 1$. This immediately implies the statement of this corollary. \end{proof} \begin{coro}\label{shadow-prop5} Let $w\in V_{\leq k-1}(H)$ with $\overline{e}_H(\operatorname{sh}_H(w))<(k-1)\vert \operatorname{sh}_H(w)\vert$.
Then $$\sum_{\substack{s\in \operatorname{sh}_H(w)\\ \deg_H(s)\leq k-1}}(k-\deg_H(s))\leq 2((k-1)\vert \operatorname{sh}_H(w)\vert-\overline{e}_H(\operatorname{sh}_H(w))).$$ \end{coro} \begin{proof}We have $(k-1)\vert \operatorname{sh}_H(w)\vert-\overline{e}_H(\operatorname{sh}_H(w))\geq 1$ and hence by Lemma \ref{shadow-prop3} $$\sum_{\substack{s\in \operatorname{sh}_H(w)\\ \deg_H(s)\leq k-1}}(k-\deg_H(s))\leq (k-1)\vert \operatorname{sh}_H(w)\vert-\overline{e}_H(\operatorname{sh}_H(w))+1\leq 2((k-1)\vert \operatorname{sh}_H(w)\vert-\overline{e}_H(\operatorname{sh}_H(w))).$$ \end{proof} \begin{claim}\label{shadow-prop8} Let $w\in V_{\leq k-1}(H)$ with $\operatorname{sh}_H(w)\neq V(H)$. Then $$V_{\leq k-1}(H-\operatorname{sh}_H(w))\subseteq V_{\leq k-1}(H)\setminus \lbrace w\rbrace.$$ \end{claim} \begin{proof}From $\operatorname{sh}_H(w)\neq V(H)$ it is clear that $H-\operatorname{sh}_H(w)$ is a non-empty graph. Consider any vertex $v\in V_{\leq k-1}(H-\operatorname{sh}_H(w))$. If $v$ is adjacent to a vertex in $\operatorname{sh}_H(w)$, we have $\deg_{H-\operatorname{sh}_H(w)}(v)\geq k$ by property (III) in Definition \ref{defi-shadow}, a contradiction. Hence $v$ is not adjacent to a vertex in $\operatorname{sh}_H(w)$ and we have $\deg_{H}(v)=\deg_{H-\operatorname{sh}_H(w)}(v)\leq k-1$. Thus, $v\in V_{\leq k-1}(H)$. Furthermore $v\neq w$ since $v\not\in \operatorname{sh}_H(w)$. \end{proof} \begin{coro}\label{shadow-prop7} If $V_{\leq k-1}(H)=\lbrace w\rbrace$ and $\operatorname{sh}_H(w)\neq V(H)$, then $H-\operatorname{sh}_H(w)$ is a (non-empty) graph with minimum degree at least $k$. \end{coro} \begin{proof}This follows immediately from Claim \ref{shadow-prop8}.\end{proof} \begin{lem}\label{shadow-prop6} Let $\widetilde{H}$ be a graph containing $H$ as a proper induced subgraph, such that no vertex in $V(\widetilde{H})\setminus V(H)$ is adjacent to any $D\in \mathcal{C}_H$. 
Let $w\in V_{\leq k-1}(\widetilde{H})\cap V(H)$ and suppose that $\overline{e}_H(\operatorname{sh}_H(w))= (k-1)\vert \operatorname{sh}_H(w)\vert$. Then $\operatorname{sh}_{\widetilde{H}}(w)\subseteq \operatorname{sh}_H(w)$ and no vertex in $V(\widetilde{H})\setminus V(H)$ is adjacent to any vertex in $\operatorname{sh}_{\widetilde{H}}(w)$. \end{lem} Let us clarify that the shadow $\operatorname{sh}_{\widetilde{H}}(w)$ of $w$ in $\widetilde{H}$ is defined with respect to the same collection $\mathcal{C}_H$ of subsets of $V(H)$ that we use for $H$ (with all the properties described in the beginning of Section \ref{sect4}). \begin{proof} Note that $w\in V_{\leq k-1}(\widetilde{H})\cap V(H)$ already implies $w\in V_{\leq k-1}(H)$, since $H$ is an induced subgraph of $\widetilde{H}$. Thus, $\operatorname{sh}_H(w)$ is well defined. By Corollary \ref{shadow-prop4} we know $\deg_H(w)= k-1$. Hence we must have $\deg_{\widetilde{H}}(w)=\deg_H(w)= k-1$ and in particular $w$ is not adjacent to any vertex in $V(\widetilde{H})\setminus V(H)$. Let us consider Procedure \ref{proc-shadow} for determining the shadow $\operatorname{sh}_{\widetilde{H}}(w)$ of $w$ in $\widetilde{H}$. If we have multiple options, let us fix some specific choices, so that we obtain some fixed procedure starting with $Y=\lbrace w\rbrace$ and arriving at $Y=\operatorname{sh}_{\widetilde{H}}(w)$. We claim that during this procedure for determining the shadow $\operatorname{sh}_{\widetilde{H}}(w)$ of $w$ in $\widetilde{H}$ all the sets $Y$ have the following two properties: \begin{itemize} \item[($\alpha$)] No vertex in $Y$ is adjacent to a vertex in $V(\widetilde{H})\setminus V(H)$. \item[($\beta$)] It is possible to arrange Procedure \ref{proc-shadow} for determining the shadow $\operatorname{sh}_H(w)$ of $w$ in $H$ in such a way, that the set $Y$ also occurs during this procedure for $H$. 
\end{itemize} Since $w$ is not adjacent to any vertex in $V(\widetilde{H})\setminus V(H)$, the initial set $Y=\lbrace w\rbrace$ satisfies ($\alpha$), and it clearly also satisfies ($\beta$). Now let $Y$ be any set occurring in the procedure for determining the shadow $\operatorname{sh}_{\widetilde{H}}(w)$ of $w$ in $\widetilde{H}$ and assume that $Y$ fulfills ($\alpha$) and ($\beta$). We want to show that after the next step, following the rules in Procedure \ref{proc-shadow} for the graph $\widetilde{H}$, the resulting set still has the properties ($\alpha$) and ($\beta$). Note that by ($\beta$) we in particular have $Y\subseteq \operatorname{sh}_H(w)\subseteq V(H)$. Suppose the next step is adding some set $D\in \mathcal{C}_H$ with $D\cap Y=\emptyset$, such that $D$ is adjacent to some vertex in $Y$. This step is also an allowed step in the procedure for the graph $H$, so ($\beta$) is satisfied for $Y\cup D$. Since $Y$ satisfies ($\alpha$) and $D$ is not adjacent to any vertex in $V(\widetilde{H})\setminus V(H)$ (by the assumptions of the lemma), the set $Y\cup D$ also satisfies ($\alpha$). So in this case we are done. So we can assume that the next step is adding a vertex $v\in V(\widetilde{H})\setminus Y$ with $v\not\in D$ for all $D\in \mathcal{C}_H$, such that $v$ is adjacent to a vertex in $Y$ and $\deg_{\widetilde{H}-Y}(v)\leq k-1$. Since, by ($\alpha$), no vertex in $Y$ is adjacent to any vertex in $V(\widetilde{H})\setminus V(H)$, we must have $v\in V(H)$. So $v\in V(H)\setminus Y$ and $v$ is adjacent to a vertex in $Y$. Furthermore, $\deg_{H-Y}(v)\leq \deg_{\widetilde{H}-Y}(v)\leq k-1$ (since $H-Y$ is an induced subgraph of $\widetilde{H}-Y$). Thus, adding $v$ to $Y$ is also an allowed step in the procedure for determining the shadow $\operatorname{sh}_H(w)$ of $w$ in $H$. In particular, ($\beta$) is satisfied for $Y\cup \lbrace v\rbrace$, and it remains to show ($\alpha$) for $Y\cup \lbrace v\rbrace$.
Suppose that $v$ is adjacent to a vertex in $V(\widetilde{H})\setminus V(H)$. Since $Y\subseteq V(H)$, this vertex also lies in $V(\widetilde{H}-Y)\setminus V(H-Y)$. Hence we have $\deg_{H-Y}(v)\leq \deg_{\widetilde{H}-Y}(v)-1\leq k-2$. But then $$\overline{e}_H(Y\cup \lbrace v\rbrace)=\overline{e}_H(Y)+\overline{e}_{H-Y}(\lbrace v\rbrace)=\overline{e}_H(Y)+\deg_{H-Y}(v)\leq \overline{e}_H(Y)+(k-2),$$ and therefore $$(k-1)\vert Y\cup \lbrace v\rbrace\vert-\overline{e}_H(Y\cup \lbrace v\rbrace)\geq (k-1)(\vert Y\vert+1)-(\overline{e}_H(Y)+(k-2))=(k-1)\vert Y\vert-\overline{e}_H(Y)+1,$$ so the quantity $(k-1)\vert Y\vert-\overline{e}_H(Y)$ increases by at least 1 when going from $Y$ to $Y\cup \lbrace v\rbrace$ in the procedure for determining the shadow $\operatorname{sh}_H(w)$ of $w$ in $H$. But this contradicts Corollary \ref{shadow-prop2}. Hence $v$ cannot be adjacent to a vertex in $V(\widetilde{H})\setminus V(H)$. Since $Y$ satisfies ($\alpha$), this implies that $Y\cup \lbrace v\rbrace$ satisfies ($\alpha$) as well. This finishes the proof that all sets $Y$ occurring in the procedure for determining the shadow $\operatorname{sh}_{\widetilde{H}}(w)$ of $w$ in $\widetilde{H}$ satisfy the two properties ($\alpha$) and ($\beta$). In particular, the final set $Y=\operatorname{sh}_{\widetilde{H}}(w)$ satisfies ($\alpha$) and ($\beta$). By ($\beta$) the set $\operatorname{sh}_{\widetilde{H}}(w)$ is a possible set during the procedure of determining the shadow $\operatorname{sh}_{H}(w)$ of $w$ in $H$. Hence $\operatorname{sh}_{\widetilde{H}}(w)\subseteq \operatorname{sh}_{H}(w)$. By ($\alpha$) no vertex in $\operatorname{sh}_{\widetilde{H}}(w)$ is adjacent to a vertex in $V(\widetilde{H})\setminus V(H)$.\end{proof} \section{Proof of Lemma \ref{mainlem}} \label{sect5} The goal of this section is to finally prove Lemma \ref{mainlem}. The proof proceeds by induction on $v(H)$. 
We use the results from Section \ref{sect4}, but otherwise the inductive proof of Lemma \ref{mainlem} is mostly a lengthy case analysis. First, if $v(H)=1$, then $H$ just consists of a single vertex $w$ and no edges. Note that indeed $H$ does not contain a subgraph of minimum degree at least $k$. The collection $\mathcal{C}_H$ must be empty, because $\deg_H(w)<k$, so $w$ cannot be part of any member of $\mathcal{C}_H$. We can now take $S=\lbrace w\rbrace$; then $S\subseteq V_{\leq k-1}(H)$ and $V_{\leq k-2}(H)\subseteq S$ and $$\sum_{s\in S}(k-\deg_H(s))=k\leq 2(k-1)= 2((k-1)v(H)-e(H))$$ (recall that $k\geq 2$). Since $V_{\leq k-1}(H)\setminus S=\emptyset$, we do not need to specify any sets $B_v$. Now if $\widetilde{H}$ is a graph containing $H$ as a proper induced subgraph such that $V_{\leq k-1}(\widetilde{H})\subseteq V_{\leq k-1}(H)\setminus S=\emptyset$, then $\widetilde{H}$ itself has minimum degree at least $k$ and we can take $\widetilde{H}'=\widetilde{H}$. It is easy to see that in this case $\widetilde{H}'=\widetilde{H}$ satisfies the six properties in Lemma \ref{mainlem}. Now let $H$ be a graph on $v(H)\geq 2$ vertices that does not have a subgraph of minimum degree at least $k$. Furthermore, let $\mathcal{C}_H$ be a collection of disjoint non-empty subsets of $V(H)$, such that for each $D\in\mathcal{C}_H$ we have $\overline{e}_H(D)\leq (k-1)\vert D\vert+1$ and $\deg_H(v)\geq k$ for each $v\in D$. By induction we can assume that Lemma \ref{mainlem} holds for all graphs on fewer than $v(H)$ vertices, and we have to prove Lemma \ref{mainlem} for $H$. Since $H$ does not have a subgraph of minimum degree at least $k$, we know in particular that the minimum degree of $H$ itself is less than $k$. So we can fix a vertex $w\in V(H)$ with $\deg_H(w)\leq k-1$. Consider the shadow $\operatorname{sh}_H(w)$ of $w$ in $H$. We distinguish two cases, namely whether $\operatorname{sh}_H(w)=V(H)$ (Case A) or whether $\operatorname{sh}_H(w)\subsetneq V(H)$ (Case B).
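For an algorithmic view of the shadow used throughout this case analysis, Procedure \ref{proc-shadow} can be sketched in a few lines of code. This is only an illustration, not part of the proof: we assume the graph is stored as an adjacency dictionary mapping each vertex to its set of neighbours, we assume $\overline{e}_H(Y)$ counts the edges of $H$ with at least one endpoint in $Y$ (consistent with the identity $\overline{e}_H(Y\cup\lbrace v\rbrace)=\overline{e}_H(Y)+\deg_{H-Y}(v)$ used above), and all identifiers (`shadow`, `defect`, `adj`, `C_H`) are ours.

```python
def shadow(adj, C_H, w, k):
    """Sketch of Procedure proc-shadow: grow Y from {w} until no step applies.

    adj: dict mapping each vertex to its set of neighbours (the graph H).
    C_H: list of pairwise disjoint vertex sets (the collection C_H).
    """
    covered = set().union(*C_H) if C_H else set()  # vertices inside some D
    Y = {w}
    while True:
        # Step 1: add a vertex v outside Y and outside every D in C_H that
        # is adjacent to Y and has degree at most k-1 in the graph H - Y.
        v = next((u for u in set(adj) - Y - covered
                  if adj[u] & Y and len(adj[u] - Y) <= k - 1), None)
        if v is not None:
            Y.add(v)
            continue
        # Step 2: add, as a whole, a member D of C_H that is disjoint
        # from Y but adjacent to some vertex in Y.
        D = next((D for D in C_H
                  if not (D & Y) and any(adj[u] & Y for u in D)), None)
        if D is not None:
            Y |= D
            continue
        return Y  # neither step applies: Y = sh_H(w) by Claim proc-prop4

def defect(adj, Y, k):
    """The quantity (k-1)|Y| - e_H(Y) from Lemma proc-prop5, where e_H(Y)
    counts edges of H with at least one endpoint in Y."""
    incident = sum(1 for u in adj for v in adj[u]
                   if u < v and (u in Y or v in Y))
    return (k - 1) * len(Y) - incident
```

By Claim \ref{proc-prop4} the returned set does not depend on the (arbitrary) order in which applicable steps are performed, and by Corollary \ref{shadow-prop1} the value `defect(adj, shadow(...), k)` is always non-negative.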
\textbf{Case A: $\operatorname{sh}_H(w)=V(H)$.} By Corollary \ref{shadow-prop1} we have $\overline{e}_H(\operatorname{sh}_H(w))\leq (k-1)\vert \operatorname{sh}_H(w)\vert$. We distinguish two sub-cases, namely whether $\overline{e}_H(\operatorname{sh}_H(w))< (k-1)\vert \operatorname{sh}_H(w)\vert$ (Case A.1) or whether $\overline{e}_H(\operatorname{sh}_H(w))= (k-1)\vert \operatorname{sh}_H(w)\vert$ (Case A.2). \textbf{Case A.1: $\operatorname{sh}_H(w)=V(H)$ and $\overline{e}_H(\operatorname{sh}_H(w))< (k-1)\vert \operatorname{sh}_H(w)\vert$.} In this case we can take $S=V_{\leq k-1}(H)$, then clearly $S\subseteq V_{\leq k-1}(H)$ and $ V_{\leq k-2}(H)\subseteq S$ and by Corollary \ref{shadow-prop5} \begin{multline*} \sum_{s\in S}(k-\deg_H(s))=\sum_{\substack{s\in \operatorname{sh}_H(w)\\ \deg_H(s)\leq k-1}}(k-\deg_H(s))\leq 2((k-1)\vert \operatorname{sh}_H(w)\vert-\overline{e}_H(\operatorname{sh}_H(w)))\\ =2((k-1)v(H)-e(H)). \end{multline*} Since $V_{\leq k-1}(H)\setminus S=\emptyset$, we do not need to specify any sets $B_v$. Now if $\widetilde{H}$ is a graph containing $H$ as a proper induced subgraph such that $V_{\leq k-1}(\widetilde{H})\subseteq V_{\leq k-1}(H)\setminus S=\emptyset$, then $\widetilde{H}$ itself has minimum degree at least $k$ and we can take $\widetilde{H}'=\widetilde{H}$. It is easy to see that in this case $\widetilde{H}'=\widetilde{H}$ satisfies the six properties in Lemma \ref{mainlem}. \textbf{Case A.2: $\operatorname{sh}_H(w)=V(H)$ and $\overline{e}_H(\operatorname{sh}_H(w))=(k-1)\vert \operatorname{sh}_H(w)\vert$.} By Corollary \ref{shadow-prop4} the vertex $w$ is the only vertex $v\in \operatorname{sh}_H(w)=V(H)$ with $\deg_H(v)\leq k-1$ and furthermore $\deg_H(w)= k-1$. Thus, $V_{\leq k-1}(H)=\lbrace w\rbrace$ and $V_{\leq k-2}(H)=\emptyset$. 
Let us take $S=\emptyset$, then clearly $S\subseteq V_{\leq k-1}(H)$ with $ V_{\leq k-2}(H)\subseteq S$ and $$\sum_{s\in S}(k-\deg_H(s))=0=2((k-1)\vert \operatorname{sh}_H(w)\vert-\overline{e}_H(\operatorname{sh}_H(w)))= 2((k-1)v(H)-e(H)).$$ Now $V_{\leq k-1}(H)\setminus S=\lbrace w\rbrace$ and let us take $B_w=\operatorname{sh}_H(w)=V(H)$. Let $\widetilde{H}$ be a graph containing $H$ as a proper induced subgraph with $V_{\leq k-1}(\widetilde{H}) \subseteq {V_{\leq k-1}(H)\setminus S} = \lbrace w\rbrace$ and such that no vertex in $V(\widetilde{H})\setminus V(H)$ is adjacent to any member of $\mathcal{C}_H$. We have to show the existence of an induced subgraph $\widetilde{H}'$ of $\widetilde{H}$ satisfying the properties (a) to (f) in Lemma \ref{mainlem}. If $\widetilde{H}$ itself has minimum degree at least $k$, then we can take $\widetilde{H}'=\widetilde{H}$ and all the properties are satisfied. So let us assume that $\widetilde{H}$ has minimum degree smaller than $k$. By $V_{\leq k-1}(\widetilde{H})\subseteq V_{\leq k-1}(H)\setminus S=\lbrace w\rbrace$ this implies $V_{\leq k-1}(\widetilde{H})=\lbrace w\rbrace$. We can apply Lemma \ref{shadow-prop6}, so $\operatorname{sh}_{\widetilde{H}}(w)\subseteq \operatorname{sh}_H(w)$ and no vertex in $V(\widetilde{H})\setminus V(H)$ is adjacent to any vertex in $\operatorname{sh}_{\widetilde{H}}(w)$. Recall that here the shadow $\operatorname{sh}_{\widetilde{H}}(w)$ of $w$ in $\widetilde{H}$ is defined with respect to the same collection $\mathcal{C}_H$ as for $H$. Let us take $\widetilde{H}'=\widetilde{H}-\operatorname{sh}_{\widetilde{H}}(w)$. Note that $\operatorname{sh}_{\widetilde{H}}(w)\subseteq \operatorname{sh}_H(w)=V(H)$ and therefore $\operatorname{sh}_{\widetilde{H}}(w)\neq V(\widetilde{H})$. 
By Corollary \ref{shadow-prop7} applied to $\widetilde{H}$ and $w$, the graph $\widetilde{H}'=\widetilde{H}-\operatorname{sh}_{\widetilde{H}}(w)$ is a non-empty induced subgraph of $\widetilde{H}$ and has minimum degree at least $k$. This already establishes property (a). Let us check the other properties: \begin{itemize} \item[(b)] By Corollary \ref{shadow-prop1} applied to $\widetilde{H}$ and $w$ we have $\overline{e}_{\widetilde{H}}(\operatorname{sh}_{\widetilde{H}}(w))\leq (k-1)\vert \operatorname{sh}_{\widetilde{H}}(w)\vert$. Hence $$(k-1)v(\widetilde{H}')-e(\widetilde{H}')=(k-1)(v(\widetilde{H})-\vert \operatorname{sh}_{\widetilde{H}}(w)\vert)-(e(\widetilde{H})-\overline{e}_{\widetilde{H}}(\operatorname{sh}_{\widetilde{H}}(w)))\leq (k-1)v(\widetilde{H})-e(\widetilde{H}).$$ \item[(c)] $V(\widetilde{H})\setminus V(\widetilde{H}')=\operatorname{sh}_{\widetilde{H}}(w)\subseteq \operatorname{sh}_H(w)=B_w=\bigcup_{v\in V_{\leq k-1}(\widetilde{H})}B_v$. \item[(d)] As established above from Lemma \ref{shadow-prop6}, no vertex in $V(\widetilde{H})\setminus V(\widetilde{H}')=\operatorname{sh}_{\widetilde{H}}(w)$ is adjacent to any vertex in $V(\widetilde{H})\setminus V(H)$. \item[(e)] For each $D\in \mathcal{C}_H$, by property (II) in Definition \ref{defi-shadow} we either have $D\subseteq \operatorname{sh}_{\widetilde{H}}(w)$ or we have $D\cap \operatorname{sh}_{\widetilde{H}}(w)=\emptyset$. Thus, either $D\cap V(\widetilde{H}')=\emptyset$ or $D\subseteq V(\widetilde{H}')$. \item[(f)] If $D\in \mathcal{C}_H$ and $D\subseteq V(\widetilde{H}')$, then $D\cap \operatorname{sh}_{\widetilde{H}}(w)=\emptyset$. So by property (IV) in Definition \ref{defi-shadow} the set $D$ is not adjacent to any vertex in $\operatorname{sh}_{\widetilde{H}}(w)=V(\widetilde{H})\setminus V(\widetilde{H}')$. \end{itemize} \textbf{Case B: $\operatorname{sh}_H(w)\subsetneq V(H)$.} In this case $F=H-\operatorname{sh}_H(w)$ is a non-empty subgraph of $H$ and $F$ has fewer vertices than $H$. 
Note that $F$ does not have a subgraph of minimum degree at least $k$, since according to the assumptions of Lemma \ref{mainlem} the graph $H$ has no such subgraph. Let $\mathcal{C}_F$ be the collection of those $D\in \mathcal{C}_H$ with $D\subseteq V(F)$, i.e. \begin{equation}\label{F-collection} \mathcal{C}_F=\lbrace D\in \mathcal{C}_H\,\vert\,D\cap \operatorname{sh}_H(w)=\emptyset\rbrace=\lbrace D\in \mathcal{C}_H\,\vert\,D\not\subseteq\operatorname{sh}_H(w)\rbrace, \end{equation} using that $\operatorname{sh}_H(w)$ has property (II) from Definition \ref{defi-shadow}. Clearly $\mathcal{C}_F$ is a collection of disjoint non-empty subsets of $V(F)$ and for each $D\in \mathcal{C}_F$ we have $$\overline{e}_F(D)\leq \overline{e}_H(D)\leq (k-1)\vert D\vert+1.$$ Furthermore for each $D\in \mathcal{C}_F\subseteq \mathcal{C}_H$ we have $D\cap \operatorname{sh}_H(w)=\emptyset$ and by property (IV) from Definition \ref{defi-shadow} this implies that $D$ is not adjacent to any vertex in $\operatorname{sh}_H(w)$. Hence for every $v\in D$ we have $$\deg_F(v)=\deg_{H-\operatorname{sh}_H(w)}(v)=\deg_{H}(v)\geq k.$$ Thus, the graph $F$ together with the collection $\mathcal{C}_F$ of subsets of $V(F)$ satisfies the assumptions of Lemma \ref{mainlem}. By the induction assumption, we can apply Lemma \ref{mainlem} to $F$ (with $\mathcal{C}_F$). We find a subset $S_F\subseteq V_{\leq k-1}(F)$ with $ V_{\leq k-2}(F)\subseteq S_F$ and \begin{equation}\label{F-SF-ineq} \sum_{s\in S_F}(k-\deg_F(s))\leq 2((k-1)v(F)-e(F)) \end{equation} as well as disjoint subsets $B_v\subseteq V(F)$ for each vertex $v\in V_{\leq k-1}(F)\setminus S_F=V_{k-1}(F)\setminus S_F$, such that the conclusion of Lemma \ref{mainlem} holds. Note that \begin{multline}\label{F-edge-defect} (k-1)v(F)-e(F)=(k-1)(v(H)-\vert \operatorname{sh}_{H}(w)\vert)-(e(H)-\overline{e}_{H}(\operatorname{sh}_{H}(w)))\\ =(k-1)v(H)-e(H)-((k-1)\vert \operatorname{sh}_{H}(w)\vert-\overline{e}_{H}(\operatorname{sh}_{H}(w))). 
\end{multline} As $F$ is an induced subgraph of $H$, for all $v\in V(F)$ we have \begin{equation}\label{F-deg-H} \deg_F(v)\leq \deg_H(v). \end{equation} \begin{claim}\label{F-k-1-set} $V_{\leq k-1}(H)=V_{\leq k-1}(F)\cup \lbrace v\in \operatorname{sh}_H(w)\,\vert\, \deg_H(v)\leq k-1\rbrace$. \end{claim} \begin{proof}First, let $v$ be an element of the left-hand side, i.e.\ $v\in V(H)$ and $\deg_H(v)\leq k-1$. If $v\in \operatorname{sh}_H(w)$, then $v$ is clearly contained in the right-hand side. Otherwise $v\in V(F)$ and $\deg_F(v)\leq \deg_H(v)\leq k-1$ by (\ref{F-deg-H}), hence $v\in V_{\leq k-1}(F)$. For the other inclusion, note that obviously $$\lbrace v\in \operatorname{sh}_H(w)\,\vert\, \deg_H(v)\leq k-1\rbrace\subseteq V_{\leq k-1}(H)$$ and that $$V_{\leq k-1}(F)=V_{\leq k-1}(H-\operatorname{sh}_H(w))\subseteq V_{\leq k-1}(H)$$ by Claim \ref{shadow-prop8}.\end{proof} \begin{claim}\label{F-C-set} No vertex in $V(H)\setminus V(F)$ is adjacent to any member of $\mathcal{C}_F$. \end{claim} \begin{proof}Let $D\in \mathcal{C}_F$; then by (\ref{F-collection}) we have $D\cap\operatorname{sh}_H(w)=\emptyset$. Since $\operatorname{sh}_H(w)$ has property (IV) in Definition \ref{defi-shadow}, this implies that $D$ is not adjacent to any vertex in $\operatorname{sh}_H(w)=V(H)\setminus V(F)$. \end{proof} By Corollary \ref{shadow-prop1} we again have $\overline{e}_H(\operatorname{sh}_H(w))\leq (k-1)\vert \operatorname{sh}_H(w)\vert$. As before, let us distinguish two sub-cases, namely whether $\overline{e}_H(\operatorname{sh}_H(w))< (k-1)\vert \operatorname{sh}_H(w)\vert$ (Case B.1) or whether $\overline{e}_H(\operatorname{sh}_H(w))= (k-1)\vert \operatorname{sh}_H(w)\vert$ (Case B.2).
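The reduction from $H$ to $F$ that drives the induction in Case B can also be written down computationally. The sketch below is purely illustrative (all names are ours): assuming the graph is stored as an adjacency dictionary mapping each vertex to its set of neighbours, it forms the induced subgraph $F=H-\operatorname{sh}_H(w)$ and the restricted collection $\mathcal{C}_F$ of (\ref{F-collection}).

```python
def case_b_setup(adj, C_H, sh):
    """Illustrative sketch of the Case B reduction.

    adj: dict mapping each vertex of H to its set of neighbours.
    C_H: list of pairwise disjoint vertex sets.
    sh:  the shadow sh_H(w), a subset of the vertices of H.
    Returns the induced subgraph F = H - sh and the collection C_F,
    i.e. the members of C_H that are disjoint from the shadow.
    """
    F = {u: adj[u] - sh for u in adj if u not in sh}
    C_F = [D for D in C_H if not (D & sh)]
    return F, C_F
```

By property (IV) in Definition \ref{defi-shadow}, the members kept in `C_F` are exactly those whose vertices and incident edges survive unchanged in $F$, which is why $\deg_F(v)=\deg_H(v)$ for every $v$ in a member of $\mathcal{C}_F$.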
\textbf{Case B.1: $\operatorname{sh}_H(w)\subsetneq V(H)$ and $\overline{e}_H(\operatorname{sh}_H(w))< (k-1)\vert \operatorname{sh}_H(w)\vert$.} In this case let us take $$S_H=S_F\cup \lbrace v\in \operatorname{sh}_H(w)\,\vert\, \deg_H(v)\leq k-1\rbrace.$$ Since $S_F\subseteq V_{\leq k-1}(F)\subseteq V_{\leq k-1}(H)$ (see Claim \ref{F-k-1-set}), it is clear that $S_H\subseteq V_{\leq k-1}(H)$. Let us check $V_{\leq k-2}(H)\subseteq S_H$: Let $v\in V(H)$ with $\deg_H(v)\leq k-2$. If $v\in \operatorname{sh}_H(w)$, then we clearly have $v\in S_H$. Otherwise $v\in V(F)$ and by (\ref{F-deg-H}) we have $\deg_F(v)\leq \deg_H(v)\leq k-2$, so $v\in V_{\leq k-2}(F)\subseteq S_F\subseteq S_H$. Thus, indeed $V_{\leq k-2}(H)\subseteq S_H$. Furthermore note that $$\sum_{s\in S_H}(k-\deg_H(s))=\sum_{s\in S_F}(k-\deg_H(s))+\sum_{\substack{s\in \operatorname{sh}_H(w)\\ \deg_H(s)\leq k-1}}(k-\deg_H(s)).$$ By (\ref{F-deg-H}) and Corollary \ref{shadow-prop5} (note that the assumption of this corollary is fulfilled by the assumption of Case B.1) we obtain $$\sum_{s\in S_H}(k-\deg_H(s))\leq\sum_{s\in S_F}(k-\deg_F(s))+2((k-1)\vert \operatorname{sh}_H(w)\vert-\overline{e}_H(\operatorname{sh}_H(w))).$$ Plugging in (\ref{F-SF-ineq}) yields $$\sum_{s\in S_H}(k-\deg_H(s))\leq 2((k-1)v(F)-e(F))+2((k-1)\vert \operatorname{sh}_H(w)\vert-\overline{e}_H(\operatorname{sh}_H(w))),$$ and, together with (\ref{F-edge-defect}), this gives $$\sum_{s\in S_H}(k-\deg_H(s))\leq 2((k-1)v(H)-e(H)).$$ All in all, $S_H\subseteq V_{\leq k-1}(H)$ has all the desired properties to act as the set $S$ for $H$. Furthermore note that by Claim \ref{F-k-1-set} we have \begin{equation}\label{F-S-difference-1} V_{\leq k-1}(H)\setminus S_H=V_{\leq k-1}(F)\setminus S_F. \end{equation} Thus, for each $v\in V_{\leq k-1}(H)\setminus S_H=V_{\leq k-1}(F)\setminus S_F$ we already have a set $B_v\subseteq V(F)\subseteq V(H)$ coming from the application of Lemma \ref{mainlem} to $F$. 
We can just keep those sets $B_v\subseteq V(H)$ for each $v\in V_{\leq k-1}(H)\setminus S_H=V_{\leq k-1}(F)\setminus S_F$. They are still disjoint. Now let $\widetilde{H}$ be a graph containing $H$ as a proper induced subgraph with $V_{\leq k-1}(\widetilde{H})\subseteq V_{\leq k-1}(H)\setminus S_H$ such that no vertex in $V(\widetilde{H})\setminus V(H)$ is adjacent to any member of $\mathcal{C}_H$. We need to find a (non-empty) induced subgraph $\widetilde{H}'$ of $\widetilde{H}$ with the properties (a) to (f) listed in Lemma \ref{mainlem}. Note that $\widetilde{H}$ also contains $F$ as a proper induced subgraph. By (\ref{F-S-difference-1}) we have $$V_{\leq k-1}(\widetilde{H})\subseteq V_{\leq k-1}(H)\setminus S_H=V_{\leq k-1}(F)\setminus S_F.$$ Furthermore no vertex in $V(\widetilde{H})\setminus V(H)$ is adjacent to any member of $\mathcal{C}_F\subseteq \mathcal{C}_H$ and by Claim \ref{F-C-set} no vertex in $V(H)\setminus V(F)$ is adjacent to any member of $\mathcal{C}_F$. Thus, no vertex in $V(\widetilde{H})\setminus V(F)$ is adjacent to any member of $\mathcal{C}_F$. So the graph $\widetilde{H}$ satisfies all conditions in Lemma \ref{mainlem} for $F$ (together with the collection $\mathcal{C}_F$). So by the conclusion of Lemma \ref{mainlem} for $F$ we can find a (non-empty) induced subgraph $\widetilde{H}'$ of $\widetilde{H}$ with the following six properties (these are the properties (a) to (f) but with respect to $F$ instead of $H$): \begin{itemize} \item[(a$_F$)] The minimum degree of $\widetilde{H}'$ is at least $k$. \item[(b$_F$)] $(k-1)v(\widetilde{H}')-e(\widetilde{H}')\leq (k-1)v(\widetilde{H})-e(\widetilde{H})$. \item[(c$_F$)] $V(\widetilde{H})\setminus V(\widetilde{H}')\subseteq \bigcup_{v\in V_{\leq k-1}(\widetilde{H})}B_v$, so in particular $V(\widetilde{H})\setminus V(\widetilde{H}')\subseteq V(F)$. \item[(d$_F$)] No vertex in $V(\widetilde{H})\setminus V(\widetilde{H}')$ is adjacent to any vertex in $V(\widetilde{H})\setminus V(F)$. 
\item[(e$_F$)] For each $D\in \mathcal{C}_F$ either $D\subseteq V(\widetilde{H}')$ or $D\cap V(\widetilde{H}')=\emptyset$. \item[(f$_F$)] If $D\in \mathcal{C}_F$ and $D\subseteq V(\widetilde{H}')$, then $D$ is not adjacent to any vertex in $V(\widetilde{H})\setminus V(\widetilde{H}')$. \end{itemize} We have to show that there is an induced subgraph of $\widetilde{H}$ fulfilling the six properties (a) to (f) in Lemma \ref{mainlem} (with respect to $H$ and the collection $\mathcal{C}_H$). Let us just take the (non-empty) induced subgraph $\widetilde{H}'$ of $\widetilde{H}$ with the properties (a$_F$) to (f$_F$) above. Let us check that this same graph $\widetilde{H}'$ also satisfies (a) to (f). For (a) and (b) this is clear, because they are identical with (a$_F$) and (b$_F$). Property (c) is identical with property (c$_F$) as well (recall that the sets $B_v$ are by definition the same for $F$ and for $H$ and $B_v\subseteq V(F)\subseteq V(H)$). Property (d) follows directly from property (d$_F$), because $V(\widetilde{H})\setminus V(H)\subseteq V(\widetilde{H})\setminus V(F)$. It remains to check properties (e) and (f): \begin{itemize} \item[(e)] Let $D\in \mathcal{C}_H$ and we have to show that $D\subseteq V(\widetilde{H}')$ or $D\cap V(\widetilde{H}')=\emptyset$. If $D\in \mathcal{C}_F$, this is clear from property (e$_F$). Otherwise, by (\ref{F-collection}) we have $D\subseteq \operatorname{sh}_H(w)=V(H)\setminus V(F)\subseteq V(\widetilde{H})\setminus V(F)$. Since $V(\widetilde{H})\setminus V(\widetilde{H}')\subseteq V(F)$ by (c$_F$), this implies $D\subseteq V(\widetilde{H}')$. \item[(f)] Let $D\in \mathcal{C}_H$ and $D\subseteq V(\widetilde{H}')$. We have to show that $D$ is not adjacent to any vertex in $V(\widetilde{H})\setminus V(\widetilde{H}')$. If $D\in \mathcal{C}_F$, this is clear from property (f$_F$). Otherwise, by (\ref{F-collection}) we have $D\subseteq \operatorname{sh}_H(w)=V(H)\setminus V(F)\subseteq V(\widetilde{H})\setminus V(F)$. 
So by (d$_F$) the set $D$ is not adjacent to any vertex in $V(\widetilde{H})\setminus V(\widetilde{H}')$. \end{itemize} \textbf{Case B.2: $\operatorname{sh}_H(w)\subsetneq V(H)$ and $\overline{e}_H(\operatorname{sh}_H(w))= (k-1)\vert \operatorname{sh}_H(w)\vert$.} By Corollary \ref{shadow-prop4} the vertex $w$ is the only vertex $v\in \operatorname{sh}_H(w)$ with $\deg_H(v)\leq k-1$ and furthermore $\deg_H(w)= k-1$. In particular, Claim \ref{F-k-1-set} implies \begin{equation}\label{F-k-1-set2} V_{\leq k-1}(H)=V_{\leq k-1}(F)\cup \lbrace w\rbrace. \end{equation} Let us take $S_H=S_F$. Then $S_H=S_F\subseteq V_{\leq k-1}(F)\subseteq V_{\leq k-1}(H)$ by (\ref{F-k-1-set2}). Let us check $V_{\leq k-2}(H)\subseteq S_H$: Let $v\in V(H)$ with $\deg_H(v)\leq k-2$. Then clearly $v\in V_{\leq k-1}(H)$ and furthermore $v\neq w$ since $\deg_H(w)= k-1$. Thus, (\ref{F-k-1-set2}) implies $v\in V_{\leq k-1}(F)$ and by (\ref{F-deg-H}) we have $\deg_F(v)\leq \deg_H(v)\leq k-2$. Thus, $v\in V_{\leq k-2}(F)\subseteq S_F=S_H$. By (\ref{F-edge-defect}) the assumption $\overline{e}_H(\operatorname{sh}_H(w))= (k-1)\vert \operatorname{sh}_H(w)\vert$ of Case B.2 implies $$(k-1)v(F)-e(F)=(k-1)v(H)-e(H).$$ Now (\ref{F-SF-ineq}) reads $$\sum_{s\in S_F}(k-\deg_F(s))\leq 2((k-1)v(H)-e(H))$$ and by (\ref{F-deg-H}) it implies $$\sum_{s\in S_H}(k-\deg_H(s))=\sum_{s\in S_F}(k-\deg_H(s))\leq \sum_{s\in S_F}(k-\deg_F(s))\leq 2((k-1)v(H)-e(H)).$$ All in all, $S_H\subseteq V_{\leq k-1}(H)$ has all the desired properties to act as the set $S$ for $H$. Furthermore note that by (\ref{F-k-1-set2}) we have \begin{equation}\label{F-S-difference-2} V_{\leq k-1}(H)\setminus S_H=(V_{\leq k-1}(F)\setminus S_F)\cup \lbrace w\rbrace. \end{equation} For each $v\in V_{\leq k-1}(F)\setminus S_F$ we already have a set $B_v\subseteq V(F)\subseteq V(H)$ coming from the application of Lemma \ref{mainlem} to $F$. We can just keep those sets $B_v\subseteq V(H)$ for each $v\in V_{\leq k-1}(F)\setminus S_F$. 
They are still disjoint. For $v=w$, let us set $B_w=\operatorname{sh}_H(w)\subseteq V(H)$. Since $B_v\subseteq V(F)=V(H)\setminus \operatorname{sh}_H(w)$ for $v\in V_{\leq k-1}(F)\setminus S_F$, the set $B_w$ is disjoint from all the other $B_v$. So this defines disjoint sets $B_v\subseteq V(H)$ for each $v\in V_{\leq k-1}(H)\setminus S_H$. Now let $\widetilde{H}$ be a graph containing $H$ as a proper induced subgraph with $V_{\leq k-1}(\widetilde{H})\subseteq V_{\leq k-1}(H)\setminus S_H$ such that no vertex in $V(\widetilde{H})\setminus V(H)$ is adjacent to any member of $\mathcal{C}_H$. We need to find a (non-empty) induced subgraph $\widetilde{H}'$ of $\widetilde{H}$ with the properties (a) to (f) listed in Lemma \ref{mainlem}. In order to find a suitable $\widetilde{H}'$, we distinguish two cases again: $w\not\in V_{\leq k-1}(\widetilde{H})$ (Case B.2.a) or $w\in V_{\leq k-1}(\widetilde{H})$ (Case B.2.b). \textbf{Case B.2.a: $\operatorname{sh}_H(w)\subsetneq V(H)$, $\overline{e}_H(\operatorname{sh}_H(w))= (k-1)\vert \operatorname{sh}_H(w)\vert$ and $w\not\in V_{\leq k-1}(\widetilde{H})$.} Since $\widetilde{H}$ contains $H$ as a proper induced subgraph, it also contains $F$ as a proper induced subgraph. By (\ref{F-S-difference-2}) we have $$V_{\leq k-1}(\widetilde{H})\subseteq V_{\leq k-1}(H)\setminus S_H=(V_{\leq k-1}(F)\setminus S_F)\cup \lbrace w\rbrace$$ and by the assumption $w\not\in V_{\leq k-1}(\widetilde{H})$ of Case B.2.a this implies $V_{\leq k-1}(\widetilde{H})\subseteq V_{\leq k-1}(F)\setminus S_F$. Furthermore, no vertex in $V(\widetilde{H})\setminus V(H)$ is adjacent to any member of $\mathcal{C}_F\subseteq \mathcal{C}_H$ and by Claim \ref{F-C-set} no vertex in $V(H)\setminus V(F)$ is adjacent to any member of $\mathcal{C}_F$. Thus, no vertex in $V(\widetilde{H})\setminus V(F)$ is adjacent to any member of $\mathcal{C}_F$. So the graph $\widetilde{H}$ satisfies all conditions in Lemma \ref{mainlem} for $F$ (together with the collection $\mathcal{C}_F$).
So by the conclusion of Lemma \ref{mainlem} for $F$ we can find a (non-empty) induced subgraph $\widetilde{H}'$ of $\widetilde{H}$ with the following six properties (these are again the properties (a) to (f) but with respect to $F$ instead of $H$): \begin{itemize} \item[(a$_F$)] The minimum degree of $\widetilde{H}'$ is at least $k$. \item[(b$_F$)] $(k-1)v(\widetilde{H}')-e(\widetilde{H}')\leq (k-1)v(\widetilde{H})-e(\widetilde{H})$. \item[(c$_F$)] $V(\widetilde{H})\setminus V(\widetilde{H}')\subseteq \bigcup_{v\in V_{\leq k-1}(\widetilde{H})}B_v$, so in particular $V(\widetilde{H})\setminus V(\widetilde{H}')\subseteq V(F)$. \item[(d$_F$)] No vertex in $V(\widetilde{H})\setminus V(\widetilde{H}')$ is adjacent to any vertex in $V(\widetilde{H})\setminus V(F)$. \item[(e$_F$)] For each $D\in \mathcal{C}_F$ either $D\subseteq V(\widetilde{H}')$ or $D\cap V(\widetilde{H}')=\emptyset$. \item[(f$_F$)] If $D\in \mathcal{C}_F$ and $D\subseteq V(\widetilde{H}')$, then $D$ is not adjacent to any vertex in $V(\widetilde{H})\setminus V(\widetilde{H}')$. \end{itemize} We have to show that there is an induced subgraph of $\widetilde{H}$ fulfilling the six properties (a) to (f) in Lemma \ref{mainlem} (with respect to $H$ and the collection $\mathcal{C}_H$). Let us just take the (non-empty) induced subgraph $\widetilde{H}'$ of $\widetilde{H}$ with the properties (a$_F$) to (f$_F$) above. Let us check that this same graph $\widetilde{H}'$ also satisfies (a) to (f). For (a) and (b) this is clear, because they are identical with (a$_F$) and (b$_F$). Property (c) is identical with property (c$_F$) as well (recall that the sets $B_v$ for $v\neq w$ are by definition the same for $F$ and for $H$). Property (d) follows directly from property (d$_F$), because $V(\widetilde{H})\setminus V(H)\subseteq V(\widetilde{H})\setminus V(F)$. 
It remains to check properties (e) and (f): \begin{itemize} \item[(e)] Let $D\in \mathcal{C}_H$ and we have to show that $D\subseteq V(\widetilde{H}')$ or $D\cap V(\widetilde{H}')=\emptyset$. If $D\in \mathcal{C}_F$, this is clear from property (e$_F$). Otherwise, by (\ref{F-collection}) we have $D\subseteq \operatorname{sh}_H(w)=V(H)\setminus V(F)\subseteq V(\widetilde{H})\setminus V(F)$. Since $V(\widetilde{H})\setminus V(\widetilde{H}')\subseteq V(F)$ by (c$_F$), this implies $D\subseteq V(\widetilde{H}')$. \item[(f)] Let $D\in \mathcal{C}_H$ and $D\subseteq V(\widetilde{H}')$. We have to show that $D$ is not adjacent to any vertex in $V(\widetilde{H})\setminus V(\widetilde{H}')$. If $D\in \mathcal{C}_F$, this is clear from property (f$_F$). Otherwise, by (\ref{F-collection}) we have $D\subseteq \operatorname{sh}_H(w)=V(H)\setminus V(F)\subseteq V(\widetilde{H})\setminus V(F)$. So by (d$_F$) the set $D$ is not adjacent to any vertex in $V(\widetilde{H})\setminus V(\widetilde{H}')$. \end{itemize} \textbf{Case B.2.b: $\operatorname{sh}_H(w)\subsetneq V(H)$, $\overline{e}_H(\operatorname{sh}_H(w))= (k-1)\vert \operatorname{sh}_H(w)\vert$ and $w\in V_{\leq k-1}(\widetilde{H})$.} Recall that no vertex in $V(\widetilde{H})\setminus V(H)$ is adjacent to any member of $\mathcal{C}_H$. So we can apply Lemma \ref{shadow-prop6} and hence $\operatorname{sh}_{\widetilde{H}}(w)\subseteq \operatorname{sh}_H(w)$ and no vertex in $V(\widetilde{H})\setminus V(H)$ is adjacent to any vertex in $\operatorname{sh}_{\widetilde{H}}(w)$. Here the shadow $\operatorname{sh}_{\widetilde{H}}(w)$ of $w$ in $\widetilde{H}$ is defined with respect to the collection $\mathcal{C}_H$. Set $$\widetilde{H}_F=\widetilde{H}-\operatorname{sh}_{\widetilde{H}}(w).$$ Here, we cannot use the graph $\widetilde{H}$ itself in Lemma \ref{mainlem} for the graph $F$ as in the previous two cases (Case B.1 and Case B.2.a). But let us check that we can use the graph $\widetilde{H}_F$ instead. 
As $\operatorname{sh}_{\widetilde{H}}(w)\subseteq \operatorname{sh}_H(w)$, the graph $\widetilde{H}_F=\widetilde{H}-\operatorname{sh}_{\widetilde{H}}(w)$ does indeed contain the graph $F=H-\operatorname{sh}_{H}(w)$ as an induced subgraph. Since $\operatorname{sh}_{\widetilde{H}}(w)\subseteq \operatorname{sh}_H(w)\subseteq V(H)$, we have \begin{equation}\label{tHFohneF} V(\widetilde{H})\setminus V(H)\subseteq V(\widetilde{H}_F)\setminus V(F) \end{equation} and since $H$ is a proper induced subgraph of $\widetilde{H}$, we can conclude that $F$ is also a proper induced subgraph of $\widetilde{H}_F$. Recall that $\operatorname{sh}_{\widetilde{H}}(w)\subseteq \operatorname{sh}_H(w)\subseteq V(H)\subsetneq V(\widetilde{H})$. Thus, Claim \ref{shadow-prop8} applied to the graph $\widetilde{H}$ and the vertex $w$ implies that \begin{equation}\label{tHF-degk-1} V_{\leq k-1}(\widetilde{H}_F)=V_{\leq k-1}(\widetilde{H}-\operatorname{sh}_{\widetilde{H}}(w))\subseteq V_{\leq k-1}(\widetilde{H})\setminus \lbrace w\rbrace. \end{equation} From $V_{\leq k-1}(\widetilde{H})\subseteq V_{\leq k-1}(H)\setminus S_H$ and (\ref{F-S-difference-2}) we have $$V_{\leq k-1}(\widetilde{H})\subseteq V_{\leq k-1}(H)\setminus S_H= (V_{\leq k-1}(F)\setminus S_F)\cup \lbrace w\rbrace.$$ Together with (\ref{tHF-degk-1}) this yields $$V_{\leq k-1}(\widetilde{H}_F)\subseteq V_{\leq k-1}(F)\setminus S_F.$$ Furthermore no vertex in $V(\widetilde{H})\setminus V(H)$ is adjacent to any member of $\mathcal{C}_F\subseteq \mathcal{C}_H$ and by Claim \ref{F-C-set} no vertex in $V(H)\setminus V(F)$ is adjacent to any member of $\mathcal{C}_F$. Thus, no vertex in $V(\widetilde{H})\setminus V(F)$ is adjacent to any member of $\mathcal{C}_F$. In particular, no vertex in $V(\widetilde{H}_F)\setminus V(F)$ is adjacent to any member of $\mathcal{C}_F$. So the graph $\widetilde{H}_F$ indeed satisfies all conditions in Lemma \ref{mainlem} for $F$ (together with the collection $\mathcal{C}_F$). 
So by the conclusion of Lemma \ref{mainlem} for $F$ we can find a (non-empty) induced subgraph $\widetilde{H}'$ of $\widetilde{H}_F$ with the following six properties: \begin{itemize} \item[(a$_F$)] The minimum degree of $\widetilde{H}'$ is at least $k$. \item[(b$_F$)] $(k-1)v(\widetilde{H}')-e(\widetilde{H}')\leq (k-1)v(\widetilde{H}_F)-e(\widetilde{H}_F)$. \item[(c$_F$)] $V(\widetilde{H}_F)\setminus V(\widetilde{H}')\subseteq \bigcup_{v\in V_{\leq k-1}(\widetilde{H}_F)}B_v$, so in particular $V(\widetilde{H}_F)\setminus V(\widetilde{H}')\subseteq V(F)$. \item[(d$_F$)] No vertex in $V(\widetilde{H}_F)\setminus V(\widetilde{H}')$ is adjacent to any vertex in $V(\widetilde{H}_F)\setminus V(F)$. \item[(e$_F$)] For each $D\in \mathcal{C}_F$ either $D\subseteq V(\widetilde{H}')$ or $D\cap V(\widetilde{H}')=\emptyset$. \item[(f$_F$)] If $D\in \mathcal{C}_F$ and $D\subseteq V(\widetilde{H}')$, then $D$ is not adjacent to any vertex in $V(\widetilde{H}_F)\setminus V(\widetilde{H}')$. \end{itemize} We have to show that there is an induced subgraph of $\widetilde{H}$ fulfilling the six properties (a) to (f) in Lemma \ref{mainlem} (with respect to $H$ and the collection $\mathcal{C}_H$). Note that $\widetilde{H}'$ is a non-empty induced subgraph of $\widetilde{H}_F$ and hence also of $\widetilde{H}$. Let us now check that this graph $\widetilde{H}'$ satisfies (a) to (f) and is therefore the desired induced subgraph of $\widetilde{H}$. \begin{itemize} \item[(a)] This statement is identical with (a$_F$). \item[(b)] Note that \begin{multline*} (k-1)v(\widetilde{H}_F)-e(\widetilde{H}_F)=(k-1)(v(\widetilde{H})-\vert \operatorname{sh}_{\widetilde{H}}(w)\vert)-(e(\widetilde{H})-\overline{e}_{\widetilde{H}}(\operatorname{sh}_{\widetilde{H}}(w)))\\ =(k-1)v(\widetilde{H})-e(\widetilde{H})-((k-1)\vert \operatorname{sh}_{\widetilde{H}}(w)\vert-\overline{e}_{\widetilde{H}}(\operatorname{sh}_{\widetilde{H}}(w))). 
\end{multline*} By Corollary \ref{shadow-prop1} applied to $\widetilde{H}$ we have $\overline{e}_{\widetilde{H}}(\operatorname{sh}_{\widetilde{H}}(w))\leq (k-1)\vert \operatorname{sh}_{\widetilde{H}}(w)\vert$ and therefore $$(k-1)v(\widetilde{H}_F)-e(\widetilde{H}_F)\leq (k-1)v(\widetilde{H})-e(\widetilde{H}).$$ Thus, by (b$_F$) $$(k-1)v(\widetilde{H}')-e(\widetilde{H}')\leq (k-1)v(\widetilde{H}_F)-e(\widetilde{H}_F)\leq (k-1)v(\widetilde{H})-e(\widetilde{H}).$$ \item[(c)] Since $V(\widetilde{H})=\operatorname{sh}_{\widetilde{H}}(w)\cup V(\widetilde{H}_F)$, we have using (c$_F$) and $B_w=\operatorname{sh}_{\widetilde{H}}(w)$ $$V(\widetilde{H})\setminus V(\widetilde{H}')\subseteq \operatorname{sh}_{\widetilde{H}}(w)\cup (V(\widetilde{H}_F)\setminus V(\widetilde{H}'))\subseteq B_w\cup \bigcup_{v\in V_{\leq k-1}(\widetilde{H}_F)}B_v=\bigcup_{v\in V_{\leq k-1}(\widetilde{H}_F)\cup \lbrace w\rbrace}B_v.$$ By (\ref{tHF-degk-1}) and the assumption $w\in V_{\leq k-1}(\widetilde{H})$ of Case B.2.b we have $V_{\leq k-1}(\widetilde{H}_F)\cup \lbrace w\rbrace\subseteq V_{\leq k-1}(\widetilde{H})$ and hence $$V(\widetilde{H})\setminus V(\widetilde{H}')\subseteq \bigcup_{v\in V_{\leq k-1}(\widetilde{H})}B_v,$$ (and in particular $V(\widetilde{H})\setminus V(\widetilde{H}')\subseteq V(H)$ as $B_v\subseteq V(H)$ for all $v\in V_{\leq k-1}(\widetilde{H})$). \item[(d)] Recall that no vertex in $\operatorname{sh}_{\widetilde{H}}(w)$ is adjacent to any vertex in $V(\widetilde{H})\setminus V(H)$. By (d$_F$) no vertex in $V(\widetilde{H}_F)\setminus V(\widetilde{H}')$ is adjacent to any vertex in $V(\widetilde{H}_F)\setminus V(F)$. By (\ref{tHFohneF}) this implies that no vertex in $V(\widetilde{H}_F)\setminus V(\widetilde{H}')$ is adjacent to any vertex in $V(\widetilde{H})\setminus V(H)\subseteq V(\widetilde{H}_F)\setminus V(F)$. 
Hence no vertex in $V(\widetilde{H})\setminus V(\widetilde{H}')= \operatorname{sh}_{\widetilde{H}}(w)\cup (V(\widetilde{H}_F)\setminus V(\widetilde{H}'))$ is adjacent to any vertex in $V(\widetilde{H})\setminus V(H)$. \item[(e)] Let $D\in \mathcal{C}_H$ and we have to show that $D\subseteq V(\widetilde{H}')$ or $D\cap V(\widetilde{H}')=\emptyset$. If $D\in \mathcal{C}_F$, this is clear from property (e$_F$). Otherwise, by (\ref{F-collection}) we have $D\subseteq \operatorname{sh}_H(w)=V(H)\setminus V(F)\subseteq V(\widetilde{H})\setminus V(F)$. By property (II) in Definition \ref{defi-shadow} for the shadow $\operatorname{sh}_{\widetilde{H}}(w)$ of $w$ in $\widetilde{H}$ we know $D\subseteq \operatorname{sh}_{\widetilde{H}}(w)$ or $D\cap \operatorname{sh}_{\widetilde{H}}(w)=\emptyset$. If $D\subseteq \operatorname{sh}_{\widetilde{H}}(w)=V(\widetilde{H})\setminus V(\widetilde{H}_F)$, then $D$ is disjoint from $V(\widetilde{H}')\subseteq V(\widetilde{H}_F)$. If $D\cap \operatorname{sh}_{\widetilde{H}}(w)=\emptyset$, then $D\subseteq V(\widetilde{H}_F)$ and by $D\subseteq V(\widetilde{H})\setminus V(F)$ we obtain $D\subseteq V(\widetilde{H}_F)\setminus V(F)$. On the other hand $V(\widetilde{H}_F)\setminus V(\widetilde{H}')\subseteq V(F)$ by (c$_F$) and hence $D\subseteq V(\widetilde{H}_F)\setminus V(F)\subseteq V(\widetilde{H}')$. So in any case we have shown $D\subseteq V(\widetilde{H}')$ or $D\cap V(\widetilde{H}')=\emptyset$. \item[(f)] Let $D\in \mathcal{C}_H$ and $D\subseteq V(\widetilde{H}')$. We have to show that $D$ is not adjacent to any vertex in $V(\widetilde{H})\setminus V(\widetilde{H}')$. Note that $D\subseteq V(\widetilde{H}')\subseteq V(\widetilde{H}_F)$ implies $D\cap \operatorname{sh}_{\widetilde{H}}(w)=\emptyset$ and by property (IV) in Definition \ref{defi-shadow} we can conclude that $D$ is not adjacent to any vertex in $\operatorname{sh}_{\widetilde{H}}(w)=V(\widetilde{H})\setminus V(\widetilde{H}_F)$. 
So it remains to show that $D$ is not adjacent to any vertex in $V(\widetilde{H}_F)\setminus V(\widetilde{H}')$. If $D\in \mathcal{C}_F$, then by (f$_F$) the set $D$ is not adjacent to any vertex in $V(\widetilde{H}_F)\setminus V(\widetilde{H}')$ and we are done. So let us now assume $D\not\in \mathcal{C}_F$, then by (\ref{F-collection}) we have $D\subseteq \operatorname{sh}_H(w)=V(H)\setminus V(F)$. On the other hand $D\subseteq V(\widetilde{H}')\subseteq V(\widetilde{H}_F)$, hence $D\subseteq V(\widetilde{H}_F)\setminus V(F)$. So by (d$_F$) we obtain that $D$ is not adjacent to any vertex in $V(\widetilde{H}_F)\setminus V(\widetilde{H}')$. \end{itemize} This finishes the proof. \textit{Acknowledgements.} The author would like to thank her advisor Jacob Fox for suggesting this problem and for very helpful comments on earlier versions of this paper. Furthermore, the author is grateful to the anonymous referees for their useful comments and suggestions.
\section{Introduction} Mixed univariate distributions have been introduced and studied in recent years by compounding continuous and discrete distributions. Marshall and Olkin (1997) introduced a class of distributions that can be obtained by taking the minimum and maximum of independent and identically distributed (iid) continuous random variables (independent of the random sample size), where the sample size follows a geometric distribution. Chahkandi and Ganjali (2009) introduced some lifetime distributions by compounding exponential and power series distributions; these models are called exponential power series (EPS) distributions. Recently, Morais and Barreto-Souza (2011) introduced a class of distributions obtained by mixing Weibull and power series distributions and studied several of its statistical properties. This class contains the EPS distributions and other lifetime models studied recently, for example, the Weibull-geometric distribution (Marshall and Olkin, 1997; Barreto-Souza et al., 2011). The reader is referred to the introduction of Morais and Barreto-Souza's (2011) article for a brief literature review of univariate distributions obtained by compounding. A mixed bivariate law with exponential and geometric marginals was introduced by Kozubowski and Panorska (2005) and named the bivariate exponential-geometric (BEG) distribution. A bivariate random vector $(X,N)$ follows the BEG law if it admits the stochastic representation: \begin{eqnarray}\label{rep} \left(X,N\right)\stackrel{d}{=}\left(\sum_{i=1}^NX_i,N\right), \end{eqnarray} where the variable $N$ follows a geometric distribution and $\{X_i\}_{i=1}^\infty$ is a sequence of iid exponential variables, independent of $N$. The BEG law is infinitely divisible and therefore leads to a bivariate L\'evy process, in this case with gamma and negative binomial marginal processes. This bivariate process, named the BGNB motion, was introduced and studied by Kozubowski et al. (2008).
Other multivariate distributions involving exponential and geometric distributions have been studied in the literature. Kozubowski and Panorska (2008) introduced and studied a bivariate distribution involving the geometric maximum of exponential variables. A trivariate distribution involving geometric sums and maxima of exponential variables was also recently introduced by Kozubowski et al. (2011). Our chief goal in this article is to introduce a three-parameter extension of the BEG law. We refer to this new three-parameter distribution as the bivariate gamma-geometric (BGG) law. Further, we show that this extended distribution is infinitely divisible and, therefore, it induces a bivariate L\'evy process which has the BGNB motion as a particular case. The additional parameter controls the shape of the continuous part of our models. Our bivariate distribution may be applied in areas such as hydrology and finance. We here focus on finance applications and use the BGG law for modeling log-returns (the $X_i$'s) corresponding to a daily exchange rate. More specifically, we are interested in modeling cumulative log-returns (the $X$) in growth periods of the exchange rates. In this case $N$ represents the duration of the growth period, during which the consecutive log-returns are positive. As mentioned by Kozubowski and Panorska (2005), the geometric sum represented by $X$ in (\ref{rep}) is very useful in several fields including water resources, climate research and finance. We refer the reader to the introduction of Kozubowski and Panorska's (2005) article for a good discussion of practical situations where random vectors with representation (\ref{rep}) may be useful. The present article is organized as follows. In Section 2 we introduce the bivariate gamma-geometric law and derive basic statistical properties, including a study of some properties of its marginal and conditional distributions. Further, we show that our proposed law is infinitely divisible.
Estimation by maximum likelihood and inference for large samples are addressed in Section 3, which also contains a proposed reparametrization of the model in order to obtain orthogonality of the parameters in the sense of Cox and Reid (1987). An application to a real data set is presented in Section 4. The induced L\'evy process is approached in Section 5 and some of its basic properties are shown. We include a study of the bivariate distribution of the process at a fixed time and also discuss estimation of the parameters and inferential aspects. We close the article with concluding remarks in Section 6. \section{The law and basic properties} The bivariate gamma-geometric (BGG) law is defined by the stochastic representation (\ref{rep}), now assuming that $\{X_i\}_{i=1}^\infty$ is a sequence of iid gamma variables independent of $N$ and with probability density function given by $g(x;\alpha,\beta)=\beta^\alpha/\Gamma(\alpha)x^{\alpha-1}e^{-\beta x}$, for $x>0$ and $\alpha,\beta>0$; we denote $X_i\sim\Gamma(\alpha,\beta)$. As before, $N$ is a geometric variable with probability mass function given by $P(N=n)=p(1-p)^{n-1}$, for $n\in\mathbb{N}$; we denote $N\sim\mbox{Geom}(p)$. Clearly, the BGG law contains the BEG law as a particular case, for the choice $\alpha=1$. The joint density function $f_{X,N}(\cdot,\cdot)$ of $(X,N)$ is given by \begin{eqnarray}\label{density} f_{X,N}(x,n)=\frac{\beta^{n\alpha}}{\Gamma(\alpha n)}x^{n\alpha-1}e^{-\beta x}p(1-p)^{n-1}, \quad x>0,\,\, n\in\mathbb{N}. \end{eqnarray} Hence, it follows that the joint cumulative distribution function (cdf) of the BGG distribution can be expressed as \begin{eqnarray*} P(X\leq x, N\leq n)=p\sum_{j=1}^n(1-p)^{j-1}\frac{\Gamma_{\beta x}(j\alpha)}{\Gamma(j\alpha)}, \end{eqnarray*} for $x>0$ and $n\in\mathbb{N}$, where $\Gamma_x(\alpha)=\int_0^xt^{\alpha-1}e^{-t}dt$ is the incomplete gamma function. We will denote $(X,N)\sim\mbox{BGG}(\beta,\alpha,p)$.
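As a quick numerical sanity check of the joint density (\ref{density}), one can verify that it integrates (in $x$) and sums (in $n$) to approximately one. The sketch below is plain Python; the helper name \texttt{bgg\_density}, the grid spacing and the truncation levels are our own choices, not part of the model:

```python
import math

def bgg_density(x, n, beta, alpha, p):
    """Joint density f_{X,N}(x, n) of the BGG(beta, alpha, p) law."""
    return (beta ** (n * alpha) / math.gamma(n * alpha)
            * x ** (n * alpha - 1) * math.exp(-beta * x)
            * p * (1 - p) ** (n - 1))

beta, alpha, p = 2.0, 1.5, 0.4
dx = 0.01
# Left Riemann sum in x over (0, 60], geometric series in n truncated at 60;
# both tails are negligible for these parameter values.
total = sum(bgg_density(i * dx, n, beta, alpha, p) * dx
            for n in range(1, 61) for i in range(1, 6001))
# total is close to 1 up to discretization and truncation error
```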
We now show that $(pX,pN)\stackrel{d}{\rightarrow}(\alpha Z/\beta,Z)$ as $p\rightarrow0^+$, where `$\stackrel{d}{\rightarrow}$' denotes convergence in distribution and $Z$ is an exponential variable with mean 1; for $\alpha=1$, we recover the result given in Proposition 2.3 of Kozubowski and Panorska (2005). For this, we use the moment generating function of the BGG distribution, which is given in Subsection 2.2. Hence, we have that $E(e^{tpX+spN})=\varphi(pt,ps)$, where $\varphi(\cdot,\cdot)$ is given by (\ref{mgf}). Using L'H\^opital's rule, one may check that $E(e^{tpX+spN})\rightarrow(1-s-\alpha t/\beta)^{-1}$ as $p\rightarrow0^+$, which is the moment generating function of $(\alpha Z/\beta,Z)$. \subsection{Marginal and conditional distributions} The marginal density of $X$ with respect to Lebesgue measure is an infinite mixture of gamma densities, given by \begin{eqnarray}\label{marginal} f_X(x)=\sum_{n=1}^\infty P(N=n)g(x;n\alpha,\beta)=\frac{px^{-1}e^{-\beta x}}{1-p}\sum_{n=1}^\infty\frac{[(\beta x)^\alpha(1-p)]^n}{\Gamma(n\alpha)}, \quad x>0. \end{eqnarray} Therefore, the BGG distribution has an infinite mixture of gamma distributions as the marginal of $X$ and a geometric marginal for $N$. Some alternative expressions for the marginal density of $X$ can be obtained. For example, for $\alpha=1$, we obtain the exponential density. Further, with help from Wolfram\footnote{http://www.wolframalpha.com/}, for $\alpha=1/2, 2, 3, 4$, we have that $$f_X(x)=p\beta^{1/2} x^{-1/2}e^{-\beta x}\{a(x)e^{a(x)^2}(1+\mbox{erf}(a(x)))+\pi^{-1/2}\},$$ $$f_X(x)=\frac{p\beta e^{-\beta x}}{\sqrt{1-p}}\sinh(\beta x\sqrt{1-p}),$$ $$f_X(x)=\frac{px^{-1} e^{-\beta x}}{3(1-p)}a(x)^{1/3}e^{-a(x)^{1/3}/2}\{e^{3a(x)^{1/3}/2}-2\sin(1/6(3\sqrt{3}a(x)^{1/3}+\pi))\},$$ and $$f_X(x)=\frac{px^{-1} e^{-\beta x}}{2(1-p)}a(x)^{1/4}\{\sinh(a(x)^{1/4})-\sin(a(x)^{1/4})\},$$ respectively, where $a(x)\equiv a(x;\beta,\alpha,p)=(1-p)(\beta x)^\alpha$ and $\mbox{erf}(x)=2\pi^{-1/2}\int_0^xe^{-t^2}dt$ is the error function.
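The closed forms above can be cross-checked against the mixture series (\ref{marginal}). A minimal sketch (the helper name \texttt{fX\_series} and the truncation level are ours; terms are computed in log space to avoid overflow of $\Gamma(n\alpha)$), comparing the series with the exponential density for $\alpha=1$ and the $\sinh$ expression for $\alpha=2$:

```python
import math

def fX_series(x, beta, alpha, p, nmax=120):
    """Marginal density of X as the truncated gamma-mixture series."""
    return sum(math.exp(math.log(p) + (n - 1) * math.log(1 - p)
                        + n * alpha * math.log(beta)
                        + (n * alpha - 1) * math.log(x)
                        - beta * x - math.lgamma(n * alpha))
               for n in range(1, nmax + 1))

beta, p, x = 1.0, 0.3, 2.5
# alpha = 1: a geometric sum of exponentials is exponential with rate p*beta
exp_form = p * beta * math.exp(-p * beta * x)
# alpha = 2: the sinh expression displayed above
sinh_form = (p * beta * math.exp(-beta * x) / math.sqrt(1 - p)
             * math.sinh(beta * x * math.sqrt(1 - p)))
err1 = abs(fX_series(x, beta, 1.0, p) - exp_form)
err2 = abs(fX_series(x, beta, 2.0, p) - sinh_form)
```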
Figure \ref{marginaldensities} shows some plots of the marginal density of $X$ for $\beta=1$, $p=0.2,0.8$ and some values of $\alpha$. \begin{figure}[h!] \centering \includegraphics[width=0.33\textwidth]{figura_2.pdf}\includegraphics[width=0.33\textwidth]{figura_8.pdf} \caption{Plots of the marginal density of $X$ for $\beta=1$, $\alpha=0.5,1,2,3,4$, $p=0.2$ (left) and $p=0.8$ (right).} \label{marginaldensities} \end{figure} We now obtain some conditional distributions which may be useful in goodness-of-fit analyses when the BGG distribution is assumed to model real data (see Section 4). Let $m\leq n$ be positive integers and $x>0$. The conditional cdf of $(X,N)$ given $N\leq n$ is $$P(X\leq x,N\leq m|N\leq n)=\frac{p}{1-(1-p)^n}\sum_{j=1}^m(1-p)^{j-1}\frac{\Gamma_{\beta x}(j\alpha)}{\Gamma(j\alpha)}.$$ We have that $P(X\leq x| N\leq n)$ is given by the right side of the above expression with $n$ replacing $m$. For $0<x\leq y$ and $n\in\mathbb{N}$, the conditional cdf of $(X,N)$ given $X\leq y$ is $$P(X\leq x,N\leq n|X\leq y)=\frac{\sum_{j=1}^n(1-p)^{j-1}\Gamma_{\beta x}(j\alpha)/\Gamma(j\alpha)}{\sum_{j=1}^\infty (1-p)^{j-1}\Gamma_{\beta y}(j\alpha)/\Gamma(j\alpha)}.$$ The conditional probability $P(N\leq n|X\leq y)$ is given by the right side of the above expression with $y$ replacing $x$. From (\ref{density}) and (\ref{marginal}), we obtain that the conditional probability mass function of $N$ given $X=x$ is \begin{eqnarray*}\label{NgivenX} P(N=n|X=x)=\frac{[(1-p)(\beta x)^{\alpha}]^n/\Gamma(\alpha n)}{\sum_{j=1}^\infty[(1-p)(\beta x)^{\alpha}]^j/\Gamma(j\alpha)}, \end{eqnarray*} for $n\in\mathbb{N}$. If $\alpha$ is known, the above probability mass function belongs to the one-parameter power series class of distributions; for instance, see Noack (1950). In this case, the parameter would be $(1-p)(\beta x)^\alpha$. 
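The conditional probability mass function of $N$ given $X=x$ is easy to evaluate by normalizing the power-series weights $w_j=[(1-p)(\beta x)^\alpha]^j/\Gamma(\alpha j)$. The sketch below (helper name ours) checks the normalization and, for $\alpha=2$, a closed form that follows from $\Gamma(2n)=(2n-1)!$ and the series expansion of $\sinh$:

```python
import math

def cond_pmf(n, x, beta, alpha, p, nmax=200):
    """P(N = n | X = x) by normalizing w_j = [(1-p)(beta*x)**alpha]**j / Gamma(alpha*j)."""
    t = (1 - p) * (beta * x) ** alpha
    w = [math.exp(j * math.log(t) - math.lgamma(alpha * j))
         for j in range(1, nmax + 1)]
    return w[n - 1] / sum(w)

beta, alpha, p, x = 1.0, 2.0, 0.3, 2.0
total = sum(cond_pmf(n, x, beta, alpha, p) for n in range(1, 201))
# Closed form for alpha = 2 (shown here for n = 3), via Gamma(2n) = (2n-1)!:
u = beta * x * math.sqrt(1 - p)
closed3 = ((1 - p) ** (3 - 0.5) * (beta * x) ** 5
           / (math.factorial(5) * math.sinh(u)))
err = abs(cond_pmf(3, x, beta, alpha, p) - closed3)
```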
For $\alpha=1$, we obtain that $N-1$ given $X=x$ follows a Poisson distribution with parameter $\beta x(1-p)$, in agreement with formula (7) from Kozubowski and Panorska (2005). For the choice $\alpha=2$, we have that $$P(N=n|X=x)=\frac{(1-p)^{n-1/2}(\beta x)^{2n-1}}{(2n-1)!\sinh(\beta x\sqrt{1-p})},$$ where $n\in\mathbb{N}$. \subsection{Moments} The moment generating function (mgf) of the BGG law is \begin{eqnarray*} \varphi(t,s)=E\left(e^{tX+sN}\right)=E\left[e^{sN}E\left(e^{tX}|N\right)\right]=E\left\{\left[e^s\left(\frac{\beta}{\beta-t}\right)^\alpha\right]^N\right\}, \quad t<\beta,\, s\in\mathbb{R}, \end{eqnarray*} and then \begin{eqnarray}\label{mgf} \varphi(t,s)=\frac{pe^s\beta^\alpha}{(\beta-t)^\alpha-e^s\beta^\alpha(1-p)}, \end{eqnarray} for $t<\beta\{1-[(1-p)e^s]^{1/\alpha}\}$. The characteristic function may be obtained in a similar way and is given by \begin{eqnarray}\label{cf} \Phi(t,s)=\frac{pe^{is}\beta^\alpha}{(\beta-it)^\alpha-e^{is}\beta^\alpha(1-p)}, \end{eqnarray} for $t,s\in\mathbb{R}$. With this, the product and marginal moments can be obtained by computing $E(X^mN^k)=\partial^{m+k}\varphi(t,s)/\partial t^m\partial s^k|_{t,s=0}$ or $E(X^mN^k)=(-i)^{m+k}\partial^{m+k}\Phi(t,s)/\partial t^m\partial s^k|_{t,s=0}$. Hence, we obtain the following expression for the product moments of the random vector $(X,N)$: \begin{eqnarray}\label{pm} E(X^mN^k)=\frac{p\Gamma(m)}{\beta^m}\sum_{n=1}^\infty\frac{n^k(1-p)^{n-1}}{B(\alpha n,m)}, \end{eqnarray} where $B(a,b)=\Gamma(a)\Gamma(b)/\Gamma(a+b)$, for $a,b>0$, is the beta function. In particular, we obtain that $E(X)=\alpha(p\beta)^{-1}$, $E(N)=p^{-1}$ and the covariance matrix $\Sigma$ of $(X,N)$ is given by \begin{eqnarray}\label{cov} \Sigma= \left(\begin{array}{ll} \frac{(1-p)\alpha^2}{p^2\beta^2}+\frac{\alpha}{\beta^2p} & \frac{(1-p)\alpha}{\beta p^2}\\ \frac{(1-p)\alpha}{\beta p^2} & \frac{1-p}{p^2}\\ \end{array}\right).
\end{eqnarray} The correlation coefficient $\rho$ between $X$ and $N$ is $\rho=\sqrt{(1-p)/(1-p+p/\alpha)}$. Let $\rho^*=\sqrt{1-p}$, that is, the correlation coefficient of a bivariate random vector following the BEG law. For $\alpha\leq1$, we have $\rho\leq\rho^*$, and for $\alpha>1$, it follows that $\rho>\rho^*$. Figure \ref{coefcorr} shows some plots of the correlation coefficient of the BGG law as a function of $p$ for some values of $\alpha$. \begin{figure}[h] \centering \includegraphics[width=0.40\textwidth]{figura1a.pdf} \caption{Plots of the correlation coefficient of the BGG law as a function of $p$ for $\alpha=0.1,0.5,1,1.5,3$.} \label{coefcorr} \end{figure} From (\ref{mgf}), we find that the marginal mgf of $X$ is given by $$\varphi(t)=\frac{p\beta^\alpha}{(\beta-t)^\alpha-\beta^\alpha(1-p)},$$ for $t<\beta\{1-(1-p)^{1/\alpha}\}$. The following expression for the $r$th moment of $X$ can be obtained from the above formula or from (\ref{pm}): $$E(X^r)=\frac{p\Gamma(r)}{\beta^r}\sum_{n=1}^\infty\frac{(1-p)^{n-1}}{B(\alpha n,r)}.$$ We notice that the above expression is valid for any real $r>0$. \subsection{Infinite divisibility, geometric stability and representations} We now show that the BGG law is infinitely divisible, just as the BEG law is. Based on Kozubowski and Panorska (2005), we define the bivariate random vector $$(R,v)=\left(\sum_{i=1}^{1+nT}G_{i},\frac{1}{n}+T\right),$$ where the $G_{i}$'s are iid random variables following the $\Gamma(\alpha/n,\beta)$ distribution and independent of the random variable $T$, which follows the negative binomial $\mbox{NB}(r,p)$ distribution with probability mass function \begin{eqnarray}\label{nbpf} P(T=k)=\frac{\Gamma(k+r)}{k!\Gamma(r)}p^r(1-p)^k, \quad k\in\mathbb{N}\cup\{0\}, \end{eqnarray} where $r=1/n$.
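The moment formulas above are convenient to test by simulation from the defining representation (\ref{rep}). A minimal Monte Carlo sketch (sampler names, sample size and seed are ours) checks $E(X)=\alpha/(p\beta)$, $E(N)=1/p$ and the correlation coefficient $\rho$:

```python
import math
import random

def rgeom(p, rng):
    """Geometric on {1, 2, ...} with success probability p (inversion method)."""
    return int(math.log(1.0 - rng.random()) / math.log(1.0 - p)) + 1

def rbgg(beta, alpha, p, rng):
    """One draw of (X, N): N geometric, X | N = n ~ Gamma(n*alpha, beta)."""
    n = rgeom(p, rng)
    return rng.gammavariate(n * alpha, 1.0 / beta), n

rng = random.Random(12345)
beta, alpha, p = 2.0, 1.5, 0.4
pairs = [rbgg(beta, alpha, p, rng) for _ in range(100_000)]
mean_x = sum(x for x, _ in pairs) / len(pairs)  # target alpha/(p*beta) = 1.875
mean_n = sum(n for _, n in pairs) / len(pairs)  # target 1/p = 2.5
cxy = sum((x - mean_x) * (n - mean_n) for x, n in pairs) / len(pairs)
vx = sum((x - mean_x) ** 2 for x, _ in pairs) / len(pairs)
vn = sum((n - mean_n) ** 2 for _, n in pairs) / len(pairs)
rho_hat = cxy / math.sqrt(vx * vn)
rho = math.sqrt((1 - p) / (1 - p + p / alpha))  # theoretical value
```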
The moment generating function of $(R,v)$ is given by \begin{eqnarray*}\label{idmgf} E\left(e^{tR+sv}\right)&=&E\left[e^{s/n+sT}E\left(e^{t\sum_{i=1}^{1+nT}G_i}\big|T\right)\right]\\ &=&e^{s/n}\left(\frac{\beta}{\beta-t}\right)^{\alpha/n}E\left\{\left[e^s\left(\frac{\beta}{\beta-t}\right)^\alpha\right]^T\right\}\\ &=&\left\{\frac{pe^s\beta^\alpha}{(\beta-t)^\alpha-e^s\beta^\alpha(1-p)}\right\}^r, \end{eqnarray*} which is valid for $t<\beta\{1-[(1-p)e^s]^{1/\alpha}\}$ and $s\in\mathbb{R}$. In a similar way, we obtain that the characteristic function is given by \begin{eqnarray}\label{idcf} E(e^{itR+isv})=\left\{\frac{pe^{is}\beta^\alpha}{(\beta-it)^\alpha-e^{is}\beta^\alpha(1-p)}\right\}^r, \end{eqnarray} for $t,s\in\mathbb{R}$. With this, we have that $E(e^{itR+isv})=\Phi(t,s)^{1/n}$, where $\Phi(t,s)$ is the characteristic function of the BGG law given in (\ref{cf}). In words, the BGG distribution is infinitely divisible. The exponential, geometric and BEG distributions are closed under geometric summation. We now show that our distribution also enjoys this geometric stability property. Let $\{(X_i,N_i)\}_{i=1}^\infty$ be iid random vectors following the $\mbox{BGG}(\beta,\alpha,p)$ distribution, independent of $M$, where $M\sim\mbox{Geom}(q)$, with $0<q<1$. By using (\ref{mgf}) and the probability generating function of the geometric distribution, one may easily check that $$\sum_{i=1}^M(X_i,N_i)\sim\mbox{BGG}(\beta,\alpha,pq).$$ From the above result, we find another stochastic representation of the BGG law, which generalizes Proposition 4.2 of Kozubowski and Panorska (2005): $$(X,N)\stackrel{d}{=}\sum_{i=1}^M(X_i,N_i),$$ where $\{(X_i,N_i)\}_{i=1}^\infty\stackrel{iid}{\sim}\mbox{BGG}(\beta,\alpha,p/q)$, with $0<p<q<1$, and $M$ is defined as before.
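The geometric stability property can also be checked by simulation: summing $M\sim\mbox{Geom}(q)$ iid $\mbox{BGG}(\beta,\alpha,p)$ vectors should reproduce the moments of a $\mbox{BGG}(\beta,\alpha,pq)$ law. A sketch (sampler names, sample size and seed ours):

```python
import math
import random

def rgeom(p, rng):
    """Geometric on {1, 2, ...} with success probability p (inversion method)."""
    return int(math.log(1.0 - rng.random()) / math.log(1.0 - p)) + 1

def rbgg(beta, alpha, p, rng):
    """One draw of (X, N): N geometric, X | N = n ~ Gamma(n*alpha, beta)."""
    n = rgeom(p, rng)
    return rng.gammavariate(n * alpha, 1.0 / beta), n

rng = random.Random(7)
beta, alpha, p, q = 2.0, 1.5, 0.4, 0.5

def geometric_sum():
    m = rgeom(q, rng)
    draws = [rbgg(beta, alpha, p, rng) for _ in range(m)]
    return sum(x for x, _ in draws), sum(n for _, n in draws)

pairs = [geometric_sum() for _ in range(100_000)]
mean_x = sum(x for x, _ in pairs) / len(pairs)  # target alpha/(p*q*beta) = 3.75
mean_n = sum(n for _, n in pairs) / len(pairs)  # target 1/(p*q) = 5.0
```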
In what follows, another representation of the BGG law is provided by showing that it is a convolution of a bivariate distribution (with gamma and degenerate-at-1 marginals) and a compound Poisson distribution. Let $\{Z_i\}_{i=1}^\infty$ be a sequence of iid random variables following a logarithmic distribution with probability mass function $P(Z_i=k)=(1-p)^k(\lambda k)^{-1}$, for $k\in\mathbb{N}$, where $\lambda=-\log p$. Define the random variable $Q\sim\mbox{Poisson}(\lambda)$, independent of the $Z_i$'s. Given the sequence $\{Z_i\}_{i=1}^\infty$, let $G_i\sim\Gamma(\alpha Z_i,\beta)$, for $i\in\mathbb{N}$, be a sequence of independent random variables and let $G\sim\Gamma(\alpha,\beta)$ be independent of all previously defined variables. Then, we have that \begin{eqnarray}\label{cp} (X,N)\stackrel{d}{=}(G,1)+\sum_{i=1}^{Q}(G_i,Z_i). \end{eqnarray} Taking $\alpha=1$ in (\ref{cp}), we obtain Proposition 4.3 of Kozubowski and Panorska (2005). To show that the above representation holds, we use the probability generating functions $E\left(t^Q\right)=e^{\lambda(t-1)}$ (for $t\in\mathbb{R}$) and $E\left(t^{Z_i}\right)=\log(1-(1-p)t)/\log p$ (for $t<(1-p)^{-1}$). With this, it follows that \begin{eqnarray}\label{cp1} E\left(e^{t(G+\sum_{i=1}^QG_i)+s(1+\sum_{i=1}^QZ_i)}\right)&=&e^s\left(\frac{\beta}{\beta-t}\right)^\alpha E\left\{\left[E\left(e^{tG_1+sZ_1}\right)\right]^Q\right\}\nonumber\\ &=&e^s\left(\frac{\beta}{\beta-t}\right)^\alpha e^{\lambda\left\{E\left(e^{tG_1+sZ_1}\right)-1\right\}}, \end{eqnarray} for $t<\beta$. Furthermore, for $t<\beta\{1-[(1-p)e^s]^{1/\alpha}\}$, we have that \begin{eqnarray*}\label{cp2} E\left(e^{tG_1+sZ_1}\right)=E\left\{\left[\frac{e^s\beta^\alpha}{(\beta-t)^\alpha}\right]^{Z_1}\right\}=\frac{\log\{1-(1-p)e^s\beta^\alpha/(\beta-t)^\alpha\}}{\log p}. \end{eqnarray*} By using the above result in (\ref{cp1}), we obtain the representation (\ref{cp}).
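The compound Poisson representation (\ref{cp}) can be simulated directly. The sketch below (sampler names and parameter values ours; Knuth's product method for the Poisson draw and inverse-cdf sampling for the logarithmic distribution) checks the mean of $X$ and the exact relation $P(N=1)=P(Q=0)=e^{-\lambda}=p$:

```python
import math
import random

rng = random.Random(99)
beta, alpha, p = 2.0, 1.5, 0.4
lam = -math.log(p)

def rpois(lam, rng):
    """Poisson draw via Knuth's product method (adequate for small lam)."""
    L, k, prod = math.exp(-lam), 0, rng.random()
    while prod > L:
        k += 1
        prod *= rng.random()
    return k

def rlog(p, lam, rng):
    """Logarithmic distribution P(Z=k) = (1-p)**k / (lam*k), k >= 1 (inverse cdf)."""
    u, k, cum = rng.random(), 1, (1 - p) / lam
    while u > cum:
        k += 1
        cum += (1 - p) ** k / (lam * k)
    return k

def draw():
    zs = [rlog(p, lam, rng) for _ in range(rpois(lam, rng))]
    x = (rng.gammavariate(alpha, 1.0 / beta)
         + sum(rng.gammavariate(alpha * z, 1.0 / beta) for z in zs))
    return x, 1 + sum(zs)

pairs = [draw() for _ in range(100_000)]
mean_x = sum(x for x, _ in pairs) / len(pairs)             # target alpha/(p*beta) = 1.875
prop_n1 = sum(1 for _, n in pairs if n == 1) / len(pairs)  # target p = 0.4
```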
\section{Estimation and inference} Let $(X_1,N_1)$, \ldots, $(X_n,N_n)$ be a random sample from the $\mbox{BGG}(\beta,\alpha,p)$ distribution and $\theta=(\beta,\alpha,p)^\top$ be the parameter vector. The log-likelihood function $\ell=\ell(\theta)$ is given by \begin{eqnarray}\label{loglik} \ell&\propto& n\alpha\log\beta\,\bar{N}_n+n\log p-n\beta\bar{X}_n+n\log(1-p)(\bar{N}_n-1)\nonumber\\ &&+\sum_{i=1}^n\left\{\alpha N_i\log X_i-\log\Gamma(\alpha N_i)\right\}, \end{eqnarray} where $\bar{X}_n=\sum_{i=1}^nX_i/n$ and $\bar{N}_n=\sum_{i=1}^nN_i/n$. The score function $U(\theta)=(\partial\ell/\partial\beta,\partial\ell/\partial\alpha,\partial\ell/\partial p)^\top$ associated with the log-likelihood function (\ref{loglik}) has components \begin{eqnarray*} \frac{\partial\ell}{\partial\beta}=n\left(\frac{\alpha\bar{N}_n}{\beta}-\bar{X}_n\right), \quad \frac{\partial\ell}{\partial\alpha}=n\bar{N}_n\log\beta +\sum_{i=1}^n\left\{N_i\log X_i-N_i\Psi(\alpha N_i)\right\} \end{eqnarray*} and \begin{eqnarray}\label{scorep} \frac{\partial\ell}{\partial p}=\frac{n}{p}-\frac{n(\bar{N}_n-1)}{1-p}, \end{eqnarray} where $\Psi(x)=d\log\Gamma(x)/dx$. By solving the nonlinear system of equations $U(\theta)=0$, it follows that the maximum likelihood estimators (MLEs) of the parameters satisfy \begin{eqnarray}\label{mles} \widehat\beta=\widehat\alpha\frac{\bar{N}_n}{\bar{X}_n},\quad \widehat{p}=\frac{1}{\bar{N}_n}\quad\mbox{and}\quad \sum_{i=1}^nN_i\Psi(\widehat\alpha N_i)-n\bar{N}_n\log\left(\frac{\widehat\alpha \bar{N}_n}{\bar{X}_n}\right)=\sum_{i=1}^nN_i\log X_i. \end{eqnarray} Since the MLE of $\alpha$ cannot be found in closed form, nonlinear optimization algorithms such as Newton or quasi-Newton methods are needed. We are now interested in constructing confidence intervals for the parameters. For this, the Fisher information matrix is required.
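The estimation scheme is straightforward to implement: $\widehat p$ and the expression for $\widehat\beta$ are explicit, and the score equation in $\alpha$ is strictly increasing (since $\Psi'(x)>1/x$), so even simple bisection suffices. A self-contained sketch (all helper names ours; the digamma function $\Psi$ is approximated by a central difference of \texttt{math.lgamma} since the Python standard library has no digamma), tested on data simulated from $\mbox{BGG}(2,1.5,0.4)$:

```python
import math
import random

def digamma(x, h=1e-6):
    """Crude numerical digamma via a central difference of lgamma."""
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2.0 * h)

def fit_bgg(xs, ns):
    """MLEs (beta_hat, alpha_hat, p_hat); bisection for the alpha score equation."""
    n = len(xs)
    xbar, nbar = sum(xs) / n, sum(ns) / n
    p_hat = 1.0 / nbar
    slogx = sum(Ni * math.log(xi) for xi, Ni in zip(xs, ns))
    def score(a):  # strictly increasing in a, so bisection works
        return (sum(Ni * digamma(a * Ni) for Ni in ns)
                - n * nbar * math.log(a * nbar / xbar) - slogx)
    lo, hi = 1e-3, 50.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if score(mid) < 0 else (lo, mid)
    a_hat = 0.5 * (lo + hi)
    return a_hat * nbar / xbar, a_hat, p_hat

# Simulate a sample and refit (sample size and seed arbitrary).
rng = random.Random(2024)
beta0, alpha0, p0 = 2.0, 1.5, 0.4
ns = [int(math.log(1.0 - rng.random()) / math.log(1.0 - p0)) + 1
      for _ in range(2000)]
xs = [rng.gammavariate(Ni * alpha0, 1.0 / beta0) for Ni in ns]
beta_hat, alpha_hat, p_hat = fit_bgg(xs, ns)
```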
The information matrix $J(\theta)$ is \begin{eqnarray}\label{inform} J(\theta)= \left(\begin{array}{lll} \kappa_{\beta\beta} & \kappa_{\beta\alpha} & 0\\ \kappa_{\beta\alpha} & \kappa_{\alpha\alpha} & 0\\ 0 & 0 & \kappa_{pp} \\ \end{array}\right), \end{eqnarray} with $$\kappa_{\beta\beta}=\frac{\alpha}{\beta^2p},\quad \kappa_{\beta\alpha}=-\frac{1}{\beta p},\quad \kappa_{\alpha\alpha}=p\sum_{j=1}^\infty j^2(1-p)^{j-1}\Psi'(j\alpha) \quad \mbox{and} \quad \kappa_{pp}=\frac{1}{p^2(1-p)},$$ where $\Psi'(x)=d\Psi(x)/dx$. Standard large-sample theory gives that $\sqrt{n}(\widehat\theta-\theta)\stackrel{d}{\rightarrow}N_3\left(0,J^{-1}(\theta)\right)$ as $n\rightarrow\infty$, where $J^{-1}(\theta)$ is the inverse matrix of $J(\theta)$ defined in (\ref{inform}). The asymptotic multivariate normal distribution of $\sqrt{n}(\widehat\theta-\theta)$ can be used to construct approximate confidence intervals and confidence regions for the parameters. Further, we can compute the maximum values of the unrestricted and restricted log-likelihoods to construct the likelihood ratio (LR) statistic for testing some sub-models of the BGG distribution. For example, we may use the LR statistic for testing the hypotheses $H_0 \mbox{:} \,\,\alpha=1$ versus $H_1 \mbox{:} \,\,\alpha\neq1$, which corresponds to testing the BEG distribution against the BGG distribution. \subsection{A reparametrization} We here propose a reparametrization of the bivariate gamma-geometric distribution and show its advantages over the previous one. Consider the reparametrization $\mu=\alpha/\beta$, with $\alpha$ and $p$ as before. Define now the parameter vector $\theta^*=(\mu,\alpha,p)^\top$. Hence, the density (\ref{density}) now becomes \begin{eqnarray*} f^*_{X,N}(x,n)=\frac{(\alpha/\mu)^{n\alpha}}{\Gamma(\alpha n)}x^{n\alpha-1}e^{-\alpha x/\mu}p(1-p)^{n-1}, \quad x>0,\,\, n\in\mathbb{N}. \end{eqnarray*} We shall denote $(X,N)\sim\mbox{BGG}(\mu,\alpha,p)$.
Therefore, if $(X_1,N_1)$, \ldots, $(X_n,N_n)$ is a random sample from the $\mbox{BGG}(\mu,\alpha,p)$ distribution, the log-likelihood function $\ell^*=\ell(\theta^*)$ is given by \begin{eqnarray}\label{loglikr} \ell^*&\propto& n\alpha\log\left(\frac{\alpha}{\mu}\right)\bar{N}_n+n\log p-n\frac{\alpha}{\mu}\bar{X}_n+n\log(1-p)(\bar{N}_n-1)\nonumber\\ &&+\sum_{i=1}^n\left\{\alpha N_i\log X_i-\log\Gamma(\alpha N_i)\right\}. \end{eqnarray} The score function associated with (\ref{loglikr}) is $U^*(\theta^*)=(\partial\ell^*/\partial\mu,\partial\ell^*/\partial\alpha,\partial\ell^*/\partial p)^\top$, where \begin{eqnarray*} \frac{\partial\ell^*}{\partial\mu}=\frac{n\alpha}{\mu}\left(\frac{\bar{X}_n}{\mu}-\bar{N}_n\right), \quad \frac{\partial\ell^*}{\partial\alpha}=n\bar{N}_n\log\left(\frac{\alpha}{\mu}\right)+\sum_{i=1}^nN_i\{\log X_i-\Psi(\alpha N_i)\} \end{eqnarray*} and $\partial\ell^*/\partial p$ is given by (\ref{scorep}). The MLE of $p$ is given (as before) in (\ref{mles}), and the MLEs of $\mu$ and $\alpha$ satisfy $$\widehat\mu=\frac{\bar{X}_n}{\bar{N}_n}\quad\mbox{and}\quad \sum_{i=1}^nN_i\Psi(\widehat\alpha N_i)-n\bar{N}_n\log\left(\widehat\alpha\frac{\bar{N}_n}{\bar{X}_n}\right)=\sum_{i=1}^nN_i\log X_i.$$ As before, a nonlinear optimization algorithm is needed to find the MLE of $\alpha$. Under this reparametrization, Fisher's information matrix $J^*(\theta^*)$ becomes \begin{eqnarray*} J^*(\theta^*)= \left(\begin{array}{lll} \kappa^*_{\mu\mu} & 0 & 0\\ 0 & \kappa^*_{\alpha\alpha} & 0\\ 0 & 0 & \kappa^*_{pp} \\ \end{array}\right), \end{eqnarray*} with $$\kappa^*_{\mu\mu}=\frac{\alpha}{\mu^2p},\quad \kappa^*_{\alpha\alpha}=p\sum_{j=1}^\infty j^2(1-p)^{j-1}\Psi'(j\alpha)-\frac{1}{\alpha p}\quad \mbox{and} \quad \kappa^*_{pp}=\kappa_{pp}.$$ The asymptotic distribution of $\sqrt{n}(\widehat\theta^*-\theta^*)$ is trivariate normal with zero mean and covariance matrix $J^{*\,-1}(\theta^*)=\mbox{diag}\{1/\kappa^*_{\mu\mu}, 1/\kappa^*_{\alpha\alpha},1/\kappa^*_{pp}\}$.
We see that under this reparametrization we have orthogonal parameters in the sense of Cox and Reid (1987); the information matrix is diagonal. With this, we obtain desirable properties such as asymptotic independence of the estimates of the parameters. The reader is referred to Cox and Reid (1987) for more details. \section{Application} Here, we show the usefulness of the bivariate gamma-geometric law applied to a real data set. We consider daily exchange rates between the Brazilian real and U.K. pounds, quoted in Brazilian reais, covering May 22, 2001 to December 31, 2009. From these, we obtain the daily log-returns, that is, the logarithms of the ratios of consecutive exchange rates. Figure \ref{log-returns} illustrates the daily exchange rates and the log-returns. \begin{figure}[h!] \centering \includegraphics[width=0.33\textwidth]{exchange_rate_plot.pdf}\includegraphics[width=0.33\textwidth]{log-returns_plot.pdf} \caption{Graphics of the daily exchange rates and log-returns.} \label{log-returns} \end{figure} We will jointly model the magnitude and duration of the consecutive positive log-returns using the BGG law. We note that the duration of the consecutive positive log-returns coincides with the duration of the growth periods of the exchange rates. The data set consists of 549 pairs $(X_i,N_i)$, where $X_i$ and $N_i$ are the magnitude and duration as described before, for $i=1,\ldots,549$. This approach of looking jointly at the magnitude and duration of the consecutive positive log-returns was first proposed by Kozubowski and Panorska (2005) with the BEG model, which showed a good fit to other currencies considered there. Suppose $\{(X_i,N_i)\}_{i=1}^{549}$ are iid random vectors following the $\mbox{BGG}(\mu,\alpha,p)$ distribution. We work with the reparametrization proposed in Subsection 3.1.
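The construction of the pairs $(X_i,N_i)$ from a price series can be sketched as follows (a minimal helper, with a toy series for illustration; the actual exchange-rate data are not reproduced here):

```python
import numpy as np

def growth_periods(rates):
    """Split the daily log-returns of a price series into runs of consecutive
    positive returns, returning one (magnitude X_i, duration N_i) pair per run."""
    logret = np.diff(np.log(np.asarray(rates, dtype=float)))
    pairs, total, length = [], 0.0, 0
    for r in logret:
        if r > 0:
            total += r
            length += 1
        elif length > 0:          # a non-positive return ends the current run
            pairs.append((total, length))
            total, length = 0.0, 0
    if length > 0:                # close a run that reaches the end of the series
        pairs.append((total, length))
    return pairs

# Toy series: two days of growth, one fall, then one day of growth,
# giving two growth periods with durations 2 and 1.
pairs = growth_periods([1.00, 1.02, 1.05, 1.01, 1.03])
```

Note that the magnitude of a run telescopes to the log-ratio of the prices at its endpoints.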
Table \ref{summaryfit} presents a summary of the fit of our model, containing the maximum likelihood estimates of the parameters with their respective standard errors, and asymptotic confidence intervals at the 5\% significance level. Note that the confidence interval for $\alpha$ does not contain the value $1$. Hence the Wald test rejects, at the 5\% significance level, the hypothesis that the data come from the BEG distribution in favor of the BGG distribution. We also perform the likelihood ratio (LR) test; the LR statistic equals $5.666$, with associated p-value $0.0173$. Therefore, at any usual significance level (for example, 5\%), the likelihood ratio test rejects the hypothesis that the data come from the BEG distribution in favor of the BGG distribution, in agreement with the Wald test's decision. The empirical and fitted correlation coefficients are equal to 0.6680 and 0.6775, respectively, showing good agreement between them. \begin{table}[h!] \centering \begin{tabular}{c|cccc} \hline Parameters & Estimate & Stand. error & Inf. bound & Sup. bound \\ \hline $\mu$ & 0.0082 & 0.00026 & 0.0076 & 0.0087 \\ $\alpha$ & 0.8805 & 0.04788 & 0.7867 & 0.9743 \\ $p$ & 0.5093 & 0.01523 & 0.4794 & 0.5391 \\ \hline \end{tabular} \caption{Maximum likelihood estimates of the parameters, standard errors and bounds of the asymptotic confidence intervals at the 5\% significance level.}\label{summaryfit} \end{table} The BEG model was motivated by an empirical observation that the magnitude of the consecutive positive log-returns followed the same type of distribution as the positive one-day log-returns (see Kozubowski and Panorska, 2005). Indeed, the marginal distribution of $X$ in the BEG model is also exponential (with mean $p^{-1}\mu$), just as the positive daily log-returns (with mean $\mu$). This stability of the returns was observed earlier by Kozubowski and Podg\'orski (2003), with the log-Laplace distribution.
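The two test decisions can be reproduced from the quantities reported above (the LR statistic, estimate and standard error are taken from the text and Table \ref{summaryfit}; the code is an illustrative check, not part of the original analysis):

```python
from scipy.stats import chi2, norm

# LR test of H0: alpha = 1 (BEG) versus H1: alpha != 1 (BGG); one restriction.
lr_stat = 5.666
p_lr = chi2.sf(lr_stat, df=1)

# Wald test built from the reported estimate and standard error of alpha.
alpha_hat, se_alpha = 0.8805, 0.04788
wald_z = (alpha_hat - 1.0) / se_alpha
p_wald = 2 * norm.sf(abs(wald_z))
```

Both p-values fall below 0.05, matching the rejections reported in the text.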
We notice that the BGG distribution does not enjoy this stability property, since the marginal distribution of $X$ is an infinite mixture of gamma distributions. We now show that the data set considered here does not present this stability. Denote the $i$th positive one-day log-return by $D_i$ and define $D_i^*=p^{-1}D_i$. If the data were generated from a $\mbox{BEG}(\mu,p)$ distribution, then an empirical quantile-quantile plot between the $X_i$'s ($y$-axis) and the $D_i$'s ($x$-axis) would lie around the straight line $y=p^{-1}x$, for $x>0$. Figure \ref{qqplot} presents this plot, and we observe that a considerable part of the points lie below the straight line $y=1.9636 x$ (we replace $p$ by its MLE $\widehat p=0.5093$). Therefore, the present data set seems to have been generated by a distribution that lacks the stability property discussed above. In order to confirm this, we test the hypothesis that the $X_i$'s and $D_i^*$'s have the same distribution. In the BEG model, both have exponential distribution with mean $p^{-1}\mu$. Since $\widehat p$ converges in probability to $p$ (as $n\rightarrow\infty$), we perform the test with $\widehat p$ replacing $p$. The Kolmogorov-Smirnov statistic and associated p-value are equal to 0.0603 and 0.0369, respectively. Therefore, at the 5\% significance level, we reject the hypothesis that the $X_i$'s and $D_i^*$'s have the same distribution. \begin{figure}[h!] \centering \includegraphics[width=0.4\textwidth]{qqplot.pdf} \caption{Empirical quantile-quantile plot between cumulative consecutive positive log-returns and positive one-day log-returns, with the straight line $y=1.9636 x$. The range $(x,y)\in(0,0.015)\times(0,0.030)$ covers 85\% of the data set.} \label{qqplot} \end{figure} Figure \ref{fit_densities} presents the fitted marginal density (mixture of gamma densities) of the cumulative log-returns with the histogram of the data, and the empirical and fitted survival functions.
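The stability check itself is easy to set up. In the BEG special case ($\alpha=1$), the run magnitudes $X$ and the rescaled one-day returns $D/p$ share the same exponential law, so a two-sample Kolmogorov-Smirnov test should report a small statistic; a sketch with simulated data and illustrative parameter values:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
n_runs, p, mu = 600, 0.5, 0.008

# BEG special case: X is a Geom(p) sum of Exp(mean mu) variables, hence
# exponential with mean mu/p; D/p is exponential with the same mean.
D = rng.exponential(scale=mu, size=n_runs)   # positive one-day log-returns
N = rng.geometric(p, size=n_runs)            # run durations
X = rng.gamma(shape=N, scale=mu)             # geometric sums (alpha = 1)
ks_stat, ks_pval = ks_2samp(X, D / p)
```

For BGG data with $\alpha\neq1$ the two samples differ in distribution, which is what the test detects on the exchange-rate data.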
These plots show a good fit of the mixture of gamma distributions to the data. This is confirmed by the Kolmogorov-Smirnov (KS) test, which we use to measure the goodness-of-fit of the mixture of gamma distributions to the data. The KS statistic and its p-value are equal to 0.0482 and 0.1557, respectively. Therefore, at any usual significance level, we accept the hypothesis that the mixture of gamma distributions is adequate to fit the cumulative log-returns. \begin{figure}[h!] \centering \includegraphics[width=0.33\textwidth]{fitted_density.pdf}\includegraphics[width=0.33\textwidth]{fitted_survival.pdf} \caption{Plot on the left shows the fitted mixture of gamma densities (density of $X$) with the histogram of the data. Plot on the right presents the empirical and fitted theoretical (mixture of gamma) survival functions. } \label{fit_densities} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.33\textwidth]{dailyposlog-returnsdensity.pdf}\includegraphics[width=0.33\textwidth]{dailyposlog-returnssurvival.pdf} \caption{Picture on the left shows the histogram and fitted gamma density for the daily positive log-returns. Empirical survival and fitted gamma survival are shown in the picture on the right.} \label{dailypositiveplots} \end{figure} Plots of the histogram, fitted gamma density and empirical and fitted survival functions for the daily positive log-returns are presented in Figure \ref{dailypositiveplots}. The good performance of the gamma distribution can be seen from these graphics. In Table \ref{marginalgeom} we show the absolute frequencies, relative frequencies and fitted geometric model for the duration (in days) of the consecutive positive log-returns. From this, we observe that the geometric distribution fits the data well. This is confirmed by Pearson's chi-squared (denoted by $\chi^2$) test, where the null hypothesis is that the duration follows a geometric distribution.
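On the seven collapsed cells reported in the duration table, the Pearson statistic can be computed along the following lines (a sketch using the frequencies and the MLE $\widehat p$ from the text; the statistic differs from the one quoted next, which uses a finer binning of the durations):

```python
import numpy as np
from scipy.stats import chisquare

# Observed durations of the growth periods; the last cell collapses N >= 7.
obs = np.array([269, 136, 85, 34, 15, 6, 4])
n, p_hat = obs.sum(), 0.5093          # sample size and MLE of p from the text

# Fitted geometric cell probabilities P(N = k) = p (1 - p)^(k - 1),
# with tail cell P(N >= 7) = (1 - p)^6; they sum to one exactly.
k = np.arange(1, 7)
probs = np.append(p_hat * (1 - p_hat) ** (k - 1), (1 - p_hat) ** 6)
chi2_stat, chi2_pval = chisquare(obs, f_exp=n * probs)
```

The resulting p-value is well above 5\%, consistent with the acceptance of the geometric model reported below.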
The $\chi^2$ statistic equals 42 (with 36 degrees of freedom) and associated p-value 0.2270, so we accept (at any usual significance level) that the growth period follows a geometric distribution. We notice that the geometric distribution has also worked quite well for modeling the duration of the growth periods of exchange rates as part of the BEG model in Kozubowski and Panorska (2005). \begin{table}[h!] \centering \begin{tabular}{c|ccccccc} \hline $N\rightarrow$ & 1 & 2 & 3 & 4 & 5 & 6 & $\geq7$ \\ \hline Absolute frequency & 269 & 136 & 85 & 34 & 15 & 6 & 4 \\ Relative frequency & 0.48998 & 0.24772 & 0.15483 & 0.06193 & 0.02732 & 0.01093 & 0.00728 \\ Fitted model & 0.50928 & 0.24991 & 0.12264 & 0.06018 & 0.02953 & 0.01449 & 0.01396\\ \hline \end{tabular} \caption{Absolute and relative frequencies and fitted marginal probability mass function of $N$ (duration in days of the growth periods).}\label{marginalgeom} \end{table} \begin{figure}[h!] \centering \includegraphics[width=0.33\textwidth]{onedaycumulative.pdf}\includegraphics[width=0.33\textwidth]{fitted_conditional_density1.pdf} \includegraphics[width=0.33\textwidth]{twodaycumulative.pdf}\includegraphics[width=0.33\textwidth]{fitted_conditional_density2.pdf} \includegraphics[width=0.33\textwidth]{threedaycumulative.pdf}\includegraphics[width=0.33\textwidth]{fitted_conditional_density3.pdf} \caption{Plots of the fitted conditional density and survival functions of $X$ given $N=1$, $N=2$ and $N=3$. In the pictures of the density and survival functions, we also plot the histogram of the data and the empirical survival function, respectively.} \label{conditional_densities} \end{figure} \begin{figure}[h!]
\centering \includegraphics[width=0.33\textwidth]{fourdaycumulative.pdf}\includegraphics[width=0.33\textwidth]{fitted_conditional_density4.pdf} \includegraphics[width=0.33\textwidth]{fivedaycumulative.pdf}\includegraphics[width=0.33\textwidth]{fitted_conditional_density5.pdf} \caption{Plots of the fitted conditional density and survival functions of $X$ given $N=4$ and $N=5$. In the pictures of the density and survival functions, we also plot the histogram of the data and the empirical survival function, respectively.} \label{conditional_densities2} \end{figure} So far our analysis has shown that the bivariate gamma-geometric distribution and its marginals provide a suitable fit to the data. We end our analysis by verifying whether the conditional distributions of the cumulative log-returns given the duration also provide good fits to the data. As mentioned before, the conditional distribution of $X$ given $N=n$ is $\Gamma(n\alpha,\alpha/\mu)$. Figure \ref{conditional_densities} shows plots of the fitted density and fitted survival function of the conditional distributions of $X$ given $N=1,2,3$. The histograms of the data and the empirical survival functions are also displayed. The corresponding graphics for the conditional distributions of $X$ given $N=4,5$ are displayed in Figure \ref{conditional_densities2}. These graphics show the good performance of the gamma distribution in fitting the cumulative log-returns given the growth period (in days). We also use the Kolmogorov-Smirnov test to verify the goodness-of-fit of these conditional distributions. In Table \ref{conditionalX} we present the KS statistics and their associated p-values. In all cases considered, at any usual significance level, we accept the hypothesis that the data come from the gamma distributions specified above. \begin{table}[h!]
\centering \begin{tabular}{c|ccccc} \hline Given $N\rightarrow$ & one-day & two-day & three-day & four-day & five-day\\ \hline KS statistic & 0.0720 & 0.0802 & 0.1002 & 0.1737 & 0.2242\\ p-value & 0.1229 & 0.3452 & 0.3377 & 0.2287 & 0.3809\\ \hline \end{tabular} \caption{Kolmogorov-Smirnov statistics and their associated p-values for the goodness-of-fit of the conditional distributions of the cumulative log-returns given the durations (one-day, two-day, three-day, four-day and five-day).}\label{conditionalX} \end{table} \section{The induced L\'evy process} As seen before, the bivariate gamma-geometric distribution is infinitely divisible; therefore, (\ref{idcf}) is a characteristic function for any real $r>0$. This characteristic function is associated with the bivariate random vector \begin{eqnarray*} (R(r),v(r))=\left(\sum_{i=1}^{T}X_i+G,r+T\right), \end{eqnarray*} where $\{X_i\}_{i=1}^\infty$ are iid random variables following the $\Gamma(\alpha,\beta)$ distribution, $G\sim\Gamma(r\alpha,\beta)$, $T$ is a discrete random variable with the $\mbox{NB}(r,p)$ distribution, and all random variables involved are mutually independent. Hence, it follows that the BGG distribution induces a L\'evy process $\{(X(r),N(r)),\,\, r\geq0\}$, which has the following stochastic representation: \begin{eqnarray}\label{flp} \{(X(r),N(r)),\,\, r\geq0\}\stackrel{d}{=}\left\{\left(\sum_{i=1}^{NB(r)}X_i+G(r),r+\mbox{NB}(r)\right),\,\, r\geq0\right\}, \end{eqnarray} where the $X_i$'s are defined as before, $\{G(r),\,\, r\geq0\}$ is a gamma L\'evy process and $\{\mbox{NB}(r),\,\, r\geq0\}$ is a negative binomial L\'evy process, both with characteristic functions given by \begin{eqnarray*} E\left(e^{itG(r)}\right)=\left(\frac{\beta}{\beta-it}\right)^{\alpha r}, \quad t\in\mathbb{R}, \end{eqnarray*} and \begin{eqnarray*} E\left(e^{is\mbox{\scriptsize NB}(r)}\right)=\left(\frac{p}{1-(1-p)e^{is}}\right)^r, \quad s\in\mathbb{R}, \end{eqnarray*} respectively.
All random variables and processes involved in (\ref{flp}) are mutually independent. From the process defined in (\ref{flp}), we may obtain other related L\'evy motions by deleting $r$ and/or $G(r)$. Here, we focus on the L\'evy process obtained from (\ref{flp}) by deleting $r$. In this case, we obtain the following stochastic representation for our process: \begin{eqnarray}\label{lp} \{(X(r),\mbox{NB}(r)),\,\, r\geq0\}\stackrel{d}{=}\left\{\left(G(r+\mbox{NB}(r)),\mbox{NB}(r)\right),\,\, r\geq0\right\}. \end{eqnarray} Since both processes in (\ref{lp}) (on the left and on the right of the equality in distribution) are L\'evy, the above result follows by noting that, for each fixed $r$, we have $\sum_{i=1}^{NB(r)}X_i+G(r)|\mbox{NB}(r)=k\sim\Gamma(\alpha(r+k),\beta)$. One may also see that the above result follows from the stochastic self-similarity property discussed, for example, by Kozubowski and Podg\'orski (2007): a gamma L\'evy process subordinated to a negative binomial process with drift is again a gamma process. The characteristic function corresponding to (\ref{lp}) is given by \begin{eqnarray}\label{lpcf} \Phi^*(t,s)\equiv E\left(e^{itX(r)+isNB(r)}\right)=\left\{\frac{p\beta^\alpha}{(\beta-it)^\alpha-e^{is}\beta^\alpha(1-p)}\right\}^r, \end{eqnarray} for $t,s\in\mathbb{R}$. With this, it easily follows that the characteristic function of the marginal process $\{X(r),\,\, r\geq0\}$ is \begin{eqnarray*}\label{lpcfm} E\left(e^{itX(r)}\right)=\left\{\frac{p\beta^\alpha}{(\beta-it)^\alpha-\beta^\alpha(1-p)}\right\}^r. \end{eqnarray*} Since the above characteristic function corresponds to a random variable whose density is an infinite mixture of gamma densities (see Subsection 5.1), we have that $\{X(r),\,\, r\geq0\}$ is an infinite mixture of gamma L\'evy process (with negative binomial weights). Hence, the marginal processes of $\{(X(r),\mbox{NB}(r)),\,\, r\geq0\}$ are infinite mixture of gamma and negative binomial processes.
Therefore, we say that $\{(X(r),\mbox{NB}(r)),\,\, r\geq0\}$ is a $\mbox{BMixGNB}(\beta,\alpha,p)$ L\'evy process. We notice that, for the choice $\alpha=1$ in (\ref{lp}), we obtain the bivariate process with gamma and negative binomial marginals introduced by Kozubowski et al.\,(2008), named the BGNB L\'evy motion. As noted by Kozubowski and Podg\'orski (2007), if $\{\widetilde{\mbox{NB}}(r),\,\, r\geq0\}$ is a negative binomial process with parameter $q\in(0,1)$, independent of another negative binomial process $\{\mbox{NB}(r),\,\, r\geq0\}$ with parameter $p\in(0,1)$, then the time-changed process $\{\mbox{NB}^*(r),\,\,r\geq0\}=\{\mbox{NB}(r+\widetilde{\mbox{NB}}(r)),\,\,r\geq0\}$ is a negative binomial process with parameter $p^*=pq/(1-p+pq)$. With this and (\ref{lp}), we have that the time-changed process $\{(G(r+\mbox{NB}^*(r)),\mbox{NB}(r+\widetilde{\mbox{NB}}(r))),\,\, r\geq0\}$ is a $\mbox{BMixGNB}(\beta,\alpha,p^*)$ L\'evy process. In what follows, we derive basic properties of the bivariate distribution of the BMixGNB process for fixed $r>0$ and discuss estimation by maximum likelihood and large-sample inference. From now on, unless otherwise mentioned, we will consider $r>0$ fixed. \subsection{Basic properties of the bivariate process for fixed $r>0$} For simplicity, we will denote $(Y,M)=(X(r),\mbox{NB}(r))$. From the stochastic representation (\ref{lp}), it is easy to see that the joint density and distribution function of $(Y,M)$ are \begin{eqnarray}\label{pdfr} g_{Y,M}(y,n)=\frac{\Gamma(n+r)p^r(1-p)^n}{n!\Gamma(r)\Gamma(\alpha(r+n))}\beta^{\alpha(r+n)}y^{\alpha(r+n)-1}e^{-\beta y} \end{eqnarray} and \begin{eqnarray*} P(Y\leq y, M\leq n)=\frac{p^r}{\Gamma(r)}\sum_{j=0}^n(1-p)^j\frac{\Gamma(j+r)}{j!\Gamma(\alpha(r+j))}\Gamma_{\beta y}(\alpha(r+j)), \end{eqnarray*} for $y>0$ and $n\in\mathbb{N}\cup\{0\}$.
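The time-change result above can be checked by simulation (a sketch with illustrative parameter values; non-integer negative binomial shapes are simulated through the usual gamma-Poisson mixture):

```python
import numpy as np

rng = np.random.default_rng(4)
r, p, q, size = 2.5, 0.4, 0.6, 200_000

def nb_at(r, p, size):
    """Marginal of a negative binomial Levy process at time r, i.e. NB(r, p)
    with possibly non-integer shape r, via the gamma-Poisson mixture."""
    lam = rng.gamma(shape=r, scale=(1 - p) / p, size=size)
    return rng.poisson(lam)

# Time change: NB(r + NB~(r)), with NB~ of parameter q, should be NB(r, p*).
inner = nb_at(r, q, size)
outer = rng.poisson(rng.gamma(shape=r + inner, scale=(1 - p) / p))
p_star = p * q / (1 - p + p * q)
mean_theory = r * (1 - p_star) / p_star
var_theory = r * (1 - p_star) / p_star**2
```

The sample mean and variance of \texttt{outer} agree with the NB$(r,p^*)$ moments to within Monte Carlo error.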
Setting $\alpha=1$ in (\ref{pdfr}), we obtain the $\mbox{BGNB}$ distribution (bivariate distribution with gamma and negative binomial marginals) as a particular case. This model was introduced and studied by Kozubowski et al. (2008). We have that the marginal distribution of $M$ is negative binomial with probability mass function given in (\ref{nbpf}). The marginal density of $Y$ is given by \begin{eqnarray*}\label{densityy} g_Y(y)=\sum_{n=0}^\infty P(M=n)g(y;\alpha(r+n),\beta), \quad y>0, \end{eqnarray*} where $g(\cdot;\alpha,\beta)$ is the density of a gamma variable as defined in Section 2. Therefore, the above density is an infinite mixture of gamma densities (with negative binomial weights). Since the marginal distributions of $(Y,M)$ are infinite mixture of gamma and negative binomial distributions, we denote $(Y,M)\sim\mbox{BMixGNB}(\beta,\alpha,p,r)$. Some plots of the marginal density of $Y$ are displayed in Figure \ref{marginaldensities2}, for $\beta=1$ and some values of $\alpha$, $p$ and $r$. \begin{figure}[h!] \centering \includegraphics[width=0.33\textwidth]{figura_3.pdf}\includegraphics[width=0.33\textwidth]{figura_4.pdf} \includegraphics[width=0.33\textwidth]{figura_5.pdf}\includegraphics[width=0.33\textwidth]{figura_6.pdf} \caption{Graphics of the marginal density of $Y$ for $\beta=1$, $\alpha=0.5,1,2,3,4$, $p=0.2,0.8$ and $r=0.7,2$.} \label{marginaldensities2} \end{figure} The conditional distribution of $Y|M=k$ is gamma with parameters $\alpha(r+k)$ and $\beta$, while the conditional probability mass function of $M|Y=y$ is given by $$P(M=n|Y=y)=\frac{\Gamma(n+r)}{n!\Gamma(\alpha(n+r))}[(1-p)(\beta y)^\alpha]^n\bigg/\sum_{j=0}^\infty\frac{\Gamma(j+r)}{j!\Gamma(\alpha(j+r))}[(1-p)(\beta y)^\alpha]^j,$$ for $n=0,1,\ldots$, which belongs to the one-parameter power series family of distributions if $\alpha$ and $r$ are known. In this case, the parameter is $(1-p)(\beta y)^\alpha$.
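The marginal density of $Y$ is straightforward to evaluate by truncating the mixture; a small sketch at illustrative parameter values, together with a sanity check that the truncated density integrates to approximately one:

```python
import numpy as np
from scipy.stats import nbinom, gamma

def marginal_density_y(y, beta, alpha, p, r, nmax=200):
    """Marginal density of Y: an infinite mixture of Gamma(alpha*(r+n), rate beta)
    densities with NB(r, p) weights, truncated at nmax terms."""
    n = np.arange(nmax)
    w = nbinom.pmf(n, r, p)                  # P(M = n), n = 0, 1, ...
    return np.sum(w * gamma.pdf(y, a=alpha * (r + n), scale=1.0 / beta))

# Riemann-sum check that the density has total mass ~1.
ys = np.linspace(1e-8, 80.0, 4001)
vals = np.array([marginal_density_y(y, 1.0, 2.0, 0.3, 0.7) for y in ys])
integral = np.sum(vals) * (ys[1] - ys[0])
```

Note that \texttt{scipy.stats.nbinom} accepts a non-integer shape, matching the weights $\Gamma(n+r)p^r(1-p)^n/(n!\Gamma(r))$ for real $r>0$.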
For positive integers $m\leq n$ and real $y>0$, it follows that $$P(Y\leq y, M\leq m|M\leq n)=\sum_{j=0}^m\frac{\Gamma(j+r)(1-p)^j}{j!\Gamma(\alpha(j+r))}\Gamma_{\beta y}(\alpha(r+j))\bigg/\sum_{j=0}^n\frac{\Gamma(j+r)}{j!}(1-p)^j$$ and, for $0<x\leq y$ and a positive integer $n$, $$P(Y\leq x, M\leq n|Y\leq y)=\frac{\sum_{j=0}^n\frac{\Gamma(j+r)(1-p)^j}{j!\Gamma(\alpha(j+r))}\Gamma_{\beta x}(\alpha(r+j))}{\sum_{j=0}^\infty\frac{\Gamma(j+r)(1-p)^j}{j!\Gamma(\alpha(j+r))}\Gamma_{\beta y}(\alpha(r+j))}.$$ The moments of a random vector $(Y,M)$ following the $\mbox{BMixGNB}(\beta,\alpha,p,r)$ distribution may be obtained from $E(Y^nM^k)=(-i)^{n+k}\partial^{n+k}\Phi^*(t,s)/\partial t^n\partial s^k|_{t,s=0}$, where $\Phi^*(t,s)$ is the characteristic function given in (\ref{lpcf}). It follows that the product moments are given by \begin{eqnarray}\label{momBMixGNB} E(Y^nM^k)=\frac{p^r\Gamma(n)}{\beta^n\Gamma(r)}\sum_{m=0}^\infty\frac{m^k(1-p)^m\Gamma(m+r)}{m!B(\alpha(r+m),n)}. \end{eqnarray} The covariance matrix of $(Y,M)$ is given by $r\Sigma$, where $\Sigma$ is defined in (\ref{cov}). The correlation coefficient is given by $\rho$, which is defined in Subsection 2.2. Further, an expression for the $n$th marginal moment of $Y$ may be obtained by taking $k=0$ in (\ref{momBMixGNB}). If $\{W(r),\,r>0\}$ is a $\mbox{BMixGNB}(\beta,\alpha,p)$ L\'evy motion, one may check that $\mbox{cov}(W(t),W(s))=\min(t,s)\Sigma$. The $\mbox{BMixGNB}$ law may be represented by a convolution between a bivariate distribution (with gamma and degenerate-at-$0$ marginals) and a compound Poisson distribution. Such a representation is given by \begin{eqnarray*} (Y,M)\stackrel{d}{=}(G,0)+\sum_{i=1}^{Q}(G_i,Z_i), \end{eqnarray*} with all random variables above defined as in formula (\ref{cp}), but here we define $G\sim\Gamma(\alpha r,\beta)$ and $\lambda=-r\log p$.
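The product-moment series (\ref{momBMixGNB}) can be evaluated by truncation; using $\Gamma(n)/B(\alpha(r+m),n)=\Gamma(\alpha(r+m)+n)/\Gamma(\alpha(r+m))$ keeps the computation stable in log-space. A sketch at illustrative parameter values, checked against the closed forms $E(Y)=\alpha r/(p\beta)$ and $E(Y^2)=\alpha^2E[(r+M)^2]/\beta^2+\alpha E(r+M)/\beta^2$:

```python
import numpy as np
from scipy.special import gammaln

def product_moment(n, k, beta, alpha, p, r, mmax=400):
    """E[Y^n M^k] for (Y, M) ~ BMixGNB(beta, alpha, p, r), with the negative
    binomial series truncated at mmax terms (n >= 1 is the moment order)."""
    m = np.arange(mmax, dtype=float)
    a = alpha * (r + m)
    logterm = (r * np.log(p) - n * np.log(beta) - gammaln(r)
               + m * np.log(1 - p) + gammaln(m + r) - gammaln(m + 1)
               + gammaln(a + n) - gammaln(a))   # Gamma(n)/B(a, n) simplified
    return np.sum(m**k * np.exp(logterm))

beta, alpha, p, r = 1.0, 2.0, 0.3, 0.7
ey = product_moment(1, 0, beta, alpha, p, r)    # should equal alpha*r/(p*beta)
ey2 = product_moment(2, 0, beta, alpha, p, r)
```

For these values, $E(Y)=14/3$ and, using $E(M)=r(1-p)/p$ and $\mathrm{Var}(M)=r(1-p)/p^2$, $E(Y^2)=434/9$.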
We end this subsection by noting that if $\{(Y_i,M_i)\}_{i=1}^n$ are independent random vectors with $(Y_i,M_i)\sim\mbox{BMixGNB}(\beta,\alpha,p,r_i)$, then \begin{eqnarray*} \sum_{i=1}^n (Y_i,M_i)\sim \mbox{BMixGNB}\left(\beta,\alpha,p,\sum_{i=1}^nr_i\right). \end{eqnarray*} One may easily check the above result by using the characteristic function (\ref{lpcf}). \subsection{Estimation and inference for the $\mbox{BMixGNB}$ distribution} Suppose $(Y_1,M_1), \ldots, (Y_n,M_n)$ is a random sample from the $\mbox{BMixGNB}(\beta,\alpha,p,\tau)$ distribution. Here the parameter vector will be denoted by $\theta^\dag=(\beta,\alpha,p,\tau)^\top$. The log-likelihood function, denoted by $\ell^\dag$, is given by \begin{eqnarray*} \ell^\dag&\propto& n\{\tau\alpha\log\beta-\log\Gamma(\tau)+\tau\log p\}-n\beta\bar{Y}_n+n\{\log(1-p)+\alpha\log\beta\}\bar{M}_n\\&+&\sum_{i=1}^n\log\Gamma(M_i+\tau)- \sum_{i=1}^n\log\Gamma(\alpha(M_i+\tau))+\alpha\sum_{i=1}^n(M_i+\tau)\log Y_i, \end{eqnarray*} where $\bar{Y}_n=\sum_{i=1}^nY_i/n$ and $\bar{M}_n=\sum_{i=1}^nM_i/n$. The associated score function $U^\dag(\theta^\dag)=(\partial\ell^\dag/\partial\beta,\partial\ell^\dag/\partial\alpha,\partial\ell^\dag/\partial p,\partial\ell^\dag/\partial\tau)^\top$ has components given by \begin{eqnarray*} \frac{\partial\ell^\dag}{\partial\beta}&=&n\left\{\frac{\alpha}{\beta}(\tau+\bar{M}_n)-\bar{Y}_n\right\},\\ \frac{\partial\ell^\dag}{\partial\alpha}&=&n(\tau+\bar{M}_n)\log\beta+\sum_{i=1}^n(\tau+M_i)\{\log Y_i-\Psi(\alpha(\tau+M_i))\},\\ \frac{\partial\ell^\dag}{\partial p}&=&-\frac{n\bar{M}_n}{1-p}+\frac{n\tau}{p},\\ \frac{\partial\ell^\dag}{\partial\tau}&=&n\left\{\log(p\beta^\alpha)-\Psi(\tau)\right\}+\sum_{i=1}^n\left\{\alpha[\log Y_i-\Psi(\alpha(\tau+M_i))]+\Psi(\tau+M_i)\right\}.
\end{eqnarray*} Hence, the maximum likelihood estimators of $\beta$ and $p$ are respectively given by \begin{eqnarray}\label{mles2} \widehat\beta=\widehat\alpha\frac{\widehat\tau+\bar{M}_n}{\bar{Y}_n}\quad \mbox{and} \quad \widehat{p}=\frac{\widehat\tau}{\widehat\tau+\bar{M}_n}, \end{eqnarray} while the maximum likelihood estimators of $\alpha$ and $\tau$ are found by solving the nonlinear system of equations \begin{eqnarray*} n(\widehat\tau+\bar{M}_n)\log\left(\widehat\alpha\frac{\widehat\tau+\bar{M}_n}{\bar{Y}_n}\right)+\sum_{i=1}^n(\widehat\tau+M_i)\{\log Y_i-\Psi(\widehat\alpha(\widehat\tau+M_i))\}=0 \end{eqnarray*} and \begin{eqnarray}\label{mletau} \widehat\alpha\left\{n\log\left(\widehat\alpha\frac{\widehat\tau+\bar{M}_n}{\bar{Y}_n}\right)+\sum_{i=1}^n\left\{\log Y_i-\Psi(\widehat\alpha(\widehat\tau+M_i))\right\}\right\}=\nonumber\\n\left\{\Psi(\widehat\tau)-\log\left(\frac{\widehat\tau}{\widehat\tau+\bar{M}_n}\right)\right\}-\sum_{i=1}^n\Psi(\widehat\tau+M_i). \end{eqnarray} After some algebra, we obtain that Fisher's information matrix is \begin{eqnarray*} J^\dag(\theta^\dag)= \left(\begin{array}{llll} \kappa^\dag_{\beta\beta} & \kappa^\dag_{\beta\alpha} & 0 & \kappa^\dag_{\beta\tau}\\ \kappa^\dag_{\beta\alpha} & \kappa^\dag_{\alpha\alpha} & 0 & \kappa^\dag_{\alpha\tau}\\ 0 & 0 & \kappa^\dag_{pp} & \kappa^\dag_{p\tau} \\ \kappa^\dag_{\beta\tau} & \kappa^\dag_{\alpha\tau} & \kappa^\dag_{p\tau}& \kappa^\dag_{\tau\tau}\\ \end{array}\right), \end{eqnarray*} with \begin{eqnarray*} &&\kappa^\dag_{\beta\beta}=\frac{\alpha\tau}{\beta^2p},\quad \kappa^\dag_{\beta\alpha}=-\frac{\tau}{p\beta},\quad\kappa^\dag_{\beta\tau}=-\frac{\alpha}{\beta},\\ &&\kappa^\dag_{\alpha\alpha}=\frac{p^\tau}{\Gamma(\tau)}\sum_{j=0}^\infty (\tau+j)^2(1-p)^j\Psi'(\alpha(\tau+j))\frac{\Gamma(\tau+j)}{j!}, \quad \kappa^\dag_{pp}=\frac{\tau}{p^2(1-p)},\\ &&\kappa^\dag_{\alpha\tau}=\frac{\alpha p^\tau}{\Gamma(\tau)}\sum_{j=0}^\infty
(1-p)^j(\tau+j)\Psi'(\alpha(\tau+j))\frac{\Gamma(\tau+j)}{j!}, \quad\kappa^\dag_{p\tau}=-\frac{1}{p},\\ &&\kappa^\dag_{\tau\tau}=\Psi'(\tau)+\frac{p^\tau}{\Gamma(\tau)}\sum_{j=0}^\infty (1-p)^j\{\alpha^2\Psi'(\alpha(\tau+j))-\Psi'(\tau+j)\}\frac{\Gamma(\tau+j)}{j!}. \end{eqnarray*} We then obtain that the asymptotic distribution of $\sqrt{n}(\widehat\theta^\dag-\theta^\dag)$ is four-variate normal with zero mean and covariance matrix $J^{\dag\,-1}(\theta^\dag)$, where $J^{\dag\,-1}(\cdot)$ is the inverse of the information matrix $J^\dag(\cdot)$ defined above. The likelihood ratio, Wald and score tests may be performed in order to test the hypotheses $H_0 \mbox{:} \,\,\alpha=1$ versus $H_1 \mbox{:} \,\,\alpha\neq1$, that is, to compare the $\mbox{BGNB}$ and $\mbox{BMixGNB}$ fits. Further, we may test the $\mbox{BMixGNB}$ model versus the BGG or BEG models, which corresponds to the null hypotheses $H_0 \mbox{:} \,\,\tau=1$ and $H_0 \mbox{:} \,\,\alpha=\tau=1$, respectively. As in Subsection 3.1, we here propose the reparametrization $\mu=\alpha/\beta$. We now denote the parameter vector by $\theta^{\star}=(\mu,\alpha,p,\tau)^\top$. With this, one may check that the MLE of $p$ is as given in (\ref{mles2}), while $\widehat\mu=\bar{Y}_n/(\widehat\tau+\bar{M}_n)$. The MLEs of $\tau$ and $\alpha$ are obtained by solving the nonlinear system of equations (\ref{mletau}) and \begin{eqnarray*} n(\widehat\tau+\bar{M}_n)\log\left(\widehat\alpha\frac{\widehat\tau+\bar{M}_n}{\bar{Y}_n}\right)+\sum_{i=1}^n(\widehat\tau+M_i)\{\log Y_i-\Psi(\widehat\alpha(\widehat\tau+M_i))\}=0.
\end{eqnarray*} Under this reparametrization, Fisher's information matrix becomes \begin{eqnarray*} J^\star(\theta^{\star} )= \left(\begin{array}{llll} \kappa^\star_{\mu\mu} & 0 & 0 & \kappa^\star_{\mu\tau}\\ 0 & \kappa^\star_{\alpha\alpha} & 0 & \kappa^\star_{\alpha\tau}\\ 0 & 0 & \kappa^\star_{pp} & \kappa^\star_{p\tau} \\ \kappa^\star_{\mu\tau} & \kappa^\star_{\alpha\tau} & \kappa^\star_{p\tau}& \kappa^\star_{\tau\tau}\\ \end{array}\right), \end{eqnarray*} where its elements are given by \begin{eqnarray*} &&\kappa^\star_{\mu\mu}=\frac{\alpha\tau}{\mu^2p},\quad \kappa^\star_{\mu\tau}=\frac{\alpha}{\mu}, \quad\kappa^\star_{\alpha\alpha}=\frac{p^\tau}{\Gamma(\tau)}\sum_{j=0}^\infty (1-p)^j(\tau+j)^2\Psi'(\alpha(\tau+j))\frac{\Gamma(\tau+j)}{j!}-\frac{\tau}{\alpha p},\\ &&\kappa^\star_{\alpha\tau}=\frac{\alpha p^\tau}{\Gamma(\tau)}\sum_{j=0}^\infty (1-p)^j(\tau+j)\Psi'(\alpha(\tau+j))\frac{\Gamma(\tau+j)}{j!}-1, \quad \kappa^\star_{pp}=\kappa^\dag_{pp},\\ && \kappa^\star_{p\tau}=\kappa^\dag_{p\tau}\quad\mbox{and}\quad \kappa^\star_{\tau\tau}=\kappa^\dag_{\tau\tau}. \end{eqnarray*} We have that $\kappa^\star_{\mu\alpha}=0$; that is, $\mu$ and $\alpha$ are orthogonal parameters, in contrast with the parameters $\beta$ and $\alpha$ considered previously, for which $\kappa^\dag_{\beta\alpha}\neq0$. Further, we have that $\sqrt{n}(\widehat\theta^\star-\theta^\star)\stackrel{d}{\rightarrow} N_4(0,J^{\star\,-1}(\theta^\star))$ as $n\rightarrow\infty$, where the covariance matrix $J^{\star\,-1}(\theta^\star)$ is the inverse of the information matrix $J^\star(\theta^\star)$. \section{Concluding remarks} We introduced and studied the bivariate gamma-geometric (BGG) law, which extends the bivariate exponential-geometric (BEG) law proposed by Kozubowski and Panorska (2005). The marginals of our model are an infinite mixture of gamma distributions and the geometric distribution.
Several results and properties were obtained, such as the joint density and survival functions, conditional distributions, moment generating and characteristic functions, product moments, covariance matrix, geometric stability and stochastic representations. We discussed estimation by maximum likelihood and large-sample inference. Further, a reparametrization was suggested in order to obtain orthogonality of the parameters. An application to exchange rates between the Brazilian real and U.K. pounds, quoted in Brazilian reais, was presented, where our aim was to model jointly the magnitude and duration of the consecutive positive log-returns. In that application, we showed that the BGG model and its marginal and conditional distributions fitted the real data set well. Further, we performed the likelihood ratio and Wald tests, and both rejected (at the 5\% significance level) the hypothesis that the data come from the BEG distribution in favor of the BGG distribution. We showed that our bivariate law is infinitely divisible and, therefore, induces a L\'evy process, named the $\mbox{BMixGNB}$ L\'evy motion. We also derived some properties and results for this process, including a study of its distribution at a fixed time. Our proposed L\'evy motion has infinite mixture of gamma and negative binomial marginal processes and generalizes the one proposed by Kozubowski et al. (2008), whose marginals are gamma and negative binomial processes. Estimation and inference for the parameters of the distribution of our process at a fixed time were also discussed, including a reparametrization to obtain partial orthogonality of the parameters. \section*{Acknowledgements} \noindent I thank the anonymous referee for their careful reading, comments and suggestions. I also gratefully acknowledge financial support from {\it Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\'ogico} (CNPq-Brazil).
\section{Introduction} Mixed univariate distributions have been introduced and studied in recent years by compounding continuous and discrete distributions. Marshall and Olkin (1997) introduced a class of distributions which can be obtained as the minimum or maximum of independent and identically distributed (iid) continuous random variables (independent of the random sample size), where the sample size follows a geometric distribution. Chahkandi and Ganjali (2009) introduced some lifetime distributions by compounding exponential and power series distributions; these models are called exponential power series (EPS) distributions. Recently, Morais and Barreto-Souza (2011) introduced a class of distributions obtained by mixing Weibull and power series distributions and studied several of its statistical properties. This class contains the EPS distributions and other lifetime models studied recently, for example, the Weibull-geometric distribution (Marshall and Olkin, 1997; Barreto-Souza et al., 2011). The reader is referred to the introduction of Morais and Barreto-Souza's (2011) article for a brief literature review of univariate distributions obtained by compounding. A mixed bivariate law with exponential and geometric marginals was introduced by Kozubowski and Panorska (2005) and named the bivariate exponential-geometric (BEG) distribution. A bivariate random vector $(X,N)$ follows the BEG law if it admits the stochastic representation \begin{eqnarray}\label{rep} \left(X,N\right)\stackrel{d}{=}\left(\sum_{i=1}^NX_i,N\right), \end{eqnarray} where the variable $N$ follows a geometric distribution and $\{X_i\}_{i=1}^\infty$ is a sequence of iid exponential variables, independent of $N$. The BEG law is infinitely divisible and therefore leads to a bivariate L\'evy process, in this case with gamma and negative binomial marginal processes. This bivariate process, named the BGNB motion, was introduced and studied by Kozubowski et al. (2008).
Other multivariate distributions involving exponential and geometric distributions have been studied in the literature. Kozubowski and Panorska (2008) introduced and studied a bivariate distribution involving the geometric maximum of exponential variables. A trivariate distribution involving geometric sums and maxima of exponential variables was also recently introduced by Kozubowski et al. (2011). Our chief goal in this article is to introduce a three-parameter extension of the BEG law. We refer to this new three-parameter distribution as the bivariate gamma-geometric (BGG) law. Further, we show that this extended distribution is infinitely divisible and, therefore, induces a bivariate L\'evy process which has the BGNB motion as a particular case. The additional parameter controls the shape of the continuous part of our models. Our bivariate distribution may be applied in areas such as hydrology and finance. We focus here on finance applications and use the BGG law to model log-returns (the $X_i$'s) corresponding to a daily exchange rate. More specifically, we are interested in modeling cumulative log-returns (the $X$) in growth periods of the exchange rates. In this case $N$ represents the duration of the growth period, during which the consecutive log-returns are positive. As mentioned by Kozubowski and Panorska (2005), the geometric sum represented by $X$ in (\ref{rep}) is very useful in several fields, including water resources, climate research and finance. We refer the reader to the introduction of Kozubowski and Panorska's (2005) article for a good discussion of practical situations where random vectors with the representation (\ref{rep}) may be useful. The present article is organized as follows. In Section 2 we introduce the bivariate gamma-geometric law and derive basic statistical properties, including a study of some properties of its marginal and conditional distributions. Further, we show that our proposed law is infinitely divisible.
Estimation by maximum likelihood and large-sample inference are addressed in Section 3, which also contains a proposed reparametrization of the model in order to obtain orthogonality of the parameters in the sense of Cox and Reid (1987). An application to a real data set is presented in Section 4. The induced L\'evy process is approached in Section 5 and some of its basic properties are shown. We include a study of the bivariate distribution of the process at a fixed time and also discuss estimation of the parameters and inferential aspects. We close the article with concluding remarks in Section 6. \section{The law and basic properties} The bivariate gamma-geometric (BGG) law is defined by the stochastic representation (\ref{rep}), assuming that $\{X_i\}_{i=1}^\infty$ is a sequence of iid gamma variables independent of $N$, with probability density function given by $g(x;\alpha,\beta)=\beta^\alpha/\Gamma(\alpha)x^{\alpha-1}e^{-\beta x}$, for $x>0$ and $\alpha,\beta>0$; we denote $X_i\sim\Gamma(\alpha,\beta)$. As before, $N$ is a geometric variable with probability mass function given by $P(N=n)=p(1-p)^{n-1}$, for $n\in\mathbb{N}$; we denote $N\sim\mbox{Geom}(p)$. Clearly, the BGG law contains the BEG law as a particular case, for the choice $\alpha=1$. The joint density function $f_{X,N}(\cdot,\cdot)$ of $(X,N)$ is given by \begin{eqnarray}\label{density} f_{X,N}(x,n)=\frac{\beta^{n\alpha}}{\Gamma(\alpha n)}x^{n\alpha-1}e^{-\beta x}p(1-p)^{n-1}, \quad x>0,\,\, n\in\mathbb{N}. \end{eqnarray} Hence, it follows that the joint cumulative distribution function (cdf) of the BGG distribution can be expressed as \begin{eqnarray*} P(X\leq x, N\leq n)=p\sum_{j=1}^n(1-p)^{j-1}\frac{\Gamma_{\beta x}(j\alpha)}{\Gamma(j\alpha)}, \end{eqnarray*} for $x>0$ and $n\in\mathbb{N}$, where $\Gamma_x(\alpha)=\int_0^xt^{\alpha-1}e^{-t}dt$ is the incomplete gamma function. We will denote $(X,N)\sim\mbox{BGG}(\beta,\alpha,p)$.
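As an aside for practitioners, the representation (\ref{rep}) yields a direct sampler: given $N=n$, the sum of $n$ iid $\Gamma(\alpha,\beta)$ variables is $\Gamma(n\alpha,\beta)$. A minimal Python sketch (illustrative only, not part of the original derivation; the parameter values are arbitrary):

```python
import numpy as np

def sample_bgg(beta, alpha, p, size, rng):
    """Draw (X, N) ~ BGG(beta, alpha, p) via the stochastic representation:
    N ~ Geom(p) and, given N = n, X ~ Gamma(n*alpha, beta)."""
    n = rng.geometric(p, size=size)                    # support {1, 2, ...}
    x = rng.gamma(shape=alpha * n, scale=1.0 / beta)   # rate beta = scale 1/beta
    return x, n

rng = np.random.default_rng(0)
beta, alpha, p = 2.0, 1.5, 0.4
x, n = sample_bgg(beta, alpha, p, 200_000, rng)
# Sample means should be close to E(X) = alpha/(p*beta) and E(N) = 1/p
print(x.mean(), n.mean())
```

Drawing a single $\Gamma(n\alpha,\beta)$ variable per pair avoids simulating the individual gamma summands.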
We now show that $(pX,pN)\stackrel{d}{\rightarrow}(\alpha Z/\beta,Z)$ as $p\rightarrow0^+$, where `$\stackrel{d}{\rightarrow}$' denotes convergence in distribution and $Z$ is an exponential variable with mean 1; for $\alpha=1$, we obtain the result given in Proposition 2.3 of Kozubowski and Panorska (2005). For this, we use the moment generating function of the BGG distribution, which is given in Subsection 2.2. Hence, we have that $E(e^{tpX+spN})=\varphi(pt,ps)$, where $\varphi(\cdot,\cdot)$ is given by (\ref{mgf}). Using L'H\^opital's rule, one may check that $E(e^{tpX+spN})\rightarrow(1-s-\alpha t/\beta)^{-1}$ as $p\rightarrow0^+$, which is the moment generating function of $(\alpha Z/\beta,Z)$. \subsection{Marginal and conditional distributions} The marginal density of $X$ with respect to Lebesgue measure is an infinite mixture of gamma densities, given by \begin{eqnarray}\label{marginal} f_X(x)=\sum_{n=1}^\infty P(N=n)g(x;n\alpha,\beta)=\frac{px^{-1}e^{-\beta x}}{1-p}\sum_{n=1}^\infty\frac{[(\beta x)^\alpha(1-p)]^n}{\Gamma(n\alpha)}, \quad x>0. \end{eqnarray} Therefore, the BGG distribution has infinite mixture of gamma and geometric marginals. Some alternative expressions for the marginal density of $X$ can be obtained. For example, for $\alpha=1$, we obtain the exponential density. Further, with help from Wolfram\footnote{http://www.wolframalpha.com/}, for $\alpha=1/2, 2, 3,4$, we have that $$f_X(x)=p\beta^{1/2} x^{-1/2}e^{-\beta x}\{a(x)e^{a(x)^2}(1+\mbox{erf}(a(x)))+\pi^{-1/2}\},$$ $$f_X(x)=\frac{p\beta e^{-\beta x}}{\sqrt{1-p}}\sinh(\beta x\sqrt{1-p}),$$ $$f_X(x)=\frac{px^{-1} e^{-\beta x}}{3(1-p)}a(x)^{1/3}e^{-a(x)^{1/3}/2}\{e^{3a(x)^{1/3}/2}-2\sin(1/6(3\sqrt{3}a(x)^{1/3}+\pi))\},$$ and $$f_X(x)=\frac{px^{-1} e^{-\beta x}}{2(1-p)}a(x)^{1/4}\{\sinh(a(x)^{1/4})-\sin(a(x)^{1/4})\},$$ respectively, where $a(x)\equiv a(x;\beta,\alpha,p)=(1-p)(\beta x)^\alpha$ and $\mbox{erf}(x)=2\pi^{-1/2}\int_0^xe^{-t^2}dt$ is the error function.
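The series in (\ref{marginal}) is straightforward to evaluate numerically. The sketch below (illustrative only; truncation point and parameter values are arbitrary) checks the truncated series against the closed form for $\alpha=2$:

```python
import numpy as np
from scipy.special import gammaln

def bgg_marginal_pdf(x, beta, alpha, p, terms=300):
    """Marginal density of X (an infinite mixture of gamma densities),
    with the series truncated at `terms`; log-space for numerical stability."""
    n = np.arange(1, terms + 1)
    log_t = n * np.log((1 - p) * (beta * x) ** alpha) - gammaln(alpha * n)
    return p / ((1 - p) * x) * np.exp(-beta * x) * np.exp(log_t).sum()

# For alpha = 2 the series collapses to the sinh expression given above
beta, p, x = 2.0, 0.4, 0.9
series = bgg_marginal_pdf(x, beta, 2.0, p)
closed = p * beta * np.exp(-beta * x) * np.sinh(beta * x * np.sqrt(1 - p)) / np.sqrt(1 - p)
print(series, closed)   # both approximately 0.323
```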
Figure \ref{marginaldensities} shows some plots of the marginal density of $X$ for $\beta=1$, $p=0.2,0.8$ and some values of $\alpha$. \begin{figure}[h!] \centering \includegraphics[width=0.33\textwidth]{figura_2.pdf}\includegraphics[width=0.33\textwidth]{figura_8.pdf} \caption{Plots of the marginal density of $X$ for $\beta=1$, $\alpha=0.5,1,2,3,4$, $p=0.2$ (left) and $p=0.8$ (right).} \label{marginaldensities} \end{figure} We now obtain some conditional distributions which may be useful in goodness-of-fit analyses when the BGG distribution is assumed to model real data (see Section 4). Let $m\leq n$ be positive integers and $x>0$. The conditional cdf of $(X,N)$ given $N\leq n$ is $$P(X\leq x,N\leq m|N\leq n)=\frac{p}{1-(1-p)^n}\sum_{j=1}^m(1-p)^{j-1}\frac{\Gamma_{\beta x}(j\alpha)}{\Gamma(j\alpha)}.$$ We have that $P(X\leq x| N\leq n)$ is given by the right side of the above expression with $n$ replacing $m$. For $0<x\leq y$ and $n\in\mathbb{N}$, the conditional cdf of $(X,N)$ given $X\leq y$ is $$P(X\leq x,N\leq n|X\leq y)=\frac{\sum_{j=1}^n(1-p)^{j-1}\Gamma_{\beta x}(j\alpha)/\Gamma(j\alpha)}{\sum_{j=1}^\infty (1-p)^{j-1}\Gamma_{\beta y}(j\alpha)/\Gamma(j\alpha)}.$$ The conditional probability $P(N\leq n|X\leq y)$ is given by the right side of the above expression with $y$ replacing $x$. From (\ref{density}) and (\ref{marginal}), we obtain that the conditional probability mass function of $N$ given $X=x$ is \begin{eqnarray*}\label{NgivenX} P(N=n|X=x)=\frac{[(1-p)(\beta x)^{\alpha}]^n/\Gamma(\alpha n)}{\sum_{j=1}^\infty[(1-p)(\beta x)^{\alpha}]^j/\Gamma(j\alpha)}, \end{eqnarray*} for $n\in\mathbb{N}$. If $\alpha$ is known, the above probability mass function belongs to the one-parameter power series class of distributions; for instance, see Noack (1950). In this case, the parameter would be $(1-p)(\beta x)^\alpha$. 
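This conditional probability mass function is easy to evaluate by truncating the normalizing series. The sketch below (illustrative only; parameter values and truncation point are arbitrary) checks the truncation against the closed form available for $\alpha=2$:

```python
import math
import numpy as np
from scipy.special import gammaln

def pmf_n_given_x(n, x, beta, alpha, p, terms=400):
    """P(N = n | X = x) for the BGG law, truncating the normalizing series
    at `terms`; weights are computed in log-space to avoid overflow."""
    j = np.arange(1, terms + 1)
    log_w = j * np.log((1 - p) * (beta * x) ** alpha) - gammaln(alpha * j)
    w = np.exp(log_w - log_w.max())
    return w[n - 1] / w.sum()

# Closed form for alpha = 2 (sinh expression), arbitrary illustrative values
beta, x, p, n = 2.0, 0.7, 0.4, 2
y = beta * x * np.sqrt(1 - p)
val = pmf_n_given_x(n, x, beta, 2.0, p)
closed = (1 - p) ** (n - 0.5) * (beta * x) ** (2 * n - 1) / (
    math.factorial(2 * n - 1) * np.sinh(y))
print(val, closed)   # both approximately 0.162
```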
For $\alpha=1$, we obtain that $N-1$ given $X=x$ follows a Poisson distribution with parameter $\beta x(1-p)$; this agrees with formula (7) of Kozubowski and Panorska (2005). For the choice $\alpha=2$, we have that $$P(N=n|X=x)=\frac{(1-p)^{n-1/2}(\beta x)^{2n-1}}{(2n-1)!\sinh(\beta x\sqrt{1-p})},$$ where $n\in\mathbb{N}$. \subsection{Moments} The moment generating function (mgf) of the BGG law is \begin{eqnarray*} \varphi(t,s)=E\left(e^{tX+sN}\right)=E\left[e^{sN}E\left(e^{tX}|N\right)\right]=E\left\{\left[e^s\left(\frac{\beta}{\beta-t}\right)^\alpha\right]^N\right\}, \quad t<\beta,\, s\in\mathbb{R}, \end{eqnarray*} and then \begin{eqnarray}\label{mgf} \varphi(t,s)=\frac{pe^s\beta^\alpha}{(\beta-t)^\alpha-e^s\beta^\alpha(1-p)}, \end{eqnarray} for $t<\beta\{1-[(1-p)e^s]^{1/\alpha}\}$. The characteristic function may be obtained in a similar way and is given by \begin{eqnarray}\label{cf} \Phi(t,s)=\frac{pe^{is}\beta^\alpha}{(\beta-it)^\alpha-e^{is}\beta^\alpha(1-p)}, \end{eqnarray} for $t,s\in\mathbb{R}$. With this, the product and marginal moments can be obtained by computing $E(X^mN^k)=\partial^{m+k}\varphi(t,s)/\partial t^m\partial s^k|_{t=s=0}$ or $E(X^mN^k)=(-i)^{m+k}\partial^{m+k}\Phi(t,s)/\partial t^m\partial s^k|_{t=s=0}$. Hence, we obtain the following expression for the product moments of the random vector $(X,N)$: \begin{eqnarray}\label{pm} E(X^mN^k)=\frac{p\Gamma(m)}{\beta^m}\sum_{n=1}^\infty\frac{n^k(1-p)^{n-1}}{B(\alpha n,m)}, \end{eqnarray} where $B(a,b)=\Gamma(a)\Gamma(b)/\Gamma(a+b)$, for $a,b>0$, is the beta function. In particular, we obtain that $E(X)=\alpha(p\beta)^{-1}$, $E(N)=p^{-1}$ and the covariance matrix $\Sigma$ of $(X,N)$ is given by \begin{eqnarray}\label{cov} \Sigma= \left(\begin{array}{ll} \frac{(1-p)\alpha^2}{p^2\beta^2}+\frac{\alpha}{\beta^2p} & \frac{(1-p)\alpha}{\beta p^2}\\ \frac{(1-p)\alpha}{\beta p^2} & \frac{1-p}{p^2}\\ \end{array}\right).
\end{eqnarray} The correlation coefficient $\rho$ between $X$ and $N$ is $\rho=\sqrt{(1-p)/(1-p+p/\alpha)}$. Let $\rho^*=\sqrt{1-p}$ denote the correlation coefficient of a bivariate random vector following the BEG law. For $\alpha\leq1$, we have $\rho\leq\rho^*$, and for $\alpha>1$, it follows that $\rho>\rho^*$. Figure \ref{coefcorr} shows some plots of the correlation coefficient of the BGG law as a function of $p$ for some values of $\alpha$. \begin{figure}[h] \centering \includegraphics[width=0.40\textwidth]{figura1a.pdf} \caption{Plots of the correlation coefficient of the BGG law as a function of $p$ for $\alpha=0.1,0.5,1,1.5,3$.} \label{coefcorr} \end{figure} From (\ref{mgf}), we find that the marginal mgf of $X$ is given by $$\varphi(t)=\frac{p\beta^\alpha}{(\beta-t)^\alpha-\beta^\alpha(1-p)},$$ for $t<\beta\{1-(1-p)^{1/\alpha}\}$. The following expression for the $r$th moment of $X$ can be obtained from the above formula or from (\ref{pm}): $$E(X^r)=\frac{p\Gamma(r)}{\beta^r}\sum_{n=1}^\infty\frac{(1-p)^{n-1}}{B(\alpha n,r)}.$$ We notice that the above expression is valid for any real $r>0$. \subsection{Infinite divisibility, geometric stability and representations} We now show that the BGG law is infinitely divisible, just as the BEG law is. Following Kozubowski and Panorska (2005), we define the bivariate random vector $$(R,v)=\left(\sum_{i=1}^{1+nT}G_{i},\frac{1}{n}+T\right),$$ where the $G_{i}$'s are iid random variables following the $\Gamma(\alpha/n,\beta)$ distribution, independent of the random variable $T$, which follows a negative binomial $\mbox{NB}(r,p)$ distribution with probability mass function \begin{eqnarray}\label{nbpf} P(T=k)=\frac{\Gamma(k+r)}{k!\Gamma(r)}p^r(1-p)^k, \quad k\in\mathbb{N}\cup\{0\}, \end{eqnarray} where $r=1/n$.
The moment generating function of $(R,v)$ is given by \begin{eqnarray*}\label{idmgf} E\left(e^{tR+sv}\right)&=&E\left[e^{s/n+sT}E\left(e^{t\sum_{i=1}^{1+nT}G_i}\big|T\right)\right]\\ &=&e^{s/n}\left(\frac{\beta}{\beta-t}\right)^{\alpha/n}E\left\{\left[e^s\left(\frac{\beta}{\beta-t}\right)^\alpha\right]^T\right\}\\ &=&\left\{\frac{pe^s\beta^\alpha}{(\beta-t)^\alpha-e^s\beta^\alpha(1-p)}\right\}^r, \end{eqnarray*} which is valid for $t<\beta\{1-[(1-p)e^s]^{1/\alpha}\}$ and $s\in\mathbb{R}$. In a similar way, we obtain that the characteristic function is given by \begin{eqnarray}\label{idcf} E(e^{itR+isv})=\left\{\frac{pe^{is}\beta^\alpha}{(\beta-it)^\alpha-e^{is}\beta^\alpha(1-p)}\right\}^r, \end{eqnarray} for $t,s\in\mathbb{R}$. With this, we have that $E(e^{itR+isv})=\Phi(t,s)^{1/n}$, where $\Phi(t,s)$ is the characteristic function of the BGG law given in (\ref{cf}). In other words, the BGG distribution is infinitely divisible. The exponential, geometric and BEG distributions are closed under geometric summation. We now show that our distribution also enjoys this geometric stability property. Let $\{(X_i,N_i)\}_{i=1}^\infty$ be iid random vectors following the $\mbox{BGG}(\beta,\alpha,p)$ distribution, independent of $M$, where $M\sim\mbox{Geom}(q)$, with $0<q<1$. By using (\ref{mgf}) and the probability generating function of the geometric distribution, one may easily check that $$\sum_{i=1}^M(X_i,N_i)\sim\mbox{BGG}(\beta,\alpha,pq).$$ From the above result, we find another stochastic representation of the BGG law, which generalizes Proposition 4.2 of Kozubowski and Panorska (2005): $$(X,N)\stackrel{d}{=}\sum_{i=1}^M(X_i,N_i),$$ where $\{(X_i,N_i)\}_{i=1}^\infty\stackrel{iid}{\sim}\mbox{BGG}(\beta,\alpha,p/q)$, with $0<p<q<1$, and $M$ is defined as before.
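The geometric stability property can be checked by simulation: summing $M\sim\mbox{Geom}(q)$ iid $\mbox{BGG}(\beta,\alpha,p)$ vectors should reproduce $\mbox{BGG}(\beta,\alpha,pq)$ moments. An illustrative sketch (arbitrary parameter values), using the fact that a sum of $m$ iid $\mbox{Geom}(p)$ variables equals $m$ plus an $\mbox{NB}(m,p)$ count:

```python
import numpy as np

rng = np.random.default_rng(1)
beta, alpha, p, q = 2.0, 1.5, 0.6, 0.5
size = 100_000

m = rng.geometric(q, size=size)                 # M ~ Geom(q)
# N-part: sum of M iid Geom(p) variables = M + NB(M, p) failure counts
total_n = m + rng.negative_binomial(m, p)
# X-part: given the total count, the gamma sum has shape alpha * total_n
total_x = rng.gamma(shape=alpha * total_n, scale=1.0 / beta)

# BGG(beta, alpha, p*q) has E(N) = 1/(p*q) and E(X) = alpha/(p*q*beta)
print(total_n.mean(), total_x.mean())
```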
In what follows, another representation of the BGG law is provided by showing that it is a convolution of a bivariate distribution (with gamma and degenerate-at-1 marginals) and a compound Poisson distribution. Let $\{Z_i\}_{i=1}^\infty$ be a sequence of iid random variables following a logarithmic distribution with probability mass function $P(Z_i=k)=(1-p)^k(\lambda k)^{-1}$, for $k\in\mathbb{N}$, where $\lambda=-\log p$. Define the random variable $Q\sim\mbox{Poisson}(\lambda)$, independent of the $Z_i$'s. Given the sequence $\{Z_i\}_{i=1}^\infty$, let $G_i\sim\Gamma(\alpha Z_i,\beta)$, for $i\in\mathbb{N}$, be a sequence of independent random variables and let $G\sim\Gamma(\alpha,\beta)$ be independent of all previously defined variables. Then, we have that \begin{eqnarray}\label{cp} (X,N)\stackrel{d}{=}(G,1)+\sum_{i=1}^{Q}(G_i,Z_i). \end{eqnarray} Taking $\alpha=1$ in (\ref{cp}), we obtain Proposition 4.3 of Kozubowski and Panorska (2005). To show that the above representation holds, we use the probability generating functions $E\left(t^Q\right)=e^{\lambda(t-1)}$ (for $t\in\mathbb{R}$) and $E\left(t^{Z_i}\right)=\log(1-(1-p)t)/\log p$ (for $t<(1-p)^{-1}$). With this, it follows that \begin{eqnarray}\label{cp1} E\left(e^{t(G+\sum_{i=1}^QG_i)+s(1+\sum_{i=1}^QZ_i)}\right)&=&e^s\left(\frac{\beta}{\beta-t}\right)^\alpha E\left\{\left[E\left(e^{tG_1+sZ_1}\right)\right]^Q\right\}\nonumber\\ &=&e^s\left(\frac{\beta}{\beta-t}\right)^\alpha e^{\lambda\left\{E\left(e^{tG_1+sZ_1}\right)-1\right\}}, \end{eqnarray} for $t<\beta$. Furthermore, for $t<\beta\{1-[(1-p)e^s]^{1/\alpha}\}$, we have that \begin{eqnarray*}\label{cp2} E\left(e^{tG_1+sZ_1}\right)=E\left\{\left[\frac{e^s\beta^\alpha}{(\beta-t)^\alpha}\right]^{Z_1}\right\}=\frac{\log\{1-(1-p)e^s\beta^\alpha/(\beta-t)^\alpha\}}{\log p}. \end{eqnarray*} By using the above result in (\ref{cp1}), we obtain the representation (\ref{cp}).
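Representation (\ref{cp}) can also be verified by simulation. The sketch below (illustrative only; arbitrary parameter values) draws the compound Poisson part with `scipy.stats.logser`, whose pmf matches that of the $Z_i$'s, and checks that the resulting $N$ behaves like a $\mbox{Geom}(p)$ variable; note that $P(N=1)=P(Q=0)=e^{-\lambda}=p$:

```python
import numpy as np
from scipy.stats import logser

rng = np.random.default_rng(2)
beta, alpha, p = 2.0, 1.5, 0.4
lam, size = -np.log(p), 50_000

q = rng.poisson(lam, size=size)                            # Q ~ Poisson(-log p)
z_all = logser.rvs(1 - p, size=q.sum(), random_state=rng)  # Z_i, logarithmic(1-p)
owner = np.repeat(np.arange(size), q)                      # draw each Z_i belongs to
z_sum = np.bincount(owner, weights=z_all, minlength=size)
n = 1 + z_sum.astype(int)                                  # N = 1 + sum_{i<=Q} Z_i
# Given N, the continuous part G + sum G_i is Gamma(alpha*N, beta), so only
# the discrete part needs checking: E(N) = 1/p and P(N = 1) = p
print(n.mean(), (n == 1).mean())
```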
\section{Estimation and inference} Let $(X_1,N_1)$, \ldots, $(X_n,N_n)$ be a random sample from the $\mbox{BGG}(\beta,\alpha,p)$ distribution and let $\theta=(\beta,\alpha,p)^\top$ be the parameter vector. The log-likelihood function $\ell=\ell(\theta)$ is given by \begin{eqnarray}\label{loglik} \ell&\propto& n\alpha\log\beta\,\bar{N}_n+n\log p-n\beta\bar{X}_n+n\log(1-p)(\bar{N}_n-1)\nonumber\\ &&+\sum_{i=1}^n\left\{\alpha N_i\log X_i-\log\Gamma(\alpha N_i)\right\}, \end{eqnarray} where $\bar{X}_n=\sum_{i=1}^nX_i/n$ and $\bar{N}_n=\sum_{i=1}^nN_i/n$. The score function $U(\theta)=(\partial\ell/\partial\beta,\partial\ell/\partial\alpha,\partial\ell/\partial p)^\top$ associated with the log-likelihood function (\ref{loglik}) has components \begin{eqnarray*} \frac{\partial\ell}{\partial\beta}=n\left(\frac{\alpha\bar{N}_n}{\beta}-\bar{X}_n\right), \quad \frac{\partial\ell}{\partial\alpha}=n\bar{N}_n\log\beta +\sum_{i=1}^n\left\{N_i\log X_i-N_i\Psi(\alpha N_i)\right\} \end{eqnarray*} and \begin{eqnarray}\label{scorep} \frac{\partial\ell}{\partial p}=\frac{n}{p}-\frac{n(\bar{N}_n-1)}{1-p}, \end{eqnarray} where $\Psi(x)=d\log\Gamma(x)/dx$. By solving the nonlinear system of equations $U(\theta)=0$, it follows that the maximum likelihood estimators (MLEs) of the parameters satisfy \begin{eqnarray}\label{mles} \widehat\beta=\widehat\alpha\frac{\bar{N}_n}{\bar{X}_n},\quad \widehat{p}=\frac{1}{\bar{N}_n}\quad\mbox{and}\quad \sum_{i=1}^nN_i\Psi(\widehat\alpha N_i)-n\bar{N}_n\log\left(\frac{\widehat\alpha \bar{N}_n}{\bar{X}_n}\right)=\sum_{i=1}^nN_i\log X_i. \end{eqnarray} Since the MLE of $\alpha$ cannot be found in closed form, a nonlinear optimization algorithm, such as a Newton or quasi-Newton method, is needed. We are now interested in constructing confidence intervals for the parameters. For this, Fisher's information matrix is required.
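In practice, the likelihood equations above reduce to a one-dimensional search over $\alpha$ once $\widehat p=1/\bar N_n$ and $\widehat\beta=\widehat\alpha\bar N_n/\bar X_n$ are substituted. A sketch of this profile-likelihood approach on simulated data (illustrative only; the optimizer bounds and parameter values are arbitrary choices):

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize_scalar

def bgg_mle(x, n):
    """MLEs of (beta, alpha, p): p and beta have closed forms given alpha;
    alpha maximizes the profile log-likelihood numerically."""
    m, xbar, nbar = len(x), x.mean(), n.mean()

    def neg_profile(a):
        beta = a * nbar / xbar                    # beta profiled out
        return -(m * a * nbar * np.log(beta) - beta * m * xbar
                 + np.sum(a * n * np.log(x) - gammaln(a * n)))

    a_hat = minimize_scalar(neg_profile, bounds=(1e-2, 1e2), method="bounded").x
    return a_hat * nbar / xbar, a_hat, 1.0 / nbar

# Sanity check on simulated BGG data
rng = np.random.default_rng(3)
beta, alpha, p = 2.0, 1.5, 0.4
n = rng.geometric(p, size=20_000)
x = rng.gamma(shape=alpha * n, scale=1.0 / beta)
print(bgg_mle(x, n))   # approximately (2.0, 1.5, 0.4)
```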
The information matrix $J(\theta)$ is \begin{eqnarray}\label{inform} J(\theta)= \left(\begin{array}{lll} \kappa_{\beta\beta} & \kappa_{\beta\alpha} & 0\\ \kappa_{\beta\alpha} & \kappa_{\alpha\alpha} & 0\\ 0 & 0 & \kappa_{pp} \\ \end{array}\right), \end{eqnarray} with $$\kappa_{\beta\beta}=\frac{\alpha}{\beta^2p},\quad \kappa_{\beta\alpha}=-\frac{1}{\beta p},\quad \kappa_{\alpha\alpha}=p\sum_{j=1}^\infty j^2(1-p)^{j-1}\Psi'(j\alpha) \quad \mbox{and} \quad \kappa_{pp}=\frac{1}{p^2(1-p)},$$ where $\Psi'(x)=d\Psi(x)/dx$. Standard large-sample theory gives us that $\sqrt{n}(\widehat\theta-\theta)\stackrel{d}{\rightarrow}N_3\left(0,J^{-1}(\theta)\right)$ as $n\rightarrow\infty$, where $J^{-1}(\theta)$ is the inverse of the matrix $J(\theta)$ defined in (\ref{inform}). The asymptotic multivariate normal distribution of $\sqrt{n}(\widehat\theta-\theta)$ can be used to construct approximate confidence intervals and confidence regions for the parameters. Further, we can compute the maximum values of the unrestricted and restricted log-likelihoods to construct the likelihood ratio (LR) statistic for testing some sub-models of the BGG distribution. For example, we may use the LR statistic for testing the hypotheses $H_0 \mbox{:} \,\,\alpha=1$ versus $H_1 \mbox{:} \,\,\alpha\neq1$, which corresponds to testing the BEG distribution against the BGG distribution. \subsection{A reparametrization} We here propose a reparametrization of the bivariate gamma-geometric distribution and show its advantages over the previous one. Consider the reparametrization $\mu=\alpha/\beta$, with $\alpha$ and $p$ as before, and define the parameter vector $\theta^*=(\mu,\alpha,p)^\top$. Hence, the density (\ref{density}) now becomes \begin{eqnarray*} f^*_{X,N}(x,n)=\frac{(\alpha/\mu)^{n\alpha}}{\Gamma(\alpha n)}x^{n\alpha-1}e^{-\alpha x/\mu}p(1-p)^{n-1}, \quad x>0,\,\, n\in\mathbb{N}. \end{eqnarray*} We shall denote $(X,N)\sim\mbox{BGG}(\mu,\alpha,p)$.
Therefore, if $(X_1,N_1)$, \ldots, $(X_n,N_n)$ is a random sample from the $\mbox{BGG}(\mu,\alpha,p)$ distribution, the log-likelihood function $\ell^*=\ell(\theta^*)$ is given by \begin{eqnarray}\label{loglikr} \ell^*&\propto& n\alpha\log\left(\frac{\alpha}{\mu}\right)\bar{N}_n+n\log p-n\frac{\alpha}{\mu}\bar{X}_n+n\log(1-p)(\bar{N}_n-1)\nonumber\\ &&+\sum_{i=1}^n\left\{\alpha N_i\log X_i-\log\Gamma(\alpha N_i)\right\}. \end{eqnarray} The score function associated with (\ref{loglikr}) is $U^*(\theta^*)=(\partial\ell^*/\partial\mu,\partial\ell^*/\partial\alpha,\partial\ell^*/\partial p)^\top$, where \begin{eqnarray*} \frac{\partial\ell^*}{\partial\mu}=\frac{n\alpha}{\mu}\left(\frac{\bar{X}_n}{\mu}-\bar{N}_n\right), \quad \frac{\partial\ell^*}{\partial\alpha}=n\bar{N}_n\log\left(\frac{\alpha}{\mu}\right)+\sum_{i=1}^nN_i\{\log X_i-\Psi(\alpha N_i)\} \end{eqnarray*} and $\partial\ell^*/\partial p$ is given by (\ref{scorep}). The MLE of $p$ is given (as before) in (\ref{mles}), and the MLEs of $\mu$ and $\alpha$ are obtained from $$\widehat\mu=\frac{\bar{X}_n}{\bar{N}_n}\quad\mbox{and}\quad \sum_{i=1}^nN_i\Psi(\widehat\alpha N_i)-n\bar{N}_n\log\left(\widehat\alpha\frac{\bar{N}_n}{\bar{X}_n}\right)=\sum_{i=1}^nN_i\log X_i.$$ As before, a nonlinear optimization algorithm is needed to find the MLE of $\alpha$. Under this reparametrization, Fisher's information matrix $J^*(\theta^*)$ becomes \begin{eqnarray*} J^*(\theta^*)= \left(\begin{array}{lll} \kappa^*_{\mu\mu} & 0 & 0\\ 0 & \kappa^*_{\alpha\alpha} & 0\\ 0 & 0 & \kappa^*_{pp} \\ \end{array}\right), \end{eqnarray*} with $$\kappa^*_{\mu\mu}=\frac{\alpha}{\mu^2p},\quad \kappa^*_{\alpha\alpha}=p\sum_{j=1}^\infty j^2(1-p)^{j-1}\Psi'(j\alpha)-\frac{1}{\alpha p}\quad \mbox{and} \quad \kappa^*_{pp}=\kappa_{pp}.$$ The asymptotic distribution of $\sqrt{n}(\widehat\theta^*-\theta^*)$ is trivariate normal with null mean and covariance matrix $J^{*\,-1}(\theta^*)=\mbox{diag}\{1/\kappa^*_{\mu\mu}, 1/\kappa^*_{\alpha\alpha},1/\kappa_{pp}\}$.
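Because $J^*(\theta^*)$ is diagonal, asymptotic standard errors follow directly from its diagonal, with the series in $\kappa^*_{\alpha\alpha}$ truncated numerically. An illustrative sketch (the truncation point is an arbitrary choice), evaluated at the estimates and sample size $n=549$ reported in Section 4:

```python
import numpy as np
from scipy.special import polygamma

def bgg_info_diag(mu, alpha, p, terms=500):
    """Per-observation Fisher information diagonal under the (mu, alpha, p)
    parametrization; the series in kappa*_{alpha,alpha} is truncated at `terms`."""
    j = np.arange(1, terms + 1)
    k_mm = alpha / (mu ** 2 * p)
    k_aa = p * np.sum(j ** 2 * (1 - p) ** (j - 1) * polygamma(1, j * alpha)) \
           - 1.0 / (alpha * p)
    k_pp = 1.0 / (p ** 2 * (1 - p))
    return np.array([k_mm, k_aa, k_pp])

# Asymptotic standard errors: sqrt of inverse information divided by n
se = np.sqrt(1.0 / (549 * bgg_info_diag(0.0082, 0.8805, 0.5093)))
print(se)   # close to the standard errors reported in Section 4
```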
We see that under this reparametrization the parameters are orthogonal in the sense of Cox and Reid (1987), since the information matrix is diagonal. With this, we obtain desirable properties such as asymptotic independence of the estimates of the parameters. The reader is referred to Cox and Reid (1987) for more details. \section{Application} Here, we show the usefulness of the bivariate gamma-geometric law applied to a real data set. We consider daily exchange rates between the Brazilian real and U.K. pounds, quoted in Brazilian real, covering May 22, 2001 to December 31, 2009. With this, we obtain the daily log-returns, that is, the logarithms of the ratios of consecutive exchange rates. Figure \ref{log-returns} illustrates the daily exchange rates and the log-returns. \begin{figure}[h!] \centering \includegraphics[width=0.33\textwidth]{exchange_rate_plot.pdf}\includegraphics[width=0.33\textwidth]{log-returns_plot.pdf} \caption{Graphics of the daily exchange rates and log-returns.} \label{log-returns} \end{figure} We will jointly model the magnitude and duration of the consecutive positive log-returns by using the BGG law. We call attention to the fact that the duration of the consecutive positive log-returns is the same as the duration of the growth periods of the exchange rates. The data set consists of 549 pairs $(X_i,N_i)$, where $X_i$ and $N_i$ are the magnitude and duration as described before, for $i=1,\ldots,549$. We notice that this approach of looking jointly at the magnitude and duration of the consecutive positive log-returns was first proposed by Kozubowski and Panorska (2005) with the BEG model, which showed a good fit for the other currencies considered there. Suppose $\{(X_i,N_i)\}_{i=1}^{549}$ are iid random vectors following the $\mbox{BGG}(\mu,\alpha,p)$ distribution. We work with the reparametrization proposed in Subsection 3.1.
Table \ref{summaryfit} presents a summary of the fit of our model, containing the maximum likelihood estimates of the parameters with their respective standard errors, and asymptotic confidence intervals at the 5\% significance level. Note that the confidence interval for $\alpha$ does not contain the value $1$. Hence, by the Wald test, we reject the hypothesis that the data come from the BEG distribution in favor of the BGG distribution, at the 5\% significance level. We also perform the likelihood ratio (LR) test and obtain that the LR statistic equals $5.666$, with associated p-value $0.0173$. Therefore, at the 5\% significance level, the likelihood ratio test also rejects the hypothesis that the data come from the BEG distribution in favor of the BGG distribution, in agreement with the decision of the Wald test. The empirical and fitted correlation coefficients are equal to 0.6680 and 0.6775, respectively; therefore, we have good agreement between them. \begin{table}[h!] \centering \begin{tabular}{c|cccc} \hline Parameters & Estimate & Stand. error & Inf. bound & Sup. bound \\ \hline $\mu$ & 0.0082 & 0.00026 & 0.0076 & 0.0087 \\ $\alpha$ & 0.8805 & 0.04788 & 0.7867 & 0.9743 \\ $p$ & 0.5093 & 0.01523 & 0.4794 & 0.5391 \\ \hline \end{tabular} \caption{Maximum likelihood estimates of the parameters, standard errors and bounds of the asymptotic confidence intervals at the 5\% significance level.}\label{summaryfit} \end{table} The BEG model was motivated by an empirical observation that the magnitude of the consecutive positive log-returns followed the same type of distribution as the positive one-day log-returns (see Kozubowski and Panorska, 2005). Indeed, the marginal distribution of $X$ in the BEG model is also exponential (with mean $p^{-1}\mu$), just as the positive daily log-returns are (with mean $\mu$). This stability of the returns was observed earlier by Kozubowski and Podg\'orski (2003), with the log-Laplace distribution.
We notice that the BGG distribution does not enjoy this stability property, since the marginal distribution of $X$ is an infinite mixture of gamma distributions. We now show that the data set considered here does not present this stability. Denote the $i$th positive one-day log-return by $D_i$ and define $D_i^*=p^{-1}D_i$. If the data were generated from a $\mbox{BEG}(\mu,p)$ distribution, then an empirical quantile-quantile plot between the $X_i$'s ($y$-axis) and the $D_i$'s ($x$-axis) would lie around the straight line $y=p^{-1}x$, for $x>0$. Figure \ref{qqplot} presents this plot, and we observe that a considerable part of the points lie below the straight line $y=1.9636 x$ (we replace $p$ by its MLE $\widehat p=0.5093$). Therefore, the present data set seems to have been generated by a distribution that lacks the stability property discussed above. In order to confirm this, we test the hypothesis that the $X_i$'s and the $D_i^*$'s have the same distribution. In the BEG model, both follow an exponential distribution with mean $p^{-1}\mu$. Since $\widehat p$ converges in probability to $p$ (as $n\rightarrow\infty$), we perform the test with $\widehat p$ replacing $p$. The Kolmogorov-Smirnov statistic and associated p-value are equal to 0.0603 and 0.0369, respectively. Therefore, at the 5\% significance level, we reject the hypothesis that the $X_i$'s and the $D_i^*$'s have the same distribution. \begin{figure}[h!] \centering \includegraphics[width=0.4\textwidth]{qqplot.pdf} \caption{Empirical quantile-quantile plot between cumulative consecutive positive log-returns and positive one-day log-returns, with the straight line $y=1.9636 x$. The range $(x,y)\in(0,0.015)\times(0,0.030)$ covers 85\% of the data set.} \label{qqplot} \end{figure} Figure \ref{fit_densities} presents the fitted marginal density (mixture of gamma densities) of the cumulative log-returns with the histogram of the data, and the empirical and fitted survival functions.
These plots show a good fit of the mixture of gamma distributions to the data. This is confirmed by the Kolmogorov-Smirnov (KS) test, which we use to measure the goodness-of-fit of the mixture of gamma distributions to the data. The KS statistic and its p-value are equal to 0.0482 and 0.1557, respectively. Therefore, at any usual significance level, we accept the hypothesis that the mixture of gamma distributions is adequate to fit the cumulative log-returns. \begin{figure}[h!] \centering \includegraphics[width=0.33\textwidth]{fitted_density.pdf}\includegraphics[width=0.33\textwidth]{fitted_survival.pdf} \caption{Plot on the left shows the fitted mixture of gamma densities (density of $X$) with the histogram of the data. Plot on the right presents the empirical and fitted theoretical (mixture of gamma) survival functions. } \label{fit_densities} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.33\textwidth]{dailyposlog-returnsdensity.pdf}\includegraphics[width=0.33\textwidth]{dailyposlog-returnssurvival.pdf} \caption{Picture on the left shows the histogram and fitted gamma density for the daily positive log-returns. Empirical survival and fitted gamma survival are shown in the picture on the right.} \label{dailypositiveplots} \end{figure} Plots of the histogram, fitted gamma density and empirical and fitted survival functions for the daily positive log-returns are presented in Figure \ref{dailypositiveplots}. The good performance of the gamma distribution may be seen from these graphics. In Table \ref{marginalgeom} we show the absolute frequency, relative frequency and fitted geometric model for the duration in days of the consecutive positive log-returns. From this, we observe that the geometric distribution fits the data well. This is confirmed by Pearson's chi-squared (denoted by $\chi^2$) test, where our null hypothesis is that the duration follows a geometric distribution.
The $\chi^2$ statistic equals 42 (with 36 degrees of freedom) and has associated p-value 0.2270, so we accept (at any usual significance level) the hypothesis that the growth period follows a geometric distribution. We notice that the geometric distribution has also worked quite well for modeling the duration of the growth periods of exchange rates as part of the BEG model in Kozubowski and Panorska (2005). \begin{table}[h!] \centering \begin{tabular}{c|ccccccc} \hline $N\rightarrow$ & 1 & 2 & 3 & 4 & 5 & 6 & $\geq7$ \\ \hline Absolute frequency &269 & 136 & 85 & 34 & 15 & 6 & 4 \\ Relative frequency& 0.48998& 0.24772 & 0.15483 & 0.06193 & 0.02732 & 0.01093 & 0.00728 \\ Fitted model & 0.50928 & 0.24991 & 0.12264 & 0.06018 & 0.02953 & 0.01449 & 0.01396\\ \hline \end{tabular} \caption{Absolute and relative frequencies and fitted marginal probability mass function of $N$ (duration in days of the growth periods).}\label{marginalgeom} \end{table} \begin{figure}[h!] \centering \includegraphics[width=0.33\textwidth]{onedaycumulative.pdf}\includegraphics[width=0.33\textwidth]{fitted_conditional_density1.pdf} \includegraphics[width=0.33\textwidth]{twodaycumulative.pdf}\includegraphics[width=0.33\textwidth]{fitted_conditional_density2.pdf} \includegraphics[width=0.33\textwidth]{threedaycumulative.pdf}\includegraphics[width=0.33\textwidth]{fitted_conditional_density3.pdf} \caption{Plots of the fitted conditional density and survival functions of $X$ given $N=1$, $N=2$ and $N=3$. In the pictures of the density and survival functions, we also plot the histogram of the data and the empirical survival function, respectively.} \label{conditional_densities} \end{figure} \begin{figure}[h!]
\centering \includegraphics[width=0.33\textwidth]{fourdaycumulative.pdf}\includegraphics[width=0.33\textwidth]{fitted_conditional_density4.pdf} \includegraphics[width=0.33\textwidth]{fivedaycumulative.pdf}\includegraphics[width=0.33\textwidth]{fitted_conditional_density5.pdf} \caption{Plots of the fitted conditional density and survival functions of $X$ given $N=4$ and $N=5$. In the pictures of the density and survival functions, we also plot the histogram of the data and the empirical survival function, respectively.} \label{conditional_densities2} \end{figure} So far our analysis has shown that the bivariate gamma-geometric distribution and its marginals provide a suitable fit to the data. We end our analysis by verifying whether the conditional distributions of the cumulative log-returns given the duration also provide good fits to the data. As mentioned before, the conditional distribution of $X$ given $N=n$ is $\Gamma(n\alpha,\alpha/\mu)$. Figure \ref{conditional_densities} shows plots of the fitted density and fitted survival function of the conditional distributions of $X$ given $N=1,2,3$. The histograms of the data and the empirical survival functions are also displayed. The corresponding graphics for the conditional distributions of $X$ given $N=4,5$ are displayed in Figure \ref{conditional_densities2}. These graphics show a good performance of the gamma distribution in fitting the cumulative log-returns given the growth period (in days). We also use the Kolmogorov-Smirnov test to verify the goodness-of-fit of these conditional distributions. In Table \ref{conditionalX} we present the KS statistics and their associated p-values. In all cases considered, at any usual significance level, we accept the hypothesis that the data come from a gamma distribution with the parameters specified above. \begin{table}[h!]
\centering \begin{tabular}{c|ccccc} \hline Given $N\rightarrow$ & one-day & two-day & three-day & four-day & five-day\\ \hline KS statistic & 0.0720 & 0.0802 & 0.1002 & 0.1737 & 0.2242\\ p-value & 0.1229 & 0.3452 & 0.3377 & 0.2287 & 0.3809\\ \hline \end{tabular} \caption{Kolmogorov-Smirnov statistics and their associated p-values for the goodness-of-fit of the conditional distributions of the cumulative log-returns given the durations (one-day, two-day, three-day, four-day and five-day).}\label{conditionalX} \end{table} \section{The induced L\'evy process} As seen before, the bivariate gamma-geometric distribution is infinitely divisible; therefore, we have that (\ref{idcf}) is a characteristic function for any real $r>0$. This characteristic function is associated with the bivariate random vector \begin{eqnarray*} (R(r),v(r))=\left(\sum_{i=1}^{T}X_i+G,r+T\right), \end{eqnarray*} where $\{X_i\}_{i=1}^\infty$ are iid random variables following the $\Gamma(\alpha,\beta)$ distribution, $G\sim\Gamma(r\alpha,\beta)$, $T$ is a discrete random variable with $\mbox{NB}(r,p)$ distribution and all random variables involved are mutually independent. Hence, it follows that the BGG distribution induces a L\'evy process $\{(X(r),\mbox{NB}(r)),\,\, r\geq0\}$, which has the following stochastic representation: \begin{eqnarray}\label{flp} \{(X(r),\mbox{NB}(r)),\,\, r\geq0\}\stackrel{d}{=}\left\{\left(\sum_{i=1}^{NB(r)}X_i+G(r),r+\mbox{NB}(r)\right),\,\, r\geq0\right\}, \end{eqnarray} where the $X_i$'s are defined as before, $\{G(r),\,\, r\geq0\}$ is a gamma L\'evy process and $\{\mbox{NB}(r),\,\, r\geq0\}$ is a negative binomial L\'evy process, both with characteristic functions given by \begin{eqnarray*} E\left(e^{itG(r)}\right)=\left(\frac{\beta}{\beta-it}\right)^{\alpha r}, \quad t\in\mathbb{R}, \end{eqnarray*} and \begin{eqnarray*} E\left(e^{isNB(r)}\right)=\left(\frac{p}{1-(1-p)e^{is}}\right)^r, \quad s\in\mathbb{R}, \end{eqnarray*} respectively. 
All random variables and processes involved in (\ref{flp}) are mutually independent. From the process defined in (\ref{flp}), we may obtain other related L\'evy motions by deleting $r$ and/or $G(r)$. Here, we focus on the L\'evy process obtained from (\ref{flp}) by deleting $r$. In this case, we obtain the following stochastic representation for our process: \begin{eqnarray}\label{lp} \{(X(r),\mbox{NB}(r)),\,\, r\geq0\}\stackrel{d}{=}\left\{\left(G(r+\mbox{NB}(r)),\mbox{NB}(r)\right),\,\, r\geq0\right\}. \end{eqnarray} Since both processes (the left and the right ones of the equality in distribution) in (\ref{lp}) are L\'evy, the above result follows by noting that for all fixed $r$, we have $\sum_{i=1}^{NB(r)}X_i+G(r)|\mbox{NB}(r)=k\sim\Gamma(\alpha(r+k),\beta)$. One may also see that the above result follows from the stochastic self-similarity property discussed, for example, by Kozubowski and Podg\'orski (2007): a gamma L\'evy process subordinated to a negative binomial process with drift is again a gamma process. The characteristic function corresponding to (\ref{lp}) is given by \begin{eqnarray}\label{lpcf} \Phi^*(t,s)\equiv E\left(e^{itX(r)+isNB(r)}\right)=\left\{\frac{p\beta^\alpha}{(\beta-it)^\alpha-e^{is}\beta^\alpha(1-p)}\right\}^r, \end{eqnarray} for $t,s\in\mathbb{R}$. With this, it easily follows that the characteristic function of the marginal process $\{X(r),\,\, r\geq0\}$ is \begin{eqnarray*}\label{lpcfm} E\left(e^{itX(r)}\right)=\left\{\frac{p\beta^\alpha}{(\beta-it)^\alpha-\beta^\alpha(1-p)}\right\}^r. \end{eqnarray*} Since the above characteristic function corresponds to a random variable whose density is an infinite mixture of gamma densities (see Subsection 5.1), we have that $\{X(r),\,\, r\geq0\}$ is an infinite mixture of gamma L\'evy process (with negative binomial weights). Then, we obtain that the marginal processes of $\{(X(r),\mbox{NB}(r)),\,\, r\geq0\}$ are infinite mixture of gamma and negative binomial processes. 
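Representation (\ref{lp}) is straightforward to check by simulation. The sketch below (Python with NumPy; the parameter values are illustrative assumptions, not from the data analysis above) draws $\mbox{NB}(r)$, then a conditionally gamma variable with shape $\alpha(r+\mbox{NB}(r))$ and rate $\beta$, and compares the empirical mean of $X(r)$ with the analytic value $\alpha r/(\beta p)$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) parameter values of the BMixGNB(beta, alpha, p) process.
beta, alpha, p, r = 2.0, 1.5, 0.4, 2.0
n_draws = 200_000

# NB(r): pmf Gamma(n+r)/(n! Gamma(r)) * p^r * (1-p)^n, n = 0, 1, ...
m = rng.negative_binomial(r, p, size=n_draws)

# Representation (lp): X(r) | NB(r) = k  ~  Gamma(alpha*(r+k), rate beta).
x = rng.gamma(shape=alpha * (r + m), scale=1.0 / beta)

# Analytic mean: E[X(r)] = (alpha/beta) * (r + r*(1-p)/p) = alpha*r/(beta*p).
print(x.mean(), alpha * r / (beta * p))
```

With a couple of hundred thousand draws, the empirical and analytic means agree to roughly two decimal places.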
Therefore, we say that $\{(X(r),\mbox{NB}(r)),\,\, r\geq0\}$ is a $\mbox{BMixGNB}(\beta,\alpha,p)$ L\'evy process. We notice that, for the choice $\alpha=1$ in (\ref{lp}), we obtain the bivariate process with gamma and negative binomial marginals introduced by Kozubowski et al.\,(2008), named BGNB L\'evy motion. As noted by Kozubowski and Podg\'orski (2007), if $\{\widetilde{\mbox{NB}}(r),\,\, r\geq0\}$ is a negative binomial process, with parameter $q\in(0,1)$, independent of another negative binomial process $\{\mbox{NB}(r),\,\, r\geq0\}$ with parameter $p\in(0,1)$, then the time-changed process $\{\mbox{NB}^*(r),\,\,r\geq0\}=\{\mbox{NB}(r+\widetilde{\mbox{NB}}(r)),\,\,r\geq0\}$ is a negative binomial process with parameter $p^*=pq/(1-p+pq)$. With this and (\ref{lp}), we have that the time-changed process $\{(G(r+\mbox{NB}^*(r)),\mbox{NB}(r+\widetilde{\mbox{NB}}(r))),\,\, r\geq0\}$ is a $\mbox{BMixGNB}(\beta,\alpha,p^*)$ L\'evy process. In what follows, we derive basic properties of the bivariate distribution of the BMixGNB process for fixed $r>0$ and discuss estimation by maximum likelihood and inference for large samples. From now on, unless otherwise mentioned, we will consider $r>0$ fixed. \subsection{Basic properties of the bivariate process for fixed $r>0$} For simplicity, we will denote $(Y,M)=(X(r),\mbox{NB}(r))$. From stochastic representation (\ref{lp}), it is easy to see that the joint density and distribution function of $(Y,M)$ are \begin{eqnarray}\label{pdfr} g_{Y,M}(y,n)=\frac{\Gamma(n+r)p^r(1-p)^n}{n!\Gamma(r)\Gamma(\alpha(r+n))}\beta^{\alpha(r+n)}y^{\alpha(r+n)-1}e^{-\beta y} \end{eqnarray} and \begin{eqnarray*} P(Y\leq y, M\leq n)=\frac{p^r}{\Gamma(r)}\sum_{j=0}^n(1-p)^j\frac{\Gamma(j+r)}{j!\Gamma(\alpha(r+j))}\Gamma_{\beta y}(\alpha(r+j)), \end{eqnarray*} for $y>0$ and $n\in\mathbb{N}\cup\{0\}$. 
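Because (\ref{pdfr}) factors as $P(M=n)\,g(y;\alpha(r+n),\beta)$, it can be cross-checked numerically against standard negative binomial and gamma densities. A hedged sketch in Python/SciPy (the parameter values and evaluation point are illustrative assumptions):

```python
import numpy as np
from scipy.stats import gamma, nbinom
from scipy.special import gammaln

# Illustrative (assumed) parameter values.
beta, alpha, p, r = 2.0, 1.5, 0.4, 2.0

def bmixgnb_joint_pdf(y, n):
    """Joint density (pdfr) of (Y, M), computed on the log scale for stability."""
    log_g = (gammaln(n + r) + r * np.log(p) + n * np.log(1 - p)
             - gammaln(n + 1) - gammaln(r) - gammaln(alpha * (r + n))
             + alpha * (r + n) * np.log(beta)
             + (alpha * (r + n) - 1) * np.log(y) - beta * y)
    return np.exp(log_g)

# The density factors as P(M = n) * g(y; alpha*(r+n), beta).
y, n = 1.3, 3
direct = bmixgnb_joint_pdf(y, n)
factored = nbinom.pmf(n, r, p) * gamma.pdf(y, a=alpha * (r + n), scale=1.0 / beta)
print(direct, factored)
```

The two evaluations agree to machine precision, which is exactly the mixture structure used for the marginal density of $Y$ below.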
Setting $\alpha=1$ in (\ref{pdfr}), we obtain the $\mbox{BGNB}$ distribution (bivariate distribution with gamma and negative binomial marginals) as a particular case. This model was introduced and studied by Kozubowski et al. (2008). We have that the marginal distribution of $M$ is negative binomial with probability mass function given in (\ref{nbpf}). The marginal density of $Y$ is given by \begin{eqnarray*}\label{densityy} g_Y(y)=\sum_{n=0}^\infty P(M=n)g(y;\alpha(r+n),\beta), \quad y>0, \end{eqnarray*} where $g(\cdot;\alpha,\beta)$ is the density of a gamma variable as defined in Section 2. Therefore, the above density is an infinite mixture of gamma densities (with negative binomial weights). Since the marginal distributions of $(Y,M)$ are infinite mixture of gamma and negative binomial distributions, we denote $(Y,M)\sim\mbox{BMixGNB}(\beta,\alpha,p,r)$. Some plots of the marginal density of $Y$ are displayed in Figure \ref{marginaldensities2}, for $\beta=1$ and some values of $\alpha$, $p$ and $r$. \begin{figure}[h!] \centering \includegraphics[width=0.33\textwidth]{figura_3.pdf}\includegraphics[width=0.33\textwidth]{figura_4.pdf} \includegraphics[width=0.33\textwidth]{figura_5.pdf}\includegraphics[width=0.33\textwidth]{figura_6.pdf} \caption{Plots of the marginal density of $Y$ for $\beta=1$, $\alpha=0.5,1,2,3,4$, $p=0.2,0.8$ and $r=0.7,2$.} \label{marginaldensities2} \end{figure} The conditional distribution of $Y|M=k$ is gamma with parameters $\alpha(r+k)$ and $\beta$, while the conditional probability mass function of $M|Y=y$ is given by $$P(M=n|Y=y)=\frac{\Gamma(n+r)}{n!\Gamma(\alpha(n+r))}[(1-p)(\beta y)^\alpha]^n\bigg/\sum_{j=0}^\infty\frac{\Gamma(j+r)}{j!\Gamma(\alpha(j+r))}[(1-p)(\beta y)^\alpha]^j,$$ for $n=0,1,\ldots$, which belongs to the one-parameter power series family of distributions if $\alpha$ and $r$ are known. In this case, the parameter is $(1-p)(\beta y)^\alpha$. 
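The conditional probability mass function of $M$ given $Y=y$ above is a normalized power series in $(1-p)(\beta y)^\alpha$, which suggests evaluating it on the log scale and truncating the series. A sketch (parameter values, evaluation point and truncation length are illustrative assumptions):

```python
import numpy as np
from scipy.special import gammaln

# Illustrative (assumed) parameter values.
beta, alpha, p, r = 2.0, 1.5, 0.4, 2.0

def cond_pmf_m_given_y(n, y, n_terms=400):
    """P(M = n | Y = y): ratio of power-series terms in theta = (1-p)*(beta*y)^alpha."""
    log_theta = np.log(1 - p) + alpha * np.log(beta * y)
    j = np.arange(n_terms)
    log_w = gammaln(j + r) - gammaln(j + 1) - gammaln(alpha * (j + r)) + j * log_theta
    w = np.exp(log_w - log_w.max())   # shift before exponentiating, for stability
    return w[n] / w.sum()

pmf = [cond_pmf_m_given_y(n, y=1.3) for n in range(40)]
print(sum(pmf))  # should be numerically close to 1
```

The series terms decay superexponentially here, so a few hundred terms are more than enough for the normalizing constant.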
For positive integers $m\leq n$ and real $y>0$, it follows that $$P(Y\leq y, M\leq m|M\leq n)=\sum_{j=0}^m\frac{\Gamma(j+r)(1-p)^j}{j!\Gamma(\alpha(j+r))}\Gamma_{\beta y}(\alpha(r+j))\bigg/\sum_{j=0}^n\frac{\Gamma(j+r)}{j!}(1-p)^j$$ and for $0<x\leq y$ and positive integer $n$ $$P(Y\leq x, M\leq n|Y\leq y)=\frac{\sum_{j=0}^n\frac{\Gamma(j+r)(1-p)^j}{j!\Gamma(\alpha(j+r))}\Gamma_{\beta x}(\alpha(r+j))}{\sum_{j=0}^\infty\frac{\Gamma(j+r)(1-p)^j}{j!\Gamma(\alpha(j+r))}\Gamma_{\beta y}(\alpha(r+j))}.$$ The moments of a random vector $(Y,M)$ following $\mbox{BMixGNB}(\beta,\alpha,p,r)$ distribution may be obtained by $E(Y^nM^k)=(-i)^{n+k}\partial^{n+k}\Phi^*(t,s)/\partial t^n\partial s^k|_{t,s=0}$, where $\Phi^*(t,s)$ is the characteristic function given in (\ref{lpcf}). It follows that the product moments are given by \begin{eqnarray}\label{momBMixGNB} E(Y^nM^k)=\frac{p^r\Gamma(n)}{\beta^n\Gamma(r)}\sum_{m=0}^\infty\frac{m^k(1-p)^m\Gamma(m+r)}{m!B(\alpha(r+m),n)}. \end{eqnarray} The covariance matrix of $(Y,M)$ is given by $r\Sigma$, where $\Sigma$ is defined in (\ref{cov}). The correlation coefficient is given by $\rho$, which is defined in the Subsection 2.2. Further, an expression for the $n$th marginal moment of $Y$ may be obtained by taking $k=0$ in (\ref{momBMixGNB}). If $\{W(r),\,r>0\}$ is a $\mbox{BMixGNB}(\beta,\alpha,p)$ L\'evy motion, one may check that $\mbox{cov}(W(t),W(s))=\min(t,s)\Sigma$. The $\mbox{BMixGNB}$ law may be represented by a convolution between a bivariate distribution (with gamma and degenerate at 0 marginals) and a compound Poisson distribution. Such a representation is given by \begin{eqnarray*} (Y,M)\stackrel{d}{=}(G,0)+\sum_{i=1}^{Q}(G_i,Z_i), \end{eqnarray*} with all random variables above defined as in the formula (\ref{cp}), but here we define $G\sim\Gamma(\alpha r,\beta)$ and $\lambda=-r\log p$. 
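The series (\ref{momBMixGNB}) converges geometrically, so product moments can be computed by truncation and checked against closed forms such as $E(Y)=\alpha r/(\beta p)$ and $E(YM)=(\alpha/\beta)\{rE(M)+E(M^2)\}$ with $M\sim\mbox{NB}(r,p)$. A sketch (parameter values and truncation length are illustrative assumptions):

```python
import numpy as np
from scipy.special import betaln, gammaln

# Illustrative (assumed) parameter values.
beta, alpha, p, r = 2.0, 1.5, 0.4, 2.0

def product_moment(n, k, m_max=500):
    """E(Y^n M^k) via the truncated series (momBMixGNB); requires n >= 1."""
    m = np.arange(m_max)
    log_c = (r * np.log(p) + gammaln(n) - n * np.log(beta) - gammaln(r)
             + m * np.log(1 - p) + gammaln(m + r) - gammaln(m + 1)
             - betaln(alpha * (r + m), n))
    return float(np.sum(m**k * np.exp(log_c)))

# Closed-form cross-checks with M ~ NB(r, p): E(M) = r(1-p)/p, Var(M) = r(1-p)/p^2.
EM = r * (1 - p) / p
EM2 = r * (1 - p) / p**2 + EM**2
print(product_moment(1, 0), alpha * r / (beta * p))
print(product_moment(1, 1), (alpha / beta) * (r * EM + EM2))
```

Both printed pairs agree to high precision, which is a useful sanity check before using (\ref{momBMixGNB}) for higher moments.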
We end this Subsection by noting that if $\{(Y_i,M_i)\}_{i=1}^n$ are independent random vectors with $(Y_i,M_i)\sim\mbox{BMixGNB}(\beta,\alpha,p,r_i)$, then \begin{eqnarray*} \sum_{i=1}^n (Y_i,M_i)\sim \mbox{BMixGNB}\left(\beta,\alpha,p,\sum_{i=1}^nr_i\right). \end{eqnarray*} One may easily check the above result by using characteristic function (\ref{lpcf}). \subsection{Estimation and inference for the $\mbox{BMixGNB}$ distribution} Suppose $(Y_1,M_1), \ldots, (Y_n,M_n)$ is a random sample from $\mbox{BMixGNB}(\beta,\alpha,p,\tau)$ distribution. Here the parameter vector will be denoted by $\theta^\dag=(\beta,\alpha,p,\tau)^\top$. The log-likelihood function, denoted by $\ell^\dag$, is given by \begin{eqnarray*} \ell^\dag&\propto& n\{\tau\alpha\log\beta-\log\Gamma(\tau)+\tau\log p\}-n\beta\bar{X}_n+n\{\log(1-p)+\alpha\log\beta\}\bar{M}_n\\&+&\sum_{i=1}^n\log\Gamma(M_i+\tau)- \sum_{i=1}^n\log\Gamma(\alpha(M_i+\tau))+\alpha\sum_{i=1}^n(M_i+\tau)\log X_i, \end{eqnarray*} where $\bar{M}_n=\sum_{i=1}^nM_i/n$. The associated score function $U^\dag(\theta^\dag)=(\partial\ell^\dag/\partial\beta,\partial\ell^\dag/\partial\alpha,\partial\ell^\dag/\partial p,\partial\ell^\dag/\partial\tau)$ has its components given by \begin{eqnarray*} \frac{\partial\ell^\dag}{\partial\beta}&=&n\left\{\frac{\alpha}{\beta}(\tau+\bar{M}_n)-\bar{X}_n\right\},\\ \frac{\partial\ell^\dag}{\partial\alpha}&=&n(\tau+\bar{M}_n)\log\beta+\sum_{i=1}^n(\tau+M_i)\{\log X_i-\Psi(\alpha(\tau+M_i))\},\\ \frac{\partial\ell^\dag}{\partial p}&=&-\frac{n\bar{M}_n}{1-p}+\frac{n\tau}{p},\\ \frac{\partial\ell^\dag}{\partial\tau}&=&n\left\{\log(p\beta^\alpha)-\Psi(\tau)\right\}+\sum_{i=1}^n\left\{\alpha[\log X_i-\Psi(\alpha(\tau+M_i))]+\Psi(\tau+M_i)\right\}. 
\end{eqnarray*} Hence, the maximum likelihood estimators of $\beta$ and $p$ are respectively given by \begin{eqnarray}\label{mles2} \widehat\beta=\widehat\alpha\frac{\widehat\tau+\bar{M}_n}{\bar{X}_n}\quad \mbox{and} \quad \widehat{p}=\frac{\widehat\tau}{\widehat\tau+\bar{M}_n}, \end{eqnarray} while the maximum likelihood estimators of $\alpha$ and $\tau$ are found by solving the nonlinear system of equations \begin{eqnarray*} n(\widehat\tau+\bar{M}_n)\log\left(\widehat\alpha\frac{\widehat\tau+\bar{M}_n}{\bar{X}_n}\right)+\sum_{i=1}^n(\widehat\tau+M_i)\{\log X_i-\Psi(\widehat\alpha(\widehat\tau+M_i))\}=0 \end{eqnarray*} and \begin{eqnarray}\label{mletau} \widehat\alpha\left\{n\log\left(\widehat\alpha\frac{\widehat\tau+\bar{M}_n}{\bar{X}_n}\right)+\sum_{i=1}^n\left\{\log X_i-\Psi(\widehat\alpha(\widehat\tau+M_i))\right\}\right\}=\nonumber\\n\left\{\Psi(\widehat\tau)-\log\left(\frac{\widehat\tau}{\widehat\tau+\bar{M}_n}\right)\right\}-\sum_{i=1}^n\Psi(\widehat\tau+M_i). \end{eqnarray} After some algebra, we obtain that Fisher's information matrix is \begin{eqnarray*} J^\dag(\theta^\dag)= \left(\begin{array}{llll} \kappa^\dag_{\beta\beta} & \kappa^\dag_{\beta\alpha} & 0 & \kappa^\dag_{\beta\tau}\\ \kappa^\dag_{\beta\alpha} & \kappa^\dag_{\alpha\alpha} & 0 & \kappa^\dag_{\alpha\tau}\\ 0 & 0 & \kappa^\dag_{pp} & \kappa^\dag_{p\tau} \\ \kappa^\dag_{\beta\tau} & \kappa^\dag_{\alpha\tau} & \kappa^\dag_{p\tau}& \kappa^\dag_{\tau\tau}\\ \end{array}\right), \end{eqnarray*} with \begin{eqnarray*} &&\kappa^\dag_{\beta\beta}=\frac{\alpha\tau}{\beta^2p},\quad \kappa^\dag_{\beta\alpha}=-\frac{\tau}{p\beta},\quad\kappa^\dag_{\beta\tau}=-\frac{\alpha}{\beta},\\ &&\kappa^\dag_{\alpha\alpha}=\frac{p^\tau}{\Gamma(\tau)}\sum_{j=0}^\infty (\tau+j)^2(1-p)^j\Psi'(\alpha(\tau+j))\frac{\Gamma(\tau+j)}{j!}, \quad \kappa^\dag_{pp}=\frac{\tau}{p^2(1-p)},\\ &&\kappa^\dag_{\alpha\tau}=\frac{\alpha p^\tau}{\Gamma(\tau)}\sum_{j=0}^\infty 
(1-p)^j(\tau+j)\Psi'(\alpha(\tau+j))\frac{\Gamma(\tau+j)}{j!}, \quad\kappa^\dag_{p\tau}=-\frac{1}{p},\\ &&\kappa^\dag_{\tau\tau}=\Psi'(\tau)+\frac{p^\tau}{\Gamma(\tau)}\sum_{j=0}^\infty (1-p)^j\{\alpha^2\Psi'(\alpha(\tau+j))-\Psi'(\tau+j)\}\frac{\Gamma(\tau+j)}{j!}. \end{eqnarray*} So we obtain that the asymptotic distribution of $\sqrt{n}(\widehat\theta^\dag-\theta^\dag)$ is quadrivariate normal with null mean and covariance matrix $J^{\dag\,-1}(\theta^\dag)$, where $J^{\dag\,-1}(\cdot)$ is the inverse of the information matrix $J^\dag(\cdot)$ defined above. The likelihood ratio, Wald and score tests may be performed in order to test the hypotheses $H_0 \mbox{:} \,\,\alpha=1$ versus $H_1 \mbox{:} \,\,\alpha\neq1$, that is, to compare $\mbox{BGNB}$ and $\mbox{BMixGNB}$ fits. Further, we may test the $\mbox{BMixGNB}$ model versus the BGG or BEG models, which corresponds to the null hypotheses $H_0 \mbox{:} \,\,\tau=1$ and $H_0 \mbox{:} \,\,\alpha=\tau=1$, respectively. As in Subsection 4.2, we propose the reparametrization $\mu=\alpha/\beta$. We now denote the parameter vector by $\theta^{\star}=(\mu,\alpha,p,\tau)^\top$. With this, one may check that the MLEs of $p$ and $\mu$ are given by (\ref{mles2}) and $\widehat\mu=\bar{X}_n/(\widehat\tau+\bar{M}_n)$. The MLEs of $\tau$ and $\alpha$ are obtained by solving the nonlinear system of equations (\ref{mletau}) and \begin{eqnarray*} n(\widehat\tau+\bar{M}_n)\left\{\log\left(\widehat\alpha\frac{\widehat\tau+\bar{M}_n}{\bar{X}_n}\right)-\frac{\widehat\tau+\bar{M}_n}{\bar{X}_n}\right\}+\sum_{i=1}^n(\widehat\tau+M_i)\{\log X_i-\Psi(\widehat\alpha(\widehat\tau+M_i))\}=0. 
\end{eqnarray*} Under this proposed reparametrization, the Fisher's information matrix becomes \begin{eqnarray*} J^\star(\theta^{\star} )= \left(\begin{array}{llll} \kappa^\star_{\mu\mu} & 0 & 0 & \kappa^\star_{\mu\tau}\\ 0 & \kappa^\star_{\alpha\alpha} & 0 & \kappa^\star_{\alpha\tau}\\ 0 & 0 & \kappa^\star_{pp} & \kappa^\star_{p\tau} \\ \kappa^\star_{\mu\tau} & \kappa^\star_{\alpha\tau} & \kappa^\star_{p\tau}& \kappa^\star_{\tau\tau}\\ \end{array}\right), \end{eqnarray*} where its elements are given by \begin{eqnarray*} &&\kappa^\star_{\mu\mu}=\frac{\alpha\tau}{\mu^2p},\quad \kappa^\star_{\mu\tau}=\frac{\alpha}{\mu}, \quad\kappa^\star_{\alpha\alpha}=\frac{p^\tau}{\Gamma(\tau)}\sum_{j=0}^\infty (1-p)^j(\tau+j)^2\Psi'(\alpha(\tau+j))\frac{\Gamma(\tau+j)}{j!}-\frac{\tau}{\alpha p},\\ &&\kappa^\star_{\alpha\tau}=\frac{\alpha p^\tau}{\Gamma(\tau)}\sum_{j=0}^\infty (1-p)^j(\tau+j)\Psi'(\alpha(\tau+j))\frac{\Gamma(\tau+j)}{j!}-1, \quad \kappa^\star_{pp}=\kappa^\dag_{pp},\\ && \kappa^\star_{p\tau}=\kappa^\dag_{p\tau}\quad\mbox{and}\quad \kappa^\star_{\tau\tau}=\kappa^\dag_{\tau\tau}. \end{eqnarray*} We have that $\kappa^\star_{\mu\alpha}=0$, that is, $\mu$ and $\alpha$ are orthogonal parameters in contrast with the parameters $\beta$ and $\alpha$ considered previously, where $\kappa^\dag_{\beta\alpha}\neq0$. Further, we have that $\sqrt{n}(\widehat\theta^\star-\theta^\star)\rightarrow N_4(0,J^{\star\,-1}(\theta^\star))$ as $n\rightarrow\infty$, where the covariance matrix $J^{\star\,-1}(\theta^\star)$ is the inverse of the information matrix $J^\star(\theta^\star)$. \section{Concluding remarks} We introduced and studied the bivariate gamma-geometric (BGG) law, which extends the bivariate exponential-geometric (BEG) law proposed by Kozubowski and Panorska (2005). The marginals of our model are infinite mixture of gamma and geometric distributions. 
Several results and properties were obtained, such as joint density and survival functions, conditional distributions, moment generating and characteristic functions, product moments, covariance matrix, geometric stability and stochastic representations. We discussed estimation by maximum likelihood and inference for large samples. Further, a reparametrization was suggested in order to obtain orthogonality of the parameters. An application to exchange rates between Brazilian real and U.K. pounds, quoted in Brazilian real, was presented. There, our aim was to model jointly the magnitude and duration of the consecutive positive log-returns. In that application, we showed that the BGG model and its marginal and conditional distributions suitably fitted the real data set considered. Further, we performed the likelihood ratio and Wald tests and both rejected (at the 5\% significance level) the hypothesis that the data come from the BEG distribution in favor of the BGG distribution. We showed that our bivariate law is infinitely divisible and, therefore, induces a L\'evy process, named $\mbox{BMixGNB}$ L\'evy motion. We also derived some properties and results of this process, including a study of its distribution at fixed time. Our proposed L\'evy motion has infinite mixture of gamma and negative binomial marginal processes and generalizes the one proposed by Kozubowski et al. (2008), whose marginals are gamma and negative binomial processes. Estimation and inference for the parameters of the distribution of our process at fixed time were also discussed, including a reparametrization to obtain a partial orthogonality of the parameters. \section*{Acknowledgements} \noindent I thank the anonymous referee for their careful reading, comments and suggestions. I also gratefully acknowledge financial support from {\it Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\'ogico} (CNPq-Brazil).
\section{Problem statement} Given alphabets $ {\cal X} $ and $ {\cal Z} $, an $n$-\emph{block denoiser} is a mapping $\re:{\cal Z}^n \rightarrow {\cal X}^n$. For any $z^n \in {\cal Z}^n$, let $\re(z^n)[i]$ denote the $i$-th term of the sequence $\re(z^n)$. Fixing a per-symbol loss function $ \Lambda(\cdot,\cdot) $, for a noiseless input sequence $x^n$ and the observed output sequence $z^n$, the \emph{normalized cumulative loss} $\loss{\re}(x^n, z^n)$ of the denoiser $\re$ is \[ \loss{\re}(x^n,z^n) = \frac{1}{n} \sum_{i=1}^n \Lambda\Paren{x_i, \re(z^n)[i]}. \] Given a discrete memoryless channel (DMC) with transition probability matrix $ \Pi $ between $ {\cal X}^n $ and $ {\cal Z}^n $ (i.e., the setting of DUDE~\cite{Wei+05}) and two sequences of denoisers $ \hat{X}_1: {\cal Z}^n \rightarrow {\cal X}^n $ and $ \hat{X}_2: {\cal Z}^n \rightarrow {\cal X}^n $, we ask whether there always exists a sequence of denoisers $ \hat{X}_U $ whose expected losses $ \loss{\hat{X}_U} $ satisfy \begin{multline} \limsup_{n\rightarrow \infty} \max_{x^n} E(\loss{\hat{X}_U}(x^n,Z^n)) \\ - \min\{E(\loss{\hat{X}_1}(x^n,Z^n)), E(\loss{\hat{X}_2}(x^n,Z^n))\} = 0. \label{eq:univdef} \end{multline} Such a denoiser $ \hat{X}_U $ would then perform, in an expected sense and asymptotically, as well as the best of $ \hat{X}_1 $ and $ \hat{X}_2 $ for any channel input sequence(s). The analogous problem in the settings of prediction~\cite{Ces+97}, noisy prediction~\cite{WeiMer01}, and filtering (i.e., causal denoising)~\cite{Wei+07} has been solved. DUDE~\cite{Wei+05} is a solution when the two denoisers are sliding window denoisers (each denoised symbol is a function of a window of noisy symbols centered at the corresponding noisy symbol). We are not aware of a solution to the problem at the stated level of generality. 
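The definitions above translate directly into code. A minimal sketch of the normalized cumulative loss under Hamming loss (all names are illustrative; the identity denoiser is just a toy example, not a construction from this paper):

```python
def hamming_loss(x, x_hat):
    """Per-symbol loss: 1 on a symbol error, 0 otherwise."""
    return int(x != x_hat)

def normalized_cumulative_loss(x, z, denoiser, loss=hamming_loss):
    """(1/n) * sum_i loss(x_i, denoiser(z^n)[i]), as in the problem statement."""
    x_hat = denoiser(z)
    assert len(x_hat) == len(x)
    return sum(loss(xi, xh) for xi, xh in zip(x, x_hat)) / len(x)

# A trivial "say what you see" denoiser.
say_what_you_see = lambda z: z

x = [0, 0, 1, 1, 0, 1]   # clean sequence
z = [0, 1, 1, 1, 0, 0]   # noisy sequence with two channel errors
print(normalized_cumulative_loss(x, z, say_what_you_see))  # 2/6
```

For the identity denoiser, the normalized cumulative loss under Hamming loss is simply the empirical symbol-error rate of the channel output.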
In the sequel, we analyze the successes and limitations of the loss estimator approach developed in~\cite{Wei+07} for filtering and extended to the denoising setting in~\cite{2dcxt}, in the context of the above problem. We show that while a direct application of this approach fails in general, a certain randomized version of the approach does, in fact, solve the above problem for the case of the binary symmetric channel (BSC) (though, for now, not in a computationally practical way). The approach should be applicable to other DMCs, as will be addressed in future work. \section{Implications for error correction} In a channel coding setting, we can set the two target sequences of denoisers to the decoders of {\em any} two sequences of channel codes with vanishing maximal error probability. A denoiser with the universality property~(\ref{eq:univdef}) acts like a super-decoder that when applied to the decoding of the union of the two sequences of codebooks achieves asymptotically vanishing {\em bit-error rate} with respect to the transmitted codeword. It would be interesting to know if such a super-decoder can be constructed without relying on randomization, as we do herein. \section{Loss estimator based approach} \label{sec:lossestapproach} A loss estimator for a denoiser $\re$ is a mapping $ \losse{\re}:{\cal Z}^n \rightarrow \RR $ that, given a noisy sequence $z^n$, estimates the loss $\loss{\re}(x^n,z^n)$ incurred by $\re$ to be $\losse{\re}(z^n)$. Given a loss estimator, let $ \hat{j}^*(z^n) $ denote the index $ j \in \{1,2\} $ of the denoiser $ \hat{X}_j $ attaining the smallest estimated loss. That is $ \hat{j}^*(z^n) = \arg \min_{j \in \{1,2\}} \losse{\hat{X}_j}(z^n). $ Consider the loss estimator based denoiser $ \hat{X}_U^n(z^n) = \hat{X}_{\hat{j}^*(z^n)}(z^n). 
$ \begin{Lemma} If for all $ \epsilon > 0 $, $ \losse{\hat{X}_j} $ satisfies \begin{equation} \limsup_{n\rightarrow \infty} \max_{x^n}\max_{j\in\{1,2\}} Pr(|\losse{\hat{X}_j}(Z^n) - \loss{\hat{X}_j}(x^n,Z^n)| \geq \epsilon) = 0 \label{eq:conc1} \end{equation} then $ \hat{X}_U $ satisfies~(\ref{eq:univdef}). \label{lem:conc=univ} \end{Lemma} The proof of the lemma is similar to that of Lemma~\ref{lem:conc=univrand} below, so we omit it. \if{false} \noindent{\bf Proof.} We'll give the proof for the first condition. The second is similar. Let $ j^* $ denote \[ j^*(x^n,z^n)= \arg \min_{j \in \{1,2\}} \loss{\hat{X}_j}(x^n,z^n). \] Suppose $ x^n $ and $ z^n $ are such that $ |\losse{\hat{X}_j}(z^n) - \loss{\hat{X}_j}(x^n,z^n)| \leq \epsilon $ for $ j \in \{1,2\} $. We then have \begin{align*} &\loss{\hat{X}_{\hat{j}^*}}(x^n,z^n) - \loss{\hat{X}_{j^*}}(x^n,z^n) \\ &= \loss{\hat{X}_{\hat{j}^*}}(x^n,z^n) - \losse{\hat{X}_{\hat{j}^*}}(z^n) + \losse{\hat{X}_{\hat{j}^*}}(z^n) - \loss{\hat{X}_{j^*}}(x^n,z^n) \\ &\leq \loss{\hat{X}_{\hat{j}^*}}(x^n,z^n) - \losse{\hat{X}_{\hat{j}^*}}(z^n) + \losse{\hat{X}_{j^*}}(z^n) - \loss{\hat{X}_{j^*}}(x^n,z^n) \\ & \leq 2\epsilon, \end{align*} implying that \begin{multline} Pr( \loss{\hat{X}_{\hat{j}^*}}(x^n,Z^n) - \loss{\hat{X}_{j^*}}(x^n,Z^n) \geq 2\epsilon) \\ \leq \sum_{j=1}^2 Pr(|\losse{\hat{X}_j}(Z^n) - \loss{\hat{X}_j}(x^n,Z^n)| \geq \epsilon). 
\label{eq:probineq} \end{multline} Noting that $ \hat{X}_{\hat{j}^*} = \hat{X}_U $ it follows from this that for all $ \epsilon > 0 $, \begin{align} \max_{x^n} & E(\loss{\hat{X}_U}(x^n,Z^n)) - \min\{E(\loss{\hat{X}_1}(x^n,Z^n)), E(\loss{\hat{X}_2}(x^n,Z^n))\} \nonumber \\ & \leq \max_{x^n} E(\loss{\hat{X}_U}(x^n,Z^n) - \loss{\hat{X}_{j^*}}(x^n,Z^n)) \nonumber \\ & \leq 2\epsilon + \Lambda_{\max}\max_{x^n} Pr( \loss{\hat{X}_U}(x^n,Z^n) - \loss{\hat{X}_{j^*}}(x^n,Z^n) \geq 2\epsilon) \label{eq:foralleps} \end{align} The lemma now follows from~(\ref{eq:probineq}), (\ref{eq:conc1}), and the fact that~(\ref{eq:foralleps}) holds for all $ \epsilon > 0 $. $ \hspace*{\fill}~\IEEEQED\par $ \fi \label{sec:losse} Lemma~\ref{lem:conc=univ} suggests that one solution to the problem of asymptotically tracking the best of two denoisers is to estimate the loss of each denoiser from the noisy sequence and denoise using the one minimizing the estimated loss. This would work provided the loss estimator could be shown to satisfy~(\ref{eq:conc1}). The following is one potential estimator, first proposed in~\cite{2dcxt}. The estimate of the loss incurred by {\em any} denoiser $\re$ proposed in~\cite{2dcxt} is given by \begin{equation} \label{eq:loss_estimate} \losse{\re}(z^n) = \frac 1n \sum_{i=1}^n \sum_{x \in {\cal X}} h(x,z_i) \sum_{z \in {\cal Z}} \Lambda(x, \hat{x}_i(z)) \Pi(x,z) \end{equation} where we use $\hat{x}_i(z)$ to abbreviate $\hat{X}(z_1^{i-1}\cdot z \cdot z_{i+1}^n)[i]$ and $ h(\cdot,\cdot) $ satisfies $ \sum_{z}\Pi(x,z)h(x',z) = 1(x = x'). $ \begin{Example} For a DMC with invertible $ \Pi $, $ h(x,z) = \Pi^{-T}(x, z), $ uniquely. \end{Example} \begin{Example} Specializing the previous example to a BSC with crossover probability $ \delta $, \[ h(x,z) = \left\{ \begin{array}{ll} \frac{\overline{\delta}}{1-2\delta} & (x,z) \in \{(0,0),(1,1)\} \\ \frac{-\delta}{1-2\delta} & (x,z) \in \{(0,1),(1,0)\} \end{array} \right. \] where $ \overline{x} $ defaults to $ 1-x $. 
\end{Example} \begin{Example} For the binary erasure channel, $ h $ with the above property is not unique. Consider a symmetric binary erasure channel with erasure probability $ 1/2 $. One example of a valid $ h $ is: \begin{equation} h(x,z) = 2\cdot 1(x = z). \label{eq:erasureh} \end{equation} In this case, the estimator~(\ref{eq:loss_estimate}) assumes an especially intuitive form: for each unerased symbol, determine what the denoiser would have denoised that symbol to if it had been erased, and average the resulting losses over all symbols. Formally, \begin{equation} \label{eq:erasure_loss_estimate} \losse{\re}(z^n) = \frac{1}{n} \sum_{i:z_i \neq e} \Lambda(z_i, \hat{x}_i(e)) \end{equation} and note that $ z_i = x_i $ for $ i: z_i \neq e $. \end{Example} \noindent{\bf Conditional unbiasedness.} The loss estimator~(\ref{eq:loss_estimate}) has been shown to be conditionally unbiased in the following sense. Let \[ \tilde{\Lambda}_{i, \re}\Paren{z^n} \stackrel{\triangle}{=} \sum_{x \in {\cal X}} h(x, z_i) \sum_{z \in {\cal Z}} \Lambda(x, \hat{x}_i(z)) \Pi(x,z) \] denote the estimate of the loss incurred on the $i$-th symbol. Then $ \losse{\re}(z^n) = \frac 1n \sum_{i=1}^n \tilde{\Lambda}_{i,\re}\Paren{z^n}. $ \begin{Lemma} \cite{2dcxt,MooWei09,Ord+13} \label{unbiased} For all $x^n$, all denoisers $\re$, and all $ i $, $1 \le i \le n$, $z_1^{i-1}$, $z_{i+1}^n$ \begin{multline} \label{eq:conditional_unbiased} E\Brack{\tilde{\Lambda}_{i,\re}\Paren{Z^n} \left| Z_1^{i-1} = z_1^{i-1}, Z_{i+1}^n = z_{i+1}^n \right.} \\ = E\Brack{\Lambda\Paren{x_i,\re(Z^n)[i]}\left| Z_1^{i-1} = z_1^{i-1}, Z_{i+1}^n = z_{i+1}^n \right.} \end{multline} and therefore $ E\Brack{ \losse{\re}(Z^n)} = E\Brack{\loss{\re}(x^n,Z^n)}. $ \end{Lemma} \section{Success stories} In this section, we review some special cases for which the loss estimator (\ref{eq:loss_estimate}) exhibits the concentration property (\ref{eq:conc1}) and hence for which the loss estimation paradigm solves the universal denoising problem. 
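As a concrete illustration of the erasure-channel estimator (\ref{eq:erasure_loss_estimate}) above: re-run the denoiser once per unerased position with that position erased, and charge the loss against the (known-clean) observed symbol. The majority-vote denoiser below is an illustrative assumption for the sketch, not a construction from the paper.

```python
E = "e"  # erasure symbol (illustrative encoding)

def hamming(a, b):
    return int(a != b)

def estimated_loss(z, denoiser, loss=hamming):
    """Estimator (erasure_loss_estimate): (1/n) * sum over unerased i of
    loss(z_i, denoiser(z with position i erased)[i]); uses z_i = x_i off erasures."""
    n = len(z)
    total = 0.0
    for i, zi in enumerate(z):
        if zi == E:
            continue
        z_erased = z[:i] + [E] + z[i + 1:]
        total += loss(zi, denoiser(z_erased)[i])
    return total / n

# Toy denoiser: fill every erasure with the majority unerased bit.
def majority_fill(z):
    ones = sum(1 for s in z if s == 1)
    zeros = sum(1 for s in z if s == 0)
    fill = 1 if ones > zeros else 0
    return [fill if s == E else s for s in z]

z = [0, E, 0, 1, E, 0]
print(estimated_loss(z, majority_fill))
```

Note the estimator never touches the clean sequence $x^n$; it relies only on the fact that unerased noisy symbols are clean.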
A key tool is the martingale difference method for obtaining concentration inequalities. Briefly, in our context, consider a function $ f:{\cal Z}^n \rightarrow \RR $ with $ E(f(Z^n)) = 0 $ and let \begin{equation} M_i = E(f(Z^n)|Z^{i}) \label{eq:doobmartingale} \end{equation} denote the Doob martingale associated with $ f $ and $ Z^n $. Let $ D_i = M_i - M_{i-1} $ and suppose it satisfies $ |D_i| \leq c_i $ with probability one. Then Azuma's inequality~\cite{mcdiarmid} states that for any $ \epsilon > 0 $, \[ Pr(f(Z^n) \geq n\epsilon) \leq e^{\frac{-n^2\epsilon^2}{2\sum_{i}c_i^2}} \] and \[ Pr(f(Z^n) \leq -n\epsilon) \leq e^{\frac{-n^2\epsilon^2}{2\sum_{i}c_i^2}} \] from which it follows that \[ Pr(|f(Z^n)| \geq n\epsilon) \leq 2e^{\frac{-n^2\epsilon^2}{2\sum_{i}c_i^2}}. \] In our case, we will take $ f $ to be \[ f(z^n) = \sum_{i} \left[\tilde{\Lambda}_{i, \re}\Paren{z^n} - \Lambda(x_i,\hat{X}(z^n)[i])\right]. \] We have $ E(f) = 0 $ by the unbiasedness of the loss estimator, and, since $ f $ is simply the difference between the unnormalized estimated and true losses, concentration inequalities for $ f $ are precisely what we seek. A special case of the above concentration inequalities is McDiarmid's inequality~\cite{mcdiarmid}, which applies to the case of $ Z_i $ being independent and $ f $ satisfying \[ |f(z_1^{i-1},x,z_{i+1}^n)-f(z_1^{i-1},y,z_{i+1}^n)| \leq c_i \] for all $ i $, $ x $, and $ y $. This condition can be shown to imply the above bound on $ D_i $, thereby yielding the above concentration inequalities. Note that different concentration inequalities can be obtained by conditioning $ f $ on $ Z_i $ in a different order than in (\ref{eq:doobmartingale}), or even on increasingly refined functions of $ Z^n $. The best bound is obtained when the resulting martingale differences are the ``smallest''. Finally, notice that the concentration bound decays to zero even if the $ c_i $ are as large as $ o(\sqrt{n}) $. 
If even a single $ c_i = O(n) $, then no concentration is implied. \begin{Example} {\em Causal denoisers {\rm \cite{Wei+07}}.} In this case, $ \hat{X}(z^n)[i] $ is a function of only $ z_1,\ldots,z_i $. It follows that $ \Delta_i(z^n) = \tilde{\Lambda}_{i, \re}\Paren{z^n} - \Lambda(x_i,\hat{X}(z^n)[i]) $ is also causal, further implying, together with the conditional unbiasedness, that $ D_i = \Delta_i(Z^n) $ in the above martingale difference approach. As this is clearly bounded by $ c \Lambda_{\max} $, we have exponentially decaying concentration by the above inequality. \end{Example} \begin{Example}{\em Bounded (or slowly growing) lookahead denoisers.} This is similar to the previous case, except that now $ D_i $ includes the conditional expectations of a bounded number of additional terms. The boundedness follows from the conditional unbiasedness and the bounded lookahead. The $ D_i $ are thus again bounded and exponential concentration results. \end{Example} \begin{Example}{\em Each noisy sample affects only a few denoised values.} If in a non-causal denoiser, the number of denoised values affected by each noisy sample is $ o(\sqrt{n}) $, then concentration follows by McDiarmid's inequality above. \end{Example} \begin{Example}{\em Each denoised value depends only on a few noisy samples.} \label{ex:sqrtnsamples} Assume that for all $ i $, $ \hat{X}(z^n)[i] $ depends on only $ c_n = o(\sqrt{n}) $ of the $ z_j $. For each $ i $ let $ V_i $ denote the set of $ j $'s, such that $ \hat{X}(z^n)[i] $ depends on $ z_j $, and for each $ j $ let $ S_j $ denote the set of $ i $'s, such that $ \hat{X}(z^n)[i] $ depends on $ z_j $. Let $ a_n $ satisfy $ a_n = o(\sqrt{n}) $ and $ c_n = o(a_n) $. We then have \begin{align*} a_n|\{j:|S_j| > a_n\}| & \leq \sum_j |S_j| \\ & = \sum_i |V_i| \\ & < nc_n \end{align*} so that \begin{equation} |\{j:|S_j| > a_n \}| \leq \frac{c_n}{a_n} n. 
\label{eq:Jbnd} \end{equation} Now note that \[ Pr(|f(Z^n)| \geq n\epsilon) = E(Pr(|f(Z^n)| \geq n\epsilon | \{Z_j:|S_j| > a_n\})) \] so it suffices to show that the conditional deviation probabilities $ Pr(|f(Z^n)| \geq n\epsilon | \{Z_j:|S_j| > a_n\}) $ vanish with $ n $. Let $ J = \{j:|S_j| > a_n\} $. The idea is to note that \[ f(Z^n) = \sum_{j \in J}\Delta_j + \sum_{j \notin J} \Delta_j \] and therefore that \begin{align*} Pr(|f(Z^n)| & \geq n\epsilon | \{Z_j:|S_j| > a_n\}) \\ & \leq Pr(\sum_{j \notin J}\Delta_j \geq n\epsilon-c|J| | \{Z_j:|S_j| > a_n\}) \\ & \quad + Pr(\sum_{j \notin J}\Delta_j \leq -n\epsilon+c|J| | \{Z_j:|S_j| > a_n\}). \end{align*} We can then apply McDiarmid's inequality conditionally to bound each of these conditional probabilities, since the $ Z_j $ are independent, since, by design, each $ Z_j $ with $ j \notin J $ affects at most $ a_n = o(\sqrt{n})$ of the $ \Delta_j $, and since the conditional expectation of $ \sum_{j\notin J}\Delta_j $ is 0 by the conditional unbiasedness of the loss estimator. The overall concentration follows from the fact that, by~(\ref{eq:Jbnd}), $ |J| = o(n) $. \end{Example} The following proposition improves on this last example in terms of expanding the number of noisy variables each denoising function can depend on, but at the expense of non-exponential concentration. \begin{Proposition} Suppose for each $ i $, $ \hat{X}(z^n)[i] $ is a function of only (but any) $ o(n) $ of the $ z^n $. Then \begin{equation} \max_{x^n} E([\losse{\hat{X}}(Z^n) - \loss{\hat{X}}(x^n,Z^n)]^2) = o(1) \end{equation} where the expectation is with respect to the noise and the maximum is over all clean sequences $ x^n $. \label{prop:functionoffew} \end{Proposition} \begin{Remark} Note that, via an application of Chebyshev's inequality, this proposition implies~(\ref{eq:conc1}). 
\end{Remark} \noindent{\bf Proof:} Let $ \Delta_i = \Delta_i(z^n) = \tilde{\Lambda}_{i, \re}\Paren{z^n} - \Lambda(x_i,\hat{X}(z^n)[i]) $ so that \begin{equation} \losse{\hat{X}}(z^n) - \loss{\hat{X}}(x^n,z^n) = \frac{1}{n}\sum_{i=1}^n \Delta_i(z^n). \label{eq:deltaequiv} \end{equation} Let $ T_i $ denote the set of indices $ j $ such that $ \hat{X}(z^n)[i] $ depends on $ z_j $. It follows that $ \Delta_i $ is a function of $ z_j $ with $ j \in T'_i = T_i \cup \{i\} $ only. We then have that for any $ i $ and $ j \notin T'_i $, \begin{align} E(\Delta_i(Z^n)\Delta_j(Z^n)) &= E(E(\Delta_i(Z^n)\Delta_j(Z^n)|Z^{j-1},Z_{j+1}^n)) \nonumber \\ &= E(\Delta_i(Z^n)E(\Delta_j(Z^n)|Z^{j-1},Z_{j+1}^n)) \label{eq:zerocorra} \\ &= 0 \label{eq:zerocorr} \end{align} where~(\ref{eq:zerocorra}) follows from the fact that $ \Delta_i(Z^n) $ is completely determined by $ Z^{j-1},Z_{j+1}^n $, since $ j \notin T'_i $, and~(\ref{eq:zerocorr}) follows from the conditional unbiasedness~(\ref{eq:conditional_unbiased}). We then have \begin{align} E\big(\big(\sum_{i=1}^n \Delta_i\big)^2\big) & = \sum_{i=1}^n \sum_{j=1}^n E(\Delta_i\Delta_j) \nonumber \\ & = \sum_{i=1}^n \sum_{j \in T'_i} E(\Delta_i\Delta_j) \label{eq:zeroterms}\\ & = O(n\max_i|T'_i|) = o(n^2), \label{eq:onsquared} \end{align} where~(\ref{eq:zeroterms}) follows from~(\ref{eq:zerocorr}) and~(\ref{eq:onsquared}) follows from the assumption of the proposition. The proposition then follows after normalizing both sides by $ n^2 $. $\hspace*{\fill}~\IEEEQED\par$ \section{Problematic cases} The following are some problematic cases for the above loss estimator based approach.
{\em Binary erasure channel with erasure probability 1/2.} Consider the loss estimator based scheme with $ h(x,z) $ as given by~(\ref{eq:erasureh}) applied to tracking the two denoisers \begin{align} \hat{X}_1(z^n)[i] & = \sum_{j=1}^n 1(z_j = 0) \mod 2 \nonumber \\ \hat{X}_2(z^n)[i] & = 1+\sum_{j=1}^n 1(z_j = 0) \mod 2 \label{eq:paritydenoiser} \end{align} for each $ i $ such that $ z_i = e $, under the Hamming loss. Thus, denoiser 1 denoises to all 0's if the number of 0's in $ z^n $ is even and to all 1's, otherwise, and denoiser 2 does precisely the opposite. Suppose the input sequence $ x^n $ is the all zero sequence. In this case (actually all cases), the expected (unnormalized) loss of each denoiser is $ n/4 $. It turns out, however, that the loss estimator based denoiser always makes the worst possible choice. Suppose $ z^n $ has an even number of $ 0 $'s. Denoiser 1 in this case achieves $0$ loss, while denoiser 2 achieves a loss of $ N_e $ (denoting the number of erasures). Following~(\ref{eq:erasure_loss_estimate}), the estimated unnormalized loss of denoiser 1, on the other hand, is $ N_0 $ and that of denoiser 2 is $ 0 $. The loss estimator based denoiser will thus elect to follow denoiser 2, incurring a loss of $ N_e $. The loss estimator goes similarly astray for $ z^n $ with an odd number of $ 0 $'s, and the average denoiser loss is thus $ n/2 $, failing to track the $ n/4 $ average performance. {\em Binary symmetric channel.} It turns out that the parity-based example above fails to break the loss estimator based denoiser for the BSC and Hamming loss, and a more complicated example is required.
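The erasure-channel failure just described is simple enough to confirm by simulation. The following sketch (parameters chosen arbitrarily) passes the all-zero clean sequence through a BEC with erasure probability $1/2$; the per-realization estimated losses are taken from the discussion above rather than recomputed from~(\ref{eq:erasureh}):

```python
import random

random.seed(1)
n, trials = 1000, 400
avg1 = avg2 = avg_sel = 0.0
for _ in range(trials):
    # All-zero clean sequence through a BEC(1/2): each symbol erased w.p. 1/2.
    z = ["e" if random.random() < 0.5 else 0 for _ in range(n)]
    n_e = z.count("e")
    even = (n - n_e) % 2 == 0           # parity of the number of 0's in z^n
    # Denoiser 1 fills erasures with the parity of the 0-count, denoiser 2 with
    # its complement; unerased positions already equal the clean symbol 0.
    loss1 = 0 if even else n_e          # unnormalized Hamming losses
    loss2 = n_e if even else 0
    # Estimated losses are (N_0, 0) for even parity and (0, N_0) for odd, so
    # the estimator-based scheme always follows the wrong denoiser.
    sel_loss = loss2 if even else loss1
    avg1 += loss1 / trials
    avg2 += loss2 / trials
    avg_sel += sel_loss / trials
print(avg1, avg2, avg_sel)   # roughly n/4, n/4, n/2
```

The averages track the analysis: each individual denoiser incurs about $n/4$, while the estimator-selected denoiser incurs about $n/2$.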
For the BSC with crossover probability $ \delta $, the loss estimate of denoiser $ \hat{X} $ is \begin{align} &n \losse{\re}(z^n) \nonumber \\ & {=} \sum_{i:z_i = 0} \Big[ \frac{\overline{\delta}}{1{-}2\delta}(\delta \Lambda(0,\hat{X}(z^n{\oplus} {\mathbf e}_i)[i]){+}\overline{\delta}\Lambda(0,\hat{X}(z^n)[i])) \nonumber \\ & \; \quad {-}\frac{\delta}{1{-}2\delta} (\delta \Lambda(1,\hat{X}(z^n)[i]){+}\overline{\delta}\Lambda(1,\hat{X}(z^n{\oplus} {\mathbf e}_i)[i])) \Big] \nonumber \\ & \; {+} \sum_{i:z_i = 1} \Big[ \frac{\overline{\delta}}{1{-}2\delta}(\delta \Lambda(1,\hat{X}(z^n{\oplus} {\mathbf e}_i)[i]){+}\overline{\delta}\Lambda(1,\hat{X}(z^n)[i])) \nonumber \\ & \; \quad {-}\frac{\delta}{1{-}2\delta} (\delta \Lambda(0,\hat{X}(z^n)[i]){+}\overline{\delta}\Lambda(0,\hat{X}(z^n{\oplus} {\mathbf e}_i)[i])) \Big], \label{eq:bsclossest} \end{align} where $ {\mathbf e}_i $ denotes the ``indicator'' sequence, with $ {\mathbf e}_i[j] = 0 $ if $ j \neq i $ and $ {\mathbf e}_i[i] = 1 $ and $ \oplus $ denotes componentwise modulo two addition. We can express~(\ref{eq:bsclossest}) in terms of the joint type of the three sequences $ z^n, \hat{X}(z^n), $ and $ \{\hat{X}(z^n\oplus {\mathbf e}_i )[i]\}_{i=1}^n $. Specifically, for $ b_k \in \{0,1\} $, $ k = 0,1,2 $, define \[ N_{b_0b_1b_2} = |\{i : z_i = b_0, \hat{X}(z^n)[i] = b_1, \hat{X}(z^n \oplus {\mathbf e}_i)[i]=b_2 \}|, \] \[ N_{b_0b_1} = \sum_{b_2} N_{b_0b_1b_2}, \mbox{ and } N_{b_0} = \sum_{b_1} N_{b_0b_1}. \] After some simplification, we can then express~(\ref{eq:bsclossest}) as \begin{align} n\losse{\re}(z^n) &= -\frac{\delta}{1-2\delta}(N_{000}+N_{111}) + \delta(N_{001} + N_{110}) \nonumber \\ & + \overline{\delta}(N_{010}+N_{101}) + \frac{\overline{\delta}}{1-2\delta}(N_{011}+N_{100}). \label{eq:estlossNs} \end{align} For our example, we will set $ \hat{X}_1(z^n)[i] = \hat{X}_2(z^n)[i] = 0 $ for all $ i $ and $ z^n $ with even parity.
Thus, for even parity, the two denoisers will be identical, resulting in identical losses for any clean sequence. For $ z^n $ with odd parity, this implies that the corresponding $ N_{b_0b_1b_2} = 0 $ for $ b_2 = 1 $ so that $ N_{b_0b_1} = N_{b_0b_10} $. We will next assume that the clean sequence is the all $ 0 $ sequence and specify the behavior of the two denoisers for odd parity $ z^n $ taking this into account. Under this assumption on the clean sequence, with probability tending to $ 1 $, $ N_1 = \delta n + o(n)$ and $ N_0 = \overline{\delta} n + o(n)$, so that for odd parity $ z^n $, with probability tending to 1, we can write \[ N_{10} = n\delta - N_{11} + o(n) \mbox{ and } N_{00} = n\overline{\delta} - N_{01} + o(n). \] Using the above, we can further simplify~(\ref{eq:estlossNs}) to \begin{align} n\losse{\re}(z^n) &= -\frac{\delta}{1-2\delta}N_{00} + \delta N_{11} + \overline{\delta}N_{01} + \frac{\overline{\delta}}{1-2\delta}N_{10} \nonumber \\ &= N_{01}\left(\frac{\delta}{1-2\delta}+\overline{\delta}\right) + N_{11}\left(\delta - \frac{\overline{\delta}}{1-2\delta}\right) + o(n) \nonumber \\ &= (N_{01}-N_{11})\left(\frac{\delta}{1-2\delta}+\overline{\delta}\right) + o(n) . \label{eq:estlossNssimp} \end{align} The two denoisers will then, respectively, denoise $ z^n $ with odd parity so that: \begin{align*} \hat{X}_1(z^n) & \rightarrow N_{01} = 0, N_{11} = N_1 \\ \hat{X}_2(z^n) & \rightarrow N_{01} = \lfloor \delta N_0 \rfloor , N_{11} = 0. \end{align*} Thus, denoiser 1, for $ z^n $ with odd parity, sets $ \hat{X}_1(z^n)[i] = z_i $, while denoiser 2 sets $ \hat{X}_2(z^n)[i] = 0 $ if $ z_i = 1 $ and $ \hat{X}_2(z^n)[i] = 1 $ for an arbitrary fraction $ \delta $ of those $ i $ for which $ z_i = 0 $. 
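This construction can be checked numerically. The minimal sketch below (clean sequence all zeros, $\delta = 0.1$ as an arbitrary choice) builds the two denoiser outputs for an odd-parity $z^n$ and evaluates~(\ref{eq:estlossNs}), using the fact noted above that $b_2 = 0$ for every $i$:

```python
import random

random.seed(0)
n, delta = 20000, 0.1
db = 1 - delta   # \overline{\delta}

# Noisy observation of the all-zero clean sequence through BSC(delta);
# force odd parity (the regime analyzed above) by flipping one bit if needed.
z = [1 if random.random() < delta else 0 for _ in range(n)]
if sum(z) % 2 == 0:
    z[0] ^= 1

# Denoiser 1 on odd-parity z: identity.  Denoiser 2: map 1 -> 0 and set an
# (arbitrarily chosen) delta-fraction of the zero positions to 1.
x1 = z[:]
zeros = [i for i, b in enumerate(z) if b == 0]
flip = set(zeros[: int(delta * len(zeros))])
x2 = [1 if i in flip else 0 for i in range(n)]

def actual_loss(xhat):           # clean sequence is all zero, Hamming loss
    return sum(xhat)

def est_loss(xhat):
    # Unnormalized estimated loss from the joint-type expression: flipping any
    # bit of an odd-parity z gives even parity, where both denoisers output all
    # zeros, so b2 = 0 for every i and only the N_{b0 b1 0} counts appear.
    N = {(b0, b1): 0 for b0 in (0, 1) for b1 in (0, 1)}
    for zi, xi in zip(z, xhat):
        N[(zi, xi)] += 1
    return (-delta / (1 - 2 * delta) * N[(0, 0)] + delta * N[(1, 1)]
            + db * N[(0, 1)] + db / (1 - 2 * delta) * N[(1, 0)])

# Denoiser 1 has the larger true loss but the smaller (negative) estimated loss.
print(actual_loss(x1), actual_loss(x2))
print(est_loss(x1), est_loss(x2))
```

The printed values match the asymptotic expressions derived above: actual losses near $\delta n$ and $\delta\overline{\delta} n$, estimated losses near $\mp(\frac{\delta}{1-2\delta}+\overline{\delta})$ times those magnitudes.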
Under the assumption that $ x^n $ is all $ 0 $, the following summarizes the actual losses and estimated losses for $ z^n $ with odd parity and $ N_1 = n\delta + o(n) $: \[ \begin{array}{|l|l|l|} \hline \mbox{Denoiser} & n\loss{\re}(x^n,z^n) & n\losse{\re}(z^n) \\ \hline 1 & \delta n + o(n) & -\left(\frac{\delta}{1-2\delta}+\overline{\delta}\right)\delta n + o(n) \\ 2 & \delta\overline{\delta}n + o(n) & \left(\frac{\delta}{1-2\delta}+\overline{\delta}\right)\delta\overline{\delta} n + o(n) \\ \hline \end{array} \label{eq:distvsestdist} \] Thus, we see that the estimated loss for denoiser 1 is smaller (negative in fact) while its actual loss is larger. Since the above scenario (odd parity $ z^n $ and $ N_1 = \delta n + o(n) $) occurs with probability roughly $ 1/2 $, and for $ z^n $ with even parity the two denoisers both incur zero loss, it follows that the expected loss of the loss estimator based denoiser fails to track the expected loss of the best denoiser, namely denoiser 2, in this case. \section{Smoothed denoisers} The misbehavior of the loss estimator in the previous section appears to be the result of an excessive sensitivity of the target denoisers to the noisy sequence. Our path forward for the BSC is to first ``smooth'' the target denoisers via a randomization procedure in a way that does not significantly alter their average-case performance on any sequence. The expected performance (with respect to the randomization) of the smoothed denoisers, in turn, will be shown to be more amenable to accurate loss estimation. To this end, for the BSC-$\delta$ case, let $ W^n $ be i.i.d. Bernoulli-$q_n$ for some $ q_n $ vanishing (with $n$). Given a denoiser $ \hat{X} $, the randomized (smoothed) version is taken to be \begin{equation} \hat{X}'(z^n) = \hat{X}(z^n\oplus W^n).
\label{eq:randdendef} \end{equation} Conditioned on $ Z^n=z^n $, the expected loss (with respect to $ W^n $) of this randomized denoiser is \begin{equation} \overline{L}_{\hat{X}'}(x^n,z^n) \stackrel{\triangle}{=} \frac{1}{n}\sum_{i} E_{W^n}\Lambda(x_i,\hat{X}(z^n\oplus W^n)[i]). \label{eq:randomizedloss} \end{equation} We can readily adapt the above loss estimator to estimate $ \overline{L}_{\hat{X}'}(x^n,z^n) $ as \begin{multline} \hat{{\overline{L}}}_{\re'}(z^n) = \frac 1n \sum_{i=1}^n \sum_{x \in {\cal X}} h(x,z_i) \\ \times \sum_{z \in {\cal Z}} E_{W^n}\Lambda(x,\hat{X}((z^{i-1},z,z_{i+1}^n)\oplus W^n)[i]) \Pi(x,z). \label{eq:loss_estimate_randomized} \end{multline} The summands (over $ i $) of this estimate of the expected loss of the randomized denoiser also have a conditional unbiasedness property. Specifically, letting \begin{multline} \overline{\tilde{\Lambda}}_{i,\hat{X}'}(z^n) \stackrel{\triangle}{=} \sum_{x \in {\cal X}} h(x,z_i) \\ \times \sum_{z \in {\cal Z}} E_{W^n}\Lambda(x,\hat{X}((z^{i-1},z,z_{i+1}^n)\oplus W^n)[i]) \Pi(x,z), \label{eq:overlinetildelambda} \end{multline} we have \begin{align} &E\Big[\overline{\tilde{\Lambda}}_{i,\re'}\Paren{Z^n} \left| Z_1^{i-1} = z_1^{i-1}, Z_{i+1}^n = z_{i+1}^n \right.\Big] \nonumber \\ &= E\Big[E_{W^n}\Lambda\Paren{x_i,\re(Z^n{\oplus} W^n)[i]} \left| Z_1^{i-1}{=} z_1^{i-1}, Z_{i+1}^n {=} z_{i+1}^n \right.\Big]. \label{eq:conditional_unbiased_rand} \end{align} We can then prove (see below) the following key lemma.
\begin{Lemma} For all $ \delta $ and $ \hat{X}' $ as in~(\ref{eq:randdendef}) with $ q_n = n^{-\nu} $ and $ 0 < \nu < 1 $, \begin{equation} \max_{x^n} E\left(\overline{L}_{\hat{X}'}(x^n,Z^n)-\hat{\overline{L}}_{\hat{X}'}(Z^n)\right)^2 = o(1) \end{equation} with $\overline{L}_{\hat{X}'} $ and $ \hat{\overline{L}}_{\hat{X}'} $ as in~(\ref{eq:randomizedloss}) and~(\ref{eq:loss_estimate_randomized}) and where the expectation is with respect to the BSC-$\delta$ induced $ Z^n $. \label{lem:randden} \end{Lemma} The lemma implies that for any BSC the estimate~(\ref{eq:loss_estimate_randomized}) of the randomized denoiser conditional expected loss concentrates for all clean sequences and all underlying denoisers, including those for which the estimate of the underlying denoiser loss does not concentrate. This motivates an estimation minimizing randomized denoiser that departs from the approach of Section~\ref{sec:lossestapproach} as follows. Given denoisers $ \hat{X}_1 $ and $ \hat{X}_2 $ let $ \hat{X}'_1 $ and $ \hat{X}'_2 $ denote their respective randomized versions according to the above randomization. Next, define $ \hat{j}^{'*}(z^n) $ to be \[ \hat{j}^{'*}(z^n) = \arg \min_{j \in \{1,2\}} \hat{\overline{L}}_{\hat{X}'_j}(z^n) \] with $ \hat{\overline{L}}_{\hat{X}'_j} $ in~(\ref{eq:loss_estimate_randomized}) above. The estimation minimizing randomized denoiser is then defined as \begin{equation} \hat{X}^n_{RU}(z^n) = \hat{X}'_{\hat{j}^{'*}(z^n)}(z^n) = \hat{X}_{\hat{j}^{'*}(z^n)}(z^n\oplus W^n). \label{eq:randunivden} \end{equation} This denoiser thus determines the denoiser whose randomized version yields the smallest estimated expected loss computed according to~(\ref{eq:loss_estimate_randomized}) and denoises using the randomized version of the selected denoiser. We then have the following.
\begin{Lemma} If for all $ \epsilon > 0 $, $ \hat{\overline{L}}_{\hat{X}'_j} $ satisfies \begin{equation} \limsup_{n\rightarrow \infty} \max_{x^n}\max_{j\in\{1,2\}} Pr(|\hat{\overline{L}}_{\hat{X}'_j}(Z^n) - \overline{L}_{\hat{X}'_j}(x^n,Z^n)| \geq \epsilon) = 0 \label{eq:conc1rand} \end{equation} then $ \hat{X}_{RU} $ satisfies \begin{multline} \limsup_{n\rightarrow \infty} \max_{x^n} E(\loss{\hat{X}_{RU}}(x^n,Z^n)) \\ - \min\{E(\loss{\hat{X}'_1}(x^n,Z^n)), E(\loss{\hat{X}'_2}(x^n,Z^n))\} = 0, \label{eq:univdefrand} \end{multline} where the expectations are with respect to the channel output $ Z^n $ {\em and} the randomization $ W^n $. \label{lem:conc=univrand} \end{Lemma} \noindent{\bf Proof.} Let $ j^{'*} $ denote \[ j^{'*}(x^n,z^n){=} \arg \min_{j \in \{1,2\}} \overline{L}_{\hat{X}'_j}(x^n,z^n). \] Suppose for $ x^n $ and $ z^n $, $ |\hat{\overline{L}}_{\hat{X}'_j}(z^n) {-} \overline{L}_{\hat{X}'_j}(x^n,z^n)| {\leq} \epsilon $ for $ j \in \{1,2\} $. We then have \begin{align*} &\overline{L}_{\hat{X}'_{\hat{j}^{'*}}}(x^n,z^n) - \overline{L}_{\hat{X}'_{j^{'*}}}(x^n,z^n) \\ &= \overline{L}_{\hat{X}'_{\hat{j}^{'*}}}(x^n,z^n){-} \hat{\overline{L}}_{\hat{X}'_{\hat{j}^{'*}}}(z^n) {+} \hat{\overline{L}}_{\hat{X}'_{\hat{j}^{'*}}}(z^n) {-} \overline{L}_{\hat{X}'_{j^{'*}}}(x^n,z^n) \\ &\leq \overline{L}_{\hat{X}'_{\hat{j}^{'*}}}(x^n,z^n) {-} \hat{\overline{L}}_{\hat{X}'_{\hat{j}^{'*}}}(z^n) {+} \hat{\overline{L}}_{\hat{X}'_{j^{'*}}}(z^n) {-} \overline{L}_{\hat{X}'_{j^{'*}}}(x^n,z^n) \\ & \leq 2\epsilon, \end{align*} implying, via a union bound, that \begin{multline} Pr( \overline{L}_{\hat{X}'_{\hat{j}^{'*}}}(x^n,Z^n) - \overline{L}_{\hat{X}'_{j^{'*}}}(x^n,Z^n) \geq 2\epsilon) \\ \leq \sum_{j=1}^2 Pr(|\hat{\overline{L}}_{\hat{X}'_j}(Z^n) - \overline{L}_{\hat{X}'_j}(x^n,Z^n)| \geq \epsilon). 
\label{eq:probineq2} \end{multline} Noting that $ \hat{X}'_{\hat{j}^{'*}} = \hat{X}_{RU} $, it follows that, for all $ \epsilon > 0 $, \begin{align} &\max_{x^n} E(\loss{\hat{X}_{RU}}(x^n,Z^n)) \nonumber \\ &\; \quad\quad - \min\{E(\loss{\hat{X}'_1}(x^n,Z^n)), E(\loss{\hat{X}'_2}(x^n,Z^n))\} \nonumber \\ &\; \leq \max_{x^n} E(\overline{L}_{\hat{X}'_{\hat{j}^{'*}}}(x^n,Z^n) - \overline{L}_{\hat{X}'_{j^{'*}}}(x^n,Z^n)) \label{eq:overlineequivstep} \\ &\; \leq 2\epsilon {+} \Lambda_{\max}\max_{x^n} Pr( \overline{L}_{\hat{X}'_{\hat{j}^{'*}}}(x^n,Z^n) {-} \overline{L}_{\hat{X}'_{j^{'*}}}(x^n,Z^n) \geq 2\epsilon) \label{eq:foralleps2} \end{align} where $ \Lambda_{\max} $ denotes the maximum loss and (\ref{eq:overlineequivstep}) follows from $ E(\loss{\hat{X}_{RU}}(x^n,Z^n)) {=} E(\overline{L}_{\hat{X}'_{\hat{j}^{'*}}}(x^n,Z^n)) $ and \begin{align*} &\min\{E(\loss{\hat{X}'_1}(x^n,Z^n)), E(\loss{\hat{X}'_2}(x^n,Z^n))\} \\ & = \min\{E(\overline{L}_{\hat{X}'_1}(x^n,Z^n)), E(\overline{L}_{\hat{X}'_2}(x^n,Z^n))\} \\ & \geq E(\min\{\overline{L}_{\hat{X}'_1}(x^n,Z^n), \overline{L}_{\hat{X}'_2}(x^n,Z^n)\}) {=} E(\overline{L}_{\hat{X}'_{j^{'*}}}(x^n,Z^n)). \end{align*} The lemma now follows from~(\ref{eq:probineq2}), (\ref{eq:conc1rand}), and the fact that~(\ref{eq:foralleps2}) holds for all $ \epsilon > 0 $. $ \hspace*{\fill}~\IEEEQED\par $ This lemma shows that the loss estimation minimizing randomized denoiser exhibits the same asymptotic expected performance as the better of the two randomized denoisers. If the expected performance of each such randomized denoiser were, in turn, close to the expected performance of the corresponding original denoiser, the estimation minimizing randomized denoiser would solve our original problem. The proof of Lemma~\ref{lem:randden} is presented in the next section, while the latter property is contained in the following.
\begin{Lemma} For a BSC-$ \delta $, Hamming loss, and any denoiser $ \hat{X} $, if $ W^n $ is i.i.d.\ Bernoulli-$q_n$ with $ q_n = n^{-\nu} $ for $ \nu > 1/2 $, \[ \max_{x^n}|E(L(x^n,\hat{X}(Z^n{\oplus} W^n))){-}E(L(x^n,\hat{X}(Z^n)))| {=} o(1) \] where the first expectation is with respect to the channel and the randomization. \label{lem:closeexp} \end{Lemma} The proof of the lemma appears below. It involves showing that the $ L_1 $ distance between the distributions of the random variables $ Z^n $ and $ Z^n \oplus W^n $ vanishes uniformly for all input sequences $ x^n $. \if{false} \begin{Remark} The lemma can be shown to hold for $ q_n = o(n^{-1/2}) $, but it is a bit easier to write the proof if we concretely assume $ q_n = n^{-\nu} $ for some $ \nu > 1/2 $. \end{Remark} \fi Thus, we have the following. \begin{Theorem} For $ \nu $ satisfying $ 1/2 < \nu < 1 $, the loss estimation minimizing randomized denoiser $ \hat{X}_{RU} $ given by~(\ref{eq:randunivden}), with $ W^n $ i.i.d.\ Bernoulli-$n^{-\nu} $, satisfies \begin{multline*} \limsup_{n\rightarrow \infty} \max_{x^n} E(\loss{\hat{X}_{RU}}(x^n,Z^n)) \\ - \min\{E(\loss{\hat{X}_1}(x^n,Z^n)), E(\loss{\hat{X}_2}(x^n,Z^n))\} = 0. \end{multline*} \label{thm:main} \end{Theorem} \noindent{\bf Proof of Lemma~\ref{lem:closeexp}:} We start by noting that for any $ x^n $ \begin{align} |E(L(x^n,&\hat{X}(Z^n\oplus W^n)))-E(L(x^n,\hat{X}(Z^n)))| \nonumber \\ & = |E(L(x^n,\hat{X}(\tilde{Z}^n)))-E(L(x^n,\hat{X}(Z^n)))| \nonumber \\ &\leq \sum_{z^n}|P_{\tilde{Z}^n}(z^n)-P_{Z^n}(z^n)| \label{eq:l1dist} \end{align} where $ \tilde{Z}^n = Z^n\oplus W^n $, and in the last step $ P_{\tilde{Z}^n}(z^n) $ and $ P_{Z^n}(z^n) $ are the respective probabilities of $ \tilde{Z}^n = z^n $ and $ Z^n = z^n $ for the channel input sequence $x^n $. It follows from the properties of the channel that \begin{equation} Pr(Z_i = 1) = \left\{\begin{array}{ll} \delta & \mbox{if } x_i = 0 \\ \overline{\delta} & \mbox{if } x_i = 1 . \end{array} \right. 
\end{equation} Letting \[ v_n = \delta\overline{q_n}+\overline{\delta}q_n, \] it further follows from the channel and properties of $ W^n $ that $ \tilde{Z}^n $ are independent Bernoulli random variables with \begin{equation} Pr(\tilde{Z}_i = 1) = \left\{\begin{array}{ll} v_n & \mbox{if } x_i = 0 \\ \overline{v_n} & \mbox{if } x_i = 1 . \end{array} \right. \end{equation} Notice that~(\ref{eq:l1dist}) is invariant to a permutation of the underlying $ x^n $, so for notational convenience we shall assume that $ x^m = 0 $ and $ x_{m+1}^n = 1 $, for some value of $ m $. Define \[ A = \{z^n: |n_1(z^m) + n_0(z^{n}_{m+1})-\delta n| \leq n^{1/4+\nu/2} \}, \] where for any binary sequence $ y^k $, $ n_1(y^k) $ and $ n_0(y^k) $ respectively denote the number of $ 1 $'s and $ 0 $'s in $ y^k $. Since $ \nu > 1/2 $, the fact that $ n_1(Z^m)+n_0(Z^n_{m+1}) $ and $ n_1(\tilde{Z}^m) + n_0(\tilde{Z}^n) $ respectively have the same distributions as the sum of $ n $ i.i.d.\ Bernoulli-$ \delta $ and $ n $ i.i.d.\ Bernoulli-$ v_n $ random variables along with standard results (e.g., Hoeffding's inequality) imply that \begin{equation} Pr(Z^n \in A^c) = o(1) \mbox{ and } Pr(\tilde{Z}^n \in A^c) = o(1). \label{eq:Acompo1} \end{equation} In the case of the latter, note that $ n v_n = n\delta + O(n^{1-\nu}) $ so that the deviation from the mean implied by $ \tilde{Z}^n \in A^c $ is still $ O(n^{1/4+\nu/2}) $ (i.e., $ n^{1-\nu} = o(n^{1/4+\nu/2}) $ for $ \nu > 1/2 $). 
Additionally, for $ z^n \in A $ we have \begin{align} \log \frac{P_{\tilde{Z}^n}(z^n)}{P_{Z^n}(z^n)} &= (n_1(z^m)+n_0(z_{m+1}^n))\log \left(\frac{v_n}{\delta}\right) \nonumber \\ & \quad +(n_0(z^m)+n_1(z_{m+1}^n))\log \left(\frac{\overline{v_n}}{\overline{\delta}}\right) \nonumber \\ &= (n\delta {+} d)\left(\frac{v_n{-}\delta}{\delta}{-} \frac{(v_n{-}\delta)^2}{2\delta^2(1{+}\xi)^2}\right) \nonumber \\ & \quad + (n\overline{\delta}{-}d)\left(-\frac{v_n{-}\delta}{\overline{\delta}}{-} \frac{(v_n{-}\delta)^2}{2\overline{\delta}^2(1+\xi')^2}\right) \label{eq:taylor} \\ &= \frac{d(v_n{-}\delta)}{\delta\overline{\delta}} \nonumber \\ & \quad - (v_n{-}\delta)^2\left(\frac{n\delta {+} d}{2\delta^2(1{+}\xi)^2} {+} \frac{n\overline{\delta}{-}d} {2\overline{\delta}^2(1{+}\xi')^2} \right) \nonumber \\ & = o(1) \label{eq:laststepo1} \end{align} where~(\ref{eq:taylor}) follows by Taylor's approximation of $ \log(1+x) $ with $ d \stackrel{\triangle}{=} $ $ n_1(z^m)+n_0(z_{m+1}^n) - n\delta $ and $ |\xi| \leq |(v_n{-}\delta)/\delta| $, $ |\xi'| \leq |(v_n{-}\delta)/\overline{\delta}| $ and~(\ref{eq:laststepo1}) follows since $ \nu > 1/2 $, which implies $ |d(v_n{-}\delta)| = $ $ O(n^{1/4+\nu/2-\nu}) = $ $ O(n^{1/4-\nu/2}) = o(1) $ and $ n(v_n-\delta)^2 = O(n^{1-2\nu}) $ $ = o(1) $.
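As a quick numerical sanity check of this estimate (parameter choices arbitrary), one can evaluate the log-ratio exactly at the boundary of $ A $, i.e., at $ d = n^{1/4+\nu/2} $, and watch it shrink as $ n $ grows:

```python
import math

delta, nu = 0.2, 0.75                # any delta in (0, 1/2), nu in (1/2, 1)
ratios = []
for n in (10**4, 10**6, 10**8):
    q = n ** (-nu)
    v = delta * (1 - q) + (1 - delta) * q      # v_n
    d = n ** (0.25 + nu / 2)                   # extreme deviation allowed by A
    log_ratio = ((n * delta + d) * math.log(v / delta)
                 + (n * (1 - delta) - d) * math.log((1 - v) / (1 - delta)))
    ratios.append(log_ratio)
print(ratios)   # decreasing toward 0; the decay here is of order n^{1/4 - nu/2}
```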
Applying these facts to~(\ref{eq:l1dist}), we obtain \begin{align} \sum_{z^n}|P_{\tilde{Z}^n}(z^n) & {-}P_{Z^n}(z^n)| \nonumber \\ &= o(1) + \sum_{z^n\in A}|P_{\tilde{Z}^n}(z^n)-P_{Z^n}(z^n)| \label{eq:probbnd}\\ &= o(1) + \sum_{z^n\in A}P_{Z^n}(z^n)\left|\frac{P_{\tilde{Z}^n}(z^n)}{P_{Z^n}(z^n)} - 1\right| \nonumber \\ &= o(1) \label{eq:ratiobnd} \end{align} where~(\ref{eq:probbnd}) follows from~(\ref{eq:Acompo1}) and~(\ref{eq:ratiobnd}) follows from~(\ref{eq:laststepo1}), which is uniformly vanishing for $ z^n \in A $, and the fact that $ e^{x} $ is continuous.~$\hspace*{\fill}~\IEEEQED\par$ \vspace{-.5cm} \section{Proof of Lemma~\ref{lem:randden}} We begin by defining, for any $ f:\{0,1\}^n \rightarrow \RR $, the $ x^n $-dependent total influence (terminology inspired by a related quantity in~\cite{KKL88}) of $ f $ as \[ I(f) = \sum_{j=1}^n E(|f(Z^n)-f(Z^{j-1},\tilde{Z}_j,Z_{j+1}^n)|) \] where $ (\tilde{Z}^n,Z^n) $ constitute an i.i.d.\ pair of random variables with $ Z^n $ distributed according to the channel with input $ x^n $ (hence the dependence on $ x^n $). The proof of Lemma~\ref{lem:randden} hinges on the following result. \begin{Proposition} For all $ 0 < \nu < 1 $ and $ \hat{X}'(z^n) $ defined as in~(\ref{eq:randdendef}), \begin{equation} \max_{x^n,i} I(\overline{\hat{X}'}(\cdot)[i]) = o(n), \label{eq:infon} \end{equation} where $ \overline{\hat{X}'}(z^n)[i] {=} E_{W^n}(\hat{X}'(z^n)[i]) {=} E_{W^n}(\hat{X}(z^n\oplus W^n)[i]), $ with the expectation taken with respect to $ W^n $. \label{prop:infon} \end{Proposition} The proof, which follows, involves showing that $ \max_{f}\max_{z^n} \sum_{j=1}^n|\overline{f}(z^n)-\overline{f}(z^n\oplus {\mathbf e}_j)| = o(n), $ where the outer maximization is over all functions $ f:\{0,1\}^n \rightarrow [0,1] $, with $ \overline{f}(z^n) \stackrel{\triangle}{=} E_{W^n}(f(z^n\oplus W^n)). $ This, in turn, is reduced to proving that the $ L_1 $ distance between two related distributions vanishes. 
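Before the formal proof, a small exact computation illustrates the effect being claimed. Take the parity function $ f(z^n) = z_1 \oplus \cdots \oplus z_n $ as a hypothetical worst case: without smoothing, $ \sum_{j=1}^n|f(z^n)-f(z^n\oplus {\mathbf e}_j)| = n $. A side computation (not from the source) gives $ \overline{f}(z^n) = p + (1-2p)f(z^n) $ with $ p = (1-(1-2q)^n)/2 $, so the smoothed sum collapses to $ n(1-2q)^n $. The sketch below verifies this by brute-force enumeration of $ W^n $ for a small $ n $:

```python
import itertools
import math

n, q = 10, 0.2

def parity(z):
    return sum(z) % 2

def smoothed(f):
    # \overline{f}(z^n) = E_{W^n} f(z^n xor W^n), computed exactly by
    # enumerating all 2^n realizations of the i.i.d. Bernoulli-q mask W^n.
    def fbar(z):
        total = 0.0
        for w in itertools.product((0, 1), repeat=n):
            pw = math.prod(q if b else 1 - q for b in w)
            total += pw * f(tuple(zi ^ wi for zi, wi in zip(z, w)))
        return total
    return fbar

fbar = smoothed(parity)
z = (0,) * n
infl = sum(abs(fbar(z) - fbar(z[:j] + (1 - z[j],) + z[j + 1:]))
           for j in range(n))
print(infl, n * (1 - 2 * q) ** n)   # the two agree; both are tiny versus n
```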
\noindent{\bf Proof:} For a function $ f:\{0,1\}^n \rightarrow [0,1] $, let $ \overline{f}(z^n) $ denote \[ \overline{f}(z^n) = E_{W^n}(f(z^n\oplus W^n)). \] Also, let $ e_j $ denote the ``indicator'' sequence (or vector) with $ e_j[t] = 0 $ if $ t \neq j $ and $ e_j[j] = 1 $. We will prove the proposition by showing that \begin{equation} \max_{f}\max_{z^n} \sum_{j=1}^n|\overline{f}(z^n)-\overline{f}(z^n\oplus e_j)| = o(n), \label{eq:mainstep} \end{equation} where the maximization over $ f $ is over all functions $ f:\{0,1\}^n \rightarrow [0,1] $.\footnote{A simple example (e.g., $ f(z^n) = z_1 $) shows that $ |\overline{f}(z^n)-\overline{f}(z^n\oplus e_j)| $ can be $ \Omega(1) $ for any fixed $ j $, but it turns out this can't occur for too many $ j $'s simultaneously for any underlying $ f $.} To see why~(\ref{eq:mainstep}) implies~(\ref{eq:infon}), note that \begin{equation} \max_{\tilde{z}^n} \sum_{j=1}^n|\overline{f}(z^n)-\overline{f}(z^{j-1},\tilde{z}_j,z_{j+1}^n)| = \sum_{j=1}^n|\overline{f}(z^n)-\overline{f}(z^n\oplus e_j)| \label{eq:maxstepreason} \end{equation} and therefore, \begin{align} & I(\overline{\hat{X}'}(\cdot)[i]) \nonumber \\ & = E\Big(E\Big(\sum_{j=1}^n|\overline{\hat{X}'}(Z^n)[i]-\overline{\hat{X}'}(Z^{j-1}, \tilde{Z}_j,Z_{j+1}^n)[i]|\Big| Z^n\Big)\Big) \nonumber\\ &\leq E\Big(\max_{\tilde{z}^n}\sum_{j=1}^n|\overline{\hat{X}'}(Z^n)[i]- \overline{\hat{X}'}(Z^{j-1},\tilde{z}_j,Z_{j+1}^n)[i]|\Big) \nonumber \\ &= E\Big(\sum_{j=1}^n|\overline{\hat{X}'}(Z^n)[i]- \overline{\hat{X}'}(Z^n\oplus e_j)[i]|\Big) \label{eq:maxstep}\\ &\leq \max_{f}\max_{z^n} \sum_{j=1}^n|\overline{f}(z^n)-\overline{f}(z^n\oplus e_j)| \label{eq:mainstepjust} \end{align} where~(\ref{eq:maxstep}) follows from~(\ref{eq:maxstepreason}). 
We begin the proof of~(\ref{eq:mainstep}) with the observation that \begin{align} & \max_{f}\max_{z^n} \sum_{j=1}^n|\overline{f}(z^n)-\overline{f}(z^n\oplus e_j)| \nonumber \\ &= \max_{f}\sum_{j=1}^n|\overline{f}(0^n)-\overline{f}(e_j)| \nonumber \\ & = \max_{f}\max_{s^n \in \{-1,+1\}^n} \sum_{j=1}^ns_j(\overline{f}(0^n)-\overline{f}(e_j)) \nonumber \\ &= \max_{s^n \in \{-1,+1\}^n} \max_f \sum_{j=1}^ns_j(E(f(W^n))-E(f(W^n\oplus e_j))) \label{eq:maxovers} \end{align} where the expectations are with respect to $ W^n $, and the first equality follows since the inner maximization over $ z^n $ can be absorbed into the maximization over $ f $ by replacing $ f $ with $ f(\cdot\oplus z^n) $. Next, we note that for any $ s^n $, $ f $, and permutation $ \sigma $ of $ (1,2,\ldots,n) $ \begin{multline} \sum_{j=1}^ns_{\sigma(j)}(E(f(W^n))-E(f(W^n\oplus e_j))) \\ = \sum_{j=1}^ns_{j}(E(f\circ\sigma^{-1}(W^n))-E(f\circ\sigma^{-1}(W^n\oplus e_j))) \label{eq:perminv} \end{multline} where \[ f\circ\sigma^{-1}(z_1,\ldots,z_n) = f(z_{\sigma^{-1}(1)},\ldots,z_{\sigma^{-1}(n)}) \] and $ \sigma^{-1} $ is the inverse permutation of $ \sigma $. This can be seen as follows: \begin{align} \sum_{j=1}^ns_{j}&(E(f\circ\sigma^{-1}(W^n))-E(f\circ\sigma^{-1}(W^n\oplus e_j))) \nonumber \\ &= \sum_{j=1}^ns_{j}(E(f(W^n))-E(f(W^n\oplus e_{\sigma^{-1}(j)}))) \label{eq:perminvw} \\ &= \sum_{j=1}^ns_{\sigma(j)}(E(f(W^n))-E(f(W^n\oplus e_j))), \nonumber \end{align} where~(\ref{eq:perminvw}) follows from the fact that the distribution of $ W^n $ is permutation invariant. Relation~(\ref{eq:perminv}) implies that the maximization over $ s^n $ in~(\ref{eq:maxovers}) can be restricted to $ s^n $ for which $ s_j = 1 $ for $ j \leq m $ and $ s_j = -1 $ for $ j > m $, for some $ m \in \{0,1,\ldots,n\}$.
Given such an $ m $, the maximization over $ f $ in~(\ref{eq:maxovers}) can be expressed as \begin{align} \max_f \Big[ \sum_{j=1}^m & E(f(W^n))-E(f(W^n\oplus e_j)) \nonumber \\ & \quad + \sum_{j=m+1}^n E(f(W^n\oplus e_j)) - E(f(W^n)) \Big] \nonumber \\ &= \max_f n(E(f(W_1^n)) - E(f(W_2^n))) \nonumber \\ &\leq \frac{n}{2} \sum_{w^n}|p_1(w^n)-p_2(w^n)| \label{eq:l1bndws} \end{align} where $ W_1^n $ and $ W_2^n $ are random sequences with respective probability distributions $ p_1(\cdot) $ and $ p_2(\cdot) $, and where $ W_1^n = W^n $ with probability $ m/n $ and $ W_1^n = W^n\oplus e_j $ with probability $ 1/n $ for $ j \in \{m{+}1,\ldots, n \} $, $ W_2^n = W^n $ with probability $ 1-m/n $ and $ W_2^n = W^n\oplus e_j $ with probability $ 1/n $ for $ j \in \{1,\ldots, m\} $. The last step~(\ref{eq:l1bndws}) follows since the range of $ f $ is in $ [0,1] $ (any bounded range could be accounted for with a suitable constant factor). Recalling the definition of $ W^n$, we have that $ p(w^n) = q_n^{n_1(w^n)}\overline{q_n}^{n-n_1(w^n)} $. It then follows from the above definitions of $ W_1 $ and $ W_2 $ that \begin{align} & p_1(w^n) \nonumber \\ &= \frac{m}{n}p(w^n) + \frac{1}{n}\sum_{j=m+1}^n p(w^n\oplus e_j) \nonumber \\ &= p(w^n)\Bigg[\frac{m}{n}{+} \frac{1}{n}\Bigg( n_1(w_{m{+}1}^n)\frac{\overline{q_n}}{q_n} {+} (n{-}m{-}n_1(w_{m{+}1}^n))\frac{q_n}{\overline{q_n}} \Bigg) \Bigg] \label{eq:w1prob} \end{align} and \begin{align} & p_2(w^n) \nonumber \\ &= \frac{n-m}{n}p(w^n) + \frac{1}{n}\sum_{j=1}^m p(w^n\oplus e_j) \nonumber \\ &= p(w^n)\Bigg[\frac{n-m}{n}+ \frac{1}{n}\Bigg( n_1(w^m)\frac{\overline{q_n}}{q_n} + (m-n_1(w^m))\frac{q_n}{\overline{q_n}} \Bigg) \Bigg]. 
\label{eq:w2prob} \end{align} These imply the obvious bounds \begin{multline} p(w^n)\left[\frac{m}{n}+ \frac{n_1(w_{m+1}^n)}{n}\frac{\overline{q_n}}{q_n} \right] \leq p_1(w^n) \\ \leq p(w^n)\left[\frac{m}{n}+ \frac{n_1(w_{m+1}^n)}{n}\frac{\overline{q_n}}{q_n}+\frac{q_n}{\overline{q_n}} \right] \label{eq:w1probbnds} \end{multline} and \begin{multline} p(w^n)\left[\frac{n-m}{n}+ \frac{n_1(w^m)}{n}\frac{\overline{q_n}}{q_n} \right] \leq p_2(w^n) \\ \leq p(w^n)\left[\frac{n-m}{n}+ \frac{n_1(w^m)}{n}\frac{\overline{q_n}}{q_n}+\frac{q_n}{\overline{q_n}} \right], \label{eq:w2probbnds} \end{multline} which, in turn, imply \begin{align} &\sum_{w^n}|p_1(w^n)-p_2(w^n)| \nonumber \\ &\leq \frac{q_n}{\overline{q_n}} {+} E\Bigg[\Bigg|\frac{m}{n}{+} \frac{n_1(W_{m{+}1}^n)}{n}\frac{\overline{q_n}}{q_n} {-}\frac{n{-}m}{n} {-}\frac{n_1(W^m)}{n}\frac{\overline{q_n}}{q_n} \Bigg|\Bigg] \nonumber \\ &= \frac{q_n}{\overline{q_n}} {+} E\Bigg[\Bigg|\frac{m}{n} {+} \frac{(n_1(W_{m{+}1}^n){-}(n{-}m)q_n{+}(n{-}m)q_n)}{n}\frac{\overline{q_n}}{q_n} \nonumber \\ & \quad\quad\quad\quad\quad {-}\frac{n{-}m}{n} {-}\frac{(n_1(W^m){-}mq_n{+}mq_n)}{n}\frac{\overline{q_n}}{q_n} \Bigg|\Bigg] \nonumber \\ &= \frac{q_n}{\overline{q_n}} {+} E\Bigg[\Bigg| \frac{(n_1(W_{m{+}1}^n){-}(n{-}m)q_n)}{n}\frac{\overline{q_n}}{q_n} \nonumber \\ & \quad\quad\quad\quad\quad {-}\frac{(n_1(W^m){-}mq_n)}{n}\frac{\overline{q_n}}{q_n} {+} q_n\frac{2m{-}n}{n} \Bigg|\Bigg] \nonumber \\ &\leq \frac{2q_n}{\overline{q_n}} {+} \frac{1}{nq_n}( E[|n_1(W_{m{+}1}^n){-}q_n(n{-}m)|] \nonumber \\ & \mbox{\hspace{1in}} {+} E[|n_1(W^m){-}q_nm|]), \label{eq:devstep} \end{align} where the expectations are with respect to $ W^n $. 
We will bound the expectations in~(\ref{eq:devstep}) using the concentration inequality~\cite[Theorem 2.3]{mcdiarmid} \begin{equation} P\Big(|n_1(W^k) - kq_n| \ge \epsilon \Big) \le 2\exp\left( - \frac { \epsilon^2}{2k q_n \left(1+ \epsilon/(3kq_n) \right)}\right), \label{eq:mcdiarmid} \end{equation} which is applicable since $ W^n $ is i.i.d.\ with $ W_j \in \{0,1\} $. Using the well known integration-by-parts formula for the expectation of a non-negative random variable, we have \begin{align} E&[|n_1(W^k){-}q_nk|] \nonumber \\ &= \int_{0}^{\infty} P\Big(|n_1(W^k) {-} kq_n| \ge \epsilon \Big) d\epsilon \nonumber \\ & \leq \int_{0}^{\infty} 2\exp\left( {-} \frac { \epsilon^2}{2k q_n \left(1{+} \epsilon/(3kq_n) \right)}\right) d\epsilon \nonumber \\ & \leq \int_{0}^{kq_n} 2\exp\left( {-} \frac { 3\epsilon^2}{8k q_n}\right) d\epsilon {+} \int_{kq_n}^{\infty} 2\exp\left( {-} \frac { 3\epsilon}{8}\right) d\epsilon \nonumber \\ & \leq \sqrt{\frac{8\pi kq_n}{3}} {+} \frac{16}{3}\exp\left({-}\frac{3kq_n}{8}\right). \nonumber \end{align} Applying this in~(\ref{eq:devstep}) with $ k = n-m $ and $ k = m $, respectively, yields \begin{align} &\sum_{w^n}|p_1(w^n){-}p_2(w^n)| \nonumber \\ &\leq \frac{2q_n}{\overline{q_n}} {+} \frac{1}{nq_n}\sqrt{\frac{8\pi (n{-}m)q_n}{3}} {+} \frac{1}{nq_n}\frac{16}{3}\exp\left({-}\frac{3(n{-}m)q_n}{8}\right) \nonumber \\ & \quad\quad {+} \frac{1}{q_nn}\sqrt{\frac{8\pi m q_n}{3}} {+} \frac{1}{nq_n}\frac{16}{3}\exp\left({-}\frac{3mq_n}{8}\right) \nonumber \\ &= O(q_n) {+} O((nq_n)^{-1/2}) {+} O((nq_n)^{-1}) \nonumber \\ &= o(1), \nonumber \end{align} uniformly in $ m $, where the last step follows from our assumption that $ q_n = n^{-\nu} $ for $ 0 < \nu < 1 $. 
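A quick Monte Carlo check of this expectation bound (with arbitrary small parameters) is below; the empirical mean absolute deviation sits comfortably under the analytic expression:

```python
import math
import random

random.seed(2)
k, q, trials = 4000, 0.05, 300
# Monte Carlo estimate of E|n_1(W^k) - k q| for W^k i.i.d. Bernoulli-q.
mad = sum(abs(sum(random.random() < q for _ in range(k)) - k * q)
          for _ in range(trials)) / trials
bound = math.sqrt(8 * math.pi * k * q / 3) + (16 / 3) * math.exp(-3 * k * q / 8)
print(mad, bound)
```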
Incorporating this bound into~(\ref{eq:l1bndws}), and then into~(\ref{eq:maxovers}), combined with the observation~(\ref{eq:perminv}), establishes~(\ref{eq:mainstep}) via~(\ref{eq:mainstepjust}), completing the proof.~$\hspace*{\fill}~\IEEEQED\par$ \noindent{\bf Proof of Lemma~\ref{lem:randden}:} The proof is similar to that of Proposition~\ref{prop:functionoffew}, except the correlations appearing in~(\ref{eq:zerocorra}) are handled using Proposition~\ref{prop:infon}. Define \begin{equation} \overline{\Delta}_i(z^n) \stackrel{\triangle}{=} \overline{\tilde{\Lambda}}_{i,\re'}\Paren{z^n} - E_{W^n}\Lambda\Paren{x_i,\re(z^n\oplus W^n)[i]} \end{equation} with~$\overline{\tilde{\Lambda}}_{i,\re'}$ as in~(\ref{eq:overlinetildelambda}). We claim that \begin{equation} \max_{x^n,i} I(\overline{\Delta}_i(\cdot)) = o(n). \label{eq:deltainfon} \end{equation} To see this, note that for the binary/Hamming loss case, $ E_{W^n}\Lambda(x,\hat{X}(z^n\oplus W^n)[i]) = \overline{\hat{X}'}(z^n)[i] $ if $ x = 0 $ and $ 1 - \overline{\hat{X}'}(z^n)[i] $ if $ x = 1 $. It is then immediate from the definitions that $ \overline{\Delta}_i $ can be expressed as \begin{multline} \overline{\Delta}_{i}(z^n) = c_1(z_i) + c_2(z_i)\overline{\hat{X}'}(z^{i-1},0,z_{i+1}^n)[i] \\ + c_3(z_i)\overline{\hat{X}'}(z^{i-1},1,z_{i+1}^n)[i] + c_4(x_i) + c_5(x_i)\overline{\hat{X}'}(z^n)[i] \label{eq:overlinedeltaexpr} \end{multline} for $ z_i $ and $ x_i $ dependent quantities $ c_1, \ldots, c_5 $. 
Thus, we have \begin{align} &I(\overline{\Delta}_i(\cdot)) \nonumber \\ &= \sum_{j=1}^nE(|\overline{\Delta}_{i}(Z^n) {-} \overline{\Delta}_{i}(Z^{j{-}1},\tilde{Z}_j,Z_{j{+}1}^n)|) \nonumber \\ &\leq d_1 {+} d_2\sum_{j\neq i}E\big(|\overline{\hat{X}'}(Z^n)[i]{-} \overline{\hat{X}'}(Z^{j{-}1},\tilde{Z}_j,Z_{j{+}1}^n)[i]|\big|Z_i = 0\big) \nonumber \\ & \quad {+} d_3\sum_{j\neq i}E\big(|\overline{\hat{X}'}(Z^n)[i]{-} \overline{\hat{X}'}(Z^{j{-}1},\tilde{Z}_j,Z_{j{+}1}^n)[i]|\big|Z_i = 1\big) \nonumber \\ & \quad {+} d_4\sum_{j\neq i}E\big(|\overline{\hat{X}'}(Z^n)[i]{-} \overline{\hat{X}'}(Z^{j{-}1},\tilde{Z}_j,Z_{j{+}1}^n)[i]|\big) \label{eq:IDeltastep1} \\ &\leq d_1 {+} d_5I(\overline{\hat{X'}}(\cdot)[i]), \label{eq:IDeltalaststep} \end{align} where $ d_1,\ldots,d_5 $ are bounded $ x_i $ dependent quantities and where~(\ref{eq:IDeltastep1}) follows from~(\ref{eq:overlinedeltaexpr}) and the triangle inequality. The claim~(\ref{eq:deltainfon}) follows from~(\ref{eq:IDeltalaststep}) and Proposition~\ref{prop:infon} since $ d_1 $ and $ d_5 $ can be bounded uniformly in $ x^n $ and $ i $. Next, we note that for $ (Z^n,\tilde{Z}^n) $ an i.i.d.\ pair with $ Z^n $ distributed according to the channel (as in the definition of total influence above), for all pairs $ (i,j) $, \begin{align} &E(\overline{\Delta}_i(Z^{j{-}1},\tilde{Z}_j,Z_{j{+}1}^n) \overline{\Delta}_j(Z^n)) \nonumber \\ &= E\big(E\big(\overline{\Delta}_i(Z^{j{-}1},\tilde{Z}_j,Z_{j{+}1}^n) \overline{\Delta}_j(Z^n)\big| Z^{j{-}1},\tilde{Z}_j,Z_{j{+}1}^n\big)\big) \nonumber \\ &= E\big(\overline{\Delta}_i(Z^{j{-}1},\tilde{Z}_j,Z_{j{+}1}^n) E\big(\overline{\Delta}_j(Z^n)\big| Z^{j{-}1},\tilde{Z}_j,Z_{j{+}1}^n\big)\big) \nonumber \\ &= 0 \label{eq:overlinedeltauncorr}, \end{align} where this last step follows from the conditional unbiasedness~(\ref{eq:conditional_unbiased_rand}) and the distribution of $ (Z^n,\tilde{Z}^n) $. 
We then have \begin{align} &E\Big(\sum_{i=1}^n\overline{\Delta}_i(Z^n)\Big)^2 \nonumber \\ &= \sum_{i=1}^nE\Big(\sum_{j=1}^n\overline{\Delta}_i(Z^n)\overline{\Delta}_j(Z^n) \Big) \nonumber \\ &= \sum_{i=1}^nE\Big( \sum_{j=1}^n\overline{\Delta}_i(Z^{j{-}1},\tilde{Z}_j,Z_{j{+}1}^n)\overline{\Delta}_j(Z^n) \Big) \nonumber \\ & \quad {+} \sum_{i=1}^nE\Big( \sum_{j=1}^n(\overline{\Delta}_i(Z^n){-} \overline{\Delta}_i(Z^{j{-}1},\tilde{Z}_j,Z_{j{+}1}^n))\overline{\Delta}_j(Z^n) \Big) \nonumber \\ &= \sum_{i=1}^nE\Big( \sum_{j=1}^n(\overline{\Delta}_i(Z^n){-} \overline{\Delta}_i(Z^{j{-}1},\tilde{Z}_j,Z_{j{+}1}^n))\overline{\Delta}_j(Z^n) \Big) \label{eq:uncorrstep} \\ &\leq c\sum_{i=1}^nE\Big( \sum_{j=1}^n(|\overline{\Delta}_i(Z^n){-} \overline{\Delta}_i(Z^{j{-}1},\tilde{Z}_j,Z_{j{+}1}^n)|) \Big) \nonumber \\ &\leq c\sum_{i=1}^nI(\overline{\Delta}_i(\cdot)), \label{eq:Idefstep} \end{align} where~(\ref{eq:uncorrstep}) follows from~(\ref{eq:overlinedeltauncorr}) and (\ref{eq:Idefstep}) from the fact that $ \overline{\Delta}_i(z^n) $ can be bounded by a constant $ c $ for all $ i $, $ z^n $ and $ x^n $ and the definition of total influence. The proof is completed by applying~(\ref{eq:deltainfon}).~$\hspace*{\fill}~\IEEEQED\par$
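The total influence $I(f) = \sum_{j} E|f(Z^n) - f(Z^{j-1},\tilde{Z}_j,Z_{j+1}^n)|$ invoked in the last step can be illustrated on a toy function by exhaustive enumeration. This is a sketch of ours (not from the paper), assuming i.i.d.\ Bernoulli($q$) coordinates; the helper name `total_influence` is hypothetical.

```python
import itertools

def total_influence(f, n, q):
    """I(f) = sum_j E|f(Z^n) - f(Z^{j-1}, Z~_j, Z_{j+1}^n)| for i.i.d.
    Bernoulli(q) coordinates, computed by exhaustive enumeration (small n only)."""
    total = 0.0
    for j in range(n):
        for z in itertools.product((0, 1), repeat=n):
            pz = 1.0
            for b in z:
                pz *= q if b else 1 - q
            for zt in (0, 1):
                pzt = q if zt else 1 - q
                z_resampled = z[:j] + (zt,) + z[j + 1:]
                total += pz * pzt * abs(f(z) - f(z_resampled))
    return total

# A "dictator" function depends on one coordinate only, so its total influence
# is exactly E|Z_1 - Z~_1| = 2*q*(1-q), independent of n.
q = 0.3
assert abs(total_influence(lambda z: z[0], 4, q) - 2 * q * (1 - q)) < 1e-12
```

Functions whose total influence is $o(n)$, like $\overline{\Delta}_i$ above, thus have near-uncorrelated resampling differences, which is what drives the variance bound.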
\section{Introduction} Wireless ad hoc networks have received significant attention due to their many applications in, for instance, environmental monitoring or emergency disaster relief, where wiring is difficult. Unlike wired networks, wireless ad hoc networks lack a backbone infrastructure. Communication takes place either through single-hop transmission or by relaying through intermediate nodes. We consider the case that each node can adjust its transmit power for the purpose of power conservation. In the assignment of transmit powers, two conflicting effects have to be taken into account: if the transmit powers are too low, the resulting network may be disconnected. If the transmit powers are too high, the nodes run out of energy quickly. The goal of the power assignment problem is to assign transmit powers to the transceivers such that the resulting network is connected and the sum of transmit powers is minimized~\cite{LloydEA}. \subsection{Problem Statement and Previous Results} We consider a set of vertices $X \subseteq [0,1]^d$, which represent the sensors, $|X|=n$, and assume that $\|u-v\|^p$, for some $p \in \real$ (called the \emph{distance-power gradient} or \emph{path loss exponent}), is the power required to successfully transmit a signal from $u$ to $v$. This is called the power-attenuation model, where the strength of the signal decreases with $1/r^p$ for distance $r$, and is a simple yet very common model for power assignments in wireless networks~\cite{Rappaport:Wireless:2002}. In practice, we typically have $1 \leq p \leq 6$~\cite{Pahlavan:OneSix:1995}. A power assignment $\penergy: X \to [0, \infty)$ is an assignment of transmit powers to the nodes in $X$. Given $\penergy$, we have an edge between two nodes $u$ and $v$ if both $\penergy(u), \penergy(v) \geq \|u-v\|^p$. If the resulting graph is connected, we call it a \emph{PA graph}.
Our goal is to find a PA graph and a corresponding power assignment $\penergy$ that minimizes $\sum_{v \in X} \penergy(v)$. Note that any PA graph $G = (X,E)$ induces a power assignment by $\penergy(v) = \max_{u \in X: \{u,v\} \in E} \|u-v\|^p$. A PA graph can in many respects be regarded as a tree, as we are only interested in connectedness, but it may contain additional edges in general. However, we can simply ignore edges and restrict ourselves to a spanning tree of the PA graph. The minimal connected power assignment problem is NP-hard for $d \geq 2$ and APX-hard for $d \geq 3$~\cite{ClementiEA:PowerRadio:2004}. For $d=1$, i.e., when the sensors are located on a line, the problem can be solved by dynamic programming~\cite{KirousisEA}. A simple approximation algorithm for minimum power assignments is the minimum spanning tree heuristic (MST heuristic), which achieves a tight worst-case approximation ratio of $2$~\cite{KirousisEA}. This has been improved by Althaus et al.~\cite{AlthausEA:RangeAssignment:2006}, who devised an approximation algorithm that achieves an approximation ratio of $5/3$. A first average-case analysis of the MST heuristic was presented by de Graaf et al.~\cite{AveragePA}: First, they analyzed the expected approximation ratio of the MST heuristic for the (non-geometric, non-metric) case of independent edge lengths. Second, they proved convergence of the total power consumption of the assignment computed by the MST heuristic for the special case of $p = d$, but not of the optimal power assignment. They left as open problems, first, an average-case analysis of the MST heuristic for random geometric instances and, second, the convergence of the value of the optimal power assignment.
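As a concrete illustration of the induced power assignment $\penergy(v) = \max_{u: \{u,v\} \in E} \|u-v\|^p$, here is a small sketch of ours (not from the paper), assuming Python with only the standard library; the helper name `induced_powers` is hypothetical.

```python
import math

def induced_powers(points, edges, p):
    """Power each node needs so that every edge {u, v} of the given graph
    satisfies penergy(u), penergy(v) >= ||u - v||^p."""
    powers = [0.0] * len(points)
    for u, v in edges:
        w = math.dist(points[u], points[v]) ** p
        powers[u] = max(powers[u], w)
        powers[v] = max(powers[v], w)
    return powers

points = [(0.0, 0.0), (0.3, 0.0), (0.3, 0.4), (0.9, 0.4)]
path = [(0, 1), (1, 2), (2, 3)]  # one possible (tree-shaped) PA graph
powers = induced_powers(points, path, p=2)
# Every edge is bidirectionally supported by the induced powers.
for u, v in path:
    w = math.dist(points[u], points[v]) ** 2
    assert powers[u] >= w and powers[v] >= w
```

The objective of the power assignment problem is then simply `sum(powers)`, minimized over all connected graphs on the point set.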
Other power assignment problems studied include the $k$-station network coverage problem of Funke et al.~\cite{FunkeEA:PowerTSP:2011}, where transmit powers are assigned to at most $k$ stations such that $X$ can be reached from at least one sender, or power assignments in the SINR model~\cite{HalldorssonEA,Kesselheim:SINR:2011}. \subsection{Our Contribution} In this paper, we conduct an average-case analysis of the optimal power assignment problem for Euclidean instances. The points are drawn independently and uniformly from the $d$-dimensional unit cube $[0,1]^d$. We believe that probabilistic analysis is a better-suited measure for performance evaluation in wireless ad hoc networks, as the positions of the sensors -- in particular if deployed in areas that are difficult to access -- are naturally random. Roughly speaking, our contributions are as follows: \begin{enumerate} \setlength{\itemsep}{0mm} \item We show that the power assignment functional has sufficiently nice properties in order to apply Yukich's general framework for Euclidean functionals~\cite{Yukich:ProbEuclidean:1998} to obtain concentration results (Section~\ref{sec:properties}). \item Combining these insights with a recent generalization of the Azuma-Hoeff\-ding bound~\cite{Warnke}, we obtain concentration of measure and complete convergence for all combinations of $d$ and $p \geq 1$, even for the case $p \geq d$ (Section~\ref{sec:convergence}). In addition, we obtain complete convergence for $p \geq d$ for minimum-weight spanning trees. As far as we are aware, complete convergence for $p \geq d$ has not been proved yet for such functionals. The only exception we are aware of are minimum spanning trees for the case $p=d$~\cite[Sect.~6.4]{Yukich:ProbEuclidean:1998}. \item We provide a probabilistic analysis of the MST heuristic for the geometric case. 
We show that its expected approximation ratio is strictly smaller than its worst-case approximation ratio of $2$~\cite{KirousisEA} for any $d$ and $p$ (Section~\ref{sec:MST}). \end{enumerate} Our main technical contributions are two-fold: First, we introduce a transmit power redistribution argument to deal with the unbounded degree that graphs induced by the optimal transmit power assignment can have. The unboundedness of the degree makes the analysis of the power assignment functional $\pa$ challenging. The reason is that removing a vertex can cause the graph to fall into a large number of components and it might be costly to connect these components without the removed vertex. In contrast, the degree of any minimum spanning tree, for which strong concentration results are known in Euclidean space for $p \leq d$, is bounded for every fixed $d$, and this is heavily exploited in the analysis. (The concentration result by de Graaf et al.~\cite{AveragePA} for the power assignment obtained from the MST heuristic also exploits that MSTs have bounded degree.) Second, we apply a recent generalization of Azuma-Hoeffding's inequality by Warnke~\cite{Warnke} to prove complete convergence for the case $p \geq d$ for both power assignments and minimum spanning trees. We introduce the notion of \emph{typically smooth} Euclidean functionals, prove convergence of such functionals, and show that minimum spanning trees and power assignments are typically smooth. In this sense, our proof of complete convergence provides an alternative and generic way to prove complete convergence, whereas Yukich's proof for minimum spanning trees is tailored to the case $p=d$. In order to prove complete convergence with our approach, one only needs to prove convergence in mean, which is often much simpler than complete convergence, and typically smoothness. 
Thus, we provide a simple method to prove complete convergence of Euclidean functionals along the lines of Yukich's result that, in the presence of concentration of measure, convergence in mean implies complete convergence~\cite[Cor.~6.4]{Yukich:ProbEuclidean:1998}. \section{Definitions and Notation} \label{sec:def} Throughout the paper, $d$ (the dimension) and $p$ (the distance-power gradient) are fixed constants. For three points $x, y, v$, we denote by $\overline{xv}$ the line through $x$ and $v$, and we denote by $\angle(x,v,y)$ the angle between $\overline{xv}$ and $\overline{yv}$. A \emph{Euclidean functional} is a function $\functional^p$ for $p > 0$ that maps finite sets of points in $[0,1]^d$ to some non-negative real number and is translation invariant and homogeneous of order $p$~\cite[page 18]{Yukich:ProbEuclidean:1998}. From now on, we omit the superscript $p$ of Euclidean functionals, as $p$ is always fixed and clear from the context. $\pa_B$ is the canonical boundary functional of $\pa$ (we refer to Yukich~\cite{Yukich:ProbEuclidean:1998} for boundary functionals of other optimization problems): given a hyperrectangle $R \subseteq \real^d$ with $X \subseteq R$, this means that a solution is an assignment $\penergy(x)$ of power to the nodes $x \in X$ such that \begin{itemize} \setlength{\itemsep}{0mm} \item $x$ and $y$ are connected if $\penergy(x), \penergy(y) \geq \|x-y\|^p$, \item $x$ is connected to the boundary of $R$ if the distance of $x$ to the boundary of $R$ is at most $\penergy(x)^{1/p}$, and \item the resulting graph, called a \emph{boundary PA graph}, is either connected or consists of connected components that are all connected to the boundary. \end{itemize} Then $\pa_B(X, R)$ is the minimum value for $\sum_{x \in X} \penergy(x)$ that can be achieved by a boundary PA graph. Note that in the boundary functional, no power is assigned to the boundary.
It is straight-forward to see that $\pa$ and $\pa_B$ are Euclidean functionals for all $p> 0$ according to Yukich~\cite[page 18]{Yukich:ProbEuclidean:1998}. For a hyperrectangle $R \subseteq \real^d$, let $\diam R = \max_{x,y \in R} \|x-y\|$ denote the diameter of $R$. For a Euclidean functional $\functional$, let $\functional(n) = \functional(\{U_1,\ldots, U_n\})$, where $U_1, \ldots, U_n$ are drawn uniformly and independently from $[0,1]^d$. Let \[ \gamma_{\functional}^{d,p} = \lim_{n \to \infty} \frac{\expected\bigl(\functional(n)\bigr)}{n^\stdexp}. \] (In principle, $\gamma_{\functional}^{d,p}$ need not exist, but it does exist for all functionals considered in this paper.) A sequence $(R_n)_{n \in \nat}$ of random variables \emph{converges in mean} to a constant $\gamma$ if $\lim_{n \to \infty} \expected(|R_n - \gamma|)= 0$. The sequence $(R_n)_{n \in \nat}$ \emph{converges completely to a constant $\gamma$} if we have \[ \sum_{n=1}^\infty \probab\bigl(|R_n-\gamma| > \eps\bigr) < \infty \] for all $\eps > 0$. Besides $\pa$, we consider two other Euclidean functionals: $\mst(X)$ denotes the length of the minimum spanning tree with lengths raised to the power $p$. $\pt(X)$ denotes the total power consumption of the assignment obtained from the MST heuristic, again with lengths raised to the power $p$. The MST heuristic proceeds as follows: First, we compute a minimum spanning tree of $X$. Then let $\penergy(x) = \max\{\| x-y\|^p \mid \text{$\{x,y\}$ is an edge of the MST}\}$. By construction and a simple analysis, we have $\mst(X) \leq \pa(X) \leq \pt(X) \leq 2 \cdot \mst(X)$~\cite{KirousisEA}. For $n \in \nat$, let $[n] = \{1, \ldots, n\}$.
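To make these functionals concrete, the following self-contained sketch (ours, not from the paper) computes $\mst$, the MST-heuristic value $\pt$, and a brute-force optimum $\pa$ on a tiny random instance and checks the chain $\mst(X) \leq \pa(X) \leq \pt(X) \leq 2\mst(X)$. It assumes Python with only the standard library, and the brute force uses the observation that in an optimum each node's power equals some pairwise $\|u-v\|^p$; all helper names are ours.

```python
import itertools, math, random

def mst_edges(points, p):
    """Prim's algorithm on the complete graph with edge weights ||a-b||^p."""
    n = len(points)
    dist = [[math.dist(a, b) ** p for b in points] for a in points]
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        u, v = min(((u, v) for u in in_tree for v in range(n) if v not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        in_tree.add(v)
        edges.append((u, v))
    return edges, dist

def connected(n, powers, dist):
    """PA-graph connectivity: edge {u, v} iff powers[u], powers[v] >= dist[u][v]."""
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for v in range(n):
            if v not in seen and powers[u] >= dist[u][v] and powers[v] >= dist[u][v]:
                seen.add(v)
                stack.append(v)
    return len(seen) == n

random.seed(1)
p, n = 2, 5
pts = [(random.random(), random.random()) for _ in range(n)]
edges, dist = mst_edges(pts, p)
mst_val = sum(dist[u][v] for u, v in edges)

# MST heuristic: each node's power is its longest incident MST edge (to the power p).
pt_powers = [max(dist[a][b] for a, b in edges if w in (a, b)) for w in range(n)]
pt_val = sum(pt_powers)

# Brute-force optimum over candidate power levels (pairwise distances to the power p).
cands = [[dist[u][v] for v in range(n) if v != u] for u in range(n)]
pa_val = min(sum(pw) for pw in itertools.product(*cands)
             if connected(n, list(pw), dist))

assert mst_val <= pa_val <= pt_val <= 2 * mst_val
```

The brute force is only feasible for a handful of points, but it suffices to illustrate the inequality chain quoted above.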
\section{Properties of the Power Assignment Functional} \label{sec:properties} After showing that optimal PA graphs can have unbounded degree and providing a lemma that helps to solve this problem, we show that the power assignment functional fits into Yukich's framework for Euclidean functionals~\cite{Yukich:ProbEuclidean:1998}. \subsection{Degrees and Cones} \label{ssec:cones} As opposed to minimum spanning trees, whose maximum degree is bounded from above by a constant that depends only on the dimension $d$, a technical challenge is that the maximum degree of an optimal PA graph cannot be bounded by a constant depending only on the dimension. This holds even for the simplest case of $d=1$ and $p > 1$. We conjecture that the same holds also for $p=1$, but proving this seems to be more difficult and would not add much. \begin{lemma} \label{lem:unbounded} For all $p > 1$, all integers $d \geq 1$, and for infinitely many $n$, there exist instances of $n$ points in $[0,1]^d$ such that the unique optimal PA graph is a tree with a maximum degree of $n-1$. \end{lemma} \begin{proof} Let $n$ be odd, and let $2m+1=n$. Consider the instance \[ X_m = \{a_{-m}, a_{-m+1}, \ldots, a_0, \ldots, a_{m-1}, a_m\} \] that consists of $m$ positive integers $a_1, \ldots, a_m$, $m$ negative integers $a_{-i} = -a_{i}$ for $1 \leq i \leq m$, and $a_0 = 0$. We assume that $a_{i+1} \gg a_{i}$ for all $i$. By scaling and shifting, we can achieve that $X_m$ fits into the unit interval. A possible solution $\penergy: X_m \rightarrow \real^{+}$ is assigning power $a_i^p$ to $a_i$ and $a_{-i}$ for $1 \leq i \leq m$ and power $a_m^p$ to $0$. In this way, all points are connected to $0$. We claim that this power assignment is the unique optimum. As $a_m = -a_{-m} \gg |a_{i}|$ for $|i| < m$, the dominant term in the power consumption $\Psi_m$ is $3 a_m^p$ (the power of $a_m$, $a_{-m}$, and $a_0 = 0$). Note that no other term in the total power consumption involves $a_m$.
We show that $a_m$ and $a_{-m}$ must be connected to $0$ in an optimal PA graph. First, assume that $a_m$ and $a_{-m}$ are connected to different vertices. Then the total power consumption increases to about $4 a_m^p$ because $a_{\pm m}$ is very large compared to $a_i$ for all $|i| < m$ (we say that $a_m$ is dominant). Second, assume that $a_m$ and $a_{-m}$ are connected to $a_i$ with $i \neq 0$. Without loss of generality, we assume that $i>0$ and, thus, $a_i > 0$. Then the total power consumption is at least $2 \cdot (a_m + a_i)^p + (a_m - a_i)^p \geq 3a_m^p + 2a_m^{p-1}a_i$. Because $a_m$ is dominant, this is strictly more than $\Psi_m$ because it contains the term $2a_m^{p-1}a_i$, which contains the very large $a_m$ because $p>1$. From now on, we can assume that $0 = a_0$ is connected to $a_{\pm m}$. Assume that there is some point $a_i$ that is connected to some $a_j$ with $i, j \neq 0$. Assume without loss of generality that $i > 0$ and $|i| \geq |j|$. Assume further that $i$ is maximal in the sense that there is no $|k| > i$ such that $a_k$ is connected to some vertex other than $0$. We set $a_i$'s power to $a_i^p$ and $a_j$'s power to $|a_j|^p$. Then both are connected to $0$ as $0$ already has sufficient power to send to both. Furthermore, the PA graph is still connected: All vertices $a_k$ with $|k| > i$ are connected to $0$ by the choice of $i$. If some $a_k$ with $|k| \leq i$ and $k \neq i, j$ was connected to $a_i$ before, then it also has sufficient power to send to $0$. The power balance remains to be considered: If $j = -i$, then the energy of both $a_i$ and $a_j$ has been strictly decreased. Otherwise, $|j| < i$. The power of $a_i$ was at least $(a_i-a_j)^p$ before and is now $a_i^p$. The power of $a_j$ was at least $(a_i-a_j)^p$ before and is now $a_j^p$. Since $a_i$ dominates all $a_j$ for $|j| < i$, this decreases the power. \end{proof} The unboundedness of the degree of PA graphs makes the analysis of the functional $\pa$ challenging.
The technical reason is that removing a vertex can cause the PA graph to fall into a non-constant number of components. The following lemma is the crucial ingredient to get over this ``degree hurdle''. \begin{lemma} \label{lem:conefactor} Let $x, y \in X$, let $v \in [0,1]^d$, and assume that $x$ and $y$ have power $\penergy(x) \geq \|x-v\|^p$ and $\penergy(y) \geq \|y-v\|^p$, respectively. Assume further that $\|x - v\| \leq \|y-v\|$ and that $\angle(x,v,y) \leq \alpha$ with $\alpha \leq \pi/3$. Then the following holds: \begin{enumerate}[(a)] \setlength{\itemsep}{0mm} \item $\penergy(y) \geq \|x-y\|^p$, i.e., $y$ has sufficient power to reach $x$. \label{reachcloser} \item If $x$ and $y$ are not connected (i.e., $\penergy(x)< \|x - y \|^p$), then $\|y - v\| > \frac{\sin(2\alpha)}{\sin(\alpha)} \cdot \|x-v\|$. \label{sqrtthree} \end{enumerate} \end{lemma} \begin{proof} Because $\alpha \leq \pi/3$, we have $\|y-v\| \geq \|y-x\|$. This implies~\eqref{reachcloser}. The point $x$ has sufficient power to reach any point within a radius of $\|x-v\|$ of itself. By~\eqref{reachcloser}, point $y$ has sufficient power to send to $x$. Thus, if $y$ is within a distance of $\|x-v\|$ of $x$, then also $x$ can send to $y$ and, thus, $x$ and $y$ are connected. We project $x$, $y$, and $v$ into the two-dimensional subspace spanned by the vectors $x-v$ and $y-v$. This yields a situation as depicted in Figure~\ref{fig:conefactor}. Since $\penergy(x) \geq \|x-v\|^p$, point $x$ can send to all points in the light-gray region, thus in particular to all dark-gray points in the cone rooted at $v$. In particular, $x$ can send to all points that are no further away from $v$ than the point $z$. The triangle $vxz$ is isosceles. Thus, also the angle at $z$ is $\alpha$ and the angle at $x$ is $\beta = \pi - 2\alpha$. 
Using the law of sines together with $\sin(\beta) = \sin(2\alpha)$ yields that $\|z-v\| = \frac{\sin(2\alpha)}{\sin(\alpha)} \cdot \|x-v\|$, which completes the proof of \eqref{sqrtthree}. \end{proof} \begin{figure}[t] \centering \begin{tikzpicture}[scale=2] \tikzstyle{Thickness}=[line width=0.8pt] \coordinate (v) at (0,0); \coordinate (x) at (0:1); \path[name path=xcircle] (x) circle (1cm); \fill[white!85!black] (x) circle (1cm); \path[name path=rightborder] (-\anglealpha:0.1) -- (-\anglealpha:2); \path[name path=leftborder] (\anglealpha:0.1) -- (\anglealpha:2); \path[-, name intersections={of=rightborder and xcircle, by=rightfarthest}]; \path[-, name intersections={of=leftborder and xcircle, by=leftfarthest}]; \fill[white!70!black] (v) -- (rightfarthest) arc (-2*\anglealpha:2*\anglealpha:1) -- cycle; \draw[dotted, Thickness] (rightfarthest) arc (-\anglealpha:\anglealpha:{sin(2*\anglealpha)/sin(\anglealpha)}); \draw[Thickness] (-\anglealpha:2) -- (v) -- (\anglealpha:2); \draw[Thickness] (v) -- (0:2.1); \draw[Thickness] (-\anglealpha:0.4) arc (-\anglealpha:\anglealpha:0.4); \draw[Thickness] ($(rightfarthest)+(180-2*\anglealpha:0.4)$) arc (180-2*\anglealpha:180-\anglealpha:0.4); \draw[Thickness] ($(x)+(180:0.28)$) arc (180:360-2*\anglealpha:0.28); \node at (-0.5*\anglealpha:0.3) {$\alpha$}; \node at (0.5*\anglealpha:0.3) {$\alpha$}; \node at ($(rightfarthest)+(180-1.5*\anglealpha:0.3)$) {$\alpha$}; \node at ($(x)+(270-\anglealpha:0.16)$) {$\beta$}; \draw[Thickness, dashed] (leftfarthest) -- (x) -- (rightfarthest); \tikzstyle{Node}=[circle, fill=black, inner sep=0pt, minimum size=2mm] \node[Node] (nv) at (v) {}; \node[left] at (nv) {$v$}; \node[Node] (nx) at (x) {}; \node[above] at (nx) {$x$}; \node[Node] (nz) at (rightfarthest) {}; \node[below] at (rightfarthest) {$z$}; \end{tikzpicture} \caption{Point $x$ can send to all points in the gray area as it can send to $v$. In particular, $x$ can send to all points that are no further away from $v$ than $z$. 
This includes all points to the left of the dotted line. The dotted line consists of points at a distance of $\frac{\sin(2\alpha)}{\sin(\alpha)} \cdot \|x-v\|$ of $v$.} \label{fig:conefactor} \end{figure} For instance, $\alpha = \pi/6$ results in a factor of $\sqrt{3} = \sin(\pi/3)/\sin(\pi/6)$. In the following, we invoke this lemma always with $\alpha = \pi/6$, but this choice is arbitrary as long as $\alpha < \pi/3$, which causes $\sin(2\alpha)/\sin(\alpha)$ to be strictly larger than $1$. \subsection{Deterministic Properties} \label{ssec:deterministic} In this section, we state properties of the power assignment functional. Subadditivity (Lem\-ma~\ref{lem:subadd}), superadditivity (Lemma~\ref{lem:supadd}), and growth bound (Lem\-ma~\ref{lem:growth}) are straightforward. \begin{lemma}[subadditivity] \label{lem:subadd} $\pa$ is subadditive~\cite[(2.2)]{Yukich:ProbEuclidean:1998} for all $p>0$ and all $d \geq 1$, i.e., for any point sets $X$ and $Y$ and any hyperrectangle $R\subseteq \real^d$ with $X, Y \subseteq R$, we have \[ \pa(X \cup Y) \leq \pa(X) + \pa(Y) + O\bigl((\diam R)^p\bigr). \] \end{lemma} \begin{proof} Let $T_X$ and $T_Y$ be optimal PA graphs for $X$ and $Y$, respectively. We connect these graphs by an edge of length at most $\diam R$. This yields a solution for $X \cup Y$, i.e., a PA graph, and the additional costs are bounded from above by the length of this edge to the power $p$, which is bounded by $(\diam R)^p$. \end{proof} \begin{lemma}[superadditivity] \label{lem:supadd} $\pa_B$ is superadditive for all $p \geq 1$ and $d \geq 1$~\cite[(3.3)]{Yukich:ProbEuclidean:1998}, i.e., for any $X$, hyperrectangle $R\subseteq \real^d$ with $X \subseteq R$ and partition of $R$ into hyperrectangles $R_1$ and $R_2$, we have \[ \pa_B^p(X, R) \geq \pa^p_B(X \cap R_1, R_1) + \pa^p_B(X \cap R_2, R_2) . \] \end{lemma} \begin{proof} Let $T$ be an optimal boundary PA graph for $(X, R)$. 
This graph restricted to $R_1$ and $R_2$ yields boundary graphs $T_1$ and $T_2$ for $(X \cap R_1, R_1)$ and $(X \cap R_2, R_2)$, respectively. The sum of the costs of $T_1$ and $T_2$ is upper bounded by the costs of $T$ because $p \geq 1$ (splitting an edge at the border between $R_1$ and $R_2$ results in two edges whose sum of lengths to the power $p$ is at most the length of the original edge to the power $p$). \end{proof} \begin{lemma}[growth bound] \label{lem:growth} For any $X \subseteq [0,1]^d$ and $0 < p$ and $d \geq 1$, we have \[ \pa_B(X) \leq \pa(X) \leq O\left(\max\left\{n^{\stdexp}, 1\right\}\right). \] \end{lemma} \begin{proof} This follows from the growth bound for the MST~\cite[(3.7)]{Yukich:ProbEuclidean:1998}, because $\mst(X) \leq \pa(X) \leq 2\mst(X)$ for all $X$~\cite{KirousisEA}. The inequality $\pa_B(X) \leq \pa(X)$ holds obviously. \end{proof} The following lemma shows that $\pa$ is smooth, which roughly means that adding or removing a few points does not have a huge impact on the function value. Its proof requires Lemma~\ref{lem:conefactor} to deal with the fact that optimal PA graphs can have unbounded degree. \begin{lemma} \label{lem:smooth} The power assignment functional $\pa$ is smooth for all $0 < p \leq d$~\cite[(3.8)]{Yukich:ProbEuclidean:1998}, i.e., \[ \bigl| \pa^p(X \cup Y) - \pa^p(X)\bigr| = O\left(|Y|^\stdexp\right) \] for all point sets $X, Y \subseteq [0,1]^d$. \end{lemma} \begin{proof} One direction is straightforward: $\pa(X \cup Y) - \pa(X)$ is bounded by $\Psi = O\bigl(|Y|^\stdexp\bigr)$, because the optimal PA graph for $Y$ has a value of at most $\Psi$ by Lemma~\ref{lem:growth}. Then we can take the PA graph for $Y$ and connect it to the tree for $X$ with a single edge, which costs at most $O(1) \leq \Psi$ because $p \leq d$. For the other direction, consider the optimal PA graph $T$ for $X \cup Y$. The problem is that the degrees $\deg_T(v)$ of vertices $v \in Y$ can be unbounded (Lemma~\ref{lem:unbounded}). 
(If the maximum degree were bounded, then we could argue in the same way as for the MST functional.) The idea is to exploit the fact that removing $v \in Y$ also frees some power. Roughly speaking, we proceed as follows: Let $v \in Y$ be a vertex of possibly large degree. We add the power of $v$ to some vertices close to $v$. The graph obtained from removing $v$ and distributing its energy has only a constant number of components. To prove this, Lemma~\ref{lem:conefactor} is crucial. We consider cones rooted at $v$ with the following properties: \begin{itemize} \setlength{\itemsep}{0mm} \item The cones have a small angle $\alpha$, meaning that for every cone $C$ and every $x, y \in C$, we have $\angle(x,v,y) \leq \alpha$. We choose $\alpha = \pi/6$. \item Every point in $[0,1]^d$ is covered by some cone. \item There is a finite number of cones. (This can be achieved because $d$ is a constant.) \end{itemize} Let $C_1, \ldots, C_m$ be these cones. By abusing notation, let $C_i$ also denote all points $x \in C_i \cap (X \cup Y \setminus \{v\})$ that are adjacent to $v$ in $T$. For $C_i$, let $x_i$ be the point in $C_i$ that is closest to $v$ and adjacent to $v$ (breaking ties arbitrarily), and let $y_i$ be the point in $C_i$ that is farthest from $v$ and adjacent to $v$ (again breaking ties arbitrarily). (For completeness, we remark that then $C_i$ can be ignored if $C_i \cap X = \emptyset$.) Let $\ell_i = \|y_i-v\|$ be the maximum distance of any point in $C_i$ to $v$, and let $\ell = \max_i \ell_i$. We increase the power of $x_i$ by $\ell^p/m$. Since the power of $v$ is at least $\ell^p$ and we have $m$ cones, we can account for this with $v$'s power because we remove $v$. Because $\alpha = \pi/6$ and $x_i$ is closest to $v$, any point in $C_i$ is closer to $x_i$ than to $v$. According to Lemma~\ref{lem:conefactor}\eqref{reachcloser}, every point in $C_i$ has sufficient power to reach $x_i$. 
Thus, if $x_i$ can reach a point $z \in C_i$, then there is an established connection between them. Combining this with the increase of $x_i$'s power to at least $\ell^p/m$, there is an edge between $x_i$ and every point $z \in C_i$ that has a distance of at most $\ell/\sqrt[p]m$ from $v$. We recall that $m$ and $p$ are constants. Now let $z_1, \ldots, z_k \in C_i$ be the vertices in $C_i$ that are not connected to $x_i$ because $x_i$ has too little power. We assume that they are sorted by increasing distance from $v$. Thus, $z_k = y_i$. We can assume that no two $z_j$ and $z_{j'}$ are in the same component after removal of $v$. Otherwise, we can simply ignore one of the edges $\{v, z_j\}$ and $\{v, z_{j'}\}$ without changing the components. Since $z_j$ and $z_{j+1}$ were connected to $v$ and they are not connected to each other, we can apply Lemma~\ref{lem:conefactor}\eqref{sqrtthree}, which implies that $\|z_{j+1} - v\| \geq \sqrt 3 \cdot \|z_j - v\|$. Furthermore, $\|z_1-v\| \geq \ell/\sqrt[p]m$ by assumption. Iterating this argument yields $\ell = \|z_k - v\| \geq \sqrt{3}^{k-1} \|z_1 -v\| \geq \sqrt{3}^{k-1} \cdot \ell/\sqrt[p]m$. This implies $k \leq \log_{\sqrt 3}(\sqrt[p]m) +1$. Thus, removing $v$ and redistributing its energy as described causes the PA graph to fall into at most a constant number of components. Removing $|Y|$ points causes the PA graph to fall into at most $O(|Y|)$ components. These components can be connected with costs $O(|Y|^{\stdexp})$ by choosing one point per component and applying Lemma~\ref{lem:growth}. \end{proof} \begin{lemma} \label{lem:smoothboundary} $\pa_B$ is smooth for all $1 \leq p \leq d$~\cite[(3.8)]{Yukich:ProbEuclidean:1998}. \end{lemma} \begin{proof} The idea is essentially identical to the proof of Lemma~\ref{lem:smooth}, and we use the same notation. Again, one direction is easy. For the other direction, note that every vertex of $G=(X,E)$, with $E$ induced by $\penergy$, is connected to at most one point at the boundary.
We use the same kind of cones as for Lemma~\ref{lem:smooth}. Let $v \in G$ be a vertex that we want to remove. We ignore $v$'s possible connection to the boundary and proceed with the remaining connections. In this way, we obtain a forest with $O(|G|)$ components. We compute a boundary PA graph for one vertex of each component and are done because of Lemma~\ref{lem:growth} and in the same way as in the proof of Lemma~\ref{lem:smooth}. \end{proof} Crucial for convergence of $\pa$ is that $\pa$, which is subadditive, and $\pa_B$, which is superadditive, are close to each other. Then both are close to being both subadditive and superadditive. The following lemma states that indeed $\pa$ and $\pa_B$ do not differ too much for $1 \leq p < d$. \begin{lemma} \label{lem:pclose} $\pa$ is point-wise close to $\pa_B$ for $1 \leq p < d$~\cite[(3.10)]{Yukich:ProbEuclidean:1998}, i.e., \[ \bigl|\pa^p(X) - \pa^p_B(X, [0,1]^d)\bigr| = o\bigl(n^\stdexp\bigr) \] for every set $X \subseteq [0,1]^d$ of $n$ points. \end{lemma} \begin{proof} Let $T$ be an optimal boundary PA graph for $X$. Let $Q\subseteq X$ be the set of points that have a connection to the boundary in $T$, and let $\partial Q$ be the corresponding points on the boundary. If we remove the connections to the boundary, we obtain a graph $T'$. We can assume that $Q$ contains exactly one point per connected component of the graph $T'$. We use the same dyadic decomposition as Yukich~\cite[proof of Lemma 3.8]{Yukich:ProbEuclidean:1998}. This yields that the sum of transmit powers used to connect to the boundary is bounded by the maximum of $O(n^{\frac{d-p-1}{d-1}})$ and $O(\log n)$ for $p \leq d-1$ and by a constant for $p \in (d-1, d)$. We omit the proof as it is basically identical to Yukich's proof. We compute a minimum-weight spanning tree $Z$ of $\partial Q$.
(Note that we indeed compute an MST and not a PA. This is because the MST has bounded degree and $\pa$ and $\mst$ differ by at most a factor of $2$.) This MST $Z$ has a weight of \[ O\left(\max\left\{n^{\frac{d-1-p}{d-1}}, 1\right\}\right) = o\left(n^{\stdexp}\right) \] according to the growth bound for $\mst$~\cite[(3.7)]{Yukich:ProbEuclidean:1998} and because $d > p$. If two points $\tilde q, \tilde q' \in \partial Q$ are connected in this tree, then we connect the corresponding points $q, q' \in Q$. The question that remains is by how much the power of the vertices in $Q$ has to be increased in order to allow the connections described above. If $q, q' \in Q$ are connected, then an upper bound for their power is given by the $p$-th power of the sum of their distances to the boundary points $\tilde q$ and $\tilde q'$ and the length of the edge connecting $\tilde q$ and $\tilde q'$. Applying the triangle inequality for powers of metrics twice, the energy needed for connecting $q$ and $q'$ is at most $4^p=O(1)$ times the sum of the $p$-th powers of these distances. Since the degree of $Z$ is bounded, every vertex in $Q$ contributes to only a constant number of edges and, thus, only to the power consumption of a constant number of other vertices. Thus, the total additional power needed is bounded by a constant times the power of connecting $Q$ to the boundary plus the power to use $Z$ as a PA graph. Because of the triangle inequality for powers of metrics, the bounded degree of every vertex of $\partial Q$ in $Z$, and because of the dyadic decomposition mentioned above, the increase of power is in compliance with the statement of the lemma. \end{proof} \begin{remark} Lemma~\ref{lem:pclose} is an analogue of its counterpart for MST, TSP, and matching~\cite[Lemma 3.7]{Yukich:ProbEuclidean:1998} in terms of the bounds.
Namely, we obtain \[ \bigl|\pa(X) - \pa_B(X)\bigr| \leq \begin{cases} O(|X|^{\frac{d-p-1}{d-1}}) & \text{if $1 \leq p < d-1$},\\ O(\log|X|) & \text{if $p=d-1 \neq 1$},\\ O(1) & \text{if $d-1<p<d$ or $p=d-1=1$.} \end{cases} \] \end{remark} \subsection{Probabilistic Properties} \label{ssec:probabilistic} For $p > d$, smoothness is not guaranteed to hold, and for $p \geq d$, point-wise closeness is not guaranteed to hold. But similar properties typically hold for random point sets, namely smoothness in mean (Definition~\ref{def:smoothinmean}) and closeness in mean (Definition~\ref{def:closeinmean}). In the following, let $X=\{U_1, \ldots, U_n\}$. Recall that $U_1, \ldots, U_n$ are drawn uniformly and independently from $[0,1]^d$. Before proving smoothness in mean, we need a statement about the longest edge in an optimal PA graph and boundary PA graph. The bound is asymptotically equal to the bound for the longest edge in an MST~\cite{Penrose:LongestMST:1997,KozmaEA:ConnectivityThreshold:2010,GuptaKumar:CriticalPower:1999}. To prove our bound for the longest edge in optimal PA graphs (Lemma~\ref{lem:longest}), we need the following two lemmas. Lemma~\ref{lem:emptyball} is essentially equivalent to a result by Kozma et al.~\cite{KozmaEA:ConnectivityThreshold:2010}, but they do not state the probability explicitly. Lemma~\ref{lem:iterclose} is a straightforward consequence of Lemma~\ref{lem:emptyball}. Variants of both lemmas are known~\cite{Steele:ProbabilisticClassical:1990,Penrose:StrongMST:1999,Penrose:LongestMST:1997, GuptaKumar:CriticalPower:1999}, but, for completeness, we state and prove both lemmas in the forms that we need. \begin{lemma} \label{lem:emptyball} For every $\beta > 0$, there exists a $\cball = \cball(\beta, d)$ such that, with a probability of at least $1-n^{-\beta}$, every hyperball of radius $\rball = \cball \cdot (\log n/n)^{1/d}$ and with center in $[0,1]^d$ contains at least one point of $X$ in its interior.
\end{lemma} \begin{proof} We sketch the simple proof. We cover $[0,1]^d$ with hypercubes of side length $\Omega(\rball)$ such that every ball of radius $\rball$ -- even if its center is in a corner (for a point on the boundary, still at least a $2^{-d} = \Theta(1)$ fraction of the ball is within $[0,1]^d$) -- contains at least one box. The probability that such a box does not contain a point, which is necessary for a ball to be empty, is at most $\bigl(1-\Omega(\rball)^d\bigr)^n \leq n^{-\Omega(1)}$ by independence of the points in $X$ and the definition of $\rball$. The rest of the proof follows by a union bound over all $O(n/\log n)$ boxes. \end{proof} We also need the following lemma, which essentially states that if $z$ and $z'$ are sufficiently far apart, then there is -- with high probability -- always a point $y$ between $z$ and $z'$ in the following sense: the distance of $y$ to $z$ is within a predefined upper bound $2\rball$, and $y$ is closer to $z'$ than $z$ is. \begin{lemma} \label{lem:iterclose} For every $\beta > 0$, with a probability of at least $1- n^{-\beta}$, the following holds: For every choice of $z, z' \in [0,1]^d$ with $\|z-z'\| \geq 2\rball$, there exists a point $y \in X$ with the following properties: \begin{itemize} \item $\|z-y\| \leq 2\rball$. \item $\|z'-y\| < \|z' - z\|$. \end{itemize} \end{lemma} \begin{proof} The set of candidates for $y$ contains a ball of radius $\rball$, namely a ball of this radius whose center is at a distance of $\rball$ from $z$ on the line between $z$ and $z'$. This allows us to use Lemma~\ref{lem:emptyball}. \end{proof} \begin{lemma}[longest edge] \label{lem:longest} For every constant $\beta > 0$, there exists a constant $\cedge=\cedge(\beta)$ such that, with a probability of at least $1 - n^{-\beta}$, every edge of an optimal PA graph and of an optimal boundary PA graph is of length at most $\redge = \cedge \cdot (\log n/n)^{1/d}$. \end{lemma} \begin{proof} We restrict ourselves to considering PA graphs.
The proof for boundary PA graphs is almost identical. Let $T$ be any PA graph. Let $\cedge = 4 k^{1/p} \cball/(1-\sqrt{3}^{-p})^{1/p}$, where $k$ is an upper bound for the number of vertices without a pairwise connection at a distance between $r$ and $r/\sqrt{3}$ for arbitrary $r$. It follows from Lemma~\ref{lem:conefactor} and its proof that $k$ is a constant that depends only on $p$ and $d$. (The constant $\cball$ depends on the exponent $\beta$ of the lemma.) Note that $\cedge > 2 \cball$. We are going to show that if $T$ contains an edge that is longer than $\redge$, then we can find a better PA graph with a probability of at least $1 - n^{-\beta}$, which shows that $T$ is not optimal. Let $v$ be a vertex incident to the longest edge of $T$, and let $\rbig > \redge$ be the length of this longest edge. (The longest edge is unique with a probability of $1$. The node $v$ is not unique as the longest edge connects two points.) We decrease the power of $v$ to $\rbig/\sqrt 3$. This implies that $v$ loses its connection to some points -- otherwise, the power assignment was clearly not optimal. Let $x_1, \ldots, x_{k'}$ with $k' \leq k$ be the points that were connected to $v$ but are in different connected components than $v$ after decreasing $v$'s power. The bound $k' \leq k$ holds because the only nodes that might lose their connection to $v$ are within a distance between $\rbig/\sqrt 3$ and $\rbig$, and there are at most $k$ such nodes without a pairwise connection. Consider $x_1$. Let $z_0 = v$. According to Lemma~\ref{lem:iterclose}, there is a point $z_1$ that is closer to $x_1$ and at most $2 \rball$ away from $v$. Iteratively for $i = 1, 2, \ldots$, we distinguish three cases until this process stops: \begin{enumerate}[(i)] \setlength{\itemsep}{0mm} \item $z_i$ belongs to the same component as $x_j$ for some $j$ ($z_i$ is closer to $x_1$ than $z_{i-1}$, but this does not imply $j=1$). We increase $z_i$'s power such that $z_i$ is able to send to $z_{i-1}$.
If $i>1$, then we also increase $z_{i-1}$'s power accordingly. \label{firstcase} \item $z_i$ belongs to the same component as $v$. Then we can apply Lemma~\ref{lem:iterclose} to $z_i$ and $x_1$ and find a point $z_{i+1}$ that is closer to $x_1$ than $z_i$ and within a distance of at most $2\rball$ of $z_i$. \item $z_i$ is within a distance of at most $2\rball$ of some $x_j$. In this case, we increase the energy of $z_i$ such that $z_i$ and $x_j$ are connected. (The energy of $x_j$ is sufficiently large anyhow.) \end{enumerate} Running this process once decreases the number of connected components by one and costs at most $2(2\rball)^p = 2^{p+1} \rball^p$ additional power. We run this process $k' \leq k$ times, thus spending at most $k2^{p+1} \rball^p$ of additional power. In this way, we obtain a valid PA graph. We have to show that the new PA graph indeed saves power. To do this, we consider the power saved by decreasing $v$'s energy. By decreasing $v$'s power, we save an amount of $\rbig^p - (\rbig/\sqrt 3)^p > (1-\sqrt{3}^{-p}) \cdot \redge^p$. By the choice of $\cedge$, the saved amount of energy exceeds the additional amount of $k2^{p+1}\rball^p$. This contradicts the optimality of the PA graph with the edge of length $\rbig > \redge$. \end{proof} \begin{remark} Since the longest edge has a length of at most $\redge$ with high probability, i.e., with a probability of $1-n^{-\Omega(1)}$, and any ball of radius $\redge$ contains $O(\log n)$ points with high probability due to Chernoff's bound~\cite[Chapter 4]{MitzenmacherUpfal:ProbComp:2005}, the maximum degree of an optimum PA graph of a random point set is $O(\log n)$ with high probability -- contrasting Lemma~\ref{lem:unbounded}. \end{remark} Yukich gave two different notions of smoothness in mean~\cite[(4.13) and (4.20) \& (4.21)]{Yukich:ProbEuclidean:1998}. We use the stronger notion, which implies the other.
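The Chernoff-type occupancy estimate invoked in the remark above is easy to check empirically. The following is a minimal numerical sketch, not part of the formal argument; the dimension $d = 2$, the sample size, the radius constant $1$, the fixed seed, and the slack factor $20$ in the final assertion are all ad-hoc choices for illustration.

```python
import math
import random

random.seed(2)
d, n = 2, 1000
# Radius at the scale of the longest-edge bound r_edge = c * (log n / n)^(1/d), with c = 1.
r = (math.log(n) / n) ** (1 / d)
pts = [(random.random(), random.random()) for _ in range(n)]

# Maximum occupancy over all balls of radius r centered at the sample points.
max_in_ball = max(sum(1 for q in pts if math.dist(ctr, q) <= r) for ctr in pts)

# One such ball contains about pi * log n points in expectation (~ 22 here);
# by Chernoff's bound, even the maximum occupancy stays within O(log n).
assert 1 <= max_in_ball <= 20 * math.log(n)
```

Since every vertex's PA neighbors along edges of length at most $\redge$ lie in such a ball, the observed maximum occupancy bounds the degree that the remark refers to.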
\begin{definition}[\mbox{smooth in mean~\cite[(4.20), (4.21)]{Yukich:ProbEuclidean:1998}}] \label{def:smoothinmean} A Euclidean functional $\functional$ is called \emph{smooth in mean} if, for every constant $\beta > 0$, there exists a constant $c = c(\beta)$ such that the following holds with a probability of at least $1-n^{-\beta}$: \[ \bigl|\functional(n) - \functional(n \pm k)\bigr| \leq c k \cdot \bigl(\frac{\log n}{n}\bigr)^{p/d} \] and \[ \bigl|\functional_B(n) - \functional_B(n \pm k)\bigr| \leq c k \cdot \bigl(\frac{\log n}{n}\bigr)^{p/d} \] for all $0 \leq k \leq n/2$. \end{definition} \begin{lemma} \label{lem:smoothinmean} $\pa_B$ and $\pa$ are smooth in mean for all $p > 0$ and all $d$. \end{lemma} \begin{proof} The bound $\pa(n+k) \leq \pa(n) + O\bigl(k \cdot \bigl(\frac{\log n}{n}\bigr)^{\frac pd}\bigr)$ follows from the fact that for each of the $k$ additional vertices, with a probability of at least $1-n^{-\beta}$ for any $\beta > 0$, there is a vertex among the first $n$ within a distance of at most $O\bigl((\log n/n)^{1/d}\bigr)$ according to Lemma~\ref{lem:emptyball} ($\beta$ influences the constant hidden in the $O$). Thus, we can connect any of the $k$ new vertices with costs of $O\bigl((\log n/n)^{p/d}\bigr)$ to the optimal PA graph for the $n$ nodes. Let us now show the reverse inequality $\pa(n) \leq \pa(n+k) + O\bigl(k \cdot \bigl(\frac{\log n}{n}\bigr)^{\frac pd}\bigr)$. To do this, we show that with a probability of at least $1 - n^{-\beta}$, we have \begin{equation} \pa(n) \leq \pa(n+1) + O\left(\left(\frac{\log n}{n}\right)^{\frac pd}\right) . \label{equ:smoothiter} \end{equation} Then we iterate $k$ times to obtain the bound we aim for. The proof of \eqref{equ:smoothiter} is similar to the analogous inequality in Yukich's proof~\cite[Lemma 4.8]{Yukich:ProbEuclidean:1998}. The only difference is that we first have to redistribute the power of the point $U_{n+1}$ to its closest neighbors as in the proof of Lemma~\ref{lem:smooth}.
In this way, removing $U_{n+1}$ results in a constant number of connected components. The longest edge incident to $U_{n+1}$ has a length of $O\bigl((\log n/n)^{1/d}\bigr)$ with a probability of at least $1-n^{-\beta}$ for any constant $\beta > 0$. Thus, we can connect this constant number of components with extra power of at most $O\bigl((\log n/n)^{p/d}\bigr)$. The proof of \[ \left|\pa(n) - \pa(n - k)\right| = O\left(k \cdot \left(\frac{\log n}{n}\right)^{\frac pd}\right) \] and the statement \[ \left|\pa_B(n) - \pa_B(n \pm k)\right| = O\left(k \cdot \left(\frac{\log n}{n}\right)^{\frac pd}\right) \] for the boundary functional are almost identical. \end{proof} \begin{definition}[\mbox{close in mean~\cite[(4.11)]{Yukich:ProbEuclidean:1998}}] \label{def:closeinmean} A Euclidean functional $\functional$ is close in mean to its boundary functional $\functional_B$ if \[ \expected\left(\left|\functional(n) - \functional_B(n)\right|\right) = o\left(n^{\stdexp}\right). \] \end{definition} \begin{lemma} \label{lem:closemean} $\pa$ is close in mean to $\pa_B$ for all $d$ and $p \geq 1$. \end{lemma} \begin{proof} It is clear that $\pa_B(X) \leq \pa(X)$ for all $X$. Thus, in what follows, we prove that $\pa(X) \leq \pa_B(X) + o\bigl(n^{\stdexp}\bigr)$ holds with a probability of at least $1 - n^{-\beta}$, where $\beta$ influences the constant hidden in the $o$. This implies closeness in mean. With a probability of at least $1-n^{-\beta}$, the longest edge in the graph that realizes $\pa_B(X)$ has a length of at most $\cedge \cdot (\log n/n)^{1/d}$ (Lemma~\ref{lem:longest}). Thus, with a probability of at least $1 - n^{-\beta}$ for any constant $\beta > 0$, only vertices within a distance of at most $\cedge \cdot (\log n/n)^{1/d}$ of the boundary are connected to the boundary.
As the $d$-dimensional unit cube is bounded by $2d$ hyperplanes, the expected number of vertices that are so close to the boundary is bounded from above by $2d \cedge n \cdot (\log n/n)^{1/d} = O\bigl((\log n)^{1/d} n^{\frac{d-1}d}\bigr)$. With a probability of at least $1 - n^{-\beta}$ for any $\beta > 0$, this number is exceeded by no more than a constant factor. Removing these vertices causes the boundary PA graph to fall into at most $O\bigl((\log n)^{1/d} n^{\frac{d-1}d}\bigr)$ components. We choose one vertex of every component and start the process described in the proof of Lemma~\ref{lem:longest} to connect all of them. The cost per connection is bounded from above by $O\bigl((\log n/n)^{p/d}\bigr)$ with a probability of $1 - n^{-\beta}$ for any constant $\beta > 0$. Thus, the total costs are bounded from above by \[ O\bigl((\log n/n)^{p/d}\bigr) \cdot O\bigl((\log n)^{1/d} n^{\frac{d-1}d}\bigr) = O\left((\log n)^{\frac{p+1}d} \cdot n^{\frac{d-1-p}d}\right) = o\bigl(n^\stdexp\bigr) \] with a probability of at least $1 - n^{-\beta}$ for any constant $\beta > 0$. \end{proof} \section{Convergence} \label{sec:convergence} \subsection{Standard Convergence} \label{ssec:standard} Our findings of Section~\ref{ssec:deterministic} yield complete convergence of $\pa$ for $p<d$ (Theorem~\ref{thm:stdcc}). Together with the probabilistic properties of Section~\ref{ssec:probabilistic}, we obtain convergence in mean in a straightforward way for all combinations of $d$ and $p$ (Theorem~\ref{thm:convmean}). In Sections~\ref{ssec:warnke} and~\ref{ssec:cc}, we prove complete convergence for $p \geq d$. \begin{theorem} \label{thm:stdcc} For all $d$ and $p$ with $1 \leq p < d$, there exists a constant $\gamma_{\pa}^{d,p}$ such that \[ \frac{\pa^p(n)}{n^{\stdexp}} \] converges completely to $\gamma_{\pa}^{d,p}$.
\end{theorem} \begin{proof} This follows from the results in Section~\ref{ssec:deterministic} together with results by Yukich~\cite[Theorem 4.1, Corollary 6.4]{Yukich:ProbEuclidean:1998}. \end{proof} \begin{theorem} \label{thm:convmean} For all $p \geq 1$ and $d \geq 1$, there exists a constant $\gamma_{\pa}^{d,p}$ (equal to the constant of Theorem~\ref{thm:stdcc} for $p<d$) such that \[ \lim_{n \to \infty} \frac{\expected\bigl(\pa^p(n)\bigr)}{n^{\stdexp}} = \lim_{n \to \infty} \frac{\expected\bigl(\pa^p_B(n)\bigr)}{n^{\stdexp}} = \gamma_{\pa}^{d,p}. \] \end{theorem} \begin{proof} This follows from the results in Sections~\ref{ssec:deterministic} and~\ref{ssec:probabilistic} together with results by Yukich~\cite[Theorem~4.5]{Yukich:ProbEuclidean:1998}. \end{proof} \subsection{Concentration with Warnke's Inequality} \label{ssec:warnke} McDiarmid's and the Azuma-Hoeffding inequality are powerful tools to prove concentration of measure for a function that depends on many independent random variables, all of which have only a bounded influence on the function value. If we consider smoothness in mean (see Lemma~\ref{lem:smoothinmean}), then we have the situation that the influence of a single variable is typically very small (namely $O((\log n/n)^{p/d})$), but can be quite large in the worst case (namely $O(1)$). Unfortunately, this situation is not covered by McDiarmid's or the Azuma-Hoeffding inequality. Fortunately, Warnke~\cite{Warnke} proved a generalization specifically for the case that the influence of single variables is typically bounded and fulfills only a weaker bound in the worst case. The following theorem is a simplified version (personal communication with Lutz Warnke) of Warnke's concentration inequality~\cite[Theorem 2]{Warnke}, tailored to our needs. \begin{theorem}[Warnke] \label{thm:TL} Let $U_1, \ldots, U_n$ be a family of independent random variables with $U_i \in [0,1]^d$ for each $i$.
Suppose that there are numbers $\cgood \leq \cbad$ and an event $\Gamma$ such that the function $\functional:([0,1]^d)^n \to \real$ satisfies \begin{multline} \max_{i \in [n]}\max_{x \in [0,1]^d}\left|\functional(U_1, \ldots, U_n)-\functional(U_1, \ldots, U_{i-1}, x, U_{i+1}, \ldots, U_n)\right| \\ \leq \begin{cases} \cgood & \text{if $\Gamma$ holds and}\\ \cbad & \text{otherwise.} \end{cases}\label{eq:TL} \end{multline} Then, for any $t \geq 0$ and $\gamma \in (0,1]$ and $\eta = \gamma(\cbad-\cgood)$, we have \begin{equation}\label{eq:PTL} \textstyle \probab\bigl(|\functional(n) - \expected(\functional(n))| \geq t\bigr) \le 2\exp\bigl(-\frac{t^2}{2n (\cgood+\eta)^2}\bigr) + \frac{n}{\gamma} \cdot \probab(\neg \Gamma) . \end{equation} \end{theorem} \begin{proof}[Proof sketch] There are two differences between this simplified variant and Warnke's result~\cite[Theorem 2]{Warnke}: First, the numbers $\cgood$ and $\cbad$ do not depend on the index $i$ but are chosen uniformly for all indices. Second, and more importantly, the event $\mathcal B$~\cite[Theorem 2]{Warnke} is not used in Theorem~\ref{thm:TL}. In Warnke's theorem~\cite[Theorem 2]{Warnke}, the event $\mathcal B$ plays only a bridging role: it is required that $\probab(\mathcal B) \leq \sum_{i =1}^n \frac 1{\gamma_i} \cdot \probab(\lnot \Gamma)$ for some $\gamma_1, \ldots, \gamma_n$ that show up in the tail bound as well. Choosing $\gamma_i = \gamma$ for all $i$ yields $\probab(\mathcal B) \leq \frac n{\gamma} \cdot \probab(\lnot \Gamma)$.
Then \[ \probab\bigl(\functional(n) \geq \expected(\functional(n)) + t \text{ and } \lnot \mathcal B\bigr) \le \exp\left(-\frac{t^2}{2n (\cgood+\eta)^2}\right) \] yields \[ \probab\bigl(|\functional(n) - \expected(\functional(n))| \geq t\bigr) \le 2\exp\left(-\frac{t^2}{2n (\cgood+\eta)^2}\right) + \frac{n}{\gamma} \cdot \probab(\neg \Gamma) \] by observing that a two-sided tail bound can be obtained by symmetry and adding an upper bound for the probability of $\mathcal B$ to the right-hand side. \end{proof} Next, we define \emph{typical smoothness}, which means that, with high probability, a single point does not have a significant influence on the value of~$\functional$, and we apply Theorem~\ref{thm:TL} to typically smooth functionals $\functional$. The bound of $c \cdot (\log n/n)^{p/d}$ in Definition~\ref{def:typsmooth} below for the typical influence of a single point is somewhat arbitrary, but works for $\pa$ and $\mst$. This bound is also essentially the smallest possible, as there can be regions of diameter $c' \cdot (\log n/n)^{1/d}$ for some small constant $c' > 0$ that contain no point or only a single point. It might be possible to obtain convergence results for other functionals under weaker notions of typical smoothness. \begin{definition}[typically smooth] \label{def:typsmooth} A Euclidean functional $\functional$ is \emph{typically smooth} if, for every $\beta > 0$, there exists a constant $c = c(\beta)$ such that \[ \max_{x \in [0,1]^d, i \in [n]} \bigl|\functional(U_1, \ldots, U_n) - \functional(U_1, \ldots, U_{i-1}, x, U_{i+1}, \ldots, U_n) \bigr| \leq c \cdot \left(\frac{\log n}{n}\right)^{p/d} \] with a probability of at least $1 - n^{-\beta}$. \end{definition} \begin{theorem}[concentration of typically smooth functionals] \label{thm:concentration} Assume that $\functional$ is typically smooth.
Then \[ \probab\bigl(|\functional(n) - \expected(\functional(n))| \geq t\bigr) \leq O(n^{-\beta}) + \exp\left(- \frac{t^2 n^{\frac{2p}d -1}}{C (\log n)^{2p/d}}\right) \] for an arbitrarily large constant $\beta> 0$ and another constant $C>0$ that depends on $\beta$. \end{theorem} \begin{proof} We use Theorem~\ref{thm:TL}. The event $\Gamma$ is that any point can change the value by at most $O\bigl((\log n/n)^{p/d}\bigr)$. Thus, $\cgood = O\bigl((\log n/n)^{p/d}\bigr)$ and $\cbad = O(1)$. The probability that we do not have the event $\Gamma$ is bounded by $O(n^{-\beta})$ for an arbitrarily large constant $\beta$ by typical smoothness. This only influences the constant hidden in the $O$ of the definition of $\cgood$. We choose $\gamma = O\bigl((\log n/n)^{p/d}\bigr)$. In the notation of Theorem~\ref{thm:TL}, we choose $\eta = O(\gamma)$, which is possible as $\cbad - \cgood \approx \cbad = \Theta(1)$. Using the conclusion of Theorem~\ref{thm:TL} yields \begin{align*} \textstyle \probab\bigl(|\functional(n) - \expected(\functional(n)) | \geq t\bigr) & \textstyle\leq \frac n \gamma \cdot \probab(\lnot \Gamma) + \exp\left(- \frac{t^2 n^{2p/d}}{nC (\log n)^{2p/d}}\right) \\ & \textstyle \leq O(n^{-\beta})+ \exp\left(- \frac{t^2 n^{2p/d}}{nC (\log n)^{2p/d}}\right) \end{align*} for some constant $C > 0$. Here, $\beta$ can be chosen arbitrarily large. \end{proof} Choosing $t = n^\stdexp/\log n$ yields a nontrivial concentration result that suffices to prove complete convergence of typically smooth Euclidean functionals. \begin{corollary} \label{cor:tail} Assume that $\functional$ is typically smooth. Then \begin{equation} \probab\bigl(|\functional(n) - \expected(\functional(n))| > n^\stdexp/\log n\bigr) \leq O\left(n^{-\beta} + \exp\left(- \frac{n}{C (\log n)^{2 + \frac{2p}d}}\right) \right) \label{equ:explicitconcentration} \end{equation} for any constant $\beta$ and $C$ depending on $\beta$ as in Theorem~\ref{thm:concentration}.
\end{corollary} \begin{proof} The proof is straightforward: the assumption that $\functional(n)/n^\stdexp$ converges in mean to $\gamma_{\functional}^{d,p}$ implies $\expected(\functional(n)) = \Theta(n^\stdexp)$. \end{proof} \subsection[Complete Convergence for p>=d]{\boldmath Complete Convergence for $p \geq d$} \label{ssec:cc} In this section, we prove that typical smoothness (Definition~\ref{def:typsmooth}) suffices for complete convergence. This implies complete convergence of $\mst$ and $\pa$ by Lemma~\ref{lem:typsmooth} below. \begin{theorem} \label{thm:cc} Assume that $\functional$ is typically smooth and $\functional(n)/n^{\stdexp}$ converges in mean to $\gamma_{\functional}^{d,p}$. Then $\functional(n)/n^{\stdexp}$ converges completely to $\gamma_{\functional}^{d,p}$. \end{theorem} \begin{proof} Fix any $\eps > 0$. Since \[ \lim_{n \to \infty} \expected\left(\frac{\functional(n)}{n^{\stdexp}}\right) = \gamma_{\functional}^{d,p} , \] there exists an $n_0$ such that \[ \expected\left(\frac{\functional(n)}{n^{\stdexp}}\right) \in \left[\gamma_{\functional}^{d,p} - \frac \eps 2 ,\gamma_{\functional}^{d,p} + \frac \eps 2 \right] \] for all $n \geq n_0$. Furthermore, there exists an $n_1$ such that, for all $n \geq n_1$, the probability that $\functional(n)/n^\stdexp$ deviates by more than $\eps/2$ from its expected value is smaller than $n^{-2}$. To see this, we use Corollary~\ref{cor:tail} and observe that the right-hand side of \eqref{equ:explicitconcentration} is $O(n^{-2})$ for sufficiently large $\beta$ and that the event on the left-hand side is equivalent to \[ \left|\frac{\functional(n)}{n^\stdexp} - \frac{\expected(\functional(n))}{n^\stdexp}\right| > O\left(\frac 1{\log n}\right), \] where $O(1/\log n) < \eps/2$ for sufficiently large $n_1$ and $n \geq n_1$. Let $n_2 = \max\{n_0, n_1\}$.
Then \[ \sum_{n=1}^\infty \probab\left(\left|\frac{\functional(n)}{n^{\stdexp}} - \gamma_{\functional}^{d,p}\right| > \eps \right) \leq n_2 + \sum_{n=n_2+1}^\infty n^{-2} = n_2 + O(1) < \infty. \] \end{proof} Although similar in flavor, smoothness in mean does not immediately imply typical smoothness or vice versa: the latter makes only a statement about \emph{single} points at \emph{worst-case} positions. The former only makes a statement about adding and removing \emph{several} points at \emph{random} positions. However, the proofs of smoothness in mean for $\mst$ and $\pa$ do not exploit this, and we can adapt them to yield typical smoothness. \begin{lemma} \label{lem:typsmooth} $\pa$ and $\mst$ are typically smooth. \end{lemma} \begin{proof} We first consider $\pa$. Replacing a point $U_k$ by some other (worst-case) point $z$ can be modeled by removing $U_k$ and adding $z$. We observe that, in the proof of smoothness in mean (Lemma~\ref{lem:smoothinmean}), we did not exploit that the point added is at a random position, but the proof goes through for any single point at an arbitrary position. Also the other way around, i.e., removing $z$ and replacing it by a random point $U_k$, works in the same way. Thus, $\pa$ is typically smooth. Closely examining Yukich's proof of smoothness in mean for $\mst$~\cite[Lemma 4.8]{Yukich:ProbEuclidean:1998} yields the same result for $\mst$. \end{proof} \begin{corollary} \label{cor:ccexample} For all $d$ and $p$ with $p \geq 1$, $\mst(n)/n^\stdexp$ and $\pa(n)/n^\stdexp$ converge completely to constants $\gamma_{\mst}^{d,p}$ and $\gamma_{\pa}^{d,p}$, respectively. \end{corollary} \begin{proof} Both $\mst$ and $\pa$ are typically smooth and converge in mean. Thus, the corollary follows from Theorem~\ref{thm:cc}. \end{proof} \begin{remark} Instead of Warnke's method of typical bounded differences, we could also have used Kutin's extension of McDiarmid's inequality~\cite[Chapter 3]{Kutin:PhD:2002}.
However, this inequality yields only convergence for $p \leq 2d$, which is still an improvement over the previous complete convergence of $p<d$, but weaker than what we get with Warnke's inequality. Furthermore, Warnke's inequality is easier to apply and a more natural extension in the following way: intuitively, one might think that we could just take McDiarmid's inequality and add the probability that we are not in a nice situation using a simple union bound, but, in general, this is not true~\cite[Section 2.2]{Warnke}. \end{remark} \section{Average-Case Approximation Ratio of the MST Heuristic} \label{sec:MST} In this section, we show that the average-case approximation ratio of the MST heuristic for power assignments is strictly better than its worst-case ratio of $2$. First, we prove that the average-case bound is strictly (albeit marginally) better than $2$ for any combination of $d$ and $p$. Second, we show a simple improved bound for the 1-dimensional case. \subsection{The General Case} \label{ssec:generalcase} The idea behind showing that the MST heuristic performs better on average than in the worst case is as follows: the weight of the PA graph obtained from the MST heuristic can not only be upper-bounded by twice the weight of an MST, but it is in fact easy to prove that it can be upper-bounded by twice the weight of the heavier half of the edges of the MST~\cite{AveragePA}. Thus, we only have to show that the lighter half of the edges of the MST contributes $\Omega(n^\stdexp)$ to the value of the MST in expectation. For simplicity, we assume that the number $n=2m+1$ of points is odd. The case of even $n$ is similar but slightly more technical. We draw points $X=\{U_1, \ldots, U_n\}$ as described above. Let $\pt(X)$ denote the power required in the power assignment obtained from the MST. Furthermore, let $\heavy$ denote the $m$ heaviest edges of the MST, and let $\light$ denote the $m$ lightest edges of the MST. 
We omit the parameter $X$ since it is clear from the context. Then we have \begin{equation} \label{equ:treerelation} \heavy + \light = \mst \leq \pa \leq \pt \leq 2 \heavy = 2 \mst - 2\light \leq 2\mst \end{equation} since the weight of the PA graph obtained from an MST can not only be upper bounded by twice the weight of a minimum-weight spanning tree, but it is easy to show that it is in fact bounded from above by twice the weight of the heavier half of the edges of a minimum-weight spanning tree~\cite{AveragePA}. For distances raised to the power $p$, the expected value of $\mst$ is $(\gamma_{\mst}^{d,p} \pm o(1)) \cdot n^{\stdexp}$. If we can prove that the lightest $m$ edges of the MST have a total weight of $\Omega(n^{\stdexp})$ in expectation, then it follows that the MST power assignment is in expectation strictly less than twice the optimal power assignment. $\light$ is lower-bounded by the weight of the lightest $m$ edges of the whole graph without any further constraints. Let $\reallight = \reallight(X)$ denote the weight of these $m$ lightest edges of the whole graph. Note that both $\light$ and $\reallight$ take edge lengths to the $p$-th power, and we have $\reallight \leq \light$. Let $\ell$ be a small constant to be specified later on. Let $v_{d,r} = \frac{\pi^{d/2} r^d}{\Gamma(\frac d2 +1)}$ be the volume of a $d$-dimensional ball of radius $r$. For compactness, we abbreviate $c_d = \frac{\pi^{d/2}}{\Gamma(\frac d2 +1)}$, thus $v_{d, r} = c_d r^d$. Note that all $c_d$'s are constants since $d$ is constant. The probability $P_k$ that a fixed vertex $v$ has at least $k$ other vertices within a distance of at most $r=\ell \cdot \sqrt[d]{1/n}$ is bounded from above by \[ P_k \leq \binom{n-1}k \cdot v_{d,r}^k \leq \frac{n^k (c_dr^d)^k}{k!} =\frac{n^k (c_d \ell^d n^{-1})^k}{k!} = \frac{\tilde c^k}{k!} \] for another constant $\tilde c = \ell^d c_d$. This follows from independence and a union bound.
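The elementary parts of the chain \eqref{equ:treerelation}, namely $\heavy + \light = \mst \leq \pt \leq 2\mst$, can be checked numerically on random instances. The following sketch makes ad-hoc choices ($d = 2$, $p = 2$, $m = 50$, a fixed seed, and a plain $O(n^2)$ Prim implementation); the sharper cited bound $\pt \leq 2\heavy$~\cite{AveragePA} and the optimal value $\pa$ are not computed here.

```python
import math
import random

def prim_mst(pts):
    """Prim's algorithm on the complete Euclidean graph; returns the MST edge list."""
    n = len(pts)
    in_tree = [False] * n
    best = [math.inf] * n      # cheapest distance from each vertex to the current tree
    parent = [-1] * n
    best[0] = 0.0
    edges = []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=best.__getitem__)
        in_tree[u] = True
        if parent[u] >= 0:
            edges.append((parent[u], u, math.dist(pts[parent[u]], pts[u])))
        for v in range(n):
            d = math.dist(pts[u], pts[v])
            if not in_tree[v] and d < best[v]:
                best[v], parent[v] = d, u
    return edges

random.seed(0)
p, m = 2, 50                                        # exponent; n = 2m + 1 points, 2m edges
pts = [(random.random(), random.random()) for _ in range(2 * m + 1)]
edges = prim_mst(pts)
w = sorted(length ** p for (_, _, length) in edges)  # p-th powers of the 2m edge lengths
mst, light, heavy = sum(w), sum(w[:m]), sum(w[m:])

# MST-heuristic power assignment: every vertex pays the p-th power
# of its longest incident MST edge.
longest = {}
for u, v, d in edges:
    for x in (u, v):
        longest[x] = max(longest.get(x, 0.0), d)
pt = sum(val ** p for val in longest.values())

assert abs(heavy + light - mst) < 1e-9
assert mst <= pt + 1e-9 and pt <= 2 * mst + 1e-9
```

The last assertion reflects that every vertex pays at least its parent edge in any rooted orientation of the tree (hence $\pt \geq \mst$) and that each edge is the longest incident edge of at most two vertices (hence $\pt \leq 2\mst$).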
The expected number of edges incident to a specific vertex that have a length of at most $r$ is thus bounded from above by \[ \sum_{k=1}^{n-1} P_k \leq \sum_{k=1}^{n-1} \frac{\tilde c^k}{k!} \leq \sum_{k=1}^\infty \frac{\tilde c^k}{k!} = e^{\tilde c} -1. \] By choosing $\ell$ appropriately small, we can achieve that $\tilde c \leq 1/3$. This yields $e^{\tilde c} - 1 < 1/2$. By linearity of expectation, the expected total number of edges of length at most $r$ in the whole graph is bounded from above by $m/2$ (for sufficiently large $n$). Thus, in expectation, at least $m/2$ of the lightest $m$ edges of the whole graph have a length of at least $r$. Hence, the expected value of $\reallight$ is bounded from below by \[ \frac m2 \cdot r^p = \frac m2 \cdot \ell^p n^{-\frac pd} \geq \frac{\ell^p}8 \cdot n^{\stdexp} = C_{\reallight}^{d,p} \cdot n^{\stdexp} \] for some constant $C_{\reallight}^{d,p} > 0$, where we have used $m/2 = (n-1)/4 \geq n/8$ for $n \geq 2$. Then the expected value of $\pt$ is bounded from above by \[ \left(2\gamma_{\mst}^{d,p} - 2 C_{\reallight}^{d,p} + o(1)\right) \cdot n^{\stdexp} \] by \eqref{equ:treerelation}. From this and the convergence of $\pa$, we can conclude the following theorem. \begin{theorem} \label{thm:mstratio} For any $d \geq 1$ and any $p \geq 1$, we have \[ \gamma_{\mst}^{d,p} \leq \gamma_{\pa}^{d,p} \leq 2 \gamma_{\mst}^{d,p} - 2 C_{\reallight}^{d,p} < 2 \gamma_{\mst}^{d,p} \] for some constant $C_{\reallight}^{d,p} > 0$ that depends only on $d$ and $p$. \end{theorem} By exploiting that in particular $\pa$ converges completely, we can obtain a bound on the expected approximation ratio from the above result. \begin{corollary} \label{cor:mstratio} For any $d \geq 1$ and $p \geq 1$ and sufficiently large $n$, the expected approximation ratio of the MST heuristic for power assignments is bounded from above by a constant strictly smaller than $2$. \end{corollary} \begin{proof} The expected approximation ratio is $\expected\bigl(\pt(n)/\pa(n)\bigr) = \expected\bigl(\frac{\pt(n)/n^\stdexp}{\pa(n)/n^\stdexp}\bigr)$.
We know that $\pa(n)/n^\stdexp$ converges completely to $\gamma_{\pa}^{d,p}$. This implies that the probability that $\pa(n)/n^\stdexp$ deviates by more than $\eps>0$ from $\gamma_{\pa}^{d,p}$ is $o(1)$ for any $\eps > 0$. If $\pa(n)/n^\stdexp \in [\gamma_{\pa}^{d,p} - \eps, \gamma_{\pa}^{d,p} + \eps]$, then the expected approximation ratio can be bounded from above by $\frac{2 \gamma_{\mst}^{d,p} - 2 C_{\reallight}^{d,p}}{\gamma_{\pa}^{d,p} - \eps}$. This is strictly smaller than $2$ for a sufficiently small $\eps > 0$. Otherwise, we bound the expected approximation ratio by the worst-case ratio of $2$, which contributes only $o(1)$ to its expected value. \end{proof} \begin{remark} \label{rem:ptconv} Complete convergence of the functional $\pt$ as well as smoothness and closeness in mean have been shown for the specific case $p=d$~\cite{AveragePA}. We believe that $\pt$ converges completely for all $p$ and $d$. Since we would then have $\gamma_{\pt}^{d,p} \leq 2 \gamma_{\mst}^{d,p} - 2 C_{\reallight}^{d,p} < 2 \gamma_{\mst}^{d,p}$, this would yield a simpler proof of Corollary~\ref{cor:mstratio}. \end{remark} \subsection{An Improved Bound for the One-Dimensional Case} The case $d=1$ is much simpler than the general case, because the MST is just a Hamiltonian path starting at the left-most and ending at the right-most point. Furthermore, we also know precisely what the MST heuristic does: assume that a point $x_i$ lies between $x_{i-1}$ and $x_{i+1}$. The MST heuristic assigns power $\pa(x_i) = \max\{|x_{i} - x_{i-1}|, |x_{i} - x_{i+1}|\}^p$ to $x_i$. The example that proves that the MST heuristic is no better than a worst-case 2-approximation shows that the heuristic is bad if $x_i$ is very close to one of its neighbors and good if $x_i$ is approximately in the middle between $x_{i-1}$ and $x_{i+1}$. In order to show an improved bound for the approximation ratio of the MST heuristic for $d=1$, we introduce some notation.
First, we remark that, with high probability, no subinterval of $[0,1]$ of length $c \log n/n$ is free of the $n$ points of $X=\{U_1,\ldots, U_n\}$ (see Lemma~\ref{lem:emptyball} for the precise statement). For the rest of this section, we assume that no interval of length $c \log n/n$ is empty, for some sufficiently large constant $c$. We proceed as follows: Let $x_0 = 0$, $x_{n+1} = 1$, and let $x_1 \leq \ldots \leq x_n$ be the $n$ points (sorted in increasing order) that are drawn uniformly and independently from the interval $[0,1]$. Now we distribute the weight $\pt(X)$ of the power assignment obtained from the MST and the weight of the MST over the points as follows: For the power assignment, every point $x_i$ (for $1 \leq i \leq n$) gets a charge of $P_i = \max\{x_i - x_{i-1}, x_{i+1}-x_i\}^p$. This is precisely the power that this point needs in the power assignment obtained from the spanning tree. For the minimum spanning tree, we divide the weight of an edge $(x_{i-1}, x_i)$ (for $1 \leq i \leq n+1$) evenly between $x_{i-1}$ and $x_i$. This means that the charge of $x_i$ is $M_i = \frac 12 \cdot \bigl((x_i - x_{i-1})^p + (x_{i+1}-x_i)^{p}\bigr)$. The weight of the minimum spanning tree is thus \[ \mst =\underbrace{\sum_{i=1}^n M_i}_{M^\star} + \underbrace{\frac 12 \cdot \bigl((x_1-x_0)^p + (x_{n+1} - x_n)^p\bigr)}_{=M'}. \] The total power for the power assignment obtained from this tree is \[ \pt = \underbrace{\sum_{i=1}^n P_i}_{P^\star} + \underbrace{(x_1-x_0)^p + (x_{n+1} - x_n)^p}_{=P'}. \] Note the following: If the largest empty interval has a length of at most $c \log n/n$, then the terms $P'$ and $M'$ are negligible according to the following lemma. Thus, we ignore $P'$ and $M'$ afterwards to simplify the analysis. \begin{lemma} \label{lem:negligible} Assume that the largest empty interval has a length of at most $c \log n/n$.
Then $M' = O\bigl(M^\star \cdot \frac{(\log n)^p}{n}\bigr)$ and $P' = O\bigl(P^\star \cdot \frac{(\log n)^p}{n}\bigr)$. \end{lemma} \begin{proof} We have $M' \leq (c \log n/n)^p$ and $P' \leq 2 (c \log n/n)^p$ because $x_1 \leq c \log n/n$ and $x_n \geq 1-c \log n/n$ by assumption. Thus, $M', P' = O\bigl(\bigl(\frac{\log n}n\bigr)^p\bigr)$. Furthermore, \[ M^\star = \sum_{i=1}^n \frac 12 \cdot \left(\bigl(x_i - x_{i-1}\bigr)^p + \bigl(x_{i+1}-x_i\bigr)^p\right). \] Since $p \geq 1$, this function becomes minimal if we place $x_1, \ldots, x_n$ equidistantly. Thus, \[ M^\star \geq \sum_{i=1}^n \left(\frac 1{n+1}\right)^p = n \cdot \left(\frac 1{n+1}\right)^p = \Omega\bigl(n^{1-p}\bigr). \] With a similar calculation, we obtain $P^\star = \Omega\bigl(n^{1-p}\bigr)$ and the result follows. \end{proof} For simplicity, we assume from now on that $n$ is even. If $n$ is odd, the proof proceeds in exactly the same way except for some changes in the indices. In order to analyze $M^\star$ and $P^\star$, we proceed in two steps: First, we draw all points $x_1, x_3, \ldots, x_{n-1}$ (called the \emph{odd points}). Given the locations of these points, $x_i$ for even $i$ ($x_i$ is then called an \emph{even point}) is distributed uniformly in the interval $[x_{i-1}, x_{i+1}]$. Note that we do not really draw the odd points. Instead, we let an adversary fix these points. But the adversary is not allowed to keep an interval of length $c \log n/n$ free (because randomness would not do so either with high probability). Then the sums \[ \meven = \sum_{i=1}^{n/2} M_{2i} \] and \[ \peven = \sum_{i=1}^{n/2} P_{2i} \] are sums of independent random variables. (Of course $M_{2i}$ and $P_{2i}$ are dependent.) Now let $\ell_{2i} = x_{2i+1} - x_{2i-1}$ be the length of the interval for $x_{2i}$. The expected value of $M_{2i}$ is \[ \expected(M_{2i}) = \frac 1{\ell_{2i}} \cdot \int_{0}^{\ell_{2i}} \frac 12 \cdot \bigl(x^p + (\ell_{2i} - x)^p\bigr) \,\text dx = \frac{\ell_{2i}^p}{p+1}.
\] Analogously, we obtain \begin{align*} \expected(P_{2i}) & = \frac 1{\ell_{2i}} \cdot \int_{0}^{\ell_{2i}} \max\{x, \ell_{2i} - x\}^p \,\text dx \\ & = \frac 2{\ell_{2i}} \cdot \int_{0}^{\ell_{2i}/2} (\ell_{2i} - x)^p \,\text dx = \left(2-\frac{1}{2^p}\right) \cdot \frac{\ell_{2i}^p}{p+1} . \end{align*} We observe that $\expected(P_{2i})$ is a factor $2-2^{-p}$ greater than $\expected(M_{2i})$. In the same way, the expected value of $P_{2i+1}$ is a factor of $2-2^{-p}$ greater than the expected value of $M_{2i+1}$. This is already an indicator that the approximation ratio should be $2-2^{-p}$. Because $\meven$ and $\peven$ are sums of independent random variables, we can use Hoeffding's inequality to bound the probability that they deviate from the expected values $\expected(\meven)$ and $\expected(\peven)$. \begin{lemma}[Hoeffding's inequality~\cite{Hoeffding:SumsBounded:1963}] \label{lem:hoeffding} Let $X_1, \ldots, X_m$ be independent random variables, where $X_i$ assumes values in the interval $[a_i, b_i]$. Let $X = \sum_{i = 1}^m X_i$. Then for all $t > 0$, \[ \probab\bigl(X - \expected(X) \geq t\bigr) \leq \exp\left(-\frac{2 t^2}{\sum_{i = 1}^m (b_i-a_i)^2}\right). \] By symmetry, the same bound holds for $\probab\bigl(X - \expected(X) \leq -t\bigr)$. \end{lemma} Let us start with analyzing the probability that $\meven < (1-n^{-1/4}) \cdot \expected(\meven)$. We have $m=n/2$ in the above. Furthermore, we have $b_i = \ell_{2i}^p/2$ (obtained if $x_{2i} = x_{2i-1}$ or $x_{2i} = x_{2i+1}$) and $a_i = (\ell_{2i}/2)^p$. Thus, $(b_i - a_i)^2 = \ell_{2i}^{2p} \cdot (2^{-1} - 2^{-p})^2$. If $p > 1$ is a constant, then this is $c_{p} \ell_{2i}^{2p}$ for some constant $c_p$. For $p = 1$, it is $0$. However, in this case, the length of the minimum spanning tree is exactly $1$, without any randomness. Thus, for $p = 1$, we do not have to apply Hoeffding's inequality. 
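As an aside, the two closed-form expectations just derived can be sanity-checked numerically. The short Python sketch below (illustrative only; midpoint-rule quadrature with parameters of our choosing) verifies $\expected(M_{2i}) = \ell^p/(p+1)$ and the ratio $\expected(P_{2i})/\expected(M_{2i}) = 2-2^{-p}$ for a few values of $p$:

```python
import math

def mean_M(ell, p, n=200000):
    # E(M_2i): average of 0.5*(x^p + (ell - x)^p) over x uniform in [0, ell]
    h = ell / n
    return sum(0.5 * ((h * (k + 0.5)) ** p + (ell - h * (k + 0.5)) ** p)
               for k in range(n)) / n

def mean_P(ell, p, n=200000):
    # E(P_2i): average of max(x, ell - x)^p over x uniform in [0, ell]
    h = ell / n
    return sum(max(h * (k + 0.5), ell - h * (k + 0.5)) ** p
               for k in range(n)) / n

ell = 0.7
for p in (1, 2, 3.5):
    m, P = mean_M(ell, p), mean_P(ell, p)
    assert math.isclose(m, ell**p / (p + 1), rel_tol=1e-4)
    assert math.isclose(P / m, 2 - 2 ** (-p), rel_tol=1e-4)
```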
For $p > 1$, we obtain \begin{align} \probab\left(\meven < \bigl(1-n^{-1/4}\bigr) \cdot \expected(\meven)\right) & \leq \exp\left( - \frac{2 n^{-1/2} \expected(\meven)^2}{\sum_{i=1}^{n/2}c_{p} \ell_{2i}^{2p}}\right) \notag \\ & = \exp\left( - \frac{2 n^{-1/2} \bigl(\sum_{i=1}^{n/2} \frac{\ell_{2i}^p}{p+1}\bigr)^2}{\sum_{i=1}^{n/2}c_{p} \ell_{2i}^{2p}}\right) \notag \\ & = \exp\left( - c' n^{-1/2} \cdot \frac{\bigl(\sum_{i=1}^{n/2} \ell_{2i}^p\bigr)^2}{\sum_{i=1}^{n/2} \ell_{2i}^{2p}}\right) \label{fraction} \end{align} with $c' = \frac{2}{(p+1)^2 c_p}$. To estimate the exponent, we use the following technical lemma. \begin{lemma} \label{lem:tech} Let $p \geq 1$ be a constant. Let $s_1, \ldots, s_m \in [0, \beta]$ be positive numbers for some $\beta > 0$ with $\sum_{i=1}^m s_i = \gamma$ for some number $\gamma$. (We assume that $m\beta \geq \gamma$.) Then \[ \frac{\bigl(\sum_{i=1}^m s_i^p\bigr)^2}{\sum_{i=1}^m s_i^{2p}} \geq m \cdot \left(\frac{\gamma}{m\beta}\right)^p. \] \end{lemma} \begin{proof} We rewrite the numerator as \[ \sum_{i=1}^m s_i^p \sum_{j=1}^m s_j^p \] and the denominator as \[ \sum_{i=1}^m s_i^p s_i^p . \] Now we see that the coefficient of $s_i^p$ in the numerator is $\sum_{j=1}^m s_j^p$, whereas in the denominator it is $s_i^p \leq \beta^p$. Hence, the fraction is bounded from below by $\frac{\sum_{j=1}^m s_j^p}{\beta^p}$. Because of $p \geq 1$, the sum $\sum_{j=1}^m s_j^p$ is convex as a function of the $s_j$ and thus, subject to $\sum_{j=1}^m s_j = \gamma$, becomes minimal if all $s_j$ are equal. Hence, $\sum_{j=1}^m s_j^p \geq m \cdot (\gamma/m)^p$, which yields the claimed bound $m \cdot \bigl(\frac{\gamma}{m\beta}\bigr)^p$. \end{proof} With these results, we obtain the following theorem. \begin{theorem} \label{thm:d1} For all $p \geq 1$, we have $\gamma_{\mst}^{1,p} \leq \gamma_{\pa}^{1,p} \leq (2 - 2^{-p}) \cdot \gamma_{\mst}^{1,p}$. \end{theorem} \begin{proof} The first inequality is immediate.
For the second inequality, we apply Lemma~\ref{lem:tech} with $\beta = \frac{4\log n}{n}$ and $\gamma = 1-o(1) \geq 1/2$ (the $o(1)$ stems from the fact that we have to ignore the distances $x_1 - x_0$ and $x_{n+1} - x_n$) and $s_i = \ell_{2i}$ and $m=n/2$ to obtain a lower bound of $\frac n2 \cdot \left(\frac{1}{4\log n}\right)^p$ for the fraction in \eqref{fraction}. This yields \[ \probab\left(\meven < \bigl(1-n^{-1/4}\bigr) \cdot \expected(\meven)\right) \leq \exp\left(-\Omega\left(\frac{\sqrt n}{(\log n)^p}\right)\right). \] In the same way, we can show that \[ \probab\left(\peven > \bigl(1+n^{-1/4}\bigr) \cdot \expected(\peven)\right) \leq \exp\left( - \Omega(n^{1/4})\right). \] Furthermore, the same analysis can be done for $\podd$ and $\modd$. Thus, both the power assignment obtained from the MST and the MST are concentrated around their means, their means differ by a factor of $2- 2^{-p}$ for large $n$, and the MST provides a lower bound for the optimal PA. \end{proof} The high probability bounds for the bound of $2-2^{-p}$ on the approximation ratio of the power assignment obtained from the spanning tree, together with the observation that in case of any ``failure'' event we can use the worst-case approximation ratio of $2$, yield the following corollary. \begin{corollary} \label{cor:d1} The expected approximation ratio of the MST heuristic is at most $2-2^{-p} + o(1)$. \end{corollary} \section{Conclusions and Open Problems} \label{sec:concl} We have proved complete convergence of Euclidean functionals that are \emph{typically smooth} (Definition~\ref{def:typsmooth}) for the case that the power $p$ is larger than the dimension $d$. The case $p > d$ appears naturally in transmission questions for wireless networks. As examples, we have obtained complete convergence for the $\mst$ (minimum-spanning tree) and the $\pa$ (power assignment) functional.
To prove this, we have used a recent concentration of measure result by Warnke~\cite{Warnke}. His strong concentration inequality might be of independent interest to the algorithms community. As a technical challenge, we have had to deal with the fact that the degree of an optimal power assignment graph can be unbounded. To conclude this paper, let us mention some problems for further research: \begin{enumerate} \setlength{\itemsep}{0mm} \item Is it possible to prove complete convergence of other functionals for $p \geq d$? The most prominent one would be the traveling salesman problem (TSP). However, we are not aware of a proof that the TSP is smooth in mean. \item Concerning the average-case approximation ratio of the MST heuristic, we have only proved that the approximation ratio is smaller than $2$. Only for the case $d=1$ did we provide an explicit upper bound for the approximation ratio. Is it possible to provide an improved approximation ratio as a function of $d$ and $p$ for general $d$? \item Can Rhee's isoperimetric inequality~\cite{Rhee:Subadditive:1993} be adapted to work for $p \geq d$? Rhee's inequality can be used to obtain convergence for the case that the points are not identically distributed, and it has for instance been used for a smoothed analysis of Euclidean functionals~\cite{BlaeserEA:Partitioning:2013}. (Smoothed analysis has been introduced by Spielman and Teng to explain the performance of the simplex method~\cite{SpielmanTeng:SmoothedAnalysisWhy:2004}. We refer to two surveys for an overview~\cite{SpielmanTeng:CACM:2009,MantheyRoeglin:SmoothedSurvey:2011}.) \item Can our findings about power assignments be generalized to other settings? For instance, to get a more reliable network, we may want to have higher connectivity. Another issue would be to take into account interference of signals or noise, such as in the SINR or related models. \end{enumerate} \section*{Acknowledgment} We thank Samuel Kutin, Lutz Warnke, and Joseph Yukich for fruitful discussions.
\section{Introduction} One of the most basic questions in cosmology is whether the universe had a beginning or has simply existed forever. It was addressed in the singularity theorems of Penrose and Hawking \cite{HawkingEllis}, with the conclusion that the initial singularity is not avoidable. These theorems rely on the strong energy condition and on certain assumptions about the global structure of spacetime. There are, however, three popular scenarios which circumvent these theorems: eternal inflation, a cyclic universe, and an ``emergent'' universe which exists for eternity as a static seed before expanding. Here we shall argue that none of these scenarios can actually be past-eternal. Inflation violates the strong energy condition, so the singularity theorems of Penrose and Hawking do not apply. Indeed, quantum fluctuations during inflation violate even the weak energy condition, so that singularity theorems assuming only the weak energy condition \cite{Borde1994} do not apply either. A more general incompleteness theorem was proved recently \cite{Borde} that does not rely on energy conditions or Einstein's equations. Instead, it states simply that past geodesics are incomplete provided that the expansion rate averaged along the geodesic is positive: $H_{av} > 0$. This is a much weaker condition, and should certainly apply to the past of any inflating region of spacetime. Therefore, although inflation may be eternal in the future, it cannot be extended indefinitely to the past. Another possibility could be a universe which cycles through an infinite series of big bangs, each followed by expansion and then contraction into a crunch that transitions into the next big bang \cite{Steinhardt}. A potential problem with such a cyclic universe is that the entropy must continue to increase through each cycle, leading to a ``thermal death'' of the universe.
This can be avoided if the volume of the universe increases through each cycle as well, allowing the ratio $S/V$ to remain finite \cite{Tolman}. But if the volume continues to increase over each cycle, $H_{av} > 0$, meaning that the universe is past-incomplete. We now turn to the emergent universe scenario, which will be our main focus in this paper. \section{Emergent universe scenario} In the emergent universe model, the universe is closed and static in the asymptotic past (recent work includes \cite{Ellis, Barrow, Sergio,Yu,Graham}; for early work on oscillating models see \cite{Dabrowski}). Then $H_{av} = 0$ and the incompleteness theorem \cite{Borde} does not apply. This universe can be thought of as a ``cosmic egg'' that exists forever until it breaks open to produce an expanding universe. In order for the model to be successful, two key features are necessary. First, the universe should be stable, so that quantum fluctuations will not push it to expansion or contraction. In addition, it should contain some mechanism to exit the stationary regime and begin inflation. One possible mechanism involves a massless scalar field $\phi$ in a potential $V(\phi)$ which is flat as $\phi \to -\infty$ but increases towards positive values of $\phi$. In the stationary regime the field ``rolls'' from $-\infty$ at a constant speed, ${\dot\phi}=const$, but as it reaches the non-flat region of the potential, inflation begins \cite{Mulryne}. Graham et al. \cite{Graham} recently proposed a simple emergent model featuring a closed universe ($k = +1$) with a negative cosmological constant ($\Lambda < 0 $) and a matter source which obeys $P = w \rho$, where $-1<w<-1/3$. Graham et al. point out that the matter source should not be a perfect fluid, since this would lead to instability from short-wavelength perturbations \cite{Graham}. One such material that fulfills this requirement is a network of domain walls, which has $w = -2/3$. 
Then the energy density is \begin{equation} \rho(a) = \Lambda + \rho_0 a^{-1} \label{rho} \end{equation} and the Friedmann equation for the scale factor $a$ has solutions of the form of a simple harmonic oscillator: \begin{equation} a = \omega^{-1} (\gamma-\sqrt{\gamma^2 -1} \cos(\omega t)), \label{a} \end{equation} where \begin{equation} \omega = \sqrt{\frac{8\pi}{3} G|\Lambda|} \label{omega} \end{equation} and \begin{equation} \gamma=\sqrt{\frac{2\pi G\rho_0^2}{3|\Lambda|}}. \label{gamma} \end{equation} In the special case where $\gamma = 1$, the universe is static. Although this model is stable with respect to classical perturbations, we will see that there is a quantum instability \cite{DabrowskiLarsen,MithaniVilenkin}. \subsection{Quantum mechanical collapse} We consider the quantum theory for this system in the minisuperspace where the wave function of the universe $\psi$ depends only on the scale factor $a$. In the classical theory, the Hamiltonian is given by \begin{equation} {\cal H} = -\frac{G}{3\pi a}\left( p_a^2 + U(a) \right), \label{H} \end{equation} where \begin{equation} p_a = -\frac{3\pi}{2G}a{\dot a} \end{equation} is the momentum conjugate to $a$ and the potential $U(a)$ is given by \begin{equation} U(a) = \left(\frac{3\pi}{2G}\right)^2 a^2\left(1-\frac{8\pi G}{3}a^2\rho(a)\right). \label{U1} \end{equation} With the Hamiltonian constraint ${\cal H} = 0$, enforcing zero total energy of the universe, we recover the oscillating universe solutions discussed in \cite{Graham}. We quantize the theory by letting the momentum become the differential operator $p_a \to -i\frac{d}{d a}$ and replacing the Hamiltonian constraint with the Wheeler-DeWitt equation \cite{DeWitt} \begin{equation} {\cal H} \psi = 0. \end{equation} From the Hamiltonian in Eq.~(\ref{H}), the WDW equation becomes \begin{equation} \left(-\frac{d^2}{da^2}+U(a)\right)\psi(a)=0, \label{WDW} \end{equation} with the potential from Eqs.~(\ref{rho}) and ~(\ref{U1}). 
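As a quick consistency check, the solution in Eq.~(\ref{a}) can be verified against the closed-universe Friedmann equation $\dot a^2 = \frac{8\pi G}{3}\rho(a)\,a^2 - 1$ with the density of Eq.~(\ref{rho}). The following numerical sketch (illustrative only; units with $G=1$ and arbitrarily chosen parameter values) confirms that the residual vanishes:

```python
import math

G = 1.0                      # units with G = 1 (illustrative choice)
lam = 0.3                    # |Lambda|
gamma = 1.5                  # oscillating (non-static) case, gamma > 1
omega = math.sqrt(8 * math.pi * G * lam / 3)            # Eq. (omega)
rho0 = gamma * math.sqrt(3 * lam / (2 * math.pi * G))   # inverted Eq. (gamma)
delta = math.sqrt(gamma**2 - 1)

def a(t):       # scale factor, Eq. (a)
    return (gamma - delta * math.cos(omega * t)) / omega

def adot(t):    # da/dt, differentiated analytically
    return delta * math.sin(omega * t)

def rho(av):    # energy density, Eq. (rho) with Lambda = -|Lambda|
    return -lam + rho0 / av

# Friedmann equation for a closed universe: adot^2 = (8 pi G/3) rho(a) a^2 - 1
for t in (0.1, 0.7, 1.9, 3.4):
    lhs = adot(t) ** 2
    rhs = (8 * math.pi * G / 3) * rho(a(t)) * a(t) ** 2 - 1
    assert abs(lhs - rhs) < 1e-10
```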
Note that in quantum theory the form of the potential (see Fig.~1) is no longer that of a harmonic oscillator. \begin{figure}[h] \begin{center} \includegraphics[width=9cm]{potential1} \caption{The potential $U(a)$ with turning points $a_+$ and $a_-$.} \label{potential} \end{center} \end{figure} Instead, there is an oscillating region between the classical turning points $a_+$ and $a_-$, which are given by \begin{equation} a_{\pm} = \omega^{-1} \left( \gamma \pm \sqrt{\gamma^2 -1} \right), \end{equation} and the universe may tunnel through the classically forbidden region from $a_-$ to $a=0$. The semiclassical tunneling probability as the universe bounces at $a_-$ can be determined from\footnote{Semiclassical tunneling in oscillating universe models has been studied in the early work by Dabrowski and Larsen \cite{DabrowskiLarsen}.} \begin{equation} {\cal P} \sim e^{-2S_{WKB}} \end{equation} where the tunneling action is \begin{equation} S_{WKB} = \int_0^{a_-} \sqrt{U(a)}da = \frac{9M_{P}^4}{16 | \Lambda |} \left[ \frac{\gamma^2}{2} + \frac{\gamma}{4}\left( \gamma^2 -1 \right) \ln\left( \frac{\gamma-1}{\gamma+1} \right) -\frac{1}{3} \right]. \label{WKB} \end{equation} For a static universe, $\gamma = 1$ and $a_- = a_+ = \omega^{-1}$, so that \begin{equation} S_{WKB} = \frac{3M_{P}^4}{32 | \Lambda |}. \end{equation} Since the tunneling probability is nonzero, the simple harmonic universe cannot last forever. \subsection{Solving the WDW equation} First let us examine the well-known quantum harmonic oscillator. In that case, the wave function is a solution to the Schr\"odinger equation \begin{equation} \frac{1}{2}\left(-\frac{d^2}{dx^2} + \omega^2 x^2\right) \psi(x)= E\psi(x). \end{equation} After imposing the boundary conditions $\psi(\pm \infty ) \to 0$, the solutions represent a discrete set of eigenfunctions, each having energy eigenvalue $E_n = \left( n+\frac{1}{2} \right)\omega$.
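Returning to the tunneling action of Eq.~(\ref{WKB}): the closed form can be checked by integrating $\sqrt{U(a)}$ numerically. The sketch below (illustrative Python; units with $G = 1$, so $M_P^4 = G^{-2} = 1$, and arbitrarily chosen $\gamma$ and $|\Lambda|$) compares a midpoint-rule approximation of the integral with the bracketed expression, using $1 - \frac{8\pi G}{3}a^2\rho(a) = (\omega a - \gamma)^2 - (\gamma^2 - 1)$:

```python
import math

G = 1.0
lam = 0.3                                   # |Lambda| in units G = 1
gamma = 1.5
omega = math.sqrt(8 * math.pi * G * lam / 3)
delta = math.sqrt(gamma**2 - 1)
a_minus = (gamma - delta) / omega           # inner turning point a_-

def sqrtU(a):
    # sqrt of U(a) = (3 pi/(2G))^2 a^2 [ (omega*a - gamma)^2 - (gamma^2 - 1) ]
    inner = (omega * a - gamma) ** 2 - delta**2
    return (3 * math.pi / (2 * G)) * a * math.sqrt(max(inner, 0.0))

# midpoint-rule approximation of the tunneling integral from 0 to a_-
n = 400000
h = a_minus / n
S_num = h * sum(sqrtU(h * (k + 0.5)) for k in range(n))

# closed form, Eq. (WKB), with M_P^4 = 1/G^2
S_formula = (9 / (16 * G**2 * lam)) * (
    gamma**2 / 2
    + (gamma / 4) * (gamma**2 - 1) * math.log((gamma - 1) / (gamma + 1))
    - 1 / 3
)
assert math.isclose(S_num, S_formula, rel_tol=1e-4)
```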
However, in the case of the simple harmonic universe the wave function is a solution to the WDW equation (\ref{WDW}), which has a fixed energy eigenvalue $E = 0$ from the Hamiltonian constraint. From the form of the potential in Fig.~\ref{potential}, it seems that we must require $\psi(a) \to 0$ as $a \to \infty$, so that the wave function is bounded at $a\to\infty$. We are then not free to impose any additional condition at $a = 0$, or the system will be overdetermined. The wave function in the under-barrier region $0<a<a_-$ is generally a superposition of growing and decaying solutions, and we can expect that the solution that grows towards $a=0$ will dominate (unless the parameters of the model are fine-tuned; see \cite{MithaniVilenkin} for more details). A numerical solution to the WDW equation is illustrated in Fig.~\ref{oscillatingsolution}. It exhibits an oscillatory behavior between the classical turning points and grows in magnitude towards $a=0$. This indicates a nonzero probability of collapse. Similar behavior is found for the case of $\gamma=1$, corresponding to a classically static universe. \begin{figure}[h] \begin{center} \includegraphics[width=9cm]{solution} \caption{Solution of the WDW equation with $|\Lambda| / M_P^4 = .028$ and $\gamma = 1.3$ (dashed line). The WDW potential is also shown (solid line).} \label{oscillatingsolution} \end{center} \end{figure} One can consider a more general class of models including strings, domain walls, dust, radiation, etc., \begin{equation} \rho (a) = \Lambda + \frac{C_1}{a} + \frac{C_2}{a^2} + \frac{C_3}{a^3} + \frac{C_4}{a^4} + \dots. \end{equation} For positive values of $C_n$, the effect of this is that the potential develops another classically allowed region at small $a$. So the tunneling will now be to that other region, but the qualitative conclusion about the quantum instability remains unchanged. Altering this conclusion would require rather drastic measures.
For example, one could add a matter component $\rho_n(a) = C_n/a^n$ with $n\geq 6$ and $C_n<0$. Then the height of the barrier becomes infinite as $a\to 0$ and the tunneling action is divergent. Note, however, that such a negative-energy matter component is likely to introduce quantum instabilities of its own. \section{Did the universe have a beginning?} At this point, it seems that the answer to this question is probably yes.\footnote{Note that we use the term ``beginning'' as being synonymous with past incompleteness.} Here we have addressed three scenarios which seemed to offer a way to avoid a beginning, and have found that none of them can actually be eternal in the past. Both eternal inflation and cyclic universe scenarios have $H_{av} > 0$, which means that they must be past-geodesically incomplete. We have also examined a simple emergent universe model, and concluded that it cannot escape quantum collapse. Even considering more general emergent universe models, there do not seem to be any matter sources that admit solutions that are immune to collapse.
\section{Introduction} Let $\cH$ be a real Hilbert space endowed with weak topology defined by the inner product $\langle \cdot , \cdot \rangle$ and its induced norm $\| \cdot \|$. Let $C \subseteq \cH$ be a nonempty closed convex subset and $f: \cH \times \cH \to \R \cup \{+\infty\}$ a bifunction such that $f(x, y) < +\infty$ for every $x, y \in C$. The equilibrium problem defined by the Nikaido-Isoda-Fan inequality that we are going to consider in this paper is given as $$\text{Find}\ x \in C: f(x, y) \geq 0 \ \forall y \in C. \eqno(EP)$$ This inequality was first used in 1955 by Nikaido and Isoda \cite{NI1955} in convex game models. Then in 1972 Ky Fan \cite{F1972} called this inequality a minimax one and established existence theorems for Problem $(EP)$. After the appearance of the paper by Blum and Oettli \cite{BO1994}, Problem $(EP)$ has attracted much attention from researchers. It has been shown in \cite{BCPP2013, BO1994, MO1992} that some important problems such as optimization, variational inequality, Kakutani fixed point and Nash equilibrium can be formulated in the form of $(EP)$. Many papers concerning solution existence, stability, as well as algorithms for Problem $(EP)$ have been published (see e.g. \cite{HM2011, IS2003, M2003, MQ2009, QAM2012, QMH2008, SS2011} and the survey paper \cite{BCPP2013}). A basic method for Problem $(EP)$ is the gradient (or projection) one, where the sequence of iterates is defined by taking \begin{equation}\label{1m} x^{k+1} = \arg\min\left\{ \lambda_k f(x^k,y) +\frac{1}{2} \|y-x^k\|^2 : y \in C \right\}, \end{equation} where $\lambda_k$ is some appropriately chosen real number. Note that in the variational inequality case, where $f(x,y) := \langle F(x), y-x\rangle$, the iterate $x^{k+1}$ defined by (\ref{1m}) becomes $$x^{k+1} = P_C\left(x^k - \lambda_k F(x^k)\right),$$ where $P_C$ stands for the metric projection onto $C$.
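For illustration, the projection iteration above is easy to run in a finite-dimensional special case. A minimal Python sketch (all data illustrative: $F$ affine and strongly monotone, $C$ a box so that $P_C$ is a componentwise clip):

```python
import numpy as np

# Projection method for the VI special case f(x, y) = <F(x), y - x>:
#     x^{k+1} = P_C(x^k - lam * F(x^k))
A = np.array([[2.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([-1.0, -2.0])
F = lambda z: A @ z + b                  # strongly monotone, Lipschitz
P_C = lambda z: np.clip(z, 0.0, 1.0)     # projection onto the box C = [0,1]^2

lam = 0.2                                # small fixed step size
x = np.zeros(2)
for _ in range(500):
    x = P_C(x - lam * F(x))

# a solution of the VI is exactly a fixed point of the iteration map
assert np.linalg.norm(x - P_C(x - lam * F(x))) < 1e-8
```

In this instance the unconstrained zero of $F$, namely $(0.2, 0.6)$, lies inside $C$, so the iterates converge to it; for a strongly monotone and Lipschitz $F$, a sufficiently small fixed step makes the iteration a contraction.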
It is well known that under certain conditions on the parameter $\lambda_k$, the projection method is convergent if $f$ is strongly pseudomonotone or paramonotone \cite{IS2003, QMH2008}. However, when $f$ is monotone, it may fail to converge. In order to obtain convergent algorithms for monotone, even pseudomonotone, equilibrium problems, the extragradient method first proposed by Korpelevich \cite{K1976} for the saddle point and related problems has been extended to equilibrium problems \cite{QMH2008}. This extragradient algorithm requires, at each iteration, solving the two strongly convex programs \begin{equation}\label{2m} y^k = \arg\min\left\{ \lambda_k f(x^k,y) +\frac{1}{2} \|y-x^k\|^2 : y \in C \right\}, \end{equation} \begin{equation}\label{3m} x^{k+1} = \arg\min\left\{ \lambda_k f(y^k,y) +\frac{1}{2} \|y-x^k\|^2 : y \in C \right\}, \end{equation} which may be computationally expensive. In order to reduce this cost, several convergent algorithms that require solving only one strongly convex program or computing only one projection at each iteration have been proposed. These algorithms apply to some classes of bifunctions such as strongly pseudomonotone and paramonotone ones, with or without using an ergodic sequence (see e.g. \cite{AHT2016, DMQ2016, SS2011}). In another direction, also for the sake of reducing computational cost, some splitting algorithms have been developed (see e.g. \cite{AH2017, HV2017, M2009}) for monotone equilibrium problems where the bifunctions can be decomposed into the sum of two bifunctions. In these algorithms the convex subprograms (resp. regularized subproblems) involving the bifunction $f$ can be replaced by the two convex subprograms (resp. regularized subproblems), one for each $f_i$ $(i=1, 2)$ independently. In this paper we modify the projection algorithm in \cite{SS2011} to obtain a splitting convergent algorithm for paramonotone equilibrium problems.
The main feature of this algorithm is that at each iteration, it requires solving only one strongly convex program. Furthermore, in the case when $f = f_1 + f_2$, this strongly convex subprogram can be replaced by two strongly convex subprograms, one for each of $f_1$ and $f_2$, as in the algorithms in \cite{AH2017, HV2017}; however, for the convergence we do not require any additional conditions such as H\"older continuity and Lipschitz-type conditions as in \cite{AH2017, HV2017}. We also show that the ergodic sequence defined by the algorithm's iterates converges to a solution without the paramonotonicity property. We apply the ergodic algorithm to solving a Cournot-Nash model with joint constraints. The computational results and experience show that the ergodic algorithm, combined with a restart strategy, is efficient for this model. The remaining part of the paper is organized as follows. The next section presents preliminaries containing some lemmas that will be used in proving the convergence of the proposed algorithm. Section \ref{SectionAlgorithm} is devoted to the description of the algorithm and its convergence analysis. Section \ref{SectionExperiments} shows an application of the algorithm in solving a Cournot-Nash model with joint constraints. Section \ref{SectionConclusion} closes the paper with some conclusions. \section{Preliminaries} We recall from \cite{BCPP2013} the following well-known definition on monotonicity of bifunctions.
\begin{defi} A bifunction $f: \cH \times \cH \to \R \cup \{+\infty\}$ is said to be \begin{itemize}[leftmargin = 0.5 in] \item[(i)] strongly monotone on $C$ with modulus $\beta > 0$ (shortly $\beta$-strongly monotone) if $$f(x, y) + f(y, x) \leq -\beta \| y - x \|^2 \quad \forall x, y \in C;$$ \item[(ii)] monotone on $C$ if $$f(x, y) + f(y, x) \leq 0 \quad \forall x, y \in C;$$ \item[(iii)] strongly pseudo-monotone on $C$ with modulus $\beta > 0$ (shortly $\beta$-strongly pseudo-monotone) if $$f(x, y) \geq 0 \implies f(y, x) \leq -\beta\| y - x \|^2 \quad \forall x, y \in C;$$ \item[(iv)] pseudo-monotone on $C$ if $$f(x, y) \geq 0 \implies f(y, x) \leq 0 \quad \forall x, y \in C.$$ \item[(v)] paramonotone on $C$ with respect to a set $S$ if $$x^* \in S, x\in C \text{ and } f(x^*, x) = f(x, x^*) = 0 \text{ implies } x \in S.$$ \end{itemize} \end{defi} Obviously, $(i) \implies (ii) \implies (iv)$ and $(i) \implies (iii) \implies (iv)$. Note that a strongly pseudo-monotone bifunction may not be monotone. Paramonotone bifunctions have been used in e.g. \cite{SS2011,S2011}. Some properties of paramonotone operators can be found in \cite{IS1998}. Clearly in the case of optimization problem when $f(x,y) = \varphi(y) - \varphi(x)$, the bifunction $f$ is paramonotone on $C$ with respect to the solution set of the problem $\min_{x\in C} \varphi(x)$. The following well known lemmas will be used for proving the convergence of the algorithm to be described in the next section. \begin{lem}\label{lem1}{\em (see \cite{TX1993} Lemma 1)} Let $\{\alpha_k\}$ and $\{\sigma_k\}$ be two sequences of nonnegative numbers such that $\alpha_{k+1} \leq \alpha_k + \sigma_k$ for all $k \in \mathbb{N}$, where $\sum_{k=1}^{\infty} \sigma_k < \infty$. Then the sequence $\{\alpha_k\}$ is convergent. \end{lem} \begin{lem}\label{lem2}{\em (see \cite{P1979})} Let $\cH$ be a Hilbert space, $\{x^k\}$ a sequence in $\cH$. 
Let $\{r_k\}$ be a sequence of nonnegative numbers such that $\sum_{k=1}^{\infty} r_k = +\infty$ and set $z^k := \dfrac{\sum_{i=1}^k r_i x^i}{\sum_{i=1}^k r_i}$. Assume that there exists a nonempty, closed convex set $S \subset \cH$ satisfying: \begin{itemize}[leftmargin = 0.5 in] \item[(i)] For every $z \in S$, $\lim_{k \to \infty}\|z^k - z\|$ exists; \item[(ii)] Any weak cluster point of the sequence $\{z^k\}$ belongs to $S$. \end{itemize} Then the sequence $\{z^k\}$ weakly converges. \end{lem} \begin{lem}\label{LemmaXu} {\em (see \cite{X2002})} Let $\{\lambda_k\}, \{\delta_k\}, \{\sigma_k\}$ be sequences of real numbers such that \begin{itemize}[leftmargin = 0.5 in] \item[(i)] $\lambda_k \in (0, 1)$ for all $k \in \mathbb{N}$; \item[(ii)] $\sum_{k = 1}^{\infty} \lambda_k = +\infty$; \item[(iii)] $\limsup_{k \to +\infty} \delta_k \le 0$; \item[(iv)] $\sum_{k = 1}^{\infty} |\sigma_k| < +\infty$. \end{itemize} Suppose that $\{\alpha_k\}$ is a sequence of nonnegative real numbers satisfying \beqs \alpha_{k+1} \le (1 - \lambda_k) \alpha_k + \lambda_k \delta_k + \sigma_k \qquad \forall k \in \mathbb{N}. \eeqs Then we have $\lim_{k \to +\infty} \alpha_k = 0$. \end{lem} \section{The problem, algorithm and its convergence}\label{SectionAlgorithm} \subsection{The problem} In what follows, for the equilibrium problem $$\text{Find}\ x \in C: f(x, y) \geq 0 \ \forall y \in C \eqno(EP)$$ we suppose that $f(x, y) = f_1(x, y) + f_2(x, y)$ and that $f_i(x, x) = 0$ ($i=1, 2$) for every $x, y \in C$. The following assumptions for the bifunctions $f, f_1, f_2$ will be used in the sequel.
\begin{itemize}[leftmargin = 0.5 in] \item[(A1)] For each $i =1, 2$ and each $x\in C$, the function $f_i(x, \cdot)$ is convex and subdifferentiable, while $f(\cdot, y)$ is weakly upper semicontinuous on $C$; \item[(A2)] If $\{x^k\} \subset C$ is bounded, then for each $i = 1, 2$, the sequence $\{g^k_i\}$ with $g^k_i \in \partial_2 f_i(x^k,x^k)$ is bounded, where $\partial_2$ denotes the subdifferential with respect to the second variable; \item[(A3)] The bifunction $f$ is monotone on $C$. \end{itemize} Assumption (A2) has been used in e.g. \cite{S2011}. Note that Assumption (A2) is satisfied if the functions $f_1$ and $f_2$ are jointly weakly continuous on an open convex set containing $C$ (see \cite{VSN2015} Proposition 4.1).\\ The dual problem of $(EP)$ is $$\text{Find}\ x \in C: f(y, x) \leq 0 \ \forall y \in C. \eqno(DEP)$$ We denote the solution sets of $(EP)$ and $(DEP)$ by $S(C,f)$ and $S^d(C,f)$, respectively. A relationship between $S(C,f)$ and $S^d(C,f)$ is given in the following lemma. \begin{lem}\label{lem3}{\em (see \cite{KS2000} Proposition 2.1)} (i) If $f(\cdot, y)$ is weakly upper semicontinuous and $f(x, \cdot)$ is convex for all $x, y \in C$, then $S^d(C,f) \subset S(C,f)$. (ii) If $f$ is pseudomonotone, then $S(C, f) \subset S^d(C, f)$. \end{lem} Therefore, under the assumptions (A1)-(A3) one has $S(C, f) = S^d(C, f)$. In this paper, we suppose that $S(C, f)$ is nonempty. \subsection{The algorithm and its convergence analysis} The algorithm below is a gradient-type algorithm for the paramonotone equilibrium problem $(EP)$. The stepsize is computed as in the algorithm for equilibrium problems in \cite{SS2011}. \begin{algorithm}[H] \caption{A splitting algorithm for solving paramonotone or strongly pseudo-monotone equilibrium problems.}\label{alg2} \begin{algorithmic} \State \textbf{Initialization:} Pick $x^0\in C$. Choose a sequence $\{\beta_k\}_{k \geq 0}$ of positive numbers satisfying the following conditions \beqs \quad \sum_{k=0}^\infty \beta_k = +\infty, \quad \sum_{k=0}^\infty \beta_k^2 < +\infty.
\eeqs \State \textbf{Iteration} $k = 0, 1, \ldots$:\\ \qquad Take $g_1^k \in \partial_2 f_1(x^k, x^k), g_2^k \in \partial_2 f_2(x^k, x^k)$.\\ \qquad Compute \begin{align*} \eta_k &:= \max\{\beta_k, \|g_1^k\|, \|g_2^k\|\}, \ \lambda_k := \dfrac{\beta_k}{\eta_k},\\ y^k &:= \arg\min\{\lambda_k f_1(x^k, y) + \dfrac{1}{2}\|y - x^k\|^2 \mid y \in C\},\\ x^{k+1} &:= \arg\min\{\lambda_k f_2(x^k, y) +\dfrac{1}{2}\|y - y^k\|^2 \mid y \in C\}. \end{align*} \end{algorithmic} \end{algorithm} \begin{thm} \label{thm1} In addition to the assumptions {\em (A1), (A2), (A3)} we suppose that $f$ is paramonotone on $C$, and that either int $C \not=\emptyset$ or for each $x\in C$ both bifunctions $f_1(x, \cdot)$, $f_2(x, \cdot)$ are continuous at a point in $C$. Then the sequence $\{x^k\}$ generated by Algorithm \ref{alg2} converges weakly to a solution of $(EP)$. Moreover, if $f$ is strongly pseudomonotone, then $\{x^k\}$ strongly converges to the unique solution of $(EP)$. \end{thm} \begin{proof} First, we show that, for each $x^* \in S(C, f)$, the sequence $\{\|x^k - x^*\|\}$ is convergent. Indeed, for each $k \geq 0$, for simplicity of notation, let \begin{equation*} h_1^k (x) := \lambda_k f_1(x^k, x) + \frac{1}{2}\| x-x^k \|^2, \end{equation*} \begin{equation*} h_2^k (x) := \lambda_k f_2(x^k, x) + \frac{1}{2}\| x-y^k \|^2. \end{equation*} By Assumption (A1), the function $h_1^k$ is strongly convex with modulus $1$ and subdifferentiable, which implies \begin{equation} \label{ct2} h_1^k (y^k) + \langle u_1^k, x - y^k \rangle + \frac{1}{2} \| x - y^k \|^2 \leq h_1^k(x) \quad \forall x \in C \end{equation} for any $u_1^k \in \partial h_1^k (y^k)$. On the other hand, from the definition of $y^k$, using the regularity condition, by the optimality condition for convex programming, we have $$0 \in \partial h_1^k (y^k) + N_C (y^k),$$ where $N_C(y^k)$ denotes the normal cone of $C$ at $y^k$. In turn, this implies that there exists $u_1^k \in \partial h_1^k (y^k)$ with $-u_1^k \in N_C(y^k)$, so that $\langle u_1^k, x - y^k \rangle \geq 0$ for all $x \in C$.
Hence, from (\ref{ct2}), for each $x \in C$, it follows that $$ h_1^k(y^k) + \frac{1}{2} \| x - y^k \|^2 \leq h_1^k(x), $$ i.e., $$\lambda_k f_1(x^k, y^k) + \frac{1}{2}\| y^k - x^k \|^2 + \dfrac{1}{2}\|x - y^k\|^2 \leq \lambda_k f_1(x^k, x) + \frac{1}{2}\| x - x^k \|^2,$$ or equivalently, \begin{equation} \label{ct3} \|y^k - x\|^2 \leq \|x^k - x\|^2 +2\lambda_k \left( f_1(x^k, x)-f_1(x^k, y^k) \right) - \|y^k - x^k\|^2. \end{equation} Using the same argument for $x^{k+1}$, we obtain \begin{equation} \label{ct4} \|x^{k+1} - x\|^2 \leq \|y^k - x\|^2 +2\lambda_k \left( f_2(x^k, x) -f_2(x^k, x^{k+1}) \right) - \|x^{k+1} - y^k\|^2. \end{equation} Combining (\ref{ct3}) and (\ref{ct4}) yields \begin{align} \label{ct5} \|x^{k+1} - x\|^2 &\leq \|x^k - x\|^2 - \|y^k - x^k\|^2 - \|x^{k+1} - y^k\|^2 \notag\\ & \quad + 2 \lambda_k \left( f_1(x^k, x) + f_2(x^k, x) \right) -2 \lambda_k \left( f_1(x^k, y^k) + f_2(x^k, x^{k+1}) \right) \notag\\ &= \|x^k - x\|^2 - \|y^k - x^k\|^2 - \|x^{k+1} - y^k\|^2 \notag\\ & \quad + 2 \lambda_k f(x^k, x) - 2\lambda_k \left( f_1(x^k, y^k) + f_2(x^k, x^{k+1}) \right). \end{align} From $g_1^k \in \partial_2f_1(x^k,x^k)$ and $f_1(x^k, x^k) = 0$, it follows that \beqs f_1(x^k, y^k) - f_1(x^k, x^k) \ge \langle g_1^k, y^k - x^k \rangle, \eeqs which implies \beq \label{ct6} -2 \lambda_k f_1(x^k, y^k) \leq - 2 \lambda_k \langle g_1^k, y^k - x^k \rangle. \eeq By using the Cauchy-Schwarz inequality and the fact that $\|g_1^k\| \leq \eta_k$, from (\ref{ct6}) one can write \beq \label{ct7} -2 \lambda_k f_1(x^k, y^k) \leq 2 \frac{\beta_k}{\eta_k} \eta_k \|y^k - x^k\| = 2 \beta_k \|y^k - x^k\|. \eeq By the same argument, we obtain \beq \label{ct8} -2 \lambda_k f_2(x^k, x^{k+1}) \leq 2 \beta_k \|x^{k+1} - x^k\|. 
\eeq Replacing (\ref{ct7}) and (\ref{ct8}) to (\ref{ct5}) we get \begin{align} \label{ct9} \|x^{k+1} - x\|^2 &\leq \|x^k - x\|^2 + 2\lambda_k f(x^k, x) \notag\\ &\quad + 2 \beta_k \|y^k - x^k\| + 2 \beta_k \|x^{k+1}-x^k\| - \|y^k - x^k\|^2 - \|x^{k+1} - y^k\|^2 \notag\\ &= \|x^k - x\|^2 + 2\lambda_k f(x^k, x) \notag \\ &\quad + 2\beta_k^2 - \left(\|y^k - x^k\| - \beta_k\right)^2 - \left(\|x^{k+1} - x^k\| - \beta_k\right)^2 \notag\\ &\leq \|x^k - x\|^2 + 2\lambda_k f(x^k, x) + 2\beta_k^2. \end{align} Note that by definition of $x^* \in S(f, C) = S^d(f, C)$ we have $f(x^k, x^*) \leq 0$. Therefore, by taking $x = x^*$ in (\ref{ct9}) we obtain \begin{align}\label{ct12} \|x^{k+1} - x^*\|^2 &\leq \|x^k - x^*\|^2 + 2 \lambda_k f(x^k, x^*) + 2\beta_k^2 \notag\\ &\leq \|x^k - x^*\|^2 + 2\beta_k^2. \end{align} Since $\sum_{k = 0}^{\infty} \beta_k^2 < +\infty$ by assumption, in virtue of Lemma \ref{lem1}, it follows from (\ref{ct12}) that the sequence $\{\|x^k - x^*\|\}$ is convergent. Next, we prove that any cluster point of the sequence $\{x^k\}$ is a solution of $(EP)$. Indeed, from (\ref{ct12}) we have \beq\label{ct16bc} - 2 \lambda_k f(x^k, x^*) \leq \|x^k - x^*\|^2 - \|x^{k+1} - x^*\|^2 + 2 \beta_k^2 \quad \forall k \in \mathbb{N}. \eeq By summing up we obtain \beqs 2 \displaystyle \sum_{i = 0}^\infty \lambda_i\left(- f(x^i, x^*)\right) \leq \|x^0 - x^*\|^2 + 2 \displaystyle \sum_{i = 0}^\infty \beta_i^2 < \infty. \eeqs On the other hand, by Assumption (A2) the sequences $\{g_1^k\}, \{g_2^k\}$ are bounded. This fact, together with the construction of $\{\beta_k\}$, implies that there exists $M > 0$ such that $\|g_1^k\| \leq M, \|g_2^k\| \leq M, \beta_k \leq M$ for all $k \in \mathbb{N}$. Hence for each $k \in \mathbb{N}$ we have \beqs \eta_k = \max\{\beta_k, \|g_1^k\|, \|g_2^k\|\} \leq M, \eeqs which implies $\sum_{i=0}^\infty \lambda_i = \infty$. 
Thus, since $f(x^i,x^*) \leq 0$ for all $i$ and $\sum_{i=0}^\infty \lambda_i = \infty$, it holds that $$\limsup_{k \to \infty} f(x^k,x^*) = 0 \quad \forall x^* \in S(C,f).$$ Fix $x^* \in S(C,f)$ and let $\{x^{k_j}\}$ be a subsequence of $\{x^k\}$ such that $$\limsup_{k \to \infty} f(x^k,x^*) = \lim_{j \to \infty} f(x^{k_j},x^*) = 0.$$ Since $\{x^{k_j}\}$ is bounded, we may assume that $\{x^{k_j}\}$ weakly converges to some $\bar{x}$. Since $f(\cdot,x^*)$ is weakly upper semicontinuous by assumption (A1), we have \beq\label{ct21} f(\bar{x},x^*) \geq \limsup_{j \to \infty} f(x^{k_j},x^*) = 0. \eeq Then it follows from the monotonicity of $f$ that $f(x^*,\bar{x}) \leq 0$. On the other hand, since $x^* \in S(C, f)$, by definition we have $f(x^*,\bar{x}) \geq 0$. Therefore we obtain $f(x^*,\bar{x}) = 0$. Again, the monotonicity of $f$ implies $f(\bar{x}, x^*) \le 0$, and therefore, by (\ref{ct21}) one has $f(\bar{x}, x^*) = 0$. Since $f(x^*,\bar{x}) = 0$ and $f(\bar{x}, x^*) = 0$, it follows from the paramonotonicity of $f$ that $\bar{x}$ is a solution to $(EP)$. Since $\{\|x^k - \bar{x}\|\}$ is convergent, from the fact that $x^{k_j}$ weakly converges to $\bar{x}$, we can conclude that the whole sequence $\{x^k\}$ weakly converges to $\bar{x}$. Note that if $f$ is strongly pseudomonotone, then Problem $(EP)$ has a unique solution (see \cite{MQ2015} Proposition 1). Let $x^*$ be the unique solution of $(EP)$. By definition of $x^*$ we have \beqs f(x^*, x) \ge 0 \quad \forall x \in C, \eeqs which, by strong pseudomonotonicity of $f$, implies \beq\label{ct22} f(x, x^*) \le - \beta \|x-x^*\|^2 \quad \forall x \in C. \eeq By choosing $x = x^k$ in (\ref{ct22}) and substituting into (\ref{ct9}) with $x = x^*$, we obtain \beqs \|x^{k+1} - x^*\|^2 \le (1 - 2\beta\lambda_k) \|x^k - x^*\|^2 + 2 \beta_k^2 \quad \forall k \in \mathbb{N}, \eeqs which together with the construction of $\beta_k$ and $\lambda_k$, by virtue of Lemma \ref{LemmaXu} with $\delta_k \equiv 0$, implies that \beqs \lim_{k \to +\infty} \|x^k - x^*\|^2 = 0, \eeqs i.e., $x^k$ strongly converges to the unique solution $x^*$ of $(EP)$.
$\square$ \end{proof} The following simple example, taken from \cite{FP2003}, shows that without paramonotonicity the algorithm may fail to converge. Let $f(x,y):= \langle Ax, y-x\rangle$, $C:= \mathbb{R}^2$, and \beqs A = \begin{bmatrix} 0 & 1\\ -1 & 0 \end{bmatrix}. \eeqs Clearly, $x^*=(0,0)^T$ is the unique solution of this problem. It is easy to check that this bifunction is monotone, but not paramonotone. An elementary computation shows that $$x^{k+1} = x^k - \lambda_k A x^k = (x^k_1-\lambda_k x^k_2, x^k_2 + \lambda_k x^k_1)^T.$$ Thus, $\|x^{k+1}\|^2 = (1+\lambda^2_k)\|x^k\|^2 > \|x^k\|^2$ if $x^k \neq 0$, which implies that the sequence $\{x^k\}$ does not converge to the solution $x^* = 0$ for any starting point $x^0 \neq 0$. To illustrate our motivation, let us consider the following optimization problem \begin{align*} (OP) \quad \min \quad &\varphi(x) := \frac{1}{2} x^T Q x - \displaystyle \sum_{i=1}^n \ln (1 + \max\{0, x_i\})\\ \text{subject to} \quad &x_i \in [a_i, b_i] \subset \mathbb{R} \quad (i = 1, \ldots, n), \end{align*} where $Q \in \mathbb{R}^{n \times n}$ is a positive semidefinite matrix. This problem is equivalent to the following equilibrium problem \beqs \text{Find } x^* \in C \text{ such that } f(x^*, y) \ge 0 \ \forall y \in C, \eeqs where $C := [a_1, b_1] \times \ldots \times [a_n, b_n]$, and $f(x, y) := \varphi(y) - \varphi(x)$. We can split the function $f(x, y) = f_1(x, y) + f_2(x, y)$ by taking \beqs f_1(x, y) = \frac{1}{2} y^T Q y - \frac{1}{2} x^T Q x, \eeqs and \beqs f_2(x, y) = \displaystyle \sum_{i = 1}^n \left( \ln(1 + \max\{0, x_i\}) - \ln(1 + \max\{0, y_i\}) \right). \eeqs Since $Q$ is a positive semidefinite matrix and $\ln(\cdot)$ is concave on $(0, +\infty)$, the functions $f_1, f_2$ are equilibrium functions satisfying conditions (A1)-(A3).
Clearly, $f_1(x, \cdot)$ is convex quadratic but not necessarily separable, while $f_2(x, \cdot)$ is separable but not necessarily differentiable; their sum possesses neither structure, which motivates handling the two parts separately. In order to obtain convergence without paramonotonicity, we use the iterates $x^k$ to define an ergodic sequence by taking $$z^k:= \dfrac{\sum_{i=0}^k \lambda_i x^i} { \sum_{i=0}^k \lambda_i}.$$ Then we have the following convergence result. \begin{thm}\label{thm2} Under assumptions {\em (A1)--(A3)}, the ergodic sequence $\{z^k\}$ converges weakly to a solution of $(EP)$. \end{thm} \begin{proof} In the proof of Theorem \ref{thm1}, we have shown that the sequence $\{\|x^k - x^*\|\}$ is convergent. By the definition of $z^k$, the sequence $\{\|z^k - x^*\|\}$ is convergent too. In order to apply Lemma \ref{lem2}, we now show that all weak cluster points of $\{z^k\}$ belong to $S(C, f)$. In fact, using (\ref{ct9}) together with the monotonicity of $f$, by taking the sum over all indices we have, for every $x \in C$, \begin{align*} 2 \displaystyle \sum_{i = 0}^k \lambda_i f(x, x^i) &\leq \displaystyle \sum_{i = 0}^k \left(\|x^i - x\|^2 - \|x^{i+1} - x\|^2 + 2 \beta_i^2\right)\\ &= \|x^0 - x\|^2 - \|x^{k+1} - x\|^2 + 2 \displaystyle \sum_{i = 0}^k \beta_i^2\\ &\leq \|x^0 - x\|^2 + 2 \displaystyle \sum_{i = 0}^k \beta_i^2. \end{align*} By using this inequality, from the definition of $z^k$ and the convexity of $f(x, \cdot)$, we can write \begin{align}\label{ct18} f(x, z^k) &= f\left(x, \dfrac{\sum_{i=0}^k \lambda_i x^i}{\sum_{i=0}^k \lambda_i} \right) \notag\\ &\leq \dfrac{\sum_{i = 0}^k \lambda_i f(x, x^i)}{ \sum_{i = 0}^k \lambda_i} \notag\\ &\leq \dfrac{\|x^0 - x\|^2 + 2 \sum_{i = 0}^k \beta_i^2}{2 \sum_{i = 0}^k \lambda_i}. \end{align} As shown in the proof of Theorem \ref{thm1}, \beqs \lambda_k = \dfrac{\beta_k}{\eta_k} \geq \dfrac{\beta_k}{M}.
\eeqs Since $\sum_{k = 0}^{\infty} \beta_k = +\infty$, we have $\sum_{k=0}^{\infty} \lambda_k = +\infty.$ Then, it follows from (\ref{ct18}) that \beq\label{ct20} \limsup_{k \to \infty} f(x, z^k) \leq 0. \eeq Let $\bar{z}$ be any weak cluster point of $\{z^k\}$. Then there exists a subsequence $\{z^{k_j}\}$ of $\{z^k\}$ such that $z^{k_j} \rightharpoonup \bar{z}$. Since $f(x, \cdot)$ is convex and lower semicontinuous, it is weakly lower semicontinuous, and it follows from (\ref{ct20}) that \beqs f(x, \bar{z}) \le 0. \eeqs Since this inequality holds for arbitrary $x \in C$, it means that $\bar{z} \in S^d(C, f) = S(C, f)$. Thus it follows from Lemma \ref{lem2} that the sequence $\{z^k\}$ converges weakly to a point $z^* \in S(C, f)$, i.e., to a solution of $(EP)$. $\square$ \end{proof} \begin{rem}\label{rema2} In the case where $\cH$ is finite-dimensional, we have $\|z^{k+1}-z^k\| \to 0$ as $k\to \infty$. Since $\sum_{k = 0}^{\infty} \lambda_k^2 < +\infty$, at large enough iterations $k$ the value of $\lambda_k$ is close to $0$, which makes the intermediate iteration points $y^k, x^{k+1}$ close to $x^k$. In turn, the newly generated ergodic point $z^{k+1}$ differs little from the previous one. This slows down the convergence of the sequence $\{z^k\}$. In order to enhance the convergence of the algorithm, this suggests a restart strategy: replace the starting point $x^0$ with $x^k$ whenever $\|z^{k+1}-z^k\| \leq \tau$ for an appropriate $\tau > 0$. \end{rem} \section{Numerical experiments}\label{SectionExperiments} We used MATLAB R2016a for implementing the proposed algorithms. All experiments were conducted on a computer with a Core i5 processor, 16 GB of RAM, and Windows 10. As noted in Remark \ref{rema2}, to improve the performance of our proposed algorithm, we reset $x^0$ to $x^k$ whenever $\|z^{k+1}-z^k\| \leq \tau$ with an appropriate $\tau > 0$ and then restart the algorithm from the beginning with the new starting point $x^0$ if the stopping criterion $\|z^{k+1}-z^k\| \leq \epsilon$ is still not satisfied.
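Before turning to the experiments, the contrast between the raw iterates and the ergodic averages of Theorem \ref{thm2} can be reproduced in a few lines on the rotation example above. The sketch below runs Algorithm \ref{alg2} with the splitting $f_1 = f$, $f_2 = 0$ (so both prox steps are explicit); the choice $\beta_k = 1/(k+1)$ and the use of Python/NumPy, rather than the MATLAB implementation of the paper, are purely illustrative assumptions.

```python
import numpy as np

# Algorithm 1 plus ergodic averaging on the monotone, non-paramonotone
# example f(x, y) = <Ax, y - x>, C = R^2, with f1 = f and f2 = 0.
# Here g_1^k = A x^k and g_2^k = 0, so both prox subproblems are explicit.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])

x = np.array([1.0, 0.0])            # starting point x^0
z_num, lam_sum = np.zeros(2), 0.0   # accumulators for the ergodic average
for k in range(200_000):
    beta = 1.0 / (k + 1)            # illustrative choice of beta_k
    g1 = A @ x                      # g_1^k in the subdifferential d_2 f_1(x^k, x^k)
    eta = max(beta, np.linalg.norm(g1))
    lam = beta / eta                # stepsize lambda_k = beta_k / eta_k
    z_num += lam * x
    lam_sum += lam
    # y^k = argmin lam*f1(x^k, y) + 0.5||y - x^k||^2 over C = R^2,
    # and x^{k+1} = y^k since f2 = 0
    x = x - lam * g1

z = z_num / lam_sum                 # ergodic iterate z^k of Theorem 2

# The iterates x^k spiral away from the unique solution x* = 0 ...
print(np.linalg.norm(x))            # stays above 1
# ... while the ergodic averages z^k remain much closer to it.
print(np.linalg.norm(z))
```

Consistent with the example, $\|x^k\|$ is strictly increasing, while the averaged sequence approaches the solution only at the slow rate governed by $\sum_i \lambda_i$, which is what motivates the restart strategy above.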
In all experiments, we set $\tau := 10^{-3}$, and terminated the algorithm when either the number of iterations exceeded $10^4$ or the distance between two consecutive ergodic points was less than $\epsilon := 10^{-4}$. All the tests reported below were solved within 60 seconds. We applied Algorithm 1 to compute a Nash equilibrium of a linear Cournot oligopolistic model with some additional joint constraints on the model's variables. The precise description of this model is as follows. There are $n$ firms producing a common homogeneous commodity. Let $x_i$ be the production level of firm $i$, and $x = (x_1, \ldots, x_n)$ the vector of production levels of all these firms. Assume that the production price $p_i$ given by firm $i$ depends on the total quantity $\sigma = \sum_{i = 1}^n x_i$ of the commodity as follows \begin{center} $p_i(\sigma) = \alpha_i - \delta_i \sigma\qquad(\alpha_i > 0, \delta_i > 0, i = 1, \ldots, n).$ \end{center} Let $h_i(x_i)$ denote the production cost of firm $i$ when its production level is $x_i$ and assume that the cost functions are affine, of the form \begin{center} $h_i(x_i) = \mu_i x_i + \xi_i \qquad(\mu_i > 0, \xi_i \ge 0, i = 1, \ldots, n).$ \end{center} The profit of firm $i$ is then given by \begin{center} $q_i(x_1, \ldots, x_n) = x_i p_i(x_1 + \ldots + x_n) - h_i(x_i) \qquad (i = 1, \ldots, n).$ \end{center} Each firm $i$ has a strategy set $C_i \subset \mathbb{R}_+$ consisting of its possible production levels, i.e., $x_i \in C_i$. Assume that there are lower and upper bounds on the quota of the commodity (i.e., there exist $\underline{\sigma}, \overline{\sigma} \in \mathbb{R}_+$ such that $\underline{\sigma} \le \sigma = \sum_{i = 1}^n x_i \le \overline{\sigma}$). So the set of feasible production levels can be described by \beqs \Omega := \{x \in \mathbb{R}^n_+ \ | \ x_i \in C_i (i = 1, \ldots, n), \sum_{i = 1}^n x_i \in [\underline{\sigma}, \overline{\sigma}]\}.
\eeqs Each firm $i$ seeks to maximize its profit by choosing the corresponding production level $x_i$ under the presumption that the production levels of the other firms are treated as parametric inputs. In this context, a Nash equilibrium point for the model is a point $x^* \in \Omega$ satisfying \begin{center} $q_i(x^*[x_i]) \le q_i(x^*) \qquad \forall x_i \text{ with } x^*[x_i] \in \Omega, \ i = 1,\ldots, n,$ \end{center} where $x^*[x_i]$ stands for the vector obtained from $x^*$ by replacing the component $x_i^*$ by $x_i$. It means that, if some firm $i$ leaves its equilibrium strategy while the others keep their equilibrium positions, then the profit of firm $i$ does not increase. It has been shown that the unique Nash equilibrium point $x^*$ is also the unique solution to the following equilibrium problem \begin{equation} \textrm {Find $x \in \Omega$ such that $f(x, y) := (\tilde{B} x + \mu - \alpha)^T (y - x) + \frac{1}{2} y^T B y - \frac{1}{2} x^T B x \ge 0 \ \forall y \in \Omega$,} \tag{$EP1$} \end{equation} where $\mu = (\mu_1, \ldots, \mu_n)^T, \alpha = (\alpha_1, \ldots, \alpha_n)^T$, and \beqs \tilde{B} = \begin{bmatrix} 0 & \delta_1 & \delta_1 & \ldots & \delta_1\\ \delta_2 & 0 & \delta_2 & \ldots & \delta_2\\ \cdot & \cdot & \cdot & \ldots & \cdot\\ \delta_n & \delta_n & \delta_n & \ldots & 0 \end{bmatrix},\qquad B = \begin{bmatrix} 2 \delta_1 & 0 & 0 & \ldots & 0\\ 0 & 2 \delta_2 & 0 & \ldots & 0\\ \cdot & \cdot & \cdot & \ldots & \cdot\\ 0 & 0 & 0 & \ldots & 2 \delta_n \end{bmatrix}. \eeqs Note that $f(x, y) = f_1(x, y) + f_2(x, y)$ in which \begin{align*} f_1(x, y) &= (\tilde{B} x + \mu - \alpha)^T (y - x),\\ f_2(x, y) &= \frac{1}{2} y^T B y - \frac{1}{2} x^T B x. \end{align*} It is obvious that $f, f_1, f_2$ are equilibrium functions satisfying conditions (A1)-(A3). For numerical experiments, we set $C_i = [10, 50]$ for $i = 1, \ldots, n$, $\underline{\sigma} = 10n + 10$, and $\overline{\sigma} = 50n - 10$. The initial guess was set to $x^0_i = 30$ $(i = 1, \ldots, n)$.
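For this symmetric data set, problem $(EP1)$ admits a quick sanity check: with the parameter values used in the experiments reported below ($\alpha_i = 120$, $\delta_i = 1$, $\mu_i = 30$), the classical unconstrained Cournot equilibrium $x_i^* = (\alpha_i - \mu_i)/((n+1)\delta_i)$ lies strictly inside the box and quota constraints, and should therefore zero the gradient of $f(x, \cdot)$ at $y = x$. The NumPy sketch below (illustrative only; the paper's experiments used MATLAB) verifies this first-order condition.

```python
import numpy as np

# Sanity check for (EP1): with the symmetric data alpha_i = 120,
# delta_i = 1, mu_i = 30, the unconstrained Cournot equilibrium
# x_i = (alpha - mu) / ((n + 1) * delta) should satisfy
#   grad_y f(x, y)|_{y=x} = (Bt x + mu - alpha) + B x = 0,
# provided the box and quota constraints are inactive (true here).
n = 5
alpha = np.full(n, 120.0)
mu = np.full(n, 30.0)
delta = np.ones(n)

Bt = np.outer(delta, np.ones(n)) - np.diag(delta)   # B-tilde: zero diagonal
B = np.diag(2.0 * delta)                            # diagonal matrix B

x_star = np.full(n, (120.0 - 30.0) / (n + 1))       # = 15, inside [10, 50]
residual = Bt @ x_star + mu - alpha + B @ x_star
print(np.max(np.abs(residual)))                     # 0.0
```

The check also confirms feasibility: $\sum_i x_i^* = 75 \in [\underline{\sigma}, \overline{\sigma}] = [60, 240]$ for $n = 5$, so the joint quota constraint is indeed inactive at the equilibrium.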
We tested the algorithm on problem instances with different numbers $n$ of companies but having the following fixed values of parameters $\alpha_i = 120, \delta_i = 1, \mu_i = 30$ for $i = 1, \ldots, n$. Table \ref{TableExp2} reports the outcomes of Algorithm 1 with the restart strategy applied to these instances for different values of the dimension $n$ and appropriate values of the parameters $\beta_k$. \begin{table}[H] \centering \begin{tabular}{|c|c|c||c|c|} \hline \multirow{3}{*}{$n$} & \multirow{3}{*}{$\beta_k$} & \multirow{2}{*}{Total number of} & \multirow{2}{*}{Number of} & \multirow{2}{*}{Number of iterations}\\ & & & & \\ & & iterations & restarts & from the last restart\\ \hline 2 & $10/(k+1)$ & 2 & 0 & 2\\ 3 & $10/(k+1)$ & 639 & 2 & 9\\ 4 & $10/(k+1)$ & 911 & 2 & 4\\ 5 & $10/(k+1)$ & 1027 & 2 & 2\\ 10 & $10/(k+1)$ & 1201 & 1 & 2\\ 10 & $100/(k+1)$ & 266 & 1 & 2\\ 15 & $10/(k+1)$ & 2967 & 2 & 2\\ 15 & $100/(k+1)$ & 408 & 1 & 2\\ 20 & $10/(k+1)$ & 5007 & 2 & 2\\ 20 & $100/(k+1)$ & 539 & 1 & 2\\ \hline \end{tabular} \caption{Performance of Algorithm 1 in solving the linear Cournot oligopolistic model with additional joint constraints.} \label{TableExp2} \end{table} On one hand, the results reported in Table \ref{TableExp2} show the applicability of Algorithm 1 for solving the linear Cournot--Nash oligopolistic model with joint constraints. On the other hand, it follows from this table that the choice of the parameter $\beta_k$ is crucial for the convergence of the algorithm, since changing the value of this parameter may significantly reduce the number of iterations. Furthermore, the last two columns of Table \ref{TableExp2} show that, by applying our suggested restart strategy, we can find `good' starting points from which the algorithm terminates after only a few iterations. \section{Conclusion}\label{SectionConclusion} We have proposed splitting algorithms for monotone equilibrium problems where the bifunction is the sum of two bifunctions.
The first variant uses an ergodic sequence to ensure convergence without resorting to extragradient (double-projection) steps. The second, for paramonotone equilibrium problems, ensures convergence of the iterates themselves without using the ergodic sequence. A restart strategy has been used to enhance the convergence of the proposed algorithms.
\section{Introduction} \footnote{This is the peer reviewed version of the article which has been published in final form at \textit{Progress in Photovoltaics: Research and Applications} (\href{http://dx.doi.org/10.1002/pip.2787}{DOI: 10.1002/pip.2787}). This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving.} \label{sec:intro} \ifieeetrans \IEEEPARstart{I}II-V \else III-V \fi multi-junction solar cells currently demonstrate the highest conversion efficiencies among photovoltaic materials by a wide margin\cite{Green:2015bk}. However, high substrate and fabrication costs limit their use to applications, such as space or concentrator photovoltaics, that are more tolerant of cell cost. Currently, III-V multi-junction solar cells based on compound semiconductors lattice-matched to GaAs are by far the most mature technology. Replacing the germanium or GaAs substrate with silicon is a promising approach to reduce the substrate cost. However, the choice of III-V materials that are lattice-matched to silicon is very limited, and it is challenging to maintain the material quality of III-V materials that are lattice-mismatched to silicon. Moreover, the III-V materials that constitute the optimal band-gap combinations for dual-junction (2J) or triple-junction (3J) III-V/Si solar cells are generally less mature than standard III-V solar cell materials such as InGaP and GaAs\cite{Connolly:2014jm}. Designing and modeling III-V on silicon solar cells has been considered in a number of publications\cite{Connolly:2014jm,Jain:2014ko,Jain:2014ca,Jain:2012jk}. However, these publications mainly focus on the detailed layer design and optimization of III-V/Si solar cells. In this paper, we aim to investigate and review the design of III-V/Si solar cells from a different point of view, with an emphasis on the issues of the material quality of each subcell.
In this work, we choose the external radiative efficiency (ERE) as the measure of the material quality of solar cells. A similar approach has been applied to analyze state-of-the-art solar cells and predict their performance at high concentrations\cite{Green:2011ea,Chan:2012ej}. We will use this framework to address the following questions for III-V/Si solar cells: What are the essential criteria for III-V/Si solar cells in order to match the efficiencies of state-of-the-art III-V multi-junction cells? How do these criteria compare to the material quality of state-of-the-art III-V/Si solar cells? Would it be acceptable to sacrifice the material quality of III-V cells in exchange for better current-matching in III-V/Si solar cells? In this paper, we will first describe the modeling approach and assumptions that we use to predict the performance of the solar cells and how we estimate EREs from reported results for III-V/Si solar cells. We will then present the modeling results of III-V/Si solar cells with our estimated EREs taken into account. Finally, we will discuss the implications of these results for the design of III-V/Si solar cells. \section{Modeling Efficiencies of III-V on Silicon Solar Cells} \label{sec:modeling} We calculate the I-V characteristics of III-V/Si solar cells based on a detailed balance model\cite{Shockley:1961co,Araujo:1994jk,Nelson:1997fb}.
We first assume the principle of superposition for the total current density of the solar cell, i.e., the total current density is the sum of the recombination current density $J_{tot}(V)$ and the short-circuit current density $J_{sc}$, which are decoupled from each other: \begin{equation} \label{eqn:jv_eq} J(V)=-J_{sc}+J_{tot}(V) \end{equation} $J_{sc}$ can be written as the integration of the external quantum efficiency (EQE) multiplied by the input spectrum over photon energy $E$, namely, \begin{equation} \label{eqn:Jsc} J_{sc}=q \int_0^{\infty} \phi(E) \cdot \mbox{EQE}(E) dE \end{equation} where $\phi(E)$ is the incident photon flux spectrum. We assume flat, stepped EQEs in our calculations unless otherwise specified, i.e., \begin{equation} \label{eqn:qe_def} \mbox{EQE}(E)= \left\{ \begin{array}{ll} b , E \geq E_g \\ 0 , E < E_g \end{array} \right. \end{equation} where $E_g$ is the band gap of the material and $b$ is a chosen EQE value. As we mentioned in Section~\ref{sec:intro}, the external radiative efficiency (ERE) is defined as the fraction of the total recombination current that is radiative. The total recombination current $J_{tot}(V)$ can thus be related to the radiative recombination current $J_{rad}(V)$ by \begin{equation} \label{eqn:eta_r} J_{tot}(V)=J_{rad}(V)/\eta_r \end{equation} where $\eta_r$ is the ERE. Note that this equation assumes that the ERE is independent of the level of carrier injection. This may not be valid in the regime of high carrier injection. However, since we focus on III-V/Si solar cells at or near one-sun illumination in this study, assuming a constant ERE is reasonable.
The radiative recombination current density is calculated by a detailed balance approach: it equals the flux of radiatively emitted photons escaping from the solar cell per unit area, multiplied by the elementary charge $q$: \begin{equation} \label{eqn:gen_planck} J_{rad}(V)=\frac{2\pi q (n_c^2+n_s^2)}{\mbox{h}^3 \mbox{c}^2}\int_{0}^{\infty} \frac{ a(E) E^2 dE}{\exp\left(\frac{E-qV}{kT}\right)-1} \end{equation} where $n_c$ is the refractive index of the solar cell, $n_s$ is the refractive index of the medium over the solar cell, h is Planck's constant, c is the speed of light, k is the Boltzmann constant, and $T$ is the absolute temperature. $a(E)$ is the absorptivity of the cell, which is approximated to be equal to $\mbox{EQE}(E)$ in this study. Details of this model and the derivation of (\ref{eqn:gen_planck}) can be found in \cite{Araujo:1994jk} or \cite{Nelson:1997fb}. Equation (\ref{eqn:gen_planck}) assumes that all radiated photons can escape through a surface when the adjacent medium is a semiconductor. For a surface exposed to air, only the photons emitted within the light cone $\theta<\sin^{-1}(1/n_c)$ can escape from the surface. Also, the solar cell is assumed to be infinite and planar, with the emission from the edges neglected. In the calculation of the efficiencies of multi-junction solar cells, the reflections and parasitic transmission losses of the top surface and the interfaces between junctions are neglected. Also, it is assumed that all absorbed photons can be converted to electrical currents, whereas photons that are not absorbed fully transmit to the next junction. In other words, different EQE values of a junction are equivalent to different optical thicknesses of a subcell.
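For concreteness, (\ref{eqn:gen_planck}) with a flat, stepped EQE can be evaluated numerically and checked against the closed form obtained under the Boltzmann approximation ($E-qV \gg kT$). The sketch below does this for an illustrative, assumed set of values ($E_g = 1.42$ eV, $V = 1.05$ V, unity EQE, and the prefactor $n_c^2+n_s^2$ set to 1, i.e., emission into the air light cone only); none of these numbers are taken from the cells analyzed in this paper.

```python
import numpy as np

# Numerical check of the generalized Planck expression for J_rad(V) with a
# flat, stepped EQE (a(E) = b for E >= Eg).  Under the Boltzmann
# approximation (E - qV >> kT) the integral has the closed form
#   J_rad ~ C * b * kT * exp((qV - Eg)/kT) * (Eg^2 + 2*Eg*kT + 2*kT^2).
# Eg, V, b and the choice n_c^2 + n_s^2 = 1 are illustrative assumptions.
q = 1.602176634e-19           # elementary charge, C
h = 6.62607015e-34            # Planck constant, J s
c = 2.99792458e8              # speed of light, m/s
kT = 8.617333262e-5 * 300.0   # thermal energy at 300 K, eV

Eg, V, b = 1.42, 1.05, 1.0    # band gap (eV), voltage (V), step EQE
# prefactor converting the eV^3-valued integral to A/m^2 (n-factor = 1)
pref = 2.0 * np.pi * q**4 / (h**3 * c**2)

# integrate from Eg over ~58 kT; the neglected high-energy tail is tiny
E = np.linspace(Eg, Eg + 1.5, 20001)
integrand = b * E**2 / (np.exp((E - V) / kT) - 1.0)
j_rad_num = pref * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(E))

j_rad_ana = pref * b * kT * np.exp((V - Eg) / kT) * (Eg**2 + 2*Eg*kT + 2*kT**2)
rel_err = abs(j_rad_num - j_rad_ana) / j_rad_ana
print(j_rad_num)   # a few A/m^2 for these assumed values
print(rel_err)     # the Boltzmann closed form is an excellent approximation
```

The agreement between the numerical integral and the closed form also justifies using the analytic expression when scanning over voltages, as done for the subcell ERE estimates later in the paper.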
With these assumptions we get the following expression for the photons incident on the $i$-th junction: \begin{equation} \label{eqn:phi_i_E} \phi_i (E)=\phi_{i-1}(E)(1-\mbox{EQE}_{i-1}(E)) \end{equation} where $\phi_{i-1}(E)$ and $\mbox{EQE}_{i-1}(E)$ are the incident photon flux and the EQE of the junction stacked above the $i$-th junction. The I-V characteristics of the $i$-th subcell can then be calculated by substituting $\phi_{i}(E)$ and $\mbox{EQE}_{i}(E)$ into (\ref{eqn:Jsc}) and (\ref{eqn:gen_planck}). In this work, we only consider two-terminal, series-connected multi-junction solar cells. The I-V characteristics of the multi-junction device are thus obtained by interpolating the voltage of every subcell at each current density $J$ and adding up the subcell voltages, namely, \begin{equation} V_{tot}(J)=\sum_{i=1}^N V_i(J) \end{equation} where $V_i(J)$ is the voltage of the $i$-th subcell at the current density $J$. The efficiency is defined as the power at the maximum power point of $V_{tot}(J)$ divided by the total power of the illumination spectrum. The illuminating spectrum is AM1.5g normalized to 1000 $\mbox{W/m}^2$ throughout all the calculations in Section~\ref{sec:design}. Since our main focus in this study is the impact of non-radiative recombination due to imperfect material quality, loss mechanisms such as parasitic resistances and optical losses are neglected. Although ERE may depend on the geometry of solar cells\cite{Steiner:2013cc}, the geometry of multi-junction solar cells is fairly standard and can therefore be considered a constant factor. Due to the lattice mismatch between most III-V compounds and silicon, threading dislocations pose the main challenge to achieving high-efficiency III-V/Si solar cells. ERE can be related to the threading dislocation density by using an empirical model proposed in \cite{Yamaguchi:1989de}.
First, the reduction of minority carrier lifetime due to threading dislocations can be described by the following equation\cite{Yamaguchi:1989de}: \begin{equation} \frac{1}{\tau_{eff}}=\frac{1}{\tau_{rad}}+\frac{1}{\tau_{nr}}+\frac{\pi^3 D N_d}{4} \end{equation} where $\tau_{eff}$ is the effective minority carrier lifetime, $\tau_{rad}$ is the radiative lifetime, $\tau_{nr}$ is the non-radiative lifetime, $D$ is the minority carrier diffusion coefficient, and $N_d$ is the threading dislocation density. After that, by assuming that the carrier injection density is low and the geometry factors are identical, the reduction of ERE due to threading dislocations can be written as \cite{Yamaguchi:1989de} \begin{equation} \frac{\eta^{TD}_{r}}{\eta^{0}_r}=\frac{\tau^{TD}_{eff}}{\tau^{0}_{eff}} \end{equation} where $\eta^{TD}_{r}$ is the ERE with threading dislocations, $\eta^{0}_{r}$ is the ERE without threading dislocations, $\tau^{TD}_{eff}$ is the effective minority carrier lifetime with threading dislocations, and $\tau^{0}_{eff}$ is the effective minority carrier lifetime without threading dislocations. If we only take the degradation of the p-type base layer into account, and assume $\tau^{0}_{eff}=20$ ns \cite{Yamaguchi:1989de}\cite{Andre:2004jy} and $D=80~\mbox{cm}^2 \mbox{/s}$ \cite{Jain:2012jk}, ERE as a function of the threading dislocation density can then be calculated, as plotted in \figurename~\ref{fig:rad_DD_plot}. We will discuss this result further in Section~\ref{sec:design}. \begin{figure}[!t] \centering \includegraphics[width=2.5in]{./dd_rad_eta_fig_text} \caption{Fraction of ERE without threading dislocation ${\eta^{TD}_{r}}/{\eta^{0}_r}$ against threading dislocation densities.
The arrows mark the range of ${\eta^{TD}_{r}}/{\eta^{0}_r}$ of III-V subcells on silicon fabricated by different methods, as listed in \tablename~\ref{table:III-V_Si_ERE}.} \label{fig:rad_DD_plot} \end{figure} \section{EREs of State-of-the-Art III-V on Silicon Solar Cells} Based on the theoretical framework described in Section~\ref{sec:modeling}, we can analyze the EREs of III-V and III-V/Si solar cells reported in the publications that are relevant to this study. We use the assumptions underlying the principle of superposition in (\ref{eqn:jv_eq}) to approximate the ERE; that is, the value of $J_{sc}$ is equal to the total recombination current $J_{tot}$ at $V_{oc}$: \begin{equation} J_{tot}(V_{oc})=J_{sc} \end{equation} Following the definition of ERE in (\ref{eqn:eta_r}), the ERE can be written as \begin{equation} \label{eqn:extract_ERE} \eta_r=\frac{J_{rad}(V_{oc})}{J_{sc}}, \end{equation} where $J_{rad}(V_{oc})$ and $J_{sc}$ can be calculated from (\ref{eqn:gen_planck}) and (\ref{eqn:Jsc}), respectively. In this way, only the EQE, $V_{oc}$, and the illumination spectrum are required to estimate the ERE. The EREs of some state-of-the-art solar cells have been presented in \cite{Green:2011ea} and \cite{Geisz:2013hi}. These are listed in \tablename~\ref{table:stta_ERE}. We also calculated the EREs of the III-V/Si and III-V solar cells in \cite{Virshup:1985hw}\cite{Amano:1987es} and \cite{Soga:1995bv} using this approach. The results are listed in \tablename~\ref{table:III-V_Si_ERE}. For single-junction devices, estimating the ERE is straightforward using (\ref{eqn:extract_ERE}) with the measured open-circuit voltage and EQE. However, for multi-junction solar cells, we can often only measure the open-circuit voltage of the entire device, as opposed to those of the individual subcells. We thus need to make a few more assumptions in order to estimate the EREs of the subcells.
First, we assume that the $V_{oc}$ of the whole device is the sum of the $V_{oc}$ of each subcell, i.e., \begin{equation} \label{eqn:voc_sum} V_{oc}=\sum_{i=1}^{N}V_{oc}^i \end{equation} where $V_{oc}^i$ is the open-circuit voltage of the $i$-th junction of the $N$-junction solar cell. This assumption is reasonable because the current mismatch of the multi-junction cells that we selected for analysis is less than 10\%. We then select a reasonable range for the top cell's $V_{oc}$ and calculate its corresponding EREs. For 2J cells, for each top cell $V_{oc}$ we can then calculate the bottom cell's $V_{oc}$ and its ERE based on the assumption in (\ref{eqn:voc_sum}). We use this method to estimate and compare the EREs of three III-V 2J solar cells reported in \cite{Dimroth:2014jn}. These were fabricated by different methods: one InGaP/GaAs 2J cell on a GaAs substrate, one InGaP/GaAs 2J cell on a silicon substrate using wafer bonding, and one InGaP/GaAs 2J cell on a silicon substrate using direct growth. The estimated EREs of these 2J cells presented in \cite{Dimroth:2014jn} are plotted in \figurename~\ref{fig:dimroth_paper_radeta}. Since it is most likely that the InGaP cell and the GaAs cell have similar EREs, the actual range of EREs can be limited to the region near the intersection of the top and bottom cells' ERE curves. This is also because the y-axis in \figurename~\ref{fig:dimroth_paper_radeta} is on a log scale, and only the regions near the intersections cover a non-negligible range of EREs for both subcells. For 2J cells grown on GaAs, the range of the EREs can be further narrowed to the left-hand side of the intersection, because the ERE of GaAs is generally higher than that of InGaP.
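The intersection point just described, where both subcells have equal EREs, can even be solved in closed form under a simplified ideal-diode model of (\ref{eqn:extract_ERE}): this hypothetical stand-in for the full generalized-Planck calculation assumes step EQEs at the band gaps, a 300~K Boltzmann emission tail, and a common $J_{sc}$ for the series-connected subcells. The numbers below are the $V_{oc}$ and $J_{sc}$ of the 2J cell on a GaAs substrate from \cite{Dimroth:2014jn}.

```python
import math

K_B_T = 0.025852  # kT at 300 K [eV]

def j0_rad(eg):
    """Radiative saturation current density [A/cm^2] for a step EQE at the
    band gap eg [eV] (Boltzmann tail of the 300 K blackbody photon flux)."""
    prefactor = 2.0 * math.pi / (4.135667696e-15**3 * 2.99792458e10**2)  # 2*pi/(h^3 c^2)
    tail = K_B_T * (eg**2 + 2*eg*K_B_T + 2*K_B_T**2) * math.exp(-eg / K_B_T)
    return 1.602176634e-19 * prefactor * tail

def equal_ere_point(voc_total, jsc, eg_top, eg_bot):
    """Split a measured tandem Voc between two series-connected subcells so
    that both imply the same ERE = J0_rad*exp(Voc_i/kT)/Jsc; this is the
    intersection of the two ERE curves in the Voc scan."""
    j0_top, j0_bot = j0_rad(eg_top), j0_rad(eg_bot)
    voc_top = voc_total / 2 + 0.5 * K_B_T * math.log(j0_bot / j0_top)
    voc_bot = voc_total - voc_top
    common_ere = j0_top * math.exp(voc_top / K_B_T) / jsc
    return voc_top, voc_bot, common_ere

# InGaP (1.87 eV) / GaAs (1.42 eV) 2J on a GaAs substrate:
# Voc = 2.45 V, Jsc = 13.15 mA/cm^2 (values from Table II)
vt, vb, ere = equal_ere_point(2.45, 13.15e-3, 1.87, 1.42)
```

With these inputs the common ERE comes out on the order of $10^{-2}$, in line with the intersection read off the plot for the GaAs-substrate device.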
From \figurename~\ref{fig:dimroth_paper_radeta}, we can see that the intersection of the ERE lines of the 2J cell grown on a GaAs substrate is around $10^{-2}$, which is close to the values of the state-of-the-art GaAs single-junction cells listed in \tablename~\ref{table:stta_ERE}. The ERE of the wafer-bonded 2J cell drops to around $10^{-3}$, whereas the ERE of the 2J cell epitaxially grown on a silicon substrate is reduced to only around $10^{-6}$. These estimated EREs are also labeled in \figurename~\ref{fig:rad_DD_plot}. The dislocation density corresponding to these EREs for the direct-growth 2J-on-silicon cell matches the value reported in \cite{Dimroth:2014jn}, which is around $10^{8}~\mbox{cm}^{-2}$. This ERE estimation method was also applied to the case of GaInP/GaAs/Si 3J cells \cite{Essig:2015dw}. Since we have three subcells with unknown EREs, we have to make the additional assumption that the top and middle cells have the same ERE, so that we can make a two-dimensional plot as in \figurename~\ref{fig:dimroth_paper_radeta}. \figurename~\ref{fig:essig_paper_radeta} shows the estimated EREs of the 3J wafer-bonded InGaP/GaAs/Si solar cell measured at one sun and 111 suns. From the results in \figurename~\ref{fig:dimroth_paper_radeta}, we know that the EREs of the wafer-bonded InGaP/GaAs 2J solar cell are around $10^{-3}$. Because the cells in \figurename~\ref{fig:dimroth_paper_radeta} and \figurename~\ref{fig:essig_paper_radeta} came from the same research group, we may assume that the III-V subcells in \figurename~\ref{fig:essig_paper_radeta} have similar EREs at one sun. The ERE of the silicon bottom cell is then around $10^{-4}$. When the cell is illuminated with concentrated sunlight, the EREs of the subcells can rise by several orders of magnitude. This may be due to the saturation of defect states or the reduction of etendue loss \cite{Hirst:2010ch}. We also estimated the EREs of the III-V/Si solar cells presented in \cite{Yang:2014kz}.
Although the EQEs of the subcells are not reported in \cite{Yang:2014kz}, we infer that the EREs of the subcells are similar to the results in \figurename~\ref{fig:essig_paper_radeta} according to the reported values of the short-circuit currents and open-circuit voltages. These results are all listed in \tablename~\ref{table:III-V_Si_ERE}. \begin{figure}[!t] \centering \includegraphics[width=2.5in]{dimroth_paper_radeta} \caption{Estimated ERE ranges of 2J InGaP/GaAs cells on silicon and GaAs substrates reported in \cite{Dimroth:2014jn}. WB\_Si stands for wafer bonding on a silicon substrate, DG\_GaAs for direct growth on a GaAs substrate, and DG\_Si for direct growth on silicon.} \label{fig:dimroth_paper_radeta} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=2.5in]{essig_paper_radeta} \caption{Estimated ERE range of 3J InGaP/GaAs/Si solar cells fabricated by wafer bonding as reported in \cite{Essig:2015dw}. The estimated EREs of the cell tested at one sun (1x) and at 111 suns (111x) are both plotted.} \label{fig:essig_paper_radeta} \end{figure} \section{Design Considerations of III-V/Si Solar Cells} \label{sec:design} This section presents the estimated efficiencies of multi-junction solar cells with different EREs and their implications for designing III-V/Si solar cells. Starting from the radiative-limit case, i.e., an ERE of 1 and no optical losses for any subcell, \figurename~\ref{fig:rad_limit_2J} shows the efficiency contours of 2J III-V/Si solar cells as a function of the band gap and the EQE of the top cell. Altering the EQE in this calculation is equivalent to varying the optical thickness of the top cell, as mentioned in (\ref{eqn:phi_i_E}). In the calculations throughout this section, the band gap of silicon is assumed to be 1.12 eV and its EQE 100\%. The result shows that the maximum predicted efficiency, with a top cell band gap of 1.73 eV, is 41.9\%.
The optimal band gap of the top cell can potentially be achieved by using AlGaAs, Ga(As)PN, or AlInGaP, but it remains challenging to achieve high quality in these materials. From the point of view of material quality, GaAs is a favored option for the top junction, but its band gap is too close to that of silicon, which makes silicon the current-limiting junction. Reducing the optical thickness of GaAs can mitigate the current mismatch and raise the efficiency. As shown in \figurename~\ref{fig:rad_limit_2J}, the optimal EQE of the GaAs top cell is around 68.1\%, which gives a 2J device efficiency of 35.8\%. Another option is choosing InGaP (1.87 eV) as the top junction on a silicon cell. The limiting efficiency of this configuration is 37.6\%, which is even higher than that of GaAs/Si. With this configuration, the current-limiting junction becomes the top cell, providing the opportunity to use a thinner silicon junction to reduce the recombination current. \figurename~\ref{fig:rad_limit_3J} shows the efficiency contours of 3J III-V/Si solar cells against the top and middle cells' band gaps. All of the subcells are assumed to have 100\% EQEs. The optimal band-gap combination for the top two junctions is 2.01 eV and 1.50 eV, which gives a limiting efficiency of 46.1\%. Ternary or quaternary compounds such as AlGaAs/AlGaAs, InGaP/GaAsP, (Al)InGaP/InGa(As)P, (Al)InGaP/AlGaAs, and GaPN/GaAsPN are candidates for this optimal band-gap configuration. Using conventional InGaP/GaAs on silicon can only achieve 36.3\% efficiency at this radiative limit because of current mismatch between the InGaP/GaAs top cells and the silicon bottom cell. As in the case of GaAs on silicon cells, reducing the optical thicknesses of InGaP/GaAs could yield better current matching and therefore higher efficiency. Our calculations show that the optimal EQEs for the InGaP and GaAs subcells are around 82.6\%, which gives a limiting efficiency of 43.3\%.
This optimal EQE value will be used in the subsequent calculations for InGaP/GaAs/Si solar cells, the results of which are shown in \figurename~\ref{fig:3J_si_vary_radeta_conv} and \figurename~\ref{fig:rad_limit_3J_profile}. \begin{figure}[!t] \centering \includegraphics[width=2.5in]{rad_limit_2J_withtext} \caption{Efficiency contours of 2J solar cells with a silicon bottom cell as a function of the band gap and EQE of the top cell. The EREs are set to 1 for both subcells. The EQE of the silicon bottom cell is 100\%. The color bar is efficiency (\%).} \label{fig:rad_limit_2J} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=2.5in]{rad_limit_3J_1eV_withtext} \caption{Efficiency contours of 3J solar cells with a silicon bottom cell as a function of the top and middle cell band gaps. The EREs and EQEs of all subcells are assumed to be 100\% in this calculation. The color bar is efficiency (\%).} \label{fig:rad_limit_3J} \end{figure} Next, we consider how the EREs affect the efficiencies and designs of the solar cells. \figurename~\ref{fig:1.7eVtop_si_vary_radeta} shows the efficiency contours for a 1.73-eV top cell on a silicon bottom cell as a function of the top and bottom cells' EREs. Both subcells are assumed to have 100\% EQE. Based on \tablename~\ref{table:stta_ERE}, the ERE of state-of-the-art silicon solar cells is around 0.006. For GaAs solar cells, although the cell made by Alta Devices can achieve an ERE of 0.225, this device adopts different cell geometries from conventional solar cells \cite{Miller:2012fd}. We therefore select the ERE value of ISE's GaAs solar cell as the state-of-the-art value. With these ERE values for each subcell, the limiting efficiency of this 2J cell is 36.5\%.
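A quick way to see how ERE enters these contours is through the open-circuit voltage: a subcell's $V_{oc}$ sits below its radiative limit by $(kT/q)\ln(\mathrm{ERE})$ \cite{Green:2011ea}. The sketch below is a back-of-the-envelope aid rather than the full detailed-balance calculation; it evaluates this penalty for the state-of-the-art ERE values of \tablename~\ref{table:stta_ERE}.

```python
import math

V_T = 0.025852  # thermal voltage kT/q at 300 K [V]

def voc_penalty(ere):
    """Voc shortfall relative to the radiative limit:
    Voc = Voc_rad + (kT/q) * ln(ERE), so the penalty is negative for ERE < 1."""
    return V_T * math.log(ere)

# State-of-the-art EREs from Table I
penalty_si = voc_penalty(0.006)      # silicon, about -0.13 V
penalty_gaas = voc_penalty(0.0126)   # GaAs (ISE), about -0.11 V
```

Because the penalty is logarithmic, each order-of-magnitude drop in ERE costs the same $\approx 60$ mV per subcell, which is why the contours in these figures are roughly evenly spaced on the logarithmic ERE axes.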
Because achieving the quality of state-of-the-art GaAs is still challenging for the candidate 1.73-eV III-V materials, a more realistic estimate considers a top cell that matches the EREs of the AlGaAs cells reported in \cite{Virshup:1985hw} or \cite{Amano:1987es}, which are both around $10^{-4}$. The limiting efficiency of this 2J cell is then 33.9\%. A similar calculation was performed for the case of 3J cells. \figurename~\ref{fig:3J_si_vary_radeta_conv} plots the efficiency contours of 3J InGaP/GaAs/Si solar cells against the EREs of the III-V junctions and the silicon junction. This calculation assumes that the EREs of the top and middle cells are identical. This is a realistic assumption considering the recent improvements in the EREs of InGaP cells \cite{Geisz:2013hi,Geisz:2014ht}. Note that the EQEs of the top and middle cells are chosen to be 82.6\%, which are the optimal values for the conversion efficiency. This result shows that the efficiency of InGaP/GaAs/Si could be close to that of the current one-sun world-record 3J cell (37.9\%) \cite{Green:2015bk} if the material quality of every subcell can match state-of-the-art performance. However, since this calculation ignores other loss mechanisms such as optical and resistive losses, the EREs of these subcells have to be improved further in order to match the performance of the current one-sun world-record 3J cell, even without the threading dislocations caused by the lattice mismatch. \figurename~\ref{fig:3J_si_vary_radeta_optimal} shows the results of a similar calculation but with optimal band gaps for the top two junctions. This calculation assumes that all subcells have 100\% EQEs. This result shows that the efficiency of a 3J cell can reach 40.8\% if the III-V materials that constitute this band-gap configuration can match the quality of state-of-the-art GaAs.
Since the EREs of these candidate materials are still far below that of GaAs, a more practical efficiency prospect may be estimated using the EREs of the AlGaAs cells listed in \tablename~\ref{table:III-V_Si_ERE}, which are around $10^{-4}$. This gives an efficiency of around 37.5\%. As mentioned earlier, one dilemma in designing III-V/Si solar cells is that the materials that give better current matching to silicon have poorer material quality, whereas materials with better quality do not give perfect current matching. By using ERE to quantify the material quality, this issue can be addressed in a more systematic way. \figurename~\ref{fig:rad_limit_3J_profile} shows calculated efficiencies against the EREs of several different band-gap configurations of III-V/Si solar cells. In this calculation, the ERE of the silicon bottom cell is assumed to match the state-of-the-art value (0.006) listed in \tablename~\ref{table:stta_ERE}. The EQE of the silicon bottom cell is set to 100\%. In the cases of 3J cells, the EREs of the top two junctions are assumed to be identical. Also, we select the EQEs of the III-V junctions that give the best conversion efficiency. These EQE values are listed in the legend of \figurename~\ref{fig:rad_limit_3J_profile}. Note that this is equivalent to optimizing the optical thicknesses of the III-V top cells. By comparing the efficiency profiles of the optimal band-gap combinations and conventional InGaP/GaAs, we see that optimal band-gap combinations can improve efficiencies as long as the ratio of these two EREs is no smaller than about $10^{-2}$. For example, as shown in \tablename~\ref{table:stta_ERE}, the ERE of the state-of-the-art conventional GaAs solar cell is around $10^{-2}$. Therefore, in order for an optimal band-gap combination to match the efficiency of InGaP/GaAs/Si with 0.01-ERE III-V subcells, the EREs of its III-V materials should be close to $2\times10^{-4}$.
According to the EREs achieved by some AlGaAs solar cells reported in \cite{Virshup:1985hw} and \cite{Amano:1987es}, achieving this ERE value may be a realistic target. As mentioned before, this calculation assumes no optical loss in the subcells. In \tablename~\ref{table:stta_ERE} and \tablename~\ref{table:III-V_Si_ERE}, we list $\eta_{opt}$, which is defined as the ratio between the measured $J_{sc}$ of a solar cell and its ideal $J_{sc}$. We can see that the $\eta_{opt}$ of state-of-the-art silicon and GaAs solar cells exceeds 90\%, and the $\eta_{opt}$ of InGaP is around 82\%. For AlGaAs, the best $\eta_{opt}$ value is around 81\%, which is close to that of InGaP. Therefore, as an approximation, one can simply multiply the y-axis of \figurename~\ref{fig:rad_limit_3J_profile} by 80\% to take into account the parasitic optical losses of state-of-the-art cells. In this way, the ERE criterion for AlGaAs/AlGaAs/Si to outperform InGaP/GaAs/Si with optical loss considered would be close to the criterion without parasitic optical loss. However, if the optimal-band-gap materials have much lower $\eta_{opt}$ than InGaP and GaAs, we expect that this ERE tolerance for optimal band-gap materials would shrink. \begin{figure}[!t] \centering \includegraphics[width=2.5in]{optimal_top_si_radeta_withtext} \caption{Efficiency contours of 2J solar cells as a function of the EREs of the 1.73-eV top cell and the silicon bottom cell. The EQEs of all subcells are assumed to be 100\% in this calculation. The stars mark the ERE values of state-of-the-art III-V and silicon materials. The color bar is efficiency (\%). The EREs on the x- and y-axes are given as numerical values.} \label{fig:1.7eVtop_si_vary_radeta} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=2.5in]{rad_eta_3J_conv_withtext} \caption{Efficiency contours of 3J InGaP/GaAs/Si solar cells as a function of the EREs of the III-V junctions and the silicon bottom junction.
The EREs of the top and middle junctions are assumed to be identical. The EQE of the bottom cell is 100\%, and the EQEs of the top and middle cells are 82.6\%, which are the optimal EQE values for this configuration. The color bar is efficiency (\%). The EREs on the x- and y-axes are given as numerical values.} \label{fig:3J_si_vary_radeta_conv} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=2.5in]{rad_eta_3J_optimal_withtext.pdf} \caption{Efficiency contours of a 3J 2.01eV/1.50eV/Si solar cell as a function of the EREs of the III-V junctions and the silicon bottom junction. The EREs of the top and middle junctions are assumed to be identical. The EQEs of all the subcells are set to 100\%. The color bar is efficiency (\%). The EREs on the x- and y-axes are given as numerical values.} \label{fig:3J_si_vary_radeta_optimal} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=3in]{rad_eta_line_profiles_withtext.pdf} \caption{Predicted efficiency limits of several band-gap configurations of III-V subcells on a silicon bottom cell against the EREs of the III-V subcells. The optimal band-gap combinations are plotted with solid lines and sub-optimal combinations with broken lines. The EQEs in these calculations are chosen to give the maximum overall efficiencies. The band-gap configurations (eV) and the EQEs (\%) of the top cells are described in the legend. The EQE of the silicon bottom cell is assumed to be 100\% and its ERE to be 0.006, which is the state-of-the-art value reported in \cite{Green:2011ea}. The EREs on the x-axis are given as numerical values.} \label{fig:rad_limit_3J_profile} \end{figure} \section{Conclusion} Using a detailed balance model and EREs, we reviewed and compared the material qualities of several different single- and multi-junction III-V/Si solar cells. We also estimated the efficiencies of III-V/Si solar cells with various band-gap configurations and EREs.
For InGaP/GaAs/Si solar cells, our calculations show that even if all of the subcells match state-of-the-art EREs, the device still cannot match the efficiency of the current one-sun 3J world record. Achieving this is more likely with optimal band gaps for the top two junctions, but improving the material quality of the candidate III-V compounds will be challenging. We also made a relative comparison between InGaP/GaAs/Si and the optimal band-gap configuration, 2.01eV/1.50eV/Si. Our calculations indicate that choosing III-V materials with optimal band-gap combinations for silicon can yield better efficiency than InGaP/GaAs, as long as the EREs of these III-V materials are within two orders of magnitude of the EREs of InGaP/GaAs. The estimated EREs of previously reported AlGaAs solar cells suggest that this criterion may be achievable. \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{External radiative efficiencies of state-of-the-art silicon and GaAs solar cells. The data in the first four rows are excerpted from \cite{Green:2011ea}, whereas the last two rows are excerpted from \cite{Geisz:2013hi}. $\eta_{opt}$ is the ratio between the measured $J_{sc}$ of a cell and its ideal $J_{sc}$ calculated by using (\ref{eqn:qe_def}) and assuming 100\% EQE.
} \label{table:stta_ERE} \centering \footnotesize \begin{tabular}{cccccc} \hline Device & $V_{oc} (\mbox{mV})$ & $J_{sc} (\mbox{mA}/\mbox{cm}^2)$ & $\eta_{opt} (\%)$ & $\eta$ (\%) & ERE \\ \hline Si UNSW\tablefootnote{University of New South Wales}& 706 & 42.7 & 97.5 & 25.0 & 0.0057\\ Si SPWR\tablefootnote{SunPower Corporation} & 721 & 40.5 & 92.4 &24.2 & 0.0056 \\ GaAs Alta \tablefootnote{Alta Devices} & 1107 & 29.6 & 92.3 & 27.6 & 0.225 \\ GaAs ISE \tablefootnote{Fraunhofer Institute for Solar Energy Systems} & 1030 & 29.8 & 92.9 &26.4 & 0.0126 \\ InGaP NREL \tablefootnote{National Renewable Energy Laboratory} (conventional) & 1406 & 14.8 & 79.9 & 18.4 & 0.0032 \\ InGaP NREL (inverted rear-hetero)& 1458 & 16.0 & 82.6 & 20.7 & 0.0871 \\ \hline \end{tabular} \end{table} \begin{table*}[!t] \renewcommand{\arraystretch}{1.3} \caption{Extracted EREs from selected publications. In the column \textit{Fabrication}, UG, MBE, and MOCVD stand for upright growth, molecular beam epitaxy, and metal-organic chemical vapor deposition, respectively. $\eta$ is conversion efficiency. $\eta_{opt}$ is the ratio between the measured $J_{sc}$ of a cell and its ideal $J_{sc}$ calculated by using (\ref{eqn:qe_def}) and assuming 100\% EQE. 
For multi-junction devices, we use the ideal $J_{sc}$ of top cells as the denominator for estimating $\eta_{opt}$.} \label{table:III-V_Si_ERE} \centering \tiny \begin{tabular}{ccccccccccccc} \hline Device & $\eta$(\%) & Spectrum & $V_{oc} (\mbox{mV})$ & $J_{sc} (\mbox{mA}/\mbox{cm}^2)$ & $\eta_{opt} (\%)$ & Fabrication & Year & ERE & Reference\\ \hline AlGaAs(1.64eV) on GaAs (1J) & 19.2 & AM2 & 1.18 & 14.5 & 80.0 & MBE, UG & 1985 & $10^{-4}$ & \cite{Virshup:1985hw} \\ AlGaAs(1.79eV) on GaAs (1J) & 14.6 & AM1.5 & 1.28 & 16.2 & 81.4 & MBE, UG & 1987 & $10^{-4}$ & \cite{Amano:1987es} \\ AlGaAs(1.54eV) on Si (2J) & 20.6 & AM0 & 1.51 & 23.0 & 67.1 & MOCVD, UG & 1996 & AlGaAs: $10^{-7}$ & \cite{Soga:1995bv} \\ & & & & & & & & Si: $10^{-6}$ & \\ InGaP/GaAs on GaAs (2J) & 27.1 & AM1.5g & 2.45 & 13.15 & 74.2 & MOCVD, UG & 2014 & $10^{-2}$ & \cite{Dimroth:2014jn} \\ InGaP/GaAs on Si (2J) & 26.0 & AM1.5g & 2.39 & 12.70 & 71.7 & Wafer Bonding & 2014 & $10^{-3}$ & \cite{Dimroth:2014jn} \\ InGaP/GaAs on Si (2J) & 16.4 & AM1.5g & 1.94 & 11.2 & 63.2 & MOCVD, UG & 2014 & $10^{-6}$ & \cite{Dimroth:2014jn} \\ GaInP/GaAs/Si (3J) & 27.2 & AM1.5d@1x & 2.89 & 11.2 & 73.8 & Wafer Bonding & 2013 & $10^{-3}\sim10^{-4}$ & \cite{Essig:2015dw} \\ GaInP/GaAs/Si (3J) & 30.0 & AM1.5d@111x & 3.4 &1125.4 & 66.8 & Wafer Bonding & 2013 & $\sim10^{-1}$ & \cite{Essig:2015dw} \\ GaInP/GaAs/Si (3J) & 27.3 & AM1.5g & III-V:2.23 & III-V:13.7 & 77.3 & Metal Interconnect & 2014 & $10^{-3}\sim10^{-4}$ & \cite{Yang:2014kz} \\ & & & Si:0.49 & Si: 6.88 & & & & $10^{-3}\sim10^{-4}$ & \cite{Yang:2014kz} \\ \hline \end{tabular} \end{table*} \section*{Acknowledgement} The authors would like to thank Japan New Energy and Industrial Technology Development Organization (NEDO) for supporting this research (NEDO 15100731-0). \bibliographystyle{IEEEtran}
\section{Introduction} In this work, we study deterministic distributed algorithms under three different assumptions; see Figure~\ref{fig:models}. \begin{itemize}[leftmargin=4em] \item[($\mathsf{ID}$)] \emph{Networks with unique identifiers}. Each node is given a unique $O(\log n)$-bit label. \item[($\mathsf{OI}$)] \emph{Order-invariant algorithms}. There is a linear order on nodes. Equivalently, the nodes have unique labels, but the output of an algorithm is not allowed to change if we relabel the nodes while preserving the relative order of the labels. \item[($\mathsf{PO}$)] \emph{Anonymous networks with a port numbering and orientation}. For each node, there is a linear order on the incident edges, and for each edge, there is a linear order on the incident nodes. Equivalently, a node of degree $d$ can refer to its neighbours by integers $1, 2, \dotsc, d$, and each edge is oriented so that the endpoints know which of them is the head and which is the tail. \end{itemize} \begin{figure} \centering \includegraphics[page=\PModels]{figs.pdf} \caption{Three models of distributed computing.}\label{fig:models} \end{figure} While unique identifiers are often useful, we will show that they are seldom needed in local algorithms (constant-time distributed algorithms): there is a general class of graph problems such that local algorithms in $\mathsf{PO}$ are able to produce as good approximations as local algorithms in $\mathsf{OI}$ or~$\mathsf{ID}$. \subsection{Graph Problems} We study graph problems that are related to the structure of an unknown communication network. Each node in the network is a computer; each computer receives a \emph{local input}, it can exchange messages with adjacent nodes, and eventually it has to produce a \emph{local output}. The local outputs constitute a solution of a graph problem---for example, if we study the dominating set problem, each node produces one bit of local output, indicating whether it is part of the dominating set. 
The \emph{running time} of an algorithm is the number of synchronous communication rounds. From this perspective, the models $\mathsf{ID}$, $\mathsf{OI}$, and $\mathsf{PO}$ are easy to separate. Consider, for example, the problem of finding a maximal independent set in an $n$-cycle. In the $\mathsf{ID}$ model the problem can be solved in $\Theta(\log^* n)$ rounds \cite{cole86deterministic, linial92locality}, while in the $\mathsf{OI}$ model we need $\Theta(n)$ rounds, and the problem is not soluble at all in $\mathsf{PO}$, as we cannot break symmetry---see Figure~\ref{fig:cycles}. Hence $\mathsf{ID}$ is strictly stronger than $\mathsf{OI}$, which is strictly stronger than $\mathsf{PO}$. \begin{figure} \centering \includegraphics[page=\PCycles]{figs.pdf} \caption{In $\mathsf{ID}$, the numerical identifiers break symmetry everywhere---for example, a maximal independent set can be found in $O(\log^* n)$ rounds. In $\mathsf{OI}$, we can have a cycle with only one ``seam'', and in $\mathsf{PO}$ we can have a completely symmetric cycle.}\label{fig:cycles} \end{figure} \subsection{Local Algorithms}\label{ssec:local} In this work we focus on \emph{local algorithms}, i.e., distributed algorithms that run in a constant number of synchronous communication rounds, independently of the number of nodes in the network~\cite{naor95what, suomela09survey}. The above example separating $\mathsf{ID}$, $\mathsf{OI}$, and $\mathsf{PO}$ no longer applies, and there has been a conspicuous lack of \emph{natural} graph problems that would separate $\mathsf{ID}$, $\mathsf{OI}$, and $\mathsf{PO}$ from the perspective of local algorithms. Indeed, there are results that show that many problems that can be solved with a local algorithm in $\mathsf{ID}$ also admit a local algorithm in $\mathsf{OI}$ or $\mathsf{PO}$.
For example, the seminal paper by Naor and Stockmeyer~\cite{naor95what} studies so-called $\mathsf{LCL}$ problems---these include problems such as graph colouring and maximal matchings on bounded-degree graphs. The authors show that $\mathsf{ID}$ and $\mathsf{OI}$ are indeed equally expressive among $\mathsf{LCL}$ problems. The follow-up work by Mayer, Naor, and Stockmeyer~\cite{mayer95local} hints at a stronger property: \begin{enumerate}[label=(\roman*)] \item \emph{Weak $2$-colouring} is an $\mathsf{LCL}$ problem that can be solved with a local algorithm in the $\mathsf{ID}$ model~\cite{naor95what}. It turns out that the same problem can be solved in the $\mathsf{PO}$ model as well~\cite{mayer95local}. \end{enumerate} Granted, contrived counterexamples do exist: there are $\mathsf{LCL}$ problems that are soluble in $\mathsf{OI}$ but not in $\mathsf{PO}$. However, most of the classical graph problems that are studied in the field of distributed computing are \emph{optimisation problems}, not $\mathsf{LCL}$ problems.
\item \emph{Minimum edge cover} can be approximated to within factor $2$ in each of these models \cite{suomela09survey}. This is tight: \Apx{(2-\epsilon)} is not possible in any of these models \cite{czygrinow08fast, lenzen08leveraging, suomela09survey}. \item \emph{Minimum dominating set} can be approximated to within factor $\Delta'+1$ in each of these models \cite{astrand10weakly-coloured}. This is tight: \Apx{(\Delta'+1-\epsilon)} is not possible in any of these models \cite{czygrinow08fast, lenzen08leveraging, suomela09survey}. \item \emph{Maximum independent set} cannot be approximated to within any constant factor in any of these models \cite{czygrinow08fast, lenzen08leveraging}. \item \emph{Maximum matching} cannot be approximated to within any constant factor in any of these models \cite{czygrinow08fast, lenzen08leveraging}. \end{enumerate} This phenomenon has not been fully understood: while there are many problems with identical approximability results for $\mathsf{ID}$, $\mathsf{OI}$, and $\mathsf{PO}$, it has not been known whether these are examples of a more general principle or merely isolated coincidences. In fact, for some problems, tight approximability results have been lacking for $\mathsf{ID}$ and $\mathsf{OI}$, even though tight results are known for $\mathsf{PO}$: \begin{enumerate}[resume*] \item \emph{Minimum edge dominating set} can be approximated to within factor $4-2/\Delta'$ in each of these models \cite{suomela10eds}. This is tight for $\mathsf{PO}$ but only near-tight for $\mathsf{ID}$ and $\mathsf{OI}$: \Apx{(4-2/\Delta'-\epsilon)} is not possible in $\mathsf{PO}$ \cite{suomela10eds}, and \Apx{(3-\epsilon)} is not possible in $\mathsf{ID}$ and $\mathsf{OI}$ \cite{czygrinow08fast, lenzen08leveraging, suomela09survey}. \end{enumerate} In this work we prove a theorem unifying all of the above observations---they are indeed examples of a general principle. 
As a simple application of our result, we settle the local approximability of the minimum edge dominating set problem by proving a tight lower bound in $\mathsf{ID}$ and $\mathsf{OI}$. \subsection{Main Result} A \emph{simple graph problem} $\ensuremath{\mathsf{\Pi}}$ is an optimisation problem in which a feasible solution is a subset of nodes or a subset of edges, and the goal is to either minimise or maximise the size of a feasible solution. We say that $\ensuremath{\mathsf{\Pi}}$ is a \emph{$\mathsf{PO}$-checkable graph problem} if there is a local $\mathsf{PO}$-algorithm $\ensuremath{\mathsf{A}}$ that recognises a feasible solution. That is, $\ensuremath{\mathsf{A}}(\ensuremath{\mathcal{G}},X,v) = 1$ for all nodes $v \in V(\ensuremath{\mathcal{G}})$ if $X$ is a feasible solution of problem $\ensuremath{\mathsf{\Pi}}$ in graph $\ensuremath{\mathcal{G}}$, and $\ensuremath{\mathsf{A}}(\ensuremath{\mathcal{G}},X,v) = 0$ for some node $v \in V(\ensuremath{\mathcal{G}})$ otherwise---here $\ensuremath{\mathsf{A}}(\ensuremath{\mathcal{G}},X,v)$ is the output of a node $v$ if we run algorithm $\ensuremath{\mathsf{A}}$ on graph $\ensuremath{\mathcal{G}}$ and the local inputs form an encoding of $X$. Let $\varphi\colon V(\ensuremath{\mathcal{H}}) \to V(\ensuremath{\mathcal{G}})$ be a surjective graph homomorphism from graph $\ensuremath{\mathcal{H}}$ to graph~$\ensuremath{\mathcal{G}}$. If $\varphi$ preserves vertex degrees, i.e., $\deg_{\ensuremath{\mathcal{H}}}(u)=\deg_{\ensuremath{\mathcal{G}}}(\varphi(u))$, then $\varphi$ is called a \emph{covering map}, and $\ensuremath{\mathcal{H}}$ is said to be a \emph{lift} of $\ensuremath{\mathcal{G}}$. The \emph{fibre} of $u\in V(\ensuremath{\mathcal{G}})$ is the set $\varphi^{-1}(u)$ of pre-images of $u$. We usually consider $n$-lifts that have fibres of the same cardinality $n$. It is a basic fact that a connected lift $\ensuremath{\mathcal{H}}$ of $\ensuremath{\mathcal{G}}$ is an $n$-lift for some $n$. 
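The covering-map definition above is easy to check mechanically. The following sketch is illustrative only (graphs are plain adjacency dictionaries, not the paper's formalism): it verifies that $\varphi$ is a surjective, degree-preserving homomorphism, and confirms that the $6$-cycle is a lift of the $3$-cycle under $i \mapsto i \bmod 3$.

```python
def is_covering_map(h, g, phi):
    """True iff phi: V(H) -> V(G) is a surjective homomorphism that
    preserves vertex degrees (the covering-map definition above).
    Graphs are dicts mapping each vertex to its set of neighbours."""
    if set(phi.values()) != set(g):            # phi must be surjective
        return False
    for u, nbrs in h.items():
        if len(nbrs) != len(g[phi[u]]):        # deg_H(u) = deg_G(phi(u))
            return False
        for v in nbrs:                         # edges must map to edges
            if phi[v] not in g[phi[u]]:
                return False
    return True

def cycle(n):
    """The n-cycle as an adjacency dictionary."""
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

# The 6-cycle is a lift of the 3-cycle via i -> i mod 3:
phi = {i: i % 3 for i in range(6)}
```

Here every fibre $\varphi^{-1}(u)$ has size $2$, so $\varphi$ exhibits the $6$-cycle as a $2$-lift; the analogous map on the $9$-cycle gives a $3$-lift in the same way.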
See Figure~\ref{fig:lifts} for an illustration. \begin{figure} \centering \includegraphics[page=\PLifts]{figs.pdf} \caption{Graph $\ensuremath{\mathcal{H}}$ is a lift of $\ensuremath{\mathcal{G}}$. The covering map $\varphi\colon V(\ensuremath{\mathcal{H}}) \to V(\ensuremath{\mathcal{G}})$ maps $a_i \mapsto a$, $b_i \mapsto b$, $c_i \mapsto c$, and $d_i \mapsto d$ for each $i = 1, 2$. The fibre of $a \in V(\ensuremath{\mathcal{G}})$ is $\{a_1, a_2\} \subseteq V(\ensuremath{\mathcal{H}})$; all fibres have the same size.}\label{fig:lifts} \end{figure} Let $\ensuremath{\mathcal{F}}$ be a family of graphs. We say that $\ensuremath{\mathcal{F}}$ is \emph{closed under lifts} if $\ensuremath{\mathcal{G}} \in \ensuremath{\mathcal{F}}$ implies $\ensuremath{\mathcal{H}} \in \ensuremath{\mathcal{F}}$ for all lifts $\ensuremath{\mathcal{H}}$ of $\ensuremath{\mathcal{G}}$. A family is \emph{closed under connected lifts} if $\ensuremath{\mathcal{G}} \in \ensuremath{\mathcal{F}}$ implies $\ensuremath{\mathcal{H}} \in \ensuremath{\mathcal{F}}$ whenever $\ensuremath{\mathcal{H}}$ and $\ensuremath{\mathcal{G}}$ are connected graphs and $\ensuremath{\mathcal{H}}$ is a lift of~$\ensuremath{\mathcal{G}}$. \pagebreak Now we are ready to state our main theorem. \begin{thm}[Main Theorem]\label{thm:main} Let $\ensuremath{\mathsf{\Pi}}$ be a simple $\mathsf{PO}$-checkable graph problem. Assume one of the following: \begin{itemize} \item General version: $\ensuremath{\mathcal{F}}$ is a family of bounded degree graphs, and it is closed under lifts. \item Connected version: $\ensuremath{\mathcal{F}}$ is a family of connected bounded degree graphs, it does not contain any trees, and it is closed under connected lifts. 
\end{itemize} If there is a local $\mathsf{ID}$-algorithm $\ensuremath{\mathsf{A}}$ that finds an \Apx{\alpha} of $\ensuremath{\mathsf{\Pi}}$ in $\ensuremath{\mathcal{F}}$, then there is a local $\mathsf{PO}$-algorithm $\ensuremath{\mathsf{B}}$ that finds an \Apx{\alpha} of $\ensuremath{\mathsf{\Pi}}$ in $\ensuremath{\mathcal{F}}$. \end{thm} While the definitions are somewhat technical, it is easy to verify that the result is widely applicable: \begin{enumerate} \item Vertex covers, edge covers, matchings, independent sets, dominating sets, and edge dominating sets are simple $\mathsf{PO}$-checkable graph problems. \item Bounded-degree graphs, regular graphs, and cyclic graphs are closed under lifts. \item Connected bounded-degree graphs, connected regular graphs, and connected cyclic graphs are closed under connected lifts. \end{enumerate} \subsection{An Application} The above result provides us with a powerful tool for proving lower-bound results: we can easily transfer negative results from $\mathsf{PO}$ to $\mathsf{OI}$ and $\mathsf{ID}$. We demonstrate this strength by deriving a new lower bound result for the minimum edge dominating set problem. \begin{thm}\label{thm:eds} Let $\Delta \ge 2$, and let $\ensuremath{\mathsf{A}}$ be a local $\mathsf{ID}$-algorithm that finds an \Apx{\alpha} of a minimum edge dominating set on connected graphs of maximum degree $\Delta$. Then $\alpha \ge \alpha_0$, where \[ \alpha_0 = 4-2/\Delta' \ \text{ and }\ \Delta' = 2 \floor{\Delta/2}. \] This is tight: there is a local $\mathsf{ID}$-algorithm that finds an \Apx{\alpha_0}. \end{thm} \begin{proof} By prior work~\cite{suomela10eds}, it is known that there is a connected $\Delta'$-regular graph $\ensuremath{\mathcal{G}}_0$ such that the approximation factor of any local $\mathsf{PO}$-algorithm on $\ensuremath{\mathcal{G}}_0$ is at least $\alpha_0$. 
Let $\ensuremath{\mathcal{F}}_0$ consist of all connected lifts of $\ensuremath{\mathcal{G}}_0$, and let $\ensuremath{\mathcal{F}}$ consist of all connected graphs of degree at most $\Delta$. We make the following observations. \begin{enumerate} \item We have $\ensuremath{\mathcal{F}}_0 \subseteq \ensuremath{\mathcal{F}}$; by assumption, $\ensuremath{\mathsf{A}}$ finds an \Apx{\alpha} in $\ensuremath{\mathcal{F}}_0$. \item Family $\ensuremath{\mathcal{F}}_0$ consists of connected graphs of degree at most $\Delta$, it does not contain any trees, and it is closed under connected lifts. We can apply the connected version of the main theorem: there is a local $\mathsf{PO}$-algorithm $\ensuremath{\mathsf{B}}$ that finds an \Apx{\alpha} in $\ensuremath{\mathcal{F}}_0$. \item However, $\ensuremath{\mathcal{G}}_0 \in \ensuremath{\mathcal{F}}_0$, and hence $\alpha \ge \alpha_0$. \end{enumerate} The matching upper bound is presented in prior work~\cite{suomela10eds}. \end{proof} \subsection{Overview} Informally, our proof of the main theorem is structured as follows. \begin{enumerate} \item Fix a graph problem $\ensuremath{\mathsf{\Pi}}$, a graph family $\ensuremath{\mathcal{F}}$, and an $\mathsf{ID}$-algorithm $\ensuremath{\mathsf{A}}$ as in the statement of Theorem~\ref{thm:main}. Let $r$ be the running time of $\mathsf{ID}$-algorithm $\ensuremath{\mathsf{A}}$. \item Let $\ensuremath{\mathcal{G}} \in \ensuremath{\mathcal{F}}$ be a graph with a port numbering and orientation. \item \mbox{Section~\ref{ssec:homog-lift}:} We construct a certain lift $\ensuremath{\mathcal{G}}_\epsilon \in \ensuremath{\mathcal{F}}$ of $\ensuremath{\mathcal{G}}$. Graph $\ensuremath{\mathcal{G}}_\epsilon$ inherits the port numbering and the orientation from $\ensuremath{\mathcal{G}}$. 
\item \mbox{Section~\ref{ssec:mainthm-oi}:} We show that there exists a linear order $<_\epsilon$ on the nodes of $\ensuremath{\mathcal{G}}_\epsilon$ that gives virtually no new information in comparison with the port numbering and orientation. If we have an $\mathsf{OI}$-algorithm $\ensuremath{\mathsf{A}}'$ with running time~$r$, then we can simulate $\ensuremath{\mathsf{A}}'$ with a $\mathsf{PO}$-algorithm $\ensuremath{\mathsf{B}}'$ almost perfectly on $\ensuremath{\mathcal{G}}_\epsilon$: the outputs of $\ensuremath{\mathsf{A}}'$ and $\ensuremath{\mathsf{B}}'$ agree for a $(1-\epsilon)$ fraction of nodes. We deduce that the approximation ratio of $\ensuremath{\mathsf{A}}'$ on $\ensuremath{\mathcal{F}}$ cannot be better than the approximation ratio of $\ensuremath{\mathsf{B}}'$ on~$\ensuremath{\mathcal{F}}$. \item \mbox{Section~\ref{ssec:mainthm-id}:} We apply Ramsey's theorem to show that the unique identifiers do not help, either. We can construct a $\mathsf{PO}$-algorithm $\ensuremath{\mathsf{B}}$ that simulates $\ensuremath{\mathsf{A}}$ in the following sense: there exists an assignment of unique identifiers on a lift $\ensuremath{\mathcal{H}} \in \ensuremath{\mathcal{F}}$ of $\ensuremath{\mathcal{G}}_\epsilon$ such that the outputs of $\ensuremath{\mathsf{A}}$ and $\ensuremath{\mathsf{B}}$ agree for a $(1-\epsilon)$ fraction of nodes. We deduce that the approximation ratio of $\ensuremath{\mathsf{A}}$ on $\ensuremath{\mathcal{F}}$ cannot be better than the approximation ratio of $\ensuremath{\mathsf{B}}$ on~$\ensuremath{\mathcal{F}}$. \end{enumerate} Now if graph $\ensuremath{\mathcal{G}}$ was a directed cycle, the construction would be standard; see, e.g., Czygrinow et al.~\cite{czygrinow08fast}. 
In particular, $\ensuremath{\mathcal{G}}_\epsilon$ and $\ensuremath{\mathcal{H}}$ would simply be long cycles, and $<_\epsilon$ would order the nodes along the cycle---there would be only one ``seam'' in $(\ensuremath{\mathcal{G}}_\epsilon,\<_\epsilon)$ that could potentially help $\ensuremath{\mathsf{A}}'$ in comparison with $\ensuremath{\mathsf{B}}'$, and only an $\epsilon$ fraction of nodes are near the seam. However, the case of a general $\ensuremath{\mathcal{G}}$ is more challenging. Our main technical tool is the construction of so-called homogeneous graphs; see Section~\ref{ssec:homog}. Homogeneous graphs are regular graphs with a linear order that is useless from the perspective of $\mathsf{OI}$-algorithms: for a $(1-\epsilon)$ fraction of nodes, the local neighbourhoods are isomorphic. Homogeneous graphs trivially exist; however, our proof calls for homogeneous graphs of arbitrarily high degree and arbitrarily large girth (i.e., there are no short cycles---the graph is locally tree-like). In Section~\ref{sec:homog-graphs} we use an algebraic construction to prove that such graphs exist. \subsection{Discussion} In the field of distributed algorithms, the running time of an algorithm is typically analysed in terms of two parameters: $n$, the number of nodes in the graph, and $\Delta$, the maximum degree of the graph. In our work, we assumed that $\Delta$ is a constant---put otherwise, our work applies to algorithms that have a running time independent of $n$ but arbitrarily high as a function of $\Delta$. The work by Kuhn et al.~\cite{kuhn04what, kuhn06price, kuhn10local} studies the dependence on $\Delta$ more closely: their lower bounds on approximation ratios apply to algorithms that have, for example, a running time sublogarithmic in $\Delta$. While our result is very widely applicable, certain extensions have been left for future work. One example is the case of planar graphs \cite{czygrinow08fast},~\cite[\S13]{lenzen11phd}. 
The family of planar graphs is not closed under lifts, and hence Theorem~\ref{thm:main} does not apply. Another direction that we do not discuss at all is the case of randomised algorithms. \section{Three Models of Distributed Computing} In this section we make precise the notion of a \emph{local algorithm} in each of the models $\mathsf{ID}$, $\mathsf{OI}$ and $\mathsf{PO}$. First, we discuss the properties common to all the models. We start by fixing a graph family $\ensuremath{\mathcal{F}}$ where every $\ensuremath{\mathcal{G}}=(V(\ensuremath{\mathcal{G}}),E(\ensuremath{\mathcal{G}}))\in\ensuremath{\mathcal{F}}$ has maximum degree at most $\Delta\in\ensuremath{\mathbb{N}}$. We consider algorithms $\ensuremath{\mathsf{A}}$ that operate on graphs in $\ensuremath{\mathcal{F}}$; the properties of $\ensuremath{\mathsf{A}}$ (e.g., its running time) are allowed to depend on the family $\ensuremath{\mathcal{F}}$ (and, hence, on $\Delta$). We denote by $\ensuremath{\mathsf{A}}(\ensuremath{\mathcal{G}},u)\in\Omega$ the output of $\ensuremath{\mathsf{A}}$ on a node $u\in V(\ensuremath{\mathcal{G}})$. Here, $\Omega$ is a finite set of possible outputs of $\ensuremath{\mathsf{A}}$ in $\ensuremath{\mathcal{F}}$. If the solutions to $\ensuremath{\mathsf{\Pi}}$ are sets of vertices, we shall have $\Omega = \{0,1\}$ so that the solution produced by $\ensuremath{\mathsf{A}}$~on $\ensuremath{\mathcal{G}}$, denoted $\ensuremath{\mathsf{A}}(\ensuremath{\mathcal{G}})$, is the set of nodes $u$ with $\ensuremath{\mathsf{A}}(\ensuremath{\mathcal{G}},u)=1$. Similarly, if the solutions to $\ensuremath{\mathsf{\Pi}}$ are sets of edges, we shall have $\Omega = \{0,1\}^\Delta$ so that the $i$th component of the vector $\ensuremath{\mathsf{A}}(\ensuremath{\mathcal{G}},u)$ indicates whether the $i$th edge incident to $u$ is included in the solution $\ensuremath{\mathsf{A}}(\ensuremath{\mathcal{G}})$---in each of the models a node will have a natural ordering of its incident edges. 
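As an illustrative aside, the output conventions just described can be sketched in Python; the helper names and the edge encoding below are our own illustration, not part of the formal model.

```python
# Sketch (names are ours): assembling the global solution A(G) from
# per-node outputs. Omega = {0,1} for vertex subsets, and {0,1}^Delta
# for edge subsets, where component i flags the i-th incident edge.

def vertex_solution(outputs):
    """outputs: dict node -> 0/1; returns the selected vertex set."""
    return {u for u, bit in outputs.items() if bit == 1}

def edge_solution(outputs, incident_edges):
    """outputs: dict node -> tuple of 0/1 bits;
    incident_edges: dict node -> ordered list of incident edges."""
    chosen = set()
    for u, bits in outputs.items():
        for i, bit in enumerate(bits):
            if bit == 1:
                # undirected edge stored order-independently
                chosen.add(frozenset(incident_edges[u][i]))
    return chosen
```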
Let $r\in \ensuremath{\mathbb{N}}$ denote the constant running time of $\ensuremath{\mathsf{A}}$ in $\ensuremath{\mathcal{F}}$. This means that a node $u$ can only receive messages from nodes within distance $r$ in $\ensuremath{\mathcal{G}}$, i.e., from nodes in the radius-$r$ neighbourhood \[ B_{\ensuremath{\mathcal{G}}}(u,r) = \bigl\{ v\in V(\ensuremath{\mathcal{G}}) : \operatorname{dist}_{\ensuremath{\mathcal{G}}}(u,v) \le r \bigr\}. \] Let $\tau(\ensuremath{\mathcal{G}},u)$ denote the structure $(\ensuremath{\mathcal{G}},u)$ restricted to the vertices $B_{\ensuremath{\mathcal{G}}}(u,r)$, i.e., in symbols, \[ \tau(\ensuremath{\mathcal{G}},u) = (\ensuremath{\mathcal{G}},u) \upharpoonright B_{\ensuremath{\mathcal{G}}}(u,r). \] Then $\ensuremath{\mathsf{A}}(\ensuremath{\mathcal{G}},u)$ is a function of the data $\tau(\ensuremath{\mathcal{G}},u)$ in that $\ensuremath{\mathsf{A}}(\ensuremath{\mathcal{G}},u) = \ensuremath{\mathsf{A}}(\tau(\ensuremath{\mathcal{G}},u))$. The models $\mathsf{ID}$, $\mathsf{OI}$ and $\mathsf{PO}$ impose further restrictions on this function. \subsection{Model \texorpdfstring{$\boldsymbol\mathsf{ID}$}{ID}}\label{sec:id-model} Local $\mathsf{ID}$-algorithms are not restricted in any additional way. We follow the convention that the vertices have unique $O(\log n)$-bit labels, i.e., an instance $\ensuremath{\mathcal{G}}\in\ensuremath{\mathcal{F}}$ of order $n=|V(\ensuremath{\mathcal{G}})|$ has $V(\ensuremath{\mathcal{G}})\subseteq \{1,2,\dotsc,s(n)\}$ where $s(n)$ is some fixed polynomial function of $n$. Our presentation assumes $s(n)=\omega(n)$, even though this assumption can often be relaxed as we discuss in Remark~\ref{rem:identifiers}. \subsection{Model \texorpdfstring{$\boldsymbol\mathsf{OI}$}{OI}} A local $\mathsf{OI}$-algorithm $\ensuremath{\mathsf{A}}$ does not directly use unique vertex identifiers but only their relative \emph{order}. 
To make this notion explicit, let the vertices of $\ensuremath{\mathcal{G}}\in\ensuremath{\mathcal{F}}$ be linearly ordered by $<$, and call $(\ensuremath{\mathcal{G}},\<)$ an \emph{ordered graph}. Denote by $\tau(\ensuremath{\mathcal{G}},\<,u)$ the restriction of the structure $(\ensuremath{\mathcal{G}},\<,u)$ to the $r$-neighbourhood $B_{\ensuremath{\mathcal{G}}}(u,r)$, i.e., in symbols, \[ \tau(\ensuremath{\mathcal{G}},\<,u) = (\ensuremath{\mathcal{G}},\<,u) \upharpoonright B_{\ensuremath{\mathcal{G}}}(u,r). \] Then, the output $\ensuremath{\mathsf{A}}(\ensuremath{\mathcal{G}},\<,u)$ depends only on the \emph{isomorphism type} of $\tau(\ensuremath{\mathcal{G}},\<,u)$, so that if $\tau(\ensuremath{\mathcal{G}},\<,u) \allowbreak\simeq \tau(\ensuremath{\mathcal{G}}',\<',u')$ then $\ensuremath{\mathsf{A}}(\ensuremath{\mathcal{G}},\<,u) = \ensuremath{\mathsf{A}}(\ensuremath{\mathcal{G}}',\<',u')$. \subsection{Model \texorpdfstring{$\boldsymbol\mathsf{PO}$}{PO}}\label{sec:po-model} In the $\mathsf{PO}$ model the nodes are considered anonymous and only the following node specific structure is available: a node can communicate with its neighbours through ports numbered $1,2,\dotsc,\deg(u)$, and each communication link has an orientation. \paragraph{Edge-Labelled Digraphs.} To model the above, we consider \emph{$L$-edge-labelled directed graphs} (or \emph{$L$-digraphs}, for short) $\ensuremath{\mathcal{G}}=(V(\ensuremath{\mathcal{G}}),E(\ensuremath{\mathcal{G}}),\ell_{\ensuremath{\mathcal{G}}})$, where the edges $E(\ensuremath{\mathcal{G}})\subseteq V(\ensuremath{\mathcal{G}})\times V(\ensuremath{\mathcal{G}})$ are directed and each edge $e\in E(\ensuremath{\mathcal{G}})$ carries a label $\ell_{\ensuremath{\mathcal{G}}}(e)\in L$. 
We restrict our considerations to \emph{proper} labellings $\ell_{\ensuremath{\mathcal{G}}}\colon E(\ensuremath{\mathcal{G}})\to L${} that for each $u\in V(\ensuremath{\mathcal{G}})$ assign the incoming edges $(v,u)\in E(\ensuremath{\mathcal{G}})$ distinct labels and the outgoing edges $(u,w)\in E(\ensuremath{\mathcal{G}})$ distinct labels; we allow $\ell_{\ensuremath{\mathcal{G}}}(v,u)=\ell_{\ensuremath{\mathcal{G}}}(u,w)$. We refer to the outgoing edges of a node by the labels $L$ and to the incoming edges by the formal letters $L^{-1} = \{\ell^{-1} : \ell\in L\}$. In the context of $L$-digraphs, covering maps $\varphi\colon V(\ensuremath{\mathcal{H}}) \to V(\ensuremath{\mathcal{G}})$ are required to preserve edge labels so that $\ell_{\ensuremath{\mathcal{H}}}(u,v) = \ell_{\ensuremath{\mathcal{G}}}(\varphi(u), \varphi(v))$ for all $(u,v)\in E(\ensuremath{\mathcal{H}})$. A port numbering on $\ensuremath{\mathcal{G}}$ gives rise to a proper labelling $\ell_{\ensuremath{\mathcal{G}}}(v,u) = (i,j)$, where $u$ is the $i$th neighbour of $v$, and $v$ is the $j$th neighbour of $u$; see Figure~\ref{fig:ldigraph}. We now fix $L$ to contain every possible edge label that appears when a graph $\ensuremath{\mathcal{G}}\in\ensuremath{\mathcal{F}}$ is assigned a port numbering and an orientation. Note that $|L| \le \Delta^2$. \begin{figure} \centering \includegraphics[page=\PLDigraph]{figs.pdf} \caption{(a)~A graph $\ensuremath{\mathcal{G}}$ with a port numbering and an orientation. (b)~A proper labelling $\ell_{\ensuremath{\mathcal{G}}}$ that is derived from the port numbering. We have an $L$-digraph with $L = \{ a,b,c \}$, $a = (1,2)$, $b = (2,1)$, and $c = (3,1)$. 
(c)~The view of $\ensuremath{\mathcal{G}}$ from $u$ is an infinite directed tree $\ensuremath{\mathcal{T}} = \ensuremath{\mathcal{T}}(\ensuremath{\mathcal{G}},u)$; there is a covering map $\varphi$ from $\ensuremath{\mathcal{T}}$ to $\ensuremath{\mathcal{G}}$ that preserves adjacencies, orientations, and edge labels. For example, $\varphi(\lambda) = \varphi(aab^{-1}) = u$.}\label{fig:ldigraph} \end{figure} \paragraph{Views.} The information available to a $\mathsf{PO}$-algorithm computing on a node $u\in V(\ensuremath{\mathcal{G}})$ in an $L$-digraph $\ensuremath{\mathcal{G}}$ is usually modelled as follows~\cite{angluin80local, yamashita96computing, suomela09survey}. The \emph{view} of $\ensuremath{\mathcal{G}}$ from $u$ is an $L$-edge-labelled rooted (possibly infinite) directed tree $\ensuremath{\mathcal{T}}=\ensuremath{\mathcal{T}}({\ensuremath{\mathcal{G}}},u)$, where the vertices $V(\ensuremath{\mathcal{T}})$ correspond to all non-backtracking walks on $\ensuremath{\mathcal{G}}$ starting at $u$; see Figure~\ref{fig:ldigraph}c. Formally, a $k$-step walk can be identified with a word of length $k$ in the letters $L\cup L^{-1}$. A non-backtracking walk is a \emph{reduced} word where neither $\ell\ell^{-1}$ nor $\ell^{-1}\ell$ appears. If $w\in V(\ensuremath{\mathcal{T}})$ is a walk on $\ensuremath{\mathcal{G}}$ from $u$ to $v$, we define $\varphi(w) = v$. In particular, the root of $\ensuremath{\mathcal{T}}$ is the \emph{empty word} $\lambda$ with $\varphi(\lambda) = u$. The directed edges of $\ensuremath{\mathcal{T}}$ (and their labels) are defined in such a way that $\varphi\colon V(\ensuremath{\mathcal{T}}) \to V(\ensuremath{\mathcal{G}})$ becomes a covering map. Namely, $w\in V(\ensuremath{\mathcal{T}})$ has an out-neighbour $w\ell$ for every $\ell\in L$ such that $\varphi(w)$ has an outgoing edge labelled $\ell$. 
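The identification of non-backtracking walks with reduced words can be sketched in Python; the single-character encoding of labels (a lower-case letter for $\ell$ and its upper-case counterpart for $\ell^{-1}$) is an assumption of ours for illustration.

```python
# Sketch (encoding is ours): walks as words over L and L^{-1},
# with 'a' standing for a label and 'A' for its formal inverse.

def inv(letter):
    """Formal inverse: swap case."""
    return letter.lower() if letter.isupper() else letter.upper()

def reduce_word(word):
    """Free reduction: cancel adjacent l l^{-1} and l^{-1} l pairs.
    A stack suffices because cancellations can only expose new
    cancellations at the top of the stack."""
    out = []
    for ch in word:
        if out and out[-1] == inv(ch):
            out.pop()
        else:
            out.append(ch)
    return "".join(out)

def reduced_words_up_to(labels, r):
    """All reduced words of length <= r over L and L^{-1}; for a
    2|L|-regular graph these are exactly the vertices of the view
    within distance r of the root."""
    alphabet = list(labels) + [inv(l) for l in labels]
    words, frontier = {""}, {""}
    for _ in range(r):
        frontier = {w + ch for w in frontier for ch in alphabet
                    if not (w and ch == inv(w[-1]))}
        words |= frontier
    return words
```

For $|L| = 2$ there are $1 + 4 + 4\cdot 3 = 17$ reduced words of length at most $2$, since after the first letter each step has $2|L|-1 = 3$ non-backtracking continuations.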
\paragraph{Local $\boldsymbol\mathsf{PO}$-Algorithms.} The inability of a $\mathsf{PO}$-algorithm $\ensuremath{\mathsf{B}}$ to detect cycles in a graph is characterised by the fact that $\ensuremath{\mathsf{B}}(\ensuremath{\mathcal{G}},u) = \ensuremath{\mathsf{B}}(\ensuremath{\mathcal{T}}(\ensuremath{\mathcal{G}},u))$. In fact, we \emph{define} a local $\mathsf{PO}$-algorithm as a function $\ensuremath{\mathsf{B}}$ satisfying $\ensuremath{\mathsf{B}}(\ensuremath{\mathcal{G}},u)=\ensuremath{\mathsf{B}}(\tau(\ensuremath{\mathcal{T}}(\ensuremath{\mathcal{G}},u)))$. An important consequence of this definition is that the output of a $\mathsf{PO}$-algorithm is invariant under lifts, i.e., if $\varphi\colon V(\ensuremath{\mathcal{H}})\to V(\ensuremath{\mathcal{G}})$ is a covering map of $L$-digraphs, then $\ensuremath{\mathsf{B}}(\ensuremath{\mathcal{H}},u) = \ensuremath{\mathsf{B}}(\ensuremath{\mathcal{G}},\varphi(u))$. The intuition is that nodes in a common fibre are always in the same state during computation as they see the same view. The following formalism will become useful. Denote by $(\ensuremath{\mathcal{T}}^*,\lambda)$ the complete $L$-labelled rooted directed tree of radius $r$ with $V(\ensuremath{\mathcal{T}}^*)$ consisting of reduced words in the letters $L\cup L^{-1}$, i.e., every non-leaf vertex in $\ensuremath{\mathcal{T}}^*$ has an outgoing edge and an incoming edge for each $\ell\in L$; see Figure~\ref{fig:complete}. The output of $\ensuremath{\mathsf{B}}$ on every graph $\ensuremath{\mathcal{G}}\in\ensuremath{\mathcal{F}}$ is completely determined after specifying its output on the subtrees of $(\ensuremath{\mathcal{T}}^*,\lambda)$. 
More precisely, let $\ensuremath{\mathfrak{W}}$ consist of vertex sets $W\subseteq V(\ensuremath{\mathcal{T}}^*)$ such that $(\ensuremath{\mathcal{T}}^*,\lambda)\upharpoonright W = \tau(\ensuremath{\mathcal{T}}(\ensuremath{\mathcal{G}},u))$ for some $\ensuremath{\mathcal{G}}\in\ensuremath{\mathcal{F}}$ and $u\in V(\ensuremath{\mathcal{G}})$. Then a function $\ensuremath{\mathsf{B}}\colon\ensuremath{\mathfrak{W}} \to \Omega$ defines a $\mathsf{PO}$-algorithm by identifying $\ensuremath{\mathsf{B}}((\ensuremath{\mathcal{T}}^*,\lambda)\upharpoonright W) = \ensuremath{\mathsf{B}}(W)$. \begin{figure} \centering \includegraphics[page=\PComplete]{figs.pdf} \caption{The complete $L$-labelled rooted directed tree $(\ensuremath{\mathcal{T}}^*,\lambda)$ of radius $r = 2$, for $L = \{a,b\}$.}\label{fig:complete} \end{figure} \section{Order Homogeneity} In this section we introduce some key concepts that are used in controlling the local symmetry breaking information that is available to a local $\mathsf{OI}$-algorithm. \subsection{Homogeneous Graphs}\label{ssec:homog} In the following, we take the \emph{isomorphism type} of an $r$-neighbourhood $\tau=\tau(\ensuremath{\mathcal{G}},\<,u)$ to be some canonical representative of the isomorphism class of $\tau$. \begin{definition} Let $(\ensuremath{\mathcal{H}},\<)$ be an ordered graph. If there is a set $U\subseteq V(\ensuremath{\mathcal{H}})$ of size $|U| \ge \alpha|\ensuremath{\mathcal{H}}|$ such that the vertices in $U$ have a common $r$-neighbourhood isomorphism type $\tau^*$, then we call $(\ensuremath{\mathcal{H}},\<)$ an \emph{$(\alpha,r)$-homogeneous graph} and $\tau^*$ the associated \emph{homogeneity type} of $\ensuremath{\mathcal{H}}$. \end{definition} Homogeneous graphs are useful in fooling $\mathsf{OI}$-algorithms: an $(\alpha,r)$\hyp homogeneous graph forces any local $\mathsf{OI}$-algorithm to produce the same output in at least an $\alpha$ fraction of the nodes in the input graph. 
However, there are some limitations to how large $\alpha$ can be: Let $(\ensuremath{\mathcal{G}},\<)$ be a connected ordered graph on at least two vertices. If $u$ and $v$ are the smallest and the largest vertices of $\ensuremath{\mathcal{G}}$, their $r$-neighbourhoods $\tau(\ensuremath{\mathcal{G}},\<,u)$ and $\tau(\ensuremath{\mathcal{G}},\<,v)$ cannot be isomorphic even for $r=1$. Thus, non-trivial finite graphs are not $(1,1)$-homogeneous. Moreover, an ordered $(2k-1)$-regular graph cannot be $(\alpha,1)$-homogeneous for any $\alpha > 1/2$; this is the essence of the weak $2$-colouring algorithm of Naor and Stockmeyer~\cite{naor95what}. \begin{figure} \centering \includegraphics[page=\PHomogTree]{figs.pdf} \caption{A fragment of a $4$-regular infinite ordered tree $(\ensuremath{\mathcal{G}},\<)$. The numbering of the nodes indicates a $(1,r)$-homogeneous linear order in the neighbourhood of node $27$; grey nodes are larger than $27$ and white nodes are smaller than $27$.}\label{fig:homog-tree} \end{figure} \begin{figure} \centering \includegraphics[page=\PHomog]{figs.pdf} \caption{A $4$-regular graph $\ensuremath{\mathcal{G}}$ constructed as the cartesian product of two directed $6$-cycles. We define the ordered graph $(\ensuremath{\mathcal{G}},\<)$ by choosing the linear order $11 < 12 < \dotsb < 16 < 21 < 22 < \dotsb < 66$. The radius-$1$ neighbourhood of node $25$ is isomorphic to the radius-$1$ neighbourhood of node $42$. In general, there are $16$ nodes (fraction $4/9$ of all nodes) that have isomorphic radius-$1$ neighbourhoods; hence $(\ensuremath{\mathcal{G}},\<)$ is $(4/9, 1)$-homogeneous. 
It is also $(1/9, 2)$-homogeneous.}\label{fig:homog} \end{figure} Our main technical tool will be a construction of graphs that satisfy the following properties: \begin{enumerate}[label=(\arabic*),noitemsep,align=left,labelwidth=4ex,leftmargin=8ex] \item $(1-\epsilon,r)$-homogeneous for any $\epsilon > 0$ and $r$, \item $2k$-regular for any $k$, \item large girth, \item finite order. \end{enumerate} Note that it is relatively easy to satisfy any three of these properties: \begin{itemize}[align=left,labelwidth=13ex,leftmargin=17ex] \item[(1), (2), (3)] Infinite $2k$-regular trees admit a $(1,r)$-homogeneous linear order; see Figure~\ref{fig:homog-tree} for an example. \item[(1), (2), (4)] We can construct a sufficiently large $k$-dimensional toroidal grid graph (cartesian product of $k$ directed cycles) and order the nodes lexicographically coordinate-wise; see Figure~\ref{fig:homog} for an example. However, these graphs have girth $4$ when $k \ge 2$. \item[(1), (3), (4)] A sufficiently large directed cycle is $(1-\epsilon,r)$-homogeneous and has large girth. However, all the nodes have degree~$2$. \item[(2), (3), (4)] It is well known that regular graphs of arbitrarily high girth exist. \end{itemize} Our construction satisfies all four properties simultaneously. \begin{thm}\label{thm:homog-graph} Let $k,r\in\ensuremath{\mathbb{N}}$. For every $\epsilon > 0$ there exists a finite $2k$-regular $(1-\epsilon,r)$-homogeneous connected graph $(\ensuremath{\mathcal{H}}_\epsilon,\<_\epsilon)$ of girth larger than $2r+1$. Furthermore, the following properties hold: \begin{enumerate} \item The homogeneity type $\tau^*$ of $(\ensuremath{\mathcal{H}}_\epsilon,\<_\epsilon)$ does not depend on $\epsilon$. \item The graph $\ensuremath{\mathcal{H}}_\epsilon$ and the type $\tau^*$ are $k$-edge-labelled digraphs. \end{enumerate} \end{thm} We defer the proof of Theorem~\ref{thm:homog-graph} to Section~\ref{sec:homog-graphs}. 
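Returning to the toroidal example of Figure~\ref{fig:homog}, the claimed $(4/9,1)$-homogeneity can be checked mechanically. The Python sketch below is ours: it classifies each node of the product of two directed $6$-cycles by the relative order of the node and its four labelled neighbours, which determines the ordered radius-$1$ neighbourhood here because the neighbours of a node are pairwise non-adjacent.

```python
# Sketch (ours): count nodes of C6 x C6 (directed cycles, lexicographic
# order) sharing the most common ordered radius-1 neighbourhood type.
from collections import Counter

n = 6

def value(i, j):
    """Lexicographic linear order 11 < 12 < ... < 16 < 21 < ... < 66."""
    return i * n + j

def signature(i, j):
    """Relative order of the node and its four labelled neighbours."""
    centre = value(i, j)
    nbrs = [('out_a', value((i + 1) % n, j)),
            ('in_a',  value((i - 1) % n, j)),
            ('out_b', value(i, (j + 1) % n)),
            ('in_b',  value(i, (j - 1) % n))]
    ranked = sorted([centre] + [v for _, v in nbrs])
    return tuple(sorted((lab, ranked.index(v)) for lab, v in nbrs))

counts = Counter(signature(i, j) for i in range(n) for j in range(n))
most_common_share = max(counts.values()) / n**2   # 16/36 = 4/9
```

The $16$ nodes away from the wrap-around boundary in both coordinates share one type, recovering the fraction $4/9$ stated in the caption.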
There, it turns out that Cayley graphs of \emph{soluble} groups suit our needs: The homogeneous toroidal graphs mentioned above are Cayley graphs of the abelian groups $\ensuremath{\mathbb{Z}}_n^k$. Analogously, we use the decomposition of a soluble group into abelian factors to guarantee the presence of a suitable ordering. However, to ensure large girth, the groups we consider must be sufficiently far from being abelian, i.e., they must have large derived length~\cite{conder10limitations}. \subsection{Homogeneous Lifts}\label{ssec:homog-lift} We fix some notation towards a proof of Theorem~\ref{thm:main}. By Theorem~\ref{thm:homog-graph} we let $(\ensuremath{\mathcal{H}}_\epsilon,\<_\epsilon)$, $\epsilon > 0$, be a family of $2|L|$-regular $(1-\epsilon,r)$-homogeneous connected graphs of girth $>2r+1$ interpreted as $L$-digraphs. The homogeneity type $\tau^*$ that is shared by all $\ensuremath{\mathcal{H}}_\epsilon$ is then of the form $\tau^*=(\ensuremath{\mathcal{T}}^*,\<^*,\lambda)$, where $\ensuremath{\mathcal{T}}^*$ is the complete $L$-labelled tree of Section~\ref{sec:po-model}. We use the graphs $\ensuremath{\mathcal{H}}_\epsilon$ to prove the following theorem. \begin{thm}\label{thm:subtree} Let $\ensuremath{\mathcal{G}}$ be an $L$-digraph. For every $\epsilon > 0$ there exists a lift $(\ensuremath{\mathcal{G}}_\epsilon,\<_{\ensuremath{\mathcal{G}}\epsilon})$ of $\ensuremath{\mathcal{G}}$ such that a $(1-\epsilon)$ fraction of the vertices in $(\ensuremath{\mathcal{G}}_\epsilon,\<_{\ensuremath{\mathcal{G}}\epsilon})$ have $r$-neighbourhoods isomorphic to a subtree of $\tau^*=(\ensuremath{\mathcal{T}}^*,\<^*,\lambda)$. Moreover, if $\ensuremath{\mathcal{G}}$ is connected, $\ensuremath{\mathcal{G}}_\epsilon$ can be made connected. 
\end{thm} \begin{proof} Write $(\ensuremath{\mathcal{C}},\<_{\ensuremath{\mathcal{C}}})=(\ensuremath{\mathcal{G}}_\epsilon,\<_{\ensuremath{\mathcal{G}}\epsilon})$ and $(\ensuremath{\mathcal{H}},\<_{\ensuremath{\mathcal{H}}}) = (\ensuremath{\mathcal{H}}_\epsilon,\<_\epsilon)$ for short. Our goal is to construct $(\ensuremath{\mathcal{C}},\<_{\ensuremath{\mathcal{C}}})$ as a certain product of $(\ensuremath{\mathcal{H}},\<_{\ensuremath{\mathcal{H}}})$ and $\ensuremath{\mathcal{G}}$; see Figure~\ref{fig:product}. This product is a modification of the common lift construction of Angluin and Gardiner~\cite{angluin81finite}. \begin{figure} \centering \includegraphics[page=\PProduct]{figs.pdf} \caption{Homogeneous lifts. In this example $|L| = 2$, and the two labels are indicated with two different kinds of arrows. Graph $\ensuremath{\mathcal{H}}_\epsilon$ is a homogeneous $2|L|$-regular ordered $L$-digraph with a large girth---in particular, the local neighbourhood of a node looks like a tree. Graph $\ensuremath{\mathcal{G}}$ is an arbitrary $L$-digraph, not necessarily ordered. Their product $\ensuremath{\mathcal{G}}_\epsilon$ is a lift of $\ensuremath{\mathcal{G}}$, but it inherits the desirable properties of $\ensuremath{\mathcal{H}}_\epsilon$: a large girth and a homogeneous linear order.}\label{fig:product} \end{figure} The lift $\ensuremath{\mathcal{C}}$ is defined on the product set $V(\ensuremath{\mathcal{C}}) = V(\ensuremath{\mathcal{H}})\times V(\ensuremath{\mathcal{G}})$ by ``matching equi-labelled edges'': the out-neighbours of $(h,g)\in V(\ensuremath{\mathcal{C}})$ are vertices $(h',g')\in V(\ensuremath{\mathcal{C}})$ such that $(h,h')\in E(\ensuremath{\mathcal{H}})$, $(g,g')\in E(\ensuremath{\mathcal{G}})$ and $\ell_\ensuremath{\mathcal{H}}(h,h')=\ell_\ensuremath{\mathcal{G}}(g,g')$. An edge $((h,g),(h',g'))\in E(\ensuremath{\mathcal{C}})$ inherits the common label $\ell_\ensuremath{\mathcal{H}}(h,h')=\ell_\ensuremath{\mathcal{G}}(g,g')$. 
The properties of $\ensuremath{\mathcal{C}}$ are related to the properties of $\ensuremath{\mathcal{G}}$ and $\ensuremath{\mathcal{H}}$ as follows. \begin{enumerate} \item The projection $\varphi_{\ensuremath{\mathcal{G}}}\colon V(\ensuremath{\mathcal{C}})\to V(\ensuremath{\mathcal{G}})$ mapping $(h,g)\mapsto g$ is a covering map. This follows from the fact that each edge incident to $g\in V(\ensuremath{\mathcal{G}})$ is always matched against an edge of $\ensuremath{\mathcal{H}}$ in the fibre $V(\ensuremath{\mathcal{H}})\times\{g\}$. \item The projection $\varphi_{\ensuremath{\mathcal{H}}}\colon V(\ensuremath{\mathcal{C}})\to V(\ensuremath{\mathcal{H}})$ mapping $(h,g)\mapsto h$ is not a covering map in case $\ensuremath{\mathcal{G}}$ is not $2|L|$-regular. In any case $\varphi_{\ensuremath{\mathcal{H}}}$ is a graph homomorphism, and this implies that $\ensuremath{\mathcal{C}}$ has girth $> 2r+1$. \end{enumerate} Next, we define a partial order $<_p$ on $V(\ensuremath{\mathcal{C}})$ as $u<_p v \iff \varphi_{\ensuremath{\mathcal{H}}}(u) <_{\ensuremath{\mathcal{H}}} \varphi_{\ensuremath{\mathcal{H}}}(v)$, for $u,v\in V(\ensuremath{\mathcal{C}})$. Note that this definition leaves only pairs of vertices in a common $\varphi_{\ensuremath{\mathcal{H}}}$-fibre incomparable. But since $\ensuremath{\mathcal{H}}$ has large girth, none of the incomparable pairs appear in an $r$-neighbourhood of $\ensuremath{\mathcal{C}}$. We let $<_{\ensuremath{\mathcal{C}}}$ be any completion of $<_p$ into a linear order. The previous discussion implies that $<_{\ensuremath{\mathcal{C}}}$ satisfies $\tau(\ensuremath{\mathcal{C}},\<_{\ensuremath{\mathcal{C}}},u)=\tau(\ensuremath{\mathcal{C}},\<_p,u)$ for all $u\in V(\ensuremath{\mathcal{C}})$. Let $U_{\ensuremath{\mathcal{H}}} \subseteq V(\ensuremath{\mathcal{H}})$, $|U_{\ensuremath{\mathcal{H}}}| \ge (1-\epsilon)|\ensuremath{\mathcal{H}}|$, be the set of type $\tau^*$ vertices in $(\ensuremath{\mathcal{H}},\<_{\ensuremath{\mathcal{H}}})$. 
Set $U_{\ensuremath{\mathcal{C}}} = \varphi^{-1}_{\ensuremath{\mathcal{H}}}(U_{\ensuremath{\mathcal{H}}})$ so that $|U_{\ensuremath{\mathcal{C}}}| \ge (1-\epsilon)|\ensuremath{\mathcal{C}}|$. Let $u\in U_{\ensuremath{\mathcal{C}}}$. By our definition of $<_p$, $\varphi_{\ensuremath{\mathcal{H}}}$ maps the $r$-neighbourhood $\tau_u=\tau(\ensuremath{\mathcal{C}},\<_{\ensuremath{\mathcal{C}}},u)$ into $\tau(\ensuremath{\mathcal{H}},\<_{\ensuremath{\mathcal{H}}},\varphi_{\ensuremath{\mathcal{H}}}(u))\simeq \tau^*$ while preserving the order. But because $\tau^*$ is a tree, $\varphi_{\ensuremath{\mathcal{H}}}$ must be injective on the vertex set of $\tau_u$ so that $\tau_u$ is isomorphic to a subtree of $\tau^*$ as required. Finally, suppose $\ensuremath{\mathcal{G}}$ is connected. Then, by averaging, some connected component of $\ensuremath{\mathcal{C}}$ will have vertices in $U_{\ensuremath{\mathcal{C}}}$ with density at least $(1-\epsilon)$. This component satisfies the theorem. \end{proof} \section{Proof of Main Theorem}\label{sec:proof-mainthm} Next, we use the tools of the previous section to prove Theorem~\ref{thm:main}. For clarity of exposition we first prove Theorem~\ref{thm:main} in the special case where $\ensuremath{\mathsf{A}}$ is an $\mathsf{OI}$-algorithm. The subsequent proof for an $\mathsf{ID}$-algorithm $\ensuremath{\mathsf{A}}$ uses a somewhat technical but well-known Ramsey type argument. \subsection{Proof of Main Theorem for \texorpdfstring{$\boldsymbol\mathsf{OI}$}{OI}-algorithms}\label{ssec:mainthm-oi} We will prove the general and connected versions of Theorem~\ref{thm:main} simultaneously; for the proof of the connected version it suffices to consider only connected lifts below. We do not need the assumption that $\ensuremath{\mathcal{F}}$ does not contain any trees. Let $\ensuremath{\mathsf{\Pi}}$ be as in the statement of Theorem~\ref{thm:main}. 
Suppose an $\mathsf{OI}$-algorithm $\ensuremath{\mathsf{A}}$ finds an \Apx{\alpha} of $\ensuremath{\mathsf{\Pi}}$ in $\ensuremath{\mathcal{F}}$. We define a $\mathsf{PO}$-algorithm $\ensuremath{\mathsf{B}}$ simply by setting for $W \in \ensuremath{\mathfrak{W}}$, \[ \ensuremath{\mathsf{B}}(W) = \ensuremath{\mathsf{A}}\bigl((\ensuremath{\mathcal{T}}^*,\<^*,\lambda)\upharpoonright W\bigr). \] Now, Theorem~\ref{thm:subtree} translates into saying that for every $\ensuremath{\mathcal{G}}\in\ensuremath{\mathcal{F}}$ and $\epsilon > 0$ we have that $\ensuremath{\mathsf{A}}(\ensuremath{\mathcal{G}}_\epsilon,\<_{\ensuremath{\mathcal{G}}\epsilon},u)=\ensuremath{\mathsf{B}}(\ensuremath{\mathcal{G}}_\epsilon,u)$ for at least a $(1-\epsilon)$ fraction of nodes $u\in V(\ensuremath{\mathcal{G}}_\epsilon)$. The claim that $\ensuremath{\mathsf{B}}$ works as expected follows essentially from this fact as we argue next. For simplicity, we assume the solutions to $\ensuremath{\mathsf{\Pi}}$ are sets of vertices so that $\ensuremath{\mathsf{A}}(\ensuremath{\mathcal{G}})\subseteq V(\ensuremath{\mathcal{G}})$; solutions that are sets of edges are handled similarly. Fix $\ensuremath{\mathcal{G}}\in\ensuremath{\mathcal{F}}$ and let $\varphi_\epsilon\colon V(\ensuremath{\mathcal{G}}_\epsilon)\to V(\ensuremath{\mathcal{G}})$, $\epsilon > 0$, be the associated covering maps. \paragraph{Algorithm $\boldsymbol\ensuremath{\mathsf{B}}$ Finds a Feasible Solution of $\boldsymbol\ensuremath{\mathsf{\Pi}}$ on $\boldsymbol\ensuremath{\mathcal{G}}$.} Let $\ensuremath{\mathsf{V}}$ be a local $\mathsf{PO}$-algorithm verifying the feasibility of a solution for $\ensuremath{\mathsf{\Pi}}$; we may assume $\ensuremath{\mathsf{V}}$ also runs in time $r$. 
For $\epsilon>0$ sufficiently small, each $v\in V(\ensuremath{\mathcal{G}})$ has a pre-image $v'\in \varphi_\epsilon^{-1}(v)$ such that $\ensuremath{\mathsf{A}}$ and $\ensuremath{\mathsf{B}}$ agree on the vertices $\bigcup_{v\in V(\ensuremath{\mathcal{G}})} B_{\ensuremath{\mathcal{G}}\epsilon}(v',r)$. Thus, $\ensuremath{\mathsf{V}}$ accepts the solution $\ensuremath{\mathsf{B}}(\ensuremath{\mathcal{G}}_\epsilon)$ on the vertices $v'$. But because $\varphi_\epsilon(\{v':v\in V(\ensuremath{\mathcal{G}})\}) = V(\ensuremath{\mathcal{G}})$ it follows that $\ensuremath{\mathsf{V}}$ accepts the solution $\ensuremath{\mathsf{B}}(\ensuremath{\mathcal{G}}) = \varphi_\epsilon(\ensuremath{\mathsf{B}}(\ensuremath{\mathcal{G}}_\epsilon))$ on every node in $\ensuremath{\mathcal{G}}$. \paragraph{Algorithm $\boldsymbol\ensuremath{\mathsf{B}}$ Finds an $\boldsymbol\alpha$-Approximation of $\boldsymbol\ensuremath{\mathsf{\Pi}}$ on $\boldsymbol\ensuremath{\mathcal{G}}$.} We assume $\ensuremath{\mathsf{\Pi}}$ is a minimisation problem; maximisation problems are handled similarly. Let $X\subseteq V(\ensuremath{\mathcal{G}})$ and $X_\epsilon\subseteq V(\ensuremath{\mathcal{G}}_\epsilon)$ be some optimal solutions of $\ensuremath{\mathsf{\Pi}}$. As $\epsilon \to 0$, the solutions $\ensuremath{\mathsf{B}}(\ensuremath{\mathcal{G}}_\epsilon)$ and $\ensuremath{\mathsf{A}}(\ensuremath{\mathcal{G}}_\epsilon)$ agree on almost all the vertices. Indeed, a simple calculation shows that $|\ensuremath{\mathsf{B}}(\ensuremath{\mathcal{G}}_\epsilon)| \le f(\epsilon)\cdot|\ensuremath{\mathsf{A}}(\ensuremath{\mathcal{G}}_\epsilon)|$ for some $f$ with $f(\epsilon)\to 1$ as $\epsilon \to 0$. 
Furthermore, \[ \frac{|\ensuremath{\mathsf{B}}(\ensuremath{\mathcal{G}})|}{|X|} = \frac{|\varphi_\epsilon^{-1}(\ensuremath{\mathsf{B}}(\ensuremath{\mathcal{G}}))|}{|\varphi_\epsilon^{-1}(X)|} \le \frac{|\ensuremath{\mathsf{B}}(\ensuremath{\mathcal{G}}_\epsilon)|}{|X_\epsilon|} \le \frac{f(\epsilon)\cdot|\ensuremath{\mathsf{A}}(\ensuremath{\mathcal{G}}_\epsilon)|}{|X_\epsilon|} \le f(\epsilon)\alpha, \] where the first equality follows from $\varphi_\epsilon$ being an $n$-lift, and the first inequality follows from $\varphi^{-1}_\epsilon(\ensuremath{\mathsf{B}}(\ensuremath{\mathcal{G}}))=\ensuremath{\mathsf{B}}(\ensuremath{\mathcal{G}}_\epsilon)$ and the fact that $\varphi^{-1}_\epsilon(X)$ is a feasible solution so that $|X_\epsilon| \le |\varphi^{-1}_\epsilon(X)|$. Since the above inequality holds for every $\epsilon > 0$ we must have that $|\ensuremath{\mathsf{B}}(\ensuremath{\mathcal{G}})|/|X| \le \alpha$, as desired. \subsection{Proof of Main Theorem for \texorpdfstring{$\boldsymbol\mathsf{ID}$}{ID}-algorithms}\label{ssec:mainthm-id} We extend the above proof to the case of local $\mathsf{ID}$-algorithms $\ensuremath{\mathsf{A}}$ by designing ``worst-case'' vertex identifiers for the instances in $\ensuremath{\mathcal{F}}$ in order to make $\ensuremath{\mathsf{A}}$ behave similarly to a $\mathsf{PO}$-algorithm on tree neighbourhoods. To do this we use the Ramsey technique of Naor and Stockmeyer~\cite{naor95what}; see also Czygrinow et al.~\cite{czygrinow08fast}. For a reference on Ramsey's theorem see Graham et al.~\cite{graham80ramsey}. We use the following notation: if $(X,\<_X)$ and $(Y,\<_Y)$ are linearly ordered sets with $|X| \le |Y|$, we write $f\colon (X,\<_X)\hookrightarrow (Y,\<_Y)$ for the unique order-preserving injection $f\colon X\to Y$ that maps the $i$th element of $X$ to the $i$th element of $Y$. A \emph{$t$-set} is a set of size $t$, and the set of $t$-subsets of $X$ is denoted $X^{(t)}$. 
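Before putting Ramsey's theorem to work, a toy computation may help fix intuition for how monochromatic $t$-subsets arise. The following Python sketch (our illustration, not part of the proof) brute-forces the smallest nontrivial case, $t=2$ with two colours: every $2$-colouring of the $2$-subsets of a $6$-element set contains a monochromatic $3$-subset, while on a $5$-element set a monochromatic $3$-subset can be avoided.

```python
from itertools import combinations, product

def monochromatic_triple(colouring, points):
    # return a 3-subset whose three 2-subsets all share one colour, else None
    for triple in combinations(points, 3):
        if len({colouring[pair] for pair in combinations(triple, 2)}) == 1:
            return triple
    return None

# R(3,3) <= 6: every 2-colouring of the 2-subsets of {1,...,6} works
points6 = tuple(range(1, 7))
pairs6 = list(combinations(points6, 2))
for bits in product([0, 1], repeat=len(pairs6)):
    assert monochromatic_triple(dict(zip(pairs6, bits)), points6) is not None

# R(3,3) > 5: on {1,...,5} a monochromatic 3-subset can be avoided
points5 = tuple(range(1, 6))
pairs5 = list(combinations(points5, 2))
assert any(monochromatic_triple(dict(zip(pairs5, bits)), points5) is None
           for bits in product([0, 1], repeat=len(pairs5)))
```

The proof below uses the same existence statement, only with $t=|\ensuremath{\mathcal{T}}^*|$ and $k=|\Omega^\ensuremath{\mathfrak{W}}|$ colours in place of $t=2$ and $k=2$.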
Write $\Omega^\ensuremath{\mathfrak{W}}$ for the family of functions $\ensuremath{\mathfrak{W}}\to\Omega$; recall that each $\ensuremath{\mathsf{B}}\in \Omega^\ensuremath{\mathfrak{W}}$ can be interpreted as a $\mathsf{PO}$-algorithm. Set $k=|\Omega^\ensuremath{\mathfrak{W}}|$ and $t=|\ensuremath{\mathcal{T}}^*|$. We consider every $t$-subset $A\in\ensuremath{\mathbb{N}}^{(t)}$ to be ordered by the usual order $<$ on $\ensuremath{\mathbb{N}}$. For $W\in\ensuremath{\mathfrak{W}}$ we let $f_{W,A}\colon (W,\<^*)\hookrightarrow(A,\<)$ so that the vertex-relabelled tree $f_{W,A}((\ensuremath{\mathcal{T}}^*,\lambda)\upharpoonright W)$ has the $|W|$ smallest numbers in $A$ as vertices. Define a $k$-colouring $c\colon \ensuremath{\mathbb{N}}^{(t)}\to\Omega^\ensuremath{\mathfrak{W}}$ by setting \[ c(A)(W) = \ensuremath{\mathsf{A}}(f_{W,A}((\ensuremath{\mathcal{T}}^*,\lambda)\upharpoonright W)). \] For each $m \ge t$ we can use Ramsey's theorem to obtain a number $R(m) \ge m$, so that for every $R(m)$-set $I\subseteq\ensuremath{\mathbb{N}}$ there exists an $m$-subset $J\subseteq I$ such that $J^{(t)}$ is monochromatic under $c$, i.e., all $t$-subsets of $J$ have the same colour. In particular, for every interval \[ I(m,i)=[(i-1) R(m)+1,\, i R(m)], \quad i \ge 1, \] there exist an $m$-subset $J(m,i)\subseteq I(m,i)$ and a colour (i.e., an algorithm) $\ensuremath{\mathsf{B}}_{m,i} \in \Omega^\ensuremath{\mathfrak{W}}$ such that $c(A) = \ensuremath{\mathsf{B}}_{m,i}$ for all $t$-subsets $A \subseteq J(m,i)$. This construction has the following property. \begin{prop}\label{prop:agreement} Suppose $m \ge |\ensuremath{\mathcal{G}}_\epsilon|+t$. 
Algorithms $\ensuremath{\mathsf{A}}$ and $\ensuremath{\mathsf{B}}_{m,i}$ produce the same output on at least a $(1-\epsilon)$ fraction of the vertices in the vertex-relabelled $L$-digraph $f_{m,i}(\ensuremath{\mathcal{G}}_\epsilon)$, where \[ f_{m,i}\colon (V(\ensuremath{\mathcal{G}}_\epsilon),\<_{\ensuremath{\mathcal{G}}\epsilon})\hookrightarrow(J(m,i),\<). \] \end{prop} \begin{proof} By Theorem~\ref{thm:subtree}, let $U\subseteq V(f_{m,i}(\ensuremath{\mathcal{G}}_\epsilon))$, $|U| \ge (1-\epsilon)|\ensuremath{\mathcal{G}}_\epsilon|$, be the set of vertices $u$ with $\tau(f_{m,i}(\ensuremath{\mathcal{G}}_\epsilon),\<,u)$ isomorphic to a subtree of $\tau^*$. In particular, for a fixed $u\in U$ we can choose $W\in \ensuremath{\mathfrak{W}}$ such that \[ \tau(f_{m,i}(\ensuremath{\mathcal{G}}_\epsilon),\<,u) \simeq (\ensuremath{\mathcal{T}}^*,\<^*,\lambda)\upharpoonright W. \] Now, as $m$ is large, there exists a $t$-set $A\subseteq J(m,i)$ such that \[ \tau(f_{m,i}(\ensuremath{\mathcal{G}}_\epsilon),u) = f_{W,A}((\ensuremath{\mathcal{T}}^*,\lambda)\upharpoonright W). \] Thus, $\ensuremath{\mathsf{A}}$ and $\ensuremath{\mathsf{B}}_{m,i}$ agree on $u$ by the definition of $\ensuremath{\mathsf{B}}_{m,i}$. \end{proof} For every $n\in\ensuremath{\mathbb{N}}$ some colour appears with density at least $1/k$ (i.e., appears at least $n/k$ times) in the sequence $\ensuremath{\mathsf{B}}_{m,1},\ensuremath{\mathsf{B}}_{m,2},\dotsc,\ensuremath{\mathsf{B}}_{m,n}$. Hence, let $\ensuremath{\mathsf{B}}_m$ be a colour that appears with density at least $1/k$ among these sequences for infinitely many $n$. Let $\ensuremath{\mathsf{B}}$ be a colour appearing among the $\ensuremath{\mathsf{B}}_m$ for infinitely many $m$. We claim $\ensuremath{\mathsf{B}}$ satisfies Theorem~\ref{thm:main}. In fact, Theorem~\ref{thm:main} follows from the following proposition together with the considerations of Section~\ref{ssec:mainthm-oi}. 
\begin{prop} For every $\ensuremath{\mathcal{G}}_\epsilon$ there exists an $n$-lift $\ensuremath{\mathcal{H}}$ of $\ensuremath{\mathcal{G}}_\epsilon$ such that $V(\ensuremath{\mathcal{H}})\subseteq\{1,2,\dotsc,s(|\ensuremath{\mathcal{H}}|)\}$ and $\ensuremath{\mathsf{A}}(\ensuremath{\mathcal{H}},u) = \ensuremath{\mathsf{B}}(\ensuremath{\mathcal{H}},u)$ for a $(1-\epsilon)$ fraction of nodes $u\in V(\ensuremath{\mathcal{H}})$. Moreover, if $\ensuremath{\mathcal{G}}_\epsilon$ is connected and not a tree, $\ensuremath{\mathcal{H}}$ can be made connected. \end{prop} \begin{proof} Let $m$ be such that $m \ge |\ensuremath{\mathcal{G}}_\epsilon|+t$ and $\ensuremath{\mathsf{B}} = \ensuremath{\mathsf{B}}_m$. For infinitely many $n$ there exists an $n$-set $I\subseteq[nk]$ of indices such that $\ensuremath{\mathsf{B}} = \ensuremath{\mathsf{B}}_{m,i}$ for $i\in I$. Consider the following $n$-lift of $\ensuremath{\mathcal{G}}_\epsilon$ obtained by taking disjoint unions: \[ \ensuremath{\mathcal{H}} = \bigcup_{i\in I} f_{m,i}(\ensuremath{\mathcal{G}}_\epsilon). \] Algorithms $\ensuremath{\mathsf{A}}$ and $\ensuremath{\mathsf{B}}$ agree on a $(1-\epsilon)$ fraction of the nodes in $\ensuremath{\mathcal{H}}$ by Proposition~\ref{prop:agreement}. Furthermore, we have $|\ensuremath{\mathcal{H}}|=n|\ensuremath{\mathcal{G}}_\epsilon|$ and $V(\ensuremath{\mathcal{H}})\subseteq\{1,2,\dotsc,n k R(m)\}$. We are assuming that $s(n)=\omega(n)$, so choosing a large enough $n$ proves the non-connected version of the claim. Finally, suppose $\ensuremath{\mathcal{G}}_\epsilon$ is connected and not a tree. We may assume that there is an edge $e=(u,v)\in E(\ensuremath{\mathcal{G}}_\epsilon)$ such that $\ensuremath{\mathcal{G}}_\epsilon$ remains connected when $e$ is removed and that a $(1-\epsilon)$ fraction of vertices in $\ensuremath{\mathcal{G}}_\epsilon$ have $r$-neighbourhoods not containing $e$ that are isomorphic to subtrees of $\tau^*$.
Now $\ensuremath{\mathcal{H}}$ above is easily modified into a connected graph by redefining the directed matching between the fibre $\{u_i\}_{i\in I}$ of $u$ and the fibre $\{v_i\}_{i\in I}$ of $v$. Namely, let $\pi$ be a cyclic permutation on $I$ and set \[ E' = \bigl(E(\ensuremath{\mathcal{H}}) \smallsetminus \{(u_i,v_i)\}_{i\in I} \bigr) \,\cup\, \{(u_i,v_{\pi(i)})\}_{i\in I}. \] Then $\ensuremath{\mathcal{H}}'=(V(\ensuremath{\mathcal{H}}), E')$ is easily seen to be a connected $n$-lift of $\ensuremath{\mathcal{G}}_\epsilon$ satisfying the claim. \end{proof} \begin{remark}\label{rem:identifiers} Above, we assumed that instances $\ensuremath{\mathcal{G}}$ have node identifiers $V(\ensuremath{\mathcal{G}})\subseteq \{1,2,\dotsc,s(n)\}$, $n=|\ensuremath{\mathcal{G}}|$, for $s(n)=\omega(n)$. By choosing identifiers more economically as in the work of Czygrinow et al.~\cite{czygrinow08fast} one can show lower bounds for the graph problems of Section~\ref{ssec:intro-local-apx} even when $s(n)=n$. \end{remark} \section{Construction of Homogeneous Graphs of Large Girth}\label{sec:homog-graphs} In this section we prove Theorem~\ref{thm:homog-graph}. Our construction uses Cayley graphs of semi-direct products of groups. First, we recall the terminology in use here; for a standard reference on group theory see, e.g., Rotman~\cite{rotman95introduction}. For the benefit of the reader who is not well-versed in group theory we include in Appendix~\ref{app:wreath} a short primer on the semi-direct product groups that are used below. \subsection{Semi-Direct Products} Let $G$ and $H$ be groups with $H$ acting on $G$ as a group of automorphisms. We write $h\cdot g$ for the action of $h\in H$ on $g\in G$ so that the mapping $g\mapsto h\cdot g$ is an automorphism of $G$. The \emph{semi-direct product} $G\rtimes H$ is defined to be the set $G\times H$ with the group operation given by \[ (g,h)(g',h') = (g(h\cdot g'),hh'). 
\] \subsection{Cayley Graphs} The \emph{Cayley graph} $\ensuremath{\mathcal{C}}(G,S)$ of a group $G$ with respect to a finite set $S\subseteq G$ is an $S$-digraph on the vertex set $G$ such that each $g\in G$ has an outgoing edge $(g,gs)$ labelled $s$ for each $s\in S$. We require that $1\notin S$ so as not to have any self-loops. We do not require that $S$ is a generating set for $G$, i.e., the graph $\ensuremath{\mathcal{C}}(G,S)$ need not be connected. If $\varphi\colon H\to G$ is an onto group homomorphism and $S\subseteq H$ is a set such that the mapping $\varphi$ is injective on $S\cup\{1\}$, then $\varphi$ naturally induces a covering map of digraphs $\ensuremath{\mathcal{C}}(H,S)$ and $\ensuremath{\mathcal{C}}(G,\varphi(S))$. \subsection{Proof of Theorem~\texorpdfstring{\ref{thm:homog-graph}}{3}}\label{ssec:proof-homog-graph} Let $n\in\ensuremath{\mathbb{N}}$ be an even number. We consider three families of groups, $\{H_i\}_{i \ge 1}$, $\{W_i\}_{i \ge 1}$, and $\{U_i\}_{i \ge 1}$, that are variations on a common theme. The families are defined iteratively as follows: \begin{align*} H_1 &= \ensuremath{\mathbb{Z}}_n, & W_1 &= \ensuremath{\mathbb{Z}}_2, & U_1 &= \ensuremath{\mathbb{Z}}, \\ H_{i+1} &= H_i^2 \rtimes \ensuremath{\mathbb{Z}}_n, & W_{i+1} &= W_i^2 \rtimes \ensuremath{\mathbb{Z}}_2, & U_{i+1} &= U_i^2 \rtimes \ensuremath{\mathbb{Z}}. \end{align*} Here, the cyclic group $\ensuremath{\mathbb{Z}}_n=\{0,1,\dotsc,n-1\}$ acts on the direct product $H_i^2 = H_i\times H_i$ by cyclically permuting the coordinates, i.e., the subgroup $2\ensuremath{\mathbb{Z}}_n \le \ensuremath{\mathbb{Z}}_n$ acts trivially and the elements in $1+ 2\ensuremath{\mathbb{Z}}_n$ swap the two coordinates. The groups $\ensuremath{\mathbb{Z}}_2$ and $\ensuremath{\mathbb{Z}}$ act analogously in the definitions of $W_i$ and $U_i$. See Appendix~\ref{app:wreath} for more information on groups $H_i$, $W_i$, and $U_i$. 
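To make the recursive definition concrete, the following Python sketch (our illustration; the triple encoding is an assumption of ours, not notation from the paper) implements the first nontrivial case $W_2 = W_1^2 \rtimes \ensuremath{\mathbb{Z}}_2$, representing an element as $(a,b,z)$ with $a,b\in\ensuremath{\mathbb{Z}}_2$ and the $\ensuremath{\mathbb{Z}}_2$-component $z$ acting by swapping the two coordinates, and checks the group axioms by brute force.

```python
from itertools import product

def z2_add(a, b):              # the base group W_1 = Z_2
    return (a + b) % 2

def act(z, pair):              # Z_2 acts on W_1^2 by swapping coordinates
    a, b = pair
    return (b, a) if z % 2 == 1 else (a, b)

def w2_mul(x, y):
    # semi-direct product rule: (g, h)(g', h') = (g (h . g'), h h')
    (a, b, z), (c, d, w) = x, y
    c2, d2 = act(z, (c, d))
    return (z2_add(a, c2), z2_add(b, d2), z2_add(z, w))

elements = list(product([0, 1], repeat=3))   # |W_2| = 2^{d(2)} = 2^3 = 8
e = (0, 0, 0)                                # identity element
assert all(w2_mul(x, e) == x == w2_mul(e, x) for x in elements)
assert all(w2_mul(x, y) in elements for x in elements for y in elements)
assert all(w2_mul(w2_mul(x, y), z) == w2_mul(x, w2_mul(y, z))
           for x in elements for y in elements for z in elements)
# W_2 is non-abelian (it is the dihedral group of order 8)
assert w2_mul((1, 0, 1), (0, 0, 1)) != w2_mul((0, 0, 1), (1, 0, 1))
```

Iterating the same rule, with $\ensuremath{\mathbb{Z}}_n$ or $\ensuremath{\mathbb{Z}}$ in place of the acting $\ensuremath{\mathbb{Z}}_2$, produces the groups $H_i$ and $U_i$ defined above.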
The underlying sets of the groups $H_i$, $W_i$, and $U_i$ consist of $d(i)$-tuples of elements in $\ensuremath{\mathbb{Z}}$, for $d(i)=2^i-1$, so that $W_i\subseteq H_i\subseteq U_i$ \emph{as sets}. Interpreting these tuples as points in $\ensuremath{\mathbb{R}}^{d(i)}$ we immediately get a natural embedding of every Cayley graph of these groups in $\ensuremath{\mathbb{R}}^{d(i)}$. This geometric intuition will become useful later. \begin{enumerate} \item The groups $W_i$ are $i$-fold iterated regular wreath products of the cyclic group $\ensuremath{\mathbb{Z}}_2$. These groups have order $|W_i|=2^{d(i)}$ and they are sometimes called \emph{symmetric $2$-groups}; they are isomorphic to the Sylow $2$-subgroups of the symmetric group on $2^i$ letters~\cite[p.\ 176]{rotman95introduction}. \item The groups $U_i$ are natural extensions of the groups $W_i$ by the free abelian group of rank $d(i)$: the mapping $\varphi_i\colon U_i\to W_i$ that reduces each coordinate modulo $2$ is easily seen to be an onto homomorphism with abelian kernel $(2\ensuremath{\mathbb{Z}})^{d(i)} \simeq \ensuremath{\mathbb{Z}}^{d(i)}$. \item The groups $H_i$ are intermediate between $U_i$ and $W_i$ in that the mapping $\psi_i\colon U_i\to H_i$ that reduces each coordinate modulo $n$ is an onto homomorphism, and the mapping $\varphi_i'\colon H_i\to W_i$ that reduces each coordinate modulo $2$ is an onto homomorphism. In summary, the following diagram commutes: \[ \xymatrix{ U_i \ar[r]^{\psi_i} \ar[rd]_{\varphi_i} & H_i \ar[d]^{\varphi_i'} \\ & W_i } \] \end{enumerate} Our goal will be to construct a suitable Cayley graph $\ensuremath{\mathcal{H}}$ of some $H_i$. We will use the groups $W_i$ to ensure $\ensuremath{\mathcal{H}}$ has large girth, whereas the groups $U_i$ will guarantee that $\ensuremath{\mathcal{H}}$ has an almost-everywhere homogeneous linear ordering. 
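The commuting diagram can be checked coordinatewise: since $\psi_i$ reduces each coordinate modulo $n$ and $\varphi_i$, $\varphi_i'$ reduce modulo $2$, the identity $\varphi_i = \varphi_i'\circ\psi_i$ amounts to the elementary fact that reduction mod $n$ followed by reduction mod $2$ equals reduction mod $2$ whenever $n$ is even. A minimal sketch (ours):

```python
def psi(u, n):                 # psi: reduce each coordinate modulo n
    return tuple(x % n for x in u)

def phi(u):                    # phi (and phi'): reduce each coordinate modulo 2
    return tuple(x % 2 for x in u)

n = 6                          # n is even, as assumed in the construction
for u in [(-7, 3, 12), (5, -2, 0), (101, -44, 9)]:
    assert phi(psi(u, n)) == phi(u)      # the diagram commutes

# evenness of n is essential: for odd n the diagram fails
assert phi(psi((4,), 3)) != phi((4,))
```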
\paragraph{Girth.} Gamburd et al.~\cite{gamburd09girth} study the girth of random Cayley graphs and prove, in particular, that a random $k$-subset of $W_i$ generates a Cayley graph of large girth with high probability when $i\gg k$ is large. We only need the following weaker version of their theorem (see Appendix~\ref{app:high-girth} for an alternative, constructive proof). \begin{thm}[{Corollary to~\cite[Theorem~6]{gamburd09girth}}]\label{thm:high-girth} Let $k,r\in \ensuremath{\mathbb{N}}$. There exists an $i\in\ensuremath{\mathbb{N}}$ and a set $S\subseteq W_i$, $|S|=k$, such that the girth of the Cayley graph $\ensuremath{\mathcal{C}}(W_i,S)$ is larger than $2r+1$. \end{thm} Fix a large enough $j\in\ensuremath{\mathbb{N}}$ and a $k$-set $S\subseteq W_j$ so that $\ensuremath{\mathcal{C}}(W_j,S)$ has a girth larger than $2r+1$. Henceforth, we omit the subscript $j$ and write $H$, $W$, $U$, $\varphi$, $\psi$ and $d$ in place of $H_{j}$, $W_{j}$, $U_{j}$, $\varphi_{j}$, $\psi_{j}$ and $d(j)$. Interpreting $S$ as a set of elements of $H$ and $U$ (so that $\varphi(S)=\psi(S)=S$) we construct the Cayley graphs \[ \ensuremath{\mathcal{H}}=\ensuremath{\mathcal{C}}(H,S), \quad \ensuremath{\mathcal{W}} = \ensuremath{\mathcal{C}}(W,S), \quad\text{and}\quad \ensuremath{\mathcal{U}} = \ensuremath{\mathcal{C}}(U,S). \] As each of these graphs is a lift of $\ensuremath{\mathcal{W}}$, none have cycles of length at most $2r+1$ and their $r$-neighbourhoods are trees. \paragraph{Linear Order.} Next, we introduce a \emph{left-invariant} linear order $<$ on $U$ satisfying \[ u< v \implies wu < wv, \qquad \text{for all } u,v,w\in U. \] Such a relation can be defined by specifying a \emph{positive cone} $P\subseteq U$ of elements that are greater than the identity $1=1_{U}$ so that \[ u < v \iff 1 < u^{-1}v \iff u^{-1}v \in P. 
\] A relation $<$ defined this way is automatically left-invariant; it is transitive iff $u,v\in P$ implies $uv\in P$; and every pair $u\neq v$ is comparable iff for all $w\neq 1$, either $w\in P$ or $w^{-1}\in P$. The existence of a $P$ satisfying these conditions follows from the fact that $U$ is a torsion-free soluble group (e.g.,~\cite{conrad59right}), but it is easy enough to verify that setting \begin{equation}\label{eq:pos-cone} P = \bigl\{ (u_1,u_2,\dotsc,u_i, 0, 0, \dotsc, 0) \in U : 1 \le i \le d \text{ and } u_i > 0 \bigr\} \end{equation} satisfies the required conditions above (see Appendix~\ref{app:pos-cone}). Because $U$ acts (by multiplication on the left) on $\ensuremath{\mathcal{U}}$ as a vertex-transitive group of graph automorphisms, it follows that the structures $(\ensuremath{\mathcal{U}},\<,u)$, $u\in U$, are pairwise isomorphic. A fortiori, the $r$-neighbourhoods $\tau(\ensuremath{\mathcal{U}},\<,u)$, $u\in U$, are all pairwise isomorphic. Let $\tau^*$ be this common $r$-neighbourhood isomorphism type. \paragraph{Transferring the Linear Order on \texorpdfstring{$\boldsymbol U$}{U} to \texorpdfstring{$\boldsymbol\ensuremath{\mathcal{H}}$}{H}.} Let $V(\ensuremath{\mathcal{H}})$ be ordered by restricting the order $<$ on $U$ to the set $V(\ensuremath{\mathcal{H}})=\ensuremath{\mathbb{Z}}_n^d$ underlying the group $H$. Note that $<$ is not a left-invariant order on $H$ (indeed, no non-trivial finite group can be left-invariantly ordered). Nevertheless, we will argue that, as $n\to\infty$, almost all $u\in V(\ensuremath{\mathcal{H}})$ have $r$-neighbourhoods of type $\tau^*$. The neighbours of a vertex $u\in V(\ensuremath{\mathcal{U}})$ are elements $us$ where $s\in S\cup S^{-1}\subseteq [-1,1]^d$. The right multiplication action of $s\in S\cup S^{-1}$ on $u$ can be described in two steps as follows: First, the coordinates of $s$ are permuted (as determined by $u$) to obtain a vector $s'$. 
Then, $us$ is given as the standard addition of the vectors $u$ and $s'$ in $\ensuremath{\mathbb{Z}}^d\subseteq\ensuremath{\mathbb{R}}^d$. Hence, $us \in u+[-1,1]^d$, and moreover, \begin{equation}\label{eq:bur} B_{\ensuremath{\mathcal{U}}}(u,r) \subseteq u+[-r,r]^d. \end{equation} This means that vertices close to $u$ in the graph $\ensuremath{\mathcal{U}}$ are also close in the associated geometric $\ensuremath{\mathbb{R}}^d$-embedding. Consider the set of inner nodes $I=[r,(n-1)-r]^d$. Let $u\in I$. By \eqref{eq:bur}, the vertex set $B_{\ensuremath{\mathcal{U}}}(u,r)$ is contained in $\ensuremath{\mathbb{Z}}_n^d$. This implies that the cover map $\psi$ is the identity on $B_{\ensuremath{\mathcal{U}}}(u,r)$ and consequently the $r$-neighbourhood $\tau(\ensuremath{\mathcal{H}},\<,u)$ \emph{contains} the ordered tree $\tau(\ensuremath{\mathcal{U}},\<,u)\simeq\tau^*$. If $\tau(\ensuremath{\mathcal{H}},\<,u)$ had any edges in addition to those of $\tau(\ensuremath{\mathcal{U}},\<,u)$, this would entail a cycle of length $\le 2r+1$ in $\ensuremath{\mathcal{H}}$, which is not possible. Thus, $\tau(\ensuremath{\mathcal{H}},\<,u)\simeq \tau^*$. The density of elements in $\ensuremath{\mathcal{H}}$ having $r$-neighbourhood type $\tau^*$ is therefore at least $|I|/|\ensuremath{\mathcal{H}}| = (n-2r)^d/n^d \ge 1-\epsilon$, for large $n$. Finally, to establish Theorem~\ref{thm:homog-graph} it remains to address $\ensuremath{\mathcal{H}}$'s connectedness. But if $\ensuremath{\mathcal{H}}$ is not connected, an averaging argument shows that some connected component must have the desired density of at least $(1-\epsilon)$ of type $\tau^*$ vertices. \section*{Acknowledgements} We thank Christoph Lenzen and Roger Wattenhofer for discussions. This work was supported in part by the Academy of Finland, Grants 132380 and 252018, the Research Funds of the University of Helsinki, and the Finnish Cultural Foundation.
\section{Introduction} The Hubbard model has played the role of a prototype of the strongly correlated electron system, and the one-dimensional case has been well understood since the exact Bethe Ansatz solution by Lieb and Wu \cite{Liebwu,Lieb,Ovch,Shiba,Coll,Woyn,Ogata,Weng}. Our main understanding of the one-dimensional Hubbard model is the Luttinger liquid behavior such as the charge-spin separation. The study of the two-dimensional Hubbard model as a higher dimensional generalization of the Luttinger liquid was motivated by the observation of the non-Fermi liquid behavior of the normal state of the high $T_c$ superconductor \cite{Anderson}. In one dimension the charge and spin have different dispersion relations and gap behaviors. All excitations are identified as density excitations of either charge or spin, and the correlation functions show power-law decay. While the charge gap at half filling disappears as soon as we dope holes, the spin remains gapless at any filling. Some numerical calculations for the one-dimensional Hubbard model as well as the $t$-$J$ model have provided a better understanding of the charge-spin separation in one dimension \cite{Jagla,Penc,Zacher,Tohyama,Kim,Eder}. Jagla {\it et al}. observed separation of the single electron wave packet into the charge and spin density wave packets propagating with different velocities \cite{Jagla}. Zacher {\it et al}. identified separate excitations for the charge and spin in the single particle spectrum \cite{Zacher}. Similarly, for the $t$-$J$ model the dispersion energies of the charge and spin were found to scale with $t$ and $J$, respectively \cite{Kim,Eder}. The two-chain system has recently attracted more attention in connection with the ladder compounds \cite{Dagotto,Scalapino}.
The phase diagram of the two-chain Hubbard model derived by several authors shows diversity depending on parameters such as the on-site Coulomb interaction $U$, the electron filling $n$, and the interchain hopping \cite{Fabrizio,Khveshchenko,Balents,Noack,Park}. At half filling the system is a spin-gapped insulator. As we dope holes the system becomes a Luttinger liquid for large interchain hopping, as in one dimension. But for small interchain hopping the system becomes a spin-gapped phase with a gapless charge mode, and the correlation functions show exponential decay. Therefore, the two-chain system shows differences from the one-dimensional case for small interchain hopping, and at light hole doping the isotropic case, for which the interchain hopping is the same as the intrachain hopping, belongs to this regime. The questions are whether there is charge-spin separation in the two-chain system and how different it is from the one-dimensional case. The density matrix renormalization group (DMRG) method can describe the ground state properties and the low energy physics of a system quite accurately, and it has proven to be particularly accurate in one dimension or in quasi one dimension \cite{White,Liang}. Hallberg studied the spin dynamical correlation function of the one-dimensional Heisenberg model using the DMRG method and the continued fraction expansion of the Green's function, that is, the recursion technique \cite{Hallberg}. There a certain momentum state was targeted, and a careful choice of the target states, depending on the number of states kept per block, was crucial to produce accurate results for higher energy excitations and longer chains, since the state does not remain in a given momentum sector during the DMRG iteration. However, because of the real space aspect of the DMRG method it is natural to consider the calculation of a quantity which is independent of the momentum, such as the local correlation function, to study the dynamics.
In this case, one may expect better accuracy for the dynamical correlation functions with fewer states kept and fewer target states. Techniques to study the dynamical properties based on the DMRG method were later developed by other authors. Pang {\it et al.} combined the DMRG method and the maximum entropy method \cite{Pang}. K{\" u}hner and White recently proposed an alternative method for the correction at higher frequencies \cite{Kuhner}. In this paper we study the local correlation functions of charge and spin for the one-chain and two-chain Hubbard model using the DMRG method and the recursion technique. We study the behavior of the gaps, bandwidths, and weights of the spectra of charge and spin for various, mainly large, values of $U$ and $n$, and then compare the results for the one-chain and two-chain. \section{Calculations} The local correlation function, such as the local density of states of electrons, of a many body system described by the Hamiltonian $H$\cite{Gagliano} is \begin{equation} n_A(i,t-t')=\langle 0|A^\dagger_i(t)A_i(t')|0\rangle, \end{equation} where $|0\rangle$ is the ground state of the system and $A_i(t) = e^{iHt/\hbar}A_ie^{-iHt/\hbar}$. $A_i$ is an operator in coordinate space. For example, it is $c^\dagger_i$ for electron, $c^\dagger_{i\uparrow} c_{i\downarrow}$ for spin, and $c^\dagger_{i\uparrow} c^\dagger_{i\downarrow}$ for charge. By inserting the identity $\sum_{n} |n\rangle \langle n| = I$, where $|n\rangle$ are the complete set of eigenstates of $H$, the Fourier transform of Eq. ($1$) is \begin{equation} n_A(i,w)=\sum_{n}|\langle n|A_i|0\rangle|^2 \delta(w-(E_n-E_0)), \end{equation} where $E_0$ is the ground state energy of the system. We define the local Green's function as \begin{equation} G_A(i,z)=\langle0|A^\dagger_i(z-H)^{-1}A_i|0\rangle. \end{equation} Then the local correlation function can be expressed as \begin{equation} n_A(i,w)=-\frac{1}{\pi}{\rm Im}\, G_A(i,w+i\epsilon+E_0).
\end{equation} The local Green's function can be calculated from the recursion technique \cite{VM}. In the Lanczos routine we choose $|u_0\rangle = A_i|0\rangle$ as the initial state. Then we get a set of orthogonal states which satisfy \begin{equation} H|u_0\rangle = a_0|u_0\rangle+b_1|u_1\rangle \end{equation} and for $n \geq 1$, \begin{equation} H|u_n\rangle = a_n|u_n\rangle + b_{n+1}|u_{n+1}\rangle + b_n|u_{n-1}\rangle, \end{equation} where $a_n=\langle u_n|H| u_n\rangle/\langle u_n|u_n\rangle$, $b_0=0$, and $b^2_n=\langle u_n|u_n\rangle/\langle u_{n-1}|u_{n-1}\rangle$ for $n\geq 1$. With the coefficients $a$'s and $b$'s above, we have a continued fraction form of $G_A(i,z)$, \begin{equation} G_A(i,z) = \frac{1}{z-a_0-\frac{b_1^2}{z-a_1-\frac{b_2^2}{z-a_2-\cdots}}}. \end{equation} In the DMRG method the system is divided into block $23$, block $1$ and block $4$ \cite{White,Liang} and the ground state can be represented as a sum of products of states in each block, \begin{equation} |0\rangle = \sum_{i_{23},i_1,i_4} {\rm C}_{i_{23},i_1,i_4} |i_{23}\rangle |i_1\rangle |i_4\rangle. \end{equation} We take the site $i$ of $A_i$ in block $23$ when block $23$ is in the middle of the whole lattice during the DMRG iteration. Then the local correlation function has little boundary effect for the finite size lattice although we use the open boundary condition. We prepare the ground state $|0\rangle$ from the DMRG iteration and obtain the coefficients $a$'s and $b$'s from the recursion equations, Eqs. ($5$) and ($6$), until the Lanczos routine converges.
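The recursion above is easy to exercise on a small example. The following self-contained Python sketch (ours; the $3\times 3$ matrix is an arbitrary stand-in for $H$, and the starting vector plays the role of $|u_0\rangle = A_i|0\rangle$) generates the coefficients of Eqs. ($5$) and ($6$) and evaluates the continued fraction of Eq. ($7$), comparing against the Green's function obtained by solving $(z-H)x = u_0$ directly.

```python
def matvec(H, v):
    return [sum(h * x for h, x in zip(row, v)) for row in H]

def dot(u, v):
    return sum(a.conjugate() * b for a, b in zip(u, v))

def lanczos(H, u0, steps):
    # unnormalised Lanczos recursion, Eqs. (5)-(6); returns a_n and b_n^2
    a, b2 = [], []
    u_prev, u, norm_prev = None, u0, None
    for n in range(steps):
        norm = dot(u, u)
        a.append(dot(u, matvec(H, u)) / norm)
        if n > 0:
            b2.append(norm / norm_prev)
        w = [x - a[n] * y for x, y in zip(matvec(H, u), u)]
        if n > 0:
            w = [x - b2[n - 1] * y for x, y in zip(w, u_prev)]
        u_prev, u, norm_prev = u, w, norm
    return a, b2

def green_cf(z, a, b2):
    # Eq. (7), evaluated from the innermost level of the fraction outwards
    denom = z - a[-1]
    for n in reversed(range(len(a) - 1)):
        denom = z - a[n] - b2[n] / denom
    return 1.0 / denom

def green_direct(z, H, u0):
    # Gaussian elimination on (z I - H) x = u0, then G = <u0|x>
    n = len(H)
    A = [[(z if i == j else 0.0) - H[i][j] for j in range(n)] + [u0[i]]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    x = [A[i][n] / A[i][i] for i in range(n)]
    return dot(u0, x)

H = [[2.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 2.0]]
u0 = [1.0, 0.0, 0.0]                   # stand-in for A_i|0>
a, b2 = lanczos(H, u0, steps=3)        # exact after 3 steps for a 3x3 H
z = 0.5 + 0.1j
assert abs(green_cf(z, a, b2) - green_direct(z, H, u0)) < 1e-9
```

For a matrix of dimension $d$ the continued fraction terminates after $d$ levels and reproduces the resolvent exactly; in the DMRG calculation the fraction is instead truncated when the Lanczos routine converges.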
The Hubbard model Hamiltonian\cite{Lieb} is \begin{equation} H = -t\sum_{\langle i,j\rangle, \sigma} (c^\dagger_{i\sigma} c_{j\sigma} + {\rm H.c.}) + U\sum_{i} (n_{i\uparrow}-\frac{1}{2})(n_{i\downarrow}-\frac{1}{2}), \end{equation} where $t$ is the hopping integral and $n_{i\sigma}=c^\dagger_{i\sigma} c_{i\sigma}$ is the number of particles with spin $\sigma$ at site $i$. The factor $\frac{1}{2}$ in the second term is introduced to adjust the chemical potential for the particle-hole symmetry at half filling. The spin operators are \begin{equation} J^{3}=\frac{1}{2}\sum_{i}(n_{i\uparrow} - n_{i\downarrow}),\; J^{+}=\sum_{i}c^\dagger_{i\uparrow} c_{i\downarrow},\; J^{-}=(J^{+})^\dagger. \end{equation} The charge operators are \begin{equation} \hat{J}^{3}=\frac{1}{2}\sum_{i}(n_{i\uparrow} + n_{i\downarrow} - 1),\; \hat{J}^{+}=\sum_{i}(-1)^i c^\dagger_{i\uparrow} c^\dagger_{i\downarrow},\; \hat{J}^{-}=(\hat{J}^{+})^\dagger. \end{equation} For the calculations of the local correlation function of spin we have \begin{equation} A_{s,i} = c^\dagger_{i\uparrow} c_{i\downarrow}, \end{equation} which commutes with the charge operators and excites the spin only, without changing the total charge and the local charge. For charge, if we have \begin{equation} A_{c,i} = c^\dagger_{i\uparrow} c^\dagger_{i\downarrow}, \end{equation} it commutes with the spin operators and excites the charge only, without changing the total spin and the local spin. However, since it creates an up spin and a down spin at the same site, there is always an energy cost of $U$ and we cannot see the excitations in the lower band. Instead we have a bonding form of the $A_{c,i}$ operator for the sites $i$ and $i+1$ (both in block $23$), \begin{equation} A_{c,i} = \frac{1}{\sqrt{2}}(c^\dagger_{i,\uparrow} c^\dagger_{i+1,\downarrow} + c^\dagger_{i+1,\uparrow} c^\dagger_{i,\downarrow}). \end{equation} For the two-chain we choose the sites $i$ and $i+1$ in the same rung.
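The commutation properties quoted above can be verified directly on a minimal example. The sketch below (ours, not from the paper; $t=1$ and $U=8$ are arbitrary choices) represents the fermion operators of a two-site Hubbard dimer as Jordan-Wigner matrices on the $16$-dimensional Fock space and checks that the particle-hole symmetric Hamiltonian commutes with the spin operator $J^{3}$ and with the charge raising operator $\sum_i(-1)^i c^\dagger_{i\uparrow}c^\dagger_{i\downarrow}$.

```python
L = 4                       # fermionic modes: (site, spin) -> 2*site + spin
DIM = 1 << L

def zeros():
    return [[0.0] * DIM for _ in range(DIM)]

def annihilate(j):          # Jordan-Wigner matrix of c_j on bit strings
    M = zeros()
    for s in range(DIM):
        if s >> j & 1:
            sign = (-1) ** bin(s & ((1 << j) - 1)).count("1")
            M[s ^ (1 << j)][s] = float(sign)
    return M

def dagger(M):
    return [list(col) for col in zip(*M)]   # matrices are real: transpose

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(DIM)) for j in range(DIM)]
            for i in range(DIM)]

def add(A, B, sgn=1.0):     # A + sgn * B
    return [[a + sgn * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def comm_norm(A, B):        # max-entry norm of the commutator [A, B]
    C = add(mul(A, B), mul(B, A), sgn=-1.0)
    return max(abs(x) for row in C for x in row)

c = [annihilate(j) for j in range(L)]
cd = [dagger(M) for M in c]
n_op = [mul(cd[j], c[j]) for j in range(L)]
ident = [[1.0 if i == j else 0.0 for j in range(DIM)] for i in range(DIM)]

t, U = 1.0, 8.0
H = zeros()
for s in (0, 1):            # hopping between the two sites, both spins
    H = add(H, add(mul(cd[0 + s], c[2 + s]), mul(cd[2 + s], c[0 + s])), sgn=-t)
for i in (0, 2):            # U (n_up - 1/2)(n_dn - 1/2) on each site
    H = add(H, mul(add(n_op[i], ident, sgn=-0.5),
                   add(n_op[i + 1], ident, sgn=-0.5)), sgn=U)

J3 = add(add(n_op[0], n_op[2]), add(n_op[1], n_op[3]), sgn=-1.0)  # 2 J^3
Jp_charge = add(mul(cd[0], cd[1]), mul(cd[2], cd[3]), sgn=-1.0)   # charge J^+

assert comm_norm(H, J3) < 1e-12
assert comm_norm(H, Jp_charge) < 1e-12
```

Note that the charge operator commutes with $H$ only in the particle-hole symmetric form of the interaction, which is why the $\frac{1}{2}$ shifts matter.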
Then we see the excitations in the lower band from the components of the ground state where both sites $i$ and $i+1$ are empty. In order to check the accuracy of the method we calculate the local density of states of electrons for the free case ($U=0$) and compare with the exact result. In Fig. $1$, we have the result for a $1$ by $20$ lattice. We have the ground state as the target state during the iteration and the number of the states kept per block is $200$. We use the open boundary condition for all calculations in this paper. For the DMRG result and exact result, the overall features of the spectra are similar and the bandwidths are the same. In particular, the low energy parts of the spectra are almost identical for the positions and the weights of the peaks, which implies great accuracy of this method for the low energy states. In this paper we calculate the local correlation functions of charge and spin for $1$ by $32$ and $2$ by $16$ Hubbard lattices. We choose the parameters $U=8, 16, 32$ and $n=1.0, 0.75, 0.5$. For the two-chain we have the same interchain hopping as the intrachain hopping. The number of states kept per block in the DMRG procedure is typically $200$. To make the spectrum of the local correlation functions visible we use the Lorentzian width $\epsilon=0.05$. For both charge and spin we calculate the counterpart, the hole part, with the operators $A^\dagger_{c,i}$ and $A^\dagger_{s,i}$, respectively. \section{Results} For the one-chain case (Figs. $2$, $3$ and $4$), at half filling the charge has a gap in the middle and has the particle-hole symmetry. The bandwidth of the charge is of the order of $8t$. As we dope holes, the charge gap disappears\cite{Dagotto2}. In the particle part of the charge spectrum there are a lower band, an upper band (energy $\sim U$ above the lower band), and another band (energy $\sim 2U$ above the lower band), and this depends on whether the sites $i$ and $i+1$ are occupied or not before we add electrons.
Therefore, the weight of the lower band reflects the probability that both sites $i$ and $i+1$ are empty, and this weight increases with hole doping. The border line between the particle and hole spectra is twice the chemical potential ($2\mu$) since we create two particles or two holes. As in the case of small two-dimensional clusters\cite{Dagotto2}, it shifts down as the hole doping increases. Since both this quantity ($2\mu$) and the gap between the lower and the upper band are of the order of $U$, the left edge of the upper band is always around $0$. The spin does not have a gap and has the particle-hole symmetry at half filling. The bandwidth is of the order of $2J$ ($J=4t^2/U$) when we take the half-width as the bandwidth. When we dope holes there are distinguishable inside peaks, which have the same shape as in the half filling case, in a broad background. The width of the broad background is of the order of $8t$ and the width of the inside peak is proportional to $J$. Since there are holes which can move around and come back to the original position when we flip a spin, this broad background corresponds to the charge fluctuation and the bandwidth of this background is the same as the bandwidth of a single electron \cite{Noack2}. As the hole doping increases, the weight and the width of the inside peak decrease. The width change approximately follows $J'$ for the squeezed spin chain with hole doping $\delta=1-n$, derived by Weng {\it et al}. \cite{Weng}, \begin{equation} J' = J[(1-\delta) + \sin(2\pi\delta)/2\pi]. \end{equation} As the on-site Coulomb interaction $U$ increases, the charge gap at half filling, which appears to be proportional to $U$, increases; the bandwidth of the charge remains the same but the bandwidth of the spin decreases. This confirms that the bandwidth of the charge scales with $t$ and the bandwidth of the spin scales with $J$, which is consistent with the results for the one-dimensional $t$-$J$ model \cite{Kim,Eder}.
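The narrowing of the inside peak is easy to quantify from Eq. ($15$); the short sketch below (ours) evaluates the effective exchange at the fillings studied here, $n = 1.0, 0.75, 0.5$.

```python
import math

def j_eff(J, delta):
    # Eq. (15): effective exchange of the squeezed spin chain at doping delta
    return J * ((1.0 - delta) + math.sin(2.0 * math.pi * delta) / (2.0 * math.pi))

J = 4.0 * 1.0**2 / 8.0          # J = 4 t^2 / U with t = 1, U = 8
widths = {n: j_eff(J, 1.0 - n) for n in (1.0, 0.75, 0.5)}
assert abs(widths[1.0] - J) < 1e-12              # half filling: J' = J
assert widths[0.5] < widths[0.75] < widths[1.0]  # peak narrows with doping
```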
For large $U$, the distinction between the inside peak and the outside background of the spin becomes clear. For the two-chain case, both charge and spin have features similar to the one-chain case. We show only the $U=32$ case in Fig. $5$. Both charge and spin have particle-hole symmetry at half filling. Like the one-chain case, the charge gap at half filling is proportional to $U$ and disappears as we dope holes. The weight of the lower band increases as the hole doping increases. For the spin, the bandwidth at half filling is proportional to $J$ and away from half filling we find the inside peaks in a broad background. The weight and the width of the inside peak decrease as the hole doping increases. However, there are some differences from the one-chain case. The bandwidth of the charge at half filling is larger for the two-chain than for the one-chain because of the additional interchain hopping term. For the spin, the sharp edges of the inside peaks show the existence of the gap. The spin spectrum for $n=0.5$ is significantly different from the one-chain case. The pseudogap feature here resembles the charge spectrum, and this may be an indication of the absence of charge-spin separation in this regime. \section{Conclusions} In this work we have studied the dynamics of charge and spin for the one-chain and two-chain Hubbard model. Since the DMRG method and the recursion technique produce the low energy part of the spectra of the local correlation functions with great accuracy, the different behaviors of the charge and spin are clear in the low energy excitations for both the one-chain and the two-chain. The bandwidths are proportional to $t$ for the charge and to $J$ for the spin, respectively. However, the background spectrum of the spin away from half filling shows charge behavior. The spin spectrum for the two-chain at large hole doping implies a different nature between one dimension and higher dimensions. 
\indent\newline \Large {\bf Acknowledgements} \normalsize \indent\newline This work was partially supported by the Office of Naval Research through Contracts Nos. N00014-92-J-1340 and N00014-95-1-0398, and the National Science Foundation through Grant No. DMR-9403201. \pagebreak
\section{Introduction} The transport of excitations, energy, charge or correlations is a topic of current interest both in the classical and in the quantum regime. For example, efficient and coherent transport of excitations has been shown to play a crucial role in biological processes such as photosynthesis~\cite{engel2007evidence, scholes2011lessons, lee2007coherence}, which has inspired proposals for improvement of light collection and harvesting in solar cells~\cite{menke2013tailored}. In realistic scenarios, disorder and imperfections lead to an inhibition of transport, rendering it necessary to design strategies to combat such detrimental effects~\cite{anderson1958absence, segev2013anderson, akselrod2014visualization, chabanov2000statistical, dalichaouch1991microwave, lahini2008anderson, lahini2009observation, john1984electromagnetic, john1987strong, schwartz2007transport, wiersma1997localization}. A simple toy model for testing possible scenarios where disorder can be circumvented is a one-dimensional chain of two-level systems: here, in the single-excitation subspace, comparisons of analytical results with large scale numerics are possible. The excitation hopping can be included as stemming from the vacuum-induced dipole-dipole coupling, seen as an exchange interaction. Diagonal, or frequency, disorder can be included as a natural consequence of inhomogeneous broadening, as different sites see different local environments, leading to an imprecision in the definition of each site's natural transition frequency. Non-diagonal, or tunneling, disorder comes from the random positioning of the sites and therefore from a varying strength of the dipole-dipole interactions between nearest neighbors.\\ \begin{figure}[b] \includegraphics[width=0.80\columnwidth]{fig1.pdf} \caption{Sketch of a frequency disordered chain of two-level quantum systems inside an optical cavity (top). 
Schematics of the interactions in the system where the cavity works as a \textit{bus} mode providing an additional channel of long-range transport which can overcome the slower dipole-dipole mediated mechanism (bottom).} \label{fig1} \end{figure} \indent In the context of the simplified one-dimensional model treated here (illustrated in Fig.~\ref{fig1}) it has been shown~\cite{schachenmayer2015cavity} that, in the strong-coupling regime of light-matter platforms, the common coupling of $N$ sites to a single delocalized optical cavity mode can change the scaling of transport inhibition from exponential to $N^{-2}$. This can be seen as a collective effect where the coupling of all sites to a common polaritonic `bus' cavity mode~\cite{butte2006room, coles2014polariton, houdre1994measurement, hutchison2012modifying, kasprzak2006bose, kena2010room, lidzey1998strong, nelson1996room, weisbuch1992observation} leads to long-range interactions surpassing the efficiency of the nearest-neighbor excitation hopping process~\cite{feist2015extraordinary, zhong2016non, zhong2017energy, biondi2014self,reitz2018energy}. A different kind of collective delocalized state is encountered in densely packed ensembles of emitters, where the common coupling to the infinite number of electromagnetic vacuum modes leads to superradiant/subradiant quantum superpositions~\cite{dicke1954coherence,ficek1987quantum,ficek2002entangled} exhibiting larger/smaller radiative loss than an individual, isolated two-level system. This mechanism can provide protection of excitations against decay~\cite{needham2019subradiance}. 
Efficient targeting of subradiant collective states has also been shown via tailored pumping, where a sequence of phases is imprinted on a chain of coupled quantum emitters, or via a combination of laser pulses and magnetic field gradients~\cite{plankensteiner2015selective}.\\ \indent We analyze possibilities of providing robustness of transport with respect to radiative loss in free space and to diagonal disorder in a cavity setting. In the free space scenario, we provide a partially analytical approach to the question of transport in the presence of decay and describe a phase imprinting mechanism for accessing asymmetric collective subradiant states with minimal radiative loss. Moreover, we analytically and numerically describe the preservation of quantum correlations between two propagating excitations, which we quantify by their concurrence as a measure of entanglement. In the cavity setting, we extend results from Ref.~\cite{schachenmayer2015cavity} to provide conditions for polaritonic transport with asymmetric cavity coupling in the presence of diagonal, frequency disorder.\\ \indent Section~\ref{Model and equations} introduces a simplified model of interacting quantum emitters coupled to a cavity mode and undergoing collective decay in the single excitation regime. Section~\ref{Free space transport} analytically and numerically describes the initialization and diffusion/propagation of a Gaussian wavepacket on a subradiant array and quantifies the robustness of quantum correlations between two propagating wavepackets. Section~\ref{Cavity transport} provides analytical results for polariton-mediated transport with asymmetric cavity coupling and numerical simulations for diagonal disorder.\\ \section{Model and equations} \label{Model and equations} We consider a chain of two-level systems (TLS) positioned at $\mathbf{r}_j$ with ground and excited states $\ket{g}_j$ and $\ket{e}_j$ for $j=1,\dots,S$. 
In some cases we will take $S=N$, where $N$ is the number of emitters within the optical cavity volume, while in other cases we will consider $S=N+2M$, where in-coupling and out-coupling chains of $M$ emitters are added. The second case is useful in treating the problem of resonantly passing an excitation wavepacket through the cavity. Moving from one case to the other simply requires setting $M=0$. We first write the master equation for the system, which can be used either to derive equations of motion for averages, which we will denote as the coupled-dipoles model, or to reduce the dynamics to the single excitation subspace, which we dub the quantum model. \subsection{Master equation} \label{master_equation} The free Hamiltonian of the system is written in terms of ladder operators $\sigma_j=\ket{g}_j\bra{e}_j$ and $\sigma_j^\dagger$ as $\mathcal{H}_0= \sum_{j} \omega_{j}\sigma^\dagger_j \sigma_j$ (notice that we set $\hbar=1$ and the Hamiltonian could be reexpressed in terms of population inversion operators $\sigma^z_j=2\sigma^\dagger_j \sigma_j-1$). Diagonal disorder can be included by assuming a given frequency distribution $\omega_j=\omega+\delta_j$ where $\delta_j$ is some zero-averaged distribution. The emitters see the same vacuum electromagnetic modes which, after elimination, give rise to dipole-dipole interactions of magnitude $\Omega_{ij}=\Omega(|\mathbf{r_{ij}}|)$, with $\mathbf{r_{ij}}=\mathbf{r_{i}}-\mathbf{r_{j}}$. The dipole-dipole contribution yields $\mathcal{H}_\Omega= \sum_{j \neq i} \Omega_{ij} \sigma^\dagger_{j} \sigma_{i}$, where $\Omega_{ij}$ strongly depends on the interparticle separation $r_{ij}$, the angle $\theta$ of the transition dipole with respect to the interparticle axis and the single particle independent decay rate $\gamma$ (see Appendix~\ref{A}). 
Since in the near field the dipole-dipole interaction scales as $1/r_{ij}^3$, one can typically make the nearest-neighbor approximation, i.e., consider that the only non-vanishing coupling strengths are given by $\Omega_{j, j+1}=\Omega$. The TLS can be placed within the delocalized mode of an optical cavity of frequency $\omega_c$ and bosonic annihilation operator $a$, modeled by the Tavis-Cummings Hamiltonian \begin{equation} \mathcal{H}_c= \omega_c a^{\dagger} a+ \sum_{j} g_{j} (a^\dagger \sigma_j + a \sigma_j^\dagger) \end{equation} where $g_{j}$ is the coupling between emitter $j$ and the cavity. Collective radiative decay is included in Lindblad form ${\cal{L}}_{\text{rad}}[\rho]=\sum_{jj'} \gamma_{jj'} [\sigma_{j} \rho \sigma^{\dagger}_{j'}- (\sigma_{j}^\dagger \sigma_{j'}\rho +\rho\sigma_{j}^\dagger \sigma_{j'})/2]$. The matrix $\gamma_{ij}$ describes both independent and mutual decay processes. Notice that $\gamma_{ij}$ strongly depends on the same parameters as $\Omega_{ij}$, as they both stem from the same physical mechanism (see Appendix~\ref{A}). The cavity photon loss is described by ${\cal{L}}_{\text{cav}}[\rho]= \kappa[a \rho a^{\dagger}-(a^\dagger a\rho +\rho a^\dagger a)/2]$. With the total Lindblad term ${\cal{L}}[\rho]={\cal{L}}_{\text{rad}}[\rho]+{\cal{L}}_{\text{cav}}[\rho]$, the dynamics of the system can then be followed by solving the open system master equation \begin{equation} \dot{\rho}(t)=-i\left[\mathcal{H},\rho\right]+{\cal{L}}[\rho], \end{equation} where $\rho$ refers to both emitter and cavity states. From the master equation we can derive a set of coupled equations of motion for the averages $\alpha=\braket{a}$ and $\beta_i=\braket{\sigma_i}$. 
The equations can be linearized in the limit of weak excitation where $\braket{\sigma_j^\dagger \sigma_j}\ll 1$ (average population of each emitter is much smaller than unity) to lead to \begin{subequations} \begin{align} \label{CoupledDipoles} \dot{\beta}_i &= -(\frac{\gamma_i}{2}+i\omega_i) \beta_i-i g_i \alpha-\sum_{j} (i\Omega_{ij}+\frac{\gamma_{ij}}{2})\beta_j,\\ \dot{\alpha} &= -(\frac{\kappa}{2}+i\omega_{\text{c}})\alpha-i\sum_{j}g_j\beta_j. \end{align} \end{subequations} We will refer to this formulation as the coupled dipole model, as in the weak excitation regime the dynamics is equivalent to that of a coherently and incoherently coupled system of oscillators. \subsection{The single excitation approximation} We construct the ground state as $\ket{G}=\ket{g_1,...g_S0_\text{ph}}$ with all spins down and no cavity photons and excited states as $\ket{j}=\ket{g_1,...e_j,...g_S0_\text{ph}}$ for $j=1,...,S$ and $\ket{S+1}=\ket{g_1,...g_j,...g_S1_\text{ph}}$ for the excitation residing inside the cavity mode. In consequence, when restricting the dynamics to a single excitation, the master equation requires the solution for $(S+2)\times(S+2)$ elements. Similarly to the approach of Ref.~\cite{needham2019subradiance} (but with an extension to include the cavity photon state as well as disordered frequencies) we derive simplified equations of motion in which the ground state and the excited state manifold decouple: \begin{subequations} \begin{align} \dot{\rho}_{GG} &= \sum_{i,j}\gamma_{ij} \rho_{ij}\\ \dot{\rho}_{Gj} &= i\omega_j \rho_{Gj} + i \sum_{k}\rho_{Gk} \left[\Omega_{kj}+\frac{i}{2}\gamma_{kj}+G_{kj}\right]\\ \dot{\rho}_{ij} &= -i \sum_{k} \rho_{kj} \left[\Omega_{ik}-\frac{i}{2}\gamma_{ik}+G_{ik}+\omega_i\delta_{ik}\right]\\ \nonumber &+ i \sum_{k} \rho_{ik} \left[\Omega_{kj}+\frac{i}{2}\gamma_{kj}+G_{kj}+\omega_j\delta_{kj}\right]. \end{align} \end{subequations} with the cavity-coupling being $G_{ij}=g_i \delta_{j,S+1}+g_j \delta_{i,S+1}$. 
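The coupled-dipole equations, Eqs.~\ref{CoupledDipoles}, can be integrated directly. The sketch below assumes nearest-neighbour hopping, independent decay $\gamma_{ij}=\gamma\delta_{ij}$ and uniform cavity couplings $g_j=g$; all parameter values are arbitrary illustrative choices, not those of the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (arbitrary units)
S, Omega, gamma, g, kappa = 20, 1.0, 0.1, 0.5, 0.2
omega = omega_c = 0.0  # rotating frame, cavity on resonance

def rhs(t, y):
    """Linearized coupled-dipole equations for beta_i = <sigma_i>, alpha = <a>."""
    beta, alpha = y[:S], y[S]
    dbeta = -(gamma / 2 + 1j * omega) * beta - 1j * g * alpha
    dbeta[:-1] += -1j * Omega * beta[1:]   # hopping from the right neighbour
    dbeta[1:] += -1j * Omega * beta[:-1]   # hopping from the left neighbour
    dalpha = -(kappa / 2 + 1j * omega_c) * alpha - 1j * g * np.sum(beta)
    return np.concatenate([dbeta, [dalpha]])

y0 = np.zeros(S + 1, dtype=complex)
y0[0] = 0.1  # weak excitation amplitude on the first site
sol = solve_ivp(rhs, (0.0, 5.0), y0, rtol=1e-8, atol=1e-10)
excitation = np.sum(np.abs(sol.y[:, -1]) ** 2)  # total |beta|^2 + |alpha|^2
```

Since $d/dt(\sum_i|\beta_i|^2+|\alpha|^2)=-\gamma\sum_i|\beta_i|^2-\kappa|\alpha|^2\le 0$ for independent decay, the total excitation measure can only decrease in time.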
One can now simply follow the evolution of the reduced density matrix in the single excitation manifold and write $\dot{\rho}_E=-i(Z\rho_E-\rho_E Z^*)$, where $Z_{ij}=\Omega_{ij}-\frac{i}{2}\gamma_{ij}+G_{ij}+\omega_j \delta_{ij}$. A quantity that one can numerically follow is the cavity transmission function~\cite{schachenmayer2015cavity} $T (t) = \sum_{j=M+N+1}^S \rho_{jj}(t)$ that quantifies the amount of excitation found on the out-coupling island. \section{Free space transport} \label{Free space transport} Before moving on to analyze the effect of a delocalized bosonic cavity field we aim at elucidating a few aspects of transport in free space when collective super- and subradiant states are taken into account. We will mainly consider the coupled dipoles model where we initialize a propagating wavepacket containing on average less than one excitation. The initialization stage could be done for example by applying a short pulse from a laser with a Gaussian profile and with a propagation direction tilted with respect to the chain axis. We describe diffusion and propagation with independent decay after which we show how subradiance can provide a protection of the excitation. We then analyze, within the quantum model, the propagation of two initially entangled wavepackets where quantum correlations are quantified by concurrence as a measure of entanglement. \begin{figure}[b] \centering \includegraphics[width=0.90\columnwidth]{fig2.pdf} \caption{Initialization scheme for the Gaussian wavepacket of excitation on a chain of near-field coupled emitters, achieved by a short laser pulse of duration $T$. For $t>0$, the imprinted excitation will propagate to the right with a quasimomentum $q_0$.} \label{fig2} \end{figure} \subsection{Wavepacket evolution with independent decay} We initialize a Gaussian wavepacket providing a weak excitation onto the system via an external tilted laser field with a Gaussian profile in amplitude (as depicted in Fig.~\ref{fig2}). 
The driving Hamiltonian reads \begin{equation} \mathcal{H}_{\text{drive}}=\sum_{j} \eta_{j}(t) \left[\sigma_{j} e^{i \omega_\ell t} e^{i \mathbf{k} \cdot \mathbf{r_{j}}}+\text{h.c.}\right], \end{equation} where $\omega_\ell$ is the laser frequency and $\mathbf{r}_j= a j \mathbf{x}$ describes positioning within an equidistant chain in the x-direction with lattice constant $a$. Notice that the tilting of the laser is equivalent to imprinting a quasi-momentum $q_0=k a \sin\phi$ derived from $\mathbf{k} \cdot \mathbf{r_{j}}=(k \sin\phi)(j a)=(k a \sin\phi)j=q_0 j$. The pulse is assumed constant between $t=-T$ and $t=0$ and the excitation amplitude follows a Gaussian profile with $f_{j}=1/\sqrt{\sqrt{2 \pi}w}e^{-(j-j_0)^2/(4w^2)}$. We assume that the pulse is fast enough ($T<\Omega^{-1}$) such that no hopping of excitations can occur during the driving. This allows one to neglect the dipole-dipole interaction during the initialization stage and derive a simple equation of motion for the coherences at each site: \begin{equation} \partial_t{\braket{\sigma_j}} =-i\omega_j \braket{\sigma_j}+ i \eta_j \braket{\sigma_j^z} e^{-i q_0 j} e^{-i \omega_\ell t}. \end{equation} Within the low-excitation approximation, obtained by assuming that $\braket{\sigma_j^z} \sim -1$, and introducing the notation $\beta_j=\braket{\sigma_{j}}$, one can rewrite the equation of motion for the $j^{\text{th}}$ dipole moment in a frame rotating at the laser frequency \begin{equation} \dot{\beta_j}=-i\Delta_j \beta_j-i \eta_{j}(t) e^{i q_0 j}, \end{equation} where $\Delta_j=\omega_j-\omega_\ell$. Since the equations are decoupled we can integrate them for the duration of the pulse $-T<t<0$ with initial condition $\beta_j(-T)=0$ (no excitation before the pump) to obtain \begin{equation} \beta_j(0) =2 i \eta_0 T f_j \frac{\sin\left(\Delta_j T/2\right)}{\Delta_j T/2}e^{-i q_0 j}=\beta_0 e^{-i q_0 j} f_j. 
\end{equation} To ensure that the weak excitation condition is fulfilled we will impose the condition that the total population in the chain (in the absence of disorder, such that $\Delta_j=\Delta$) under resonance conditions, $\sum_j |\beta_j(0)|^2=4(\eta_0 T)^2$, is much less than unity.\\ \indent After the initialization stage, we follow the evolution of the wavepacket for $t>0$ in the presence of hopping under the Hamiltonian $\mathcal{H}_{t>0}=\mathcal{H}_0+\mathcal{H}_\Omega$ and diagonal independent decay. To this purpose we write Eqs.~\ref{CoupledDipoles} (in the absence of the cavity mode and assuming all hopping rates equal to $\Omega$, all decay rates equal to $\gamma$ and all frequencies $\omega$) in a general form $\dot{\vec{\beta}}=- M \vec{\beta}$ where \begin{align} M_{jj'}=(i\omega+\gamma/2)\delta_{jj'}+i\Omega (\delta_{j,j'+1}+\delta_{j,j'-1}). \label{eqM} \end{align} We have already assumed that the dipole-dipole exchange can be reduced to a nearest-neighbor interaction and that we are in the case of open boundary conditions (OBC). For periodic boundary conditions (PBC) we would add two extra terms $i\Omega (\delta_{j,1}\delta_{j',S}+\delta_{j,S}\delta_{j',1})$ which couple the first with the last emitter in the chain.\\ \indent Notice that the evolution matrix can be diagonalized by the same transformation that diagonalizes the Toeplitz matrix such that one can write $M=V\Lambda V^{-1}$. Assuming PBC we have \begin{align} \Lambda_{kk'}=i[\omega+2\Omega \cos{(k\theta)}-i\gamma/2]\delta_{kk'}=(i\mathcal{E}_k+\gamma/2)\delta_{kk'}, \end{align} (with $\theta=2\pi/S$) and the matrix of eigenvectors has the elements $V_{jk}=e^{-i j k\theta }/\sqrt{S}$. Notice that this matrix is symmetric, as $V_{jk}=V_{kj}$, and for the inverse matrix we have $[V^{-1}]_{jk}=V^*_{jk}$. Here, the index $k$ runs from $0$ to $S-1$ while the index $j$ runs from $1$ to $S$. 
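The closed-form diagonalization $M=V\Lambda V^{-1}$ can be verified against brute-force matrix exponentiation. The sketch below uses the real sine-mode eigenvectors of the OBC case, $V_{jk}=\sqrt{2/(S+1)}\sin(\theta jk)$ with $\theta=\pi/(S+1)$, for which $V$ is symmetric and orthogonal so that $V^{-1}=V$; all parameter values are arbitrary illustrative choices:

```python
import numpy as np
from scipy.linalg import expm

S, Omega, gamma, omega = 30, 1.0, 0.1, 0.0
theta = np.pi / (S + 1)
j = np.arange(1, S + 1)

# Evolution matrix M of Eq. (eqM): OBC, nearest-neighbour hopping
M = (1j * omega + gamma / 2) * np.eye(S, dtype=complex)
M += 1j * Omega * (np.eye(S, k=1) + np.eye(S, k=-1))

# Sine-transform eigenvectors and eigenvalues lambda_k = i*E_k + gamma/2
V = np.sqrt(2.0 / (S + 1)) * np.sin(theta * np.outer(j, j))
lam = 1j * (omega + 2 * Omega * np.cos(theta * j)) + gamma / 2

beta0 = np.random.default_rng(0).normal(size=S) + 0j
t = 2.0
beta_spec = V @ (np.exp(-lam * t) * (V @ beta0))  # V e^{-Lambda t} V^{-1} beta(0)
beta_expm = expm(-M * t) @ beta0                  # brute-force reference
```

The agreement holds because $\sin(\theta(j\pm1)k)$ sums to $2\sin(\theta jk)\cos(\theta k)$ and the sine modes vanish at the fictitious sites $j=0$ and $j=S+1$.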
For OBC the eigenvalues are unchanged but one redefines $\theta=\pi/(S+1)$ and obtains real eigenvectors $V_{jk}=\sqrt{2/(S+1)}\sin{(\theta j k)}$ with the same properties as for PBC and all indexes run from $1$ to $S$.\\ \indent We can now generally write the solution for all dipole amplitudes as $\vec{\beta}(t) = V e^{-\Lambda t} V^{-1} \vec{\beta}(0)$. More explicitly, for each component: \begin{equation} \beta_j(t) = \beta_0 \sum_{k,j'} e^{-i\mathcal{E}_k t-i q_0 j'}e^{-\gamma t/2}V_{jk} V^*_{kj'} f_{j'}. \label{explicit-time-evolution} \end{equation} The sum over the initial Gaussian distribution of excitation can be analytically estimated in the particular case that the wavepacket is not too narrow. In the Fourier domain, this means that we ask for the $k$ distribution around the central value $k_0=q_0/\theta$ to be small such that a Taylor expansion of the energy dispersion relation is possible (see Fig.~\ref{fig3}a): \begin{equation} \mathcal{E}_k \simeq \mathcal{E}_{k_0}-2 \Omega \theta \sin{(k_0 \theta)} (k-k_0)- \Omega \theta^2 \cos{(k_0 \theta)}(k-k_0)^2. \label{eq-Omega-k} \end{equation} In the general case, under the approximation that a second order Taylor expansion suffices, the wavepacket maintains a Gaussian character and we can analytically describe the distribution of excitation in time as \begin{equation} |\beta_j(t)|^2= |\beta_0|^2 \frac{1}{\sqrt{2 \pi} \bar{w}(t)} e^{-\frac{\left[j-\bar{j}(t)\right]^2}{2 \bar{w}(t)^2}}e^{-\gamma t}. \label{p-nodecay} \end{equation} Both the wavepacket central position and its diffusion acquire a time dependence analytically expressed as \begin{subequations} \begin{align} \bar{w}^2(t) &=w^2\left[1+\frac{\Omega^2 t^2 }{w^4}\cos^2q_0\right] \label{eq:w(t)}, \\ \bar{j}(t)&=j_0+2 \Omega t \sin q_0. 
\end{align} \end{subequations} For $0<q_0<\pi$ (for the particular choice of $\Omega>0$) the packet moves to the right, reaching the fastest speed $v_g=2\Omega \sin q_0$ at $q_0=\pi/2$, while for $\pi<q_0<2 \pi$ the packet moves to the left. Stationary diffusion is reached for $q_0=0$ or $q_0=\pi$, where $\bar{j}(t)=j_0$ and the variance increases quadratically in time at large times where $\Omega t\gg w^2$. For minimal diffusion and optimal speed one sets $q_0=\pi/2$, obtaining $\bar{w}(t)=w$ and $\bar{j}(t) = j_0+2 \Omega t$, showing the wavepacket moving with the group velocity $v_g=2 \Omega$ and unchanged in shape. Notice that in this particular case, for OBC $k_0\approx S/2$ and the energy dispersion can be approximated by a line, as illustrated in Fig.~\ref{fig3}a.\\ \indent We recall that the value of $q_0$ could be adjusted by simply varying the angle $\phi$ at the initialization, such that for optimal $\phi=\pi/2$ we have $q_0=2 \pi a /\lambda$. Since considerable nearest-neighbour near-field coupling requires small interparticle distances, this procedure limits the achievable values of $q_0$ to values smaller than $\pi/2$. Reaching larger values therefore requires an additional implementation protocol, for example the application of a magnetic field gradient as in Ref.~\cite{plankensteiner2015selective}, or a more involved internal level scheme where particles can be trapped with fields of small wavelength while the initialization of the wavepacket is done via a larger wavelength field.\\ \subsection{Wavepacket evolution with subradiance} \label{subradiance free space} \begin{figure*}[t] \includegraphics[width=0.98\textwidth]{fig3.pdf} \caption{\textbf{(a)} Energy dispersion with OBC in black ($\mathcal{E}_k$ for collective states indexed by $k$ from $1$ to $S$). The red line shows the Taylor expansion approximation assuming $q_0=\pi/2$. 
The green and blue curves are the $k$-space components of two initial wavepackets with $w=1$ and $w=5$, respectively. Parameters are $S=100$ and $\Omega=0.07$. \textbf{(b)} Normalized collective decay rates $\Gamma_k/\gamma$. The inset shows the scaling of the percentage of superradiant states ($\Gamma_k>\gamma$) with increasing interparticle separation. \textbf{(c)} Time evolution of an initial Gaussian wavepacket with independent and collective decay, where the quasimomentum initialization allows the direct tuning into superradiant ($q_0=0$) or subradiant $(q_0=\pi)$ behaviour. The blue curve shows robustness against decay when the excitation is initially encoded in a subradiant superposition. \textbf{(d)} Time evolution of a wavepacket initialized in the left part of the chain with $q_0=\pi/2$, comparison between individual decay (top) and collective decay (bottom), considering $S=110, w=5, a/\lambda=0.08$.} \label{fig3} \end{figure*} The presence of individual emitter decay has the trivial effect of exponentially reducing the excitation number during propagation. A straightforward way of tackling this detrimental aspect brought on by the radiative emission is to consider structures exhibiting robustness to decoherence, such as subradiant arrays. For small inter-particle separations $a<\lambda$, the diagonalization of the mutual decay rates matrix $\Gamma$ gives rise to $S$ channels of decay, some of superradiant character (decay rate larger than $\gamma$) but most of them exhibiting subradiance (decay rate smaller than $\gamma$). The inclusion of the collective decay effect is done in Eq.~\eqref{eqM} by replacing $\gamma$ with $\gamma_{jj'}$. The diagonalization of the coherent part leads to $V^{-1}MV=\Lambda+V^{-1}(\Gamma/2)V$. 
The latter has diagonal terms $\Gamma_k=\sum_{jj'} V^*_{jk} \gamma_{jj'}V_{j'k}$ describing the decay of the collective state to the ground state of the system, while all non-diagonal terms describe migration of excitation within the single excitation manifold. Assuming that the diagonal parts are dominant, one can estimate that most of the collective states are subradiant, as illustrated in Fig.~\ref{fig3}b. The inset shows a roughly linear dependence of the percentage of superradiant states on the interparticle separation. For small separations, where subradiant effects are strong, the number of superradiant states reduces to less than $\sim 20\%$ of the total number of states.\\ \indent Let us analyze the influence of subradiant transport in the collective basis, where the collective amplitudes are defined from the transformation $\vec{\tilde{\beta}}=V^{-1} \vec{\beta}$, which on components reads $\tilde{\beta}_k=\sum_j V^*_{kj} \beta_j$. Starting for example with a single localized excitation and with OBC, the initial occupancy of each collective state is simply $1/S$. For a mesoscopic ensemble we can then estimate the survival probability of the excitation (for times $t\gg\gamma^{-1}$, after all superradiant states have decayed) simply from counting the number of subradiant states in Fig.~\ref{fig3}b. For an initial Gaussian wavepacket, the occupancy of the $k$-th collective state is found to be also a Gaussian \begin{equation} |\tilde{\beta}_k|^2=|\beta_0|^2 \frac{1}{\sqrt{2 \pi} \tilde{w}_k} e^{\frac{-(k-k_0)^2}{2 \tilde{w}_k^2}} \end{equation} centered at $k_0 =q_0/\theta$ and with a width $\tilde{w}_k =1/(2 \theta w)=S/(4 \pi w).$\\ \indent For an initial stationary wavepacket undergoing diffusion, Fig.~\ref{fig3}c shows the impact of subradiant collective states on the preservation of the excitation. 
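The counting of subradiant channels can be sketched numerically. The true $\gamma_{jj'}$ of Appendix~\ref{A} depends on the dipole orientation, which we do not reproduce here; the scalar-field form $\gamma_{jj'}=\gamma\,\sin(k_0 r_{jj'})/(k_0 r_{jj'})$ is used purely as an assumption for illustration, together with the OBC sine modes:

```python
import numpy as np

S, gamma, a_over_lambda = 100, 1.0, 0.08
k0a = 2 * np.pi * a_over_lambda       # k0 * a, dimensionless
j = np.arange(1, S + 1)
r = np.abs(j[:, None] - j[None, :])   # interparticle distance in units of a

# Assumed scalar-model mutual decay rates: gamma*sin(k0 r)/(k0 r), = gamma on the diagonal
G = gamma * np.sinc(k0a * r / np.pi)  # np.sinc(x) = sin(pi x)/(pi x)

# Collective rates Gamma_k = sum_{jj'} V*_{jk} gamma_{jj'} V_{j'k}
theta = np.pi / (S + 1)
V = np.sqrt(2.0 / (S + 1)) * np.sin(theta * np.outer(j, j))
Gamma_k = np.diag(V @ G @ V)
frac_subradiant = np.mean(Gamma_k < gamma)
```

In this toy model the superradiant modes are roughly those with quasimomentum below $k_0 a$, a fraction of about $2a/\lambda$ of all modes, so for $a/\lambda=0.08$ the large majority of collective states is subradiant, consistent with Fig.~\ref{fig3}b.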
At a time $t=\gamma^{-1}$, individual decay shows the expected decrease of the wavepacket amplitude, while a strategy of constant illumination (corresponding to $q_0=0$) leads to a very quick superradiant decay of the excitation. Illumination with the phases of adjacent emitters alternating by $\pi$ (corresponding to $q_0=\pi$) leads instead to the immediate mapping of the collective state onto a subradiant, robust one. \subsection{Transport of correlations} Let us now move to the alternative scenario where the dynamics takes place in the single excitation Hilbert space of dimension $S+1$, with the basis vectors made up of the collective ground state and the single excitation states $\ket{j}$. We assume that an initial entangled state is prepared as a superposition between two Gaussians centered at $j_0$ and $j_0+d_0$, where $d_0$ quantifies the distance between the two Gaussians. We recall the previous definition $f_{j}=1/\sqrt{\sqrt{2 \pi}w}e^{-(j-j_0)^2/(4w^2)}$ and define the initial state as \begin{equation} \ket{\psi(0)} = \frac{1}{\sqrt{2}}\sum_{j=1}^{S} e^{i q_0 j} (f_{j} +f_{j-d_0}) \ket{j}. \end{equation} We aim at analyzing the behavior of quantum correlations with respect to independent decay and possibly utilizing the robustness brought on by collective subradiant states. To this end we employ the concurrence as a measure of bipartite entanglement, defined as $\mathcal{C}_{jj'}=\text{Max}\{0, \sqrt{\lambda_1}-\sqrt{\lambda_2}-\sqrt{\lambda_3}-\sqrt{\lambda_4}\},$ where the eigenvalues are those of the matrix $\Lambda^{(jj')}= \bar{\rho}^{(jj')}(\sigma_y \otimes \sigma_y)[\bar{\rho}^{(jj')}]^* (\sigma_y \otimes \sigma_y)$, arranged in decreasing order. The density matrix used to compute the concurrence is the reduced one obtained after tracing over all other particle and field states except for particles $j,j'$. 
As we are working in the single excitation manifold only, the density matrix elements for double excitation are zero and the reduced matrix reads \begin{equation} \bar{\rho}^{(jj')}= \begin{bmatrix} \rho_{GG}+\sum_{n\neq j,j'} P_n & \rho_{Gj}& \rho_{Gj'} & 0 \\ (\rho_{Gj})^* &P_j &\rho_{jj'} &0 \\ (\rho_{Gj'})^* &(\rho_{jj'})^* &P_{j'} &0 \\ 0 &0 &0 &0 \\ \end{bmatrix} \end{equation} where $P_j=\rho_{jj}$. Notice that tracing over all particles except $j$ and $j'$ has the only consequence of increasing the weight of the zero excitation state in the reduced density matrix, while leaving all coherences (off-diagonal elements) unaffected. From here one can explicitly write the matrix $\Lambda^{(jj')}$ as \begin{widetext} \begin{equation} \Lambda^{(jj')}= \begin{bmatrix} 0 & \rho_{Gj}P_{j'}+\rho_{Gj'} \bar{\rho}^*_{jj'} & \rho_{Gj} P_{j}+\rho_{Gj}\bar{\rho}_{jj'} & -2\rho_{Gj}\rho_{Gj'} \\ 0 & P_{j}P_{j'}+|\rho_{jj'}|^2 & 2\rho_{jj} \rho_{jj'} & -\rho_{Gj'}P_{j}-\rho_{Gj}\rho_{jj'} \\ 0 & 2P_{j'} \rho^*_{jj'} & \rho_{jj} \rho_{j'j'}+|\rho_{jj'}|^2 & -\rho_{Gj}P_{j'}-\rho_{Gj'}\rho^*_{jj'} \\ 0 & 0 & 0 & 0 \\ \end{bmatrix}. \end{equation} \end{widetext} Remarkably, the eigenvalues, in decreasing order, assume a very simple form independent of the coherence between the ground state and the single excitation states, \begin{equation} \lambda_{1,2}=(\sqrt{P_j P_{j'}}\pm|\rho_{jj'}|)^2 \end{equation} and $\lambda_{3,4}=0$. The concurrence for sites $j,j'$ can then be computed as specified above: \begin{equation} \mathcal{C}_{jj'}=|\sqrt{P_j P_{j'}}+|\rho_{jj'}||-|\sqrt{P_j P_{j'}}-|\rho_{jj'}||. \end{equation} Notice that, as decoherence mechanisms typically affect the two particle coherence rather than the populations, the concurrence for two sites is simply $\mathcal{C}_{jj'}=2|\rho_{jj'}|$ and therefore easily estimated even at the analytical level. 
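The simple form of the concurrence can be checked against the full Wootters recipe on a randomly chosen mixed single-excitation two-site state. This is a sketch with the two-qubit basis ordered as $\{\ket{gg},\ket{ge},\ket{eg},\ket{ee}\}$; all numerical values are illustrative only:

```python
import numpy as np

def wootters_concurrence(rho):
    """Standard Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    Y = np.kron(sy, sy)
    R = rho @ Y @ rho.conj() @ Y
    ev = np.linalg.eigvals(R).real                    # eigenvalues of rho * rho~
    ev = np.sort(np.sqrt(np.maximum(ev, 0.0)))[::-1]  # sqrt, decreasing order
    return max(0.0, ev[0] - ev[1] - ev[2] - ev[3])

# Mixture of two pure single-excitation states: the |ee> row/column vanishes,
# while the ground-excited coherences rho_Gj are nonzero.
psi1 = np.array([np.sqrt(0.5), np.sqrt(0.3), np.sqrt(0.2), 0.0], dtype=complex)
psi2 = np.array([np.sqrt(0.6), np.sqrt(0.2) * np.exp(0.7j), -np.sqrt(0.2), 0.0])
rho = 0.5 * np.outer(psi1, psi1.conj()) + 0.5 * np.outer(psi2, psi2.conj())

Pj, Pjp, coh = rho[1, 1].real, rho[2, 2].real, abs(rho[1, 2])
C_formula = abs(np.sqrt(Pj * Pjp) + coh) - abs(np.sqrt(Pj * Pjp) - coh)
```

Since $\sqrt{P_j P_{j'}}\ge|\rho_{jj'}|$ for any positive matrix, the formula indeed collapses to $\mathcal{C}_{jj'}=2|\rho_{jj'}|$, independently of the coherences $\rho_{Gj}$, $\rho_{Gj'}$.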
For example, between a mixed state and a Bell maximally entangled state, the concurrence varies between $0$ and $1$, as indicated by the off-diagonal elements of the density matrix. For the two non-overlapping Gaussian wavepackets, we can define an average concurrence $\mathcal{C}_\text{av}(t)=\sum_{j\in\mathcal{D}_1,j'\in\mathcal{D}_2}\mathcal{C}_{jj'}/(5w(t))$ where the sum is done over the non-overlapping domains $\mathcal{D}_{1,2}$ referring to the two wavepackets. The normalization by the average number of sites participating in the entangled state gives an average concurrence close to unity. At the analytical level, it is straightforward to show that for non-diffusive initial wavepackets made of independently decaying emitters the concurrence simply decays in time as $e^{-\gamma t}$. For collective decay, the behavior closely reproduces that of the propagating single wavepacket: as subradiance protects against both decay of population and decay of coherence, the average concurrence stays close to unity as long as the wavepacket does not decay. \section{Cavity transport} \label{Cavity transport} \begin{figure*}[t] \centering \includegraphics[width=0.98\textwidth]{fig4.pdf} \caption{\textbf{(a)} Cavity transmission comparison, in the presence of collective decay, between the symmetric and asymmetric coupling scenarios with parameters $M=30, N=50, \omega_c=\omega, g= 90 \Omega, \Omega'=10\Omega, \Delta_{S,A}=\pm 635.4$. \textbf{(b)} The energy dispersion curve shows little influence from the presence of disorder (at the order of $\Omega$) for all $S-1$ non-polaritonic states shown here. \textbf{(c)} Transmission in the presence of disorder and collective decay, considering that the wavepacket is matched to the antisymmetric polariton energy, in which case the cavity transport is not influenced by disorder. In contrast, free space transport is slower (dashed blue line, in the absence of disorder) and strongly inhibited by disorder (full blue line). 
Disorder averaging has been performed over $100$ realizations. \textbf{(d)} Time evolution of the wavepacket through the cavity, considering individual decay (top) versus collective decay (bottom). The grey lines denote the cavity boundaries.} \label{fig4} \end{figure*} A way to circumvent detrimental effects of disorder in the transport of energy has been proposed in Refs.~\cite{schachenmayer2015cavity,feist2015extraordinary}. The mechanism is based on the collective coupling to a delocalized cavity mode, which leads to the occurrence of additional polariton-mediated channels for enhanced energy transport. We propose here an additional improvement by showing that, when polaritons are formed by the hybridization of the photon state with asymmetric superpositions of the quantum emitters, protection of the excitation can be achieved by spreading the wavepacket into robust collective subradiant states. \\ \indent In the case of identical cavity-emitter couplings $g_{j(S)}=g$, a bright mode is formed as a symmetric superposition of all quantum emitters $B=\sum_{j} \sigma_{j}/\sqrt{N}$. The corresponding bright state is obtained by applying $B^\dagger$ to the ground state $\ket{G}$, yielding a W-state. This mode is hybridized with the cavity field, leading to polaritonic states that can be obtained from the action of the operators $p^\dagger_{u,d (S)} =1/\sqrt{2}( a^\dagger \pm \sum_{j} \sigma^\dagger_{j}/\sqrt{N})$ onto the ground state. The two light-matter hybrid quantum states are the upper (u) and lower (d) polaritonic states, energetically positioned at $\omega\pm g\sqrt{N}$. Notice that the same polaritonic energies can be obtained even if the couplings follow a different symmetry, scaling for example as $g_{j(A)}=(-1)^jg$, albeit with very different collective states obtained as $\sum_{j} (-1)^j\sigma^\dagger_{j}/\sqrt{N}\ket{G}$. 
As the analysis in Refs.~\cite{schachenmayer2015cavity,feist2015extraordinary} neglected collective radiative effects, the symmetry of the collective polaritonic states did not play a role. However, symmetric modes are strongly superradiant at small particle-particle separations and are therefore not optimized for robust transport. A natural choice is to consider instead transport through very asymmetric, typically very subradiant states. \\ \indent Let us first consider the eigenvalue problem of the Tavis-Cummings model plus nearest-neighbour dipole-dipole exchanges. We denote the eigensystem by $\omega_n$ and $\ket{n}$ such that the eigenvalue problem becomes $\mathcal{H}\ket{n}=\omega_n\ket{n}$ for $n$ running from $1$ to $N+1$. In the single-excitation regime an eigenvector then takes the general form \begin{equation} \ket{n}= \sum_{j=1}^N c_j^{(n)}\ket{j}+\beta^{(n)}\ket{1_\text{ph}}, \end{equation} where normalization requires that $\sum_j |c_j^{(n)}|^2+|\beta^{(n)}|^2=1$. The task is to find all $\omega_n$ and the corresponding coefficients of the emitter $c_j^{(n)}$ and photon $\beta^{(n)}$ content in each eigenvector. To this end we use the diagonal representation of the dipole-dipole interaction $\mathcal{H}_\text{dd}=2\Omega \sum_k\cos(k\theta)\ket{\tilde{k}}\bra{\tilde{k}}$ and the transformation $\ket{\tilde{k}}=\sum_{j}V_{jk}\ket{j}$ to find the representation $\mathcal{H}_\text{dd}=\sum_k\mathcal{E}_k\sum_{j}\sum_{j'}V_{jk}V^*_{j'k}\ket{j'}\bra{j}$, with $\mathcal{E}_k=2\Omega\cos(k\theta)$. One can then proceed by finding a set of coupled equations for $c_j^{(n)}$ and $\beta^{(n)}$ from which the eigenvalues can be extracted.\\ \indent For the symmetric case the sum $\sum_{j} V_{jk}=\sqrt{N}\delta_{k,0}$ selects only the symmetric collective mode with $k=0$ and one ends up solving for \begin{align} \left[(\omega_n-\omega)^2 -g^2 N\right] \beta^{(n)}-\mathcal{E}_0(\omega_n-\omega)\beta^{(n)} =0.
\end{align} There are $N-1$ degenerate solutions with zero photonic component $\beta^{(n)}=0$ and two polariton states with energies obtained as solutions of a quadratic equation \begin{align} \omega_{\pm}^\text{sym}=\omega+\Omega\pm \sqrt{g^2 N+\Omega^2}. \end{align} For small tunneling rates $\Omega\ll g\sqrt{N}$ we can approximate the polariton energies by $\omega\pm g\sqrt{N}+\Omega$. The polaritonic states show a photon contribution \begin{equation} \beta^\pm=\frac{g \sqrt{N}}{\sqrt{(\omega_\pm-\omega)^2+g^2 N}}, \end{equation} while the matter contribution is \begin{equation} c_j^\pm= \frac{\omega_\pm-\omega}{\sqrt{N} \sqrt{(\omega_\pm-\omega)^2+g^2 N}}. \end{equation} Notice that, in the absence of dipole-dipole couplings, the expressions above reduce to the expected $\beta^\pm=1/\sqrt{2}$ and $c_j^\pm=\pm1/\sqrt{2N}$.\\ \indent In the completely asymmetric case where $g_j=g(-1)^j$ we select the asymmetric mode $\sum_{j} (-1)^jV_{jk}\approx \sqrt{N}\delta_{k,N/2}$ (for PBC) and the solution is similar to the one above, with a slight difference in the energy of the polaritons \begin{align} \omega_{\pm}^\text{asym}=\omega-\Omega\pm \sqrt{g^2 N+\Omega^2}. \end{align} The photonic part of the asymmetric eigenvectors is identical to the symmetric case, while the matter contribution shows the phase dependence dictated by the coupling variation among the emitters \begin{equation} c_j^\pm= (-1)^j\frac{\omega_\pm-\omega}{\sqrt{N} \sqrt{(\omega_\pm-\omega)^2+g^2 N}}. \end{equation} Having identified the energies of the asymmetrically driven polaritons, we can compare our results with those of Ref.~\cite{schachenmayer2015cavity}. In Fig.~\ref{fig4}a, transmission through a cavity with $g_j=g(-1)^j$ is shown to be more efficient than the equal-coupling mechanism. In the presence of disorder, Fig.~\ref{fig4}b illustrates that the dispersion curve changes only slightly.
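The clean-lattice polariton energies derived above are straightforward to verify numerically. A minimal sketch (assuming a cavity resonant with the emitters, $\omega_c=\omega$, and illustrative values of $N$, $\Omega$, $g$ rather than the parameters of Fig.~\ref{fig4}): build the single-excitation Hamiltonian in the basis $\{\ket{j},\ket{1_\text{ph}}\}$ with periodic boundary conditions and symmetric couplings $g_j=g$, and compare its extreme eigenvalues with $\omega_\pm^\text{sym}$.

```python
import numpy as np

# Illustrative parameters (hypothetical, chosen so that g*sqrt(N) >> Omega)
N, omega, Omega, g = 20, 0.0, 1.0, 5.0

# Single-excitation basis: |j> for j = 0..N-1 (emitters), plus |1_ph> (photon)
H = np.zeros((N + 1, N + 1))
H[:N, :N] = omega * np.eye(N)
for j in range(N):                       # nearest-neighbour dipole-dipole, PBC
    H[j, (j + 1) % N] = H[(j + 1) % N, j] = Omega
H[N, N] = omega                          # cavity resonant with the emitters
H[:N, N] = H[N, :N] = g                  # symmetric couplings g_j = g

evals = np.linalg.eigvalsh(H)            # sorted ascending
w_minus, w_plus = evals[0], evals[-1]    # the two polaritons frame the spectrum
```

The extreme eigenvalues should match $\omega+\Omega\pm\sqrt{g^2N+\Omega^2}$ exactly, since the symmetric mode decouples from the $k\neq 0$ dark modes and the photon couples to it with strength $g\sqrt{N}$.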
Moreover, as Ref.~\cite{sommer2020molecular} describes in detail, the polaritonic energies are also very robust against disorder, even at the level of $g\sqrt{N}$. Therefore, as also concluded in~\cite{schachenmayer2015cavity} (in the case of positional disorder), diagonal disorder plays almost no role in the transmission through the cavity, even though it plays a strong role in localizing excitations in free space, as shown in Fig.~\ref{fig4}c. Finally, Fig.~\ref{fig4}d shows robust transport in the collective radiative regime (bottom propagation line) versus independently decaying emitters.\\ \section{Conclusions} We have treated aspects of the propagation of excitations and quantum correlations on a one-dimensional chain of nearest-neighbour coupled quantum emitters in the presence of a collective radiative bath. The robustness of collective subradiant states can be exploited towards more efficient transport of excitations by proper phase imprinting in free space. Moreover, not only excitations but also quantum correlations can show robustness against radiative decay when transport takes place via subradiant collective states. In cavity settings, where a common delocalized bosonic light mode couples to all emitters, an asymmetric coupling pattern shows protection against radiative decay as well as against diagonal, frequency disorder in the chain of emitters. \\ \section{Acknowledgments} We acknowledge financial support from the Max Planck Society and from the German Federal Ministry of Education and Research, co-funded by the European Commission (project RouTe), project number 13N14839 within the research program ``Photonik Forschung Deutschland'' (C.~G.). This work was also funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- Project-ID 429529648 -- TRR 306 QuCoLiMa (``Quantum Cooperativity of Light and Matter''). We acknowledge fruitful discussions with J.~Schachenmayer and C.~Sommer.
\subsection{One-dimensional Burgers' equation}\label{subsec:Burgers} Burgers' equation is an important partial differential equation from fluid mechanics \cite{burgers1948mathematical}. The velocity $u$ of the fluid evolves according to \begin{equation} \frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} = \mu \frac{\partial^2 u}{\partial x^2}, \quad x \in [0,L], \quad t \in (0,t_\textnormal{f}],\label{eqn:Burgers-pde} \end{equation} with $t_\textnormal{f} = 1$ and $L=1$. Here $\mu$ is the viscosity coefficient. The model has homogeneous Dirichlet boundary conditions $u(0,t) = u(L,t) = 0$, $t \in (0,t_\textnormal{f}]$. For the initial conditions, we used a seventh-order polynomial constructed with the least-squares method from the data set $\{(0,0);~(0.2,1);~(0.4,0.5);~(0.6,1); ~(0.8,0.2);\newline~~(0.9,0.1);~(0.95,0.05);~(1,0) \}$. We employed the polyfit function in Matlab and the polynomial is shown in Figure \ref{Fig::1D-Burgers-IC}. The discretization uses a spatial mesh of $N_s$ equidistant points on $[0,L]$, with $\Delta x=L/(N_s-1)$. A uniform temporal mesh with $N_t$ points covers the interval $[0,t_\textnormal{f}]$, with $\Delta t=t_\textnormal{f}/(N_t-1)$.
The discrete velocity vector is ${\bf u}(t_j)\approx [u(x_i,t_j)]_{i=1,2, \ldots,N_{\rm{state}}} \in \mathbb{R}^{N_{\rm{state}}}$, $j=1,2, \ldots, N_t$, where $N_{\rm{state}}=N_s-2$ (the known boundary values are removed). The semi-discrete version of the model \eqref{eqn:Burgers-pde} is \begin{equation}\label{eqn:Burgers-sd} {\bf u}' = -{\bf u}\odot (A_x{\bf u}) + \mu A_{xx}{\bf u}, \end{equation} where ${\bf u}'$ is the time derivative of ${\bf u}$, and $A_x,A_{xx}\in \mathbb{R}^{N_{\rm{state}}\times N_{\rm{state}}}$ are the central-difference first-order and second-order space derivative operators, respectively, which also take the boundary conditions into account. The model is implemented in Matlab and the backward Euler method is employed for time discretization. The nonlinear algebraic systems are solved using the Newton-Raphson method and the allowed number of Newton iterations per time step is set to $50$. The solution is considered converged when the Euclidean norm of the residual is less than $10^{-10}$. The viscosity parameter space $\mathcal{P}$ is set to the interval $[0.01,1]$. Smaller values of $\mu$ correspond to sharper gradients in the solution and lead to dynamics that are more difficult to approximate accurately using reduced-order models. \begin{figure}[h] \centering \includegraphics[scale=0.37]{Initial_Cond_1D_Burgers.pdf} \caption{Seventh-order polynomial used as the initial condition for the 1D Burgers model. \label{Fig::1D-Burgers-IC}} \end{figure} The reduced-order models are constructed using the POD method, whereas the quadratic nonlinearities are computed via tensorial POD \cite{stefanescu2014comparison} for efficiency. A floating point operations analysis of tensorial POD, POD and POD/DEIM for $p^{\textrm{th}}$ order polynomial nonlinearities is available in \cite{stefanescu2014comparison}. The computational efficiency of the tensorial POD 1D Burgers model can be noticed in Figure \ref{Fig::1D-Burgers-CPU_time}.
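The semi-discretization and implicit time stepping described above can be rendered in a few lines; a minimal Python sketch of \eqref{eqn:Burgers-sd} with backward Euler and Newton-Raphson (dense operators for brevity; the sine initial condition is a placeholder for the seventh-order polynomial):

```python
import numpy as np

Ns, L, mu, Nt = 201, 1.0, 0.7, 301
dx, dt = L / (Ns - 1), 1.0 / (Nt - 1)
n = Ns - 2                                   # interior unknowns (Dirichlet BCs)

# Central-difference first- and second-derivative operators on the interior
e = np.ones(n - 1)
Ax  = (np.diag(e, 1) - np.diag(e, -1)) / (2 * dx)
Axx = (np.diag(e, 1) - 2 * np.eye(n) + np.diag(e, -1)) / dx**2

def rhs(u):                                  # u' = -u .* (Ax u) + mu * Axx u
    return -u * (Ax @ u) + mu * (Axx @ u)

def jac(u):                                  # Jacobian of rhs at u
    return -np.diag(Ax @ u) - np.diag(u) @ Ax + mu * Axx

def backward_euler_step(u_old, tol=1e-10, maxit=50):
    u = u_old.copy()
    for _ in range(maxit):                   # Newton-Raphson on F(u) = 0
        F = u - u_old - dt * rhs(u)
        if np.linalg.norm(F) < tol:
            break
        u -= np.linalg.solve(np.eye(n) - dt * jac(u), F)
    return u

x = np.linspace(0, L, Ns)[1:-1]
u0 = np.sin(np.pi * x)                       # placeholder initial condition
u1 = backward_euler_step(u0)
```

With homogeneous Dirichlet boundaries and $\mu=0.7$ the flow is strongly dissipative, so the solution norm decreases from step to step.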
Both on-line and off-line computational costs are shown. Here we selected $\mu = \mu_p = 0.7$, $N_t = 301,$ POD dimension $K_{POD} = 9$, and we let the number of space points $N_s$ vary. For $N_s = 201$ and $701$, the tensorial POD model is $5.17 \times$ and $61.12 \times$ faster than the high-fidelity version, respectively. The rest of our numerical experiments use $N_s = 201$ and $N_t = 301$. \begin{figure}[h] \centering \includegraphics[scale=0.37]{CPU_time_ROM.pdf} \caption{Computational efficiency of the tensorial POD 1D Burgers model. CPU time is given in seconds. \label{Fig::1D-Burgers-CPU_time}} \end{figure} \section{Conclusions} \label{sect:conc} In this study, we introduced new multivariate input-output models (MP-LROM) to predict the errors and dimensions of local parametric reduced-order models. Approximations of these mappings were built using Gaussian Processes and Artificial Neural Networks. Initially, we compared our MP-LROM error models against those constructed with the multi-fidelity correction (MFC) technique and the reduced-order model error surrogates (ROMES) method. Since global bases are used by the MFC and ROMES methods, we implemented the corresponding local error models using only small subsets of the data utilized to generate our MP-LROM models. In contrast, the MP-LROM models are global and rely on a global database. Moreover, our MP-LROM models differ from the ROMES \cite{drohmann2015romes} and MFC models \cite{alexandrov2001approximation} by including additional features, such as the reduced subspace dimension, and are specifically designed for accurate prediction of the errors of local parametric reduced-order models. As such, the MP-LROM models require significantly more and different data than MFC models. The numerical experiments revealed that our MP-LROM models are more accurate than the models constructed with the MFC and ROMES methods for estimating the errors of local parametric reduced-order 1D-Burgers models with a single parameter.
In the case of large parametric domains, the MP-LROM error models could be affected by the curse of dimensionality due to the large number of input features. In the future we plan to use only subsets of the global data set in the vicinity of the parameters of interest, and to combine our technique with the active subspace method \cite{constantine2014active} to prevent the potential curse of dimensionality that the MP-LROM models might suffer from. Next we addressed the problem of selecting the dimension of a local reduced-order model when its solution must satisfy a desired level of accuracy. The approximated MP-LROM models based on Artificial Neural Networks estimated the ROM basis dimension better than the standard approach of truncating the spectrum of the snapshots matrix. In the future we seek to decrease the computational complexity of the MP-LROM error models. Currently the training data required by the machine learning regression MP-LROM models rely on many high-fidelity simulations. By employing error bounds, residual norms \cite{drohmann2015romes} and a-posteriori error estimation results \cite{Volwein_aposteriori_2016,nguyen2009reduced}, this dependency could be greatly reduced. On-going work focuses on applications of the MP-LROM error model. We are currently developing several algorithms and techniques that employ the MP-LROM error model as a key component to generate decomposition maps of the parametric space associated with accurate local reduced-order models. In addition, we plan to construct machine learning MP-LROM models to estimate the errors in quantities of interest computed with reduced-order models. The predictions of such error models can then be used to speed up the current trust-region reduced-order framework \cite{Arian_2000,bergmann2008optimal} by eliminating the need for high-fidelity simulations in the quality evaluation of the updated controls.
\section{Introduction} \label{sect:Intro} Many physical phenomena are described mathematically by partial differential equations (PDEs) and, after applying suitable discretization schemes, are simulated on a computer. PDE-based models frequently require calibration and parameter tuning in order to provide realistic simulation results. Recent developments in the field of uncertainty quantification \cite{le2010spectral,smith2013uncertainty,grigoriu2012stochastic,cacuci2005sensitivity} provide the necessary tools for validation of such models even in the context of variability and lack of knowledge of the input parameters. Techniques to propagate uncertainties through models include direct evaluation for linearly parametric models, sampling methods such as Monte Carlo \cite{shapiro2003monte}, Latin hypercube \cite{helton2003latin} and quasi-Monte Carlo techniques \cite{lemieux2009monte}, perturbation methods \cite{cacuci2003sensitivity,Cacuci2015687,cacuci2015second} and spectral representation \cite{le2010spectral,eldred2009comparison,alekseev2011estimation}. While stochastic Galerkin methods \cite{le2010spectral} are intrusive in nature, Monte Carlo sampling methods \cite{shapiro2003monte} and stochastic collocation \cite{eldred2009comparison} do not require the modification of existing codes and hence are non-intrusive. While uncertainty propagation techniques can measure the impact of uncertain parameters on some quantities of interest, they often become infeasible due to the large number of model realizations required. Similar difficulties are encountered when solving Bayesian inference problems, since sampling from the posterior distribution is required. The need for computational efficiency motivated the development of surrogate models such as response surfaces, low-resolution models, and reduced-order models. Data fitting or response surface models \cite{smith2013uncertainty} are data-driven models.
The underlying physics remain unknown and only the input-output behavior of the model is considered. Data fitting can use techniques such as regression, interpolation, radial basis functions, Gaussian Processes, Artificial Neural Networks and other supervised machine-learning methods. The latter techniques can automatically detect patterns in data, and one can use them to predict future data under uncertainty in a probabilistic framework \cite{murphy2012machine}. While easy to implement due to their non-intrusive nature, their prediction abilities may suffer since the governing physics are not specifically accounted for. Low-fidelity models attempt to reduce the computational burden of the high-fidelity models by neglecting some of the physical aspects (e.g., replacing Navier-Stokes and Large Eddy Simulations with inviscid Euler's equations and Reynolds-Averaged Navier-Stokes \cite{gano2005hybrid,sagaut2006large,wilcox1998turbulence}, or decreasing the spatial resolution \cite{Courtier_Thepaut1994,tremolet2007incremental}). The additional approximations, however, may considerably degrade the physical solution with only a modest decrease of the computational load. Reduced basis \cite{porsching1985estimation,BMN2004,grepl2005posteriori,rozza2008reduced,Dihlmann_2013} and Proper Orthogonal Decomposition \cite{karhunen1946zss,loeve1955pt,hotelling1939acs,lorenz1956eof,lumley1967structure} are two of the popular reduced-order modeling (ROM) strategies available in the literature. Data analysis is conducted to extract basis functions from experimental data or detailed simulations of high-dimensional systems (method of snapshots \cite{Sir87a, Sir87b, Sir87c}), for subsequent use in Galerkin projections that yield low-dimensional dynamical models. While these types of models are physics-based and therefore require intrusive implementations, they are usually more robust than data fitting and low-fidelity models.
However, since surrogate model robustness depends heavily on the problem, it must be carefully analyzed, especially for large-scale nonlinear dynamical systems. ROM robustness in a parametric setting can be achieved by constructing a global basis \cite{hinze2005proper,prud2002reliable}, but this strategy generates large-dimensional bases that may lead to slow reduced-order models. Local approaches have been designed for parametric or time domains, generating local bases for both the state variables \cite{Rapun_2010,dihlmann2011model} and the non-linear terms \cite{eftang2012parameter,peherstorfer2014localized}. A recent survey of state-of-the-art methods in projection-based parametric model reduction is available in \cite{benner2015survey}. In this study, we propose multivariate data fitting models to predict the local parametric Proper Orthogonal Decomposition reduced-order model errors and basis dimensions. We refer to them as MP-LROM models. Let us consider a local parametric reduced-order model of dimension $K_{POD}$ constructed using a high-fidelity solution associated with the parameter configuration $\mu_p$. Our first MP-LROM model consists of the mapping $\{\mu, \mu_p, K_{POD}\} \mapsto \log\varepsilon_{\mu,\mu_p,K_{POD}}^{HF}$, where $\varepsilon_{\mu,\mu_p,K_{POD}}^{HF}$ is the error of the local reduced-order model solution with respect to the high-fidelity solution for a viscosity parameter configuration $\mu$. Our proposed approach is inspired by the multi-fidelity correction (MFC) \cite{alexandrov2001approximation} and reduced order model error surrogates (ROMES) \cite{drohmann2015romes} methods. MFC \cite{alexandrov2001approximation,eldred2004second,gano2005hybrid,huang2006sequential} has been developed for low-fidelity models in the context of optimization.
The MFC model simulates the input-output relation $\mu \mapsto \varepsilon_{\mu}^{HF}$, where $\varepsilon_{\mu}^{HF}$ is the low-fidelity model error depending on a global reduced basis with a constant reduced-order model dimension. The ROMES method \cite{drohmann2015romes} introduced the concept of error indicators for global reduced-order models and generalized the MFC framework by approximating the mapping $\rho(\mu) \mapsto \log\varepsilon_{\mu}^{HF}$. The error indicators $\rho(\mu)$ include rigorous error bounds and reduced-order residual norms. No variation of the reduced basis dimension was taken into account. By estimating the log of the reduced-order model error instead of the error itself, the input-output map exhibits a lower variance, as shown by our numerical experiments as well as those in \cite{drohmann2015romes}. The second proposed MP-LROM model addresses the issue of a-priori selection of the reduced basis dimension for a prescribed accuracy of the reduced solution. The standard approach is to analyze the spectrum of the snapshots matrix, and use the largest singular value removed from the expansion to estimate the accuracy level \cite{volkwein2007proper}. To also take into account the error due to the projection of the full-order-model equations onto the reduced space, here we propose the mapping $\{\mu_p, \log\varepsilon_{\mu_p,\mu_p,K_{POD}}^{HF}\} \mapsto K_{POD}$ to predict the dimension of a local parametric reduced-order model given a prescribed error threshold. To approximate the mappings $\{\mu, \mu_p, K_{POD}\} \mapsto \log\varepsilon_{\mu,\mu_p,K_{POD}}^{HF}$ and $\{\mu_p, \log\varepsilon_{\mu_p,\mu_p,K_{POD}}^{HF}\} \mapsto K_{POD}$, we propose regression models constructed using Gaussian Processes (GP) \cite{slonski2011bayesian,lilley2004gaussian} and Artificial Neural Networks (ANN).
In the case of the one-dimensional Burgers model, the resulting MP-LROM error models are accurate and their predictions are compared against those obtained by the MFC and ROMES models. The predicted dimensions of local reduced-order models using our proposed MP-LROM models are more accurate than those derived using the standard method based on the spectrum of the snapshots matrix. The remainder of the paper is organized as follows. Section \ref{sect:ROM} reviews the parametric reduced-order modeling framework. The MP-LROM models and the regression machine learning methods used in this study to approximate the MP-LROM mappings are described in detail in Section \ref{sect:MP-LROM}. Section \ref{sect:experm} describes the viscous 1D-Burgers model and compares the performances of the MP-LROM and state-of-the-art models. Conclusions are drawn in Section \ref{sect:conc}. \section*{Acknowledgements} This work was supported in part by the award NSF CCF 1218454 and by the Computational Science Laboratory at Virginia Tech. \label{sect:bib} \bibliographystyle{plain} \subsubsection{Gaussian process kernel method} \label{sect:GP} A Gaussian process is a collection of random variables, any finite number of which have a joint Gaussian distribution \cite{rasmussen2006gaussian}. A Gaussian process is fully described by its mean and covariance functions \begin{equation} \label{GP_Dist} \phi(\mathbf{z}) \sim \textnormal{gp}\, \bigl({\it m}(\mathbf{z}), {\bf K} \bigr), \end{equation} where $ {\it m}(\mathbf{z})=\mathbb{E}\left[ \phi(\mathbf{z}) \right], $ and ${\bf K}$ is the covariance matrix with entries $ {K}_{i,j} = \mathbb{E} \left[\left(\phi(\mathbf{z}^i)-{\it m}(\mathbf{z}^i)\right) \left( \phi(\mathbf{z}^j)- {\it m} (\mathbf{z}^j) \right) \right]$ \cite{rasmussen2006gaussian}.
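The full GP regression pipeline defined in this section can be condensed into a short sketch (a minimal illustration assuming a zero-mean prior, a squared-exponential kernel, hyper-parameters fixed rather than optimized, and hypothetical one-dimensional toy data):

```python
import numpy as np

def sq_exp_kernel(Z1, Z2, sig_f=1.0, ell=0.3):
    # k(z, z') = sig_f^2 * exp(-||z - z'||^2 / (2 * ell^2))
    d2 = ((Z1[:, None, :] - Z2[None, :, :]) ** 2).sum(-1)
    return sig_f**2 * np.exp(-d2 / (2 * ell**2))

def gp_predict(Z, y, Zs, sig_n=1e-2):
    # Posterior mean K*^T K^{-1} y and covariance K** - K*^T K^{-1} K*
    K   = sq_exp_kernel(Z, Z) + sig_n**2 * np.eye(len(Z))
    Ks  = sq_exp_kernel(Z, Zs)
    Kss = sq_exp_kernel(Zs, Zs)
    mean = Ks.T @ np.linalg.solve(K, y)
    cov  = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, cov

# Toy 1-D regression: learn sin(2*pi*z) from 20 samples on [0, 1]
Z  = np.linspace(0, 1, 20)[:, None]
y  = np.sin(2 * np.pi * Z[:, 0])
mean, cov = gp_predict(Z, y, np.array([[0.25], [0.75]]))
```

With dense training data and a smooth target, the posterior mean closely recovers $\sin(2\pi z)$ at the test points, while the posterior covariance quantifies the remaining uncertainty.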
In this work we employ the commonly used squared-exponential-covariance Gaussian kernel with \begin{equation} \label{eq_cov} k:\mathbb{R}^r \times \mathbb{R}^r \rightarrow \mathbb{R},~k(\mathbf{z}^i,\mathbf{z}^j) =\sigma^2_\phi\, \exp \left(-\frac{ \left\lVert \mathbf{z}^i - \mathbf{z}^j \right\rVert^2}{2\, \hslash ^2} \right)+ \sigma^2_n \, \delta_{i,j}, \end{equation} and ${K}_{ij} = k(\mathbf{z}^i,\mathbf{z}^j)$ \cite{rasmussen2006gaussian}, where $\mathbf{z}^i $ and $\mathbf{z}^j$ are pairs of data points from the training or test samples, $\delta $ is the Kronecker delta symbol and $\|\cdot \|$ is an appropriate norm. The model \eqref{eq_cov} has three hyper-parameters. The length-scale $\hslash$ governs the correlation among data points. The signal variance $\sigma^2 _\phi \in \mathbb{R}$ and the noise variance $\sigma^2 _n \in \mathbb{R} $ govern the precision of the signal and noise, respectively. Consider a set of training data points ${\bf Z} = [\mathbf{z}^1~\mathbf{z}^2 ~~\cdots~~ \mathbf{z}^n] \in \mathbb{R}^{r \times n} $ and the corresponding noisy observations ${\bf y} = [y^1~y^2 ~~\cdots~~ y^n] \in \mathbb{R}^{1 \times n},$ \begin{equation} \label{GP_training} y^i=\phi(\mathbf{z}^i)+ \epsilon_i ,\quad \epsilon_i \sim \mathcal{N} \left(0, \sigma^2_n \right), \quad i = 1,\dots,n. \end{equation} Consider also the set of test points ${\bf Z}^* = [\mathbf{z}^{*1}~\mathbf{z}^{*2} ~~\cdots~~ \mathbf{z}^{*m}] \in \mathbb{R}^{r \times m}$ and the predictions ${\bf \hat{y}} = [\hat{y}^1~\hat{y}^2~~\cdots~~ \hat{y}^m] \in \mathbb{R}^{1 \times m}$, \begin{equation} \label{GP_test} \hat{y}^i=\phi\left(\mathbf{z}^{*i}\right), \quad i = 1,\dots,m.
\end{equation} For a Gaussian prior the joint distribution of the training outputs ${\bf y}$ and test outputs ${\bf \hat{y}}$ is \begin{equation} \label{GP_prior} \begin{bmatrix} {\bf y}^T\\ {\bf \hat{y}^T} \end{bmatrix} \sim \mathcal{N} \left( \begin{bmatrix} { {\bf m}}(\mathbf{Z})^T \\ { {\bf m}}(\mathbf{Z}^*)^T \end{bmatrix}\, , \, \begin{bmatrix} {\bf K}& {\bf K}^*\\ {\bf K}^{*T} & {\bf K}^{**}\\ \end{bmatrix} \right), \end{equation} where $${\bf m}({\bf Z}) = [{\it m}({\bf z}^1)~{\it m}({\bf z}^2) ~~\cdots~~ {\it m}({\bf z}^n)]\in \mathbb{R}^{1 \times n},~{\bf m}({\bf Z}^*)= [{\it m}({\bf z}^{*1}) ~{\it m}({\bf z}^{*2}) ~~\cdots~~ {\it m}({\bf z}^{*m})] \in \mathbb{R}^{1 \times m},$$ $${\bf K}^* = ({K_{ij}^*})_{i=1,\ldots,n;~j=1,\ldots,m} = k({\bf z}^i,{\bf z}^{*j}) \textrm{ and } {\bf K}^{**} = ({K_{ij}^{**}})_{i=1,\ldots,m;~j=1,\ldots,m} = k({\bf z}^{*i},{\bf z}^{*j}).$$ The predictive distribution represents the posterior after observing the data \cite{bishop2006pattern} and is given by \begin{equation} \label{GP_posterior} p\left({\bf \hat{y}}|\mathbf{Z},{\bf y},\mathbf{Z}^* \right) \sim \mathcal{N} \left(\, {\bf K}^{*T}{\bf K}^{-1}{\bf y}\, , \, {\bf K}^{**}- {\bf K}^{*T} {\bf K}^{-1} {\bf K}^*\, \right), \end{equation} where superscript $T$ denotes the transpose operation. The Gaussian process prediction depends on the choice of the mean and covariance functions, and on their hyper-parameters $\hslash$, $\sigma^2 _\phi $ and $ \sigma^2_n $, which can be inferred from the data $${\bm \theta}^* = [\hslash, \sigma^2 _\phi, \sigma^2_n ] = \arg\min_{{\bm \theta}}\, L({\bm \theta}),$$ by minimizing the negative marginal log-likelihood function \[ L({\bm \theta}) = - \log\, p({\bf y}|\mathbf{Z},{\bm \theta})=\frac{1}{2} \log \det({\mathbf{K}}) + \frac{1}{2} ({\bf y}-{\bf m}(\mathbf{Z}))\, {\mathbf{K}}^{-1}\, ({\bf y}-{\bf m}(\mathbf{Z}))^T + \frac{n}{2}\, \log \left( 2 \pi \right).
\] \subsubsection{Artificial Neural Networks} \label{sect:NN} The study of Artificial Neural Networks began in the 1940s with attempts to imitate the human brain's biological structure. Pioneering work was carried out by Rosenblatt, who proposed a three-layered network structure, the perceptron \cite{hagan2014neural}. ANNs detect patterns in data by discovering the input--output relationships. Applications include the approximation of functions, regression analysis, time series prediction, pattern recognition, and speech synthesis and recognition \cite{jang1997neuro,ayanzadeh2011fossil}. An ANN consists of neurons and connections between the neurons (weights). Neurons are organized in layers, where at least three layers of neurons (an input layer, a hidden layer, and an output layer) are required for the construction of a neural network. The input layer distributes the input signals $\mathbf{z} = [{z}_1~{z}_2 ~~\cdots~~ {z}_r]$ to the first hidden layer. For a neural network with $L$ hidden layers and $m^{\ell}$ neurons in each hidden layer, let $ {\bf \hat{y}}^{\ell}= [\hat{y}^{\ell}_1~\hat{y}^{\ell}_2~~\cdots~~ \hat{y}^{\ell}_{m^{\ell}} ]$ be the vector of outputs from layer $\ell$, $\mathbf{b}^\ell = [b^{\ell}_1~b^{\ell}_2~~\cdots ~~ b^{\ell}_{m^{\ell}}]$ the biases at layer $\ell$, and ${\bf w}_{j}^\ell = [{w}_{j_1}^\ell~{w}_{j_2}^\ell ~~\cdots~~ w^\ell_{j_{m^{\ell}}}]$ the weights connecting neuron $j$ to the input of that layer (the output of the previous layer). The vectors ${\bf \hat{y}}^{\ell}$ and ${\bf w}_{j}^\ell$ share the same dimension, which varies along the layers depending on the number of input features, neurons and outputs. Then the feed-forward operation is \[ \begin{array}{lr} {x}_j^{\ell+1}={{\bf w}_j^{\ell +1}}^T {\bf \hat{y}}^{\ell} + b_j^{\ell+1} , \quad {\bf \hat{y}}^0= \mathbf{z},\quad j=1,\ldots,m^{\ell+1},\\ \hat{y}_j^{\ell+1}=\varphi \left({x}_j^{\ell+1} \right), \quad \ell=0, 1, \ldots, L-1.
\end{array} \] All products of the previous layer's outputs with the current layer's neuron weights are summed, and the bias value of each neuron is added, to obtain the vector $\mathbf{x}^{\ell} = [x^{\ell}_1~x^{\ell}_2~~\cdots ~~ x^{\ell}_{m^{\ell}}]$. The final output of each layer is then obtained by passing the vector $\mathbf{x}^\ell$ through the transfer function $\varphi$, which is a differentiable function and can be a log-sigmoid, hyperbolic tangent sigmoid, or linear transfer function. The training process of an ANN adjusts the weights and the biases in order to reproduce the desired outputs when fed the given inputs. The training process via the back-propagation algorithm \cite{rumelhart1985learning} uses a gradient descent method to modify weights and thresholds such that the error between the desired output and the output signal of the network is minimized \cite{funahashi1989approximate}. In supervised learning the network is provided with samples from which it discovers the relations between inputs and outputs. The output of the network is compared with the desired output, the error is back-propagated through the network and the weights are adjusted. This process is repeated over several iterations, until the network output is close to the desired output \cite{haykin2009neural}. \section{Multivariate prediction of local reduced-order models characteristics (MP-LROM) \label{sect:MP-LROM}} We propose multivariate input-output models \begin{equation}\label{eqn:general_MP-LROM} \phi: {\bf z} \mapsto {{y}}, \end{equation} ${\bf z} \in \mathbb{R}^r$, to predict characteristics $y \in \mathbb{R}$ of local parametric reduced-order models \eqref{eqn::-3}.
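As an illustration of the feed-forward pass described in Section \ref{sect:NN}, a minimal sketch (hypothetical layer sizes and random, untrained weights; $\varphi$ is taken as the hyperbolic tangent sigmoid, with a linear output layer):

```python
import numpy as np

def forward(z, weights, biases):
    # Feed-forward pass: y^{l+1} = phi(W^{l+1} y^l + b^{l+1}), linear output layer
    y = z
    for W, b in zip(weights[:-1], biases[:-1]):
        y = np.tanh(W @ y + b)               # hyperbolic tangent sigmoid
    return weights[-1] @ y + biases[-1]

rng = np.random.default_rng(0)
sizes = [3, 8, 8, 1]                         # r = 3 inputs, two hidden layers, one output
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases  = [rng.standard_normal(m) for m in sizes[1:]]

out = forward(np.array([0.1, 0.5, 9.0]), weights, biases)
```

Training by back-propagation would then adjust `weights` and `biases` by gradient descent on the output error; only the forward evaluation is shown here.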
\subsection{Error Model} \label{sect:error_MP-LROM} Inspired by the MFC and ROMES methodologies, we introduce an input-output model to predict the level of error $\varepsilon_{\mu,\mu_{p},K_{POD}}^{HF}$, where \begin{equation}\label{eqn:level_error_ML_ROM} \begin{array}{lr} {\varepsilon_{\mu,\mu_{p},K_{POD}}^{HF}} = \\ \| [\, {\bf x}(\mu,t_1) - U_{ \mu_{p}}{\bf \tilde x}_{\mu_{p}}(\mu,t_1) \quad {\bf x}(\mu,t_2) - U_{ \mu_{p}}{\bf \tilde x}_{\mu_{p}}(\mu,t_2) \quad \cdots \quad {\bf x}(\mu,t_{N_t}) - U_{ \mu_{p}}{\bf \tilde x}_{\mu_{p}}(\mu,t_{N_t}) \,] \|_F. \end{array} \end{equation} Here $\|\cdot\|_F$ denotes the Frobenius norm of the matrix whose columns are the listed error vectors, and $K_{POD}$ is the dimension of the reduced-order model. In contrast with the ROMES and MFC models, which predict the error of global reduced-order models with fixed dimensions using univariate functions, here we propose a multivariate model \begin{equation}\label{eqn:MP-LROM-error} \phi_{MP-LROM}^e: \{\mu, \mu_{p}, K_{POD}\} \mapsto \log \varepsilon_{\mu,\mu_{p},K_{POD}}^{HF} \end{equation} to predict the error of local parametric reduced-order models \eqref{eqn::-3} of various dimensions. Since the dimension of the basis usually influences the level of error, we include it among the input variables. To design models with reduced variances, we seek to approximate the logarithm of the error, as suggested in \cite{drohmann2015romes}. For high-dimensional parametric spaces, the ROMES method handles the curse of dimensionality well via its univariate models. In combination with the active subspace method \cite{constantine2014active}, we can reduce the number of input variables when the amount of variability in the parametric space is mild. This makes our error model feasible even for high-dimensional parametric spaces. \subsection{Dimension of the reduced basis} The basis dimension represents one of the most important characteristics of a reduced-order model.
The reduced manifold dimension directly affects both the on-line computational complexity of the reduced-order model and its accuracy \cite{kunisch2001galerkin,Hinze_Wolkwein2008,fahl2003reduced}. By increasing the dimension of the basis, the projection error usually decreases and the accuracy of the reduced-order model is enhanced. However, this is not always the case, as seen in \cite[Section 5]{rowley2004model}. Nevertheless, the spectrum of the snapshots matrix offers guidance regarding the choice of the reduced basis dimension when some prescribed reduced-order model error is desired. However, the accuracy also depends on the `in-plane' error, which is due to the fact that the full-order-model equations are projected onto the reduced subspace \cite{MRathinam_LPetzold_2003a,homescu2005error}. We seek to predict the dimension of the local parametric reduced-order model \eqref{eqn::-3} by accounting for both the orthogonal projection error onto the subspace, which is computable from the squares of the discarded singular values, and the `in-plane' error. As such we propose to model the mapping \begin{equation}\label{eqn:eqn:MP-LROM-dimension} \phi_{MP-LROM}^d: \{\mu_{p}, \log \varepsilon_{\mu_{p},\mu_{p},K_{POD}}^{HF}\} \mapsto K_{POD}. \end{equation} Once such a model is available, given a positive threshold $\bar{\varepsilon}$ and a parametric configuration $\mu_p$, we will be able to predict the dimension $K_{POD}$ of the basis $U_{\mu_p}$ such that the reduced-order model error satisfies \begin{equation} \label{eqn:level_error3} \| [\, {\bf x}(\mu_p,t_1) - U_{ \mu_p}{\bf \tilde x}_{\mu_p}(\mu_p,t_1) \quad {\bf x}(\mu_p,t_2) - U_{ \mu_p}{\bf \tilde x}_{\mu_p}(\mu_p,t_2) \quad \cdots \quad {\bf x}(\mu_p,t_{N_t}) - U_{ \mu_p}{\bf \tilde x}_{\mu_p}(\mu_p,t_{N_t}) \,] \|_F \approx \bar{\varepsilon}.
\end{equation} \section{Numerical experiments} \label{sect:experm} We illustrate the application of the proposed MP-LROM models to predict the error and dimension of local parametric reduced-order models for a one-dimensional Burgers model. The 1D-Burgers model employed herein is parametrized by the viscosity coefficient. To assess the performance of the MP-LROM models constructed using Gaussian Process and Artificial Neural Networks, we employ various cross-validation tests. The dimensions of the training and testing data sets are chosen empirically based on the number of samples. For the Artificial Neural Networks models, the number of hidden layers and neurons in each hidden layer varies for each type of problem under study. The squared-exponential covariance kernel \eqref{eq_cov} is used for the Gaussian Process models. The approximated MP-LROM error models are compared against the ROMES and multi-fidelity correction models, whereas the MP-LROM models that predict the dimension of the reduced-order models are verified against the standard approach based on the spectrum of the snapshots matrix. \input{Burgers_model.tex} \input{Parameter_range} \subsubsection{Selecting the dimension of reduced-order model} \label{sect:optimal_base} Here we construct MP-LROM models to predict the reduced basis dimension that meets an a-priori specified accuracy level in the reduced-order model solution. The models are constructed using GP and ANN methods and have the following form % \begin{equation}\label{eqn:MP-LROM_dimension} \phi_{MP-LROM}^d: \{\mu_p,\log{{\varepsilon}}_{\mu_p,\mu_p,K_{POD}}^{HF}\} \mapsto \widehat{K_{POD}}. \end{equation} % The input features of this model consist of the viscosity parameter $\mu_p \in [0.01,1]$ and the log of the Frobenius norm of the error between the high-fidelity and reduced-order models \eqref{eqn:param_rang_err}. The output $\widehat{K_{POD}}$ is an estimate of the dimension $K_{POD}$ of the reduced manifold.
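The spectrum-based baseline that the dimension models are later compared against can be sketched in a few lines. The sketch below assumes the standard POD result that the squared orthogonal projection error of a rank-$K$ basis equals the sum of the squared neglected singular values; function and variable names are illustrative, not from the paper's implementation.

```python
import numpy as np

def spectrum_based_dim(snapshots, eps_bar):
    """Smallest POD dimension K whose orthogonal projection error
    sqrt(sum of the squared neglected singular values) is <= eps_bar.
    The `in-plane' error is ignored, which is why this criterion tends
    to underestimate the dimension actually needed."""
    sigma = np.linalg.svd(snapshots, compute_uv=False)
    # tail2[k] = sum_{i >= k} sigma_i^2: squared projection error of a rank-k basis
    tail2 = np.concatenate([np.cumsum((sigma ** 2)[::-1])[::-1], [0.0]])
    return int(np.argmax(np.sqrt(tail2) <= eps_bar))
```

Note that this criterion requires the singular values of the snapshots matrix, whereas the MP-LROM dimension model avoids computing the spectrum altogether.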
The data set contains equally distributed values of $\mu_p$ over the entire parametric domain, $\mu_p \in \{0.01, 0.0113, 0.0126, \ldots, 0.9956 \}$, reduced basis dimensions $K_{POD}$ spanning the set $\{4,5,\ldots,14,15\}$, and the logarithm of the reduced-order model error $\log \varepsilon_{\mu_p,\mu_p,K_{POD}}^{HF}$. We use GP and ANN methods to construct two MP-LROM models that predict the dimension of local reduced-order models given a prescribed accuracy level. During the training phase, the MP-LROM models learn the reduced basis dimensions $K_{POD}$ associated with the parameter $\mu_p$ and the corresponding error $\log\varepsilon_{\mu_p,\mu_p,K_{POD}}^{HF}$. They can then estimate the appropriate reduced basis dimension given a specific viscosity parameter $\mu_p$ and a desired precision $\log\bar{\varepsilon}$. The computational cost is low once the models are constructed. The output indicates the dimension of the reduced manifold for which the ROM solution satisfies the corresponding error threshold. Thus we do not need to compute the entire spectrum of the snapshots matrix in advance, which for large spatial discretization meshes translates into significant computational savings. Figure \ref{fig:basis_contour_log} illustrates the contours of the log of the reduced-order model errors over all values of the viscosity parameter $\mu_p \in \{0.01, 0.0113, 0.0126, \ldots, 1\}$ and various POD dimensions $K_{POD} = \{4,5,\ldots,14,15\}$. A neural network with $5$ hidden layers and a hyperbolic tangent sigmoid activation function in each layer is used, while for the Gaussian Process we use the squared-exponential covariance kernel \eqref{eq_cov}. For both MP-LROM models, the results were rounded to natural numbers. Table \ref{tab:Opt_log} shows the average and variance of the error in GP and ANN predictions for different sample sizes.
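The input-output shape of the dimension model $\phi_{MP-LROM}^d$, including the rounding to a natural number, can be illustrated with a simple stand-in regressor. A nearest-neighbour lookup replaces the trained GP/ANN here purely for illustration; all names are hypothetical.

```python
import numpy as np

def predict_dim(train_X, train_K, query):
    """Stand-in regressor for phi^d: map a (mu_p, log eps) pair to K_POD.
    A nearest-neighbour lookup replaces the trained GP/ANN of the paper;
    the output is rounded to a natural number, as done for MP-LROM."""
    train_X = np.asarray(train_X, dtype=float)
    dist2 = ((train_X - np.asarray(query, dtype=float)) ** 2).sum(axis=1)
    return int(round(train_K[int(np.argmin(dist2))]))
```

Any regression technique with this signature (inputs $(\mu_p,\log\bar\varepsilon)$, integer-valued output) fits the role the GP and ANN models play in the experiments.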
ANN outperforms GP, and as the number of data points grows the accuracy increases and the variance decreases. The results are obtained using a conventional validation with $80\% $ of the sample size dedicated to training data and the other $20\% $ to test data. The employed formula is described in equation \eqref{eqn:err_fold}. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth, height=0.40\textwidth]{contour_plot_m_vs_log_error.pdf} \caption{Isocontours of the reduced model errors for different POD basis dimensions and viscosity parameters $\mu_p$. \label{fig:basis_contour_log} } \end{figure} \begin{table}[H] \begin{center} \begin{tabular}{ | l | l | l | l | l |} \hline & \multicolumn{2}{|c|}{MP-LROM GP} & \multicolumn{2}{|c|}{MP-LROM ANN} \\ \hline Sample size & $\textnormal{E}_{\rm fold}$ & $\textnormal{VAR}_{\rm fold}$ & $\textnormal{E}_{\rm fold}$ & $\textnormal{VAR}_{\rm fold}$ \\ \hline 100 & $ 0.2801 $ & $0.0901$ & $ 0.1580$ & $ 0.02204 $ \\ \hline 1000 & $0.1489$ & $ 0.0408 $ & $ 0.0121 $ & $ 0.0015 $ \\ \hline 3000 & $0.1013 $ & $ 0.0194 $ & $ 0.0273 $ & $ 0.0009 $ \\ \hline 5000 & $ 0.0884 $ & $ 0.0174 $ & $ 0.0080 $ & $ 0.0002 $ \\ \hline \end{tabular} \end{center} \caption{Average and variance of errors in prediction of the reduced basis dimension using MP-LROM models for different sample sizes.} \label{tab:Opt_log} \end{table} Figures \ref{fig:hist_NNDim} and \ref{fig:hist_GPDim} show the prediction errors using $100$ and $1000$ training samples for the MP-LROM models constructed via ANN and GP, respectively. The histograms shown in Figure \ref{fig:hist_GPDim}, as stated before, can assess the validity of the GP assumptions. As the number of samples increases, the distribution shape becomes closer to the Gaussian profile $\mathcal{N} (0, \sigma_n^2)$ than the distribution shown in Figure \ref{fig:ParamHist_GP}, which was used for generating the MP-LROM models that predict local reduced-order model errors.
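The conventional validation above can be sketched as follows. The mean absolute misfit used below is an illustrative stand-in for the paper's misfit formula \eqref{eqn:err_fold}, which is defined elsewhere in the text; the function names are hypothetical.

```python
import numpy as np

def holdout_split(n_samples, train_frac=0.8, seed=0):
    """Random 80/20 partition of sample indices (conventional validation)."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    cut = int(train_frac * n_samples)
    return idx[:cut], idx[cut:]

def fold_stats(y_true, y_pred):
    """Mean and variance of the absolute misfits on a test fold
    (a stand-in for the paper's misfit formula)."""
    e = np.abs(np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float))
    return e.mean(), e.var()
```

Repeating the split five times over disjoint test folds yields the five-fold cross-validation used below.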
\begin{figure}[h] \centering \subfigure[$100$ samples] {\includegraphics[scale=0.35] {histogram_NN_log_pod_dim1.pdf}} \subfigure[$1000$ samples] {\includegraphics[scale=0.35] {histogram_NN_log_pod_dim2.pdf}} \caption{ Histogram of errors in prediction of the reduced basis dimension using ANN MP-LROM for different sample sizes \label{fig:hist_NNDim}} \end{figure} \begin{figure}[h] \centering \subfigure[$100$ samples] {\includegraphics[scale=0.35] {histogram_gp_log_pod_dim1.pdf}} \subfigure[$1000$ samples] {\includegraphics[scale=0.35] {histogram_gp_log_pod_dim2.pdf}} \caption{ Histogram of errors in prediction of the reduced basis dimension using GP MP-LROM for different sample sizes \label{fig:hist_GPDim}} \end{figure} To assess the accuracy of the MP-LROM models, the data set is randomly partitioned into five equal-size sub-samples, and a five-fold cross-validation test is performed. The five results from the folds are averaged and presented in Table \ref{tab:experm1}. The ANN model correctly estimated the dimension of the reduced manifold in $87\%$ of the cases. GP correctly estimates the POD dimension $53\%$ of the time. The variance results show that the GP model has more stable predictions, indicating a higher bias in the data. \begin{table}[H] \begin{center} \begin{small} \begin{tabular}{ | c | p{1.55cm} | p{1.25cm} | c | c | c | p{1.55cm} | p{2.75cm} |} \hline Dimension discrepancies& zero & one & two & three & four & $>$ four & $ VAR $ \\ \hline ANN MP-LROM & $87\% $ & $11\%$ & $2 \%$ & 0 & 0 & 0 & $2.779 \times 10^{-3}$ \\ \hline GP MP-LROM & $53\%$ & $23 \%$ & $15\%$ & $5 \%$ &$ 3\%$ & $1 \%$ & $4.575 \times 10^{-4}$ \\ \hline \end{tabular} \end{small} \end{center} \caption{POD basis dimension discrepancies between the MP-LROM predictions and true values over five-fold cross-validation.
The error variance is also computed.} \label{tab:experm1} \end{table} In Figure \ref{fig:expm1_pod}, we compare the output of the MP-LROM models against the singular-value-based estimation on a set of randomly selected test data. The estimation derived from the singular values is the standard method for selecting the reduced manifold dimension when a prescribed level of accuracy of the reduced solution is desired. Here the desired accuracy $\bar{\varepsilon}$ is set to $10^{-3}$. The mismatches between the predicted and true dimensions are depicted in Figure \ref{fig:expm1_pod}. The predicted values are the averages over five different MP-LROM models constructed using the ANN and GP methods. The models were trained on a random $80\%$ split of the data set and tested on the fixed $20\%$ test data. We notice that the snapshots matrix spectrum underestimates the true dimension of the manifold, as expected, since the `in-plane' errors are not accounted for. The ANN predictions were extremely accurate for most of the samples, while the GP usually overestimated the reduced manifold dimensions. \begin{center} \begin{figure}[H] \begin{centering} \includegraphics[width=0.5\textwidth, height=0.4\textwidth]{optimal_basis_size.pdf} \caption{Average error of the POD dimension prediction on randomly selected test data with desired accuracy $\bar{\varepsilon}=10^{-3}$. The averages of the absolute prediction errors are 1.31, 0.21, and 1.38 for the singular-value-based method and the MP-LROM models constructed using ANN and GP, respectively.} \label{fig:expm1_pod} \end{centering} \end{figure} \end{center} \subsection{Applications of MP-LROM error prediction models} \subsubsection{Designing the decomposition of the parametric domain} \label{sec:parametric_map} We seek to build a decomposition of the viscosity domain $[0.01,1]$ for the 1D-Burgers model using the MP-LROM models introduced in Section \ref{sect:err_estimate}. The non-physical parameter $\nu$ is set to $1$.
As discussed in Section \ref{sect:problem_Des}, we take the following steps. First, we identify ``$\mu_p$-feasible'' intervals $[d_\ell,d_r]$ in the parameter space such that a local reduced-order model depending only on the high-fidelity trajectory at $\mu_p$ is accurate to within the prescribed threshold for any $\mu \in [d_\ell,d_r]$. Second, a greedy algorithm generates the decomposition \begin{equation} [0.01,1] \subset \bigcup_{i=1}^M\, \left[d_\ell^i,d_r^i\right], \end{equation} by covering the parameter space with a union of $\mu_{p_i}$-feasible intervals, where each $\mu_{p_i}$-feasible interval is characterized by an error threshold $\bar \varepsilon_i$ (which can vary from one interval to another). This relaxation is needed because, for intervals associated with small parameters $\mu_{p_i}$, it is difficult to achieve reduced-order model errors as small as those obtained for larger parametric configurations. Existing reduced basis methods usually construct a global reduced-order model depending on multiple high-fidelity trajectories. In contrast, our approach uses the MP-LROM models to decompose the parameter space into smaller regions where the local reduced-order model solutions are accurate to within some tolerance levels. Since the local bases required for the construction of the local reduced-order models depend on only a single full simulation, the dimension of the POD subspace is small, leading to lower on-line computational complexity. \setcounter{secnumdepth}{5} \paragraph{Construction of a $\mu_p-$feasible interval} \label{sec:feasible_interval} We noticed in the previous subsection that the MP-LROM models can accurately estimate the reduced-order model errors $\log \varepsilon_{\mu,\mu_p,K_{POD}}^{HF}$ \eqref{eqn:param_rang_err}.
Thus we can employ them to establish a range of viscosity parameters around $\mu_p$ such that the reduced-order solutions depending on $U_{\mu_p}$ satisfy a desired accuracy level. More precisely, starting from a parameter $\mu_p$, a fixed POD basis dimension, and an error tolerance $\log\bar{\varepsilon}$, we search for an interval $[d_l, d_r]$ such that the estimated prediction $\widehat{\log\varepsilon}_{\mu,\mu_p,K_{POD}}^{HF}$ of the true error $\log\varepsilon_{\mu,\mu_p,K_{POD}}^{HF}$ \eqref{eqn:param_rang_err} meets the requirement \begin{equation}\label{eqn:inequality_constraint} \widehat{\log\varepsilon}_{\mu,\mu_p,K_{POD}}^{HF}<\log\bar{\varepsilon}, \quad \forall \mu \in [d_l, d_r]. \end{equation} Our strategy uses a simple incremental approach, sampling the vicinity of $\mu_p$ and evaluating the estimated errors $\widehat{\log\varepsilon}_{\mu,\mu_p,K_{POD}}^{HF}$ forecast by the MP-LROM models defined above. A grid of new parameters $\mu$ is built around $\mu_p$ and the error models predict the errors outward of $\mu_p$. Once the error model outputs exceed the prescribed error $\log\bar{\varepsilon},$ the previous $\mu$ satisfying the constraint \eqref{eqn:inequality_constraint} is set as $d_l$, for $\mu < \mu_p$, or $d_r$, for $\mu > \mu_p$. Figure \ref{fig:expm2_range} illustrates the range of parameters predicted by the MP-LROM models via ANN and GP against the true feasible interval, and the results show good agreement. For this experiment we set ${\mu_p}=0.7$, the dimension of the POD subspace $K_{POD} = 9$, and $\bar{\varepsilon} = 10^{-2}$. Values of $\mu = \mu_p \pm 0.001\cdot i,$ $i=1,2,\ldots$ are passed to the MP-LROM models. The average range of parameters obtained over five different configurations with ANN is $[0.650, 0.780]$, while in the case of GP we obtained $[0.655,0.780]$.
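The incremental search just described can be sketched as follows; `error_model` stands for a trained MP-LROM predictor of $\widehat{\log\varepsilon}_{\mu,\mu_p,K_{POD}}^{HF}$ and is a hypothetical callable, not the paper's implementation.

```python
def feasible_interval(mu_p, error_model, log_eps_bar, step=1e-3, max_steps=1000):
    """Incremental search for a mu_p-feasible interval [d_l, d_r].
    `error_model(mu)` is a hypothetical callable returning the MP-LROM
    prediction of the log-error at mu for the basis built at mu_p."""
    d_l = d_r = mu_p
    for i in range(1, max_steps + 1):          # sample outward to the right
        mu = mu_p + i * step
        if error_model(mu) >= log_eps_bar:
            break
        d_r = mu                               # last mu satisfying the constraint
    for i in range(1, max_steps + 1):          # sample outward to the left
        mu = mu_p - i * step
        if error_model(mu) >= log_eps_bar:
            break
        d_l = mu
    return d_l, d_r
```

Each call to `error_model` costs only a model evaluation, so the interval is located without any additional high-fidelity solves.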
In each configuration, we train the model on a random $80\%$ split of the data set and test it on the fixed test set of Figure \ref{fig:expm2_range}. For this design, the true range of parameters is $[0.650,0.785]$, underlining the predictive potential of the MP-LROM models built using regression machine learning techniques. \begin{figure}[H] \begin{centering} \includegraphics[width=0.5\textwidth, height=0.40\textwidth]{truth_error_compare.eps} \caption{The average range of parameter $\mu$ obtained with the MP-LROM models for $K_{POD}=9$ and ${\mu_p}=0.7$. The desired accuracy is $\bar{\varepsilon} = 10^{-2}$. The numbers represent the left and the right edges of the predicted vs the true feasible intervals.} \label{fig:expm2_range} \end{centering} \end{figure} \paragraph{The decomposition of the parametric domain as a union of $\mu_p-$feasible intervals} \label{sec:parametric_map_union} A union of different $\mu_{p_k}$-feasible intervals can be designed to cover an entire 1D parametric domain $[A,B]$. Once such a construction is available, it allows reduced-order simulations with a-priori error quantification for any viscosity parameter $\mu \in [A,B]$. A greedy strategy based on the MP-LROM error models constructed in Section \ref{sect:err_estimate} is described in Algorithm \ref{alg:map_generation}; its output is a collection of feasible intervals $\cup_{k=1}^n[d_l^k,d_r^k] \supset [A,B]$. After each iteration $k$ of the algorithm, a $\mu_{p_k}$-feasible interval $[d_l^k,d_r^k]$ is constructed. Each interval is associated with an accuracy threshold $\bar{\varepsilon}_k$. For small viscosity values we found that designing $\mu_{p_k}$-feasible intervals with high precision levels (very small thresholds $\bar{\varepsilon}_k$) is impossible, since the dynamics of the parametric 1D-Burgers model solutions change dramatically as the viscosity parameter decreases.
Consequently, we let $\bar{\varepsilon}_k$ vary along the parametric domain to accommodate the physical behaviour of the solution. Thus a small threshold $\bar{\varepsilon}_0$ is set initially and, as we chart the parameter domain $[A,B]$ from right to left, the threshold $\bar{\varepsilon}_k$ is increased. The algorithm starts by selecting the first centered parameter $\mu_{p_0}$ responsible for basis generation. It can be set to $\mu_{p_0} = B$ but may take any value in the proximity of $B$,~$\mu_{p_0}\leq B$. This choice depends on the variability of the parametric solutions in this region of the domain; by selecting $\mu_{p_0}$ to differ from the right edge of the domain, the number $n$ of feasible intervals should decrease. The next step is to set the threshold $\bar{\varepsilon}_0$ along with the maximum permitted size of the initial feasible interval to be constructed. This is set to $2\cdot r_0$; thus $r_0$ can be referred to as the interval radius. Along with the radius, the parameter $\Delta r$ determines the maximum number of MP-LROM model calls employed for the construction of the first $\mu_{p_0}$-feasible interval. While the radius is allowed to vary during the algorithm iterations, $\Delta r$ is kept constant. Finally, the dimension of the POD basis must be selected, together with three parameters $\beta_1$, $\beta_2$, and $\beta_3$ responsible for changing the threshold and radius and for selecting a new parameter location $\mu_{p_k}$ during the procedure. Next the algorithm starts the construction of the $\mu_{p_0}$-feasible interval. The process is described in the top part of Figure \ref{fig:describe_algorithm}(a). Then we sample the vicinity of $\mu_{p_0}$ at equally distributed parameters $\mu$ and compute the MP-LROM model predictions.
The sampling process and the comparison between the predicted errors $\widehat{\log{{\varepsilon}}}_{\mu,\mu_{p_0},K_{POD}}^{HF}$ and $\log\bar{\varepsilon}_0$ are depicted in Figure \ref{fig:describe_algorithm}(a). A green vertical segment indicates that the estimated error satisfies the threshold; i.e., $\widehat{\log{{\varepsilon}}}_{\mu,\mu_{p_0},K_{POD}}^{HF} < \log\bar{\varepsilon}_0$, whereas a red segment indicates the opposite. The left limit of the $\mu_{p_0}$-feasible interval is obtained when either $\mu < \mu_{p_0}-r_0$ or $\widehat{\log{{\varepsilon}}}_{\mu,\mu_{p_0},K_{POD}}^{HF}> \log{\bar{\varepsilon}_0}$. The left limit $d_l^0$, denoted by a green dashed line in Figure \ref{fig:describe_algorithm}(a), is set equal to the last parameter $\mu$ such that $\widehat{\log{{\varepsilon}}}_{\mu,\mu_{p_0},K_{POD}}^{HF} \leq \log{\bar{\varepsilon}_0}$. The next step searches for a centered parameter $\mu_{p_{k+1}}$; this process is described at the bottom of Figure \ref{fig:describe_algorithm}(a) for $k=0$. The centered parameter $\mu_{p_{k+1}}$ is first proposed based on an empirical formula described in line $25$ of Algorithm \ref{alg:map_generation}. This formula depends on the current centered parameter $\mu_{p_k}$, the number of tested parameters $\mu$ during the construction of the $\mu_{p_k}$-feasible interval, and the parameters $\Delta r$ and $\beta_3$. Next, the algorithm checks if the following constraint is satisfied \begin{equation}\label{eq::constrain2} [d_l^{k+1},d_r^{k+1}] \bigcap \bigg( \bigcup_{i=1}^{k} [d_l^{i},d_r^{i}] \bigg) \neq \emptyset, \end{equation} without taking into account the MP-LROM model error. This is achieved by comparing the error model prediction $\widehat{\log{{\varepsilon}}}_{d_l^k,\mu_{p_{k+1}},K_{POD}}^{HF}$ with the threshold $\log \bar{\varepsilon}_{k+1}$ (see instruction $27$ and the bottom of Figure \ref{fig:describe_algorithm}(a) for $k=0$).
If the predicted error is smaller than the current threshold, assuming a monotonically increasing error with larger distances $d(\mu,\mu_{p_{k+1}})$, the reduced-order model solutions should satisfy the accuracy threshold for all $\mu \in [\mu_{p_{k+1}},d_l^k].$ Consequently, equation \eqref{eq::constrain2} will be satisfied for the current $\mu_{p_{k+1}}$ if we set $r_{k+1}=\mu_{p_{k+1}}-d_l^k$ (see instruction $30$). In the case where the error estimate is larger than the present threshold, the centered parameter $\mu_{p_{k+1}}$ is updated to the middle point between the old $\mu_{p_{k+1}}$ and $d_l^k$ (see also the bottom of Figure \ref{fig:describe_algorithm}(a)). For the situation where the monotonicity of the error does not hold in practice, a simple safety net is used at instruction $12$. The instructions between lines $5$ and $21$ generate the $\mu_{p_k}$-feasible interval for the case when the current centered parameter $\mu_{p_k} \neq d_{l}^{k-1}$ (see the top part of Figure \ref{fig:describe_algorithm}(b) for $k=1$). Here by {\it int} we refer to the integer part of a real number; we used the Matlab command floor in the implementation. For the situation when $\mu_{p_k} = d_l^{k-1}$ (see the bottom of Figure \ref{fig:describe_algorithm}(b) for $k=2$), the threshold has to be increased (by setting $\bar{\varepsilon}_{k} = \beta_1\bar{\varepsilon}_k$ at line $23$), since the reduced-order model solutions cannot satisfy the desired precision according to the predicted errors. Consequently, $\beta_1$ has to be selected larger than $1$. The need for relaxing the threshold suggests that the greedy search is currently operating in a parametric region where only a slight change in the parameter $\mu$ away from $\mu_{p_k}$ leads to predicted ROM errors larger than the current threshold. Relaxing the threshold and decreasing the radius size $(\textrm{select }\beta_2<1 \textrm{ in line 23 of Algorithm 3})$ can be used as a strategy to identify a feasible region for the current centered parameter $\mu_{p_k}$. Similarly, relaxing the threshold and expanding the search $(\beta_2>1)$ could also represent a viable strategy. However, expanding the search in a parametric regime with large changes in the model dynamics, even if the threshold is relaxed, may lead to useless evaluations of the expressions in lines $7$ and $18$ of Algorithm 3. Thus $\beta_2$ should be selected smaller than $1$. Once the feasible region is obtained, the radius $r_k$ is reset to the initial value $r_0$ (see line 25 of Algorithm 3). By selecting $\beta_3 > 1$, the computational complexity of Algorithm $3$ is decreased, since the first proposal of the new centered parameter $\mu_{p_{k+1}}$ will always be smaller than the left limit $d_l^k$ of the current feasible interval. The entire algorithm stops when $\mu_{p_{k+1}} \leq A.$ For our experiments we employed the ANN MP-LROM model, and we set $A=0.01$, $B=1$, $\bar{\varepsilon}_0 = 10^{-2},~\Delta r = 5 \times 10^{-3},~r_0=0.5,~{K_{POD}} = 9,~\beta_1 = 1.2,~\beta_2 = 0.9$ and $\beta_3=1.4$. We initiate the algorithm by setting $\mu_{p_0}=0.87$, and the first feasible interval $[ 0.7700,1]$ is obtained. Next the algorithm selects $\mu_{p_1}=0.73$ with the associated range $[ 0.6700,0.8250]$ using the same initial threshold level. As we cover the parametric domain from right to left, i.e., selecting smaller and smaller parameters $\mu_{p_k}$, the algorithm enlarges the current threshold $\bar{\varepsilon}_k$; otherwise the error model predictions would not satisfy the initial precision. We continue this process until we reach the threshold $6.25$ with $\mu_{p_{32}}=0.021$ and the corresponding feasible interval $[0.01,0.039]$.
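A simplified sketch of the greedy covering follows. It keeps the right-to-left sweep and the threshold relaxation but omits the radius bookkeeping and the $\beta_2$, $\beta_3$ updates of the full algorithm; `error_model` is again a hypothetical trained predictor, and all names are illustrative.

```python
import math

def greedy_cover(A, B, error_model, log_eps0, step=0.01, beta1=1.2):
    """Right-to-left greedy cover of [A, B] by feasible intervals.
    `error_model(mu, mu_p)` is a hypothetical trained predictor of the
    log-error at mu for the local ROM built at mu_p.  Returns a list of
    (d_l, d_r, log_threshold) triples."""
    intervals, mu_p, log_eps = [], B, log_eps0
    while mu_p > A:
        d_l = mu_p
        while d_l > A and error_model(d_l - step, mu_p) < log_eps:
            d_l = max(A, d_l - step)           # march left while accurate
        if d_l == mu_p:                        # no feasible neighbour:
            log_eps += math.log(beta1)         # relax the threshold and retry
            continue
        d_r = mu_p
        while d_r + step <= B and error_model(d_r + step, mu_p) < log_eps:
            d_r += step                        # march right while accurate
        intervals.append((d_l, d_r, log_eps))
        mu_p = d_l                             # next centered parameter
    return intervals
```

Because the threshold is carried forward and only ever relaxed, the sketch reproduces the qualitative behaviour reported above: intervals near the left edge of the domain are accepted with larger tolerances.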
The generated decomposition is depicted in Figure \ref{fig:expm2_numMus}, where the associated threshold varies with the parameter. \begin{algorithm} \begin{algorithmic}[1] \State Select $\mu_{p_0}$ as the right edge of the parameter interval, i.e., $\mu_{p_0} = B$. \State Set the error threshold $\bar{\varepsilon}_0$, the step size $\Delta r$ for the selection of sampling parameters $\mu$, the maximum search radius $r_0$, the dimension of the POD basis $K_{POD}$, and $\beta_1,~\beta_2$ and $\beta_3$. \State Set $k=0$. \State WHILE $\mu_{p_k} \geq A$ DO \State $\quad$ FOR $i=1$ to $int(\frac{r_k}{\Delta r})+1$ \State $\quad \quad$ Set $\mu = \mu_{p_k} + i \Delta r$ \State $\quad \quad$ IF $(\phi(\mu,\mu_{p_k},{K_{POD}}) > \log\bar{\varepsilon}_k ~\textrm{OR}~\mu>B)$ THEN \State $\quad \quad \quad$ Set $d_r^k = \mu_{p_k} + (i-1) \Delta r$. EXIT. \State $\quad \quad$ END IF \State $\quad$END FOR \State $\quad$IF $k>0$ THEN \State $\quad \quad$ IF $d_r^k<d_l^{k-1}$ THEN \State $\quad \quad \quad$ $\mu_{p_k} = \frac{\mu_{p_k}+d_l^{k-1}}{2}$. GOTO $5$. \State $\quad \quad$END IF \State $\quad$END IF \State $\quad$FOR $j=1$ to $int(\frac{r_k}{\Delta r})+1$ \State $\quad \quad$ Set $\mu = \mu_{p_k} - j \Delta r$ \State $\quad \quad$ IF $(\phi(\mu,\mu_{p_k},{K_{POD}}) > \log\bar{\varepsilon}_k ~\textrm{OR}~\mu<A)$ THEN \State $\quad \quad \quad$ Set $d_l^k = \mu_{p_k} - (j-1) \Delta r$. EXIT. \State $\quad \quad$ END IF \State $\quad$END FOR \State $\quad$IF $(i=1)$ OR $(j=1)$ THEN \State $\quad \quad$ Set $\bar{\varepsilon}_{k} = \beta_1 \cdot \bar{\varepsilon}_k$; $r_{k} = \beta_2 \cdot r_k$; GOTO $5$.
\State $\quad$ELSE \State $\quad \quad$ $\mu_{p_{k+1}} = \mu_{p_k} - \beta_3 (j-1) \Delta r$; $\bar{\varepsilon}_{k+1} = \bar{\varepsilon}_k$; $r_{k+1} = r_0.$ \State $\quad$END IF \State $\quad $WHILE $\phi(d_l^k,\mu_{p_{k+1}},{K_{POD}}) > \log\bar{\varepsilon}_{k+1}$ DO \State $\quad \quad$ $\mu_{p_{k+1}} = \frac{\mu_{p_{k+1}} + d_l^k}{2}$. \State $\quad$END WHILE \State $\quad$ Set $r_{k+1}=\mu_{p_{k+1}}-d_l^k$. \State $\quad$ $k=k+1$. \State END WHILE \end{algorithmic} \caption{Generation of the 1D-parametric domain decomposition for reduced-order model usage. Extension to multi-dimensional parametric spaces is subject to future research.} \label{alg:map_generation} \end{algorithm} \begin{figure}[h] \begin{centering} \subfigure[Designing the first feasible interval (1) and selecting a new centered \newline parameter $\mu_{p_1}$ (2)] {\includegraphics[trim={4.6cm 0.5cm 0.0cm 1cm},scale=0.3]{Alg_photoa.pdf}} \subfigure[Designing a feasible interval (3) and increasing the tolerance level (4) ] {\includegraphics[trim={4.6cm 0.5cm 2.7cm 1cm},scale=0.3]{Alg_photob.pdf}} \caption{A description of the most important stages of the parameter domain decomposition algorithm. The arrows describe the internal steps of each stage, initiated in the order indicated by the arrows' indices.} \label{fig:describe_algorithm} \end{centering} \end{figure} \begin{figure}[H] \begin{centering} \includegraphics[width=0.5\textwidth, height=0.40\textwidth]{range_plot.pdf} \caption{The diffusion parametric domain decomposition defining the local feasible intervals and their corresponding errors.
Associated with each feasible interval is a centered parameter $\mu_p$ whose high-fidelity trajectory guides the construction of a reduced basis and operators such that the reduced-order model solutions along this interval are accurate to within the threshold depicted by the Y-axis labels.} \label{fig:expm2_numMus} \end{centering} \end{figure} \subsection{Multivariate prediction of local reduced-order model characteristics (MP-LROM) using regression machine learning methods} \subsubsection{Error estimation of local ROM solutions} \label{sect:err_estimate} Here, we use GP and ANN to approximate the MP-LROM error model introduced in \eqref{eqn:MP-LROM-error}. The approximated models have the following form \begin{equation}\label{eqn:prob_model_scale} \phi_{MP-LROM}^e: \{\mu,\mu_p,K_{POD}\} \mapsto \widehat{\log{{\varepsilon}}}_{\mu,\mu_p,K_{POD}}^{HF}, \end{equation} where the input features include a viscosity parameter value $\mu$, a parameter value $\mu_p$ associated with the full model run that generated the basis $U_{\mu_p}$, and the dimension of the reduced manifold $K_{POD}$. The target is the logarithm of the error of the reduced-order model solution at $\mu$, obtained using the basis $U_{\mu_p}$ and the corresponding reduced operators, and computed in the Frobenius norm \begin{equation} \label{eqn:param_rang_err} \begin{array}{lr} \log {{\varepsilon}}_{\mu,\mu_p,K_{POD}}^{HF} =\\ \log\Bigg(\| {\bf x}(\mu,t_1) - U_{ \mu_p}{\bf \tilde x}_{\mu_p}(\mu,t_1) \quad {\bf x}(\mu,t_2) - U_{ \mu_p}{\bf \tilde x}_{\mu_p}(\mu,t_2) \quad \cdots \quad {\bf x}(\mu,t_{N_t}) - U_{ \mu_p}{\bf \tilde x}_{\mu_p}(\mu,t_{N_t}) \|_F\Bigg). \end{array} \end{equation} The probabilistic models described generically in equation \eqref{eqn:prob_model_scale} are only approximations of the MP-LROM model \eqref{eqn:MP-LROM-error} and thus carry their own errors.
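Given high-fidelity snapshots and a reduced trajectory, the target \eqref{eqn:param_rang_err} can be evaluated directly; the array names below are illustrative, not from the paper's implementation.

```python
import numpy as np

def log_rom_error(X_hf, U, X_rom):
    """Log of the Frobenius norm of the ROM reconstruction error.
    X_hf  : (N, Nt) high-fidelity snapshots, columns x(mu, t_j)
    U     : (N, K)  reduced basis U_{mu_p}
    X_rom : (K, Nt) reduced trajectories, columns x~_{mu_p}(mu, t_j)"""
    return float(np.log(np.linalg.norm(X_hf - U @ X_rom, "fro")))
```

One high-fidelity trajectory at $\mu$ thus yields the training target for every basis $U_{\mu_p}$ and dimension $K_{POD}$ under consideration.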
For our experiments, the data set includes $10$ and $100$ equally distributed values of $\mu_p$ and $\mu$ over the entire parameter region, i.e., $\mu_p \in \{0.1, 0.2,\ldots, 1\}$ and $\mu \in \{ 0.01, \ldots, 1\}$, $12$ reduced basis dimensions $K_{POD}$ spanning the set $\{4,5,\ldots,14,15\}$, and the logarithms of the reduced-order model errors $\log{{\varepsilon}}_{\mu,\mu_p,K_{POD}}^{HF}$. The entire data set contains $12000$ samples, and for every $12$ samples one high-fidelity model solution is calculated. Only one high-fidelity simulation is required to compute the reduced solution errors for a parametric configuration $\mu$ using reduced-order models of various dimensions $K_{POD}$ constructed from a single high-fidelity trajectory at parameter $\mu_p$. As such, $1000$ high-fidelity simulations were needed to construct the entire data set. The high-fidelity simulations are used to accurately calculate the errors of the existing reduced-order models at the parametric configurations $\mu$. Figure \ref{fig:parameter_contour} shows isocontours of the errors ${{\varepsilon}}_{\mu,\mu_p,K_{POD}}^{HF}$ and $\log{{\varepsilon}}_{\mu,\mu_p,K_{POD}}^{HF}$ of the reduced-order model solution for various viscosity parameter values $\mu$ and POD basis dimensions. The design of the reduced-order models relies on the high-fidelity trajectory for $\mu_p=0.8$. The target values ${{\varepsilon}}_{\mu,\mu_p,K_{POD}}^{HF}$ vary over a wide range (from $300$ to $10^{-6}$), motivating the choice of models that target $\log{{{\varepsilon}}_{\mu,\mu_p,K_{POD}}^{HF}}$ to decrease the variance of the predicted results.
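The bookkeeping of this data set can be checked directly from the grids stated above:

```python
from itertools import product

mu_p_grid = [round(0.1 * i, 2) for i in range(1, 11)]   # 10 values {0.1, ..., 1}
mu_grid = [round(0.01 * i, 2) for i in range(1, 101)]   # 100 values {0.01, ..., 1}
k_grid = list(range(4, 16))                             # 12 dimensions {4, ..., 15}

samples = list(product(mu_p_grid, mu_grid, k_grid))
# one high-fidelity solve is shared by the 12 K_POD values of each (mu_p, mu) pair
n_hf_runs = len({(mp, m) for mp, m, _ in samples})
```
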
\begin{figure}[h] \centering \subfigure[Isocontours of the errors ${{{\varepsilon}}_{\mu,\mu_p,K_{POD}}^{HF}}$]{\includegraphics[scale=0.35]{contour_plot_parameter_vs_Nolog_error.pdf} \label{fig:parameter_contour_lin}} \subfigure[Isocontours of the logarithms of the errors $\log{{{\varepsilon}}_{\mu,\mu_p,K_{POD}}^{HF}}$ ] {\includegraphics[scale=0.35]{contour_plot_parameter_vs_log_error.pdf} \label{fig:parameter_contour_log}} \caption{Isocontours of the reduced model errors for different POD basis dimensions and parameters $\mu$. The reduced-order model uses a basis constructed from the full order simulation with parameter value $\mu_p=0.8$.} \label{fig:parameter_contour} \end{figure} A more detailed analysis comparing the model \begin{equation}\label{eqn:prob_model_no_scale} \phi_{MP-LROM}^e: \{\mu,\mu_p,K_{POD}\} \mapsto \hat{\varepsilon}_{\mu,\mu_p,K_{POD}}^{HF} \end{equation} that targets unscaled data with the model \eqref{eqn:prob_model_scale} is given in the following. The approximated MP-LROM models for estimating the local parametric reduced-order model errors are constructed using a Gaussian Process with a squared-exponential covariance kernel \eqref{eq_cov} and a neural network with six hidden layers and a hyperbolic tangent sigmoid activation function in each layer. Tables \ref{tab:Param_lin} and \ref{tab:Param_log} show the averages and variances of the prediction errors of the MP-LROM models for different sample sizes. Every subset of samples is selected randomly from a shuffled original data set. The misfit is computed using the same formulas presented in \eqref{eqn:err_fold} to evaluate the prediction errors. Table \ref{tab:Param_lin} shows the prediction errors of \eqref{eqn:prob_model_no_scale} computed via equation \eqref{eqn:err_fold} with ${y} = {{{\varepsilon}}_{\mu,\mu_p,K_{POD}}^{HF}}$ and $\hat{y} = \hat{\varepsilon}_{\mu,\mu_p,K_{POD}}^{HF}$; i.e., no data scaling; the predictions have a large variance and a low accuracy.
Scaling the data and targeting $\log{{{\varepsilon}}_{\mu,\mu_p,K_{POD}}^{HF}}$ via model \eqref{eqn:prob_model_scale} reduces the variance of the predictions and increases the accuracy, as shown in Table \ref{tab:Param_log}. The same formula \eqref{eqn:err_fold} with ${y} = {\log{{\varepsilon}}_{\mu,\mu_p,K_{POD}}^{HF}}$ and $\hat{y} = \widehat{\log{{\varepsilon}}}_{\mu,\mu_p,K_{POD}}^{HF}$ was applied. We notice that, for sample sizes less than or equal to $700$ and for scaled data, the variances of the GP and ANN predictions are not necessarily decreasing. This behavior changes, and the variances of both regression models decrease, for sample sizes larger than $700$, as seen in Table \ref{tab:Param_log}. The performance of the ANN and GP is highly dependent on the number of samples in the data set. As the number of data points grows, the accuracy increases and the variance decreases. The results show that GP outperforms ANN for small numbers of samples ($\leq 1000$), whereas for larger data sets ANN is more accurate than GP.
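A minimal version of GP regression with the squared-exponential covariance kernel can be written in a few lines of numpy. The hyperparameter values are illustrative, no hyperparameter optimization is performed, and only the posterior mean is returned; this is a sketch of the technique, not the paper's implementation.

```python
import numpy as np

def sq_exp_kernel(A, B, ell=1.0, sf=1.0):
    """Squared-exponential covariance sf^2 exp(-||x - x'||^2 / (2 ell^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return sf ** 2 * np.exp(-0.5 * d2 / ell ** 2)

def gp_predict(X, y, X_star, ell=1.0, sf=1.0, noise=1e-8):
    """GP posterior mean at X_star given training rows X (e.g. rows of
    (mu, mu_p, K_POD) features) and targets y (e.g. log-errors); a small
    noise term regularizes the linear solve."""
    K = sq_exp_kernel(X, X, ell, sf) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    return sq_exp_kernel(X_star, X, ell, sf) @ alpha
```

With the small noise term the posterior mean interpolates the training targets, which is the regime in which the log-scaled targets are easiest to fit.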
\begin{table}[H] \begin{center} \begin{tabular}{ | l | l | l | l | l |} \hline & \multicolumn{2}{|c|}{GP MP-LROM} & \multicolumn{2}{|c|}{ANN MP-LROM} \\ \hline Sample size & $\textnormal{E}_{\rm fold}$ & $\textnormal{VAR}_{\rm fold}$ & $\textnormal{E}_{\rm fold}$ & $\textnormal{VAR}_{\rm fold}$ \\ \hline 100 & $ 13.4519 $ & $ 5.2372 $ & $ 12.5189 $ & $ 25.0337 $ \\ \hline 400 & $ 6.8003 $ & $ 31.0974 $ & $ 6.9210 $ & $ 26.1814 $ \\ \hline 700 & $ 5.6273 $ & $ 14.3949 $ & $ 7.2325 $ & $ 19.9312 $ \\ \hline 1000 & $ 3.7148 $ & $ 13.8102 $ & $ 5.6067 $ & $ 14.6488 $ \\ \hline 3000 & $ 0.5468 $ & $ 0.0030 $ & $ 1.2858 $ & $ 1.2705 $ \\ \hline 5000 & $ 6.0563 $ & $ 22.7761 $ & $ 3.8819 $ & $ 23.9059 $ \\ \hline \end{tabular} \end{center} \caption{Average and variance of error in predictions of MP-LROM models \eqref{eqn:prob_model_no_scale} constructed via ANN and GP using errors ${{\varepsilon}}_{\mu,\mu_p,K_{POD}}^{HF}$ in training data for different sample sizes. \label{tab:Param_lin}} \end{table} \begin{table}[H] \begin{center} \begin{tabular}{ | l | l | l | l | l |} \hline & \multicolumn{2}{|c|}{GP MP-LROM} & \multicolumn{2}{|c|}{ANN MP-LROM} \\ \hline Sample size & $\textnormal{E}_{\rm fold}$ & $\textnormal{VAR}_{\rm fold}$ & $\textnormal{E}_{\rm fold}$ & $\textnormal{VAR}_{\rm fold}$ \\ \hline 100 & $ 0.5319 $ & $ 0.0118 $ & $ 1.2177 $ & $ 0.1834 $ \\ \hline 400 & $ 0.3906$ & $ 0.0007 $ & $ 0.8988 $ & $ 0.2593 $ \\ \hline 700 & $ 0.3322 $ & $ 0.0018 $ & $ 0.7320 $ & $ 0.5602 $ \\ \hline 1000 & $ 0.2693 $ & $ 0.0002 $ & $ 0.5866 $ & $ 0.4084 $ \\ \hline 3000 & $ 0.1558 $ & $ 0.5535 \times 10^{-4} $ & $ 0.01202 $ & $ 0.2744 \times 10^{-4} $ \\ \hline 5000 & $ 0.0775 $ & $ 0.4085 \times 10^{-5} $ & $ 0.0075 $ & $ 0.3812 \times 10^{-5} $ \\ \hline \end{tabular} \end{center} \caption{Average and variance of error in predictions of MP-LROM models \eqref{eqn:prob_model_scale} constructed via ANN and GP using logarithms of errors
$\log{{{\varepsilon}}_{\mu,\mu_p,K_{POD}}^{HF}}$ in training data for different sample sizes. \label{tab:Param_log}} \end{table} Figures \ref{fig:ParamHist_NN} and \ref{fig:ParamHist_GP} show the corresponding histograms of the errors in prediction of the MP-LROM models \eqref{eqn:prob_model_scale} and \eqref{eqn:prob_model_no_scale} using $100$ and $1000$ training samples for the ANN and GP methods, respectively. The histograms shown in Figure \ref{fig:ParamHist_GP} can be used to assess the validity of the GP assumptions \eqref{GP_Dist}, \eqref{GP_training}, \eqref{GP_prior}. The difference between the true and estimated values should behave as samples from the distribution $ \mathcal{N} (0, \sigma_n^2) $ \cite{drohmann2015romes}. In our case, they are hardly normally distributed, which indicates that the data sets do not come from Gaussian distributions. \begin{figure}[h] \centering \subfigure[$\log{{\varepsilon}}_{\mu,\mu_p,K_{POD}}^{HF} - \widehat{\log{{\varepsilon}}}_{\mu,\mu_p,K_{POD}}^{HF}$ \newline {}{- 100 samples}] {\includegraphics[scale=0.35]{histogram_NN_parameter_Log_range1.pdf}} \subfigure[${{\varepsilon}}_{\mu,\mu_p,K_{POD}}^{HF} - {{\hat{\varepsilon}}}_{\mu,\mu_p,K_{POD}}^{HF}$ - 100 samples]{\includegraphics[scale=0.35] {histogram_NN_parameter_range1.pdf}} \subfigure[$\log{{\varepsilon}}_{\mu,\mu_p,K_{POD}}^{HF} - \widehat{\log{{\varepsilon}}}_{\mu,\mu_p,K_{POD}}^{HF}$ - 1000 samples] {\includegraphics[scale=0.35]{histogram_NN_parameter_Log_range2.pdf}} \subfigure[${{\varepsilon}}_{\mu,\mu_p,K_{POD}}^{HF} - {\hat{\varepsilon}}_{\mu,\mu_p,K_{POD}}^{HF}$ - 1000 samples]{\includegraphics[scale=0.35] {histogram_NN_parameter_range2.pdf}} \caption{Histogram of errors in prediction using ANN MP-LROM.
\label{fig:ParamHist_NN}} \end{figure} \begin{figure}[h] \centering \subfigure[$\log{{\varepsilon}}_{\mu,\mu_p,K_{POD}}^{HF} - \widehat{\log{{\varepsilon}}}_{\mu,\mu_p,K_{POD}}^{HF}$ - 100 samples] {\includegraphics[scale=0.35]{histogram_gp_parameter_Log_range1.pdf}} \subfigure[${{\varepsilon}}_{\mu,\mu_p,K_{POD}}^{HF} - {\hat{\varepsilon}}_{\mu,\mu_p,K_{POD}}^{HF}$ - 100 samples]{\includegraphics[scale=0.35] {histogram_gp_parameter_range1.pdf}} \subfigure[$\log{{\varepsilon}}_{\mu,\mu_p,K_{POD}}^{HF} - \widehat{\log{{\varepsilon}}}_{\mu,\mu_p,K_{POD}}^{HF}$ - 1000 samples]{\includegraphics[scale=0.35]{histogram_gp_parameter_Log_range2.pdf}} \subfigure[${{\varepsilon}}_{\mu,\mu_p,K_{POD}}^{HF} - {\hat{\varepsilon}}_{\mu,\mu_p,K_{POD}}^{HF}$ - 1000 samples]{\includegraphics[scale=0.35] {histogram_gp_parameter_range2.pdf}} \caption{Histogram of errors in prediction using GP MP-LROM. \label{fig:ParamHist_GP}} \end{figure} Scaling the data and targeting the $\log{{{\varepsilon}}_{\mu,\mu_p,K_{POD}}^{HF}}$ errors clearly improves the performance of the MP-LROM models. Consequently, for the rest of the manuscript, we will only use model \eqref{eqn:prob_model_scale}. To assess the quality of the MP-LROM models, we also implemented a five-fold cross-validation test over the entire dataset. The results computed using formula \eqref{eqn:err_fold_average} are shown in Table \ref{tab:experm2}. ANN outperforms GP and estimates the errors more accurately. It also has a smaller variance than the Gaussian Process, which indicates more stable predictions.
\begin{table}[H] \begin{center} \begin{tabular}{ | l | l | l |} \hline & $\textnormal{E} $ & $ \textnormal{VAR} $ \\ \hline ANN MP-LROM & $0.004004$ & $2.16 \times 10^{-6 }$ \\ \hline GP MP-LROM & $0.092352$ & $ 1.32 \times 10^{-5} $ \\ \hline \end{tabular} \end{center} \caption{MP-LROM statistical results over five-fold cross-validation.} \label{tab:experm2} \end{table} Figure \ref{fig:expm2_error_estimates} illustrates the average of the prediction errors of five different error models computed using the ANN and GP regression methods. The error models were constructed using a training set formed by a randomly selected $80\%$ of the entire data set. The predictions were made using a fixed test set, randomly selected from the entire data set, which contains various values of $\mu$, $K_{POD}$ and $\mu_p$ shown on the x-axes of Figure \ref{fig:expm2_error_estimates}. Building different GP and ANN MP-LROM error models, each trained on a different part of the data set, and then testing them with the same fixed test set reduces the bias in prediction. Again, ANN outperforms GP, providing more accurate error estimates. \begin{figure}[t!] \begin{centering} \includegraphics[width=0.5\textwidth, height=0.40\textwidth]{error_diff.pdf} \caption{The average of the prediction errors using five different trained models. The top labels show the corresponding $\mu_p$ and $K_{POD}$ for each parameter $\mu$. } \label{fig:expm2_error_estimates} \end{centering} \end{figure} We also compared the MP-LROM models with those obtained by implementing the ROMES method \cite{drohmann2015romes} and the MFC technique \cite{alexandrov2001approximation}. The ROMES method constructs univariate models \begin{equation} \label{eqn::ROMES_math_framework} \phi_{ROMES}: \log \rho(\mu) \mapsto \log\varepsilon_{\mu}^{HF}, \end{equation} where the input $\rho(\mu)$ consists of error indicators. Examples of indicators include residual norms, dual-weighted residuals, and other error bounds.
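As a minimal illustration of such a univariate error model, one can regress logarithms of errors on logarithms of residual norms. The sketch below uses a plain least-squares line in place of the GP and ANN regressors; the indicator-error relation and all numerical values are illustrative assumptions, not data from our experiments.

```python
import numpy as np

# Sketch of a ROMES-style univariate error model: log rho(mu) -> log eps(mu).
# A least-squares line stands in for the GP/ANN regressors used in practice;
# the linear indicator/error relation below is an illustrative assumption.
rng = np.random.default_rng(0)
log_rho = np.sort(rng.uniform(-6.0, -1.0, 50))   # log residual-norm indicators
log_eps = 1.2 * log_rho + 0.3                    # assumed noise-free relation
slope, intercept = np.polyfit(log_rho, log_eps, 1)

def predict_log_error(log_residual_norm):
    """Univariate error model: maps a log residual norm to a log error."""
    return slope * log_residual_norm + intercept
```

Once trained, such a model is evaluated only from the cheap indicator, without solving the high-fidelity problem.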
MFC implements input-output models \begin{equation}\label{eqn::MFC_math_framework} \phi_{MFC}: \mu \mapsto \log\varepsilon_{\mu}^{HF}, \end{equation} where the input of the error models is the viscosity parameter $\mu$. Both the ROMES and MFC methods use a global reduced-order model with a fixed dimension, in contrast to our method, which employs local reduced-order models with various dimensions. The ROMES and MFC models are univariate, whereas the MP-LROM models are multivariate. To accommodate our data set to the requirements of the ROMES and MFC methods, we separated the data set into multiple subsets. Each of these subsets has $100$ samples corresponding to a single $\mu_p$ and $K_{POD}$ and $100$ values of the parameter $\mu \in \{ 0.01,0.02, \ldots, 1\}$, so $100$ high-fidelity simulations are required. For each subset we constructed ANN and GP models to approximate the input-output models defined in \eqref{eqn::ROMES_math_framework} and \eqref{eqn::MFC_math_framework} using the same training set. In the case of the ROMES method, we employed the logarithms of residual norms as inputs. We first computed the corresponding reduced-order solution and then the associated logarithm of the residual norm by substituting the projected reduced-order solution into the high-fidelity model for parameter $\mu$. The output of both the ROMES and MFC models approximates the logarithm of the Frobenius norm of the reduced-order-model errors. Figures \ref{fig:contours_GP}-\ref{fig:VARcontours_NN} show the isocontours of the $\textnormal{E}_{\rm fold}$ and $\textnormal{VAR}_{\rm fold}$ computed using \eqref{eqn:err_fold} for different $K_{POD} $ and $\mu_p$ using the ROMES, MFC, and MP-LROM models constructed using the GP and ANN methods. In total there are $12 \times 10$ configurations corresponding to different $K_{POD} $ and $\mu_p$, and as many ROMES and MFC models. The MP-LROM models are global in nature, and their training set is the whole original data set.
The testing set is the same for all the compared models and differs from the training sets. We can see that the MP-LROM models are more accurate than the ROMES and MFC models. Including more samples associated with various POD basis sizes and $\mu_p$ is beneficial. We also trained and tested all the models using five-fold cross-validation. The average error and variance of all 120 $ \textnormal{E}_{\rm fold}$s and $\textnormal{VAR}_{\rm fold}$s are compared against those obtained using the MP-LROM error models and are summarized in Tables \ref{tab:experm_ROMES_mul_Efold} and \ref{tab:experm_ROMES_mul_Vfold}. This shows that, for our experiment, the MFC models outperform the ROMES ones, and the MP-LROM models are the most accurate. The MP-LROM models perform better since they employ more features and samples than the other models, which helps the error models tune their parameters better. We also note the efficiency of the MFC models from an accuracy point of view, considering that they use very few samples. In the case of large parametric domains, the MP-LROM error models may require a very large data set with many features. Using only subsets of the whole data set in the vicinity of the parameters of interest and applying the active subspace method \cite{constantine2014active} can help prevent the potential curse of dimensionality that MP-LROM might suffer from. \begin{figure}[h] \centering \subfigure[MFC] {\includegraphics[scale=0.24]{contour_GP_mul.pdf}} \subfigure[ROMES ]{\includegraphics[scale=0.24]{contour_GP_ROMES.pdf}} \subfigure[MP-LROM ]{\includegraphics[scale=0.24]{contour_GP_org.pdf}} \caption{Isocontours for the $\textnormal{E}_{\rm fold}$ using GP method.
\label{fig:contours_GP}} \end{figure} \begin{figure}[h] \centering \subfigure[MFC] {\includegraphics[scale=0.24]{contour_NN_mul.pdf}} \subfigure[ROMES ]{\includegraphics[scale=0.24]{contour_NN_ROMES.pdf}} \subfigure[MP-LROM ]{\includegraphics[scale=0.24]{contour_NN_org.pdf}} \caption{Isocontours for the $\textnormal{E}_{\rm fold}$ using ANN method. \label{fig:contours_NN}} \end{figure} \begin{figure}[h] \centering \subfigure[MFC] {\includegraphics[scale=0.24]{contour_VGP_mul.pdf}} \subfigure[ROMES]{\includegraphics[scale=0.24]{contour_VGP_ROMES.pdf}} \subfigure[MP-LROM ]{\includegraphics[scale=0.24]{contour_VGP_org.pdf}} \caption{Isocontours for the $\textnormal{VAR}_{\rm fold}$ using GP method. \label{fig:VARcontours_GP}} \end{figure} \begin{figure}[h] \centering \subfigure[MFC] {\includegraphics[scale=0.24]{contour_VNN_mul.pdf}} \subfigure[ROMES ]{\includegraphics[scale=0.24]{contour_VNN_ROMES.pdf}} \subfigure[MP-LROM]{\includegraphics[scale=0.24]{contour_VNN_org.pdf}} \caption{Isocontours for the $\textnormal{VAR}_{\rm fold}$ using ANN method. 
\label{fig:VARcontours_NN}} \end{figure} \begin{table}[H] \begin{center} \begin{tabular}{ | c | c | c | c |} \hline & ROMES & MFC & MP-LROM \\ \hline ANN & $0.3844$ & $ 0.0605$ & $ 8.8468 \times 10 ^{-4}$ \\ \hline GP & $ 0.2289$ & $ 0.0865 $& $ 0.0362 $ \\ \hline \end{tabular} \end{center} \caption{Average error of all 120 $ \textnormal{E}_{\rm fold}$s for the three methods.} \label{tab:experm_ROMES_mul_Efold} \end{table} \begin{table}[H] \begin{center} \begin{tabular}{ | c | c | c | c | } \hline & ROMES & MFC & MP-LROM \\ \hline ANN & $0.0541$ & $ 0.0213$ & $ 4.9808 \times 10 ^{-7}$ \\ \hline GP & $ 0.0051$ & $ 0.0049 $& $5.4818 \times 10 ^{-4}$ \\ \hline \end{tabular} \end{center} \caption{Average variance of all 120 $ \textnormal{VAR}_{\rm fold}$s for the three methods.} \label{tab:experm_ROMES_mul_Vfold} \end{table} Finally, we compare the average prediction errors of five different error models designed using the ROMES, MFC, and MP-LROM methods for one of the subsets, corresponding to $K_{POD} = 10$ and $\mu_p=1$. The testing set is randomly selected from the samples and is not included in the training sets. The training sets for both the ROMES and MFC models are the same. In order to prevent bias in prediction, each time the error models are trained on a randomly selected $80\%$ of the training sets and tested with the fixed test set. We repeated this five times and averaged the prediction errors. Figure \ref{fig:error_MULROMES} shows the average prediction error for all models implemented using the GP and ANN methods. \begin{figure}[h] \centering \subfigure[GP error model] {\includegraphics[scale=0.35]{mul_ROMES_GP.eps}} \subfigure[ANN error model]{\includegraphics[scale=0.35]{mul_ROMES_NN.eps}} \caption{The errors in the predictions of all methods using $K_{POD} = 10$ and $\mu_p=1$. For the GP error models the overall average of the prediction errors is $0.0131, 0.0487, 0.0095$ for MFC, ROMES and MP-LROM, respectively.
For the ANN error models the overall average of the prediction errors is $0.0056, 0.0240, 0.0029$ for MFC, ROMES and MP-LROM, respectively. The top x-axis shows the corresponding logarithms of the residual norms used as inputs for the ROMES method.} \label{fig:error_MULROMES} \end{figure} Since we include more features in our mappings, we achieve more accurate predictions compared to existing methods such as ROMES and MFC. However, there is always a trade-off between computational complexity and accuracy. For more accurate results, one can generate a bigger dataset with more samples taken from the parameter domain of the underlying model. This increases the computational complexity, since the dataset requires more high-fidelity model solutions and the probabilistic mappings are more costly to construct in the training phase. Techniques such as principal component analysis and active subspaces can alleviate the curse of dimensionality for big datasets by selecting the most effective features and ignoring the less effective ones. \input{Optimal_ROM_size.tex} \subsection{Supervised Machine Learning Techniques} \label{sect:prob_fram} In order to estimate the level of the reduced-order model solution error ${\varepsilon_{\mu,\mu_{p_j},K_{POD}}^{HF}}$ \eqref{eqn:level_error_ML_ROM} and the reduced basis dimension $K_{POD}$, we use regression machine learning methods to approximate the maps $\phi_{MP-LROM}^e$ and $\phi_{MP-LROM}^d$ described in \eqref{eqn:MP-LROM-error} and \eqref{eqn:eqn:MP-LROM-dimension}. Artificial Neural Networks and Gaussian Processes are used to build a probabilistic model $ \phi: {\bf z} \mapsto \hat{y} $, where $\phi$ is a transformation function that learns from the input features ${\bf z}$ to estimate the deterministic output $y$ \cite{murphy2012machine}. As such, these probabilistic models are approximations of the mappings introduced in \eqref{eqn:general_MP-LROM}.
The input features ${\bf z}$ can be either categorical or ordinal. The real-valued random variable $\hat{y}$ is expected to have a low variance and reduced bias. The features of $ {\bf z} $ should be descriptive of the underlying problem at hand \cite{bishop2006pattern}. The accuracy and stability of the estimations are assessed using the K-fold cross-validation technique. The samples are split into K subsets (``folds''), where typically $3 \le K \le 10$. The model is trained on $K-1$ folds and tested on the held-out fold in a round-robin fashion \cite{murphy2012machine}. Each fold induces a specific error, quantified as the average of the absolute values of the differences between the predicted values and the values in the held-out fold \begin{subequations} \begin{equation} \label{eqn:err_fold} \textnormal{E}_{\rm fold}=\frac{\sum_{i=1}^N | \hat{y}^i-y^i | }{N} , \quad \textnormal{VAR}_{\rm fold}=\frac{\sum_{i=1}^N \left( \hat{y}^i - \textnormal{E}_{\rm fold} \right)^2}{N-1}, \quad \rm fold=1,2, \ldots, K, \end{equation} where $N$ is the number of test samples in the fold. The error is then averaged over all folds: \begin{equation} \label{eqn:err_fold_average} \textnormal{E}=\frac{\sum_{\textnormal{fold}=1}^K\, \textnormal{E}_{\rm fold} }{K}, \quad \textnormal{VAR}=\frac{\sum_{\textnormal{fold}=1}^K \left(\textnormal{E}_{\rm fold}- \textnormal{E} \right)^2}{K-1}. \end{equation} \end{subequations} The variance of the prediction results \eqref{eqn:err_fold} accounts for the sensitivity of the model to the particular choice of data set. It quantifies the stability of the model in response to new training samples. A smaller variance indicates more stable predictions; however, this sometimes translates into a larger bias of the model. Models with small variance and high bias make strong assumptions about the data and tend to underfit the truth, while models with high variance and low bias tend to overfit the truth \cite{biasVar_NG}.
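The fold statistics \eqref{eqn:err_fold} and their aggregation \eqref{eqn:err_fold_average} translate directly into code. The sketch below follows the formulas exactly as written above; in particular, $\textnormal{VAR}_{\rm fold}$ is computed from the predictions $\hat{y}^i$ around $\textnormal{E}_{\rm fold}$, as in the text.

```python
import numpy as np

def fold_stats(y_true, y_pred):
    """E_fold and VAR_fold for one fold, following Eq. (err_fold) as written."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    n = len(y_true)
    e_fold = np.abs(y_pred - y_true).sum() / n
    var_fold = ((y_pred - e_fold) ** 2).sum() / (n - 1)
    return e_fold, var_fold

def cv_stats(folds):
    """Aggregate E and VAR over all folds, following Eq. (err_fold_average).

    folds : list of (y_true, y_pred) pairs, one pair per held-out fold.
    """
    e = np.array([fold_stats(t, p)[0] for t, p in folds])
    K = len(e)
    E = e.sum() / K
    VAR = ((e - E) ** 2).sum() / (K - 1)
    return E, VAR
```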
The trade-off between bias and variance in learning algorithms is usually controlled via techniques such as regularization or bagging and boosting \cite{bishop2006pattern}. In what follows we briefly review the Gaussian Process and Artificial Neural Networks techniques. \section{Parametric reduced-order modeling} \label{sec:ROM} \label{sect:ROM} Proper Orthogonal Decomposition has been successfully applied in numerous applications such as compressible flow \citep{Rowley2004} and computational fluid dynamics \citep{Kunisch_Volkwein_POD2002,Rowley2005,Willcox02balancedmodel}, to mention a few. It can be thought of as a Galerkin approximation in the state variable built from functions corresponding to the solution of the physical system at specified time instances. A system reduction strategy for Galerkin models of fluid flows, based on a partition into slow, dominant, and fast modes, has been proposed in \cite{Noack2010}. Closure models and stabilization strategies for POD of turbulent flows have been investigated in \cite{San_Iliescu2013,wells2015regularized}. In this paper, we consider discrete inner products (Euclidean dot product), though continuous products may be employed as well. Generally, an unsteady problem can be written in semi-discrete form as an initial value problem; i.e., as a system of nonlinear ordinary differential equations \begin{equation} \label{eqn::-4} \frac{d{\bf x}(\mu,t)}{dt} = {\bf F}({\bf x},t,\mu),~~~~{\bf x}(\mu,0) = {\bf x}_0 \in \mathbb{R}^{N_{\rm{state}}}, \quad \mu \in \mathcal{P}. \end{equation} The input parameter $\mu$ typically characterizes the physical properties of the flow. By $\mathcal{P}$ we denote the input-parameter space. For a given parameter configuration $\mu_p$ we select an ensemble of $N_t$ time instances of the flow ${\bf x}(\mu_p, {t_1}),\ldots,{\bf x}(\mu_p , t_{N_t}) \in \mathbb{R}^{N_{\rm{state}}}$, where ${N_{\rm{state}}}$ is the total number of discrete model variables, and $N_t \in \mathbb{N^*}$.
The POD method chooses an orthonormal basis $U_{\mu_p}=[{\bf u}_{1}^{\mu_p} ~~\cdots~~ {\bf u}_{K_{POD}}^{\mu_p}] \in \mathbb{R}^{{N_{\rm{state}}}\times K_{POD}}$, such that the mean square error between ${\bf x}(\mu_p,t_i)$ and the POD expansion ${\bf x}^\textsc{pod}_{\mu_p}(t_i) = U_{\mu_p}{\bf \tilde x_{\mu_p}}(\mu,t_i)$, ${\bf \tilde x_{\mu_p}}(\mu,t_i)= U_{\mu_p}^T {\bf x}(\mu_p , t_i) \in \mathbb{R}^ {K_{POD}} $, is minimized on average. The POD space dimension $K_{POD} \ll {N_{\rm{state}}}$ is appropriately chosen to capture the dynamics of the flow. Algorithm \ref{euclid} describes the reduced-order basis construction procedure \cite{stefanescu2014comparison}. \begin{algorithm} \begin{algorithmic}[1] \State Compute the singular value decomposition for the snapshots matrix $ [{\bf x}(\mu_p, {t_1})~ \cdots ~{\bf x}(\mu_p, {t_{N_t}})]= \bar U_{\mu_p} \Sigma_{\mu_p} {\bar V}^T_{\mu_p},$ with the singular vectors matrix $\bar U_{\mu_p} =[{\bf u}_1^{\mu_p}~~ \cdots ~~{\bf u}_{N_t}^{\mu_p}].$ \State Using the singular values $\lambda_1\geq \lambda_2\geq \ldots \geq \lambda_{N_t} \geq 0$ stored in the diagonal matrix $\Sigma_{\mu_p}$, define $I(m)= \sum_{i=1}^m \lambda_i^2 \big/ \sum_{i=1}^{N_t} \lambda_i^2$. \State Choose $K_{POD}$, the dimension of the POD basis, such that $ K_{POD}={\rm arg}\min_m \{I(m):I(m)\geq \gamma\}$ where $0 \leq \gamma \leq 1$ is the percentage of total information captured by the reduced space $\mathcal{X}^{K_{POD}}=\textnormal{range}(U_{\mu_p})$. It is common to select $\gamma=0.99$. The basis $U_{\mu_p}$ consists of the first $K_{POD}$ columns of $\bar U_{\mu_p}$.
\end{algorithmic} \caption{POD basis construction} \label{euclid} \end{algorithm} Next, a Galerkin projection of the full model state \eqref{eqn::-4} onto the space $\mathcal{X}^{K_{POD}}$ spanned by the POD basis elements is used to obtain the reduced-order model \begin{equation}\label{eqn::-3} \frac{d{\bf \tilde x}_{\mu_p}(\mu,t)}{dt} = U_{ \mu_p}^T\,{\bf F}\bigg(U_{\mu_p}{\bf \tilde x}_{\mu_p}(\mu,t), t, \mu \bigg), \quad {\bf \tilde x}_{\mu_p}(\mu,0)= U_{\mu_p}^T\,{\bf x}_0. \end{equation} The notation ${\bf \tilde x}_{\mu_p}(\mu,t)$ expresses the solution dependence on the varying parameter $\mu$ and also on $\mu_p$ the configuration whose associated high-fidelity trajectory was employed to generate the POD basis. While being accurate for $\mu=\mu_p$, the reduced model \eqref{eqn::-3} may lose accuracy when moving away from the initial setting. Several strategies have been proposed to derive a basis that spans the entire parameter space. These include the reduced basis method combined with the use of error estimates \cite{rozza2008reduced,quarteroni2011certified,prud2002reliable}, global POD \cite{taylor2004towards,schmit2003improvements}, Krylov-based sampling methods \cite{daniel2004multiparameter,weile1999method}, and greedy techniques \cite{haasdonk2008reduced,nguyen2009reduced}. The fundamental assumption used by these approaches is that a smooth low-dimensional global manifold characterizes the model solutions over the entire parameter domain. The purpose of our paper is to estimate the solution error and dimension of the reduced-order model \eqref{eqn::-3} that can be subsequently used to generate a global basis for the parameter space.
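For concreteness, the POD basis construction of Algorithm \ref{euclid} reduces to a truncated SVD of the snapshot matrix with an energy criterion. A minimal sketch, assuming the snapshots are stored as the columns of a matrix:

```python
import numpy as np

def pod_basis(snapshots, gamma=0.99):
    """POD basis per Algorithm 1: SVD of the snapshot matrix, then keep the
    smallest K_POD such that I(K_POD) >= gamma.

    snapshots : (N_state, N_t) array whose columns are the states x(mu_p, t_i).
    Returns the basis U (N_state x K_POD) and K_POD.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    I = np.cumsum(s ** 2) / np.sum(s ** 2)    # I(m) = sum_{i<=m} l_i^2 / sum_i l_i^2
    K = int(np.searchsorted(I, gamma)) + 1    # arg min { m : I(m) >= gamma }
    return U[:, :K], K
```

The Galerkin reduced model \eqref{eqn::-3} then evolves the reduced coordinates $U_{\mu_p}^T {\bf x}$ under the projected right-hand side $U_{\mu_p}^T {\bf F}(U_{\mu_p}{\bf \tilde x}, t, \mu)$.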
\section{Introduction} \label{sec1} Classification is one of the most critical issues in data mining, pattern recognition, and machine learning. Numerous classification methods have been successful in various applications, but when these methods are applied to imbalanced datasets, the performance on the minority class may not be satisfactory \citep{ataeian_2019}. A dataset is considered imbalanced if the distribution of samples across the two classes is unequal. In this paper, the class with the larger number of samples is called the 'majority class,' and the other is called the 'minority class.' Imbalanced datasets arise in various applications such as network intrusion detection \citep{Giacinto_2008, roshanfekr_2019}, credit scoring \citep{Schebesch_2008}, spam filtering \citep{Tang2008SpamSD}, text categorization \citep{Zheng_2004}, and anomaly detection \citep{Pichara_2011}. Datasets are divided into three categories: data with all-continuous attributes, data with all-discrete attributes, and data with both continuous and discrete attributes (hybrid attributes). There are two approaches to deal with such datasets: first, adjusting the distribution of the data; second, adapting the classifier to operate on these datasets. Numerous techniques have been proposed for the first approach. The most common technique is sampling, i.e., balancing the dataset by reducing or increasing its size. These two operations are referred to as "under-sampling" and "over-sampling," respectively. The under-sampling method is straightforward, but it eliminates useful samples of the majority class, whereas the over-sampling method increases the risk of over-fitting \citep{Batista_2004}. Over-sampling is the opposite of the under-sampling method: it duplicates or interpolates minority samples in the hope of reducing the imbalance.
The over-sampling method assumes that the neighborhood of a positive sample is still positive, and that samples between two positive samples are also positive \citep{Japkowicz_2002, Laurikkala_2002, Ling_1998, Kotsiantis_2006, Yen_2009, Sun_2009}. The common sampling methods are RO-Sampling (Random Over-Sampling) and RU-Sampling (Random Under-Sampling). RO-Sampling balances the distribution of the data by randomly duplicating samples of the minority class, whereas RU-Sampling can randomly eliminate some useful samples of the majority class. SMOTE (Synthetic Minority Over-Sampling Technique) is an over-sampling method that generates new synthetic samples along the line between minority samples and their nearest neighbors. MSMOTE (Modified SMOTE) modifies SMOTE by classifying the samples of the minority class into three groups (security samples, border samples, and latent noise samples) and applying a different strategy to each group \citep{Hu_2009}. \cite{Yen_2009} presented a cluster-based under-sampling approach, in which all the samples are first divided into some clusters. The main observation is that a dataset contains different clusters, and each cluster appears to have unique characteristics. If a cluster has more minority-class samples than majority-class samples, it does not have the characteristics of the majority-class samples and behaves more like the minority-class samples. Therefore, this method selects a suitable number of majority-class samples from each cluster by considering the ratio of the number of majority-class samples to the number of minority-class samples in the cluster. \cite{Zhang_2014} proposed the RWO-Sampling method, which generates synthetic minority-class samples via random walks from the data. The synthetic samples expand the minority-class boundary while the method keeps the data distribution unchanged.
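The SMOTE interpolation mentioned above admits a compact sketch: each synthetic point lies on the segment between a minority sample and one of its $k$ nearest minority neighbors. This is a minimal variant; the neighborhood size $k$ and the random draws are the usual free choices.

```python
import numpy as np

def smote_like(P, n_new, k=5, rng=None):
    """SMOTE-style over-sampling sketch: place each synthetic point at a
    random position on the segment between a minority sample and one of
    its k nearest minority neighbors."""
    rng = np.random.default_rng(rng)
    P = np.asarray(P, float)
    out = np.empty((n_new, P.shape[1]))
    for t in range(n_new):
        j = rng.integers(len(P))
        d = np.linalg.norm(P - P[j], axis=1)
        neighbors = np.argsort(d)[1:k + 1]    # skip the point itself
        nb = P[rng.choice(neighbors)]
        lam = rng.random()                    # position along the segment
        out[t] = P[j] + lam * (nb - P[j])
    return out
```

Because every synthetic point is a convex combination of two real minority samples, the generated data never leaves the convex hull of the minority class.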
\cite{Han_2005} presented borderline-SMOTE1 and borderline-SMOTE2, both based on SMOTE. The borderline samples of the minority class are more easily misclassified than those far from the borderline; thus, these methods only over-sample the borderline samples of the minority class. \cite{Barua_2014} presented the Majority Weighted Minority Oversampling Technique (MWMOTE). The method uses the samples of the majority class near the decision boundary to select the samples of the minority class effectively. It then assigns weights to the selected samples according to their importance in learning: the samples closer to the decision boundary are given larger weights than the others. AdaBoost increases the weights of misclassified instances and decreases those of correctly classified instances by the same proportion, without considering the imbalance of the dataset \citep{Freund_1996}. Therefore, traditional boosting algorithms do not perform well on the minority class. An improved boosting algorithm was proposed by \cite{Joshi}, which updates the weights of positive predictions differently from the weights of negative predictions. When dealing with imbalanced datasets, the class boundary learned by Support Vector Machines (SVMs) is apt to skew toward the minority class, thus increasing the misclassification rate of the minority class. A class boundary alignment algorithm was proposed by \cite{Wu_2003}, which modifies the class boundary by changing the kernel function of SVMs. \cite{Bunkhumpornpat_2011} presented the Density-Based Minority Over-sampling Technique (DBSMOTE), which generates synthetic instances along the shortest path from each positive instance to a minority-class cluster using a graph. \cite{Perez_Ortiz_2015} proposed three methods based on analyzing the data from a graph-based perspective, in order to easily include the ordering information in the synthetic pattern generation process.
\cite{Luo_2014} presented a k-nearest neighbor (KNN) weighting strategy for handling the problem of class imbalance. They proposed CCW (class confidence weights), which uses the probability of attribute values given the class labels to weight prototypes in KNN. In this work, we only discuss over-sampling and under-sampling approaches, and we try to increase the efficiency of the RWO method using hybrid methods. This study presents a modification of RWO-Sampling \citep{Zhang_2014} based on two local graphs. At first, the samples of the minority class in high-density regions are selected by constructing a proximity graph (a proximity graph based on k-nearest neighbors), and then the RWO-Sampling method is applied. New samples from the minority class are generated without being affected by noise and outliers, since these are either outside the graph or on its boundary. In the second graph, the samples of the majority class in high-density areas are selected, and the rest of the samples are eliminated. The majority-class samples that are noise or outliers are thereby mostly eliminated. We implement four classifiers to compare the performance of the proposed method with RWO-Sampling, SMOTE \citep{Chawla_2011}, MWMOTE \cite{Barua_2014}, and RO-Sampling \citep{Batista_2004}. The performance of these classifiers was evaluated in terms of common metrics, such as F-measure, G-mean, accuracy, AUC, and TP rate. The experiments have been performed on nine benchmark UCI datasets with different degrees of skew. The paper is organized as follows. Section \ref{sec2} explains the RWO-Sampling method and the proposed method for handling the class imbalance problem. The results of the experiments on nine real datasets from UCI and the performance estimators of the proposed approach are discussed in Section \ref{sec3}. Finally, Section \ref{sec4} concludes this research work.
\section{Proposed method}\label{sec2} \subsection{Background} In the RWO-Sampling method, consider the training dataset $T$ and the minority class instance set $P= \{x_{1},..., x_{n}\}$. Each $x_{j}$, represented by $m$ attributes, is an $m$-dimensional vector representing a point in the $m$-dimensional space. The attribute set is named $A= \{a_{1},..., a_{m}\}$, and $a_{i}(j)$ is used to denote the value of attribute $a_{i}$ for instance $x_{j}$. The RWO-Sampling method treats continuous and discrete attributes differently: for discrete attributes, it uses a roulette wheel to generate synthetic values, and for continuous attributes, the procedure shown in Algorithm 1 is used \citep{Zhang_2014}. This method, according to the central limit theorem, generates synthetic minority class samples for problems with unknown data distributions. For a multi-attribute dataset, the mean and standard deviation of each attribute $a_{i}$, computed from the minority class data, are denoted by $\mu_{i}$ and $\sigma_{i}$. Each attribute can be considered a random variable, and each value of the attribute can be considered one of its samples. $\mu_{i}'$ and $\sigma_{i}'$ denote the true mean and standard deviation of the random variable $a_{i}$. In this case, if the number of minority class samples tends to infinity, then \begin{equation}\label{eq1} \frac{\mu_{i}-\mu_{i}'}{\sigma_{i}'/\sqrt{n}}\rightarrow N(0, 1). \end{equation} If Eq. \ref{eq1} is satisfied, we have \begin{equation}\label{eq2} \mu_{i}'=\mu_{i}-r\times \frac{\sigma_{i}'}{\sqrt{n}} \end{equation} where $r$ is a sampling value of the distribution $N(0, 1)$.
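A minimal sketch of this random-walk generation for continuous attributes, following the per-sample rule $a'_{i}(j)=a_{i}(j) - \frac{\sigma_{i}}{\sqrt{n}}\, r$ with $r \sim N(0,1)$ used in Algorithm 1:

```python
import numpy as np

def rwo_continuous(P, M, rng=None):
    """Random-walk over-sampling sketch for continuous attributes:
    a'_i(j) = a_i(j) - (sigma_i / sqrt(n)) * r with r ~ N(0, 1),
    producing M * n synthetic minority samples from n real ones.

    P : (n, m) array of minority-class samples, one attribute per column.
    """
    rng = np.random.default_rng(rng)
    P = np.asarray(P, float)
    n, m = P.shape
    sigma = P.std(axis=0)             # per-attribute standard deviation
    walked = np.tile(P, (M, 1))       # each real sample is walked M times
    r = rng.standard_normal(walked.shape)
    return walked - (sigma / np.sqrt(n)) * r
```

Because the step size shrinks like $\sigma_{i}/\sqrt{n}$, the synthetic samples stay close to the real ones, which is what keeps the sample mean and standard deviation of the generated data close to those of the original minority class.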
\begin{table}[thb] \centering \begin{tabular}{|l|} \hline input: T, M (M is the over-sampling rate)\\ output: $M\times n$ synthetic instances for the minority class\\ for $i=1$ to m\\ \quad if $a_{i}$ is a continuous attribute\\ \qquad calculating the mean $\mu_{i}=\frac{\sum_{j=1}^n a_{i}(j)}{n}$\\ \qquad and the variance $\sigma_{i}^2 = \frac{1}{n} \sum_{j=1}^n (a_{i}(j) - \mu_{i})^2$\\ \quad if $a_{i}$ is a discrete attribute\\ \qquad calculating the occurrence probability for each value of $a_{i}$\\ while $(M>0)$\\ \quad for each $x_{j} \in P$\\ \qquad for each $a_{i} \in A$\\ \quad \qquad if $a_{i}$ is a continuous attribute\\ \qquad \qquad generating a random value $a'_{i}(j)=a_{i}(j) - \frac{\sigma_{i}}{\sqrt{n}} N(0, 1)$\\ \qquad \quad if $a_{i}$ is a discrete attribute\\ \qquad \qquad generating a random value for attribute $a_{i}$ using the roulette wheel\\ \qquad forming a synthetic instance $(a'_{1}(j), a'_{2}(j),\dots, a'_{m}(j))$\\ \qquad M=M-1\\ return the $M\times n$ instances for the minority class\\\hline \end{tabular}\caption{Algorithm 1- RWO-Sampling (T, M: a positive integer) \citep{Zhang_2014}.}\label{tab1} \end{table} This method keeps the data distribution unchanged and balances the class sizes by creating synthetic samples through random walks from the real data. When certain conditions are satisfied, it can be proved that both the expected mean and the standard deviation of the generated samples equal those of the original minority class data \cite{Zhang_2014}. \subsection{UGRWO-Sampling approach} \cite{Zhang_2014} proposed the RWO-Sampling method, which expands the minority class boundary after synthetic samples are generated. However, it does not generate new samples around the mean point of the real minority class samples, which increases the likelihood of over-fitting.
In this paper, we address the disadvantages of the RWO method, namely that it increases over-fitting and does not generate synthetic samples around the mean point. In the proposed method, we eliminate the impact of samples that are likely noise or outliers, so that artificial samples are generated around the mean point. We present a modified version of the Random Walk Over-sampling method that attempts to solve these problems of RWO-Sampling. Using the KNN method, UGRWO-Sampling creates artificial samples around the mean points: by eliminating the impact of noise and outliers, the new samples are generated around the mean, which decreases the probability of over-fitting. The method is based on two graphs, introduced to preserve proximity information. For each class, an independent graph is constructed, called the majority local graph and the minority local graph, respectively. In the first step, we construct the majority local graph. An edge is added between a pair of vertexes if the corresponding samples are k-nearest neighbors (KNN) of each other (Fig. \ref{fig1}). The vertexes of the graph with degree K or more are retained, and the rest are deleted. In the second step, the second graph is built and the edges are created as before; the vertexes with degree K or more are retained, and RWO-Sampling is run on them. Specifically, the adjacency matrix of the majority class, denoted by $U$, is defined as follows: \begin{equation}\label{eq3} U_{ij} = \left\{ \begin{array}{rl} \tau_{ij}, & \qquad x_{i} \in N_{k}(j)\quad \text{and} \quad x_{j} \in N_{k}(i), \\ 0, & \qquad \text{otherwise},\\ \end{array} \right. \end{equation} where $N_{k}(j)$ is the set of the k-nearest neighbors of the point $x_{j}$ in the majority class, $N_{k}(i)$ is the set of the k-nearest neighbors of the point $x_{i}$, and $U$ is the adjacency matrix.
$\tau_{ij}$ is a scalar value (or any marker) indicating that two vertexes are mutual k-nearest neighbors; it can take any value except zero, and $i,j=1,\dots,n$. Then we define the under-sampling coefficient as \citep{roshanfekr_2019}, \begin{equation}\label{eq4} u_{i} = \left\{ \begin{array}{rl} 1, & \qquad \sum_{j} U_{ij}\geq k, \\ 0, & \qquad \text{otherwise}.\\ \end{array} \right. \end{equation} \begin{table}[thb] \caption{Algorithm 2- UGRWO-Sampling (T,M,k: a positive integer)}\label{tab2} \centering \begin{tabular}{|l|l|} \hline & Input: T, k (k is the parameter of KNN) \\ & Output: remaining samples of the majority class\\ & \qquad Constructing the majority local graph\\ &\qquad \qquad Defining the adjacent matrix, $U_{ij}$, for samples of each class\\ &\qquad \qquad \quad If ($x_{i} \in N_{k}(j)$\quad and \quad $x_{j} \in N_{k}(i)$)\\ \textbf{step 1} & \qquad \qquad \quad $U_{ij} = \tau_{ij}$\quad else\quad $U_{ij} = 0$ \\ &\qquad \qquad Defining the under-sampling coefficient, $u_{i}$, for samples of each class\\ &\qquad \qquad \quad If $\sum_{j} U_{ij}$ is greater than or equal to k\\ &\qquad \qquad \quad $u_{i}=1$ \quad else \quad $u_{i}=0$\\ &\qquad \qquad Deleting samples of the majority class with zero $u_{i}$\\ &\qquad Return samples of the majority class with nonzero $u_{i}$\\\hline & Input: T, M, k (M is the over-sampling rate and k is the parameter of KNN)\\ &Output: the synthetic samples for the minority class\\ &\qquad Constructing the minority local graph\\ &\qquad \qquad Defining the adjacent matrix, $U_{ij}$, for samples of each class\\ &\qquad \qquad \quad If ($x_{i} \in N_{k}(j)$\quad and \quad $x_{j} \in N_{k}(i)$)\\ &\qquad \qquad \quad $U_{ij} = \tau_{ij}$\quad else\quad $U_{ij} = 0$\\ &\qquad \qquad Defining the under-sampling coefficient, $u_{i}$, for samples of each class\\ &\qquad \qquad \quad If $\sum_{j} U_{ij}$ is greater than or equal to k\\ &\qquad \qquad \quad $u_{i}=1$ \quad else \quad $u_{i}=0$\\ \textbf{step 2}&\qquad Selecting samples of the minority class with nonzero $u_{i}$ and applying the following steps to them\\ &\qquad \qquad \quad for i=1 to m\\ &\qquad \qquad \qquad Calculating the mean $\mu_{i}=\frac{\sum_{j=1}^n a_{i}(j)}{n}$\\ &\qquad \qquad \qquad and the variance $\sigma_{i}^2 = \frac{1}{n} \sum_{j=1}^n (a_{i}(j) - \mu_{i})^2$\\ &\qquad \qquad \quad while (M>0)\\ &\qquad \qquad \qquad for each $x_{j} \in P$\\ &\qquad \qquad \qquad \quad for each $a_{i} \in A$\\ &\qquad \qquad \qquad \qquad Generating a random value $a'_{i}(j)=a_{i}(j) - \frac{\sigma_{i}}{\sqrt{n}} N(0, 1)$\\ &\qquad \qquad \qquad Forming a synthetic instance $(a'_{1}(j), a'_{2}(j),\dots, a'_{m}(j))$\\ &\qquad \qquad \qquad M= M-1\\ &Return the synthetic samples for the minority class\\\hline \end{tabular} \end{table} Samples $x_{i}$ with non-zero $u_{i}$ are retained, and the rest of the samples are deleted. In this way, samples that may be noise or outliers are eliminated. Generally, the samples in high-density regions have a greater chance of becoming non-zero-degree vertexes, while the samples in low-density regions (e.g., outliers) are likely to become isolated, zero-degree vertexes. The procedure is shown in Algorithm 2. \begin{figure*}[ht] \centerline{\includegraphics[width=1.1\textwidth,clip=]{fig1.jpg}} \caption{Representation of reducing the number of majority (a) and minority (b) class samples.}\label{fig1} \end{figure*} The minority local graph is constructed as follows: first, the minority class samples with non-zero $u_{i}$ are selected, and then the RWO-Sampling method is run on the selected samples (see Figure \ref{fig1}). In this way, the two local graphs eliminate the impact of noise and outliers in the RWO-Sampling method, and no new sample is generated from samples that may be noise or outliers. In Figure \ref{fig2}, the dashed closed line around the original data represents the boundary of the minority class data.
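The construction of the mutual-KNN adjacency matrix in Eq. \ref{eq3} and the under-sampling coefficient in Eq. \ref{eq4} can be sketched in NumPy as follows. This is an illustrative sketch under our own naming (\texttt{mutual\_knn\_mask}), assuming $\tau_{ij}=1$ and Euclidean distances:

```python
import numpy as np

def mutual_knn_mask(X, k):
    """Keep samples of one class whose mutual-kNN degree is at least k.

    X : (n, m) array of samples of a single class.
    Returns a boolean mask: True for retained samples (u_i = 1),
    False for samples deleted as probable noise or outliers (u_i = 0).
    With tau_ij = 1, U becomes a symmetric 0/1 adjacency matrix.
    """
    n = X.shape[0]
    # pairwise Euclidean distances
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)            # a point is not its own neighbour
    # N_k(i): indices of the k nearest neighbours of x_i
    knn = np.argsort(D, axis=1)[:, :k]
    member = np.zeros((n, n), dtype=bool)
    member[np.repeat(np.arange(n), k), knn.ravel()] = True  # x_j in N_k(i)
    U = member & member.T                  # edge iff mutually k-nearest
    return U.sum(axis=1) >= k              # u_i = 1 when degree >= k
```

Applied to the majority class, the mask performs the under-sampling of step 1; applied to the minority class, it selects the samples on which RWO-Sampling is then run (step 2).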
RWO-Sampling uses a random walk to generate synthetic samples. Thus it has the opportunity to expand the positive class border and increase the positive class classification accuracy. In Figure \ref{fig2}, the black points are minority class samples, and the red and blue points are majority class samples. In our method, sample 3 of the minority class is treated as noise or an outlier, so the RWO algorithm is not run on it and sample 3.1 is not generated. In the majority class, the red points are considered noise and are eventually eliminated. In Subsection \ref{ssec3_2}, the proposed method is compared with the GRWO-Sampling method. The advantage of our method over RWO-Sampling is that outlier samples are not multiplied. \section{Experimental results}\label{sec3} The proposed approach is applied to nine benchmark datasets from the UCI Repository\footnote{http://www.ics.uci.edu/~mlearn/MLRepository.html} in Section \ref{ssec3_1}. Section \ref{ssec3_2} presents a summary of the performance obtained for all the compared approaches. \subsection{Description of datasets and evaluation metrics}\label{ssec3_1} These benchmark datasets cover a wide range of fields, numbers of instances, and positive class labels. Table \ref{tab3} gives the characteristics of these datasets; the minority class for each dataset is shown in Table \ref{tab3}, and the remaining classes form the majority class. In SMOTE, MWMOTE, GRWO-Sampling, and UGRWO-Sampling, the parameter K of the KNN method is selected from the set $\{3, 5, 10, 15\}$. The over-sampling rate is set to 100, 200, 300, 400, and 500\% for data with continuous attributes. Missing attribute values are replaced by the mean of the corresponding attribute. A 10-fold cross-validation scheme is used to evaluate the performance of each over-sampling method.
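The evaluation protocol above can be sketched generically as below. This is a NumPy sketch under our own naming (\texttt{stratified\_folds}, \texttt{cross\_validate}, and the placeholder callables \texttt{oversample} and \texttt{fit\_predict}), not the MATLAB toolboxes used in the paper; it assumes the common protocol in which over-sampling is applied inside each training fold only, so every test fold keeps the original imbalanced distribution:

```python
import numpy as np

def stratified_folds(y, n_folds=10, rng=None):
    """Assign each sample to one of n_folds, preserving class ratios."""
    rng = np.random.default_rng(rng)
    fold = np.empty(len(y), dtype=int)
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        rng.shuffle(idx)
        fold[idx] = np.arange(len(idx)) % n_folds
    return fold

def cross_validate(X, y, oversample, fit_predict, n_folds=10, rng=None):
    """Mean test accuracy of an over-sampling method under k-fold CV.

    oversample(X, y)            -> balanced (X, y) for the training folds
    fit_predict(Xtr, ytr, Xte)  -> predicted labels for the test fold
    """
    fold = stratified_folds(y, n_folds, rng)
    accs = []
    for f in range(n_folds):
        tr, te = fold != f, fold == f
        Xb, yb = oversample(X[tr], y[tr])   # balance training data only
        y_hat = fit_predict(Xb, yb, X[te])
        accs.append(np.mean(y_hat == y[te]))
    return float(np.mean(accs))
```

Any of the compared samplers (RWO, SMOTE, UGRWO, ...) and any of the four classifiers can be plugged in through the two callables.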
For the above procedure, the selected baseline algorithms are implemented under different over-sampling rates using MATLAB toolboxes. The performance of a classifier is evaluated based on metrics such as accuracy \citep{Provost_1998}, Geometric Mean (G-mean) \citep{Kubat_1997}, F-measure \cite{Wu_2005}, AUC, and TP rate. In this paper, TP and FP are the numbers of true positives and false positives, respectively, and TN and FN are the numbers of true negatives and false negatives, respectively. For the minority class data, precision$=TP/(TP+FP)$ and TPrate=$TP/(TP+FN)$. For the majority class, precision=$TN/(TN+FN)$ and TNrate=$TN/(TN+FP)$. The F-measure is defined as $F\mbox{-}measure=\frac{(1+\beta^2)\times precision\times TPrate}{\beta^2\times precision+TPrate}$, where $\beta$ is set to 1 in this paper. In Tables 5 to 9, F-maj and F-min denote the F-measure of the majority and minority class, respectively. \begin{table}[] \caption{Characteristics of the benchmark datasets.}\label{tab3} \begin{tabular}{llllll}\hline \textbf{Dataset}& \textbf{\#instances}& \textbf{\#positive instances}& \textbf{\#attributes}& \textbf{Positive class label}& \textbf{IR} \\\hline \textbf{Breast\_w}& 699& 241& 9& Malignant& 1.90\\ \textbf{Diabetes}& 768& 268& 8& Tested\_positive& 1.86\\ \textbf{Glass}& 214& 17& 9& 3& 11.58\\ \textbf{Ionosphere}& 351& 126& 34& B& 1.78\\ \textbf{Musk}& 476& 208& 168& Non-Musk& 1.77\\ \textbf{Satimage}& 6430& 625& 36& 2& 9.28\\ \textbf{Segmentation}& 1500& 205& 19& brickface& 6.31\\ \textbf{Sonar}& 208& 97& 60& Rock& 1.14\\ \textbf{Vehicle}& 846& 199& 18& Van& 3.25\\\hline \end{tabular} \end{table} \begin{figure*}[ht] \centerline{\includegraphics[width=1.1\textwidth,clip=]{fig2.jpg}} \caption{Representation of different over-sampling approaches. Black dots represent the minority samples, blue and red points denote majority samples, and the remaining points represent the synthetic data generated by over-sampling. (1) shows the original data.
(2) RWO-Sampling, (3) UGRWO-Sampling.}\label{fig2} \end{figure*} \begin{figure*}[ht] \centerline{\includegraphics[width=.9\textwidth,clip=]{fig3.jpg}} \caption{AUCRatio on the nine continuous attribute datasets under five data over-sampling rates and on the original datasets.}\label{fig3} \end{figure*} \subsection{Test using UCI database}\label{ssec3_2} In this section, we study the following classification approaches: Naïve Bayes, 5-Nearest Neighbor, Decision Tree, and AdaBoostM1. For each dataset, these four classifiers are trained and tested, and six different sampling methods are performed. GRWO-Sampling is the same as the RWO-Sampling method, except that RWO-Sampling is applied only to the minority class samples selected by the local graph. Tables 5-9 in the Appendix present the results of the different sampling methods with different over-sampling rates on the various datasets. Table \ref{tab5} shows that when 5-NN is implemented on all datasets, UGRWO-Sampling has the best performance in most evaluation metrics. However, with this method the Naïve Bayes and Decision Tree classifiers do not perform well on the Vehicle and Glass datasets. With this over-sampling rate, most of the classifiers with UGRWO-Sampling perform well on the datasets with high Imbalance Ratios, such as Glass and Satimage, but better on the datasets with low Imbalance Ratios, such as Breast\_w and Diabetes. This method also has an acceptable performance on the large dataset, e.g., Satimage. When implementing the AdaBoostM1 classifier on Musk and Segmentation, RWO-Sampling and RO-Sampling have the best performance, and UGRWO-Sampling performs best on the other datasets. From the results shown in Table \ref{tab6}, it can be concluded that UGRWO-Sampling performs best on most datasets. In all classifiers, the proposed sampling method does not perform best on the Glass dataset.
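The per-class metrics defined in Section \ref{ssec3_1} can be computed as in the following sketch (our own naming; label 1 is taken as the minority/positive class):

```python
import numpy as np

def imbalance_metrics(y_true, y_pred, beta=1.0):
    """Minority-class metrics from binary labels (1 = minority/positive)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    precision = tp / (tp + fp)
    tp_rate = tp / (tp + fn)               # recall of the minority class
    tn_rate = tn / (tn + fp)               # recall of the majority class
    # F_beta = (1 + beta^2) * P * R / (beta^2 * P + R), with beta = 1 by default
    f_measure = ((1 + beta**2) * precision * tp_rate
                 / (beta**2 * precision + tp_rate))
    g_mean = np.sqrt(tp_rate * tn_rate)
    return {"precision": precision, "TPrate": tp_rate,
            "F-measure": f_measure, "G-mean": g_mean}
```

Swapping the roles of the two labels gives the corresponding majority-class values (F-maj).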
In general, similar to the results of Table \ref{tab5}, UGRWO-Sampling does not perform well on small datasets with a high Imbalance Ratio, but it is very effective on large datasets with a high Imbalance Ratio. The AdaBoostM1 classifier with UGRWO-Sampling has an acceptable performance on most datasets except the Segmentation and Glass datasets. Table \ref{tab7} indicates that the RO-Sampling and SMOTE sampling methods perform best in most evaluation metrics when any of the classifiers is implemented on the Glass dataset. The results on the Vehicle dataset show that only if 5-NN and AdaBoostM1 are used does UGRWO-Sampling have the best performance in most evaluation metrics. The results on the Sonar dataset show that the 5-NN classifier performs best with RWO-Sampling, but the remaining classifiers are more effective with UGRWO-Sampling. Similar to the results of Tables 5 and 6, UGRWO-Sampling performs best on the Breast\_w, Diabetes, and Satimage datasets. The AdaBoostM1 classifier with RO-Sampling and RWO-Sampling has better performance on the Musk, Segmentation, and Glass datasets. In Table \ref{tab8}, the Naive Bayes classifier with RWO-Sampling has an acceptable performance on the Diabetes, Segmentation, and Vehicle datasets; the Decision Tree classifier performs well with RO-Sampling, and MWMOTE has the best efficiency on the Diabetes, Ionosphere, and Segmentation datasets. All the classifiers except 5-NN with UGRWO-Sampling have better performance on the Sonar dataset. UGRWO-Sampling does not have an acceptable performance on the Glass dataset, though; RO-Sampling performs well with the Decision Tree classifier on most of the datasets. When implementing the AdaBoostM1 classifier on Musk, Segmentation, and Glass, RO-Sampling and SMOTE perform best, and UGRWO-Sampling has the best performance on the other datasets. From the results presented in Table \ref{tab9}, UGRWO-Sampling performs best on the Breast\_w and Satimage datasets.
RO-Sampling is useful when any classifier is implemented on the Glass dataset. The 5-NN classifier performs best on the Sonar dataset with RWO-Sampling, while the other classifiers perform best with UGRWO-Sampling. When implementing the Decision Tree classifier on Diabetes, Ionosphere, and Vehicle, RO-Sampling outperforms the other approaches, and UGRWO-Sampling performs best with the other classifiers. When AdaBoostM1 is implemented, RO-Sampling and SMOTE have the best performance on the Musk, Segmentation, and Glass datasets. Generally, as shown in Tables 5-9, the results on the large-scale and highly imbalanced dataset, Satimage, show that UGRWO-Sampling performs best in most metrics most of the time. Sonar and Musk are the highest-dimensional datasets; on them, RWO-Sampling and MWMOTE always have the best efficiency at over-sampling rates of 300, 400, and 500\%, while UGRWO-Sampling performs well in all metrics at rates of 100 and 200\%. On Glass, the lowest-dimensional dataset with a high Imbalance Ratio, RO-Sampling and SMOTE always perform well under all over-sampling rates. On the Breast\_w, Diabetes, and Ionosphere datasets, the results show that our approach performs well in most metrics. When the 5-NN and NB classifiers are implemented on the Segmentation dataset, UGRWO-Sampling has the best efficiency in all metrics except at the 400\% sampling rate for NB. Furthermore, when the Decision Tree and AdaBoostM1 classifiers are implemented, RO-Sampling and SMOTE have the best performance. Also, when Naive Bayes is implemented on the Vehicle dataset, RWO-Sampling always has the best efficiency. However, when the over-sampling rate is 300\%, AdaBoostM1 with UGRWO-Sampling does not have acceptable performance in all metrics. Generally, it seems that increasing the over-sampling rate decreases the efficiency of the proposed method on the different datasets. This method performs well on large-scale datasets.
It can be concluded that the proposed method works well in almost all cases on datasets with a low Imbalance Ratio. It should be noted that the method also works well on datasets with a high Imbalance Ratio, such as Satimage and Segmentation, but on the Glass dataset, the SMOTE and RO-Sampling methods have better performance for all classifiers. In order to compare the performance conveniently, we counted the number of wins in all cases for each over-sampling approach and provide the results in Table \ref{tab4}. The results show that UGRWO-Sampling outperforms the other five approaches in all six metrics when conducting Naive Bayes, 5-NN, and Decision Tree, and it loses to GRWO-Sampling only once, in F-maj. The results also reveal that GRWO-Sampling and MWMOTE are the most time-consuming methods. Based on the results presented in Table \ref{tab4}, we can conclude that UGRWO-Sampling performs well on imbalanced datasets no matter which classification algorithm is conducted. \begin{table}[h] \caption{Win summary over six evaluation metrics on nine continuous attribute datasets when implementing four baseline classifiers.}\label{tab4} \begin{tabular}{llllllll}\hline \textbf{Alg}& \textbf{OS}& \textbf{F-min}& \textbf{F-maj}& \textbf{acc}& \textbf{G-mean}&\textbf{TPrate}&\textbf{AUC} \\\hline \textbf{NB}& UGRWO& \textbf{28}& \textbf{25}& \textbf{30}& \textbf{27}& \textbf{25}& \textbf{27}\\ & GRWO& 1& 10& 0& 1& 6& 1\\ & RWO& 7& 7& 7& 9& 3& 6\\ & MWMOTE& 3& 1& 1& 1& 1& 2\\ & SMOTE& 1& 3& 2& 0& 5& 2\\ & ROS& 4& 1& 1& 4& 4& 1\\ \textbf{5-NN}& UGRWO& \textbf{30}& \textbf{13}& \textbf{28}& \textbf{24}& \textbf{32}& \textbf{29}\\ & GRWO& 0& 10& 0& 2& 0& 0\\ & RWO& 6& 4& 7& 8& 6& 4\\ & MWMOTE& 0& 1& 0& 0& 0& 2\\ & SMOTE& 2& 12& 4& 7& 9& 2\\ & ROS& 8& 4& 7& 4& 14& 1\\ \textbf{DT}& UGRWO& \textbf{24}& \textbf{12}& \textbf{22}& \textbf{14}& \textbf{19}& \textbf{23}\\ & GRWO& 0& 2& 2& 0& 0& 5\\ & RWO& 5& 4& 3& 5& 4& 4\\ & MWMOTE& 5& 8& 5& 5& 6& 7\\ & SMOTE& 0& 5& 4& 6& 5& 4\\ & ROS& 9& 9& 10& 8& 11&
1\\ \textbf{Ada}& UGRWO& \textbf{32}& 11& \textbf{30} & \textbf{18}& \textbf{27}& \textbf{24}\\ & GRWO& 0& \textbf{19}& 1& 7& 0& 2\\ & RWO& 2& 4& 2& 6& 1& 6\\ & MWMOTE& 1& 3& 2& 3& 1& 3\\ & SMOTE& 3& 2& 3& 3& 4& 6\\ & ROS& 7& 6& 6& 5& 7& 2\\\hline \end{tabular} \end{table} AUC (Area Under the Curve) is also an important metric for evaluating the performance of classifiers. According to the above results, the proposed method has an acceptable performance on the majority of the datasets, but we cannot clearly claim which classifier has the best performance. The summary in Table \ref{tab4} shows that UGRWO-Sampling statistically outperforms the others in terms of AUC. For a better comparison, we use another measure called the AUCRatio. First, we compute the relative performance of a given method $M$ on a dataset $i$ as the ratio between its AUC and the highest AUC among all the compared methods, \begin{equation*} AUCRatio_{i}(M)=\frac{AUC_{i}(M)}{\max_{j} AUC_{i}(j)}, \end{equation*} where $AUC_{i}(j)$ is the AUC of method $j$ on dataset $i$. The larger the value of $AUCRatio_{i}(M)$, the better the performance of $M$ on dataset $i$ \cite{Kubat_1997}. Figure \ref{fig3} depicts the distribution of the relative performance of the six methods across all datasets. According to Figure \ref{fig3}, the proposed method has acceptable performance in comparison to the other methods, and the AUCRatio shows that UGRWO-Sampling outperforms RWO-Sampling. The proposed method handles continuous attribute values when creating synthetic samples and enhances the efficiency of the RWO method. As shown in Figure \ref{fig3}, it can be concluded that for all classifiers the proposed method is more effective at lower over-sampling rates. In all cases, when the Naive Bayes and AdaBoostM1 classifiers are used, the proposed method boosts their performance. When the 5-NN classifier is used, UGRWO-Sampling and RWO-Sampling outperform the other approaches, respectively.
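The AUCRatio can be computed as in the following sketch (our own naming; rows index datasets and columns index the compared methods):

```python
import numpy as np

def auc_ratio(auc_table):
    """Relative AUC per dataset: AUCRatio_i(M) = AUC_i(M) / max_j AUC_i(j).

    auc_table : (n_datasets, n_methods) array of AUC scores.
    Returns an array of the same shape; the best method on each
    dataset gets ratio 1, all others get a value in (0, 1].
    """
    auc_table = np.asarray(auc_table, dtype=float)
    return auc_table / auc_table.max(axis=1, keepdims=True)
```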
The Decision Tree with RO-Sampling also performs very well, but UGRWO-Sampling remains better than RO-Sampling. \section{Conclusion and future works}\label{sec4} We have evaluated four classifiers, namely the Naive Bayes classifier, K-Nearest Neighbor, Decision Tree, and AdaBoostM1, on imbalanced datasets. Over-sampling often affects the performance of k-nearest neighbors (KNN), since after over-sampling the number of minority class instances in a fixed volume may increase. A Naive Bayes classifier computes the posterior probability for a test sample, and over-sampling increases the posterior probability of the minority class data; thus over-sampling influences its performance. Over-sampling often affects the performance of the Decision Tree through the modification of the data distribution, since it influences the information gain measure used for choosing the best attribute, and consequently modifies the constructed Decision Tree. Modifying the structure of the Decision Tree influences pruning and over-fitting avoidance, and consequently the performance of the Decision Tree. Also, AdaBoostM1 changes the underlying data distribution and classifies the re-weighted data space iteratively. RO-Sampling is simple, but it increases the possibility of over-fitting. SMOTE uses linear interpolation for sample generation, and new samples fall on the line segment connecting two neighbors; it does not expand the space occupied by the minority class data, and it changes the original data distribution of the minority class. When generating synthetic samples, RWO-Sampling tries to keep the minority class data distribution unchanged while aiming to expand the space occupied by the minority class data. Thus it has high generalization capability and performs well on imbalanced data classification.
However, RWO-Sampling does not generate synthetic samples around the mean point of the real minority class data through its random walk model, which tends to increase the likelihood of over-fitting; the proposed method attempts to solve these problems. By means of the KNN method, UGRWO-Sampling tends to create artificial samples around the mean: by eliminating the impact of noise or outlier samples, new samples are generated around the mean. Also, in comparison with RWO-Sampling, this method is less likely to increase the probability of over-fitting, and it has high generalizability. Because the proposed method relies on the KNN method, and KNN scales poorly to large datasets, the proposed method can become computationally expensive on a large scale. Obviously, RO-Sampling is the least time-consuming approach for generating new samples. SMOTE needs to calculate the k nearest neighbors for a chosen sample before generating new samples, and RWO-Sampling needs to calculate the mean and standard deviation of all attributes. Computing k nearest neighbors is time-consuming compared to computing the mean and standard deviation, so RWO-Sampling is faster than SMOTE. In the new method, the k nearest neighbors and the mean and standard deviation of all attributes all need to be calculated; therefore, in comparison with the SMOTE and RWO-Sampling methods, UGRWO-Sampling is more time-consuming. A new hybrid method has been proposed to reduce the disadvantages of over-sampling and under-sampling methods; it performs well in most cases on continuous datasets, though not on discrete datasets, and this issue would benefit from further research.
\newpage \section{Appendix} \begin{longtable}{llllllll} \caption{Averaged results and standard deviations on nine continuous attribute datasets (over-sampling rate equals $100\%$).} \label{tab5} \endfirsthead \endhead \hline ds & Alg & OS & f-min & f-maj & O-acc & G-mean & TP rate \\\hline \textbf{Breast\_w}& NB& UGRWO& \textbf{97.01$\pm$0.01}& \textbf{96.98$\pm$0.03}& \textbf{96.53$\pm$0.05}& \textbf{97.50$\pm$0.56}& \textbf{97.54$\pm$0.45} \\ & &GRWO& 96.62$\pm$0.01& 96.78$\pm$0.01& 96.23$\pm$2.14& 97.30$\pm$3.16& 96.25$\pm$2.37\\ & &RWO& 96.38$\pm$0.01& 96.15$\pm$0.01& 96.26$\pm$1.52& 96.23$\pm$1.52& 97.28$\pm$2.70\\ & &MWMOTE& 95.24$\pm$0.02& 96.52$\pm$0.01& 96.42$\pm$0.05& 96.05$\pm$0.32& 97.26$\pm$0.21\\ & &SMOTE& 94.18$\pm$0.00& 96.77$\pm$0.00& 95.85$\pm$0.81& 96.10$\pm$0.70& 97.10$\pm$3.42\\ & &ROS& 96.52$\pm$0.02& 96.24$\pm$0.02& 96.39$\pm$2.17& 96.35$\pm$2.20& 97.52$\pm$1.87\\ &5-NN& UGRWO& \textbf{99.81$\pm$0.00}& \textbf{99.41$\pm$0.00}& \textbf{99.71$\pm$0.09}& \textbf{99.81$\pm$0.59}& \textbf{99.67$\pm$1.02}\\ && GRWO& 96.95$\pm$0.02& 97.91$\pm$0.01& 97.52$\pm$1.66& 97.55$\pm$1.67& 97.74$\pm$2.17\\ && RWO& 97.31$\pm$0.01& 97.14$\pm$0.01& 97.23$\pm$1.81& 97.20$\pm$1.83& 97.91$\pm$1.96\\ && MWMOTE& 96.25$\pm$0.02& 98.25$\pm$0.01& 98.23$\pm$0.54& 97.25$\pm$0.02& 97.85$\pm$0.16\\ && SMOTE& 95.44$\pm$0.02& 97.60$\pm$0.01& 96.86$\pm$1.99&96.66$\pm$2.57& 96.25$\pm$4.98\\ && ROS &97.84$\pm$0.01& 97.67$\pm$0.01& 97.76$\pm$1.36& 97.72$\pm$1.40& 98.75$\pm$1.42\\ &DT& UGRWO& \textbf{99.61$\pm$0.00}& \textbf{98.56$\pm$0.03}& \textbf{99.39$\pm$1.27}& \textbf{99.06$\pm$2.35}& \textbf{99.61$\pm$1.21}\\ &&GRWO &93.78$\pm$0.03 &95.85$\pm$0.07 &95.03$\pm$2.88 &94.83$\pm$3.32 &94.15$\pm$5.86\\ &&RWO &95.51$\pm$0.01 &95.29$\pm$0.01& 95.40$\pm$1.24& 95.36$\pm$1.27& 95.81$\pm$2.97\\ &&MWMOTE &96.23$\pm$0.01 &96.45$\pm$0.01 &96.87$\pm$0.23 &97.24$\pm$5.25 &95.25$\pm$2.32\\ &&SMOTE &91.92$\pm$0.04& 95.74$\pm$0.02& 94.43$\pm$2.88& 93.83$\pm$3.22& 92.13$\pm$5.31\\ &&ROS 
&96.52$\pm$0.02 &96.32$\pm$0.02 &96.49$\pm$2.35 &96.41$\pm$2.39 &98.13$\pm$2.28\\ &Ada&UGRWO& \textbf{99.43$\pm$0.00}& \textbf{98.25$\pm$0.02}& \textbf{99.14$\pm$1.38}& \textbf{99.43$\pm$0.90}& \textbf{99.28$\pm$1.50}\\ &Boost&GRWO& 93.62$\pm$0.03& 96.22$\pm$0.02& 95.19$\pm$2.84& 94.42$\pm$3.46& 92.33$\pm$6.28\\ &M1&RWO& 96.21$\pm$0.02& 96.11$\pm$0.02& 96.16$\pm$2.41& 96.17$\pm$2.41& 95.40$\pm$2.36\\ &&MWMOTE& 97.85$\pm$2.23& 97.45$\pm$0.01& 98.56$\pm$3.24& 98.52$\pm$6.23& 98.54$\pm$0.24\\ &&SMOTE& 91.70$\pm$0.03& 95.79$\pm$0.02& 94.42$\pm$2.04& 93.07$\pm$2.86& 89.21$\pm$4.46\\ &&ROS& 94.92$\pm$0.01& 94.86$\pm$0.01 &94.89$\pm$1.19& 94.82$\pm$1.48& 93.36$\pm$3.06\\ Diabetes& NB& UGRWO& 65.54$\pm$0.05& \textbf{87.69$\pm$0.02}& \textbf{81.90$\pm$3.92}& \textbf{85.35$\pm$4.38}& \textbf{91.60$\pm$7.24}\\ && GRWO& 52.20$\pm$0.08& 80.25$\pm$0.01& 71.60$\pm$2.85& 59.86$\pm$6.64& 95.60$\pm$1.26\\ &&RWO &69.20$\pm$0.07& 78.56$\pm$0.02& 74.80$\pm$4.14& 72.58$\pm$5.59& 95.20$\pm$3.01\\ && MWMOTE &\textbf{75.54$\pm$0.01}& 78.25$\pm$0.01& 75.26$\pm$0.02& 83.25$\pm$0.01 &92.32$\pm$2.03\\ && SMOTE &63.34$\pm$0.05& 81.55$\pm$0.03& 75.51$\pm$3.75& 71.03$\pm$4.31& 60.78$\pm$7.05\\ && ROS &74.94$\pm$0.03& 75.22$\pm$0.03& 75.09$\pm$3.25& 75.11$\pm$3.26& 72.00$\pm$3.57\\ &5-NN& UGRWO& \textbf{91.48$\pm$0.04}& 70.85$\pm$0.13& \textbf{86.86$\pm$6.58}& \textbf{85.14$\pm$8.83}& \textbf{88.29$\pm$5.25}\\ && GRWO& 62.48$\pm$0.08& 77.81$\pm$0.05& 72.03$\pm$4.96& 68.25$\pm$6.52& 56.46$\pm$9.54\\ && RWO& 64.28$\pm$0.05& 72.01$\pm$0.04& 69.88$\pm$4.54& 69.38$\pm$4.71& 60.08$\pm$6.24\\ && MWMOTE &65.23$\pm$0.02 &76.27$\pm$0.02& 75.32$\pm$0.23& 75.84$\pm$2.54& 78.54$\pm$6.25\\ && SMOTE& 55.76$\pm$0.06& \textbf{78.77$\pm$0.04}& 71.48$\pm$4.44& 64.77$\pm$5.12& 51.88$\pm$9.03\\ &&ROS& 75.04$\pm$0.04& 69.03$\pm$0.05& 72.39$\pm$5.06& 71.55$\pm$5.23& 80.24$\pm$5.74\\ &DT& UGRWO& \textbf{91.01$\pm$0.04}& 67.18$\pm$0.13& \textbf{85.53$\pm$7.48}& 76.62$\pm$9.36& \textbf{90.96$\pm$7.26}\\ && GRWO& 
68.97$\pm$0.03& 78.42$\pm$0.03& 74.67$\pm$3.41& 73.61$\pm$3.45& 70.24$\pm$6.47\\ &&RWO& 80.44$\pm$0.05& \textbf{79.07$\pm$0.04}& 79.81$\pm$5.23& 79.67$\pm$5.15& 80.95$\pm$8.02\\ && MWMOTE& 89.23$\pm$0.02& 79.02$\pm$0.03& 82.32$\pm$0.54& 78.59$\pm$2.03& 85.67$\pm$5.23\\ && SMOTE& 58.39$\pm$0.05& 77.26$\pm$0.03& 70.70$\pm$3.86& 67.18$\pm$4.65& 59.34$\pm$9.43\\ && ROS& 81.93$\pm$0.02& 78.31$\pm$0.02& 80.31$\pm$2.33&\textbf{79.77$\pm$2.44}& 86.39$\pm$4.27\\ &Ada&UGRWO& \textbf{93.47$\pm$0.02}& 68.59$\pm$0.06& \textbf{89.21$\pm$0.88}& 78.11$\pm$15.62& \textbf{94.19$\pm$3.66}\\ &Boost& GRWO& 67.02$\pm$0.06& 80.19$\pm$0.03& 74.81$\pm$4.36& 69.05$\pm$4.41& 64.42$\pm$8.00\\ &M1& RWO& 78.66$\pm$0.05& 78.86$\pm$0.05& 78.85$\pm$5.29& \textbf{78.64$\pm$5.33}& 76.09$\pm$8.96\\ && MWMOTE& 80.26$\pm$0.01& \textbf{83.23$\pm$0.01}& 78.98$\pm$0.98& 79.52$\pm$2.35& 93.25$\pm$0.12\\ && SMOTE& 58.46$\pm$0.09& 81.21$\pm$0.03& 74.34$\pm$4.30& 66.90$\pm$8.35& 53.98$\pm$16.28\\ && ROS& 77.01$\pm$0.03& 74.42$\pm$0.05& 75.96$\pm$3.51& 75.39$\pm$4.27& 77.97$\pm$7.44\\ Ionosphere& NB& UGRWO& \textbf{96.89$\pm$0.03}& \textbf{87.11$\pm$0.13}& \textbf{95.06$\pm$5.73}& \textbf{91.55$\pm$9.40}& \textbf{96.20$\pm$7.44}\\ && GRWO& 77.96$\pm$0.11& 84.42$\pm$9.07& 81.78$\pm$7.50& 82.56$\pm$6.93& 87.74$\pm$9.58\\ && RWO& 84.27$\pm$0.02& 80.87$\pm$0.03& 82.77$\pm$3.07& 82.21$\pm$3.25& 87.64$\pm$5.15\\ && MWMOTE& 92.02$\pm$0.01& 86.23$\pm$0.02& 92.56$\pm$0.03& 90.23$\pm$2.03& 92.20$\pm$2.23\\ && SMOTE& 78.36$\pm$0.08& 85.39$\pm$0.06& 82.58$\pm$6.91& 83.49$\pm$7.01& 87.30$\pm$8.58\\ && ROS& 83.75$\pm$0.02& 80.13$\pm$0.02& 82.18$\pm$2.44& 81.51$\pm$2.62& 87.32$\pm$6.39\\ &5-NN& UGRWO& \textbf{89.77$\pm$0.06}& 78.06$\pm$0.05& 85.75$\pm$7.96& \textbf{89.96$\pm$5.95}& \textbf{82.63$\pm$10.78}\\ && GRWO& 78.08$\pm$0.06& \textbf{89.65$\pm$0.02} &85.96$\pm$3.43& 80.60$\pm$5.08& 66.64$\pm$7.95\\ && RWO& 87.10$\pm$0.05& 88.38$\pm$0.03& \textbf{87.81$\pm$4.52}& 87.71$\pm$4.77& 79.26$\pm$7.77\\ && MWMOTE& 
82.23$\pm$0.02& 88.52$\pm$0.02& 86.32$\pm$2.03& 87.56$\pm$3.02& 80.23$\pm$6.23\\ && SMOTE& 72.60$\pm$0.09& 88.75$\pm$0.03& 84.07$\pm$4.95& 76.18$\pm$7.18& 59.74$\pm$10.76\\ && ROS& 85.91$\pm$0.06& 87.56$\pm$0.05& 86.80$\pm$5.74& 86.71$\pm$5.86& 77.44$\pm$8.92\\ &DT& UGRWO& \textbf{97.03$\pm$0.04}& 84.57$\pm$0.21& \textbf{95.07$\pm$7.67}& 89.81$\pm$16.27& \textbf{97.03$\pm$5.27}\\ &&GRWO& 86.51$\pm$0.07& 91.90$\pm$0.04& 89.78$\pm$5.42& 89.16$\pm$6.42& 87.58$\pm$11.03\\ && RWO& 90.44$\pm$0.04& 89.27$\pm$0.05& 89.90$\pm$5.14& 89.80$\pm$5.12& 90.86$\pm$6.19\\ && MWMOTE& 95.65$\pm$0.01& \textbf{93.85$\pm$0.03}& 92.03$\pm$2.15& 90.58$\pm$1.58& 96.35$\pm$3.54\\ && SMOTE& 86.22$\pm$0.05& 91.40$\pm$0.04& 89.45$\pm$4.90& 89.56$\pm$4.69& 90.51$\pm$6.37\\ && ROS& 94.07$\pm$0.03& 93.26$\pm$0.03& \textbf{93.70$\pm$3.48}& 93.61$\pm$3.46& 94.84$\pm$4.22\\ &Ada& UGRWO& \textbf{95.46$\pm$0.03}& 84.75$\pm$0.06& \textbf{92.90$\pm$6.06}& \textbf{90.74$\pm$5.53}& \textbf{96.31$\pm$6.29}\\ &Boost& GRWO& 86.05$\pm$0.05& \textbf{92.81$\pm$0.02} &90.52$\pm$3.93& 87.66$\pm$4.86& 79.12$\pm$7.87\\ &M1& RWO& 86.25$\pm$0.04& 87.19$\pm$0.03& 86.77$\pm$4.29& 86.74$\pm$4.42& 79.67$\pm$7.42\\ && MWMOTE& 93.23$\pm$0.02& 90.23$\pm$0.01& 91.45$\pm$0.25& 90.52$\pm$3.02& 95.23$\pm$1.23\\ && SMOTE& 83.41$\pm$0.11& 92.50$\pm$0.04& 89.71$\pm$6.65& 85.61$\pm$9.48& 75.89$\pm$14.94\\ && ROS& 90.36$\pm$0.06& 89.85$\pm$0.06& 90.12$\pm$6.79& 90.16$\pm$6.73& 88.52$\pm$8.44\\ Musk& NB &UGRWO& \textbf{91.25$\pm$0.01}& \textbf{94.25$\pm$0.01}& \textbf{89.25$\pm$0.12} &\textbf{88.47$\pm$2.36}& \textbf{89.36$\pm$5.34}\\ && GRWO& 89.25$\pm$0.02& 91.47$\pm$0.02& 84.95$\pm$6.45& 87.24$\pm$5.24& 87.24$\pm$9.25\\ && RWO& 90.25$\pm$0.01& 90.14$\pm$0.01& 84.57$\pm$5.24& 84.57$\pm$9.25& 86.98$\pm$0.02\\ && MWMOTE& 90.57$\pm$0.02& 87.52$\pm$0.02& 82.25$\pm$2.58& 82.47$\pm$5.36& 86.54$\pm$5.24\\ && SMOTE& 80.25$\pm$0.02& 80.45$\pm$0.01& 81.24$\pm$0.45& 81.47$\pm$0.01& 82.02$\pm$3.24\\ && ROS& 78.25$\pm$0.01& 79.25$\pm$0.02& 
80.25$\pm$0.15& 80.24$\pm$0.01& 81.02$\pm$2.25\\ &5-NN& UGRWO& 90.84$\pm$0.01& \textbf{89.89$\pm$0.01}& \textbf{92.58$\pm$2.14}& \textbf{92.87$\pm$0.14}& \textbf{94.05$\pm$0.04}\\ && GRWO& 90.79$\pm$0.01& 81.24$\pm$0.02& 91.25$\pm$2.35& 92.54$\pm$6.24& 90.54$\pm$5.02\\ && RWO& \textbf{90.87$\pm$0.01}& 86.27$\pm$0.01& 90.58$\pm$0.89& 91.24$\pm$6.24& 92.58$\pm$0.89\\ && MWMOTE& 90.58$\pm$0.02& 89.04$\pm$0.01& 90.45$\pm$0.05& 90.94$\pm$1.32& 92.15$\pm$2.13\\ && SMOTE& 89.87$\pm$0.01& 89.25$\pm$0.02& 89.87$\pm$0.23& 90.24$\pm$2.87& 91.89$\pm$0.87\\ && ROS& 89.25$\pm$0.01& 88.25$\pm$0.02& 90.02$\pm$0.45& 90.54$\pm$0.01& 91.54$\pm$0.45\\ &DT& UGRWO& \textbf{85.25$\pm$0.01}& 87.45$\pm$0.02& \textbf{88.59$\pm$3.24}& \textbf{89.25$\pm$1.45}& 90.15$\pm$1.45\\ && GRWO& 80.01$\pm$0.02& 86.25$\pm$0.01& 87.48$\pm$0.57& 89.12$\pm$2.15& 89.45$\pm$5.24\\ && RWO& 82.04$\pm$0.02& \textbf{89.25$\pm$0.01}& 86.49$\pm$2.47& 88.15$\pm$2.15& 90.15$\pm$0.66\\ && MWMOTE& 82.54$\pm$0.01& 85.25$\pm$0.02& 85.45$\pm$6.25& 88.00$\pm$9.17& \textbf{91.78$\pm$0.34}\\ && SMOTE& 80.25$\pm$0.01& 86.47$\pm$0.02& 86.15$\pm$0.84& 85.17$\pm$3.45& 84.14$\pm$2.75\\ && ROS& 75.14$\pm$0.01& 88.25$\pm$0.02& 86.59$\pm$2.45& 84.17$\pm$6.15& 86.15$\pm$2.65\\ &Ada &UGRWO& 85.15$\pm$0.06& 90.24$\pm$0.01& 91.24$\pm$0.04& 90.24$\pm$2.15& 87.25$\pm$5.25\\ &Boost& GRWO& 84.15$\pm$0.02& 91.24$\pm$0.01& 90.45$\pm$0.89& 91.57$\pm$3.15& 89.14$\pm$7.25\\ &M1& RWO& \textbf{86.25$\pm$0.02}& \textbf{92.15$\pm$0.01}& \textbf{95.25$\pm$0.89}& \textbf{91.89$\pm$3.45}& 88.14$\pm$0.06\\ && MWMOTE& 85.25$\pm$0.01& 91.45$\pm$0.01& 90.28$\pm$2.45& 90.24$\pm$2.54& \textbf{89.15$\pm$2.14}\\ && SMOTE& 67.48$\pm$0.01& 70.54$\pm$0.02& 79.75$\pm$2.45& 80.25$\pm$2.14& 78.25$\pm$6.25\\ && ROS& 70.24$\pm$0.04& 79.24$\pm$0.01& 80.54$\pm$0.14& 81.47$\pm$6.25& 82.25$\pm$5.25\\ Satimage& NB& UGRWO& \textbf{94.25$\pm$0.02}& \textbf{99.56$\pm$0.02}& \textbf{98.52$\pm$1.02}& \textbf{95.01$\pm$0.03}& \textbf{91.42$\pm$3.02}\\ && GRWO& 93.29$\pm$0.01& 
98.99$\pm$0.00& 98.25$\pm$0.30& 94.41$\pm$1.06& 89.47$\pm$2.06\\ && RWO& 94.12$\pm$0.01& 98.63$\pm$0.00& 97.78$\pm$0.38& 94.82$\pm$1.04& 90.25$\pm$1.97\\ && MWMOTE& 93.23$\pm$0.01& 98.52$\pm$0.02& 98.42$\pm$1.02& 94.95$\pm$2.12& 90.54$\pm$1.04\\ && SMOTE& 93.29$\pm$0.01& 99.20$\pm$0.00& 98.58$\pm$0.35& 94.63$\pm$1.26& 88.89$\pm$2.47\\ && ROS& 93.77$\pm$0.01& 98.55$\pm$0.00& 97.66$\pm$0.43& 94.47$\pm$1.23& 89.60$\pm$2.47\\ & 5-NN& UGRWO& \textbf{98.99$\pm$0.01}& \textbf{99.68$\pm$0.01}& \textbf{99.65$\pm$0.03}& \textbf{98.95$\pm$0.05}& \textbf{99.01$\pm$0.54}\\ && GRWO& 97.72$\pm$0.01& 99.67$\pm$0.00 &99.39$\pm$0.20& 98.97$\pm$0.36& 98.48$\pm$0.69\\ && RWO &98.25$\pm$0.01& 99.57$\pm$0.00& 99.31$\pm$0.40& 98.89$\pm$1.01& 98.22$\pm$2.03\\ && MWMOTE& 98.02$\pm$0.01& 98.54$\pm$0.02& 99.25$\pm$2.14& 98.15$\pm$2.56& 98.14$\pm$2.00\\ && SMOTE& 97.34$\pm$0.01& 99.67$\pm$0.00& 99.42$\pm$0.31& 98.28$\pm$1.28& 96.86$\pm$2.50\\ && ROS& 98.29$\pm$0.00& 99.58$\pm$0.00& 99.32$\pm$0.33& 98.90$\pm$0.59& 98.22$\pm$1.12\\ & DT& UGRWO& \textbf{97.80$\pm$0.00}& \textbf{99.85$\pm$0.00}& \textbf{99.25$\pm$0.00}& \textbf{98.45$\pm$0.12}&\textbf{98.67$\pm$1.02}\\ && GRWO& 96.43$\pm$0.02& 99.44$\pm$0.00& 99.03$\pm$0.62& 97.82$\pm$1.79& 96.24$\pm$3.36\\ && RWO &96.77$\pm$0.01& 99.20$\pm$0.00& 98.72$\pm$0.54& 98.09$\pm$1.01& 97.07$\pm$1.94\\ && MWMOTE& 97.25$\pm$0.01& 98.25$\pm$0.02& 99.08$\pm$0.54& 97.25$\pm$2.14& 98.45$\pm$0.02\\ && SMOTE& 95.59$\pm$0.01& 99.45$\pm$0.00& 99.03$\pm$0.38& 97.56$\pm$1.44& 95.74$\pm$2.81\\ && ROS& 97.73$\pm$0.00& 99.44$\pm$0.00& 99.10$\pm$0.33& 98.79$\pm$0.70& 98.29$\pm$1.46\\ &Ada& UGRWO& \textbf{96.51$\pm$0.01}& \textbf{99.39$\pm$0.02}& \textbf{98.98$\pm$2.02}& \textbf{98.00$\pm$1.35} &\textbf{94.63$\pm$2.58}\\ &Boost& GRWO& 94.95$\pm$0.01& 99.36$\pm$0.00& 98.79$\pm$0.32& 96.29$\pm$1.67& 93.03$\pm$3.48\\ &M1& RWO& 96.30$\pm$0.00& 99.11$\pm$0.00& 98.57$\pm$0.35& 97.03$\pm$0.87& 94.59$\pm$1.70\\ && MWMOTE& 96.02$\pm$0.01& 99.01$\pm$0.02& 98.02$\pm$1.32& 
97.25$\pm$2.01& 93.54$\pm$2.03\\ && SMOTE& 94.49$\pm$0.01& 99.34$\pm$0.00& 98.83$\pm$0.38& 95.57$\pm$1.26& 91.60$\pm$2.34\\ && ROS& 95.38$\pm$0.01& 98.90$\pm$0.00& 98.23$\pm$0.04& 96.12$\pm$0.95& 92.81$\pm$1.87\\ Segmentation& NB& UGRWO& \textbf{99.43$\pm$0.00}& \textbf{98.25$\pm$0.02}& \textbf{99.14$\pm$1.38}& \textbf{99.43$\pm$0.90}& \textbf{99.28$\pm$1.50}\\ && GRWO& 93.62$\pm$0.03& 96.22$\pm$0.02& 95.19$\pm$2.84& 94.42$\pm$3.46& 92.33$\pm$6.28\\ && RWO &96.21$\pm$0.02& 96.11$\pm$0.02& 96.16$\pm$2.41& 96.17$\pm$2.41& 95.40$\pm$2.36\\ && MWMOTE& 95.05$\pm$0.01& 97.02$\pm$0.02& 97.55$\pm$2.02& 97.88$\pm$2.03& 98.05$\pm$0.54\\ && SMOTE& 91.70$\pm$0.03& 95.79$\pm$0.02& 94.42$\pm$2.04& 93.07$\pm$2.86& 89.21$\pm$4.46\\ && ROS& 94.92$\pm$0.01& 94.86$\pm$0.01& 94.89$\pm$1.19& 94.82$\pm$1.48& 93.36$\pm$3.06\\ & 5-NN& UGRWO& \textbf{90.57$\pm$0.10}& \textbf{94.84$\pm$0.05}& 93.39$\pm$6.99& \textbf{91.60$\pm$9.14}& 86.66$\pm$17.21\\ && GRWO& 79.99$\pm$19.22& 93.92$\pm$1.03& 89.93$\pm$6.75& 79.99$\pm$19.22& 71.66$\pm$19.22\\ && RWO& 72.67$\pm$0.19& 94.67$\pm$0.03& \textbf{99.36$\pm$7.36}& 80.70$\pm$15.80& 73.00$\pm$24.06\\ && MWMOTE& 88.52$\pm$0.02& 93.23$\pm$0.02& 98.25$\pm$0.02& 90.54$\pm$5.45& \textbf{95.48$\pm$2.03}\\ && SMOTE& 63.57$\pm$0.25& 94.67$\pm$0.03& 90.78$\pm$6.06& 76.96$\pm$17.02& 95.00$\pm$26.58\\ && ROS& 74.66$\pm$0.13& 91.26$\pm$0.05& 87.04$\pm$7.64& 85.21$\pm$9.13& 82.66$\pm$13.77\\ & DT& UGRWO& 91.71$\pm$0.08& \textbf{98.60$\pm$0.01}& \textbf{97.61$\pm$2.50}& \textbf{95.48$\pm$7.40}& \textbf{93.33$\pm$14.05}\\ && GRWO& 87.52$\pm$0.10& 94.09$\pm$0.04& 92.05$\pm$6.03& 89.00$\pm$9.30& 81.66$\pm$17.48\\ && RWO& \textbf{92.84$\pm$0.06}& 98.09$\pm$0.01& 96.99$\pm$2.91& 93.78$\pm$5.48& 88.66$\pm$9.83\\ && MWMOTE& 91.25$\pm$0.02& 97.52$\pm$0.01& 95.23$\pm$0.45& 93.25$\pm$1.02& 92.12$\pm$0.02\\ && SMOTE& 86.66$\pm$0.21& 98.32$\pm$0.02& 97.04$\pm$4.75& 92.07$\pm$15.52& 88.33$\pm$24.90\\ && ROS& 92.14$\pm$0.08& 98.10$\pm$0.01& 96.95$\pm$2.93& 93.24$\pm$7.49& 
88.00$\pm$13.98\\ &Ada& UGRWO& 93.46$\pm$0.08& 95.64$\pm$0.05& 94.96$\pm$6.54& 93.93$\pm$8.14& 93.33$\pm$14.50\\ &Boost& GRWO& 92.28$\pm$0.08& 98.61$\pm$0.01& 97.66$\pm$2.46& 94.92$\pm$7.10& 91.66$\pm$13.60\\ &M1& RWO& 88.28$\pm$0.14& 97.33$\pm$0.02& 95.68$\pm$4.58& 89.96$\pm$11.95& 82.66$\pm$19.86\\ && MWMOTE& 90.23$\pm$0.02& 97.23$\pm$0.02& 97.25$\pm$2.03& 92.23$\pm$2.03& 90.23$\pm$1.02\\ && SMOTE& 94.00$\pm$0.09& 99.17$\pm$0.01& 98.54$\pm$2.33& 96.04$\pm$7.63& 93.33$\pm$8.43\\ && ROS& \textbf{96.88$\pm$0.06}& \textbf{99.17$\pm$0.02}& \textbf{98.69$\pm$2.93}& \textbf{97.63$\pm$5.01}& \textbf{96.00$\pm$8.43}\\ vehicle& NB& UGRWO& \textbf{80.36$\pm$0.03}& 72.05$\pm$0.05& 74.19$\pm$3.73& 74.51$\pm$4.77& 76.26$\pm$8.77\\ && GRWO& 60.88$\pm$0.07& 81.89$\pm$0.02& 75.29$\pm$3.96& 71.38$\pm$4.84& 67.32$\pm$3.31\\ && RWO& 72.43$\pm$0.04& \textbf{84.29$\pm$0.02}& \textbf{79.90$\pm$3.13}& \textbf{76.93$\pm$3.71}& 68.07$\pm$6.43\\ && MWMOTE& 75.26$\pm$2.32& 84.25$\pm$0.02& 78.25$\pm$2.03& 75.45$\pm$0.02& \textbf{93.25$\pm$2.01}\\ && SMOTE& 54.86$\pm$0.04& 72.71$\pm$0.04& 66.07$\pm$4.19& 71.96$\pm$3.97& 87.39$\pm$6.90\\ && ROS& 71.16$\pm$0.04& 71.15$\pm$0.05& 71.20$\pm$4.54& 73.20$\pm$4.74& 93.21$\pm$6.36\\ & 5-NN& UGRWO& \textbf{97.21$\pm$0.02}& 93.40$\pm$0.05& \textbf{96.08$\pm$3.53}& \textbf{95.25$\pm$4.61}& 97.21$\pm$2.69\\ && GRWO& 88.70$\pm$0.02& 95.59$\pm$0.00& 93.66$\pm$1.46& 92.12$\pm$2.96& 89.01$\pm$6.70\\ && RWO& 85.19$\pm$0.04& 91.72$\pm$0.02& 89.38$\pm$3.47& 87.31$\pm$4.00& 80.42$\pm$6.15\\ && MWMOTE& 96.23$\pm$0.02& 95.26$\pm$0.02& 92.23$\pm$2.54& 90.23$\pm$2.32& 89.25$\pm$2.32\\ && SMOTE& 87.41$\pm$0.05& \textbf{96.03$\pm$0.01}& 93.97$\pm$2.90& 91.95$\pm$3.77& 88.44$\pm$5.78\\ && ROS& 96.67$\pm$0.00& 93.25$\pm$0.01& 95.54$\pm$0.85& 93.60$\pm$1.35& \textbf{99.66$\pm$0.43}\\ & DT& UGRWO& \textbf{96.19$\pm$0.01}& 92.19$\pm$0.05& 94.74$\pm$2.01& 94.61$\pm$2.90& 94.84$\pm$2.72\\ && GRWO& 88.31$\pm$0.03& 94.79$\pm$0.01& 92.82$\pm$3.66& 91.54$\pm$2.56 
&88.70$\pm$5.00\\ && RWO& 89.72$\pm$0.05& 93.75$\pm$0.03& 92.24$\pm$3.98& 91.50$\pm$4.23& 88.91$\pm$5.84\\ && MWMOTE& 95.23$\pm$0.05& \textbf{96.30$\pm$0.03}& \textbf{96.23$\pm$2.03}& \textbf{96.52$\pm$2.03}& \textbf{96.02$\pm$2.45}\\ && SMOTE& 85.68$\pm$0.07& 95.57$\pm$0.02& 93.24$\pm$3.18& 90.78$\pm$5.52& 86.84$\pm$9.80\\ && ROS& 94.23$\pm$0.02& 96.29$\pm$0.01& 95.49$\pm$2.03& 95.55$\pm$1.85& 95.96$\pm$2.72\\ & Ada& UGRWO& \textbf{94.28$\pm$0.02}& 91.38$\pm$0.04& \textbf{93.14$\pm$3.45}& \textbf{92.23$\pm$3.81}& \textbf{96.67$\pm$3.68}\\ &Boost& GRWO& 83.17$\pm$0.04& \textbf{93.78$\pm$0.01}& 89.56$\pm$2.56& 88.74$\pm$4.08& 87.51$\pm$9.29\\ &M1& RWO& 83.90$\pm$0.07& 91.65$\pm$0.02& 89.10$\pm$4.14& 85.77$\pm$6.67& 77.39$\pm$14.24\\ && MWMOTE& 88.25$\pm$0.02& 92.25$\pm$0.01& 89.23$\pm$2.03& 88.25$\pm$2.45& 96.25$\pm$3.03\\ && SMOTE& 78.92$\pm$0.06& 93.23$\pm$0.02& 89.82$\pm$3.24& 86.39$\pm$6.66& 81.86$\pm$14.64\\ && ROS& 88.23$\pm$0.01& 91.36$\pm$0.01& 90.05$\pm$1.79& 91.27$\pm$1.64& 97.75$\pm$3.21\\ sonar& NB& UGRWO& \textbf{92.60$\pm$0.05}& \textbf{96.38$\pm$0.22}& \textbf{87.94$\pm$9.44}& \textbf{86.28$\pm$13.3}& \textbf{88.18$\pm$9.63}\\ && GRWO& 74.69$\pm$0.07& 60.57$\pm$0.08& 78.10$\pm$8.38& 76.89$\pm$9.65& 68.18$\pm$16.7\\ && RWO& 78.57$\pm$0.09& 75.72$\pm$0.02& 77.33$\pm$8.57& 79.77$\pm$8.71& 66.97$\pm$10.8\\ && MWMOTE& 75.25$\pm$0.01& 93.25$\pm$0.14& 84.25$\pm$2.58& 85.57$\pm$6.25& 87.54$\pm$1.32\\ && SMOTE& 69.53$\pm$0.08& 62.06$\pm$0.15& 66.43$\pm$11.17& 65.31$\pm$12.57& 80.55$\pm$7.14\\ && ROS& 82.96$\pm$0.06& 62.28$\pm$0.17& 76.76$\pm$8.64& 69.09$\pm$13.59& 88.73$\pm$8.13\\ & 5-NN& UGRWO& \textbf{96.03$\pm$0.03}& 73.57$\pm$0.21& \textbf{92.75$\pm$6.95}& 85.14$\pm$14.80& \textbf{97.27$\pm$4.39}\\ && GRWO& 84.57$\pm$0.07& \textbf{83.39$\pm$0.10}& 84.03$\pm$7.22& 84.01$\pm$7.17& 82.69$\pm$8.67\\ && RWO& 89.86$\pm$0.03& 82.41$\pm$0.07& \textbf{87.18$\pm$4.63}& 86.35$\pm$6.55& 88.71$\pm$3.91\\ && MWMOTE& 95.87$\pm$0.01& 82.52$\pm$0.02& 91.25$\pm$2.03&
85.26$\pm$5.84& 92.32$\pm$1.45\\ && SMOTE& 77.67$\pm$0.07& 82.25$\pm$0.06& 80.32$\pm$7.28& 79.28$\pm$6.98& 73.11$\pm$8.73\\ && ROS& 88.81$\pm$0.03& 79.19$\pm$0.09& 85.56$\pm$5.13& 83.22$\pm$17.67& 89.65$\pm$6.05\\ & DT& UGRWO& \textbf{93.41$\pm$0.05}& 65.61$\pm$0.18& \textbf{88.54$\pm$10.04}& 77.08$\pm$14.07& \textbf{95.75$\pm$5.95}\\ && GRWO& 77.12$\pm$0.07& 74.04$\pm$0.09& 75.85$\pm$8.23& 75.32$\pm$8.44& 76.85$\pm$1.46\\ && RWO& 84.63$\pm$0.05& 72.98$\pm$0.09& 80.65$\pm$6.72& \textbf{79.59$\pm$8.39}& 84.94$\pm$10.65\\ && MWMOTE& 91.45$\pm$0.01& \textbf{76.25$\pm$0.01}& 87.45$\pm$5.24& 78.51$\pm$1.47& 94.25$\pm$4.15\\ && SMOTE& 73.10$\pm$0.11& 75.08$\pm$0.11& 74.40$\pm$11.08& 73.73$\pm$11.09& 75.11$\pm$15.35\\ && ROS& 82.30$\pm$0.06& 62.67$\pm$0.14& 76.05$\pm$8.61& 69.76$\pm$11.25& 87.13$\pm$6.47\\ &Ada& UGRWO& \textbf{95.49$\pm$0.03}& 52.29$\pm$0.18& \textbf{91.85$\pm$6.64}& 74.92$\pm$29.44& \textbf{96.36$\pm$4.69}\\ &Boost& GRWO& 81.65$\pm$0.07& \textbf{79.49$\pm$0.09}& 80.67$\pm$8.06& \textbf{80.50$\pm$8.25}& 80.32$\pm$8.17\\ &M1& RWO& 85.60$\pm$0.06& 74.02$\pm$0.09& 81.56$\pm$7.68& 78.62$\pm$7.41& 87.52$\pm$9.99\\ && MWMOTE& 92.23$\pm$0.01& 78.25$\pm$0.02& 90.23$\pm$1.87& 79.25$\pm$1.54& 89.25$\pm$4.25\\ && SMOTE& 73.08$\pm$0.08& 77.29$\pm$0.06& 75.55$\pm$6.99& 74.76$\pm$7.10& 72.33$\pm$13.22\\ && ROS& 86.25$\pm$0.07& 75.13$\pm$0.10& 82.56$\pm$8.10& 79.31$\pm$9.18& 88.05$\pm$11.91\\ Glass& NB& UGRWO& 60.23$\pm$0.05& 88.55$\pm$0.03& 81.62$\pm$6.37& 59.98$\pm$34.46& 46.66$\pm$32.20\\ && GRWO& 59.63$\pm$0.03& 94.11$\pm$0.03& 89.48$\pm$5.87& 57.10$\pm$2.83& 43.33$\pm$38.65\\ && RWO& 57.30$\pm$0.13& 94.06$\pm$0.01& 89.65$\pm$2.89& 68.19$\pm$12.48& 50.00$\pm$20.78\\ && MWMOTE& \textbf{88.25$\pm$0.02}& \textbf{98.25$\pm$0.01}& \textbf{96.25$\pm$0.14}& \textbf{94.95$\pm$0.57}& 94.25$\pm$0.94\\ && SMOTE& 81.24$\pm$0.06& 88.12$\pm$0.04& 85.49$\pm$5.23& 87.64$\pm$5.71& \textbf{95.71$\pm$9.64}\\ && ROS& 87.01$\pm$0.14& 97.05$\pm$0.03& 95.23$\pm$5.95& 94.55$\pm$7.76&
94.16$\pm$12.45\\ & 5-NN& UGRWO& \textbf{100.00$\pm$0.00}& \textbf{100.00$\pm$0.00}& \textbf{100.00$\pm$0.00}& \textbf{100.00$\pm$0.00}& \textbf{100.00$\pm$0.00}\\ && GRWO& 82.23$\pm$0.15& 97.98$\pm$0.01& 96.37$\pm$2.87& 86.13$\pm$13.47 &76.66$\pm$22.49\\ && RWO &68.73$\pm$0.23& 96.06$\pm$0.02& 93.05$\pm$4.70& 74.40$\pm$18.58& 59.16$\pm$28.17\\ && MWMOTE& 99.23$\pm$0.01& 99.25$\pm$0.03& 95.25$\pm$0.02& 98.25$\pm$2.01& 98.54$\pm$2.35\\ && SMOTE& 91.33$\pm$0.14& 99.24$\pm$0.01& 98.61$\pm$2.23& 96.56$\pm$9.14& 95.00$\pm$15.81\\ && ROS& 96.57$\pm$0.07& 99.20$\pm$0.01& 98.71$\pm$2.85& 99.22$\pm$1.74& \textbf{100.00$\pm$0.00}\\ & DT& UGRWO& 94.00$\pm$0.09& 98.60$\pm$0.02& 97.73$\pm$3.65& 95.81$\pm$7.63& 93.33$\pm$14.05\\ && GRWO& 91.14$\pm$0.09& 98.73$\pm$0.01& 97.78$\pm$2.33& 93.98$\pm$8.57& 90.00$\pm$22.49\\ && RWO& 86.21$\pm$0.15& 97.98$\pm$0.01& 96.51$\pm$3.43& 89.44$\pm$13.40& 82.50$\pm$22.03\\ && MWMOTE& \textbf{98.89$\pm$0.02}& \textbf{99.78$\pm$0.01}& \textbf{99.84$\pm$0.23}& \textbf{99.87$\pm$0.02}& \textbf{100.00$\pm$0.00}\\ && SMOTE& 98.57$\pm$0.04& 99.74$\pm$0.00& 99.56$\pm$1.37& 99.74$\pm$0.80& \textbf{100.00$\pm$0.00}\\ && ROS& 94.66$\pm$0.11& 99.49$\pm$0.01& 99.09$\pm$1.91& 96.81$\pm$9.20& 95.00$\pm$15.81\\ &Ada&UGRWO& \textbf{94.66$\pm$0.11}& \textbf{99.30$\pm$0.02}& 98.23$\pm$3.97& 96.03$\pm$8.39& 93.33$\pm$21.08\\ &Boost& GRWO& 87.66$\pm$0.17& 98.52$\pm$0.02& 97.37$\pm$3.67& 90.06$\pm$14.48& 83.33$\pm$23.57\\ &M1&RWO& 73.64$\pm$0.19& 96.39$\pm$0.02& 93.54$\pm$4.51& 78.99$\pm$15.83& 65.83$\pm$24.67\\ && MWMOTE& 94.02$\pm$0.11& 99.02$\pm$0.02& 92.23$\pm$2.13& 95.52$\pm$0.02& 93.52$\pm$1.24\\ && SMOTE& 90.00$\pm$0.16& 98.98$\pm$0.01& 98.18$\pm$3.17& 93.62$\pm$12.18& 90.00$\pm$21.08\\ && ROS& 94.12$\pm$0.11& 98.99$\pm$0.01& \textbf{98.29$\pm$2.95}& \textbf{96.55$\pm$9.14}& \textbf{95.00$\pm$15.81}\\ \end{longtable} \begin{longtable}{llllllll} \caption{Averaged results and standard deviations on nine continuous attribute datasets (over-sampling rate equals 
$200\%$).} \label{tab6} \endfirsthead \endhead \hline ds & Alg & OS & f-min & f-maj & O-acc & G-mean & TP rate \\\hline Breast\_w& NB& UGRWO& \textbf{98.23$\pm$0.03}& \textbf{96.99$\pm$0.03}& \textbf{96.98$\pm$2.03}& \textbf{96.52$\pm$2.04}& \textbf{98.01$\pm$1.34}\\ && GRWO& 95.52$\pm$0.02& 96.27$\pm$0.02& 96.08$\pm$2.23& 96.33$\pm$2.11& 97.51$\pm$2.40\\ && RWO& 97.36$\pm$0.01& 95.83$\pm$0.01& 96.77$\pm$1.48& 96.47$\pm$1.64& 97.77$\pm$1.49\\ && MWMOTE& 97.25$\pm$0.01& 95.26$\pm$0.02& 96.52$\pm$0.25& 95.25$\pm$6.25& 97.48$\pm$2.54\\ && SMOTE& 96.17$\pm$0.01& 95.96$\pm$0.01& 96.06$\pm$1.50& 96.03$\pm$1.47& 96.89$\pm$2.81\\ && ROS& 97.45$\pm$0.01& 95.92$\pm$0.02& 96.86$\pm$1.26& 96.54$\pm$1.49& 97.92$\pm$0.97\\ & 5-NN& UGRWO& \textbf{99.83$\pm$0.00}& \textbf{99.47$\pm$0.01}& \textbf{99.74$\pm$0.81}& \textbf{99.83$\pm$0.53}& \textbf{99.72$\pm$0.83}\\ && GRWO& 97.28$\pm$0.02& 97.90$\pm$0.01& 97.49$\pm$1.58& 97.56$\pm$2.58& 98.67$\pm$2.24\\ && RWO& 98.68$\pm$0.01& 97.89$\pm$0.01& 98.38$\pm$1.29& 98.02$\pm$1.42& 98.58$\pm$1.31\\ && MWMOTE& 97.54$\pm$0.01& 98.25$\pm$0.01& 99.45$\pm$2.54& 98.54$\pm$3.02& 97.45$\pm$1.54\\ && SMOTE& 98.06$\pm$0.01& 97.88$\pm$0.01& 97.97$\pm$1.54& 97.92$\pm$1.61& 99.17$\pm$1.07\\ && ROS& 98.77$\pm$0.01& 97.98$\pm$0.01& 98.47$\pm$1.31& 98.05$\pm$1.69& 99.86$\pm$0.43\\ & DT& UGRWO& \textbf{99.69$\pm$0.00}& \textbf{99.08$\pm$0.01}& \textbf{99.54$\pm$0.96}& \textbf{99.10$\pm$1.88}& \textbf{100.0$\pm$0.00}\\ && GRWO& 94.05$\pm$0.03& 96.25$\pm$0.02& 95.40$\pm$3.26& 95.23$\pm$3.10& 94.64$\pm$3.47\\ && RWO& 97.14$\pm$0.01& 95.52$\pm$0.01& 96.51$\pm$1.57& 96.25$\pm$1.60& 97.35$\pm$2.39\\ && MWMOTE& 97.84$\pm$0.01& 98.54$\pm$0.02& 98.25$\pm$2.03& 98.48$\pm$1.54& 99.48$\pm$2.15\\ && SMOTE& 95.74$\pm$0.00& 95.53$\pm$0.00& 95.64$\pm$0.79& 95.62$\pm$0.81& 95.65$\pm$1.79\\ && ROS& 98.29$\pm$0.01& 97.19$\pm$0.01& 97.88$\pm$1.34& 97.39$\pm$1.69& 99.44$\pm$0.97\\ &Ada& UGRWO& \textbf{99.66$\pm$0.00}& \textbf{98.89$\pm$0.02}& \textbf{99.31$\pm$1.09}& 
\textbf{99.47$\pm$1.10}& \textbf{99.38$\pm$1.22}\\ &Boost& GRWO& 94.55$\pm$0.03& 96.04$\pm$0.02& 95.25$\pm$2.09& 94.93$\pm$2.99& 93.88$\pm$3.75\\ &M1& RWO& 96.46$\pm$0.01& 94.66$\pm$0.02& 95.74$\pm$1.65& 95.82$\pm$1.66& 95.39$\pm$2.55\\ && SMOTE& 95.31$\pm$0.01& 95.32$\pm$0.01& 95.32$\pm$1.74& 95.33$\pm$1.77& 95.56$\pm$3.47\\ && ROS& 96.84$\pm$0.01& 95.18$\pm$0.02& 96.19$\pm$1.88& 96.22$\pm$1.79& 95.99$\pm$3.02\\ Diabetes& NB& UGRWO& 63.07$\pm$0.08& \textbf{88.27$\pm$0.03}& \textbf{82.26$\pm$4.18}& \textbf{84.84$\pm$7.69}& 94.64$\pm$9.67\\ && GRWO& \textbf{79.84$\pm$0.02}& 58.62$\pm$0.02& 73.32$\pm$3.71& 68.16$\pm$5.07& \textbf{95.40$\pm$4.42}\\ && RWO& 66.74$\pm$3.92& 77.78$\pm$3.72& 73.36$\pm$3.27& 74.07$\pm$3.99& 76.60$\pm$3.39\\ && MWMOTE& 78.45$\pm$0.02& 87.95$\pm$2.25& 80.88$\pm$0.97& 80.54$\pm$2.06& 93.02$\pm$2.14\\ && SMOTE& 73.19$\pm$0.04& 73.16$\pm$0.04& 73.26$\pm$4.12& 73.09$\pm$4.15& 70.70$\pm$6.26\\ && ROS& 75.50$\pm$0.03& 65.53$\pm$0.04& 71.39$\pm$3.63& 71.24$\pm$3.84& 71.63$\pm$4.15\\ & 5-NN& UGRWO& \textbf{92.42$\pm$0.03}& 69.70$\pm$0.14& \textbf{87.92$\pm$5.19}& \textbf{85.71$\pm$11.66}& 89.50$\pm$3.49\\ && GRWO& 64.19$\pm$0.05& \textbf{76.22$\pm$0.03}& 71.82$\pm$2.59& 68.74$\pm$5.86& \textbf{58.01$\pm$7.00}\\ && RWO& 74.29$\pm$0.04& 68.06$\pm$0.03& 71.63$\pm$4.01& 72.49$\pm$3.86& 67.17$\pm$7.55\\ && MWMOTE& 90.54$\pm$0.01& 70.25$\pm$0.01& 84.00$\pm$4.12& 84.15$\pm$2.06& 89.54$\pm$0.023\\ && SMOTE& 78.95$\pm$0.03& 72.43$\pm$0.05& 76.15$\pm$4.45& 74.94$\pm$4.73& 86.39$\pm$5.03\\ && ROS& 82.85$\pm$0.00& 64.95$\pm$0.02& 76.99$\pm$0.94& 70.85$\pm$2.05& 90.16$\pm$2.10\\ & DT& UGRWO& \textbf{93.49$\pm$0.02} &65.39$\pm$0.09& \textbf{86.76$\pm$4.44}& 76.10$\pm$7.75& \textbf{92.75$\pm$4.63}\\ && GRWO& 71.89$\pm$0.04& 76.67$\pm$0.06& 74.11$\pm$6.24 &73.67$\pm$6.15& 72.29$\pm$6.13\\ && RWO& 85.60$\pm$0.02& 77.42$\pm$0.04& 82.43$\pm$3.04& 81.61$\pm$3.50& 84.70$\pm$3.65\\ && MWMOTE& 85.62$\pm$0.01& 77.25$\pm$0.01& 78.25$\pm$2.03& 80.23$\pm$0.25& 
83.25$\pm$3.25\\ && SMOTE& 75.50$\pm$0.03& 73.47$\pm$0.03& 74.61$\pm$3.26& 74.34$\pm$3.30& 75.89$\pm$6.14\\ && ROS& 88.18$\pm$0.02& \textbf{79.02$\pm$0.03}& 84.89$\pm$2.63& \textbf{82.42$\pm$3.19}& 91.41$\pm$2.6\\ &Ada&UGRWO& \textbf{92.21$\pm$0.01}& 68.68$\pm$0.08& \textbf{86.51$\pm$3.05}& 76.55$\pm$6.77& \textbf{94.75$\pm$4.15}\\ &Boost& GRWO& 72.21$\pm$0.07& \textbf{78.90$\pm$0.03}& 74.99$\pm$5.67& 74.29$\pm$6.06& 69.64$\pm$10.32\\ &M1& RWO& 86.20$\pm$0.03& 77.12$\pm$0.06& 82.83$\pm$4.45& \textbf{81.82$\pm$5.35}& 87.08$\pm$5.07\\ && MWMOTE& 86.21$\pm$0.01& 77.65$\pm$0.02& 80.26$\pm$0.32& 80.95$\pm$3.45& 90.15$\pm$2.13\\ && SMOTE& 77.06$\pm$0.04& 75.40$\pm$0.05& 76.35$\pm$4.72& 76.10$\pm$4.94& 76.71$\pm$6.22\\ && ROS& 82.95$\pm$0.03& 67.71$\pm$0.07& 77.76$\pm$4.77& 73.24$\pm$6.32& 87.79$\pm$5.41\\ Ionosphere& NB &UGRWO& \textbf{97.62$\pm$0.02}& \textbf{88.73$\pm$0.12}& \textbf{96.11$\pm$0.57}& \textbf{93.18$\pm$8.91}& \textbf{97.33$\pm$4.66}\\ && GRWO& 80.13$\pm$0.05& 83.63$\pm$0.06& 82.13$\pm$5.77& 82.82$\pm$5.34& 88.66$\pm$4.49\\ && RWO& 86.35$\pm$0.03& 76.76$\pm$0.05& 82.84$\pm$3.92& 81.13$\pm$4.53& 86.93$\pm$4.36\\ && MWMOTE& 86.25$\pm$0.03& 80.65$\pm$0.01& 82.32$\pm$0.02& 81.02$\pm$3.02& 85.16$\pm$1.23\\ && SMOTE& 82.53$\pm$0.05& 78.76$\pm$0.07& 80.95$\pm$5.87& 80.16$\pm$6.32& 85.67$\pm$8.72\\ && ROS& 87.47$\pm$0.05& 77.41$\pm$0.10& 83.89$\pm$6.92& 81.65$\pm$8.32& 89.16$\pm$4.70\\ & 5-NN& UGRWO& 90.47$\pm$0.04& 77.55$\pm$0.08& 86.19$\pm$6.61& 90.52$\pm$4.73& 83.33$\pm$6.47\\ && GRWO& 80.97$\pm$0.04& 90.00$\pm$0.01& 86.94$\pm$2.59& 82.86$\pm$4.40& 70.66$\pm$8.99\\ && RWO& 90.04$\pm$0.02& 86.40$\pm$0.03& 88.52$\pm$3.17& 89.88$\pm$3.08& 83.51$\pm$4.63\\ && MWMOTE& 90.52$\pm$0.01& 91.25$\pm$0.02& 90.45$\pm$2.25& 90.87$\pm$8.25& 85.64$\pm$1.45\\ && SMOTE &\textbf{91.88$\pm$0.05}& \textbf{92.17$\pm$0.04}& \textbf{92.04$\pm$5.21}& 87.33$\pm$8.50& \textbf{86.87$\pm$6.54}\\ && ROS& 91.45$\pm$0.04& 88.02$\pm$0.04& 90.04$\pm$4.56& \textbf{91.03$\pm$3.84}& 
86.53$\pm$7.76\\ & DT& UGRWO& \textbf{96.02$\pm$0.03}& 85.40$\pm$0.09& 93.79$\pm$5.59& 90.32$\pm$8.99& 96.00$\pm$6.44\\ && GRWO& 88.40$\pm$0.05& 92.14$\pm$0.03& 90.66$\pm$4.61& 90.27$\pm$4.94& 90.00$\pm$6.02\\ && RWO& 92.66$\pm$0.02& 87.75$\pm$0.04& 90.85$\pm$3.40& 90.20$\pm$4.39& 91.99$\pm$4.17\\ && MWMOTE& 92.12$\pm$0.01& 90.23$\pm$0.05& 90.25$\pm$2.05& 90.75$\pm$2.56& 95.12$\pm$3.25\\ && SMOTE& 89.26$\pm$0.05& 88.35$\pm$0.05& 88.85$\pm$5.56& 88.74$\pm$5.56& 88.81$\pm$7.74\\ && ROS& 95.86$\pm$0.01& \textbf{92.55$\pm$0.02}& \textbf{94.68$\pm$2.05}& \textbf{93.36$\pm$2.61}& \textbf{98.15$\pm$2.79}\\ &Ada&UGRWO& \textbf{94.99$\pm$0.04}& 79.27$\pm$0.19& \textbf{92.08$\pm$6.82}& 85.62$\pm$14.85& \textbf{96.66$\pm$4.71}\\ &Boost& GRWO& 86.65$\pm$0.04& \textbf{92.82$\pm$0.02}& 90.69$\pm$3.21& \textbf{88.06$\pm$4.35}& 80.66$\pm$12.35\\ &M1& RWO& 88.87$\pm$0.06& 83.67$\pm$0.07& 86.82$\pm$6.70& 86.91$\pm$6.26& 85.89$\pm$9.76\\ && MWMOTE& 90.25$\pm$0.02& 90.25$\pm$6.25& 90.45$\pm$6.25& 87.14$\pm$2.54& 94.62$\pm$2.45\\ && SMOTE& 86.72$\pm$0.04& 86.80$\pm$0.04& 86.78$\pm$4.35& 86.84$\pm$4.32& 81.76$\pm$5.60\\ && ROS& 93.68$\pm$0.02&89.22$\pm$0.04& 92.06$\pm$3.02& 86.13$\pm$3.75& 94.18$\pm$4.94\\ Musk& NB& UGRWO& \textbf{89.25$\pm$0.02}& \textbf{89.25$\pm$0.01}& \textbf{90.12$\pm$3.02}& \textbf{88.95$\pm$6.25}& 95.16$\pm$0.12\\ && GRWO& 88.15$\pm$0.02& 89.01$\pm$0.01& 89.15$\pm$0.03& 87.95$\pm$2.15& 87.95$\pm$5.12\\ && RWO& 87.95$\pm$2.13& 89.16$\pm$0.03& 88.47$\pm$0.25& 87.91$\pm$5.26& 86.89$\pm$9.76\\ && MWMOTE& 87.99$\pm$0.01& 87.15$\pm$0.02& 87.51$\pm$2.65& 87.14$\pm$2.54& \textbf{95.62$\pm$2.45}\\ && SMOTE& 86.05$\pm$0.02& 88.15$\pm$0.01& 88.19$\pm$0.02& 86.84$\pm$6.14& 81.76$\pm$5.60\\ && ROS& 87.14$\pm$0.02& 89.01$\pm$0.02& 89.25$\pm$3.02& 85.13$\pm$3.75& 94.18$\pm$4.94\\ & 5-NN& UGRWO& \textbf{92.86$\pm$0.05}& 65.33$\pm$0.06& \textbf{88.58$\pm$8.25}& \textbf{86.24$\pm$12.5}& 88.52$\pm$9.74\\ && GRWO& 77.40$\pm$0.07& 80.48$\pm$0.05& 79.09$\pm$6.59& 79.15$\pm$8.96& 
67.30$\pm$16.6\\ && RWO& 84.71$\pm$0.06& 74.05$\pm$0.07& 80.85$\pm$7.14& 84.82$\pm$6.18& 74.73$\pm$9.27\\ && MWMOTE& 91.25$\pm$0.03& \textbf{81.02$\pm$0.02}& 79.25$\pm$0.17& 84.26$\pm$0.23& \textbf{88.89$\pm$2.03}\\ && SMOTE& 82.65$\pm$0.04& 62.96$\pm$0.10& 76.43$\pm$6.49& 69.83$\pm$8.17& 88.23$\pm$6.14\\ && ROS& 84.69$\pm$0.03& 54.26$\pm$0.11& 77.13$\pm$5.50& 65.48$\pm$9.04& 87.64$\pm$5.62\\ & DT& UGRWO& \textbf{92.80$\pm$0.02}& 83.43$\pm$0.11& \textbf{94.41$\pm$3.61}& 88.77$\pm$7.49& \textbf{97.83$\pm$3.83}\\ && GRWO& 88.64$\pm$0.057& \textbf{91.37$\pm$0.01}& 90.91$\pm$4.70& \textbf{89.58$\pm$4.37}& 83.82$\pm$4.44\\ && RWO& 92.32$\pm$0.02& 75.97$\pm$0.11& 88.42$\pm$3.83& 86.12$\pm$10.45& 89.74$\pm$2.25\\ && MWMOTE& 90.23$\pm$0.02& 90.25$\pm$0.05& 93.15$\pm$6.25& 89.25$\pm$2.13& 97.82$\pm$0.02\\ && SMOTE& 91.72$\pm$0.02& 77.97$\pm$0.05& 88.06$\pm$2.97& 85.38$\pm$6.18& 90.31$\pm$5.96\\ && ROS& 94.89$\pm$0.01& 82.61$\pm$0.07& 92.15$\pm$2.29& 89.17$\pm$8.26& 94.18$\pm$3.00\\ &Ada&UGRWO& \textbf{91.85$\pm$0.00}& \textbf{89.33$\pm$0.02}& \textbf{89.77$\pm$0.71}& \textbf{89.86$\pm$0.44}& 89.80$\pm$0.62\\ &Boost& GRWO& 90.25$\pm$0.00& 88.03$\pm$0.01& 88.14$\pm$1.06& 88.05$\pm$1.13& 89.40$\pm$1.13\\ &M1& RWO& 90.21$\pm$0.00& 87.87$\pm$0.00& 88.84$\pm$0.53& 87.90$\pm$0.96 &\textbf{90.0$\pm$0.00}\\ && MWMOTE& 89.25$\pm$0.01& 86.25$\pm$0.03& 87.45$\pm$6.02& 88.03$\pm$3.20& 89.25$\pm$3.02\\ && SMOTE& 90.87$\pm$0.00& 87.54$\pm$0.01& 88.45$\pm$1.09& 87.74$\pm$1.54& 89.68$\pm$0.05\\ && ROS& 89.05$\pm$0.00& 87.42$\pm$0.01& 88.61$\pm$0.93& 87.66$\pm$1.71& 89.75$\pm$0.40\\ Satimage& NB& UGRWO& \textbf{94.92$\pm$0.01}& \textbf{99.25$\pm$0.01}& \textbf{98.80$\pm$1.34}& \textbf{95.42$\pm$2.32}& 90.54$\pm$2.54\\ && GRWO& 93.60$\pm$0.01& 99.01$\pm$0.00& 98.26$\pm$0.52& 94.50$\pm$1.21& 89.65$\pm$2.25\\ && RWO& 94.52$\pm$0.00& 98.10$\pm$0.00& 97.18$\pm$0.43& 94.99$\pm$0.84& \textbf{90.61$\pm$1.63}\\ && MWMOTE& 93.25$\pm$0.01& 98.25$\pm$0.01& 98.02$\pm$0.38& 94.16$\pm$0.58& 
89.54$\pm$3.24\\ && SMOTE& 93.93$\pm$0.01& 98.59$\pm$0.00& 97.71$\pm$0.44& 94.63$\pm$1.02& 89.89$\pm$1.98\\ && ROS& 94.22$\pm$0.01& 98.01$\pm$0.00& 97.04$\pm$0.75& 94.68$\pm$1.47& 89.99$\pm$2.80\\ & 5-NN& UGRWO& \textbf{98.70$\pm$0.00}& 99.60$\pm$0.01& 99.12$\pm$0.12& \textbf{99.42$\pm$2.03}& \textbf{99.85$\pm$0.02}\\ && GRWO& 98.36$\pm$0.00& 99.68$\pm$0.00& \textbf{99.47$\pm$0.18}& 99.02$\pm$0.48& 98.36$\pm$1.02\\ && RWO& 98.50$\pm$0.00& 99.45$\pm$0.00& 99.19$\pm$0.25& 98.99$\pm$0.32& 98.57$\pm$0.63\\ && MWMOTE& 97.25$\pm$6.03& 98.25$\pm$0.32& 98.25$\pm$6.25& 98.47$\pm$0.54& 98.00$\pm$3.25\\ && SMOTE& 98.69$\pm$0.00& \textbf{99.67$\pm$0.00}& 99.48$\pm$0.26& 99.30$\pm$0.37& 99.00$\pm$0.68\\ && ROS& 98.53$\pm$0.00& 99.45$\pm$0.00& 99.20$\pm$0.23& 99.09$\pm$0.22& 98.86$\pm$0.50\\ & DT& UGRWO& \textbf{98.42$\pm$0.00}& \textbf{99.85$\pm$0.00}& \textbf{99.32$\pm$0.02}& \textbf{99.78$\pm$0.23}& \textbf{99.52$\pm$0.02}\\ && GRWO& 96.70$\pm$0.00& 99.36$\pm$0.00& 98.93$\pm$0.23& 98.14$\pm$0.68& 97.00$\pm$1.41\\ && RWO& 97.77$\pm$0.00& 99.17$\pm$0.00& 98.80$\pm$0.23& 98.50$\pm$0.45& 97.86$\pm$1.07\\ && MWMOTE& 96.25$\pm$0.01& 98.25$\pm$0.02& 98.56$\pm$6.23& 97.84$\pm$2.54& 96.58$\pm$1.54\\ && SMOTE& 97.58$\pm$0.00& 99.40$\pm$0.00& 99.04$\pm$0.34& 98.56$\pm$0.61& 97.86$\pm$1.16\\ && ROS& 98.67$\pm$0.00& 99.51$\pm$0.00& 99.28$\pm$0.25& 99.24$\pm$0.31& 99.14$\pm$0.62\\ &Ada&UGRWO& \textbf{96.94$\pm$0.00}& 99.52$\pm$0.01& \textbf{98.95$\pm$2.05}& \textbf{97.52$\pm$0.65}& \textbf{95.85$\pm$2.03}\\ &Boost& GRWO& 95.25$\pm$0.01& 99.31$\pm$0.00& 98.79$\pm$0.61& 96.39$\pm$1.91& 93.32$\pm$3.52\\ &M1& RWO& 96.24$\pm$0.01& \textbf{98.67$\pm$0.00}& 98.03$\pm$0.72& 96.67$\pm$1.19& 93.88$\pm$2.14\\ && MWMOTE& 95.48$\pm$0.02& 96.01$\pm$0.02& 97.58$\pm$6.25& 96.84$\pm$6.25& 94.87$\pm$5.24\\ && SMOTE& 96.82$\pm$0.02& 99.23$\pm$0.00& 98.76$\pm$0.38& 97.49$\pm$0.67& 95.44$\pm$1.16\\ && ROS& 96.87$\pm$0.00& 98.38$\pm$0.00& 98.35$\pm$0.27& 97.27$\pm$0.58& 95.02$\pm$1.35\\ Segmentation& NB&
UGRWO& \textbf{99.69$\pm$0.00}& \textbf{99.08$\pm$0.01}& \textbf{99.54$\pm$0.96}& \textbf{99.10$\pm$1.88}& \textbf{100.0$\pm$0.00}\\ && GRWO& 94.05$\pm$0.03& 96.25$\pm$0.02& 95.40$\pm$3.26& 95.23$\pm$3.10& 94.64$\pm$3.47\\ && RWO& 97.14$\pm$0.01& 95.52$\pm$0.01& 96.51$\pm$1.57& 96.25$\pm$1.60& 97.35$\pm$2.39\\ && MWMOTE& 98.25$\pm$0.01& 94.28$\pm$0.01& 95.89$\pm$3.02& 97.84$\pm$3.25& 99.84$\pm$2.06\\ && SMOTE& 95.74$\pm$0.00& 95.53$\pm$0.00& 95.64$\pm$0.79& 95.62$\pm$0.81& 95.65$\pm$1.79\\ && ROS& 98.29$\pm$0.01& 97.19$\pm$0.01& 97.88$\pm$1.34& 97.39$\pm$1.69& 99.44$\pm$0.97\\ & 5-NN& UGRWO& \textbf{96.03$\pm$0.06}& \textbf{97.07$\pm$0.04}& \textbf{96.66$\pm$5.36}& \textbf{96.26$\pm$6.06}& \textbf{95.00$\pm$10.54}\\ && GRWO& 78.74$\pm$0.13& 94.44$\pm$0.03& 90.43$\pm$6.08& 86.16$\pm$8.15& 80.00$\pm$13.33\\ && RWO& 76.10$\pm$0.13& 90.78$\pm$0.04& 86.80$\pm$6.21& 81.81$\pm$11.44& 74.28$\pm$21.48\\ && MWMOTE& 80.14$\pm$0.01& 93.02$\pm$0.02& 90.25$\pm$0.02& 90.84$\pm$6.25& 91.47$\pm$2.58\\ && SMOTE& 79.61$\pm$0.12& 92.57$\pm$0.05& 89.14$\pm$7.19& 90.02$\pm$7.58& 92.00$\pm$10.32\\ && ROS& 80.54$\pm$0.09& 89.94$\pm$0.03& 86.84$\pm$5.14& 87.88$\pm$8.19& 92.50$\pm$15.81\\ & DT& UGRWO& 88.66$\pm$0.18& 93.30$\pm$0.08& 91.94$\pm$1.65& 89.71$\pm$15.56& 89.16$\pm$24.86\\ && GRWO& 90.85$\pm$0.08& 98.37$\pm$0.01& 97.25$\pm$2.36& 91.46$\pm$7.46& 84.16$\pm$13.86\\ && RWO& 89.67$\pm$0.08& 95.90$\pm$0.02& 94.16$\pm$4.21& 92.08$\pm$7.33& 88.39$\pm$14.19\\ && MWMOTE& \textbf{96.25$\pm$0.01}& 96.25$\pm$0.03& 95.87$\pm$2.89& \textbf{97.87$\pm$6.25}& 91.57$\pm$0.25\\ && SMOTE& 94.84$\pm$0.05& \textbf{98.63$\pm$0.01}& \textbf{97.84$\pm$2.27}& 95.67$\pm$5.07& \textbf{92.33$\pm$9.94}\\ && ROS& 95.02$\pm$0.04& 97.74$\pm$0.02& 96.90$\pm$3.03& 96.56$\pm$3.49& 96.07$\pm$6.34\\ &Ada& UGRWO& 89.39$\pm$0.12& 92.95$\pm$0.08& 90.88$\pm$11.02& 89.16$\pm$13.59& 90.00$\pm$17.48\\ &Boost& GRWO& 91.42$\pm$0.07& 98.37$\pm$0.01& 97.27$\pm$2.34& 91.96$\pm$6.91& 85.00$\pm$12.90\\ &M1& RWO& 88.46$\pm$0.08&
95.73$\pm$0.02& 93.80$\pm$4.13& 90.26$\pm$8.25& 83.75$\pm$15.64\\ && MWMOTE& \textbf{98.90$\pm$0.01}& \textbf{99.78$\pm$0.02}& \textbf{99.89$\pm$2.03}& \textbf{98.89$\pm$1.54}& \textbf{98.89$\pm$3.45}\\ && SMOTE& 98.88$\pm$3.03& 99.72$\pm$0.00& 99.56$\pm$1.37& 98.97$\pm$3.33& 98.00$\pm$6.32\\ && ROS& 98.33$\pm$0.01& 99.47$\pm$0.01& 99.20$\pm$2.52& 98.45$\pm$4.89& 97.14$\pm$9.03\\ vehicle& NB& UGRWO& 80.39$\pm$0.05& 72.76$\pm$0.05& 74.69$\pm$5.41& 75.74$\pm$5.59& 72.69$\pm$7.44\\ && GRWO& 69.98$\pm$0.06& 83.85$\pm$0.03& 79.04$\pm$4.23& 75.88$\pm$7.99& 69.43$\pm$1.25\\ && RWO& \textbf{82.43$\pm$0.03}& \textbf{85.95$\pm$0.02}& \textbf{84.41$\pm$3.10}& \textbf{83.69$\pm$3.44}& 76.71$\pm$6.20\\ && MWMOTE& 81.26$\pm$0.01& 84.25$\pm$0.02& 83.46$\pm$0.25& 83.01$\pm$4.01& 92.15$\pm$2.15\\ && SMOTE& 70.90$\pm$0.03& 70.92$\pm$0.06& 71.00$\pm$4.82& 72.89$\pm$5.15& 92.21$\pm$3.79\\ && ROS& 78.12$\pm$0.02& 70.04$\pm$0.05& 74.76$\pm$3.49& 73.10$\pm$4.24& \textbf{93.63$\pm$3.24}\\ & 5-NN& UGRWO& \textbf{97.53$\pm$0.02}& 93.21$\pm$0.05& \textbf{96.38$\pm$3.27}& 95.40$\pm$4.30& 97.37$\pm$3.41\\ && GRWO& 86.92$\pm$0.04& 94.30$\pm$0.01& 91.96$\pm$2.39& 89.15$\pm$3.93& 83.73$\pm$7.11\\ && RWO& 85.12$\pm$0.02& 88.40$\pm$0.01& 86.98$\pm$2.15& 86.14$\pm$2.39& 77.89$\pm$4.05\\ && MWMOTE& 93.15$\pm$0.01& 91.25$\pm$0.03& 95.15$\pm$6.25& 92.15$\pm$4.02& 97.48$\pm$0.26\\ && SMOTE& 93.68$\pm$0.02& \textbf{95.75$\pm$0.01}& 94.92$\pm$2.02& \textbf{95.50$\pm$1.79}& 98.25$\pm$2.37\\ && ROS& 94.21$\pm$0.02& 94.03$\pm$0.03& 94.13$\pm$2.84& 94.18$\pm$2.86& \textbf{98.99$\pm$1.79}\\ & DT& UGRWO& 96.65$\pm$0.02& 91.75$\pm$0.03& 95.17$\pm$3.95& 95.10$\pm$4.66& 95.68$\pm$2.77\\ && GRWO& 90.08$\pm$0.02& 94.71$\pm$0.01& 92.91$\pm$1.84& 91.96$\pm$2.43& 88.91$\pm$4.82\\ && RWO& 95.01$\pm$0.02& 95.31$\pm$0.02& 95.17$\pm$2.70& 95.14$\pm$2.74& 95.46$\pm$3.49\\ && MWMOTE& \textbf{98.45$\pm$0.01}& 97.15$\pm$0.01& \textbf{97.84$\pm$0.01}& \textbf{98.25$\pm$0.02}& \textbf{98.48$\pm$2.14}\\ && SMOTE& 91.72$\pm$0.03& 
94.88$\pm$0.02& 93.68$\pm$2.55& 93.33$\pm$2.87& 92.19$\pm$5.10\\ && ROS& 97.08$\pm$0.01& \textbf{97.27$\pm$0.01}& 97.18$\pm$1.42& 97.20$\pm$1.44& 97.98$\pm$2.34\\ &Ada& UGRWO& \textbf{96.19$\pm$0.02}& 87.89$\pm$0.08& \textbf{94.21$\pm$4.17}& \textbf{91.82$\pm$4.04}& \textbf{98.36$\pm$2.33}\\ &Boost& GRWO& 87.60$\pm$0.02& \textbf{94.23$\pm$0.01}& 92.14$\pm$1.78& 90.52$\pm$2.65& 86.65$\pm$6.22\\ &M1& RWO& 90.79$\pm$0.02& 90.36$\pm$0.02& 90.59$\pm$2.25& 90.59$\pm$2.32& 96.31$\pm$3.03\\ && MWMOTE& 96.01$\pm$0.01& 93.25$\pm$0.01& 94.15$\pm$0.01& 90.25$\pm$2.03& 96.26$\pm$3.15\\ && SMOTE& 89.63$\pm$0.03& 93.16$\pm$0.02& 91.77$\pm$2.75& 92.05$\pm$3.07& 93.70$\pm$6.01\\ && ROS& 91.99$\pm$0.01& 91.75$\pm$0.01& 91.87$\pm$1.80& 91.91$\pm$1.83& 96.99$\pm$2.46\\ sonar& NB& UGRWO& \textbf{92.88$\pm$0.05}& 65.33$\pm$0.06& \textbf{88.02$\pm$8.25}& \textbf{86.24$\pm$12.5}& \textbf{88.52$\pm$9.74}\\ && GRWO& 77.40$\pm$0.07& \textbf{80.48$\pm$0.05}& 79.09$\pm$6.59& 79.15$\pm$8.96& 67.30$\pm$16.6\\ && RWO& 84.71$\pm$0.06& 74.05$\pm$0.07& 80.85$\pm$7.14& 84.82$\pm$6.18& 74.73$\pm$9.27\\ && MWMOTE& 91.25$\pm$0.03& 80.02$\pm$0.02& 79.25$\pm$0.17& 84.26$\pm$0.23& 84.25$\pm$2.03\\ && SMOTE& 82.65$\pm$0.04& 62.96$\pm$0.10& 76.43$\pm$6.49& 69.83$\pm$8.17& 88.23$\pm$6.14\\ && ROS& 84.69$\pm$0.03& 54.26$\pm$0.11& 77.13$\pm$5.50& 65.48$\pm$9.04& 87.64$\pm$5.62\\ & 5-NN& UGRWO& \textbf{96.88$\pm$0.02}& 70.33$\pm$0.16& \textbf{94.20$\pm$4.40}& 82.01$\pm$16.21& \textbf{98.39$\pm$3.38}\\ && GRWO& 86.34$\pm$0.08& \textbf{83.70$\pm$0.07}& 85.29$\pm$7.82& 84.74$\pm$7.74& 85.71$\pm$13.04\\ && RWO &93.14$\pm$0.02& 81.67$\pm$0.08& 90.03$\pm$4.10& \textbf{87.10$\pm$6.71}& 93.13$\pm$3.21\\ && MWMOTE& 92.15$\pm$0.02& 82.15$\pm$0.05& 90.02$\pm$1.03& 86.15$\pm$6.45& 95.18$\pm$6.25\\ && SMOTE& 90.45$\pm$0.05& 82.07$\pm$0.10& 87.55$\pm$7.12& 85.14$\pm$7.86& 92.84$\pm$7.22\\ && ROS& 93.41$\pm$0.03& 81.21$\pm$0.10& 90.31$\pm$5.36& 85.43$\pm$8.63& 95.17$\pm$5.19\\ & DT& UGRWO& \textbf{95.59$\pm$0.05}& 
64.33$\pm$0.18& \textbf{92.08$\pm$9.18}& 78.06$\pm$11.95& \textbf{96.02$\pm$5.54}\\ & & GRWO& 79.35$\pm$0.06& \textbf{75.65$\pm$0.06}& 77.76$\pm$6.60& 77.52$\pm$6.56& 79.67$\pm$11.00\\ && RWO& 90.55$\pm$0.03& 75.10$\pm$0.08& 86.33$\pm$5.04& \textbf{82.22$\pm$7.13}& 90.71$\pm$4.33\\ && MWMOTE& 94.56$\pm$0.01& 74.25$\pm$0.02& 87.14$\pm$0.02& 80.12$\pm$3.02& 95.15$\pm$3.21\\ && SMOTE& 86.25$\pm$0.03& 73.33$\pm$0.06& 81.97$\pm$3.78& 78.21$\pm$5.84& 89.21$\pm$6.09\\ && ROS& 85.79$\pm$0.05& 58.05$\pm$0.12& 78.90$\pm$7.03& 68.11$\pm$9.72& 88.68$\pm$7.71\\ &Ada& UGRWO& \textbf{97.31$\pm$0.02}& 76.71$\pm$0.14& \textbf{94.93$\pm$4.71}& \textbf{82.97$\pm$12.92}& \textbf{100.00$\pm$0.00}\\ &Boost& GRWO& 84.72$\pm$0.07& \textbf{80.57$\pm$0.06}& 82.91$\pm$8.25& 82.35$\pm$8.70& 84.42$\pm$8.68\\ &M1& RWO& 89.73$\pm$0.04& 72.19$\pm$0.10& 85.05$\pm$5.93& 79.51$\pm$7.91& 90.70$\pm$6.10\\ && MWMOTE& 96.58$\pm$0.02& 79.28$\pm$0.01& 86.45$\pm$0.04& 80.25$\pm$0.03& 99.58$\pm$0.47\\ && SMOTE& 87.20$\pm$0.02& 72.16$\pm$0.09& 82.61$\pm$4.43& 76.78$\pm$8.44& 92.76$\pm$6.08\\ && ROS& 92.25$\pm$0.02& 75.63$\pm$0.10& 88.29$\pm$3.76& 80.49$\pm$8.81& 95.88$\pm$3.54\\ Glass& NB& UGRWO& 73.08$\pm$0.15& 88.22$\pm$0.09& 82.27$\pm$14.38& 76.23$\pm$12.36& 59.50$\pm$17.86\\ && GRWO& 69.17$\pm$0.12& 94.31$\pm$0.02& 90.42$\pm$4.38& 76.23$\pm$11.09& 61.50$\pm$19.30\\ && RWO& 71.71$\pm$0.19& 94.36$\pm$0.02& 90.69$\pm$4.74& 78.24$\pm$17.14& 66.33$\pm$36.96\\ && MWMOTE& 90.25$\pm$0.02& 96.25$\pm$0.02& 94.85$\pm$3.02& 90.25$\pm$2.54& 93.02$\pm$3.02\\ && SMOTE& 82.90$\pm$0.16& 96.61$\pm$0.03& 94.40$\pm$5.77& 91.63$\pm$10.34& 89.16$\pm$18.44\\ && ROS& \textbf{91.85$\pm$0.08}& \textbf{97.63$\pm$0.02}& \textbf{96.35$\pm$4.02}& \textbf{96.09$\pm$4.80}& \textbf{96.00$\pm$8.43}\\ &5-NN& UGRWO& 90.14$\pm$0.16& 97.77$\pm$0.03& 96.42$\pm$5.05& 91.25$\pm$13.81& 85.00$\pm$22.49\\ && GRWO& 78.95$\pm$0.17& 97.28$\pm$0.01& 95.23$\pm$3.21& 83.38$\pm$15.31& 72.50$\pm$24.54\\ && RWO& 80.00$\pm$0.19& 96.11$\pm$0.03& 
93.54$\pm$5.18& 83.24$\pm$16.05& 72.33$\pm$23.62\\ && MWMOTE& 96.15$\pm$0.02& 98.25$\pm$0.03& 97.84$\pm$2.54& 98.65$\pm$0.03& 99.84$\pm$0.15\\ && SMOTE& 96.03$\pm$0.06& \textbf{99.21$\pm$0.08}& 98.69$\pm$2.10& \textbf{99.24$\pm$1.24}& \textbf{100.00$\pm$0.00}\\ && ROS& \textbf{97.27$\pm$0.04}& 99.21$\pm$0.01& \textbf{98.78$\pm$1.95}& 99.22$\pm$1.24& \textbf{100.00$\pm$0.00}\\ & DT& UGRWO& 89.44$\pm$0.12& 96.16$\pm$0.03& 94.28$\pm$6.99& 93.80$\pm$9.02& 93.50$\pm$14.15\\ && GRWO& 93.84$\pm$0.08& 98.76$\pm$0.01& 97.94$\pm$2.87& 95.08$\pm$7.91& 91.50$\pm$14.53\\ && RWO& 97.77$\pm$0.04& 99.49$\pm$0.01& 99.18$\pm$1.72& 97.88$\pm$4.45& 98.00$\pm$8.43\\ && MWMOTE& 98.54$\pm$0.02& 98.74$\pm$0.02& 98.15$\pm$1.02& 98.74$\pm$2.25& 99.48$\pm$2.30\\ && SMOTE& 98.88$\pm$0.03& \textbf{99.74$\pm$0.00}& 99.58$\pm$1.31& \textbf{99.74$\pm$0.80}& \textbf{100.00$\pm$0.00}\\ && ROS& \textbf{99.23$\pm$0.02}& 99.72$\pm$0.00& \textbf{99.60$\pm$1.26}& 99.73$\pm$0.84& \textbf{100.00$\pm$0.00}\\ &Ada&UGRWO& 94.16$\pm$0.08& 98.26$\pm$0.02& 97.33$\pm$3.67& 96.65$\pm$6.95& 96.00$\pm$12.64\\ &Boost& GRWO& 92.14$\pm$0.10& 98.49$\pm$0.02& 97.48$\pm$3.52& 93.50$\pm$8.74& 88.50$\pm$15.46\\ &M1& RWO& 90.47$\pm$0.12& 97.72$\pm$0.02& 96.34$\pm$4.51& 92.84$\pm$9.51& 88.33$\pm$16.72\\ && MWMOTE& 97.84$\pm$0.01& 97.51$\pm$0.02& 98.25$\pm$0.02& 98.45$\pm$6.25& 98.45$\pm$8.14\\ && SMOTE& 96.34$\pm$0.05& 99.21$\pm$0.01& 98.71$\pm$2.07& 98.14$\pm$4.19& 97.50$\pm$7.90\\ && ROS& \textbf{98.18$\pm$0.03}& \textbf{99.48$\pm$0.01}& \textbf{99.20$\pm$1.68}& \textbf{99.49$\pm$1.06}& \textbf{100.00$\pm$0.00}\\ \end{longtable} \begin{longtable}{llllllll} \caption{Averaged results and standard deviations on nine continuous attribute datasets (over-sampling rate equals $300\%$).} \label{tab7} \endfirsthead \endhead \hline ds & Alg & OS & f-min & f-maj & O-acc & G-mean & TP rate \\\hline Breast\_w& NB& UGRWO& \textbf{99.23$\pm$0.02}& \textbf{96.25$\pm$0.01}& \textbf{97.85$\pm$0.02}& \textbf{96.84$\pm$2.05}&
\textbf{98.54$\pm$2.35}\\ && GRWO& 95.94$\pm$0.03& 96.86$\pm$0.02& 96.31$\pm$2.38& 96.57$\pm$2.12& 98.00$\pm$1.72\\ && RWO& 97.90$\pm$0.01& 95.61$\pm$0.02& 97.16$\pm$1.46& 96.63$\pm$1.73& 98.11$\pm$1.28\\ && MWMOTE& 98.25$\pm$0.02& 95.26$\pm$0.02& 96.25$\pm$0.03& 95.46$\pm$1.02& 97.15$\pm$4.02\\ && SMOTE& 97.39$\pm$0.02& 95.83$\pm$0.03& 96.79$\pm$2.59& 96.48$\pm$2.78& 97.79$\pm$2.26\\ && ROS& 98.03$\pm$0.00& 95.83$\pm$0.01& 97.32$\pm$0.64& 96.74$\pm$0.87& 98.34$\pm$1.21\\ & 5-NN& UGRWO& \textbf{99.87$\pm$0.00}& \textbf{99.41$\pm$0.01}& \textbf{99.75$\pm$0.77}& \textbf{99.84$\pm$0.48}&99.77$\pm$0.71\\ && GRWO& 97.66$\pm$0.01& 97.70$\pm$0.01& 97.67$\pm$1.91& 97.66$\pm$1.92& 98.67$\pm$2.19\\ && RWO& 99.06$\pm$0.00& 97.97$\pm$0.01& 98.72$\pm$0.87& 98.00$\pm$1.38& \textbf{100.0$\pm$0.00}\\ && MWMOTE& 98.25$\pm$0.01& 98.25$\pm$0.01& 98.15$\pm$2.36& 98.47$\pm$1.57& 99.85$\pm$1.04\\ && SMOTE& 98.67$\pm$0.01& 97.76$\pm$0.01& 98.31$\pm$1.48& 97.87$\pm$1.81& 99.72$\pm$0.58\\ && ROS& 98.87$\pm$0.00& 97.53$\pm$0.01& 98.45$\pm$1.13& 97.67$\pm$1.69& 99.79$\pm$0.65\\ & DT& UGRWO& \textbf{99.44$\pm$0.00}& \textbf{98.56$\pm$0.03}& \textbf{99.45$\pm$1.13}& \textbf{99.09$\pm$2.36}& \textbf{99.72$\pm$0.87}\\ && GRWO& 94.90$\pm$0.02& 95.87$\pm$0.01& 95.01$\pm$2.76& 94.98$\pm$2.77& 95.00$\pm$4.68\\ && RWO& 97.76$\pm$0.01& 95.22$\pm$0.02& 96.95$\pm$1.62& 96.06$\pm$2.06& 98.53$\pm$1.31\\ && MWMOTE& 98.02$\pm$0.01& 95.85$\pm$0.01& 96.58$\pm$2.03& 96.54$\pm$3.01& 98.52$\pm$0.01\\ && SMOTE& 96.47$\pm$0.01& 94.43$\pm$0.01& 95.68$\pm$1.22& 95.43$\pm$1.52& 96.39$\pm$1.87\\ && ROS& 98.04$\pm$0.01& 95.80$\pm$0.02& 97.33$\pm$1.65& 96.74$\pm$2.27& 98.34$\pm$0.87\\ &Ada&UGRWO& \textbf{99.50$\pm$0.00}& \textbf{98.29$\pm$0.02}& \textbf{99.26$\pm$1.17}& \textbf{99.53$\pm$0.74}& \textbf{99.33$\pm$1.40}\\ &Boost& GRWO& 96.11$\pm$0.01& 96.32$\pm$0.02& 96.22$\pm$2.51& 96.16$\pm$2.59& 95.68$\pm$4.47\\ &M1& RWO& 97.57$\pm$0.01& 95.08$\pm$0.02& 96.75$\pm$1.76& 96.50$\pm$1.64& 97.17$\pm$2.56\\ && MWMOTE& 
98.25$\pm$0.01& 97.25$\pm$0.01& 96.58$\pm$0.02& 98.54$\pm$0.02& 96.25$\pm$0.05\\ && SMOTE& 97.14$\pm$0.00& 95.57$\pm$0.01& 96.53$\pm$1.22& 96.53$\pm$1.58& 96.26$\pm$1.13\\ && ROS& 96.74$\pm$0.01& 93.47$\pm$0.02& 95.63$\pm$1.85& 95.61$\pm$2.08& 95.64$\pm$1.95\\ Diabetes& NB& UGRWO& 61.61$\pm$0.07& \textbf{88.82$\pm$0.03}& \textbf{82.78$\pm$5.11}& \textbf{84.94$\pm$6.57}& 93.40$\pm$7.99\\ && GRWO& 68.80$\pm$0.06& 77.87$\pm$0.03& 73.79$\pm$4.55& 72.04$\pm$5.49& \textbf{93.60$\pm$3.74}\\ && RWO& 66.44$\pm$3.53& 74.95$\pm$3.89& 71.32$\pm$3.67& 73.24$\pm$3.29& 81.38$\pm$3.44\\ && MWMOTE& 80.25$\pm$0.03& 87.25$\pm$0.01& 80.25$\pm$3.04& 82.45$\pm$3.25& 90.25$\pm$3.02\\ && SMOTE& 79.95$\pm$0.02& 69.51$\pm$0.03& 75.84$\pm$3.16& 74.91$\pm$3.03& 78.36$\pm$4.08\\ && ROS& \textbf{81.54$\pm$0.03}& 63.22$\pm$0.05& 75.44$\pm$3.86& 72.65$\pm$4.53& 79.66$\pm$3.80\\ & 5-NN& UGRWO& \textbf{93.31$\pm$0.02}& 68.96$\pm$0.10& \textbf{89.06$\pm$4.58}& \textbf{84.15$\pm$8.38}& 90.89$\pm$5.15\\ && GRWO& 68.01$\pm$0.03& 75.09$\pm$0.04& 72.06$\pm$3.94& 70.80$\pm$3.42& \textbf{61.38$\pm$4.00}\\ && RWO& 78.85$\pm$0.03& 65.43$\pm$0.04& 73.79$\pm$3.86& 74.73$\pm$3.78& 71.92$\pm$4.75\\ && MWMOTE& 92.03$\pm$0.01& 72.25$\pm$0.03& 88.56$\pm$2.58& 83.25$\pm$2.06& 92.58$\pm$2.05\\ && SMOTE& 84.95$\pm$0.02& 67.86$\pm$0.04& 79.52$\pm$2.80& 72.75$\pm$3.70& 93.79$\pm$3.63\\ && ROS& \textbf{86.57$\pm$0.01}& 62.22$\pm$0.05& 80.21$\pm$2.43& 69.33$\pm$4.57& 93.56$\pm$2.18\\ & DT& UGRWO& 92.05$\pm$0.02& 63.99$\pm$0.14& 86.48$\pm$4.29& 75.14$\pm$10.32& 91.86$\pm$3.14\\ && GRWO& 79.87$\pm$0.05& 78.48$\pm$0.05& 79.28$\pm$5.47& 79.06$\pm$5.68& 79.72$\pm$7.82\\ && RWO& 89.61$\pm$0.01& 78.16$\pm$0.03& 85.94$\pm$2.41& \textbf{83.93$\pm$3.02}& 89.08$\pm$2.85\\ && MWMOTE& \textbf{93.02$\pm$0.01}& \textbf{81.25$\pm$0.02}& \textbf{89.26$\pm$2.15}& 82.56$\pm$2.13& \textbf{97.58$\pm$2.45}\\ && SMOTE& 84.57$\pm$0.02& 73.99$\pm$0.04& 80.67$\pm$3.22& 78.56$\pm$3.76& 86.06$\pm$4.68\\ && ROS& 92.25$\pm$0.01& 80.56$\pm$0.02& 
88.93$\pm$1.43& 83.57$\pm$2.56& 96.64$\pm$2.02\\ &Ada& UGRWO& \textbf{93.64$\pm$0.01}& 61.06$\pm$0.10& \textbf{89.86$\pm$3.32}& 71.48$\pm$8.57& \textbf{94.65$\pm$5.72}\\ &Boost& GRWO& 80.14$\pm$0.02& 78.68$\pm$0.03& 79.47$\pm$2.80& 79.34$\pm$2.82& 80.11$\pm$5.07\\ &M1& RWO& 90.23$\pm$0.01& \textbf{78.77$\pm$0.03}& 86.63$\pm$2.32& \textbf{84.01$\pm$2.84}& 90.66$\pm$3.31\\ && MWMOTE& 92.25$\pm$0.01& 78.02$\pm$0.02& 89.23$\pm$0.23& 83.54$\pm$2.56& 93.54$\pm$3.25\\ && SMOTE& 84.28$\pm$0.03& 70.52$\pm$0.06& 79.52$\pm$4.02& 75.49$\pm$4.85& 89.04$\pm$3.93\\ && ROS& 84.97$\pm$0.01& 62.37$\pm$0.03& 78.56$\pm$2.21& 70.48$\pm$3.01& 89.09$\pm$4.02\\ Ionosphere& NB& UGRWO& \textbf{97.85$\pm$0.02}& 86.14$\pm$0.50& \textbf{96.31$\pm$3.55}& \textbf{91.16$\pm$3.78}& \textbf{98.16$\pm$2.96}\\ && GRWO& 83.68$\pm$0.06& \textbf{93.69$\pm$0.02}& 90.70$\pm$3.53& 89.19$\pm$4.40& 88.26$\pm$10.9\\ && RWO& 82.91$\pm$3.33& 90.07$\pm$3.38& 87.44$\pm$3.90& 86.86$\pm$3.15& 84.92$\pm$3.32\\ && MWMOTE& 86.25$\pm$2.15& 90.56$\pm$3.02& 88.25$\pm$2.45& 90.45$\pm$5.26& 97.85$\pm$3.58\\ && SMOTE& 85.42$\pm$0.03& 75.46$\pm$0.04& 81.77$\pm$3.89& 80.04$\pm$4.08& 85.71$\pm$6.19\\ && ROS& 88.54$\pm$0.02& 75.07$\pm$0.04& 84.35$\pm$2.91& 81.73$\pm$3.47& 87.88$\pm$5.04\\ & 5-NN& UGRWO& 91.10$\pm$0.04& 78.61$\pm$0.11& 87.13$\pm$8.25& 91.25$\pm$5.82& 84.48$\pm$8.01\\ && GRWO& 82.54$\pm$0.06& 89.83$\pm$0.03& 87.33$\pm$4.33& 84.21$\pm$5.66& 72.79$\pm$9.04\\ && RWO& 96.85$\pm$0.01& 93.37$\pm$0.03& \textbf{95.73$\pm$2.28}& \textbf{95.91$\pm$2.43}& 95.40$\pm$2.50\\ && MWMOTE& 95.85$\pm$0.01& 93.25$\pm$0.02& 94.12$\pm$0.12& 94.18$\pm$2.47& 93.45$\pm$0.03\\ && SMOTE& 96.50$\pm$0.01& \textbf{94.34$\pm$0.02}& 95.68$\pm$1.60& 95.79$\pm$3.04& 92.85$\pm$3.75\\ && ROS& \textbf{96.99$\pm$0.01}& 93.47$\pm$0.03& 95.69$\pm$1.94& 95.75$\pm$2.28& 96.03$\pm$2.47\\ &DT& UGRWO& \textbf{98.44$\pm$0.02}& 91.50$\pm$0.11& \textbf{97.39$\pm$3.62}& \textbf{95.40$\pm$7.52}& \textbf{98.19$\pm$4.01}\\ && GRWO& 88.74$\pm$0.06& 92.08$\pm$0.04&
90.71$\pm$4.80& 90.37$\pm$5.38& 91.85$\pm$6.09\\ && RWO& 96.12$\pm$0.06& 90.34$\pm$0.05& 93.81$\pm$4.03& 92.89$\pm$3.53& 95.20$\pm$5.09\\ && MWMOTE& 95.84$\pm$0.26& 90.01$\pm$0.02& 92.32$\pm$2.14& 90.25$\pm$3.62& 94.15$\pm$6.02\\ && SMOTE& 90.21$\pm$0.04& 83.21$\pm$0.05& 87.69$\pm$4.62& 85.97$\pm$4.41& 91.75$\pm$7.67\\ && ROS& 97.25$\pm$0.02& \textbf{93.68$\pm$0.05}& 96.17$\pm$3.19& 94.94$\pm$3.87& 98.03$\pm$2.92\\ &Ada&UGRWO& \textbf{96.66$\pm$0.03}& 83.75$\pm$0.11& \textbf{94.21$\pm$6.30}& \textbf{91.16$\pm$9.66}& \textbf{97.57$\pm$4.18}\\ &Boost& GRWO& 87.99$\pm$0.06& \textbf{92.69$\pm$0.03}& 90.93$\pm$4.33& 89.07$\pm$5.60& 82.52$\pm$11.76\\ &M1& RWO& 90.80$\pm$0.03& 82.12$\pm$0.05& 87.88$\pm$4.04& 87.93$\pm$3.54& 87.61$\pm$6.10\\ && MWMOTE& 95.88$\pm$3.02& 90.23$\pm$0.01& 93.25$\pm$3.26& 90.45$\pm$2.5& 90.25$\pm$3.26\\ && SMOTE& 87.13$\pm$0.04& 81.89$\pm$0.05& 85.04$\pm$4.60& 85.87$\pm$5.24& 81.20$\pm$6.83\\ && ROS& 93.25$\pm$0.02& 86.13$\pm$0.05& 90.94$\pm$3.66& 90.62$\pm$3.79& 91.27$\pm$5.30\\ Musk& NB& UGRWO& \textbf{95.25$\pm$0.02}& \textbf{99.56$\pm$0.02}& \textbf{98.92$\pm$1.02}& \textbf{96.01$\pm$0.03}& \textbf{92.52$\pm$3.02}\\ && GRWO& 93.29$\pm$0.01& 98.99$\pm$0.00& 98.25$\pm$0.30& 94.41$\pm$1.06& 89.47$\pm$2.06\\ && RWO& 94.12$\pm$0.01& 98.63$\pm$0.00& 97.78$\pm$0.38& 94.82$\pm$1.04& 90.25$\pm$1.97\\ && MWMOTE& 93.23$\pm$0.01& 98.52$\pm$0.02& 98.42$\pm$1.02& 94.95$\pm$2.12& 90.54$\pm$1.04\\ && SMOTE& 93.29$\pm$0.01& 99.20$\pm$0.00& 98.58$\pm$0.35& 94.63$\pm$1.26& 88.89$\pm$2.47\\ && ROS& 93.77$\pm$0.01& 98.55$\pm$0.00& 97.66$\pm$0.43& 94.47$\pm$1.23& 89.60$\pm$2.47\\ & 5-NN& UGRWO& \textbf{96.96$\pm$0.01}& 90.60$\pm$0.05& \textbf{95.41$\pm$2.76}& 92.65$\pm$4.05& \textbf{98.93$\pm$1.80}\\ && GRWO& 91.93$\pm$0.01& \textbf{92.66$\pm$0.01}& 91.90$\pm$1.30& 91.82$\pm$1.38& 96.14$\pm$3.86\\ && RWO& 94.69$\pm$0.01& 90.68$\pm$0.01& 93.26$\pm$1.44& \textbf{93.12$\pm$1.99}& 93.20$\pm$4.48\\ && MWMOTE& 93.26$\pm$0.01& 91.25$\pm$0.03& 92.25$\pm$2.13& 
92.10$\pm$1.16& 92.25$\pm$2.42\\ && SMOTE& 94.89$\pm$0.01& 91.20$\pm$0.03& 93.54$\pm$2.17& 91.88$\pm$2.84& 98.69$\pm$1.34\\ && ROS& 95.37$\pm$0.01& 90.08$\pm$0.03& 93.69$\pm$1.93& 90.63$\pm$3.07& 99.83$\pm$0.35\\ & DT& UGRWO& 80.39$\pm$0.05& 72.76$\pm$0.05& 74.69$\pm$5.41& 75.74$\pm$5.59& 72.69$\pm$7.44\\ && GRWO &69.98$\pm$0.06& 83.85$\pm$0.03& 79.04$\pm$4.23& 75.88$\pm$7.99& 69.43$\pm$1.25\\ && RWO& \textbf{82.43$\pm$0.03}& \textbf{85.95$\pm$0.02}& \textbf{84.41$\pm$3.10}& \textbf{83.69$\pm$3.44}& 76.71$\pm$6.20\\ && MWMOTE& 81.26$\pm$0.01& 84.25$\pm$0.02& 83.46$\pm$0.25& 83.01$\pm$4.01& 92.15$\pm$2.15\\ && SMOTE& 70.90$\pm$0.03& 70.92$\pm$0.06& 71.00$\pm$4.82& 72.89$\pm$5.15& 92.21$\pm$3.79\\ && ROS& 78.12$\pm$0.02& 70.04$\pm$0.05& 74.76$\pm$3.49& 73.10$\pm$4.24& \textbf{93.63$\pm$3.24}\\ &Ada&UGRWO& 82.56$\pm$0.06& 72.67$\pm$0.05& 76.89$\pm$5.27& 78.69$\pm$5.33& 74.53$\pm$7.16\\ &Boost& GRWO& 76.30$\pm$0.04& 85.35$\pm$0.02& 81.92$\pm$3.16& 79.59$\pm$3.47& 71.71$\pm$5.62\\ &M1& RWO& \textbf{86.65$\pm$0.02}& \textbf{85.55$\pm$0.02}& \textbf{86.14$\pm$2.52}& \textbf{86.76$\pm$2.62}& 81.65$\pm$3.46\\ && MWMOTE& 85.26$\pm$0.02& 84.54$\pm$2.13& 85.95$\pm$0.03& 83.45$\pm$2.65& 80.25$\pm$3.12\\ && SMOTE& 78.95$\pm$0.04& 70.56$\pm$0.07& 75.49$\pm$5.59& 73.70$\pm$6.45& \textbf{95.15$\pm$3.37}\\ && ROS& 82.64$\pm$0.01& 69.75$\pm$0.03& 77.96$\pm$2.30& 73.49$\pm$2.95& 95.10$\pm$2.15\\ Satimage& NB&UGRWO& 94.52$\pm$0.01& 98.80$\pm$0.00& \textbf{98.89$\pm$2.00}& \textbf{95.53$\pm$3.21}& \textbf{90.87$\pm$0.87}\\ &&GRWO& 93.51$\pm$0.01& \textbf{98.94$\pm$0.00}& 98.16$\pm$0.41& 94.32$\pm$1.58& 89.33$\pm$2.95\\ && RWO& \textbf{94.83$\pm$0.01}& 97.62$\pm$0.00& 96.74$\pm$0.70& 95.16$\pm$0.95& 90.93$\pm$1.63\\ && MWMOTE& 93.25$\pm$0.01& 97.84$\pm$6.25& 95.84$\pm$2.32& 94.58$\pm$3.26& 90.02$\pm$2.06\\ && SMOTE& 94.32$\pm$0.01& 98.04$\pm$0.00& 97.09$\pm$0.76& 94.76$\pm$1.46& 90.13$\pm$2.74\\ && ROS& 94.36$\pm$0.00& 97.42$\pm$0.00& 96.46$\pm$0.54& 94.69$\pm$0.79& 90.00$\pm$1.45\\ & 
5-NN& UGRWO& \textbf{99.25$\pm$0.00}& \textbf{99.80$\pm$0.01}& \textbf{99.50$\pm$0.03}& \textbf{99.50$\pm$0.12}& \textbf{99.84$\pm$0.15}\\ && GRWO& 98.61$\pm$0.00& 99.68$\pm$0.00& 99.48$\pm$0.25& 99.20$\pm$0.59& 98.77$\pm$1.15\\ && RWO& 99.15$\pm$0.00& 99.58$\pm$0.00& 99.43$\pm$0.16& 99.47$\pm$0.16& 99.57$\pm$0.36\\ && MWMOTE& 98.25$\pm$0.00& 98.45$\pm$0.01& 98.45$\pm$0.02& 98.58$\pm$0.25& 98.58$\pm$0.02\\ && SMOTE& 98.91$\pm$0.00& 99.59$\pm$0.00& 99.41$\pm$0.19& 99.37$\pm$0.24& 99.28$\pm$0.51\\ && ROS& 98.99$\pm$0.00& 99.50$\pm$0.00& 99.33$\pm$0.26& 99.33$\pm$0.28& 99.35$\pm$0.43\\ & DT& UGRWO& \textbf{99.18$\pm$0.00} &\textbf{99.85$\pm$0.00}& \textbf{99.48$\pm$0.23}& \textbf{99.74$\pm$0.23}& \textbf{99.85$\pm$0.09}\\ && GRWO& 96.92$\pm$0.01& 99.49$\pm$0.00& 99.12$\pm$0.26& 98.31$\pm$0.82& 97.34$\pm$1.44\\ && RWO& 98.10$\pm$0.00& 99.07$\pm$0.00& 98.75$\pm$0.57& 98.52$\pm$0.80& 97.86$\pm$0.79\\ && MWMOTE& 98.02$\pm$0.01& 98.56$\pm$0.02& 98.47$\pm$0.01& 98.58$\pm$3.01& 96.87$\pm$0.01\\ && SMOTE& 98.10$\pm$0.00& 99.30$\pm$0.00& 98.97$\pm$0.34& 98.82$\pm$0.54& 98.48$\pm$1.11\\ && ROS& 99.17$\pm$0.00& 99.58$\pm$0.00& 99.44$\pm$0.24& 99.56$\pm$0.19& 99.89$\pm$0.24\\ &Ada&UGRWO& \textbf{97.85$\pm$0.00}& \textbf{99.23$\pm$0.02}& \textbf{98.53$\pm$2.03}& \textbf{98.01$\pm$0.05}& \textbf{95.90$\pm$2.35}\\ &Boost& GRWO& 95.62$\pm$0.01& 99.21$\pm$0.00& 98.42$\pm$0.58& 96.39$\pm$1.49& 93.32$\pm$2.83\\ &M1& RWO& 96.58$\pm$0.01& 98.39$\pm$0.00& 97.81$\pm$0.67& 96.83$\pm$0.97& 94.13$\pm$1.82\\ && MWMOTE& 95.83$\pm$0.05& 97.97$\pm$0.02& 97.29$\pm$3.05& 97.27$\pm$3.34& 97.50$\pm$5.27\\ && SMOTE& 97.16$\pm$0.01& 98.98$\pm$0.00& 98.50$\pm$0.56& 97.49$\pm$1.04& 95.40$\pm$2.04\\ && ROS& 96.62$\pm$0.00& 98.52$\pm$0.00& 98.01$\pm$0.47& 97.30$\pm$0.59& 95.34$\pm$1.20\\ Segmentation& NB& UGRWO& \textbf{96.66$\pm$0.03}& 83.75$\pm$0.11& \textbf{94.21$\pm$6.30}& \textbf{91.16$\pm$9.66}& \textbf{97.57$\pm$4.18}\\ && GRWO& \textbf{87.99$\pm$0.06}& 92.69$\pm$0.03& 90.93$\pm$4.33& 89.07$\pm$5.60& 
82.52$\pm$11.76\\ && RWO& 90.80$\pm$0.03& 82.12$\pm$0.05& 87.88$\pm$4.04& 87.93$\pm$3.54& 87.61$\pm$6.10\\ && MWMOTE& 89.25$\pm$0.01& 81.54$\pm$0.02& 93.25$\pm$0.03& 90.25$\pm$0.02& 87.84$\pm$0.02\\ && SMOTE& 87.13$\pm$0.04& 81.89$\pm$0.05& 85.04$\pm$4.60& 85.87$\pm$5.24& 81.20$\pm$6.83\\ && ROS& 93.25$\pm$0.02& 86.13$\pm$0.05& 90.94$\pm$3.66& 90.62$\pm$3.79& 91.27$\pm$5.30\\ & 5-NN& UGRWO& \textbf{93.84$\pm$0.08}& \textbf{95.07$\pm$0.07}& \textbf{94.55$\pm$7.77}& \textbf{94.29$\pm$8.06}& \textbf{93.85$\pm$11.35}\\ && GRWO& 65.23$\pm$0.18& 93.64$\pm$0.04& 89.24$\pm$7.24& 76.11$\pm$14.99& 64.16$\pm$24.23\\ && RWO& 71.49$\pm$0.13& 92.00$\pm$0.03& 87.55$\pm$6.04& 79.77$\pm$10.77& 69.66$\pm$17.99\\ && MWMOTE& 78.52$\pm$0.01& 91.25$\pm$0.02& 89.45$\pm$6.23& 78.25$\pm$6.54& 79.85$\pm$6.25\\ && SMOTE& 81.49$\pm$0.09& 90.10$\pm$0.04& 87.15$\pm$6.30& 88.58$\pm$6.90& 93.39$\pm$11.35\\ && ROS& 84.29$\pm$0.06& 87.62$\pm$0.06& 86.25$\pm$6.86& 87.79$\pm$5.98& 97.18$\pm$6.24\\ & DT& UGRWO& 88.22$\pm$0.12& 90.22$\pm$0.09& 89.55$\pm$10.31& 88.62$\pm$11.07& 86.66$\pm$18.62\\ && GRWO& 91.28$\pm$0.07& 97.85$\pm$0.02& 96.42$\pm$3.51& 93.50$\pm$4.79& 90.69$\pm$8.43\\ && RWO& 94.87$\pm$0.05& 97.28$\pm$0.02& 96.45$\pm$3.76& 95.59$\pm$4.77& 93.09$\pm$8.20\\ && MWMOTE& 95.25$\pm$0.05& 96.25$\pm$0.02& 95.84$\pm$2.58& 95.48$\pm$0.03& 92.25$\pm$6.25\\ && SMOTE& 95.83$\pm$0.05& 97.97$\pm$0.02& 97.29$\pm$3.05& 97.27$\pm$3.34& 97.50$\pm$5.27\\ && ROS& \textbf{98.17$\pm$0.02}& \textbf{98.85$\pm$0.01}& \textbf{98.59$\pm$1.81}& \textbf{98.87$\pm$1.45}& \textbf{100.0$\pm$0.00}\\ &Ada&UGRWO& 95.97$\pm$0.07& 95.95$\pm$0.05& 95.77$\pm$7.56& 95.63$\pm$7.74& 96.00$\pm$8.43\\ &Boost& GRWO& 90.77$\pm$0.10& 97.85$\pm$0.02& 96.85$\pm$4.29& 92.08$\pm$10.18& 86.00$\pm$16.96\\ &M1& RWO& 91.96$\pm$0.06& 95.62$\pm$0.03& 94.35$\pm$4.14& 93.27$\pm$5.35& 90.27$\pm$9.27\\ && MWMOTE& 90.25$\pm$0.01& 94.87$\pm$0.02& 93.45$\pm$0.25& 92.58$\pm$6.54& 89.25$\pm$3.24\\ && SMOTE& 97.23$\pm$0.03& 98.90$\pm$0.01& 
98.43$\pm$2.02& 97.68$\pm$3.21& 96.07$\pm$6.34\\ && ROS& \textbf{99.52$\pm$0.01}& \textbf{99.71$\pm$0.00}& \textbf{99.64$\pm$1.12}& \textbf{99.71$\pm$0.89}& \textbf{100.00$\pm$0.00}\\ sonar& NB &UGRWO& \textbf{93.71$\pm$0.05}& 68.00$\pm$0.23& \textbf{89.39$\pm$9.00}& \textbf{89.26$\pm$10.2}& \textbf{89.83$\pm$9.08}\\ && GRWO& 79.82$\pm$0.09& \textbf{79.57$\pm$0.04}& 79.35$\pm$8.74& 81.22$\pm$8.35& 69.12$\pm$11.3\\ && RWO& 88.55$\pm$0.03& 72.89$\pm$0.04& 83.96$\pm$3.89& 87.61$\pm$2.54& 80.66$\pm$6.07\\ && MWMOTE& 87.95$\pm$0.02& 79.25$\pm$0.05& 86.25$\pm$0.03& 80.25$\pm$6.02& 86.25$\pm$0.02\\ && SMOTE& 85.41$\pm$0.02& 58.08$\pm$0.09& 78.39$\pm$3.96& 68.99$\pm$7.49& 87.29$\pm$2.74\\ && ROS& 88.10$\pm$0.05& 56.00$\pm$0.18& 81.33$\pm$8.45& 68.06$\pm$13.75& 89.43$\pm$7.31\\ & 5-NN& UGRWO& \textbf{96.39$\pm$0.02}& 69.66$\pm$0.18& 93.35$\pm$4.32& 79.68$\pm$15.96& \textbf{98.94$\pm$2.21}\\ && GRWO& 89.25$\pm$0.04& \textbf{85.44$\pm$0.06}& 87.09$\pm$5.20& 86.66$\pm$5.51& 88.93$\pm$7.68\\ && RWO& 96.23$\pm$0.02& \textbf{85.06$\pm$0.10}& \textbf{94.00$\pm$4.55}& 87.25$\pm$7.97& 98.71$\pm$3.25\\ && MWMOTE& 92.82$\pm$0.02& 84.57$\pm$0.04& 89.35$\pm$8.74& 81.22$\pm$8.35& 69.12$\pm$11.3\\ && SMOTE& 94.88$\pm$0.02& 84.17$\pm$0.08 &92.28$\pm$3.82& 86.19$\pm$6.48& 98.62$\pm$2.90\\ && ROS& 95.22$\pm$0.02& 81.06$\pm$0.08& 92.39$\pm$3.37& 85.09$\pm$7.04& 97.42$\pm$2.71\\ & DT& UGRWO& \textbf{96.20$\pm$0.02}& 73.57$\pm$0.25& \textbf{93.32$\pm$7.05}& \textbf{82.42$\pm$16.98}& \textbf{96.97$\pm$5.32}\\ && GRWO& 82.50$\pm$0.09& \textbf{73.98$\pm$0.08}& 78.22$\pm$10.54& 76.34$\pm$8.25& 83.65$\pm$12.20\\ && RWO& 90.25$\pm$0.02& 64.83$\pm$0.09& \textbf{84.75$\pm$4.07}& \textbf{75.56$\pm$6.51}& 90.97$\pm$3.90\\ && MWMOTE& 89.36$\pm$0.02& 72.15$\pm$0.04& 90.91$\pm$7.69& 80.15$\pm$15.20& 95.23$\pm$23.52\\ && SMOTE& 89.89$\pm$0.04& 71.33$\pm$0.10& 85.10$\pm$6.34& 78.16$\pm$8.03& 92.12$\pm$6.62\\ && ROS& 87.85$\pm$0.05& 54.14$\pm$0.18& 80.95$\pm$7.76& 66.84$\pm$14.93& 89.18$\pm$7.42\\ & Ada&UGRWO& 
\textbf{96.24$\pm$0.03}& 66.71$\pm$0.26& \textbf{93.30$\pm$5.73}& 75.65$\pm$20.11& 97.80$\pm$3.54\\ &Boost& GRWO& 86.72$\pm$0.05& \textbf{78.22$\pm$0.09}& 83.04$\pm$7.20& \textbf{80.91$\pm$8.52}& 88.09$\pm$7.05\\ &M1& RWO& 92.66$\pm$0.03& 72.56$\pm$0.13& 88.59$\pm$4.97& 80.73$\pm$12.38& 93.31$\pm$6.95\\ && MWMOTE& 91.54$\pm$0.01& 76.25$\pm$0.01& 87.95$\pm$0.25& 80.21$\pm$0.98& 92.58$\pm$0.32\\ && SMOTE& 91.05$\pm$0.02& 69.74$\pm$0.13& 86.31$\pm$4.60& 76.01$\pm$12.65& 95.19$\pm$4.64\\ && ROS& 92.38$\pm$0.01& 62.59$\pm$0.09& 87.37$\pm$2.82& 68.70$\pm$7.57& \textbf{98.46$\pm$2.75}\\ Glass& NB& UGRWO& 76.64$\pm$0.13& 94.32$\pm$0.03& 90.57$\pm$5.71& 81.72$\pm$10.35& 70.33$\pm$17.52\\ && GRWO& 80.95$\pm$0.12& 88.22$\pm$0.06& 83.86$\pm$9.22& 82.83$\pm$10.77& 76.63$\pm$25.60\\ && RWO& 79.36$\pm$0.19& 94.15$\pm$0.04& 90.91$\pm$7.69& 84.15$\pm$15.20& 75.23$\pm$23.52\\ && MWMOTE& 80.25$\pm$0.01& 93.45$\pm$0.02& 90.02$\pm$3.02& 83.25$\pm$1.05& 78.95$\pm$6.24\\ && SMOTE& 88.71$\pm$0.09& \textbf{96.87$\pm$0.02}& \textbf{95.13$\pm$4.19}& 94.41$\pm$7.45& 94.00$\pm$0.04\\ && ROS& \textbf{90.94$\pm$0.06}& 96.61$\pm$0.02& 95.07$\pm$3.57& \textbf{95.26$\pm$4.37}& \textbf{95.71$\pm$6.90}\\ & 5-NN& UGRWO& 89.75$\pm$0.08& 96.34$\pm$0.02& 94.61$\pm$4.30& 90.45$\pm$7.52& 82.33$\pm$13.70\\ && GRWO& 79.22$\pm$0.12& 96.61$\pm$0.02& 94.16$\pm$4.41& 82.40$\pm$10.77& 71.50$\pm$28.58\\ && RWO& 77.81$\pm$0.08& 93.99$\pm$0.01& 90.56$\pm$3.20& 80.62$\pm$12.23& 66.19$\pm$12.23\\ && MWMOTE& 96.58$\pm$0.01& 92.25$\pm$0.02& 89.25$\pm$0.01& 80.95$\pm$0.26& 70.25$\pm$0.23\\ && SMOTE& 97.42$\pm$0.05& \textbf{99.17$\pm$0.01}& \textbf{98.75$\pm$2.81}& \textbf{99.19$\pm$1.82}& \textbf{100.00$\pm$0.00}\\ && ROS& \textbf{98.00$\pm$0.03}& 99.20$\pm$0.01& 98.60$\pm$2.42& 98.98$\pm$1.78& \textbf{100.00$\pm$0.00}\\ & DT& UGRWO& 93.93$\pm$0.01& 98.01$\pm$0.03& 97.02$\pm$4.82& 95.37$\pm$8.06& 92.66$\pm$13.40\\ && GRWO& 96.18$\pm$0.06& 98.99$\pm$0.01& 98.41$\pm$2.73& 97.55$\pm$3.95& 96.66$\pm$10.54\\ && RWO&
94.12$\pm$0.05& 97.96$\pm$0.01& 96.98$\pm$2.41& 95.94$\pm$4.84& 94.28$\pm$9.98\\ && MWMOTE& 98.58$\pm$0.01& 97.84$\pm$0.02& 97.84$\pm$0.23& 98.58$\pm$3.02& 99.84$\pm$0.02\\ && SMOTE& 99.23$\pm$0.02& \textbf{99.73$\pm$0.00}& \textbf{99.62$\pm$1.17}& \textbf{99.74$\pm$0.80}& \textbf{100.00$\pm$0.00}\\ && ROS& \textbf{99.33$\pm$0.02}&99.72$\pm$0.00& 99.61$\pm$1.21& 99.73$\pm$0.84& 100.00$\pm$0.00\\ &Ada& UGRWO& 97.23$\pm$0.06& 98.83$\pm$0.02& 98.36$\pm$3.71& 98.20$\pm$4.48& 98.00$\pm$6.32\\ &Boost& GRWO& 93.60$\pm$0.07& 98.49$\pm$0.02& 97.42$\pm$4.62& 95.06$\pm$6.43& 91.66$\pm$11.78\\ &M1& RWO& 80.80$\pm$0.15& 94.94$\pm$0.03& 92.01$\pm$6.30& 84.02$\pm$13.40& 73.33$\pm$22.72\\ && MWMOTE& 93.25$\pm$0.02& 98.57$\pm$0.01& 98.03$\pm$0.01& 98.25$\pm$0.02& 99.25$\pm$0.25\\ && SMOTE& 94.84$\pm$0.10& 98.75$\pm$0.02& 98.00$\pm$3.88& 96.56$\pm$9.14& 95.00$\pm$15.81\\ && ROS& \textbf{98.56$\pm$0.03}& \textbf{99.47$\pm$0.01}& \textbf{99.23$\pm$1.62}& \textbf{99.48$\pm$1.09}& \textbf{100.00$\pm$0.00}\\ \end{longtable} \begin{longtable}{llllllll} \caption{Averaged results and standard deviations on nine continuous attribute datasets (over-sampling rate equals $400\%$).} \label{tab8} \endfirsthead \endhead \hline ds & Alg & OS & f-min & f-maj & O-acc & G-mean & TP rate \\\hline Breast\_w& NB& UGRWO& \textbf{98.87$\pm$0.01}& \textbf{97.02$\pm$0.03}& \textbf{97.81$\pm$2.14}& \textbf{96.98$\pm$2.06}& \textbf{98.45$\pm$2.04}\\ && GRWO& 95.96$\pm$0.02& 96.88$\pm$0.01& 96.41$\pm$1.45& 96.63$\pm$1.35& 98.12$\pm$2.18\\ && RWO& 98.28$\pm$0.01& 95.52$\pm$0.02& 97.51$\pm$1.52& 96.78$\pm$2.06& 98.40$\pm$1.28\\ && MWMOTE& 97.58$\pm$0.02& 92.03$\pm$0.02& 96.25$\pm$0.01& 95.15$\pm$0.12& 98.02$\pm$0.02\\ && SMOTE& 97.55$\pm$0.01& 94.90$\pm$0.02& 96.69$\pm$1.56& 96.29$\pm$1.73& 97.40$\pm$1.64\\ && ROS& 98.00$\pm$0.00& 94.77$\pm$0.02& 97.11$\pm$1.19& 96.49$\pm$1.91& 97.84$\pm$1.12\\ & 5-NN& UGRWO& \textbf{99.85$\pm$0.00}& \textbf{99.33$\pm$0.02}& \textbf{99.77$\pm$0.71}& \textbf{99.86$\pm$0.44}& 
99.80$\pm$0.62\\ && GRWO& 98.25$\pm$0.00& 98.03$\pm$0.01& 98.14$\pm$1.06& 98.05$\pm$1.13& 99.40$\pm$1.13\\ && RWO &99.21$\pm$0.00& 97.87$\pm$0.00& 98.84$\pm$0.53& 97.90$\pm$0.96& \textbf{100.0$\pm$0.00}\\ && MWMOTE& 99.25$\pm$0.02& 95.15$\pm$0.03& 98.15$\pm$3.25& 96.84$\pm$0.98& 99.58$\pm$0.03\\ && SMOTE& 98.87$\pm$0.00& 97.54$\pm$0.01& 98.45$\pm$1.09& 97.74$\pm$1.54& 99.68$\pm$0.05\\ && ROS& 99.05$\pm$0.00& 97.42$\pm$0.01& 98.61$\pm$0.93& 97.66$\pm$1.71& 99.75$\pm$0.40\\ & DT& UGRWO& \textbf{99.75$\pm$0.00}& \textbf{99.13$\pm$0.01}& \textbf{99.61$\pm$0.81}& \textbf{99.14$\pm$1.79}& \textbf{100.0$\pm$0.00}\\ && GRWO& 96.25$\pm$0.01& 95.87$\pm$0.01& 96.07$\pm$1.86& 96.06$\pm$1.86& 96.06$\pm$2.27\\ && RWO& 98.33$\pm$0.08& 95.54$\pm$0.02& 97.57$\pm$1.21& 96.60$\pm$2.09& 98.74$\pm$0.81\\ && MWMOTE& 98.25$\pm$0.01& 92.25$\pm$0.03& 96.25$\pm$2.15& 95.87$\pm$2.58& 98.58$\pm$6.02\\ && SMOTE& 97.82$\pm$0.00& 95.37$\pm$0.01& 97.19$\pm$1.19& 96.39$\pm$1.87& 98.13$\pm$1.82\\ && ROS &98.83$\pm$0.00& 96.94$\pm$0.01& 98.31$\pm$0.79& 97.73$\pm$1.07& 99.00$\pm$1.09\\ &Ada&UGRWO& \textbf{99.74$\pm$0.00}& \textbf{99.20$\pm$0.01}& \textbf{99.61$\pm$0.81}& \textbf{99.74$\pm$0.53}& \textbf{99.50$\pm$1.05}\\ &Boost& GRWO& 95.90$\pm$0.02& 95.59$\pm$0.02& 95.75$\pm$2.42& 95.77$\pm$2.42& 94.87$\pm$3.53\\ &M1& RWO& 97.97$\pm$0.01& 94.85$\pm$0.03& 97.09$\pm$1.75& 96.63$\pm$1.87& 97.64$\pm$1.93\\ && MWMOTE& 95.25$\pm$0.02& 93.02$\pm$0.04& 96.58$\pm$0.06& 95.84$\pm$0.25& 96.58$\pm$0.25\\ && SMOTE& 97.49$\pm$0.01& 94.84$\pm$0.02& 96.62$\pm$1.71& 96.40$\pm$1.87& 96.98$\pm$2.05\\ && ROS& 97.22$\pm$0.01& 93.00$\pm$0.03& 96.03$\pm$1.86& 95.75$\pm$2.03& 95.75$\pm$2.04\\ Diabetes& NB& UGRWO& 61.00$\pm$0.08& \textbf{89.87$\pm$0.03}& 84.03$\pm$4.82& 86.23$\pm$6.39& 94.46$\pm$9.83\\ && GRWO& 79.05$\pm$0.02& 75.78$\pm$0.04& 77.39$\pm$3.27& 77.47$\pm$3.55& \textbf{95.60$\pm$2.96}\\ && RWO& 78.43$\pm$0.02& 89.23$\pm$0.01& \textbf{85.65$\pm$2.08}& \textbf{88.47$\pm$1.36}& 95.60$\pm$1.36\\ && MWMOTE& 
\textbf{89.25$\pm$0.02}& 88.25$\pm$0.02& 84.57$\pm$3.02& 87.25$\pm$0.02& 87.58$\pm$5.24\\ && SMOTE& 84.16$\pm$0.02& 66.34$\pm$0.04& 78.50$\pm$3.23& 74.71$\pm$3.51& 84.05$\pm$4.44\\ && ROS& 86.55$\pm$0.01& 63.92$\pm$0.04& 80.43$\pm$2.41& 74.29$\pm$3.81& 86.56$\pm$3.20\\ & 5-NN& UGRWO& \textbf{93.79$\pm$0.02}& \textbf{70.29$\pm$0.11}& \textbf{89.75$\pm$4.43}& \textbf{87.18$\pm$7.15}& 90.77$\pm$4.83\\ & & GRWO& 73.33$\pm$0.05& \textbf{74.95$\pm$0.03}& 73.50$\pm$4.67& 73.83$\pm$4.71& 65.95$\pm$7.30\\ && RWO& 84.33$\pm$0.01& 66.38$\pm$0.04& 78.64$\pm$2.39& 78.30$\pm$3.42& 78.95$\pm$2.19\\ && MWMOTE& 85.25$\pm$0.03& 68.59$\pm$0.02& 87.54$\pm$0.65& 86.59$\pm$3.25& 91.25$\pm$0.23\\ && SMOTE& 87.90$\pm$0.02& 63.15$\pm$0.09& 81.80$\pm$3.85& 69.07$\pm$6.75& \textbf{96.83$\pm$2.44}\\ && ROS& 89.19$\pm$0.01& 60.41$\pm$0.07& 83.04$\pm$2.34& 67.86$\pm$6.27& 95.97$\pm$1.58\\ & DT& UGRWO& 93.52$\pm$0.01& 63.02$\pm$0.12& 88.63$\pm$2.93& 75.27$\pm$9.15& 93.60$\pm$2.03\\ && GRWO& 82.21$\pm$0.03& 77.93$\pm$0.03& 80.32$\pm$3.44 &80.04$\pm$3.31& 82.01$\pm$5.04\\ && RWO& 90.56$\pm$0.01& 74.54$\pm$0.03& 86.25$\pm$2.23& 81.90$\pm$2.77& 90.82$\pm$3.21\\ && MWMOTE& 89.58$\pm$0.25& 78.25$\pm$0.25& 87.59$\pm$0.26& 80.95$\pm$0.26& 91.25$\pm$0.26\\ && SMOTE& 87.42$\pm$0.02& 71.85$\pm$0.05& 82.63$\pm$3.07& 78.62$\pm$4.37& 88.52$\pm$2.77\\ && ROS& \textbf{93.97$\pm$0.01}& \textbf{81.16$\pm$0.04}& \textbf{90.86$\pm$2.12}& \textbf{84.23$\pm$3.68}& \textbf{97.61$\pm$1.25}\\ &Ada&UGRWO& \textbf{94.42$\pm$0.02}& 59.04$\pm$0.13& \textbf{89.92$\pm$3.31}& 70.55$\pm$10.91& \textbf{95.87$\pm$2.77}\\ &Boost& GRWO& 82.16$\pm$0.02& \textbf{78.42$\pm$0.04}& 79.52$\pm$3.32& 78.32$\pm$4.27& 84.71$\pm$5.86\\ &M1& RWO& 91.42$\pm$0.01& 76.77$\pm$0.07& 87.50$\pm$3.11& \textbf{83.78$\pm$6.37}& 91.26$\pm$2.65\\ && MWMOTE& 90.25$\pm$0.023& 74.25$\pm$0.32& 84.56$\pm$6.25& 82.54$\pm$6.58& 90.54$\pm$3.25\\ && SMOTE& 86.97$\pm$0.01& 66.96$\pm$0.04& 81.35$\pm$2.39& 73.67$\pm$3.39& 91.51$\pm$4.34\\ && ROS& 88.14$\pm$0.02& 
61.98$\pm$0.08& 81.95$\pm$3.71& 70.67$\pm$6.48& 92.16$\pm$3.62\\ Ionosphere& NB& UGRWO& \textbf{97.67$\pm$0.02}& 86.73$\pm$0.12& \textbf{96.07$\pm$3.85}& \textbf{93.05$\pm$8.17}& \textbf{97.18$\pm$4.74}\\ && GRWO& 84.45$\pm$0.05& \textbf{94.16$\pm$0.04}& 91.20$\pm$3.21& 89.97$\pm$2.76& 88.14$\pm$9.74\\ && RWO& 82.95$\pm$3.70& 90.09$\pm$3.18& 87.46$\pm$3.21& 86.88$\pm$3.26& 84.92$\pm$3.05\\ && MWMOTE& 80.54$\pm$2.54& 90.84$\pm$3.12& 90.84$\pm$5.14& 85.45$\pm$2.13& 85.13$\pm$3.12\\ && SMOTE& 86.15$\pm$0.04& 70.38$\pm$0.10& 81.17$\pm$6.16& 78.46$\pm$8.59& 84.69$\pm$5.21\\ && ROS& 89.78$\pm$0.02& 72.61$\pm$0.07& 85.14$\pm$3.64& 81.59$\pm$6.10& 88.57$\pm$3.41\\ & 5-NN& UGRWO& 91.82$\pm$0.04& 79.17$\pm$0.08& 88.28$\pm$5.46& 92.17$\pm$3.78& 85.71$\pm$8.49\\ && GRWO& 82.64$\pm$0.06& 90.51$\pm$0.03& 87.59$\pm$5.95& 84.26$\pm$5.95& 73.43$\pm$11.02\\ && RWO& \textbf{98.63$\pm$0.01}& \textbf{96.24$\pm$0.03}& \textbf{97.99$\pm$2.08}& \textbf{97.60$\pm$2.59}& \textbf{98.40$\pm$2.12}\\ && MWMOTE& 97.84$\pm$0.21& 95.45$\pm$0.23& 96.58$\pm$0.23& 96.58$\pm$2.25& 97.84$\pm$2.15\\ && SMOTE& 98.21$\pm$0.01& 96.00$\pm$0.02& 97.53$\pm$1.55& 96.96$\pm$1.99& 98.40$\pm$2.06\\ && ROS& 98.32$\pm$0.01& 95.38$\pm$0.04& 97.53$\pm$2.68& 97.02$\pm$3.15& 98.09$\pm$2.45\\ & DT& UGRWO& \textbf{98.60$\pm$0.01}& 89.23$\pm$0.12& \textbf{97.54$\pm$2.58}& 91.28$\pm$11.13& 99.44$\pm$1.75\\ && GRWO& 89.74$\pm$0.05& 91.51$\pm$0.05& 90.78$\pm$5.29& 90.51$\pm$5.31& 90.84$\pm$7.12\\ && RWO& 94.44$\pm$0.01& 84.01$\pm$0.04& 91.77$\pm$2.42& 88.32$\pm$4.30& 95.19$\pm$3.12\\ && MWMOTE& 95.15$\pm$0.02& 89.25$\pm$0.02& 90.15$\pm$2.23& 87.45$\pm$0.03& 98.58$\pm$0.03\\ && SMOTE& 93.88$\pm$0.02& 85.04$\pm$0.07& 91.33$\pm$3.55& 88.31$\pm$6.32& 95.43$\pm$1.63\\ && ROS& 98.28$\pm$0.01& \textbf{94.84$\pm$0.03}& 97.48$\pm$1.89& \textbf{95.41$\pm$3.44}& \textbf{99.52$\pm$0.76}\\ & Ada&UGRWO& \textbf{97.13$\pm$0.01}& 83.47$\pm$0.09& \textbf{95.06$\pm$3.25}& 89.65$\pm$7.29& \textbf{97.67$\pm$2.99}\\ &Boost& GRWO&
89.31$\pm$0.09& \textbf{92.91$\pm$0.03}& 91.49$\pm$3.72& \textbf{91.24$\pm$4.39}& 83.26$\pm$7.94\\ &M1& RWO& 91.06$\pm$0.02& 77.50$\pm$0.08& 87.31$\pm$3.81& 85.82$\pm$7.15& 88.31$\pm$5.61\\ && MWMOTE& 90.25$\pm$0.02& 90.45$\pm$0.03& 90.45$\pm$3.02& 90.84$\pm$2.01& 94.15$\pm$1.54\\ && SMOTE& 87.35$\pm$0.02& 78.42$\pm$0.04& 84.07$\pm$3.51& 86.33$\pm$3.33& 79.95$\pm$4.57\\ && ROS& 94.25$\pm$0.03& 85.66$\pm$0.06& 91.81$\pm$4.40& 91.22$\pm$3.78& 92.38$\pm$5.68\\ Musk& NB& UGRWO& \textbf{94.66$\pm$0.11}& \textbf{99.30$\pm$0.02}& 98.23$\pm$3.97& 96.03$\pm$8.39& 93.33$\pm$21.08\\ && GRWO& 87.66$\pm$0.17& 98.52$\pm$0.02& 97.37$\pm$3.67& 90.06$\pm$14.48& 83.33$\pm$23.57\\ && RWO& 73.64$\pm$0.19& 96.39$\pm$0.02& 93.54$\pm$4.51& 78.99$\pm$15.83& 65.83$\pm$24.67\\ && MWMOTE& 94.02$\pm$0.11& 99.02$\pm$0.02& 92.23$\pm$2.13& 95.52$\pm$0.02& 93.52$\pm$1.24\\ && SMOTE& 90.00$\pm$0.16& 98.98$\pm$0.01& 98.18$\pm$3.17& 93.62$\pm$12.18& 90.00$\pm$21.08\\ && ROS& 94.12$\pm$0.11& 98.99$\pm$0.01& \textbf{98.29$\pm$2.95}& \textbf{96.55$\pm$9.14}& \textbf{95.00$\pm$15.81}\\ & 5-NN& UGRWO& \textbf{97.07$\pm$0.02}& 67.79$\pm$0.18& \textbf{94.55$\pm$5.26}& 76.10$\pm$15.93& \textbf{98.66$\pm$2.81}\\ && GRWO& 89.65$\pm$0.05& \textbf{78.60$\pm$0.05}& 86.16$\pm$7.12& \textbf{83.52$\pm$10.25}& 91.53$\pm$7.34\\ && RWO& 93.31$\pm$0.01& 69.98$\pm$0.08& 89.09$\pm$2.52& 80.48$\pm$8.02& 93.40$\pm$2.69\\ && MWMOTE& 96.86$\pm$0.05& 70.60$\pm$0.02& 98.08$\pm$3.08& 80.37$\pm$4.47& 96.00$\pm$8.43\\ && SMOTE& 92.36$\pm$0.01& 65.95$\pm$0.10& 87.56$\pm$2.84& 73.25$\pm$9.93& 96.39$\pm$2.48\\ && ROS& 93.76$\pm$0.01& 60.81$\pm$0.11& 89.25$\pm$2.77 &66.98$\pm$8.58& 98.18$\pm$1.97\\ & DT& UGRWO& \textbf{98.53$\pm$0.02}& 68.66$\pm$0.17& \textbf{97.24$\pm$3.91}& 75.04$\pm$14.18& \textbf{100.00$\pm$0.00}\\ && GRWO& 89.29$\pm$0.04& \textbf{78.51$\pm$0.06}& 85.26$\pm$6.69& \textbf{82.75$\pm$5.28}& 90.35$\pm$7.08\\ && RWO& 94.13$\pm$0.02& 55.95$\pm$0.25& 89.75$\pm$4.53& 65.69$\pm$22.40& 97.41$\pm$3.74\\ && MWMOTE& 
92.25$\pm$0.04& 69.25$\pm$0.02& 95.25$\pm$6.23& 80.25$\pm$1.23& 99.58$\pm$2.45\\ && SMOTE& 93.39$\pm$0.11& 61.24$\pm$0.11& 88.75$\pm$2.75& 68.86$\pm$10.01& 97.73$\pm$2.46\\ && ROS& 94.93$\pm$0.01& 61.48$\pm$0.13& 91.05$\pm$2.24& 67.45$\pm$10.90& 99.48$\pm$0.82\\ & Ada& UGRWO& 93.93$\pm$0.01& 98.01$\pm$0.03& 97.02$\pm$4.82& 95.37$\pm$8.06& 92.66$\pm$13.40\\ &Boost& GRWO& 96.18$\pm$0.06& 98.99$\pm$0.01& 98.41$\pm$2.73& 97.55$\pm$3.95& 96.66$\pm$10.54\\ &M1& RWO& 94.12$\pm$0.05& 97.96$\pm$0.01& 96.98$\pm$2.41& 95.94$\pm$4.84& 94.28$\pm$9.98\\ && MWMOTE& 98.58$\pm$0.01& 97.84$\pm$0.02& 97.84$\pm$0.23& 98.58$\pm$3.02& 99.84$\pm$0.02\\ && SMOTE& 99.23$\pm$0.02& \textbf{99.73$\pm$0.00}& \textbf{99.62$\pm$1.17}& \textbf{99.74$\pm$0.80}& \textbf{100.00$\pm$0.00}\\ && ROS& \textbf{99.33$\pm$0.02}& 99.72$\pm$0.00& 99.61$\pm$1.21& 99.73$\pm$0.84& 100.00$\pm$0.00\\ Satimage& NB& UGRWO& \textbf{95.23$\pm$0.00}& \textbf{98.99$\pm$0.01}& \textbf{98.25$\pm$1.32}& \textbf{94.87$\pm$1.24}& \textbf{90.99$\pm$2.04}\\ && GRWO& 93.53$\pm$0.01& 98.84$\pm$0.00& 98.01$\pm$0.56& 94.25$\pm$1.36& 89.21$\pm$2.57\\ && RWO& 94.91$\pm$0.01& 97.09$\pm$0.00& 96.30$\pm$0.90& 95.15$\pm$1.18& 90.92$\pm$2.12\\ && MWMOTE& 93.25$\pm$0.02& 96.25$\pm$0.26& 95.62$\pm$6.25& 94.58$\pm$2.03& 89.58$\pm$0.32\\ && SMOTE& 94.36$\pm$0.00& 97.42$\pm$0.00& 96.46$\pm$0.53& 94.69$\pm$0.81& 90.00$\pm$1.52\\ && ROS& 94.67$\pm$0.01& 96.94$\pm$0.00& 96.13$\pm$0.77& 94.92$\pm$1.05& 90.44$\pm$1.97\\ & 5-NN& UGRWO& \textbf{99.45$\pm$0.00}& \textbf{99.84$\pm$0.01}& \textbf{99.81$\pm$0.02}& \textbf{99.67$\pm$0.23}& \textbf{100.0$\pm$0.00}\\ && GRWO& 98.84$\pm$0.00& 99.69$\pm$0.00& 99.51$\pm$0.33& 99.37$\pm$0.37& 99.13$\pm$0.54\\ && RWO& 99.44$\pm$0.00& 99.65$\pm$0.00& 99.57$\pm$0.17& 99.65$\pm$0.13& 100.0$\pm$0.00\\ && MWMOTE& 98.85$\pm$0.02& 98.57$\pm$0.02& 98.58$\pm$6.02& 98.54$\pm$3.02& 99.85$\pm$2.15\\ && SMOTE& 99.17$\pm$0.00& 99.58$\pm$0.00& 99.45$\pm$0.38& 99.52$\pm$0.36& 99.75$\pm$0.37\\ && ROS& 99.20$\pm$0.00& 
99.51$\pm$0.00& 99.39$\pm$0.22& 99.45$\pm$0.22& 99.68$\pm$0.31\\ & DT& UGRWO& \textbf{99.45$\pm$0.00}& 99.25$\pm$0.00& \textbf{99.58$\pm$0.23}& \textbf{99.68$\pm$0.22}& \textbf{99.98$\pm$0.12}\\ && GRWO& 97.54$\pm$0.00& 99.35$\pm$0.00& 98.94$\pm$0.25& 98.59$\pm$0.49& 97.94$\pm$0.95\\ && RWO& 98.50$\pm$0.00& 99.08$\pm$0.00& 98.86$\pm$0.42& 98.80$\pm$0.44& 98.56$\pm$0.60\\ && MWMOTE& 97.58$\pm$0.00& 98.15$\pm$0.02& 97.45$\pm$6.25& 97.84$\pm$3.02& 97.48$\pm$0.02\\ && SMOTE& 98.79$\pm$0.00& 99.40$\pm$0.00& 99.20$\pm$0.38& 99.13$\pm$0.49& 98.93$\pm$0.80\\ && ROS& 99.39$\pm$0.00& \textbf{99.62$\pm$0.00}& 99.53$\pm$0.34& 99.61$\pm$0.28& 99.94$\pm$0.12\\ & Ada&UGRWO& \textbf{97.85$\pm$0.00}& \textbf{99.56$\pm$0.02}& 98.54$\pm$3.02& \textbf{98.54$\pm$2.13}& \textbf{96.90$\pm$0.56}\\ &Boost& GRWO& 96.16$\pm$0.01& 99.19$\pm$0.00& \textbf{98.62$\pm$0.34}& 96.55$\pm$0.72& 94.40$\pm$2.31\\ &M1& RWO& 97.16$\pm$0.00& 98.32$\pm$0.00& 97.89$\pm$0.43& 97.30$\pm$0.60& 95.24$\pm$1.31\\ && MWMOTE& 94.29$\pm$0.06& 98.34$\pm$0.01& 97.44$\pm$2.94& 95.60$\pm$2.17& 92.66$\pm$9.53\\ && SMOTE& 97.23$\pm$0.00& 98.97$\pm$0.00& 98.20$\pm$0.42& 97.56$\pm$0.56& 95.76$\pm$1.05\\ && ROS& 97.85$\pm$0.00& 98.70$\pm$0.00& 98.38$\pm$0.51& 98.09$\pm$0.65& 96.86$\pm$1.31\\ Segmentation& NB& UGRWO& 97.17$\pm$0.02& 75.66$\pm$0.20& 94.94$\pm$4.08& 85.27$\pm$14.82& 99.06$\pm$1.96\\ && GRWO& 90.48$\pm$0.06& 83.94$\pm$0.10& 88.10$\pm$7.78& 86.79$\pm$8.73& 91.22$\pm$7.95\\ && RWO& \textbf{97.29$\pm$0.00}& \textbf{85.83$\pm$0.06}& \textbf{95.46$\pm$1.61}& \textbf{86.80$\pm$5.27}& \textbf{100.00$\pm$0.00}\\ && MWMOTE& 96.86$\pm$0.05& 78.60$\pm$0.02& 92.08$\pm$3.08& 85.37$\pm$4.47& 96.00$\pm$8.43\\ && SMOTE& 96.18$\pm$0.01& 83.20$\pm$0.07& 93.78$\pm$2.58& 84.57$\pm$6.97& \textbf{100.00$\pm$0.00}\\ && ROS& 96.78$\pm$0.01& 83.59$\pm$0.06& 94.63$\pm$1.90& 85.98$\pm$5.59& 99.17$\pm$1.06\\ & 5-NN& UGRWO& \textbf{95.95$\pm$0.05}& \textbf{96.16$\pm$0.04}& \textbf{96.09$\pm$5.05}& \textbf{95.96$\pm$5.24}& 94.33$\pm$9.16\\ &&
GRWO& 78.82$\pm$0.10& 93.78$\pm$0.07& 90.43$\pm$4.93& 86.83$\pm$8.10& 82.00$\pm$14.75\\ && RWO& 83.14$\pm$0.08& 88.63$\pm$0.05& 86.45$\pm$6.59& 85.35$\pm$7.32& 80.76$\pm$11.02\\ && MWMOTE& 87.16$\pm$0.00& 88.32$\pm$0.00 &87.89$\pm$0.43& 87.30$\pm$0.60& 85.24$\pm$1.31\\ && SMOTE& 88.01$\pm$0.01 &91.60$\pm$0.01& 90.13$\pm$1.50& 91.70$\pm$1.47& 99.00$\pm$6.13\\ && ROS& 88.38$\pm$0.04& 88.96$\pm$0.05& 88.70$\pm$5.32& 89.61$\pm$5.51& \textbf{100.00$\pm$0.00}\\ & DT& UGRWO& 92.14$\pm$0.09& 91.81$\pm$0.09& 92.18$\pm$8.94& 91.70$\pm$9.53& 90.66$\pm$13.68\\ && GRWO &94.29$\pm$0.06& \textbf{98.34$\pm$0.01}& 97.44$\pm$2.94& 95.60$\pm$2.17& 92.66$\pm$9.53\\ && RWO& 95.00$\pm$0.03& 96.37$\pm$0.02& 95.80$\pm$2.68& 95.68$\pm$2.85& 95.38$\pm$5.37\\ && MWMOTE& 93.40$\pm$0.02& 94.98$\pm$0.01& 94.30$\pm$2.03& 94.08$\pm$1.99& 92.91$\pm$3.85\\ && SMOTE &97.13$\pm$0.04& 98.31$\pm$0.02& 97.88$\pm$2.99& 97.87$\pm$3.20& 98.00$\pm$4.21\\ && ROS& \textbf{97.77$\pm$0.01}& \textbf{98.28$\pm$0.01}& \textbf{98.06$\pm$1.66}& \textbf{98.30$\pm$1.45}& \textbf{100.0$\pm$0.00}\\ &Ada& UGRWO& 95.75$\pm$0.05& 95.64$\pm$0.05& 95.77$\pm$5.46& 95.49$\pm$5.87& 98.00$\pm$6.32\\ &Boost& GRWO& 92.50$\pm$0.08& 97.28$\pm$0.03& 96.01$\pm$4.61& 94.01$\pm$7.01& 90.17$\pm$11.70\\ &M1& RWO& 95.82$\pm$0.04& 97.36$\pm$0.02& 96.77$\pm$3.40& 95.99$\pm$4.28& 92.30$\pm$8.10\\ && MWMOTE& 93.40$\pm$0.02& 94.98$\pm$0.01& 94.30$\pm$2.03& 94.08$\pm$1.99& 92.91$\pm$3.85\\ && SMOTE& 97.56$\pm$0.03& 98.61$\pm$0.01& 98.23$\pm$2.46& 97.95$\pm$2.84& 97.09$\pm$2.80\\ && ROS& \textbf{99.22$\pm$0.01}& \textbf{99.44$\pm$0.01}& \textbf{99.35$\pm$1.36}& \textbf{99.32$\pm$1.44}& \textbf{99.23$\pm$2.43}\\ vehicle& NB& UGRWO& 83.56$\pm$0.06& 72.17$\pm$0.04& 78.87$\pm$4.40& 80.75$\pm$5.18& 75.97$\pm$5.79\\ && GRWO& 80.46$\pm$0.03& 85.77$\pm$0.02& 83.55$\pm$2.72& 82.36$\pm$3.03& 74.93$\pm$6.87\\ && RWO& \textbf{89.71$\pm$0.02}& \textbf{85.92$\pm$0.03}& \textbf{88.11$\pm$2.55}& \textbf{88.72$\pm$2.62}& 85.52$\pm$2.99\\ && MWMOTE& 
86.49$\pm$0.01& 84.75$\pm$0.02& 84.75$\pm$2.14& 85.79$\pm$2.13& 85.67$\pm$2.51\\ && SMOTE& 83.21$\pm$0.02& 70.60$\pm$0.04& 78.65$\pm$3.03& 74.12$\pm$3.83& \textbf{95.84$\pm$3.10}\\ && ROS& 85.22$\pm$0.02& 68.79$\pm$0.06& 79.95$\pm$3.52& 73.20$\pm$4.69& 95.27$\pm$2.87\\ & 5-NN& UGRWO& \textbf{96.93$\pm$0.01}& 88.96$\pm$0.06& 95.21$\pm$2.61& 94.44$\pm$3.70& 95.65$\pm$2.73\\ && GRWO& 85.94$\pm$0.03& 92.07$\pm$0.01& 89.58$\pm$1.62& 87.30$\pm$2.67& 80.40$\pm$4.48\\ && RWO& 88.23$\pm$0.02& 85.21$\pm$0.02& 86.90$\pm$2.73& 88.04$\pm$2.45& 81.50$\pm$4.63\\ && MWMOTE& 80.46$\pm$0.03& 85.77$\pm$0.02& 83.55$\pm$2.72& 82.36$\pm$3.03& 74.93$\pm$6.87\\ && SMOTE& \textbf{95.85$\pm$0.01}& \textbf{94.34$\pm$0.01}& 95.21$\pm$1.23& \textbf{94.50$\pm$1.45}& \textbf{100.00$\pm$0.00}\\ && ROS& 96.47$\pm$0.01& 93.97$\pm$0.02& \textbf{95.55$\pm$1.69}& 94.17$\pm$2.28& \textbf{100.00$\pm$0.00}\\ & DT& UGRWO& 97.69$\pm$0.01& 92.49$\pm$0.03& 96.37$\pm$2.26& 95.58$\pm$4.55& 97.17$\pm$2.18\\ && GRWO& 93.40$\pm$0.02& 94.98$\pm$0.01& 94.30$\pm$2.03& 94.08$\pm$1.99& 92.91$\pm$3.85\\ && RWO& 96.49$\pm$0.01 &94.75$\pm$0.02& 94.75$\pm$2.14& 95.79$\pm$2.13& 95.67$\pm$2.51\\ && MWMOTE& 92.14$\pm$0.09& 91.81$\pm$0.09& 92.18$\pm$8.94& 91.70$\pm$9.53& 90.66$\pm$13.68\\ && SMOTE& 96.36$\pm$0.01& 95.50$\pm$0.02& 95.98$\pm$2.08& 95.90$\pm$2.10& 96.61$\pm$2.28\\ && ROS& \textbf{98.22$\pm$0.00}& \textbf{97.13$\pm$0.01}& \textbf{97.80$\pm$1.19}& \textbf{97.28$\pm$1.47}& \textbf{99.59$\pm$0.70}\\ &Ada&UGRWO& \textbf{96.84$\pm$0.01}& 91.19$\pm$0.03& \textbf{95.35$\pm$1.89}& \textbf{92.40$\pm$3.26}& \textbf{98.79$\pm$1.69}\\ &Boost& GRWO& 88.61$\pm$0.03& \textbf{91.69$\pm$0.02}& 89.65$\pm$2.58& 89.88$\pm$3.09& 93.74$\pm$8.08\\ &M1& RWO& 94.87$\pm$0.01& 91.30$\pm$0.02& 93.54$\pm$1.93& 92.02$\pm$2.48& 98.19$\pm$0.92\\ && MWMOTE& 93.40$\pm$0.02& 94.98$\pm$0.01& 94.30$\pm$2.03& 94.08$\pm$1.99& 92.91$\pm$3.85\\ && SMOTE& 93.21$\pm$0.00& 90.90$\pm$0.01& 92.23$\pm$1.17& 91.53$\pm$1.63& 96.47$\pm$3.12\\ && ROS& 
93.95$\pm$0.01& 89.02$\pm$0.02& 92.20$\pm$1.89& 89.66$\pm$2.58& 99.69$\pm$0.48\\ sonar& NB& UGRWO& \textbf{94.09$\pm$0.05}& 65.66$\pm$0.23& \textbf{90.11$\pm$9.75}& 86.77$\pm$1.46& \textbf{90.52$\pm$9.21}\\ && GRWO& 81.78$\pm$0.04& \textbf{79.49$\pm$0.04}& 79.61$\pm$4.71& 82.63$\pm$4.55& 70.36$\pm$7.23\\ && RWO& 91.10$\pm$0.03& 73.54$\pm$0.07& 86.73$\pm$5.06& \textbf{89.85$\pm$3.56}& 84.71$\pm$6.43\\ && MWMOTE& 91.86$\pm$0.05& 78.60$\pm$0.02& 88.08$\pm$3.08& 87.37$\pm$4.47& 86.00$\pm$8.43\\ && SMOTE& 87.23$\pm$0.03& 54.98$\pm$0.1& 80.18$\pm$4.86& 68.80$\pm$8.70& 87.38$\pm$5.43\\ && ROS& 88.94$\pm$0.03& 51.44$\pm$0.16& 82.05$\pm$6.14& 67.10$\pm$14.21& 88.86$\pm$4.86\\ & 5-NN& UGRWO& 97.17$\pm$0.02& 75.66$\pm$0.20& 94.94$\pm$4.08& 85.27$\pm$14.82& 99.06$\pm$1.96\\ && GRWO& 90.48$\pm$0.06& 83.94$\pm$0.10& 88.10$\pm$7.78 &86.79$\pm$8.73& 91.22$\pm$7.95\\ && RWO& \textbf{97.29$\pm$0.00}& \textbf{85.83$\pm$0.06}& \textbf{95.46$\pm$1.61}& \textbf{86.80$\pm$5.27}& \textbf{100.00$\pm$0.00}\\ && MWMOTE& 93.40$\pm$0.02& 84.98$\pm$0.01& 94.30$\pm$2.03& 84.08$\pm$1.99& 92.91$\pm$3.85\\ && SMOTE &96.18$\pm$0.01& 83.20$\pm$0.07& 93.78$\pm$2.58& 84.57$\pm$6.97& \textbf{100.00$\pm$0.00}\\ && ROS &96.78$\pm$0.01& 83.59$\pm$0.06& 94.63$\pm$1.90& 85.98$\pm$5.59& 99.17$\pm$1.06\\ & DT& UGRWO& \textbf{96.61$\pm$0.03}& 67.10$\pm$0.17& \textbf{93.82$\pm$5.89}& \textbf{79.83$\pm$16.25}& \textbf{96.66$\pm$5.66}\\ && GRWO& 85.18$\pm$0.04& 73.41$\pm$0.08& 81.10$\pm$5.93& 78.05$\pm$7.10& 87.51$\pm$7.81\\ && RWO& 93.98$\pm$0.01& 71.93$\pm$0.10& 90.12$\pm$2.91& 81.58$\pm$9.50 &94.64$\pm$2.91\\ && MWMOTE& 91.86$\pm$0.05& 78.60$\pm$0.02& 88.08$\pm$3.08& 87.37$\pm$4.47& 86.00$\pm$8.43\\ && SMOTE& 94.39$\pm$0.02& \textbf{79.03$\pm$0.08}& 91.18$\pm$3.28& \textbf{85.05$\pm$7.50}& 95.33$\pm$2.99\\ && ROS& 89.31$\pm$0.01& 50.99$\pm$0.12& 82.54$\pm$3.25& 66.64$\pm$12.23& 89.70$\pm$4.21\\ &Ada& UGRWO& \textbf{97.07$\pm$0.02}& 67.79$\pm$0.18& \textbf{94.55$\pm$5.26}& 76.10$\pm$15.93& 
\textbf{98.66$\pm$2.81}\\ &Boost& GRWO& 89.65$\pm$0.05& \textbf{78.60$\pm$0.05}& 86.16$\pm$7.12& \textbf{83.52$\pm$10.25}& 91.53$\pm$7.34\\ &M1& RWO& 93.31$\pm$0.01& 69.98$\pm$0.08& 89.09$\pm$2.52& 80.48$\pm$8.02& 93.40$\pm$2.69\\ && MWMOTE& 96.86$\pm$0.05& 70.60$\pm$0.02& 98.08$\pm$3.08 &80.37$\pm$4.47& 96.00$\pm$8.43\\ && SMOTE& 92.36$\pm$0.01& 65.95$\pm$0.10& 87.56$\pm$2.84& 73.25$\pm$9.93& 96.39$\pm$2.48\\ && ROS& 93.76$\pm$0.01& 60.81$\pm$0.11& 89.25$\pm$2.77& 66.98$\pm$8.58& 98.18$\pm$1.97\\ Glass& NB& UGRWO& 85.23$\pm$0.08& 88.13$\pm$0.04& 85.12$\pm$6.78& 86.41$\pm$7.93& 75.23$\pm$13.80\\ && GRWO& 80.63$\pm$0.13& \textbf{94.39$\pm$0.03}& 91.31$\pm$5.08& 84.52$\pm$11.16& 75.00$\pm$18.82\\ && RWO& 85.24$\pm$0.07& 94.33$\pm$0.02& 91.85$\pm$4.05& 87.71$\pm$7.04& 80.00$\pm$13.81\\ && MWMOTE& 81.78$\pm$0.04& 79.49$\pm$0.04& 79.61$\pm$4.71& 82.63$\pm$4.55& 70.36$\pm$7.23\\ && SMOTE& 90.76$\pm$0.04& \textbf{96.61$\pm$0.01}& \textbf{95.07$\pm$2.58}& 94.59$\pm$3.53& 94.04$\pm$7.71\\ && ROS& \textbf{91.60$\pm$0.03}& 96.08$\pm$0.01& 94.66$\pm$2.51& \textbf{95.14$\pm$2.93}& \textbf{96.52$\pm$5.60}\\ & 5-NN& UGRWO& 84.19$\pm$0.08& 93.56$\pm$0.02& 91.71$\pm$7.47& 86.71$\pm$14.18& 78.80$\pm$22.60\\ && GRWO& 79.74$\pm$0.09& 95.23$\pm$0.03& 91.91$\pm$5.37& 82.35$\pm$8.55& 69.28$\pm$15.07\\ && RWO& 85.60$\pm$0.09& 94.94$\pm$0.02& 92.53$\pm$4.29& 87.28$\pm$7.79& 77.50$\pm$13.57\\ && MWMOTE& 93.40$\pm$0.02& 94.98$\pm$0.01& 94.30$\pm$2.03& 94.08$\pm$1.99& 92.91$\pm$3.85\\ &&SMOTE& \textbf{97.98$\pm$0.04}& \textbf{99.18$\pm$0.01}& \textbf{98.84$\pm$2.59}& \textbf{99.20$\pm$1.80}& \textbf{100.00$\pm$0.00}\\ && ROS &97.88$\pm$0.03& 98.96$\pm$0.01& 98.60$\pm$2.42& 98.98$\pm$1.78& \textbf{100.00$\pm$0.00}\\ & DT& UGRWO& 91.01$\pm$0.06& 95.75$\pm$0.02& 94.04$\pm$3.95& 92.34$\pm$5.45& 89.76$\pm$9.78\\ && GRWO& 97.80$\pm$0.04& 99.24$\pm$0.01& 98.97$\pm$2.51& 98.28$\pm$3.66& 97.14$\pm$6.02\\ && RWO& 94.62$\pm$0.07& 97.74$\pm$0.02& 96.82$\pm$4.27& 95.99$\pm$5.46& 94.16$\pm$8.53\\ && 
MWMOTE& 93.76$\pm$0.01& 90.81$\pm$0.11& 91.25$\pm$2.77& 96.98$\pm$8.58& 98.18$\pm$1.97\\ && SMOTE& 99.33$\pm$0.02& \textbf{99.74$\pm$0.05}& 99.62$\pm$1.17& \textbf{99.75$\pm$0.80}& \textbf{100.00$\pm$0.00}\\ && ROS& \textbf{99.41$\pm$0.01}& 99.74$\pm$0.00& \textbf{99.64$\pm$1.12}& 99.74$\pm$0.80& \textbf{100.00$\pm$0.00}\\ &Ada& UGRWO& 96.86$\pm$0.05& 98.60$\pm$0.02& 98.08$\pm$3.08& 97.37$\pm$4.47& 96.00$\pm$8.43\\ && GRWO &91.42$\pm$0.09 &97.80$\pm$0.02& 96.31$\pm$4.54& 93.37$\pm$8.02& 89.76$\pm$15.22\\ && RWO& 87.22$\pm$0.10& 95.06$\pm$0.03& 92.91$\pm$5.52& 89.71$\pm$9.06& 84.02$\pm$16.83\\ && MWMOTE& 79.74$\pm$0.09& 95.23$\pm$0.03& 91.91$\pm$5.37& 82.35$\pm$8.55& 69.28$\pm$15.07\\ && SMOTE& 97.31$\pm$0.04& 98.91$\pm$0.01& 98.46$\pm$2.68& 98.45$\pm$2.73& 98.57$\pm$4.51\\ && ROS& \textbf{99.00$\pm$0.03}& \textbf{99.44$\pm$0.01}& \textbf{99.28$\pm$2.25}& \textbf{99.45$\pm$1.71}& \textbf{100.00$\pm$0.00}\\ \end{longtable} \begin{longtable}{llllllll} \caption{Averaged results and standard deviations on nine continuous attribute datasets (over-sampling rate equals $500\%$).} \label{tab9} \endfirsthead \endhead \hline ds & Alg& OS & f-min & f-maj & O-acc & G-mean & TP rate \\\hline Breast\_w& NB& UGRWO& \textbf{98.85$\pm$0.01}& \textbf{97.04$\pm$0.02}& \textbf{97.85$\pm$2.14}& \textbf{96.98$\pm$1.48}& \textbf{98.55$\pm$0.08}\\ && GRWO& 96.36$\pm$0.01& 96.99$\pm$0.02& 96.62$\pm$2.50& 96.83$\pm$2.44& 98.52$\pm$2.85\\ && RWO& 98.32$\pm$0.00& 94.79$\pm$0.02& 97.46$\pm$1.33& 96.67$\pm$2.05& 98.18$\pm$1.10\\ && MWMOTE& 98.37$\pm$0.01& 96.91$\pm$0.01& 96.16$\pm$1.25& 95.96$\pm$1.47& 92.47$\pm$0.83\\ && SMOTE& 97.74$\pm$0.00& 94.18$\pm$0.02& 96.75$\pm$1.39& 96.25$\pm$1.77& 97.34$\pm$1.50\\ && ROS& 98.33$\pm$0.00& 94.80$\pm$0.02& 97.48$\pm$1.07& 96.68$\pm$1.56& 98.20$\pm$1.35\\ & 5-NN& UGRWO& \textbf{99.87$\pm$0.00}& \textbf{99.41$\pm$0.01}& \textbf{99.78$\pm$0.67}& \textbf{99.87$\pm$0.40}& 99.82$\pm$0.54\\ && GRWO& 98.37$\pm$0.01& 97.91$\pm$0.01& 98.16$\pm$1.25& 
97.96$\pm$1.47& 99.47$\pm$0.83\\ && RWO& 99.27$\pm$0.00& 97.63$\pm$0.01& 98.88$\pm$0.72& 97.67$\pm$1.53& \textbf{100.0$\pm$0.00}\\ && MWMOTE& 98.58$\pm$0.02& 96.58$\pm$0.02& 96.58$\pm$2.13& 96.58$\pm$6.23& 99.58$\pm$0.45\\ && SMOTE& 99.13$\pm$0.00& 97.64$\pm$0.01& 98.73$\pm$0.91 &97.81$\pm$1.64& 99.83$\pm$0.35\\ && ROS& 99.20$\pm$0.00& 97.44$\pm$0.01& 98.79$\pm$0.82& 97.69$\pm$1.32& 99.79$\pm$0.46\\ & DT& UGRWO& \textbf{99.77$\pm$0.00}& \textbf{99.08$\pm$0.01}& \textbf{99.67$\pm$0.75}& \textbf{99.10$\pm$1.88}& \textbf{100.0$\pm$0.00}\\ && GRWO& 96.16$\pm$0.02& 96.25$\pm$0.02& 96.21$\pm$2.37& 96.16$\pm$2.38& 96.81$\pm$3.24\\ && RWO &98.67$\pm$0.00& 95.86$\pm$0.01& 97.98$\pm$0.74& 97.32$\pm$1.08& 98.60$\pm$0.73\\ && MWMOTE &97.50$\pm$0.00& 94.58$\pm$0.02& 96.58$\pm$0.03& 96.58$\pm$0.02& 96.54$\pm$0.03\\ && SMOTE& 98.26$\pm$0.00& 95.38$\pm$0.01& 97.47$\pm$0.68& 96.53$\pm$1.16& 98.58$\pm$1.24\\ && ROS& 99.24$\pm$0.00& 97.56$\pm$0.01& 98.84$\pm$0.84 &97.95$\pm$1.45& 99.65$\pm$0.58\\ &Ada&UGRWO& \textbf{99.85$\pm$0.00}& \textbf{99.33$\pm$0.02}& \textbf{99.76$\pm$0.75}& \textbf{99.85$\pm$0.45}& \textbf{99.77$\pm$0.71}\\ &Boost& GRWO& 97.12$\pm$0.01& 96.41$\pm$0.02& 96.81$\pm$1.93& 96.81$\pm$2.01& 96.70$\pm$1.90\\ &M1& RWO& 98.17$\pm$0.00& 94.38$\pm$0.02& 97.24$\pm$0.99& 96.44$\pm$1.69& 97.97$\pm$1.45\\ && MWMOTE& 96.25$\pm$0.02& 96.15$\pm$0.03& 96.58$\pm$.025& 95.58$\pm$2.15& 96.58$\pm$3.12\\ && SMOTE& 97.70$\pm$0.00& 94.09$\pm$0.02& 96.69$\pm$1.33& 96.27$\pm$1.82& 97.18$\pm$1.61\\ && ROS& 97.69$\pm$0.00& 92.99$\pm$0.02& 96.53$\pm$1.16& 95.97$\pm$1.52& 97.02$\pm$1.84\\ Diabetes& NB& UGRWO &62.08$\pm$0.06& 90.93$\pm$0.02& 85.43$\pm$4.17& 87.19$\pm$5.07& 91.28$\pm$7.37\\ && GRWO& 78.91$\pm$0.22& 78.97$\pm$0.02& 78.90$\pm$1.91& 80.17$\pm$1.92& \textbf{96.00$\pm$2.82}\\ && RWO& 78.36$\pm$0.04& \textbf{91.13$\pm$0.45}& \textbf{87.43$\pm$2.73}& \textbf{89.93$\pm$2.31}& 95.20$\pm$3.15\\ && MWMOTE& 85.59$\pm$0.26& 90.25$\pm$0.03& 86.25$\pm$0.95& 86.25$\pm$0.03& 95.15$\pm$0.03\\ 
&& SMOTE& \textbf{86.69$\pm$0.01}& 62.83$\pm$0.06& 80.43$\pm$2.87& 73.08$\pm$5.26& 87.53$\pm$3.24\\ && ROS& 86.02$\pm$0.02& 58.15$\pm$0.05& 79.07$\pm$2.98& 71.90$\pm$4.35& 84.5$\pm$3.66\\ & 5-NN& UGRWO& \textbf{93.02$\pm$0.02}& 64.89$\pm$0.07& \textbf{88.33$\pm$4.06}& \textbf{83.27$\pm$7.16}& 89.93$\pm$4.35\\ && GRWO& 74.62$\pm$0.03& \textbf{75.43$\pm$0.03}& 72.98$\pm$3.23& 73.78$\pm$3.36 &67.54$\pm$4.43\\ && RWO& 86.02$\pm$0.02& 64.62$\pm$0.05& 79.98$\pm$2.97& 78.90$\pm$4.28& 80.84$\pm$2.95\\ && MWMOTE& 86.25$\pm$0.02& 63.25$\pm$3.03& 76.25$\pm$3.03& 76.45$\pm$3.15& 80.92$\pm$0.03\\ && SMOTE& 90.35$\pm$0.01& 62.57$\pm$0.06& 84.67$\pm$1.91& 68.38$\pm$5.33& 98.43$\pm$0.89\\ && ROS& 91.69$\pm$0.01& 62.05$\pm$0.08& 86.38$\pm$2.68& 68.11$\pm$2.68& \textbf{84.50$\pm$1.94}\\ & DT& UGRWO& 94.94$\pm$0.02& 61.36$\pm$0.08&90.95$\pm$3.95& 76.59$\pm$7.17& 75.65$\pm$2.85\\ && GRWO& 84.70$\pm$0.03& 77.92$\pm$0.05& 81.94$\pm$4.41& 81.13$\pm$4.51& 84.97$\pm$5.16\\ && RWO& 92.88$\pm$0.01& 76.60$\pm$0.04& 89.09$\pm$1.84& 83.98$\pm$3.92& 93.22$\pm$0.93\\ && MWMOTE& 86.02$\pm$0.02& 64.62$\pm$0.05& 79.98$\pm$2.97& 78.90$\pm$4.28& 80.84$\pm$2.95\\ && SMOTE& 89.15$\pm$0.01& 69.98$\pm$0.04& 84.07$\pm$2.08& 78.41$\pm$3.62& 89.85$\pm$2.20\\ && ROS& \textbf{96.26$\pm$0.00}& \textbf{86.07$\pm$0.04}& \textbf{94.11$\pm$1.46}& \textbf{87.66$\pm$3.84}& \textbf{99.25$\pm$1.00}\\ &Ada& UGRWO& \textbf{94.47$\pm$0.01}& 69.33$\pm$0.07& \textbf{90.18$\pm$3.38}& 67.57$\pm$18.52& \textbf{98.66$\pm$1.53}\\ &Boost& GRWO& 84.84$\pm$0.03& \textbf{79.07$\pm$0.04}& 82.42$\pm$4.13& \textbf{82.15$\pm$4.22} &84.69$\pm$2.78\\ &M1& RWO& 92.10$\pm$0.01& 72.49$\pm$0.07& 87.76$\pm$2.41& 80.30$\pm$1.87& 93.41$\pm$1.87\\ && MWMOTE& 91.25$\pm$0.02& 70.25$\pm$3.03& 86.25$\pm$6.25& 80.45$\pm$0.12& 90.15$\pm$3.02\\ && SMOTE& 88.31$\pm$0.02& 62.87$\pm$0.06& 82.22$\pm$3.24& 71.40$\pm$4.69& 92.23$\pm$2.72\\ && ROS& 88.96$\pm$0.01& 57.03$\pm$0.05& 82.44$\pm$2.40& 67.44$\pm$4.33& 92.78$\pm$2.40\\ Ionosphere& NB& UGRWO& 
\textbf{98.39$\pm$0.02}& 89.71$\pm$0.14& \textbf{97.22$\pm$4.14}& \textbf{93.78$\pm$9.50}& \textbf{98.42$\pm$3.55}\\ && GRWO& 85.71$\pm$0.23& \textbf{95.43$\pm$0.02}& 92.80$\pm$3.26& 91.06$\pm$2.98& 89.10$\pm$12.7\\ && RWO& 82.95$\pm$3.54& 90.09$\pm$2.75& 87.46$\pm$3.32& 86.88$\pm$3.71& 84.92$\pm$3.21\\ && MWMOTE& 80.15$\pm$0.12& 90.15$\pm$0.13& 90.15$\pm$0.13& 92.15$\pm$6.03& 95.15$\pm$3.02\\ && SMOTE& 87.82$\pm$0.03& 69.04$\pm$0.08& 82.57$\pm$5.34& 79.29$\pm$6.64& 85.71$\pm$5.49\\ && ROS&90.03$\pm$0.02& 69.58$\pm$0.07& 85.01$\pm$3.53& 81.02$\pm$5.88& 87.95$\pm$3.77\\ & 5-NN& UGRWO& 92.28$\pm$0.05& 78.79$\pm$0.07& 88.59$\pm$4.83& 92.50$\pm$3.24& 86.54$\pm$9.36\\ && GRWO& 84.30$\pm$0.05& 90.09$\pm$0.04& 87.37$\pm$4.13& 85.60$\pm$4.74& 75.84$\pm$7.88\\ && RWO& \textbf{99.29$\pm$0.00}& \textbf{97.48$\pm$0.03}& \textbf{98.87$\pm$1.48}& \textbf{97.83$\pm$2.66}& \textbf{99.73$\pm$0.87}\\ && MWMOTE& 96.25$\pm$0.00& 94.52$\pm$0.02& 97.45$\pm$1.23 &96.25$\pm$2.12& 98.52$\pm$1.54\\ && SMOTE& 98.41$\pm$0.00& 95.51$\pm$0.02& 97.66$\pm$1.35& 96.78$\pm$2.33& 98.57$\pm$1.57\\ && ROS& 98.59$\pm$0.01& 95.43$\pm$0.03& 97.85$\pm$1.56& 97.18$\pm$2.13& 98.40$\pm$1.96\\ & DT& UGRWO& 97.86$\pm$0.02& 84.80$\pm$0.16& 96.27$\pm$3.67& 89.48$\pm$14.12& 98.36$\pm$2.63\\ && GRWO& 90.89$\pm$0.05& 92.16$\pm$0.03& 91.47$\pm$5.67& 91.23$\pm$5.65& 91.90$\pm$5.88\\ && RWO& 95.21$\pm$0.01& 83.88$\pm$0.05& 92.62$\pm$2.65& 89.18$\pm$4.32& 95.34$\pm$2.10\\ && MWMOTE& \textbf{98.52$\pm$0.01}& \textbf{93.25$\pm$0.02}& \textbf{97.82$\pm$2.03}& \textbf{93.85$\pm$5.02}& \textbf{99.89$\pm$1.14}\\ && SMOTE& 93.33$\pm$0.02& 80.29$\pm$0.07& 90.06$\pm$3.42& 85.79$\pm$6.76& 94.12$\pm$2.37\\ && ROS& 98.18$\pm$0.01& 93.17$\pm$0.05& 97.32$\pm$2.16 &93.62$\pm$4.73& 99.86$\pm$0.42\\ &Ada& UGRWO& \textbf{96.80$\pm$0.02}& 83.43$\pm$0.11& \textbf{94.41$\pm$3.61}& 88.77$\pm$7.49& \textbf{97.83$\pm$3.83}\\ &Boost& GRWO& 88.64$\pm$0.05& \textbf{92.37$\pm$0.04}& 90.91$\pm$4.70& \textbf{89.58$\pm$4.37}& 83.82$\pm$4.44\\ &M1& 
RWO& 92.32$\pm$0.02& 75.97$\pm$0.11& 88.42$\pm$3.83& 86.12$\pm$10.45& 89.74$\pm$2.25\\ && MWMOTE& 95.02$\pm$0.01& 91.25$\pm$0.02& 93.25$\pm$2.23& 88.25$\pm$0.23& 95.25$\pm$1.45\\ && SMOTE& 91.72$\pm$0.02& 77.97$\pm$0.05& 88.06$\pm$2.97& 85.38$\pm$6.18& 90.31$\pm$5.96\\ && ROS &94.89$\pm$0.01& 82.61$\pm$0.07& 92.15$\pm$2.29& 89.17$\pm$8.26& 94.18$\pm$3.00\\ Musk& NB& UGRWO& 92.25$\pm$0.02& 92.56$\pm$0.02& 93.52$\pm$1.02& 93.01$\pm$0.03& 90.26$\pm$0.24\\ && GRWO& 92.29$\pm$0.01& 92.99$\pm$0.00& 93.25$\pm$0.30& \textbf{94.52$\pm$1.06}& \textbf{91.47$\pm$2.06}\\ && RWO& \textbf{93.45$\pm$0.01}& \textbf{93.63$\pm$0.00}& \textbf{93.78$\pm$0.38}& 93.82$\pm$1.04& 90.25$\pm$1.97\\ && MWMOTE& 92.23$\pm$0.01& 92.52$\pm$0.02& 92.42$\pm$1.02& 92.25$\pm$2.26& 90.26$\pm$1.04\\ && SMOTE& 91.29$\pm$0.01& 92.20$\pm$0.00& 91.58$\pm$0.35& 92.63$\pm$1.26& 91.25$\pm$2.47\\ && ROS& 90.77$\pm$0.01& 91.55$\pm$0.00& 91.66$\pm$0.43& 91.47$\pm$2.32& 89.80$\pm$3.25\\ &5-NN& UGRWO& \textbf{98.99$\pm$0.01}& \textbf{99.68$\pm$0.01}& \textbf{99.65$\pm$0.03}& \textbf{98.95$\pm$0.05}& \textbf{99.01$\pm$0.54}\\ && GRWO& 97.72$\pm$0.01& 99.67$\pm$0.00& 99.39$\pm$0.20& 98.97$\pm$0.36& 98.48$\pm$0.69\\ && RWO& 98.25$\pm$0.01& 99.57$\pm$0.00& 99.31$\pm$0.40& 98.89$\pm$1.01& 98.22$\pm$2.03\\ && MWMOTE& 98.02$\pm$0.01& 98.54$\pm$0.02& 99.25$\pm$2.14& 98.15$\pm$2.56& 98.14$\pm$2.00\\ && SMOTE& 97.34$\pm$0.01& 99.67$\pm$0.00& 99.42$\pm$0.31& 98.28$\pm$1.28& 96.86$\pm$2.50\\ && ROS& 98.29$\pm$0.00& 99.58$\pm$0.00& 99.32$\pm$0.33& 98.90$\pm$0.59 &98.22$\pm$1.12\\ &DT&UGRWO& 95.60$\pm$0.00& 96.85$\pm$0.00& 99.25$\pm$0.00& 96.45$\pm$0.12& 96.67$\pm$1.02\\ &&GRWO& 93.43$\pm$0.027& \textbf{97.44$\pm$0.00}& \textbf{96.03$\pm$0.62}& 97.82$\pm$1.79& 96.24$\pm$3.36\\ && RWO& \textbf{95.77$\pm$0.01}& 97.20$\pm$0.00& 96.72$\pm$0.54& \textbf{98.09$\pm$1.01}& \textbf{97.07$\pm$1.94}\\ && MWMOTE& 96.25$\pm$0.01& 96.62$\pm$0.02& 96.08$\pm$0.54& 96.25$\pm$2.14& 95.45$\pm$0.02\\ && SMOTE& 93.59$\pm$0.01& 96.45$\pm$0.00 
&96.03$\pm$0.38& 96.56$\pm$1.44& 95.74$\pm$2.81\\ && ROS& 92.73$\pm$0.00& 96.44$\pm$0.00& 95.10$\pm$0.33& 96.79$\pm$0.70& 95.29$\pm$1.46\\ &Ada &UGRWO& 94.51$\pm$0.01& 94.39$\pm$0.02& 94.98$\pm$2.02& 94.00$\pm$1.35& 94.63$\pm$2.58\\ &Boost& GRWO& 94.95$\pm$0.01& 92.36$\pm$0.00& 94.79$\pm$0.32& 96.29$\pm$1.67& 93.03$\pm$3.48\\ &M1& RWO& 96.30$\pm$0.00& 92.11$\pm$0.00& 94.57$\pm$0.35& 96.03$\pm$0.87& 94.59$\pm$1.70\\ && MWMOTE& 96.02$\pm$0.01& 94.01$\pm$0.02& 95.02$\pm$1.32& 96.25$\pm$2.01& 93.54$\pm$2.03\\ && SMOTE& \textbf{94.49$\pm$0.01}& \textbf{98.34$\pm$0.00}& \textbf{98.83$\pm$0.38}& \textbf{96.57$\pm$1.26}& \textbf{95.60$\pm$2.34}\\ && ROS& 95.38$\pm$0.01& 96.90$\pm$0.00& 93.23$\pm$0.04& 96.12$\pm$0.95& 92.81$\pm$1.87\\ Satimage& NB &UGRWO& \textbf{94.52$\pm$0.01}& 98.54$\pm$0.02& \textbf{97.85$\pm$2.31}& \textbf{95.87$\pm$0.45}& 89.99$\pm$0.02\\ &&GRWO& 93.48$\pm$0.01& \textbf{98.75$\pm$0.00}& 97.88$\pm$0.44& 94.11$\pm$1.75& 88.95$\pm$3.43\\ && RWO& 94.85$\pm$0.00& 96.49$\pm$0.00& 95.82$\pm$0.66& 95.05$\pm$0.83& \textbf{90.75$\pm$1.67}\\ && MWMOTE& 93.45$\pm$0.01& 97.85$\pm$0.00& 96.25$\pm$0.02& 94.85$\pm$6.23& 89.25$\pm$2.47\\ && SMOTE& 94.46$\pm$0.01& 96.86$\pm$0.00& 95.99$\pm$0.79& 94.71$\pm$1.09& 90.04$\pm$2.09\\ && ROS& 94.81$\pm$0.00& 96.47$\pm$0.00& 95.79$\pm$0.36& 95.00$\pm$0.48& 90.61$\pm$1.14\\ & 5-NN& UGRWO& \textbf{99.82$\pm$0.00}& \textbf{99.80$\pm$0.01}& \textbf{99.58$\pm$0.03}& \textbf{99.68$\pm$0.23}& \textbf{100.0$\pm$0.00}\\ && GRWO& 98.88$\pm$0.00& 99.66$\pm$0.00& 99.48$\pm$0.25& 99.35$\pm$0.47& 99.12$\pm$0.96\\ && RWO& 99.50$\pm$0.00 &99.63$\pm$0.00& 99.57$\pm$0.25& 99.63$\pm$0.22& 100.0$\pm$0.00\\ && MWMOTE& 98.23$\pm$0.00& 98.25$\pm$0.02& 98.58$\pm$0.02& 99.02$\pm$3.02& 99.25$\pm$0.87\\ && SMOTE& 99.32$\pm$0.00& 99.57$\pm$0.00& 99.48$\pm$0.29& 99.56$\pm$0.24& 99.91$\pm$0.13\\ && ROS& 99.32$\pm$0.00& 99.50$\pm$0.00& 99.42$\pm$0.24& 99.45$\pm$0.24& 99.66$\pm$0.35\\ & DT& UGRWO& \textbf{99.50$\pm$0.00}& \textbf{99.68$\pm$0.00}& 
\textbf{99.62$\pm$0.21}& \textbf{99.63$\pm$0.23}& \textbf{99.65$\pm$0.25}\\ && GRWO& 97.82$\pm$0.00& 99.35$\pm$0.00& 99.00$\pm$0.19& 98.58$\pm$0.36& 97.82$\pm$0.83\\ && RWO& 98.84$\pm$0.00& 99.15$\pm$0.00& 99.02$\pm$0.30& 99.05$\pm$0.33& 98.86$\pm$0.68\\ && MWMOTE& 98.58$\pm$0.01& 99.05$\pm$0.02& 99.00$\pm$2.03& 98.58$\pm$6.23& 98.87$\pm$0.87\\ && SMOTE& 99.00$\pm$0.00& 99.38$\pm$0.00& 99.24$\pm$0.24& 99.23$\pm$0.43& 99.20$\pm$0.65\\ && ROS& 99.49$\pm$0.00& 99.62$\pm$0.00& 99.56$\pm$0.15& 99.88$\pm$0.16& 99.60$\pm$0.12\\ &Ada& UGRWO& \textbf{97.85$\pm$0.03}& \textbf{98.85$\pm$0.01}& \textbf{98.99$\pm$3.02}& \textbf{97.68$\pm$0.36}& \textbf{96.54$\pm$1.24}\\ &Boost& GRWO& 96.88$\pm$0.01& 98.18$\pm$0.00& 98.61$\pm$0.24& 97.41$\pm$0.92& 95.30$\pm$1.79\\ &M1& RWO& 97.46$\pm$0.00& 98.19$\pm$0.00& 97.88$\pm$0.68& 97.60$\pm$0.78& 95.87$\pm$1.40\\ && MWMOTE& 96.25$\pm$0.02& 98.05$\pm$0.02& 98.52$\pm$0.02& 96.25$\pm$0.60& 96.25$\pm$1.23\\ && SMOTE& 97.76$\pm$0.00& 98.65$\pm$0.00& 98.32$\pm$0.45& 97.05$\pm$0.60& 96.50$\pm$1.40\\ && ROS& 97.56$\pm$0.00& 98.32$\pm$0.00& 98.05$\pm$0.48& 97.75$\pm$0.51& 95.96$\pm$0.70\\ Segmentation& NB& UGRWO& \textbf{99.85$\pm$0.00}& \textbf{99.33$\pm$0.02}& \textbf{99.76$\pm$0.75}& \textbf{99.85$\pm$0.45}& \textbf{99.77$\pm$0.71}\\ && GRWO& 97.12$\pm$0.01& 96.41$\pm$0.02& 96.81$\pm$1.93& 96.81$\pm$2.01& 96.70$\pm$1.90\\ && RWO& 98.17$\pm$0.00& 94.38$\pm$0.02& 97.24$\pm$0.99& 96.44$\pm$1.69& 97.97$\pm$1.45\\ && MWMOTE& 98.78$\pm$0.00& 96.58$\pm$0.02& 96.25$\pm$0.02& 96.58$\pm$2.01& 96.58$\pm$0.25\\ && SMOTE& 97.70$\pm$0.00& 94.09$\pm$0.02& 96.69$\pm$1.33& 96.27$\pm$1.82& 97.18$\pm$1.61\\ && ROS& 97.69$\pm$0.00& 92.99$\pm$0.02& 96.53$\pm$1.16& 95.97$\pm$1.52& 97.02$\pm$1.84\\ & 5-NN& UGRWO& \textbf{93.39$\pm$0.06}& \textbf{92.53$\pm$0.07}& \textbf{93.03$\pm$6.71}& \textbf{92.78$\pm$6.71}& 93.00$\pm$12.01\\ && GRWO& 84.65$\pm$0.12& 94.64$\pm$0.04& 92.08$\pm$6.34& 90.00$\pm$8.50& 86.66$\pm$13.14\\ && RWO& 85.30$\pm$0.08& 88.15$\pm$0.06& 
86.90$\pm$7.15& 86.40$\pm$7.34& 82.66$\pm$10.03\\ && MWMOTE& 92.03$\pm$0.01& 91.25$\pm$0.06& 92.25$\pm$4.12& 90.12$\pm$3.02& 92.25$\pm$6.02\\ && SMOTE& 89.47$\pm$0.07& 90.34$\pm$0.07& 90.00$\pm$7.04& 90.46$\pm$6.84& 97.69$\pm$7.29\\ && ROS& 89.13$\pm$0.05& 87.38$\pm$0.07& 88.36$\pm$6.34& 88.26$\pm$6.86& \textbf{100.00$\pm$0.00}\\ & DT& UGRWO& 89.93$\pm$0.08& 90.79$\pm$0.07& 90.10$\pm$9.91& 89.48$\pm$9.88& 88.33$\pm$13.72\\ && GRWO& 94.78$\pm$0.06& 98.36$\pm$0.01& 97.51$\pm$2.90& 95.98$\pm$5.98& 93.57$\pm$11.44\\ && RWO& 98.04$\pm$0.02& 98.33$\pm$0.02& 98.19$\pm$2.54& 98.17$\pm$2.57& 98.04$\pm$3.15\\ && MWMOTE& 97.25$\pm$0.02& 97.58$\pm$0.02& 98.02$\pm$2.05& 98.01$\pm$2.54& 98.05$\pm$3.14\\ && SMOTE& 97.28$\pm$0.03& 98.06$\pm$0.02& 97.74$\pm$2.65& 97.69$\pm$2.95& 97.69$\pm$5.19\\ && ROS& \textbf{98.42$\pm$0.02}& \textbf{98.57$\pm$0.02}& \textbf{98.50$\pm$2.12}& \textbf{98.54$\pm$2.04}& \textbf{99.37$\pm$1.97}\\ &Ada&UGRWO& 96.33$\pm$0.07& 96.33$\pm$0.07& 96.36$\pm$7.66& 96.32$\pm$7.73& 95.00$\pm$11.24\\ &Boost& GRWO& 93.65$\pm$0.06& 98.07$\pm$0.01& 97.04$\pm$2.83& 94.92$\pm$4.90& 91.33$\pm$9.18\\ &M1& RWO& 97.62$\pm$0.02& 98.10$\pm$0.02& 97.89$\pm$2.48& 97.72$\pm$2.68& 96.08$\pm$4.61\\ && MWMOTE& 85.71$\pm$0.23& 95.43$\pm$0.02& 92.80$\pm$3.26& 91.06$\pm$2.98& 89.10$\pm$12.7\\ && SMOTE& 98.82$\pm$0.01& 99.17$\pm$0.01& 99.03$\pm$1.55& 98.93$\pm$1.74& 98.46$\pm$3.24\\ && ROS& \textbf{99.09$\pm$0.01}& \textbf{99.14$\pm$0.01}& \textbf{99.11$\pm$1.42}& \textbf{99.15$\pm$1.36}& \textbf{100.00$\pm$0.00}\\ vehicle& NB& UGRWO& 86.09$\pm$0.05& 72.23$\pm$0.04& 80.93$\pm$3.28& 82.58$\pm$3.39& 78.88$\pm$7.44\\ && GRWO& 83.31$\pm$0.03& \textbf{85.93$\pm$0.03}& 84.73$\pm$3.33& 84.31$\pm$3.41& 78.03$\pm$4.60\\ && RWO& \textbf{91.31$\pm$0.01}& 85.67$\pm$0.03& \textbf{89.18$\pm$2.34}& \textbf{89.77$\pm$2.46}& 87.68$\pm$2.58\\ && MWMOTE& 90.25$\pm$0.02& 85.25$\pm$0.03& 87.25$\pm$0.02& 88.58$\pm$1.25& 99.25$\pm$0.03\\ && SMOTE& 86.29$\pm$0.01& 70.53$\pm$0.04& 81.30$\pm$2.50& 
74.38$\pm$3.85& \textbf{96.97$\pm$1.35}\\ && ROS& 87.36$\pm$0.01& 68.39$\pm$0.06& 81.96$\pm$2.56& 73.25$\pm$5.11& 95.89$\pm$1.87\\ & 5-NN& UGRWO& \textbf{97.20$\pm$0.01}& 90.03$\pm$0.04& 95.49$\pm$1.67& 93.89$\pm$4.35& 96.37$\pm$1.75\\ && GRWO& 87.75$\pm$0.03& 90.82$\pm$0.02& 88.75$\pm$3.13& 88.38$\pm$3.25& 82.37$\pm$4.78\\ && RWO& 89.11$\pm$0.02& 83.88$\pm$0.02& 87.01$\pm$2.30& 88.70$\pm$4.15& 82.32$\pm$4.15\\ && MWMOTE& 97.02$\pm$0.02& 92.25$\pm$0.02& 94.25$\pm$0.02& 90.25$\pm$0.63& 98.52$\pm$1.25\\ && SMOTE& 96.42$\pm$0.01& \textbf{93.89$\pm$0.02}& 95.49$\pm$1.58& \textbf{94.11$\pm$2.10}& \textbf{99.89$\pm$0.31}\\ && ROS& 96.67$\pm$0.00& 93.25$\pm$0.01& \textbf{95.54$\pm$0.85}& 93.60$\pm$1.35& 99.66$\pm$0.43\\ & DT& UGRWO& 97.93$\pm$0.01& 92.02$\pm$0.05& 96.71$\pm$2.64& 96.51$\pm$3.56& 97.83$\pm$2.36\\ && GRWO& 94.28$\pm$0.02& 94.97$\pm$0.01& 94.65$\pm$1.94& 94.61$\pm$1.99& 94.37$\pm$3.48\\ && RWO& 97.01$\pm$0.00& 94.53$\pm$0.01& 96.14$\pm$1.21& 95.87$\pm$1.44& 96.73$\pm$1.49\\ &&MWMOTE& 97.25$\pm$0.02& 93.02$\pm$0.01& 96.25$\pm$0.91& 94.25$\pm$0.02& 96.25$\pm$1.25\\ && SMOTE& 96.78$\pm$0.00& 95.03$\pm$0.01& 96.09$\pm$1.05& 95.88$\pm$1.20& 96.88$\pm$1.87\\ && ROS& \textbf{98.63$\pm$0.00}& \textbf{97.37$\pm$0.01}& \textbf{98.20$\pm$1.20}& \textbf{97.52$\pm$1.57}& \textbf{99.74$\pm$0.56}\\ & Ada& UGRWO& \textbf{96.96$\pm$0.01}& 90.60$\pm$0.05& \textbf{95.41$\pm$2.76}& 92.65$\pm$4.05& \textbf{98.93$\pm$1.80}\\ &Boost& GRWO& 91.93$\pm$0.01& \textbf{92.66$\pm$0.01}& 91.90$\pm$1.30& 91.82$\pm$1.38& 96.14$\pm$3.86\\ &M1& RWO& 94.69$\pm$0.01& 90.68$\pm$0.01 &93.26$\pm$1.44& \textbf{93.12$\pm$1.99}& 93.20$\pm$4.48\\ && MWMOTE& 93.26$\pm$0.01& 91.25$\pm$0.03& 92.25$\pm$2.13& 92.10$\pm$1.16 &92.25$\pm$2.42\\ && SMOTE& 94.89$\pm$0.01& 91.20$\pm$0.03& 93.54$\pm$2.17& 91.88$\pm$2.84& 98.69$\pm$1.34\\ && ROS& 95.37$\pm$0.01& 90.08$\pm$0.03& 93.69$\pm$1.93& 90.63$\pm$3.07& 99.83$\pm$0.35\\ Sonar& NB& UGRWO& \textbf{95.16$\pm$0.03}& 64.16$\pm$0.13& \textbf{91.45$\pm$6.13}& 
86.71$\pm$12.7& \textbf{91.94$\pm$5.95}\\ && GRWO& 84.61$\pm$0.09& \textbf{78.92$\pm$0.07}& 82.18$\pm$9.94& 85.45$\pm$8.83& 74.88$\pm$13.4\\ && RWO& 92.61$\pm$0.03& 73.36$\pm$0.09& 88.46$\pm$4.82& \textbf{91.12$\pm$4.69}& 87.11$\pm$5.32\\ && MWMOTE& 91.25$\pm$0.03& 75.25$\pm$0.08& 88.61$\pm$4.25& 90.25$\pm$2.85& 87.95$\pm$0.03\\ && SMOTE& 88.36$\pm$0.02& 52.20$\pm$0.08& 81.38$\pm$2.95& 69.15$\pm$8.94& 87.20$\pm$4.32\\ && ROS& 91.37$\pm$0.02& 51.92$\pm$0.16& 85.41$\pm$3.97& 67.61$\pm$14.75& 91.75$\pm$2.67\\ & 5-NN& UGRWO& 97.19$\pm$0.03& 77.00$\pm$0.27& 95.01$\pm$6.44& 85.87$\pm$18.16& 98.35$\pm$2.89\\ && GRWO& 91.78$\pm$0.03& \textbf{84.53$\pm$0.08}& 88.98$\pm$4.02& 87.42$\pm$4.92& 93.23$\pm$5.18\\ && RWO& \textbf{97.82$\pm$0.01}& 86.23$\pm$0.08& \textbf{96.25$\pm$2.26}& \textbf{87.29$\pm$7.78}& \textbf{100.00$\pm$0.00}\\ && MWMOTE& 95.25$\pm$0.01& 85.25$\pm$0.08& 95.25$\pm$2.85& 87.01$\pm$1.24& 99.58$\pm$0.02\\ && SMOTE& 96.72$\pm$0.01& 82.23$\pm$0.06& 94.46$\pm$1.75& 83.67$\pm$5.66& \textbf{100.00$\pm$0.00}\\ && ROS& 97.74$\pm$0.01& 85.60$\pm$0.08& 96.10$\pm$2.15& 86.72$\pm$7.75& \textbf{100.00$\pm$0.00}\\ & DT& UGRWO& \textbf{96.94$\pm$0.02}& 72.57$\pm$0.22& \textbf{95.45$\pm$5.13}& \textbf{80.60$\pm$19.27}& \textbf{98.78$\pm$2.55}\\ && GRWO& 85.69$\pm$0.05& \textbf{74.31$\pm$0.09}& 81.71$\pm$6.90& 79.84$\pm$7.11& 84.97$\pm$8.82\\ && RWO& 94.65$\pm$0.01& 69.68$\pm$0.08& 90.91$\pm$2.58& 79.07$\pm$7.45& 95.71$\pm$2.31\\ && MWMOTE& 93.25$\pm$0.01&70.25$\pm$0.02& 90.25$\pm$2.54& 79.25$\pm$2.45& 94.25$\pm$3.02\\ && SMOTE& 93.22$\pm$0.02&68.91$\pm$0.09& 88.92$\pm$3.90& 78.35$\pm$7.22& 94.20$\pm$4.38\\ && ROS& 90.40$\pm$0.02& 50.80$\pm$0.09& 83.98$\pm$4.42& 67.40$\pm$6.29& 90.36$\pm$4.33\\ & Ada& UGRWO& \textbf{98.53$\pm$0.02}& 68.66$\pm$0.17& \textbf{97.24$\pm$3.91}& 75.04$\pm$14.18& \textbf{100.00$\pm$0.00}\\ &Boost& GRWO& 89.29$\pm$0.04& \textbf{78.51$\pm$0.06}& 85.26$\pm$6.69& \textbf{82.75$\pm$5.28}& 90.35$\pm$7.08\\ &M1& RWO& 94.13$\pm$0.02& 55.95$\pm$0.25& 
89.75$\pm$4.53& 65.69$\pm$22.40& 97.41$\pm$3.74\\ && MWMOTE& 92.25$\pm$0.04& 69.25$\pm$0.02& 95.25$\pm$6.23 &80.25$\pm$1.23& 99.58$\pm$2.45\\ && SMOTE& 93.39$\pm$0.11& 61.24$\pm$0.11& 88.75$\pm$2.75& 68.86$\pm$10.01& 97.73$\pm$2.46\\ && ROS& 94.93$\pm$0.01& 61.48$\pm$0.13& 91.05$\pm$2.24& 67.45$\pm$10.90& 99.48$\pm$0.82\\ Glass& NB& UGRWO& 87.66$\pm$0.09& 88.19$\pm$0.07& 86.20$\pm$8.58& 88.60$\pm$8.52& 79.16$\pm$14.53\\ && GRWO& 84.93$\pm$0.10& 94.36$\pm$0.03& 91.79$\pm$6.04& 87.46$\pm$8.15& 79.30$\pm$12.83\\ && RWO& 88.10$\pm$0.05& 94.30$\pm$0.02& 92.31$\pm$3.56& 89.81$\pm$4.22& 83.36$\pm$6.61\\ && MWMOTE& 90.25$\pm$0.01& 96.02$\pm$0.03& 93.25$\pm$0.03 &90.25$\pm$3.02& 95.25$\pm$0.02\\ && SMOTE& 91.12$\pm$0.06& \textbf{96.17$\pm$0.02}& 94.66$\pm$3.85& 94.03$\pm$5.56& 92.91$\pm$9.98\\ && ROS& \textbf{92.52$\pm$0.06}& 95.85$\pm$0.03& \textbf{94.67$\pm$4.15}& \textbf{94.96$\pm$4.88}& \textbf{96.18$\pm$8.06}\\ & 5-NN& UGRWO& 88.53$\pm$0.09& 94.81$\pm$0.03& 92.68$\pm$5.70& 89.42$\pm$8.94& 80.69$\pm$15.90\\ && GRWO& 85.80$\pm$0.07& 96.56$\pm$0.01& 94.49$\pm$2.76& 87.89$\pm$7.35& 78.66$\pm$13.89\\ && RWO& 81.44$\pm$0.10& 92.49$\pm$0.03& 89.32$\pm$5.51& 83.41$\pm$8.29& 70.72$\pm$12.65\\ && MWMOTE& 95.25$\pm$0.13& 98.25$\pm$0.01& 98.25$\pm$0.02& 90.25$\pm$2.59& 95.25$\pm$3.26\\ && SMOTE& 98.23$\pm$0.02& \textbf{99.21$\pm$0.01}& 98.91$\pm$1.74& \textbf{99.22$\pm$1.24}& \textbf{100.00$\pm$0.00}\\ && ROS& \textbf{98.61$\pm$0.03}& 99.18$\pm$0.01& \textbf{98.97$\pm$2.31}& 99.20$\pm$1.80& \textbf{100.00$\pm$0.00}\\ & DT& UGRWO& 96.06$\pm$0.05& 97.99$\pm$0.02& 97.35$\pm$3.65& 96.68$\pm$4.91& 95.00$\pm$8.74\\ && GRWO& 97.97$\pm$0.03& 99.24$\pm$0.01& 98.90$\pm$1.76& 98.35$\pm$2.91& 97.63$\pm$4.93\\ && RWO& 93.99$\pm$0.06& 96.97$\pm$0.03& 95.97$\pm$4.65& 95.19$\pm$5.09& 93.00$\pm$6.74\\ && MWMOTE& 98.25$\pm$0.02& 98.25$\pm$0.02& 95.25$\pm$0.98& 95.18$\pm$0.02& 99.25$\pm$0.03\\ && SMOTE& 99.47$\pm$0.01& \textbf{99.74$\pm$0.05}& 99.65$\pm$1.09& \textbf{99.75$\pm$0.80}& 
\textbf{100.00$\pm$0.00}\\ && ROS& \textbf{99.52$\pm$0.01}& 99.74$\pm$0.00& \textbf{99.66$\pm$1.05}& 99.74$\pm$0.80& \textbf{100.00$\pm$0.00}\\ & Ada& UGRWO& 95.11$\pm$0.04& 97.22$\pm$0.03& 96.36$\pm$5.11& 95.68$\pm$4.19& 94.02$\pm$8.20\\ &Boost& GRWO& 96.20$\pm$0.05& 98.49$\pm$0.01& 97.84$\pm$3.05& 96.92$\pm$4.54& 95.13$\pm$8.61\\ &M1& RWO& 94.87$\pm$0.05& 97.51$\pm$0.02& 96.65$\pm$3.17& 95.66$\pm$4.40& 93.09$\pm$8.20\\ && MWMOTE& 99.25$\pm$0.02& \textbf{99.85$\pm$0.02}& \textbf{99.85$\pm$0.25}& \textbf{99.85$\pm$0.02}& 99.85$\pm$3.02\\ && SMOTE& \textbf{99.33$\pm$0.02}& 99.75$\pm$0.00& 99.64$\pm$1.12& 99.35$\pm$2.04& 98.75$\pm$3.95\\ && ROS& 99.04$\pm$0.02& 99.48$\pm$0.01& 99.33$\pm$1.40& 99.49$\pm$1.06& \textbf{100.00$\pm$0.00}\\ \end{longtable}
\section{Introduction} Since the thermodynamics of black holes was established, the entropy of the horizon has been a fascinating research interest. A recent popular topic is the additional equalities in thermodynamics, which are expected to be useful for understanding the origin of black hole entropy at the microscopic level. These equalities include the entropy product of multi-horizon black holes in supergravity models \cite{Cvetic:2010mn,Toldo:2012ec,Cvetic:2013eda,Lu:2013ura,Chow:2013tia}, Einstein gravity \cite{Detournay:2012ug,Castro:2012av,Visser:2012zi,Chen:2012mh,Castro:2013kea,Visser:2012wu,Abdolrahimi:2013cza,Pradhan:2013hqa} and other modified gravity models \cite{Castro:2013pqa,Cvetic:2013eda,Faraoni:2012je,Lu:2013eoa,Anacleto:2013esa}, in both four and higher dimensions, as well as the entropy sum \cite{Wang:2013smb,Xu:2013zpa} of multi-horizon black holes. It has been shown that the entropy product and sum are often independent of the mass of the black hole \cite{Cvetic:2010mn,Castro:2012av,Toldo:2012ec,Chen:2012mh,Visser:2012zi,Cvetic:2013eda,Abdolrahimi:2013cza,Lu:2013ura, Anacleto:2013esa,Chow:2013tia,Castro:2013kea,Lu:2013eoa,Wang:2013smb,Xu:2013zpa}, while the former fails in some asymptotically non-flat spacetimes \cite{Faraoni:2012je,Castro:2013pqa,Detournay:2012ug,Visser:2012wu}. However, in order to preserve the mass independence, one needs to include the effect of the un-physical ``virtual'' horizons \cite{Visser:2012wu,Wang:2013smb,Xu:2013zpa}. Only in this way are these additional equalities of multi-horizon black holes ``universal''. On the other hand, using these new thermodynamic relations, the construction of a thermodynamics for the inner horizon of a black hole has recently attracted more attention \cite{Detournay:2012ug,Castro:2012av,Chen:2012mh,Pradhan:2013hqa,Castro:2013pqa,Pradhan:2013xha,Ansorg:2008bv,Ansorg:2009yi,Ansorg:2010ru}. 
These developments make the properties of the inner horizons of black holes all the more interesting to investigate, especially with a view toward finding further multi-horizon equalities of the type above. Looking at the entropy products and sums of the multiple horizons of a black hole, one finds that they depend only on the conserved charges: the electric charge $Q$, the angular momentum $J$, or the cosmological constant $\Lambda$ (which can be treated as a pressure in AdS spacetime after interpreting the mass of the black hole as enthalpy rather than internal energy of the system). For example, the area product (entropy product) of a stationary $(3+1)$-dimensional black hole depends on $Q$ and $J$ \cite{Ansorg:2008bv,Ansorg:2009yi,Ansorg:2010ru}, \begin{align} A_{C}A_{E}=(8\pi)^2\left(J^2+\frac{Q^4}{4}\right), \end{align} where $A_{C}$ and $A_{E}$ denote the areas of the inner Cauchy horizon and the event horizon, respectively. This conclusion has been generalized to rotating multi-charge black holes, in both asymptotically flat and asymptotically anti-de Sitter spacetimes in four and higher dimensions \cite{Cvetic:2010mn}. The $\Lambda$ dependence appears in the entropy sum of the four dimensional RN-(A)dS black hole \cite{Wang:2013smb,Xu:2013zpa}: $\sum S_i=\frac{6\pi k}{\Lambda}.$ In this paper, we introduce further types of entropy products and study their conserved-charge dependence in higher dimensions. We focus on higher dimensions because studies of gravity there may reveal deeper structures of general relativity, such as stability issues and the classification of spacetime singularities in various dimensions. Entropy products of multiple horizons in four dimensions help us to understand the origin of black hole entropy at the microscopic level, and entropy relations in higher dimensions may shed further light on this issue. This is not an entirely new idea: in \cite{Visser:2012wu}, some interesting mass-independent entropy products were already presented in four dimensions. 
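As a simple consistency check (an illustration we add here, not part of the original references), consider the static limit $J=0$, where the geometry is the Reissner-Nordstr\"om black hole with horizon radii $r_\pm=M\pm\sqrt{M^2-Q^2}$, so that Vieta's theorem gives $r_+r_-=Q^2$ and \begin{align} A_{C}A_{E}=(4\pi r_-^2)(4\pi r_+^2)=16\pi^2(r_+r_-)^2=16\pi^2Q^4=(8\pi)^2\frac{Q^4}{4}, \notag \end{align} in agreement with the area product above: the mass $M$ drops out precisely through the Vieta product of the roots.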
Generalizing to higher dimensional spacetimes, the results in this paper serve this aim and give an overall picture of the extra thermodynamic relationships, especially of some ``universal'', mass-independent entropy products. First, we find that the entropy product $\sum_{1\leq i<j\leq D}(S_{i}S_{j})^{\frac{1}{d-2}}$ depends only on the cosmological constant $\Lambda$ and the horizon topology $k$, and not on the other conserved charges. Here and below we use $D$ to denote the number of roots of the horizon polynomial, i.e.\ the number of horizons. For charged black holes, another type of entropy product, $\prod_{i=1}^DS_i$, depends only on the electric charge $Q$ (and the cosmological constant). The electric charge $Q$, however, plays the role of a switch in the latter case: when $Q$ vanishes in the solution, the entropy product becomes mass dependent. This paper is organized as follows. In the next Section, we investigate the ``entropy product'' of higher dimensional (A)dS black holes in Einstein-Maxwell gravity. In Section 3, we discuss the ``entropy product'' of (A)dS black holes in $f(R)$-Maxwell gravity. Section 4 is devoted to the conclusions and discussions. \section{Higher Dimensional (A)dS Black Holes in Einstein-Maxwell Gravity} \subsection{Charged Black Holes} The Einstein-Maxwell Lagrangian in higher dimensions $d$ $(d\geq4)$ reads \begin{align} \label{eq:Lag_of_Einstein-Maxwell} \mathcal{L}=\frac{1}{16\pi G}\int d^d x \sqrt{-g}[R-F_{\mu\nu}F^{\mu\nu}-2\Lambda], \end{align} where $\Lambda=\pm\frac{(d-1)(d-2)}{2\ell^2}$ is the cosmological constant associated with the cosmological scale $\ell$. A negative cosmological constant corresponds to AdS spacetime, while a positive one corresponds to dS spacetime. 
For a charged static black hole in a maximally symmetric Einstein space, the metric takes the ansatz \begin{align} \label{eq:metric ansatz} d s^2=-V(r) d t^2+\frac{d r^2}{V(r)}+r^2 d\Omega^2_{d-2}, \end{align} where $d\Omega^2_{d-2}$ represents the line element of a $(d-2)$-dimensional maximally symmetric Einstein space with constant curvature $(d-2)(d-3)k$, and $k=1, 0$ and $-1$ correspond to spherical, Ricci-flat and hyperbolic horizon topology, respectively. The metric function $V(r)$ is given by \cite{Liu:2003px,Astefanesei:2003gw,Brihaye:2008br,Belhaj:2012bg} \begin{align} V(r)=k-\frac{2M}{r^{d-3}}+\frac{Q^2}{r^{2(d-3)}}-\frac{2\Lambda}{(d-1)(d-2)}r^2, \label{V(r)1} \end{align} where $M$ is the mass parameter and $Q$ is the charge of the black hole. In Einstein-Maxwell gravity, the entropy of the horizon located at $r=r_i$ is, as usual, given by (we set $G=1$) \begin{align} S_i=\frac{A_i}{4}=\frac{\pi^{(d-1)/2}}{2\Gamma\left(\frac{d-1}{2}\right)}r_i^{d-2}, \label{entropy1} \end{align} where the $r_i$, the roots of $V(r)$, are the horizon radii. According to the horizon function (\ref{V(r)1}), this polynomial has in principle $D=2(d-2)$ roots. The real positive roots correspond to physical horizons, while the negative or complex ones correspond to un-physical horizons, which we call ``virtual'' horizons. We can now introduce several entropy relations, which are either independent of the conserved charges or depend only on specific ones. Rewriting $V(r)=0$ using (\ref{V(r)1}), we obtain \begin{equation} \frac{2\Lambda}{(d-1)(d-2)}r^{2d-4}-kr^{2(d-3)}+2Mr^{d-3}-Q^2=0. \end{equation} By Vieta's theorem applied to this equation, one finds \begin{align} &\sum_{1\leq i<j\leq D}r_{i}r_{j}=-\frac{k(d-1)(d-2)}{2\Lambda},\\ &\prod_{i=1}^Dr_i=-\frac{Q^2(d-1)(d-2)}{2\Lambda},\\ &\sum_{1\leq i_1< i_2<\cdots< i_{d-1}\leq D}\,\prod_{a=1}^{d-1}r_{i_a}=(-1)^{d-1}\frac{M(d-1)(d-2)}{\Lambda}. 
\end{align} Inserting the inverse of the entropy formula (\ref{entropy1}), \begin{equation} r_i=\left(\frac{2\Gamma\Bigl(\frac{d-1}{2}\Bigr)S_i}{\pi^{\frac{d-1}{2}}}\right)^{\frac{1}{d-2}}, \end{equation} one obtains the following type of entropy product, \begin{equation} \label{eq:varentropyproduct_of_E-M} \sum_{1\leq i<j\leq D}(S_{i}S_{j})^{\frac{1}{d-2}} =-\frac{k(d-1)(d-2)\pi^{\frac{d-1}{d-2}}}{2\Lambda\left(2\Gamma\Bigl(\frac{d-1}{2}\Bigr)\right)^{\frac{2}{d-2}}}, \end{equation} which depends only on the cosmological constant $\Lambda$ and the horizon topology $k$. Another type of entropy product, with the well-known mass independence, is \begin{equation} \prod_{i=1}^DS_i=\left(-\frac{Q^2(d-1)(d-2)\pi^{d-1}}{8\Lambda\Gamma\Bigl(\frac{d-1}{2}\Bigr)^2}\right)^{d-2}. \end{equation} Once again, it depends on the cosmological constant $\Lambda$ and the electric charge $Q$, but not on the mass $M$. The same phenomenon occurs in entropy product studies of (A)dS black holes \cite{Cvetic:2010mn,Visser:2012wu}. The last relation, \begin{align} \sum_{1\leq i_1< i_2<\cdots< i_{d-1}\leq D}\left(\prod_{a=1}^{d-1}S_{i_a}\right) ^{\frac{1}{d-2}}=(-1)^{d-1}\left(\frac{\pi^{\frac{d-1}{2}}}{2\Gamma\Bigl(\frac{d-1}{2}\Bigr)}\right) ^{\frac{d-1}{d-2}}\frac{M(d-1)(d-2)}{\Lambda}, \end{align} is mass dependent. \subsection{Neutral Black Holes} If the Maxwell term vanishes in the Lagrangian (\ref{eq:Lag_of_Einstein-Maxwell}), the situation differs from the discussion above. The Lagrangian reduces to the standard Einstein case, \begin{align} \mathcal{L}=\frac{1}{16\pi G}\int d^d x \sqrt{-g}(R-2\Lambda). \end{align} We keep the same metric ansatz (\ref{eq:metric ansatz}), i.e.\ the maximally symmetric, static black hole solution. In this case the metric function reduces to \begin{align} V(r)=k-\frac{2M}{r^{d-3}}-\frac{2\Lambda}{(d-1)(d-2)}r^2. \end{align} In fact, when $d=4$ the metric reduces to the Schwarzschild-(A)dS black hole. 
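The Vieta relations underlying these entropy products are easy to verify numerically. The following sketch (our own illustration; the parameter values $d=5$, $M$, $Q$, $k$, $\Lambda$ are arbitrary test choices, not taken from the text) computes all $D=2(d-2)$ generally complex roots of the charged horizon polynomial with \texttt{numpy} and checks the two symmetric functions used above.

```python
import numpy as np
from itertools import combinations

# Numerical spot-check of the Vieta relations for the charged horizon
# polynomial  2L/((d-1)(d-2)) r^{2d-4} - k r^{2(d-3)} + 2M r^{d-3} - Q^2 = 0.
# The parameter values below are arbitrary illustrative choices.
d, M, Q, k, Lam = 5, 1.3, 0.7, 1, -0.5

a = 2 * Lam / ((d - 1) * (d - 2))
coeffs = np.zeros(2 * d - 3)            # degree 2d-4 -> 2d-3 coefficients
coeffs[0] = a                           # r^{2d-4}
coeffs[(2 * d - 4) - 2 * (d - 3)] = -k  # r^{2(d-3)}
coeffs[(2 * d - 4) - (d - 3)] = 2 * M   # r^{d-3}
coeffs[-1] = -Q**2                      # constant term

r = np.roots(coeffs)  # all D = 2(d-2) roots: physical and "virtual" horizons

e2 = sum(r[i] * r[j] for i, j in combinations(range(len(r)), 2))
eD = np.prod(r)

# Compare against the closed forms obtained from Vieta's theorem:
print(np.isclose(e2, -k * (d - 1) * (d - 2) / (2 * Lam)))     # True
print(np.isclose(eD, -Q**2 * (d - 1) * (d - 2) / (2 * Lam)))  # True
```

Both checks hold for any nonzero $\Lambda$, since they follow from Vieta's theorem regardless of how many of the roots are real (physical) or complex (``virtual'').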
Rewriting it as a polynomial equation in $r$, whose roots are the horizons of the black hole, we get \begin{align} \frac{2\Lambda}{(d-1)(d-2)}r^{d-1}-kr^{d-3}+2M=0. \end{align} The black hole possesses at most $D=d-1$ horizons, including ``virtual'' ones. With the help of Vieta's theorem again, we have \begin{align} &\sum_{1\leq i<j\leq D}r_{i}r_{j}=-\frac{k(d-1)(d-2)}{2\Lambda},\\ &\prod_{i=1}^Dr_i=(-1)^D\frac{M(d-1)(d-2)}{\Lambda}, \end{align} which immediately result in the following two entropy products: \begin{align} &\sum_{1\leq i<j\leq D}(S_{i}S_{j})^{\frac{1}{d-2}} =-\frac{k(d-1)(d-2)\pi^{\frac{d-1}{d-2}}}{2\Lambda\left(2\Gamma\Bigl(\frac{d-1}{2}\Bigr)\right)^{\frac{2}{d-2}}},\label{product02}\\ &\prod_{i=1}^DS_i=\left(\frac{\pi^{\frac{d-1}{2}}}{2\Gamma\Bigl(\frac{d-1}{2}\Bigr)}\right)^{d-1} \left(\frac{M(d-1)(d-2)}{\Lambda}\right)^{d-2}. \end{align} The former has the same behavior as in the charged case (\ref{eq:varentropyproduct_of_E-M}): both depend only on the cosmological constant $\Lambda$ and the horizon topology $k$. Note that the two cases involve different numbers of horizons $D$, counting both physical and un-physical (``virtual'') ones. This type of entropy product relation was first introduced in \cite{Visser:2012wu}, where the horizon area $A$ is used instead of the entropy $S=\frac{1}{4}A$. The latter entropy product, however, is mass dependent, which spoils the well-known mass independence. For this product, the electric charge $Q$ appears to act as a switch: when $Q$ vanishes, the entropy product becomes mass dependent. A detailed discussion of these two types of entropy products in asymptotically flat spacetime can be found in the \hyperref[sec:Asymptotical Flat cases]{Appendix}. There it is shown that, for the former product, the cosmological constant $\Lambda$ appears to act as a switch: when $\Lambda$ vanishes, the entropy product becomes mass dependent. 
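As a quick check that we add here, set $d=4$, i.e.\ the Schwarzschild-(A)dS case, for which $D=3$ and $S_i=\pi r_i^2$. The horizon polynomial becomes $\frac{\Lambda}{3}r^3-kr+2M=0$, and Vieta's theorem gives $\sum_{1\leq i<j\leq 3}r_ir_j=-\frac{3k}{\Lambda}$, hence \begin{align} \sum_{1\leq i<j\leq 3}\sqrt{S_iS_j}=\pi\sum_{1\leq i<j\leq 3}r_ir_j=-\frac{3\pi k}{\Lambda}, \notag \end{align} which coincides with (\ref{product02}) evaluated at $d=4$, upon using $\Gamma\left(\frac{3}{2}\right)=\frac{\sqrt{\pi}}{2}$.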
A similar phenomenon occurs in the entropy sum study of multi-horizon black holes \cite{Wang:2013smb}. The other type of entropy product, however, retains the expected electric charge $Q$ dependence. \section{Higher Dimensional (A)dS Black Holes in $f(R)$ Gravity} In this section, we consider $f(R)$ gravity as a further test of the entropy products of multi-horizon black holes. \subsection{Charged Solution} Since the standard Maxwell energy-momentum tensor is not traceless in higher dimensions, higher dimensional black hole/string solutions of $f(R)$ gravity coupled to the standard Maxwell field have not been derived \cite{Xu:2013zpa}. We can therefore discuss the charged black hole only in four dimensions. Let us consider the action of four dimensional $f(R)$ gravity with a Maxwell term, \begin{align} \mathcal{L}=\frac{1}{16\pi G}\int d^4 x \sqrt{-g}[R+f(R)-F_{\mu\nu}F^{\mu\nu}], \end{align} where $f(R)$ is an arbitrary function of the scalar curvature $R$. We focus on the static, spherically symmetric constant curvature ($R=R_0$) solution with the same metric ansatz as Eq.(\ref{eq:metric ansatz}), for which the horizon function behaves as \cite{Sheykhi:2012zz,Moon:2011hq,Hendi:2011eg,Cembranos:2011sr} \begin{align} V(r)=k-\frac{2\mu}{r}+\frac{q^2}{r^2}\frac{1}{(1+f^{\prime}(R_0))}-\frac{R_0}{12}r^2, \end{align} where the cosmological constant of this theory takes the form $\Lambda_f=\frac{R_0}{4}$. The parameters $\mu$ and $q$ are related to the mass and charge of the black hole, respectively. There are at most $D=4$ horizons, including ``virtual'' ones. The entropy of each horizon is \begin{align} S_i=\frac{A_i}{4}(1+f^{\prime}(R_0)), \end{align} where $f^{\prime}(R_0)=\left.\frac{\partial f(R)}{\partial R}\right|_{R=R_0}$ and $A_i=4\pi r_i^2$. 
We are interested in the entropy products of the horizons, which are the roots of the equation \begin{align} \frac{R_0}{12}r^4-kr^2+2\mu r-\frac{q^2}{1+f^{\prime}(R_0)}=0. \end{align} Then we get \begin{align} &\sum_{1\leq i<j\leq 4}r_ir_j=-\frac{12k}{R_0},\\ &\prod_{i=1}^4r_i=-\frac{12 q^2}{(1+f^{\prime}(R_0))R_0}. \end{align} These two relations lead to the entropy products \begin{align} &\sum_{1\leq i<j\leq 4}\sqrt{S_iS_j}=-\frac{12k}{R_0} (1+f^{\prime}(R_0))\pi \label{eq:varentropyproduct of 4d charged f(R)} \intertext{and} &\prod_{i=1}^4S_i=\frac{144\pi^4q^4(1+f^{\prime}(R_0))^2}{R_0^2}. \end{align} After inserting $R_0=4\Lambda_{f}$, the conserved-charge dependence of the entropy products $\sum_{1\leq i<j\leq D}(S_{i}S_{j})^{\frac{1}{d-2}}$ and $\prod_{i=1}^DS_i$ is thus tested in $f(R)$ gravity. They behave as in Einstein gravity: the former depends on the cosmological constant $\Lambda$ and the horizon topology $k$, while the latter depends on $\Lambda$ and the electric charge $Q$. Both are independent of the mass $M$. \subsection{Neutral Solution} Let us consider the Lagrangian of $f(R)$ gravity in higher dimensions, \begin{align} \mathcal{L}=\int d^d x \sqrt{-g}(R+f(R)). \end{align} After choosing the same metric ansatz as Eq.(\ref{eq:metric ansatz}), the metric function is \cite{Sheykhi:2012zz} \begin{align} \label{V(r)22} V(r)=k-\frac{2m}{r^{d-3}}-\frac{R_0}{d(d-1)}r^2, \end{align} where the parameter $m$ is an integration constant related to the mass (total energy) of the black hole. The cosmological constant is $\Lambda_{f}=\frac{d-2}{2d}R_0$. 
The entropy again satisfies the area law and takes the form \begin{align} \label{eq:entropy of higher d f(R)} S_i=\frac{A_i}{4}(1+f^{\prime}(R_0)).\\ \intertext{with the area of each horizon} A_i=\frac{2\pi^{(d-1)/2}}{\Gamma\left(\frac{d-1}{2}\right)}r_i^{d-2} \notag \end{align} where $r_i$ is a horizon radius, i.e.\ a root of Eq.(\ref{V(r)22}), namely, \begin{equation} \frac{R_0}{d(d-1)}r^{d-1}-kr^{d-3}+2m=0. \end{equation} The black hole can possess at most $D=d-1$ horizons, including ``virtual'' ones. According to Vieta's theorem, we obtain \begin{align} &\sum_{1\leq i<j\leq D}r_ir_j =-\frac{k\,d(d-1)}{R_0},\\ &\prod_{i=1}^Dr_i =(-1)^D\frac{2m\,d(d-1)}{R_0}. \end{align} Following the same procedure, we invert (\ref{eq:entropy of higher d f(R)}) as \begin{equation} r_i=\left(\frac{2\Gamma\Bigl(\frac{d-1}{2}\Bigr)S_i}{\pi^{\frac{d-1}{2}}(1+f^{\prime}(R_0))}\right)^{\frac{1}{d-2}}, \end{equation} and obtain this type of entropy product, \begin{align} \label{eq:varentropyproduct_of_f(R)} \sum_{1\leq i<j\leq D}(S_{i}S_{j})^{\frac{1}{d-2}}= -\frac{kd(d-1)}{R_0} \left(\frac{\pi^{\frac{d-1}{2}}(1+f^{\prime}(R_0))}{2\Gamma\Bigl(\frac{d-1}{2}\Bigr)}\right)^{\frac{2}{d-2}}. \end{align} We note that $R_0=\frac{2d\Lambda_{f}}{d-2}$, which means the above product depends only on the cosmological constant and the horizon topology. Considering (\ref{eq:varentropyproduct_of_E-M}), (\ref{product02}), (\ref{eq:varentropyproduct of 4d charged f(R)}) and (\ref{eq:varentropyproduct_of_f(R)}) together, one can treat them as a single type of entropy product, consisting of a sum over pairs of horizon entropies and differing only in the number of possible horizons $D$. Furthermore, this entropy product, $\sum_{1\leq i<j\leq D}(S_{i}S_{j})^{\frac{1}{d-2}}$, is always independent of the conserved charges: it does not depend on the mass or the electric charge distribution. 
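As a cross-check that we add here, in the Einstein limit $f(R)\to 0$, i.e.\ $f^{\prime}(R_0)=0$ and $R_0=4\Lambda$ in four dimensions, the charged relation (\ref{eq:varentropyproduct of 4d charged f(R)}) reduces to \begin{align} \sum_{1\leq i<j\leq 4}\sqrt{S_iS_j}=-\frac{12\pi k}{R_0}=-\frac{3\pi k}{\Lambda}, \notag \end{align} which is precisely (\ref{eq:varentropyproduct_of_E-M}) evaluated at $d=4$; similarly, $\prod_{i=1}^4S_i=\frac{144\pi^4q^4}{R_0^2}=\frac{9\pi^4q^4}{\Lambda^2}$ reproduces the Einstein-Maxwell product at $d=4$ with $Q=q$. This supports treating the four relations above on the same footing.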
This entropy product depends only on the cosmological constant and the horizon topology, in both Einstein gravity and $f(R)$ gravity, with or without a Maxwell field source in the spacetime. In this sense, we conclude that the entropy product $\sum_{1\leq i<j\leq D}(S_{i}S_{j})^{\frac{1}{d-2}}$ is ``universal''. However, the entropy product \begin{align} \prod_{i=1}^DS_i=\left(\frac{\pi^{(d-1)/2}(1+f^{\prime}(R_0))}{2\Gamma\left(\frac{d-1}{2}\right)}\right)^{d-1} \left(\frac{2m\,d(d-1)}{R_0}\right)^{d-2} \end{align} is mass dependent, even when the effect of the un-physical ``virtual'' horizons is included. This is quite different from the suggestion of \cite{Cvetic:2010mn} and \cite{Visser:2012wu}, in which the mass independence of the entropy product $\prod_{i=1}^DS_i$ always fails for uncharged black holes. \section{Conclusion} We have discussed the entropy products of higher dimensional (A)dS black holes in Einstein-Maxwell and $f(R)$(-Maxwell) gravity, which possess similar formulas and can be summarized as follows. \begin{enumerate} \item There exists a ``universal'' quantity $\sum_{1\leq i<j\leq D}(S_{i}S_{j})^{\frac{1}{d-2}}$, which depends only on the cosmological constant $\Lambda$ and the background topology $k$, and not on the conserved charge $Q$ or the mass $M$. \item In contrast to \cite{Cvetic:2010mn} and \cite{Visser:2012wu}, the other entropy product $\prod_{i=1}^DS_i$ shows an unexpected behavior in our case. The electric charge $Q$ plays an important role in this entropy product. For charged black holes the entropy product depends only on the electric charge $Q$ (and the cosmological constant) and is mass independent. When $Q$ vanishes in the solution, it becomes mass dependent, even when the effect of the un-physical ``virtual'' horizons is included. In this sense, the ``universal property'' of this entropy product is destroyed. 
\end{enumerate} Taking an overall look at the entropy products and entropy sums of the multiple horizons of black holes, one finds that they all depend only on the conserved charges: the electric charge $Q$, the angular momentum $J$, or the cosmological constant $\Lambda$ (which can be treated as a pressure in AdS spacetime after interpreting the mass of the black hole as the thermodynamic enthalpy rather than the internal energy of the system). The physical application of these entropy relations is not yet clear, but they may become relevant to the holographic description of black holes. A further test of the entropy products of rotating black holes and their static limits is left for future work. \begin{acknowledgments} This work is partially supported by the Natural Science Foundation of China (NSFC) under Grant No.11075078 and by the Knowledge Innovation Program of the Chinese Academy of Sciences. \end{acknowledgments}